STABILITY AND TRANSITIONS OF THE SECOND GRADE POISEUILLE FLOW
11 Sep 2015
Saadet Ozer
Taylan Sengul
In this study we consider the stability and transitions for the Poiseuille flow of a second grade fluid, a model for non-Newtonian fluids. We restrict our attention to flows in an infinite pipe with circular cross section that are independent of the axial coordinate. We show that unlike the Newtonian (ǫ = 0) case, in the second grade model (ǫ ≠ 0), the time independent base flow exhibits transitions as the Reynolds number R exceeds the critical threshold R_c ≈ 4.124ǫ^{−1/4}, where ǫ is a material constant measuring the relative strength of second order viscous effects compared to inertial effects. At R = R_c, we find that generically the transition is either continuous or catastrophic, and a small amplitude, time periodic flow with 3-fold azimuthal symmetry bifurcates. The time period of the bifurcated solution tends to infinity as R tends to R_c. Our numerical calculations suggest that for low ǫ values, the system prefers a catastrophic transition where the bifurcation is subcritical. We also find that there is a Reynolds number R_E with R_E < R_c such that for R < R_E, the base flow is globally stable and attracts any initial disturbance with at least exponential speed. We show that R_E ≈ 12.87 at ǫ = 0 and that R_E approaches R_c quickly as ǫ increases.
Introduction
Certain natural materials manifest fluid characteristics that cannot be represented by the well-known linear viscous fluid models. Such fluids are generally called non-Newtonian fluids. Several models have been proposed to predict the non-Newtonian behavior of various types of materials. One class of fluids which has gained considerable attention in recent years is the fluids of grade n [11,8,13,12,7,17,6]. A great deal of information on these types of fluids can be found in [4]. Among these fluids, one special subclass associated with second order truncations is the so-called second grade fluids. The constitutive equation of an incompressible second grade fluid is given by the relation:
t = −p I + µ A_1 + α_1 A_2 + α_2 A_1²,
where t is the stress tensor, p is the pressure, µ is the classical viscosity, α 1 and α 2 are the material coefficients. A 1 and A 2 are the first two Rivlin-Ericksen tensors defined by
A_1 = ∇v + (∇v)^T,  A_2 = Ȧ_1 + A_1 ∇v + (∇v)^T A_1,
where v is the velocity field and the overdot represents the material derivative with respect to time. This type of constitutive relation was first proposed in [2]. The conditions α_1 + α_2 = 0, µ ≥ 0, α_1 ≥ 0 must be satisfied for the second grade fluid to be entirely consistent with classical thermodynamics and for the free energy function to achieve its minimum in equilibrium [5]. The equation of motion of an incompressible second grade Rivlin-Ericksen fluid reads:
ρ(v_t + w × v + ∇(|v|²/2)) = −∇p + µ∆v + α[∆v_t + ∆w × v + ∇(v · ∆v + ¼|A_1|²)],  ∇ · v = 0,
where ρ is the density and α = α_1 = −α_2 represents the second order material constant. The subscript t denotes the partial derivative with respect to time, and w is the usual vorticity vector defined by w = ∇ × v. We next define the non-dimensional variables:
v* = v/U,  p* = p/(ρU²),  t* = tU/L,  x* = x/L,
where U and L are a characteristic velocity and length, respectively. Letting ǫ denote the non-dimensional second order material constant, which measures the relative strength of second order viscous effects compared to inertial effects, and defining the Reynolds number by R = ρUL/µ, ǫ = α/(ρL²), the equation of motion, with asterisks omitted, can be expressed as:
(1) ∇p̄ = (1/R)∆v + ǫ(∆w × v + ∆v_t) − v_t − w × v,
where the modified pressure p̄ is defined as:

p̄ = p + |v|²/2 − ǫ(v · ∆v + ¼|A_1|²).
Taking the curl of both sides of (1), we can write the equation of motion simply as:
(2) ∇ × [(1/R)∆v + ǫ(∆w × v + ∆v_t) − v_t − w × v] = 0,
which is the field equation of an incompressible unsteady second grade Rivlin-Ericksen fluid, independent of the choice of any particular coordinate system. We now restrict our interest to flows in a cylindrical tube and assume that the velocity depends only on the two cross-sectional variables x, y and the time t. The incompressibility of the fluid allows us to introduce a stream function ψ such that v = (ψ_y, −ψ_x, w), where ψ = ψ(t, x, y) and w = w(t, x, y).
We further take the cross section of the cylinder to be a disk with unit radius and consider the no-slip boundary conditions. Then the equations (2) admit the following steady state solution
w_0 = ½(1 − x² − y²),  ψ_0 = 0.
Here the characteristic velocity has been chosen as U = pL²/(4µ). First considering the deviations w′ = w − w_0 and ψ′ = ψ − ψ_0, then introducing the polar coordinates w′′(t, r, θ) = w′(t, r cos θ, r sin θ) and ψ′′(t, r, θ) = ψ′(t, r cos θ, r sin θ), and finally dropping the primes, the equations become
(3) ∂/∂t [(1 − ǫ∆)w] = (1/R)∆w + Rψ_θ + J(ψ, (1 − ǫ∆)w),
∂/∂t [∆(ǫ∆ − 1)ψ] = −(1/R)∆²ψ + ǫR∆w_θ + J((1 − ǫ∆)∆ψ, ψ) + ǫJ(∆w, w),
in the interior of the unit disk Ω where J is the advection operator
J(f, g) = (1/r)(f_r g_θ − f_θ g_r).
The field equations are supplemented with the no-slip boundary conditions for the velocity field

(4) w = ψ = ∂ψ/∂r = 0 at r = 1.
In this paper, our main aim is to investigate the stability and transitions of (3) subject to (4). We first prove that the system undergoes a dynamic transition at the critical Reynolds number R_c ≈ 4.124ǫ^{−1/4}. As R crosses R_c, the steady flow loses its stability and a transition occurs. If we denote the azimuthal wavenumber of an eigenmode by m, then two modes, called critical modes hereafter, with m = 3 and radial wavenumber 1, become critical at R = R_c. Using the language of dynamic transition theory [10], we show that the transition is either Type-I (continuous) or Type-II (catastrophic). In Type-I transitions, the amplitudes of the transition states stay close to the base flow after the transition. Type-II transitions, on the other hand, are associated with more complex dynamical behavior, leading to metastable states or a local attractor far away from the base flow.
We show that the type of transition preferred in system (3) is determined by the real part of a complex parameter A which only depends on ǫ. In the generic case of nonzero imaginary part of A, there are two possible transition scenarios depending on the sign of the real part of A: continuous or catastrophic. In the continuous transition scenario, a stable, small amplitude, time periodic flow with 3-fold azimuthal symmetry bifurcates on R > R c . The time period of the bifurcated solution tends to infinity as R approaches R c , a phenomenon known as infinite period bifurcation [9]. The dual scenario is the catastrophic transition where the bifurcation is subcritical on R < R c and a repeller bifurcates. In the non-generic case where the imaginary part of A vanishes, the limit cycle degenerates to a circle of steady states.
The transition number A depends on the system parameter ǫ in a non-trivial way; hence it is not possible to find an analytical expression for A as a function of ǫ, and A must be computed numerically for each given ǫ. Physically, the transition number can be considered a measure of the net mechanical energy transferred from all modes back to the critical modes, which in turn modify the base flow. We show that A is determined by the nonlinear interactions of the critical modes (m = 3) with all the modes having m = 0 and m = 6. Moreover, our numerical computations suggest that for low ǫ fluids (ǫ < 1), a single nonlinear interaction, namely the one with the m = 0, radial wavenumber 1 mode, dominates all other contributions to A. Our numerical experiments with low ǫ, i.e. ǫ < 1, suggest that the real part of A is always positive, indicating a catastrophic transition at R = R_c.
We also determine the Reynolds number threshold R_E > 0 below which the Poiseuille flow is globally stable, attracting all initial conditions with at least exponential convergence in the H¹_0(Ω) norm of the velocity. We find that R_E ≈ 12.87 when ǫ = 0. The gap between R_E and R_c shrinks to zero quickly as ǫ is increased.
The paper is organized as follows: In Section 3, the linearized stability of the system is studied and the principle of exchange of stabilities is investigated. In Section 4, transition theorem of the system is presented with its proof given in Section 5. Section 6 is devoted to the energy stability of the system. In Section 7, we give a detailed numerical analysis. Finally in Section 8 the conclusions and possible extensions of this study are discussed in detail.
The model and the functional setting
Throughout, Re z, Im z, and z̄ denote the real part, the imaginary part, and the complex conjugate of a complex number z. Let
X 1 = H 1 0 (Ω) × H 2 0 (Ω), X = L 2 (Ω) × L 2 (Ω),
where Ω is the unit disk in R², H¹_0(Ω) and H²_0(Ω) denote the usual Sobolev spaces, and L²(Ω) is the space of square-integrable functions.
For φ_i = [w_i(r, θ), ψ_i(r, θ)]^T ∈ X, i = 1, 2, the inner product on X is defined by

(5) ⟨φ_1, φ_2⟩ = ∫_0^{2π} ∫_0^1 (w_1 w̄_2 + ψ_1 ψ̄_2) r dr dθ,
with the norm on X defined by ‖φ‖² = ⟨φ, φ⟩. We define the linear operators M : X_1 → X and N : X_1 → X as
(6) M = [ I − ǫ∆, 0 ; 0, ∆(ǫ∆ − I) ],  N = [ (1/R)∆, R∂_θ ; ǫR∆∂_θ, −(1/R)∆² ],
and the nonlinear operator H : X 1 → X as
H(φ) = [ J(ψ, (1 − ǫ∆)w) ; J((1 − ǫ∆)∆ψ, ψ) + ǫJ(∆w, w) ],  for φ = [w, ψ]^T ∈ X_1.
We will use H to denote both the nonlinear operator as well as the bilinear form
(7) H(φ_I, φ_J) = [ J(ψ_I, (1 − ǫ∆)w_J) ; J((1 − ǫ∆)∆ψ_I, ψ_J) + ǫJ(∆w_I, w_J) ],  for φ_I = [w_I, ψ_I]^T, φ_J = [w_J, ψ_J]^T.
Also we will use the symmetrization of this bilinear form
(8) H s (φ I , φ J ) = H(φ I , φ J ) + H(φ J , φ I ).
Then the equations (3) and (4) can be written in the following abstract form
(9) M φ t = N φ + H(φ), φ ∈ X 1 ,
with initial condition
φ(0) = φ 0 ∈ X 1 .
Linear Stability
To determine the transitions of (9), the first step is to study the eigenvalue problem Nφ = βMφ of the linearized operator. This is equivalent to the problem

(10) (1/R)∆w + Rψ_θ = β(1 − ǫ∆)w,  (1/R)∆²ψ − ǫR∆w_θ = β(1 − ǫ∆)∆ψ,
subject to the boundary conditions (4). An interesting feature of the eigenvalue problem is the following.
Lemma 1. Any eigenvalue β of (10) with boundary conditions (4) is real.
Proof. Multiplying the first equation of (10) by ∆w̄ and the second equation by ψ̄, integrating over the domain Ω, we obtain after integration by parts

(11) (1/R + βǫ)‖∆w‖² + R ∫_Ω ψ_θ ∆w̄ r dr dθ = −β‖∇w‖²,

and

(12) −(1/R + βǫ)‖∆ψ‖² + ǫR ∫_Ω ∆w_θ ψ̄ r dr dθ = β‖∇ψ‖².

Let

A_1 = ǫ‖∆w‖² + ‖∆ψ‖²,  A_2 = ǫ‖∇w‖² + ‖∇ψ‖²,  A_3 = 2ǫ ∫_Ω ∆w_θ ψ̄ r dr dθ.

Now consider −ǫ×(11) + (12), which after a further integration by parts becomes

(13) −(1/R + βǫ)A_1 + R Re(A_3) = βA_2.

Taking the imaginary part of (13) gives Im(β)(ǫA_1 + A_2) = 0. Since ǫA_1 + A_2 > 0 for a nontrivial eigenfunction, we must have Im(β) = 0.
Now we turn to determining explicit expressions for the solutions of the eigenvalue problem of the linearized operator. Thanks to the periodicity in the θ variable, for m ∈ Z and j ∈ Z⁺, we denote the eigenvectors of (10) by

(14) φ_{m,j}(r, θ) = e^{imθ} ϕ_{m,j}(r),  ϕ_{m,j}(r) = [w_{m,j}(r), ψ_{m,j}(r)]^T,

with corresponding eigenvalues β_{m,j}. For each m ∈ Z, we order the eigenvalues so that β_{m,1} ≥ β_{m,2} ≥ · · ·.
Plugging the ansatz (14) into (10) and omitting the index j, we obtain two ODEs in the r-variable:
(15) (1/R + βǫ)∆_m w_m + imRψ_m = βw_m,  −(1/R + βǫ)∆²_m ψ_m + iǫmR∆_m w_m = −β∆_m ψ_m,

with boundary conditions

(16) w_m(1) = ψ_m(1) = ψ′_m(1) = 0,

where ∆_m = d²/dr² + (1/r) d/dr − m²/r².
When m = 0, the equations (15) decouple and we easily find two families of eigenpairs:

w¹_{0,j}(r) = J_0(α_{0,j} r),  ψ¹_{0,j}(r) = 0,  β¹_{0,j} = −α²_{0,j} / (R(1 + ǫα²_{0,j})),

w²_{0,j}(r) = 0,  ψ²_{0,j}(r) = J_0(α_{1,j} r) − J_0(α_{1,j}),  β²_{0,j} = −α²_{1,j} / (R(1 + ǫα²_{1,j})),
where α k,j is the jth zero of the kth Bessel function J k . In particular, β 0,j < 0 for all ǫ and for all R.
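As a quick numerical check, the two m = 0 families above can be evaluated directly from the zeros of J_0 and J_1. The sketch below uses illustrative values of R and ǫ (not values from the paper) and confirms that every β_{0,j} is negative, as claimed.

```python
# Sketch: m = 0 eigenvalues from Bessel zeros, following
# beta^1_{0,j} = -a_{0,j}^2 / (R(1 + eps a_{0,j}^2)) and
# beta^2_{0,j} = -a_{1,j}^2 / (R(1 + eps a_{1,j}^2)).
# The values R = 10, eps = 0.1 below are illustrative choices.
import numpy as np
from scipy.special import jn_zeros

def m0_eigenvalues(R, eps, n=5):
    """Return the two families of m = 0 eigenvalues for j = 1..n."""
    a0 = jn_zeros(0, n)          # zeros of J0 -> w-modes
    a1 = jn_zeros(1, n)          # zeros of J1 -> psi-modes
    beta1 = -a0**2 / (R * (1 + eps * a0**2))
    beta2 = -a1**2 / (R * (1 + eps * a1**2))
    return beta1, beta2

beta1, beta2 = m0_eigenvalues(R=10.0, eps=0.1, n=5)
# All m = 0 eigenvalues are negative for every R > 0 and eps >= 0.
assert np.all(beta1 < 0) and np.all(beta2 < 0)
```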
In the m ≠ 0 case, since the eigenvalues are real by Lemma 1, the multiplicity of each eigenvalue β = β_{m,j} = β_{−m,j} ∈ R is generically two, with corresponding eigenvectors φ_{m,j} and φ_{−m,j} = φ̄_{m,j}.
Solving the first equation of (15) for ψ_m and plugging it into the second equation of (15) yields the sixth order equation

(17) (λ + ∆_m)(µ + ∆_m)∆_m w_m = 0,

where

(18) λ = (√ǫ mR − β_{m,j}) / (1/R + ǫβ_{m,j}),  µ = −(√ǫ mR + β_{m,j}) / (1/R + ǫβ_{m,j}).
It is easy to check that λ = 0 or µ = 0 yields only trivial solutions of (16) and (17), so we assume λ ≠ 0 and µ ≠ 0. When m > 0, the general solution of (17) is
w m = c 1 r m + c 2 J m ( √ λr) + c 3 J m ( √ µr) + c 4 r −m + c 5 Y m ( √ λr) + c 6 Y m ( √ µr),
where J m and Y m are the Bessel functions of the first and the second kind respectively.
The boundedness of the solution and its derivatives at r = 0 implies c_4 = c_5 = c_6 = 0, and we get the eigensolutions

(19) w_{m,j}(r) = c_1 r^m + c_2 J_m(√λ r) + c_3 J_m(√µ r),  ψ_{m,j}(r) = d_1 r^m + d_2 J_m(√λ r) + d_3 J_m(√µ r),  for m > 0,

with

d_1 = −iβ_{m,j} c_1/(mR),  d_2 = −i√ǫ c_2,  d_3 = i√ǫ c_3,

and w_{m,j} = w̄_{−m,j}, ψ_{m,j} = ψ̄_{−m,j} for m < 0.
The eigenvalues and two of the three coefficients c_1, c_2, c_3 in (19) are determined by the boundary conditions (16), which form a linear system for the coefficients c_k. This system has a nontrivial solution only when the dispersion relation

(20) det [ 1, J_m(√λ), J_m(√µ) ; β, √ǫ mR J_m(√λ), −√ǫ mR J_m(√µ) ; βm, √ǫ mR √λ J′_m(√λ), −√ǫ mR √µ J′_m(√µ) ] = 0

is satisfied. Using the identity J′_m(z) = (m/z)J_m(z) − J_{m+1}(z), we can show that (20) is equivalent to

(21) √λ J_m(√λ) J_{m+1}(√µ) + √µ J_m(√µ) J_{m+1}(√λ) = 0,

where J_m is the Bessel function of the first kind of order m.
To compute the critical Reynolds number R_c, we set β = 0 in (21), which, after some manipulation, yields

(22) I_m(√λ)J′_m(√λ) − J_m(√λ)I′_m(√λ) = 0.

Once (22) is solved for λ, the corresponding Reynolds number is obtained from the relation λ = √ǫ mR² (which follows from (18) when β = 0). We note that this is exactly the same equation as the one obtained in [12]. For each m, the equation (22) has infinitely many solutions {λ_{m,j}}_{j=1}^∞, where λ_{m,j} increases with j and λ_{m,j} → ∞ as j → ∞. Letting R_{m,j} = ǫ^{−1/4}(λ_{m,j}/m)^{1/2}, we have β_{m,j} = 0 when R = R_{m,j}. (Table 1 lists λ_{m,1} and R_{m,1} = ǫ^{−1/4}(λ_{m,1}/m)^{1/2} for m = 1, . . . , 5.)
We define the critical Reynolds number

R_c = min_{m,j} R_{m,j} = min_m R_{m,1} = ǫ^{−1/4} min_{m∈Z⁺} (λ_{m,1}/m)^{1/2},

so that β_{m,j} ≤ 0 if R ≤ R_c.
There has been recent progress on the properties of the zeros of (22). In [1], the estimate

(23) 2⁴(m+1)(m+2)(m+3) √((m+4)(m+5)/(5m+15)) < λ²_{m,1} < 2⁴(m+1)(m+2)(m+3)(m+4)(m+5)/(5m+17)
on λ_{m,1} is obtained. Using (23), we can show that the upper bound for λ_{3,1}/3 is less than the lower bound for λ_{m,1}/m for all m ≥ 7, which implies that λ_{3,1}/3 < λ_{m,1}/m for all m ≥ 7. Thus R_c is attained at some m smaller than 7 and hence can be found by brute force. Looking at Table 1, we find that the expression above is indeed minimized at m = 3, which gives (24), i.e. R_c = R_{3,1} ≈ 4.124ǫ^{−1/4}.
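The brute-force computation behind Table 1 can be sketched numerically. Using the recurrences J′_m(z) = (m/z)J_m(z) − J_{m+1}(z) and I′_m(z) = (m/z)I_m(z) + I_{m+1}(z), equation (22) is equivalent to J_m(s)I_{m+1}(s) + I_m(s)J_{m+1}(s) = 0 with s = √λ, a form convenient for root finding. The helper name `first_zero` is ours, not the paper's.

```python
# Sketch: first zero lambda_{m,1} of (22), rewritten via Bessel recurrences as
# J_m(s) I_{m+1}(s) + I_m(s) J_{m+1}(s) = 0 with s = sqrt(lambda); then
# R_{m,1} = eps^{-1/4} (lambda_{m,1}/m)^{1/2}.
import numpy as np
from scipy.special import jv, iv
from scipy.optimize import brentq

def first_zero(m, s_max=30.0, ds=0.02):
    """First positive root s of J_m(s) I_{m+1}(s) + I_m(s) J_{m+1}(s)."""
    f = lambda s: jv(m, s) * iv(m + 1, s) + iv(m, s) * jv(m + 1, s)
    s = ds
    while s + ds < s_max:
        if f(s) * f(s + ds) < 0:
            return brentq(f, s, s + ds)
        s += ds
    raise RuntimeError("no sign change found")

# R_{m,1} * eps^{1/4} = (lambda_{m,1}/m)^{1/2}; the minimum over m gives R_c.
coef = {m: np.sqrt(first_zero(m) ** 2 / m) for m in range(1, 7)}
m_c = min(coef, key=coef.get)
# The paper reports the minimum at m = 3 with value approximately 4.124.
print("m* =", m_c, " R_c * eps^(1/4) =", coef[m_c])
```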
Defining the left hand side of (21) as ω(β, R), the equation (21) becomes ω(β, R) = 0. By the implicit function theorem, this defines β_{3,1}(R) for R near R_c with β_{3,1}(R_c) = 0. With the aid of symbolic computation, we find

dβ_{3,1}/dR |_{R=R_c} = −(∂ω/∂R)/(∂ω/∂β) |_{R=R_c, β=0} = 0.12√ǫ/(0.02 + ǫ) > 0.
Thus we have proved the Principle of Exchange of Stabilities, which we state below.
Theorem 1. For ǫ ≠ 0, let

(24) R_c = ǫ^{−1/4}(λ_{3,1}/3)^{1/2} ≈ 4.124ǫ^{−1/4}.

Then

(25) β_{3,1}(R) = β_{−3,1}(R) { < 0 if R < R_c; = 0 if R = R_c; > 0 if R > R_c },  and β_{m,j}(R_c) < 0 if (m, j) ≠ (±3, 1).
Note that Theorem 1 is in contrast to the ǫ = 0 case where the basic flow is linearly stable for all Reynolds numbers. This can be seen easily by noting that when ǫ = 0, the inner product of the second equation in (10) with ψ yields
(26) ‖∆ψ‖² = −βR‖∇ψ‖².

With the Dirichlet boundary conditions, it follows easily from (26) that β < 0 if ψ ≠ 0.
For the proof and presentation of Theorem 2, we also need to solve the eigenvalue problem of the adjoint linear operator, which yields adjoint modes orthogonal to the eigenmodes of the linear operator. The adjoint problem is obtained by taking the inner product of (9) with φ* and moving the derivatives onto φ* via integration by parts, making use of the boundary conditions. This yields the adjoint problem

N*φ* = β*M*φ*,  where M* = M = [ I − ǫ∆, 0 ; 0, ∆(ǫ∆ − I) ],  N* = [ (1/R)∆, −ǫR∆∂_θ ; R∂_θ, −(1/R)∆² ],  φ* = [w*, ψ*]^T,

and w*, ψ* satisfy the same boundary conditions (4) as w and ψ. We denote the adjoint eigenvectors by φ*_{m,j} = [w*_{m,j}, ψ*_{m,j}]^T; the adjoint eigenvalues satisfy β*_{m,j} = β_{m,j}. The reason we introduce the adjoint eigenmodes is to make use of the orthogonality relation
(27) ⟨φ_{m,i}, Mφ*_{n,j}⟩ = 0 if (m, i) ≠ (n, j).
Dynamic Transitions
Let us briefly recall the classification of dynamic transitions and refer to [10] for a detailed rigorous discussion. For ǫ ≠ 0, as the critical Reynolds number R_c is crossed, the principle of exchange of stabilities (25) dictates that the nonlinear system always undergoes a dynamic transition, leading to one of three types: Type-I (continuous), Type-II (catastrophic), or Type-III (random). On R > R_c, the transition states stay close to the base state for a Type-I transition and leave a local neighborhood of the base state for a Type-II transition. For Type-III transitions, a local neighborhood of the base state is divided into two open regions, with a Type-I transition in one region and a Type-II transition in the other. Type-II and Type-III transitions are associated with more complex dynamical behavior.
Below we prove that for (3), only two scenarios are possible. In the first scenario, the system exhibits a Type-I (or continuous) transition, and a stable attractor bifurcates on R > R_c which attracts all sufficiently small disturbances to the Poiseuille flow. We prove that this attractor is homeomorphic to the circle S¹ and is generically a periodic orbit. Figure 1 shows the stream function of the bifurcated time-periodic solution. The dual scenario is that the system exhibits a Type-II (or catastrophic) transition.
The type of transition at R = R c depends on the transition number
(28) A = Σ_{j=1}^∞ (A_{0,j} + A_{6,j}),
where A_{m,j} represents the nonlinear interaction of the critical modes with the mode with azimuthal wavenumber m and radial wavenumber j. The formulas for A_{m,j} are

(29) A_{0,j} = Φ̃_{0,j} ⟨H_s(φ_{3,1}, φ_{0,j}), φ*_{3,1}⟩ / ⟨φ_{3,1}, Mφ*_{3,1}⟩,
A_{6,j} = Φ̃_{6,j} ⟨H_s(φ̄_{3,1}, φ_{6,j}), φ*_{3,1}⟩ / ⟨φ_{3,1}, Mφ*_{3,1}⟩,
Φ̃_{0,j} = ⟨H_s(φ_{3,1}, φ̄_{3,1}), φ*_{0,j}⟩ / (−β_{0,j} ⟨φ_{0,j}, Mφ*_{0,j}⟩),
Φ̃_{6,j} = ⟨H(φ_{3,1}, φ_{3,1}), φ*_{6,j}⟩ / (−β_{6,j} ⟨φ_{6,j}, Mφ*_{6,j}⟩).

To recall the meaning of the various terms in (29): β_{n,j} is the jth eigenvalue of the nth azimuthal mode with corresponding eigenvector φ_{n,j} and adjoint eigenvector φ*_{n,j}; ⟨·, ·⟩ denotes the inner product (5); H denotes the bilinear form (7); H_s is its symmetrization (8); and M is the linear operator defined by (6).
Theorem 2. If ǫ ≠ 0, then the following statements hold true.
(1) If Re(A) < 0, then the transition at R = R_c is Type-I, and an attractor Σ_R, homeomorphic to S¹, bifurcates on R > R_c. If Im(A) = 0, then Σ_R is a cycle of steady states. If Im(A) ≠ 0, then Σ_R is the orbit of a stable limit cycle given by (30). Here w_{3,1} and ψ_{3,1} are the vertical velocity and stream function of the eigenmode of the linear operator with corresponding eigenvalue β_{3,1}. (2) If Re(A) > 0, then the transition at R = R_c is Type-II, and a repeller Σ_R bifurcates on R < R_c. If Im(A) = 0, then Σ_R is a cycle of steady states. If Im(A) ≠ 0, then Σ_R is the orbit of an unstable limit cycle given by (30) with β_{3,1} replaced by −β_{3,1}.
Remark. In the generic case Im(A) ≠ 0, Theorem 2 guarantees the existence of a stable (unstable) bifurcated periodic solution on R > R_c (R < R_c). By (31), the period of the bifurcated solution approaches infinity as R ↓ R_c (R ↑ R_c).
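The amplitude dynamics behind Theorem 2 can be illustrated by integrating the radial part of the reduced equation, d|z|/dt = β|z| + Re(A)|z|³, which is derived as (51) in the proof that follows. In the continuous scenario (Re(A) < 0, β > 0) the amplitude settles on √(−β/Re(A)). The values of β and Re(A) below are illustrative only, not computed from the fluid model.

```python
# Sketch: amplitude equation r' = beta*r + Re(A)*r^3 integrated with classical
# RK4. For Re(A) < 0 and beta > 0 the amplitude converges to sqrt(-beta/Re(A)).
# beta = 0.1 and Re(A) = -1.0 below are illustrative values.
import math

def amplitude(beta, reA, r0=1e-2, dt=1e-2, T=200.0):
    """Integrate r' = beta*r + reA*r^3 with RK4 and return r(T)."""
    f = lambda r: beta * r + reA * r ** 3
    r, t = r0, 0.0
    while t < T:
        k1 = f(r); k2 = f(r + 0.5 * dt * k1)
        k3 = f(r + 0.5 * dt * k2); k4 = f(r + dt * k3)
        r += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return r

beta, reA = 0.1, -1.0
r_inf = amplitude(beta, reA)
assert abs(r_inf - math.sqrt(-beta / reA)) < 1e-6   # bifurcated amplitude
```

Note that the bifurcated amplitude vanishes like √(β_{3,1}(R)) as R ↓ R_c, while the phase speed Im(A)|z|² vanishes at the same rate, which is the infinite period bifurcation described above.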
Proof Of Theorem 2
As is standard in the dynamic transition approach, the proof of Theorem 2 depends on the reduction of the field equations (9) onto the center manifold.
Let us denote the (real) eigenfunctions and adjoint eigenfunctions corresponding to the critical eigenvalue β_{3,1} by

e_1(r, θ) = Re(φ_{3,1}(r, θ)),  e_2(r, θ) = Im(φ_{3,1}(r, θ)),
e*_1(r, θ) = Re(φ*_{3,1}(r, θ)),  e*_2(r, θ) = Im(φ*_{3,1}(r, θ)).

By the spectral theorem, the spaces X_1 and X can be decomposed into the direct sums

X_1 = E_1 ⊕ E_2,  X = E_1 ⊕ Ē_2,

where E_1 = span{e_1, e_2}, E_2 = {u ∈ X_1 : ⟨u, e*_i⟩ = 0, i = 1, 2}, and Ē_2 is the closure of E_2 in X.
Since M : X 1 → X is an invertible operator, we can define L = M −1 N and G = M −1 H. Now the abstract equation (9) can be written as
(32) dφ dt = Lφ + G(φ).
The linear operator L in (32) can be decomposed into
L = J ⊕ L̄,  J = L|_{E_1} : E_1 → E_1,  L̄ = L|_{E_2} : E_2 → E_2.
Since the eigenvalues are real, we have Le_k = β_{3,1}e_k for k = 1, 2; hence J = β_{3,1}(R)I_2, where I_2 is the 2 × 2 identity matrix. When J is diagonal, we have the following approximation of the center manifold function Φ : E_1 → E_2 near R ≈ R_c; see [10]:

(33) −L̄Φ(x) = P_2 G_k(x) + o(k).

The terms in the formula (33) have the following meaning:
a) o(k) = o(‖x‖^k) + O(|β_{3,1}(R)|‖x‖^k) as R → R_c, x → 0;
b) P_2 : X → Ē_2 is the canonical projection;
c) x is the projection of the solution onto E_1,

(34) x(t, r, θ) = x_1(t)e_1(r, θ) + x_2(t)e_2(r, θ);

d) G_k denotes the lowest order term of the Taylor expansion of G(u) around u = 0. In our case G is bilinear, thus k = 2 and G = G_k.

It is easier to carry out the reduction using complex variables, so we write (34) as
(35) x(t, r, θ) = z(t)φ_{3,1}(r, θ) + z̄(t)φ̄_{3,1}(r, θ),
where z(t) = ½(x_1(t) − ix_2(t)). Let us expand the center manifold function as

(36) Φ = Σ_{(n,j)≠(±3,1)} Φ_{n,j}(t) φ_{n,j}(r, θ).
Plugging the above expansion into the center manifold approximation formula (33), taking inner product with M φ * n,j and using the orthogonality (27) we have
(37) Φ_{n,j} = ⟨H(x), φ*_{n,j}⟩ / (−β_{n,j} ⟨φ_{n,j}, Mφ*_{n,j}⟩) + o(2).
Since H is bilinear,

(38) H(x) = H(zφ_{3,1} + z̄φ̄_{3,1}) = z²H(φ_{3,1}, φ_{3,1}) + zz̄ H_s(φ_{3,1}, φ̄_{3,1}) + z̄²H(φ̄_{3,1}, φ̄_{3,1}),

with the operator H_s defined by (8). Thanks to the orthogonality ∫_0^{2π} e^{inθ}e^{−imθ} dθ = 2πδ_{nm}, we have

(39) ⟨H(φ_{m_1,i_1}, φ_{m_2,i_2}), φ*_{m_3,i_3}⟩ = 0 if m_1 + m_2 ≠ m_3.

Since φ̄_{3,1} = φ_{−3,1}, this implies

(40) ⟨H(x), φ*_{n,j}⟩ = 0 if n ∉ {0, −6, 6}.
According to (40), (36) and (37),

(41) Φ(t) = Σ_{j=1}^∞ [Φ_{0,j}(t)φ_{0,j} + Φ_{6,j}(t)φ_{6,j} + Φ_{−6,j}(t)φ_{−6,j}] + o(2).

That is, the center manifold is o(2) in the eigendirections whose azimuthal wavenumber is not 0, 6 or −6. The expansion (38) implies

(42) ⟨H(x), φ*_{0,j}⟩ = zz̄ ⟨H_s(φ_{3,1}, φ̄_{3,1}), φ*_{0,j}⟩,  ⟨H(x), φ*_{6,j}⟩ = z² ⟨H(φ_{3,1}, φ_{3,1}), φ*_{6,j}⟩,

and ⟨H(x), φ*_{−6,j}⟩ is the complex conjugate of ⟨H(x), φ*_{6,j}⟩. By (42) and (37), we get the coefficients of the center manifold in (41):

(43) Φ_{0,j} = zz̄ Φ̃_{0,j} + o(2),  Φ_{6,j} = z² Φ̃_{6,j} + o(2),  Φ_{−6,j} = Φ̄_{6,j},

where Φ̃_{0,j} and Φ̃_{6,j} are given by (29).
As the dynamics of the system is enslaved to the center manifold for small initial data and for Reynolds numbers close to the critical Reynolds number R c , it is sufficient to investigate the dynamics of the main equation (9) on the center manifold. For this reason we take
φ(t) = x(t) + Φ(t) in (9) to obtain

(44) (dz/dt)Mφ_{3,1} + (dz̄/dt)Mφ̄_{3,1} = zNφ_{3,1} + z̄Nφ̄_{3,1} + H(x + Φ).
To project the above equation onto the center-unstable space E_1, we take the inner product of (44) with φ*_{3,1} and use

⟨Mφ̄_{3,1}, φ*_{3,1}⟩ = 0,  Nφ_{3,1} = β_{3,1}Mφ_{3,1},  Nφ̄_{3,1} = β_{3,1}Mφ̄_{3,1},
to get the following reduced equation of (9).
(45) dz/dt = β_{3,1}(R)z + ⟨H(x + Φ), φ*_{3,1}⟩ / ⟨φ_{3,1}, Mφ*_{3,1}⟩.
The reduced equation (45) describes the transitions of the full nonlinear system for R near R c and small initial data. At this stage, the nonlinear term in (45) is too complicated to explicitly describe the transition. Thus we need to determine the lowest order expansion in z of the nonlinear term H(x + Φ), φ * 3,1 . By the bilinearity of H,
(46) ⟨H(x + Φ), φ*_{3,1}⟩ = ⟨H(x), φ*_{3,1}⟩ + ⟨H_s(x, Φ), φ*_{3,1}⟩ + ⟨H(Φ), φ*_{3,1}⟩.

The first term in (46) vanishes by (40), and the last term in (46) is o(3), since Φ = O(2) and H is bilinear. Thus (46) becomes

(47) ⟨H(x + Φ), φ*_{3,1}⟩ = ⟨H_s(x, Φ), φ*_{3,1}⟩ + o(3).

Using the expression (35) for x, we can rewrite (47) as

(48) ⟨H(x + Φ), φ*_{3,1}⟩ = z⟨H_s(φ_{3,1}, Φ), φ*_{3,1}⟩ + z̄⟨H_s(φ̄_{3,1}, Φ), φ*_{3,1}⟩ + o(3).
Now we use the expansion (41) of Φ in (48) and the orthogonality relations

⟨H_s(φ_{3,1}, φ_{n,j}), φ*_{3,1}⟩ = 0 if n ≠ 0,  ⟨H_s(φ̄_{3,1}, φ_{n,j}), φ*_{3,1}⟩ = 0 if n ≠ 6,

which follow from (39), to arrive at

(49) ⟨H(x + Φ), φ*_{3,1}⟩ = Σ_{j=1}^∞ [zΦ_{0,j}⟨H_s(φ_{3,1}, φ_{0,j}), φ*_{3,1}⟩ + z̄Φ_{6,j}⟨H_s(φ̄_{3,1}, φ_{6,j}), φ*_{3,1}⟩] + o(3).
Defining the coefficient A by (28) and making use of (43) and (49), we write down the approximate equation of (45) as
(50) dz dt = β 3,1 (R)z + A|z| 2 z + o(3).
To finalize the proof, it remains to analyze the stability of the zero solution of (50) for small initial data. In polar coordinates z(t) = |z|e^{iγ}, (50) is equivalent to

(51) d|z|/dt = β_{3,1}(R)|z| + Re(A)|z|³ + o(|z|³),  dγ/dt = Im(A)|z|² + o(|z|³).
For R > R_c, where β_{3,1} > 0, it is clear from (51) that z = 0 is unstable if Re(A) > 0 and locally stable if Re(A) < 0. In the latter case, the bifurcated solution is

z(t) = (−β_{3,1}(R)/Re(A))^{1/2} exp(−i (Im(A)/Re(A)) β_{3,1}(R) t).
Thus to determine the stability of the bifurcated state as R crosses the critical Reynolds number R c , we need to compute the sign of the real part of A.
The details of the assertions in the proof of Theorem 2 follow from the attractor bifurcation theorem in [10]. That finishes the proof.
Energy Stability
In this section we study the energy stability of the equations (3), that is, the at least exponential decay of solutions to the base flow. We refer to [16] for a multitude of applications of this theory.
For f, g, h in H¹_0(Ω), the following two properties of J follow from integration by parts:

(52) ⟨J(f, g), g⟩ = 0,  ⟨J(f, g), h⟩ = −⟨J(f, h), g⟩.

Taking the inner product of the first equation of (3) with w and of the second with ψ, and using (52), we obtain

(53) ½ d/dt (‖w‖² + ǫ‖∇w‖²) = −(1/R)‖∇w‖² + R⟨ψ_θ, w⟩ + ǫ⟨J(∆w, ψ), w⟩,

(54) ½ d/dt (‖∇ψ‖² + ǫ‖∆ψ‖²) = −(1/R)‖∆ψ‖² + ǫR⟨∆w_θ, ψ⟩ + ǫ⟨J(∆w, w), ψ⟩.
Adding equations (53) and (54) and using (52) once again, we arrive at
(55) ½ dE/dt = −(1/R)I_1(t) + RI_2(t),

where

E = ‖w‖² + ǫ‖∇w‖² + ‖∇ψ‖² + ǫ‖∆ψ‖²,  I_1 = ‖∇w‖² + ‖∆ψ‖²,  I_2 = ⟨ψ_θ, w − ǫ∆w⟩.

Letting

(56) 1/R²_E = max_{X_1\{0}} I_2/I_1,
we have by (55)
(57) dE/dt ≤ −2R(1/R² − 1/R²_E) I_1.
Since I 1 ≥ 0 and I 2 = 0 whenever ψ θ = 0, R E must be nonnegative.
Since w ∈ H¹_0(Ω) and ∇ψ ∈ H¹_0(Ω), the Poincaré inequality gives ‖∇w‖² ≥ η_1‖w‖² and ‖∆ψ‖² ≥ η_1‖∇ψ‖², where η_1 ≈ 5.78 is the first eigenvalue of the negative Laplacian on Ω. Thus we have

(58) I_1 ≥ η_1/(1 + ǫη_1) E.

Now let

c_R = (2Rη_1/(1 + ǫη_1)) (1/R² − 1/R²_E),
and suppose that R < R E . Then c R > 0 and by (57) and (58),
d dt E(t) ≤ −c R E(t).
Hence Gronwall's inequality implies
E(t) ≤ e −cRt E(0).
In particular, for R < R_E we have c_R > 0, and any initial disturbance in X_1 decays to zero, implying the unconditional stability of the basic steady state solution.
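Numerically, the decay rate c_R is elementary to evaluate. The sketch below computes η_1 from the first zero of J_0 (the first Dirichlet eigenvalue of −∆ on the unit disk is the square of that zero) and checks that the Gronwall bound E(t) ≤ e^{−c_R t}E(0) is decaying precisely for R < R_E; the values of R and R_E used are illustrative.

```python
# Sketch: decay rate c_R = 2 R eta1 / (1 + eps*eta1) * (1/R^2 - 1/R_E^2)
# from (57)-(58); eta1 = a_{0,1}^2 ~ 5.78 is the squared first zero of J_0.
# The sample values of R and R_E below are illustrative only.
from scipy.special import jn_zeros

eta1 = jn_zeros(0, 1)[0] ** 2           # first Dirichlet eigenvalue of -Laplacian

def decay_rate(R, R_E, eps):
    return 2 * R * eta1 / (1 + eps * eta1) * (1 / R**2 - 1 / R_E**2)

# Below the energy threshold the Gronwall bound E(t) <= exp(-c_R t) E(0) decays.
assert decay_rate(R=6.0, R_E=12.87, eps=0.0) > 0
assert decay_rate(R=20.0, R_E=12.87, eps=0.0) < 0   # no decay guaranteed above R_E
```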
Using the variational methods to maximize the quantity in (56), we find the resulting Euler-Lagrange equations as
(59) ∆w + (R²/2)(1 − ǫ∆)ψ_θ = 0,  ∆²ψ + (R²/2)(1 − ǫ∆)w_θ = 0.
Considering (59) as an eigenvalue problem with R playing the role of the eigenvalue, R E is just the smallest positive eigenvalue. To solve (59), we plug the ansatz w = e imθ w m (r) and ψ = e imθ ψ m (r) into (59) which yields
(60) ∆_m w_m + i(mR²/2)(1 − ǫ∆_m)ψ_m = 0,  ∆²_m ψ_m + i(mR²/2)(1 − ǫ∆_m)w_m = 0,

where ∆_m = d²/dr² + (1/r) d/dr − m²/r².
Taking ∆ m of the second equation above and using the first equation, we obtain
(61) p(∆_m)ψ_m = 0,  where p(ξ) = ξ³ + (m²R⁴/4)(1 − ǫξ)².
Let ξ_1, ξ_2 and ξ_3 be the three roots of p. As the discriminant of p is negative, one root is real and the other two form a complex conjugate pair. The factorization of the operator in (61) gives
(62) (∆ m − ξ 1 )(∆ m − ξ 2 )(∆ m − ξ 3 )ψ m = 0.
The general solution of (62) is
ψ_m = Σ_{k=1}^3 [c_k I_m(√ξ_k r) + c̃_k K_m(√ξ_k r)],
where I_m and K_m are the modified Bessel functions. The boundedness of the solution at r = 0 necessitates c̃_k = 0 for k = 1, 2, 3. Thus

ψ_m = Σ_{k=1}^3 c_k I_m(√ξ_k r),

and imposing the boundary conditions (16) yields the characteristic equation

(63) det [ ξ_1^{−1}I_m(√ξ_1), ξ_2^{−1}I_m(√ξ_2), ξ_3^{−1}I_m(√ξ_3) ; I_m(√ξ_1), I_m(√ξ_2), I_m(√ξ_3) ; √ξ_1 I′_m(√ξ_1), √ξ_2 I′_m(√ξ_2), √ξ_3 I′_m(√ξ_3) ] = 0.
For fixed m and ǫ, the equation (63) has infinitely many solutions R = R m,j (ǫ), j ∈ Z + . Letting
(64) R m = min j∈Z+ R m,j ,
the critical Reynolds number is given by
(65) R E = min m∈Z+ R m .
We present the numerical computations of R E in the next section.
Numerical Results

7.1. Computation of the transition number A. We approximate the transition number A by A_N, where A_N is the series in (28) truncated at N, i.e. A_N = Σ_{j=1}^N (A_{0,j} + A_{6,j}), and A_{m,j} represents the nonlinear interaction of the critical modes and the mode with azimuthal wavenumber m and radial wavenumber j given by (29). A symbolic computation software is used to compute A_N. We present our numerical computations of A_N in Figure 2 for ǫ = 1, 10^{−1}, 10^{−2}, 10^{−3} and 1 ≤ N ≤ 10. The imaginary part of A is nonzero, and we are only interested in the sign of the real part of A, which determines the type of transition according to Theorem 2. To simplify the presentation, we scale all A_N so that |Re(A_1)| = 1. The plots in Figure 2 suggest that the convergence of the truncations A_N → A is rapid for small ǫ, but a higher order truncation (larger N) is necessary to accurately resolve A for larger ǫ. For ǫ < 10^{−1}, even A_1 is a good approximation to the sign of A. For example, the relative error in approximating A by A_1 is approximately 2% for ǫ = 10^{−3} and increases to approximately 18% for ǫ = 1.
We also measure the relative strength of the nonlinear interactions, i.e. the ratio
(66) B_N = Σ_{j=1}^N Re(A_{6,j}) / Σ_{j=1}^N Re(A_{0,j}),
in Figure 3. It is seen from Figure 3 that the contribution from the modes with m = 0 dominates when ǫ is low, but as ǫ increases, the contribution from the modes with m = 6 starts to become significant. For example, for ǫ = 10^{−3}, B_N approaches 8 × 10^{−5}; for ǫ = 10^{−2}, B_N approaches −2 × 10^{−3}; and for ǫ = 10^{−1}, B_N approaches −8 × 10^{−3}. In particular, for low ǫ, we have A ≈ A_1 ≈ A_{0,1}.
More significantly, our numerical results presented in Figure 2 show that the real part of A is positive for ǫ = 10^{−3}, 10^{−2}, 10^{−1}, 1, meaning that the transition is catastrophic by Theorem 2. Thus the system moves to a flow regime away from the base Poiseuille flow, and the system exhibits complex dynamical behavior for R > R_c.

7.2. Determination of the energy stability threshold R_E. With a standard numerics package, R_m(ǫ) in (64) can be computed for given m and ǫ. Then, by (65), R_E is computed by taking the minimum over all (computed) R_m. Table 2 shows that for ǫ = 10^{−4} and ǫ = 10^{−3}, R_E is attained at m = 1, while for ǫ = 10^{−2} and ǫ = 2 × 10^{−2}, R_E is attained at m = 2. In Figure 5, we plot R_m (m = 1, 2, 3) for 0 < ǫ ≤ 5 × 10^{−2}. We see that the curves R_1 and R_2 intersect approximately at ǫ = 0.009, while R_2 and R_3 intersect approximately at ǫ = 0.024. As ǫ is increased, the value of m at which R_E is attained also increases. For higher values of m, the roots of the determinant in (63) become increasingly hard to find. In Table 2, the last column gives the value of R_c, the linear instability threshold, computed by (24). Note that the interval [R_E, R_c] consists of Reynolds numbers for which the base flow is either not globally stable, or globally stable but not exponentially attracting. We plot the R_E and R_c data from Table 2 in Figure 4, which shows that this interval shrinks rapidly as ǫ is increased.

Table 2. R_m denotes the first positive root of (63) and R_E is the minimum of R_m taken over all m. R_c is the linear instability threshold.
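The computation of R_m can be sketched as follows. The three roots of p(ξ) = ξ³ + (m²R⁴/4)(1 − ǫξ)² feed the determinant (63) through modified Bessel functions of complex argument. Placing the real (negative) root in the first column, the determinant equals i^{m+1} times a real-valued function of R, as can be checked from I_m(iz) = i^m J_m(z) and the conjugate-pair structure of the other two columns, so that function can be root-found directly. Function names below are ours; the paper reports R_E ≈ 12.87 for ǫ = 0, attained at m = 1.

```python
# Sketch: smallest positive root R of the determinant (63) for fixed m and eps.
# p(xi) = xi^3 + q(1 - eps*xi)^2 with q = m^2 R^4 / 4; one root is real and
# negative, the other two form a conjugate pair.
import numpy as np
from scipy.special import iv, ivp
from scipy.optimize import brentq

def det63(R, m, eps):
    q = m**2 * R**4 / 4.0
    xi = sorted(np.roots([1.0, q * eps**2, -2.0 * q * eps, q]),
                key=lambda z: abs(z.imag))
    xi[0] = xi[0].real                   # the real (negative) root, first column
    if xi[1].imag < 0:                   # keep the conjugate pair ordered
        xi[1], xi[2] = xi[2], xi[1]
    M = np.empty((3, 3), dtype=complex)
    for k, x in enumerate(xi):
        s = np.sqrt(complex(x))
        M[0, k] = iv(m, s) / x           # w(1) = 0 row (after column scaling)
        M[1, k] = iv(m, s)               # psi(1) = 0 row
        M[2, k] = s * ivp(m, s)          # psi'(1) = 0 row
    d = np.linalg.det(M)
    return (d * (-1j) ** (m + 1)).real   # strip the constant i^{m+1} phase

def smallest_root(m, eps, R_lo=2.0, R_hi=30.0, dR=0.1):
    f = lambda R: det63(R, m, eps)
    R = R_lo
    while R + dR < R_hi:
        if f(R) * f(R + dR) < 0:
            return brentq(f, R, R + dR)
        R += dR
    raise RuntimeError("no root in range")

print(smallest_root(m=1, eps=0.0))       # the paper reports ~12.87 at eps = 0
```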
Concluding Remarks
In this work, we considered both the energy stability and transitions of the Poiseuille flow of a second grade fluid in an infinite circular pipe with the restriction that the flow is independent of the axial variable z.
We show that unlike the Newtonian (ǫ = 0) case, in the second grade model (ǫ ≠ 0 case), the time independent base flow exhibits transitions as the Reynolds number R exceeds the critical threshold R c ≈ 4.124ǫ^{-1/4}, where ǫ is a material constant measuring the relative strength of second order viscous effects compared to inertial effects. At R = R c we prove that a transition takes place and that the type of the transition depends on a complex number A. In particular, depending on A, generically either a continuous transition to a periodic solution or a catastrophic transition occurs, where the bifurcation is subcritical. The time period of the periodic solution approaches infinity as R approaches R c, a phenomenon known as infinite period bifurcation.
We show that the number A = ∑_{j=1}^{∞} (A_{0,j} + A_{6,j}), where A_{m,j} denotes the nonlinear interaction of the two critical modes with the mode having azimuthal wavenumber m and radial wavenumber j. Our numerical results suggest that for low ǫ (ǫ < 1), A is approximated well by A_{0,1}. That is, a single nonlinear interaction between the critical modes and the mode with azimuthal wavenumber 0 and radial wavenumber 1 dominates all the rest of the interactions and hence determines the transition.
Our numerical results suggest that a catastrophic transition is preferred for low ǫ values (ǫ < 1). This means that an unstable time periodic solution bifurcates for R < R c. For R > R c, the system has either metastable states or a local attractor far away from the base flow, and more complex dynamics emerge.
We also show that for R < R E with R E < R c, the Poiseuille flow is globally stable in the H_0^1(Ω) norm for the velocity, attracting any initial disturbance with at least exponential speed. We find that R E ≈ 12.87 when ǫ = 0, and the gap between R E and R c diminishes quickly as ǫ is increased.
There are several directions in which this work can further be extended. First, in this work we consider a pipe with a circular cross section of unit radius and find that the first two critical modes have azimuthal wavenumber equal to 3. Increasing (or decreasing) the radius of the cross section will also increase (decrease) the azimuthal wavenumber of the critical modes. The analysis in this case would be similar to our presentation except in the case where four critical modes with azimuthal wave numbers m and m + 1 become critical. In the case of four critical modes, more complex patterns will emerge due to the cross interaction of the critical modes [15,3]. An analysis in the light of [14] is required to determine transitions in this case of higher multiplicity criticality.
Second, for the Reynolds number region between R E and R c , there may be regions where the base flow is either globally stable but not exponentially attractive or regions where the domain of attraction of the base flow is not the whole space. A conditional energy stability analysis is required to resolve these Reynolds number regimes [16].
Third, the results we proved in this work for the second grade fluids can also be extended to fluids of higher grades and to other types of shear flows.
Fourth, in this work we restricted attention to 2D flows. At the expense of more complicated computations and results, a similar analysis could be carried out for 3D flows which depend also on the axial variable z.
Figure 1. The time periodic stream function ψ per given by (30), which rotates in time, clockwise if T > 0 and counterclockwise if T < 0.
if f , g and h are linearly dependent. Taking inner product of the first equation in (3) with w and the second equation in (3) with ψ and using the property (52)
Now applying the operator 1 − ǫ∆ m to the second equation in (60) and using the first equation of (60), we obtain p(∆ m)w m = 0, i.e. the same equation (61), this time for w m. Hence w m = ∑_{k=1}^{3} d k I m(√ξ k r). The first equation in (60) gives the relation d k = −i (mR²/2)(ǫ − ξ k^{-1}) c k between the c k's and d k's. Now the boundary conditions w m(1) = ψ m(1) = ψ′ m(1) = 0 constitute a homogeneous system of three linear equations for the coefficients c k. The existence of nontrivial solutions is then equivalent to the vanishing of the determinant of this system, which after some manipulation becomes (63)
Figure 2. The plots of Re(A N) for different ǫ values. All A N's are scaled so that |Re(A 1)| = 1.
Figure 3. B N, defined by (66), measures the relative strength of the nonlinear interactions of the critical modes with the m = 6 modes relative to the m = 0 modes.
Figure 4. R E and R c curves in the ǫ − R plane.
Figure 5. The plot of R m vs ǫ for m = 1, 2, 3.
Table 1. The smallest positive root λ m,1 of (22) and (λ m,1 /m)^{1/2}, which attains its minimum at m = 3.

m                 1       2       3       4       5       6
λ m,1             21.260  34.877  51.030  69.665  90.739  114.21
(λ m,1 /m)^{1/2}  4.610   4.175   4.124   4.173   4.260   4.36
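The last row of Table 1, and the prefactor in the critical threshold R c ≈ 4.124ǫ^{-1/4} quoted in the concluding remarks, can be checked directly from the listed roots. The sketch below takes the λ m,1 values from the table as given (it does not recompute them from (22)):

```python
import math

# Smallest positive roots lambda_{m,1} from Table 1, for m = 1..6.
lam = {1: 21.260, 2: 34.877, 3: 51.030, 4: 69.665, 5: 90.739, 6: 114.21}

# Third row of the table: (lambda_{m,1} / m)^{1/2}.
ratio = {m: math.sqrt(v / m) for m, v in lam.items()}

m_star = min(ratio, key=ratio.get)
print(m_star, round(ratio[m_star], 3))  # minimum attained at m = 3, value 4.124
```

The minimum over m is attained at m = 3 with value ≈ 4.124, which is the prefactor in R c ≈ 4.124ǫ^{-1/4} and is consistent with the critical modes having azimuthal wavenumber 3.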
ǫ           R 1     R 2     R 3     R 4     R 5     R E     R c
0           12.87   13.49   14.84   16.37   17.95   12.87   ∞
10^{-4}     12.86   13.47   14.81   16.32   17.88   12.86   42.4
10^{-3}     12.77   13.31   14.54   15.90   17.29   12.77   23.84
10^{-2}     11.99   11.95   12.44   12.95   13.39   11.95   13.40
2 × 10^{-2} 11.26   10.83   10.91   11.03   11.13   10.83   11.27
[1] Á. Baricz, S. Ponnusamy, and S. Singh, Cross-product of Bessel functions: monotonicity patterns and functional inequalities, arXiv preprint arXiv:1507.01104, 2015.
[2] B. D. Coleman and W. Noll, An approximation theorem for functionals, with applications in continuum mechanics, Archive for Rational Mechanics and Analysis, 6 (1960), pp. 355-370.
[3] H. Dijkstra, T. Sengul, and S. Wang, Dynamic transitions of surface tension driven convection, Physica D: Nonlinear Phenomena, 247 (2013), pp. 7-17.
[4] J. Dunn and K. Rajagopal, Fluids of differential type: critical review and thermodynamic analysis, International Journal of Engineering Science, 33 (1995), pp. 689-729.
[5] J. E. Dunn and R. L. Fosdick, Thermodynamics, stability, and boundedness of fluids of complexity 2 and fluids of second grade, Archive for Rational Mechanics and Analysis, 56 (1974), pp. 191-252.
[6] C. Fetecau and C. Fetecau, Starting solutions for some unsteady unidirectional flows of a second grade fluid, International Journal of Engineering Science, 43 (2005), pp. 781-789.
[7] R. K. Gupta and M. Singh, Stability of stratified rotating viscoelastic Rivlin-Ericksen fluid in the presence of variable magnetic field, Advances in Applied Science Research, 3 (2012), pp. 3253-3258.
[8] T. Hayat, M. Khan, A. Siddiqui, and S. Asghar, Transient flows of a second grade fluid, International Journal of Non-Linear Mechanics, 39 (2004), pp. 1621-1633.
[9] J. P. Keener, Infinite period bifurcation and global bifurcation branches, SIAM Journal on Applied Mathematics, 41 (1981), pp. 127-144.
[10] T. Ma and S. Wang, Phase transition dynamics, Springer-Verlag New York, 2014.
[11] M. Massoudi and T. X. Phuoc, Pulsatile flow of blood using a modified second-grade fluid model, Computers & Mathematics with Applications, 56 (2008), pp. 199-211.
[12] S. Özer and E. Şuhubi, Stability of Poiseuille flow of an incompressible second-grade Rivlin-Ericksen fluid, ARI - An International Journal for Physical and Engineering Sciences, 51 (1999), pp. 221-227.
[13] A. Passerini and M. C. Patria, Existence, uniqueness and stability of steady flows of second and third grade fluids in an unbounded "pipe-like" domain, International Journal of Non-Linear Mechanics, 35 (2000), pp. 1081-1103.
[14] T. Sengul, J. Shen, and S. Wang, Pattern formations of 2d Rayleigh-Bénard convection with no-slip boundary conditions for the velocity at the critical length scales, Mathematical Methods in the Applied Sciences, 2014.
[15] T. Sengul and S. Wang, Pattern formation in Rayleigh-Bénard convection, Communications in Mathematical Sciences, 11 (2013), pp. 315-343.
[16] B. Straughan, The energy method, stability, and nonlinear convection, vol. 91, Springer Science & Business Media, 2013.
[17] Z. Wan, J. Lin, and H. Xiong, On the non-linear instability of fiber suspensions in a Poiseuille flow, International Journal of Non-Linear Mechanics, 43 (2008), pp. 898-907.
Plankton-FL: Exploration of Federated Learning for Privacy-Preserving Training of Deep Neural Networks for Phytoplankton Classification
Daniel Zhang [email protected]
Vikram Voleti
Mila
Alexander Wong
Jason Deglint [email protected]
University of Waterloo
University of Montreal
University of Waterloo
Blue Lion Labs
University of Waterloo
Blue Lion Labs
Creating high-performance generalizable deep neural networks for phytoplankton monitoring requires utilizing large-scale data coming from diverse global water sources. A major challenge to training such networks lies in data privacy, where data collected at different facilities are often restricted from being transferred to a centralized location. A promising approach to overcome this challenge is federated learning, where training is done at site level on local data, and only the model parameters are exchanged over the network to generate a global model. In this study, we explore the feasibility of leveraging federated learning for privacy-preserving training of deep neural networks for phytoplankton classification. More specifically, we simulate two different federated learning frameworks, federated learning (FL) and mutually exclusive FL (ME-FL), and compare their performance to a traditional centralized learning (CL) framework. Experimental results from this study demonstrate the feasibility and potential of federated learning for phytoplankton monitoring.
Introduction
The uncontrollable growth of particular phytoplankton and algae species can cause the formation of harmful algae blooms (HABs). If not properly monitored and controlled, HABs can have severe, negative impacts on various industries, natural ecosystems, and the environment [1]. HABs are a growing concern as research has shown that climate change has led to an increase in the frequency and severity of HABs [2]. A very important step in the monitoring and controlling of HAB formation is the identification of phytoplankton and algae species. Unfortunately, this process is largely manual and thus is highly time-consuming and prone to human error. As such, effective methods for automating the species identification process are highly desired.
Recent advances in machine learning, in particular deep learning, have shown considerable promise for monitoring and assessment of phytoplankton and algae [3,4]. However, a significant bottleneck to training such models is the need for large-scale data coming from different water sources across different countries in order to create high-performance, generalizable models. Since the data collected at the different facilities are often restricted from being transferred to a centralized location for training due to data privacy concerns, this makes it infeasible to leverage traditional, centralized learning frameworks for building such models.
A particularly promising direction for tackling this data privacy challenge lies in federated learning (FL), which involves training local models at individual local nodes on the premises (prem) of each local data source and communicating only the parameters and updates of these local models to a server for generating a global model to reap the benefits from the different local data without having seen any of the individual data sources [5]. FL has demonstrated considerable success in the domains of mobile computing [5,6] and healthcare [7], and thus can hold considerable potential for the application of phytoplankton monitoring and assessment.
In this study, we explore the feasibility of leveraging federated learning to train deep, convolutional neural networks for the purpose of image-driven phytoplankton classification, which we will refer to as Plankton-FL. Our main contributions in this study are as follows:
(1) we simulate and study two federated learning frameworks as potential realizations of Plankton-FL: (centralized) federated learning (FL) and mutually exclusive FL (ME-FL), (2) we evaluate the performance of both Plankton-FL frameworks, and (3) we compare them to a traditional, centralized learning framework (CL). Figure 1 provides a visual representation of each of the three environments. Both privacy-preserving federated learning frameworks are evaluated against CL for the training of deep neural networks for phytoplankton classification.
Methodology
Background
Federated learning has been shown to be very effective for training deep neural networks on decentralized data while ensuring data privacy [5]. Specifically, federated learning can be leveraged when there is sensitive data from various sources. A typical federated learning framework consists of two components: a global model and K clients. Each client contains its own local model, and the local models are trained iteratively and independently on their respective data. It is assumed that all of the available data is partitioned across the K clients, with P k denoting the data held by client k. The local models are then used to update the global model [5]. This process is repeated for N rounds in order for the global model to generalize. The objective for federated learning is given in equation (1).
f(w) = ∑_{k=1}^{K} (n_k / n) F_k(w),   where   F_k(w) = (1 / n_k) ∑_{i ∈ P_k} f_i(w)    (1)
In equation (1), f_i(w) denotes the loss ℓ(x_i, y_i; w) for an observation (x_i, y_i) and model parameters w. Also, n_k = |P_k| and n denote the number of observations at client k and the total number of observations, respectively.
The centralized federated learning algorithm which governs the communication between the global and local models is known as FederatedAveraging (FedAvg) and was introduced by McMahan et al. [5]. FedAvg is the iterative process of training all the local models, taking the average of all the updated weights from the local models, and then using it to update the global model. As described by McMahan et al., pseudo-code is provided in Algorithm 1.
Algorithm 1
McMahan et al. [5]'s implementation of the FedAvg algorithm. C is the set of all clients; B is the local model batch size; LE is the number of local epochs; η is the learning rate; w are the weights, and ℓ is the loss function.
1: Global/Server:
2: C ← set of all available clients
3: initialize w_0
4: for each round, r = 1, 2, ... do
5:   for each client, k ∈ C, do
6:     w^k_{r+1} ← Local-Training(k, w_r)
7:   end for
8:   FedAvg: w_{r+1} ← ∑_{k=1}^{K} (n_k / n) w^k_{r+1}
9:   SYNC Global→Local: w^k_{r+1} ← w_{r+1} ∀k
10: end for
11:
12: Local-Training(k, w):
13:   B ← split the data on client k into B batches
14:   for each local epoch, l = 1...LE do
15:     for batch b ∈ B do
16:       w ← w − η ∇_w ℓ(w; b)

In this paper, we used the FedAvg method, as described in Algorithm 1, when training our two instances of Plankton-FL. To test the feasibility and potential of federated learning, three different experiments were simulated: a centralized learning baseline (CL), a (centralized) federated learning framework (FL), and a mutually exclusive federated learning framework (ME-FL). Figure 1 provides a visual representation of each of the three experiments.
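The server-side aggregation in line 8 of Algorithm 1 is a sample-size-weighted average of the client parameter vectors. A minimal NumPy sketch (the function name and the flattened-vector representation are our simplifications, not from [5]):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server aggregation step: w_{r+1} = sum_k (n_k / n) * w^k_{r+1}."""
    n = float(sum(client_sizes))
    coeffs = np.array(client_sizes, dtype=float) / n   # n_k / n for each client
    return coeffs @ np.stack(client_weights)           # weighted average, shape (d,)

# Two clients: one with 1 sample, one with 3 samples.
w_new = fedavg([np.array([1.0, 1.0]), np.array([4.0, 4.0])], [1, 3])
print(w_new)  # [3.25 3.25] -- the larger client dominates the average
```

Note that when all clients hold the same number of samples, this reduces to a plain mean of the client weights.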
Centralized Learning (CL)
For the CL experiment, we have two data sources that we consolidate into a single server. From there we train a centralized model, which can then be deployed back to the edge devices for assessment and monitoring. The model was trained for a maximum of 75 epochs, with an early stopping criterion: a minimum of 50 epochs and a minimum change δ of 0.000001 between test accuracies.
Federated Learning (FL)
In the FL experiment, all of the training data was combined, randomly shuffled, and distributed to clients. Each client trained their own local model, on-prem, and only communicated their parameters back to the global server. FL was run for 10 iterations, where each iteration number corresponded to the number of clients. Namely, for the first iteration, there was only one client containing all of the training data, identical to CL, and with each increasing iteration another client was added (i.e. second iteration utilized two clients, etc.). Although, in reality, no single client will contain all of the data, for the purpose of comparing to CL, it made sense to start with a single client.
Mutually Exclusive FL (ME-FL)
Unlike FL, in ME-FL, instead of combining all of the data together, shuffling, and distributing them, the clients only contained data from a single source, making them mutually exclusive. Again, each of the clients trained their own model, on-prem, and only communicated their parameters back to the global server. ME-FL was run for 9 iterations, starting from 2 clients up to 10. Iterations start from 2 clients due to the nature of the experiment; since each client only has data from a single source, it would not make sense to only have a single client. This modified setup ensures that we always have data from both sources.
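The two client constructions differ only in how data is assigned to clients. A minimal sketch (plain Python; the function names and the "H"/"W" placeholder labels for the two sources are ours):

```python
import random

def fl_partition(source_a, source_b, num_clients, seed=0):
    """FL: pool both sources, shuffle, and deal samples round-robin to clients."""
    pooled = list(source_a) + list(source_b)
    random.Random(seed).shuffle(pooled)
    return [pooled[i::num_clients] for i in range(num_clients)]

def me_fl_partition(source_a, source_b):
    """ME-FL: each client holds data from a single source only."""
    return [list(source_a), list(source_b)]

halifax = ["H"] * 6
waterloo = ["W"] * 6

print(fl_partition(halifax, waterloo, 2))   # clients drawn from the shuffled pool
print(me_fl_partition(halifax, waterloo))   # clients are mutually exclusive by source
```

The shuffled FL partition yields (approximately) IID label distributions across clients, whereas the ME-FL partition is non-IID by construction, which is the distinction discussed in the Results section.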
Experimental Setup
Dataset
The dataset was provided by Blue Lion Labs and was collected from two mutually exclusive sources, Halifax and Waterloo. It contained 301 distinct microscope specimen photos, each at a resolution of 3208 x 2200 pixels. The phytoplankton contained in
Model Architecture
For the purpose of this exploration, we took the majority class present in each image as the label to do image classification. This ensures that all models receive the same amount of information.
Given the task, we built a custom convolutional neural network with four convolutional layers, three max-pooling layers, two dense layers, and an output layer. Across all intermediate layers, the ReLU activation function was used, and at the output, a softmax activation was used to predict the probabilities of each class. For all the convolutional layers, a kernel of size 3x3 and a stride of 1x1 was used and for all of the pooling layers, a pool and stride size of 2x2 was used. Additionally, dropout was used with a rate of 0.25 after the convolutional layers and a rate of 0.5 after the first dense layer.
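With 3×3 kernels at stride 1 and 2×2 pooling, the feature-map sizes implied by this stack can be traced for the 128×128 inputs used in training. The sketch below is ours: it assumes "valid" (no padding) convolutions and a conv-pool interleaving (three conv + pool pairs followed by a final conv), neither of which is stated in the text.

```python
def conv_out(size, kernel=3, stride=1):
    # output spatial size of a 'valid' (no padding) convolution
    return (size - kernel) // stride + 1

def pool_out(size, pool=2, stride=2):
    # output spatial size of 2x2 max pooling with stride 2
    return (size - pool) // stride + 1

size, trace = 128, []
for _ in range(3):                          # three conv + max-pool pairs
    size = conv_out(size); trace.append(size)
    size = pool_out(size); trace.append(size)
size = conv_out(size); trace.append(size)   # fourth convolutional layer

print(trace)  # [126, 63, 61, 30, 28, 14, 12]
```

Under these assumptions, the flattened input to the first dense layer would be 12 × 12 × C for C output channels of the last convolution (channel counts are not given in the text).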
Model Training and Evaluation
When training, the images were resized to a resolution of 128 x 128 pixels and further augmented, using a horizontal flip, vertical flip, rotation, and color jitter, to create a larger data set of 2107 images. Across all experiments, the model architectures were held the same, a batch size of 8 was used, and the data was split into 80% training and 20% test. We also tune the learning rate across all experiments via a grid search over three different learning rates (LR) of 0.001, 0.0001, and 0.0005. Both of the federated learning experiments were run for 75 rounds and each local model was trained for 1 epoch. Furthermore, given that it is a multi-label image classification task, the metric considered is prediction accuracy and the loss function is categorical cross-entropy.

Results & Discussion

Comparison of Performance Across Experiments

Firstly, when comparing CL and FL, we observe that for a single client FL outperforms CL. However, this is expected because FL with a single client is the exact same setup as CL; we expect the test accuracies to be very close in magnitude, and it is entirely possible that FL can outperform CL in this scenario. For all other numbers of clients, FL has a progressively worse test accuracy and is continuously outperformed by CL. In addition, across all numbers of clients, we observe that ME-FL is consistently outperformed by CL and FL, which further demonstrates the impracticality of this method. Figure 2 provides a visual comparison of CL, FL, and ME-FL across all epochs. We specifically look at the results for 10 clients of both FL and ME-FL, as in reality there are often large numbers of clients. From the figure, CL and FL both appear to learn, whereas ME-FL does not appear to learn at all. Comparing CL and FL, we observe that CL converges much faster than FL, which tells us that CL learns faster than FL. Overall, across all experiments, we generally observed that CL performed the best, FL performed second best, and ME-FL had the worst performance; the same trend is observed for each learning rate.
Downward Trend in FL Across Number of Clients
From Table 1 and Figure 2, we observe that FL performs relatively well, which prompted an investigation into its properties. Figure 3 displays the global model test accuracies for FL, for all numbers of clients, across the three learning rates. We observe a downward trend in the test accuracies as the number of clients increases. With an increasing number of clients, the global model needs to process and learn more information (i.e., the global model has to aggregate weights from more sources). With more information to process, learning is slowed down, yielding worse generalization.
Causation of Poor ME-FL Performance
The largest contributor to ME-FL's subpar performance relative to FL is that ME-FL was trained on individual, mutually exclusive clients. In our FL experiments we utilize a homogeneous model architecture; that is, all clients and the global server have the same model architecture. The nature of FL yields an independent and identically distributed (IID) distribution of labels across clients. In ME-FL, however, the distribution across clients is non-IID. As discussed in other literature, homogeneous federated learning performs poorly on non-IID data distributions [6]. Given this limitation, heterogeneous federated learning [8], [9] is an alternative approach that should be explored. In this method, clients are allowed to differ in network architecture, allowing for more flexibility. Research has been done to explore applications of heterogeneous federated learning to mutually exclusive data [10], and it has typically been the preferred approach over homogeneous federated learning.
Conclusion & Future Works
This work demonstrates the feasibility and potential of Plankton-FL for the privacy-preserving building of high-performance, generalizable models for phytoplankton assessment without the need to exchange data. We simulated two different federated learning frameworks and compared their performance to a traditional, centralized learning framework. Although centralized learning yields the best performance, it does not address privacy concerns. Federated learning preserves privacy but fails to generalize when clients are mutually exclusive. We find that when clients share class labels with one another, federated learning both generalizes well and provides a privacy-preserving alternative to centralized learning.
Given the outcomes of this paper, the immediate future work includes (1) implementing this framework for object detection to build off the current work of image classification, (2) utilizing a heterogeneous federated learning framework and conducting the same experiments to assess the relative performance to homogeneous federated learning, and (3) exploring novel federated learning-related methods. For example, another method that can be utilized is git re-basin, which aims to train individual models on disjoint datasets and merge them together [11]. Finally, careful consideration must be taken regarding how federated learning frameworks will be deployed in the field to ensure data privacy between clients. This will help provide a secure and accurate method for identifying different species of phytoplankton and help alleviate the manual workload.
Fig. 1: Traditional centralized learning (CL) (top) and the two federated learning frameworks as realizations of Plankton-FL: federated learning (FL) (middle) and mutually exclusive FL (ME-FL) (bottom).
Table 1: Numerical comparison across all experiments utilizing a learning rate of 0.0001. CL, FL, and ME-FL were trained for 51, 75, and 75 epochs, respectively.
Acknowledgments

This work was funded by the Waterloo AI Institute and Mitacs. The dataset was provided by Blue Lion Labs, and the computing resources were provided by the Vision and Image Processing (VIP) Lab at the University of Waterloo and Blue Lion Labs.
[1] E. Granéli and J. Turner, An Introduction to Harmful Algae, vol. 189, 2006, pp. 3-7.
[2] M. L. Wells, B. Karlson, A. Wulff, R. Kudela, C. Trick, V. Asnaghi, E. Berdalet, W. Cochlan, K. Davidson, M. De Rijcke, S. Dutkiewicz, G. Hallegraeff, K. J. Flynn, C. Legrand, H. Paerl, J. Silke, S. Suikkanen, P. Thompson, and V. L. Trainer, "Future hab science: Directions and challenges in a changing climate," Harmful Algae, vol. 91, p. 101632, 2020.
[3] J. L. Deglint, C. Jin, and A. Wong, "Investigating the automatic classification of algae using the spectral and morphological characteristics via deep residual learning," in International Conference on Image Analysis and Recognition. Springer, 2019, pp. 269-280.
[4] N. Bamra, V. S. Voleti, A. Wong, and J. L. Deglint, "Towards generating large synthetic phytoplankton datasets for efficient monitoring of harmful algal blooms," ArXiv, vol. abs/2208.02332, 2022.
[5] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data," in AISTATS, 2017.
[6] Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, and V. Chandra, "Federated learning with non-iid data," ArXiv, vol. abs/1806.00582, 2018.
[7] N. Rieke, J. Hancox, W. Li, F. Milletarì, H. R. Roth, S. Albarqouni, S. Bakas, M. N. Galtier, B. A. Landman, K. Maier-Hein, S. Ourselin, M. Sheller, R. M. Summers, A. Trask, D. Xu, M. Baust, and M. J. Cardoso, "The future of digital health with federated learning," npj Digital Medicine, vol. 3, no. 1, p. 119, Sep 2020. Available: https://doi.org/10.1038/s41746-020-00323-1
[8] D. Li and J. Wang, "Fedmd: Heterogenous federated learning via model distillation," ArXiv, vol. abs/1910.03581, 2019.
[9] F. Yu, W. Zhang, Z. Qin, Z. Xu, D. Wang, C. Liu, Z. Tian, and X. Chen, "Heterogeneous federated learning," ArXiv, vol. abs/2008.06767, 2020.
[10] G. K. Gudur and S. K. Perepu, "Federated learning with heterogeneous labels and models for mobile activity monitoring," ArXiv, vol. abs/2012.02539, 2020.
[11] S. K. Ainsworth, J. Hayase, and S. S. Srinivasa, "Git re-basin: Merging models modulo permutation symmetries," ArXiv, vol. abs/2209.04836, 2022.
Distributionally Robust Optimization using Cost-Aware Ambiguity Sets
Mathijs Schuurmans
Panagiotis Patrinos
We present a novel framework for distributionally robust optimization (DRO), called cost-aware DRO (CADRO).The key idea of CADRO is to exploit the cost structure in the design of the ambiguity set to reduce conservatism. Particularly, the set specifically constrains the worst-case distribution along the direction in which the expected cost of an approximate solution increases most rapidly. We prove that CADRO provides both a high-confidence upper bound and a consistent estimator of the out-of-sample expected cost, and show empirically that it produces solutions that are substantially less conservative than existing DRO methods, while providing the same guarantees.
I. INTRODUCTION
We consider the stochastic programming problem
minimize_{x ∈ X}  IE[ℓ(x, ξ)]    (1)
with X ⊆ IR^n a nonempty, closed set of feasible decision variables, ξ ∈ Ξ a random variable following probability measure P, and ℓ : IR^n × Ξ → IR a known cost function. This problem is foundational in many fields, including operations research [1], machine learning [2], and control (e.g., stochastic model predictive control) [3]. Provided that the underlying probability measure P is known exactly, this problem can effectively be solved using traditional stochastic optimization methods [1], [4]. In reality, however, only a data-driven estimate P̂ of P is typically available, which may be subject to misestimations, known as ambiguity. Perhaps the most obvious method for handling this issue is to disregard this ambiguity and instead apply a sample average approximation (SAA) (also known as empirical risk minimization (ERM) in the machine learning literature), where (1) is solved using P̂ as a plug-in replacement for P. However, this is known to produce overly optimistic estimates of the optimal cost [4, Prop. 8.1], potentially resulting in unexpectedly high realizations of the cost when deploying the obtained optimizers on new, unseen samples. This downward bias of SAA is closely related to the issue of overfitting, and is commonly referred to as the optimizer's curse [5], [6].
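The downward bias of SAA is easy to reproduce numerically. In the toy example below (ours, not from [4]), ℓ(x, ξ) = (x − ξ)² with ξ ~ N(0, 1): the SAA minimizer is the sample mean x̂, the in-sample SAA value is the (biased) sample variance, and the true out-of-sample cost of the SAA decision is 1 + x̂², so on average SAA underestimates the cost that will actually be incurred:

```python
import random

rng = random.Random(0)
N, trials = 10, 2000
saa_vals, oos_vals = [], []
for _ in range(trials):
    data = [rng.gauss(0.0, 1.0) for _ in range(N)]            # samples of xi ~ N(0, 1)
    x_hat = sum(data) / N                                     # SAA minimizer of E[(x - xi)^2]
    saa_vals.append(sum((x_hat - s) ** 2 for s in data) / N)  # in-sample (SAA) optimal value
    oos_vals.append(1.0 + x_hat ** 2)                         # true out-of-sample cost of x_hat

mean_saa = sum(saa_vals) / trials   # close to (1 - 1/N) * sigma^2 = 0.9
mean_oos = sum(oos_vals) / trials   # close to (1 + 1/N) * sigma^2 = 1.1
print(mean_saa < 1.0 < mean_oos)    # True: SAA is optimistic about the achievable cost
```

Here the gap between the two averages is of order 2/N, so the optimizer's curse is most pronounced exactly in the small-sample regime discussed above.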
Several methods have been devised over the years to combat this undesirable behavior. Classical techniques such as regularization and cross-validation are commonly used in machine learning [2], although they are typically employed as heuristics, providing few rigorous guarantees, in particular for small sample sizes. Alternatively, the suboptimality gap of the SAA solution may be statistically estimated by reserving a fraction of the dataset for independent replications [7]. However, these results are typically based on asymptotic arguments, and are therefore not valid in the low-sample regime. Furthermore, although this type of approach may be used to validate the SAA solution, it does not attempt to improve it by taking into account possible estimation errors. More recently, distributionally robust optimization (DRO) has garnered considerable attention, as it provides a principled way of obtaining a high-confidence upper bound on the true out-of-sample cost [6], [8], [9]. In particular, its capability to provide rigorous performance and safety guarantees has made it an attractive technique for data-driven and learning-based control [10]-[12]. DRO refers to a broad class of methods in which a variant of (1) is solved where P is replaced with a worst-case distribution within a statistically estimated set of distributions, called an ambiguity set.
As the theory essentially requires only that the ambiguity set contains the true distribution with a prescribed level of confidence, a substantial amount of freedom is left in the design of the geometry of these sets. As a result, many different classes of ambiguity sets have been proposed in the literature, e.g., Wasserstein ambiguity sets [9], divergence-based ambiguity sets [6], [12], [13] and moment-based ambiguity sets [8], [14]; see [15], [16] for recent surveys.
Despite the large variety of existing classes of ambiguity sets, a common characteristic is that their design is considered separately from the optimization problem in question. Although this simplifies the analysis in some cases, it may also induce a significant level of conservatism: in reality, we are only interested in excluding distributions from the ambiguity set which actively contribute to increasing the worst-case cost. Requiring that the true distribution deviates little from the data-driven estimate in all directions may therefore be unnecessarily restrictive. This intuition motivates the introduction of a new DRO methodology, which is aimed at designing the geometry of the ambiguity sets with the original problem (1) in mind. The main idea is that by only excluding those distributions that maximally affect the worst-case cost, higher levels of confidence can be attained without introducing additional conservatism in the cost estimate.
Contributions: (i) We propose a novel class of ambiguity sets for DRO, taking into account the structure of the underlying optimization problem; (ii) We prove that the DRO cost is both a high-confidence upper bound and a consistent estimate of the optimal cost of the original stochastic program (1); (iii) We demonstrate empirically that the provided ambiguity set outperforms existing alternatives.
Notation: We denote [n] = {1, . . . , n} for n ∈ IN. |S| denotes the cardinality of a (finite) set S. e_i ∈ IR^n is the ith standard basis vector in IR^n; its dimension n will be clear from context. We denote the level sets of a function f : IR^n → IR as lev_{≤α} f := {x ∈ IR^n | f(x) ≤ α}. We write 'a.s.' to signify that a random event occurs almost surely, i.e., with probability 1. We denote the largest and smallest entries of a vector v ∈ IR^n as v_max := max_{i∈[n]} v_i and v_min := min_{i∈[n]} v_i, respectively, and define its range as rg(v) := v_max − v_min. δ_X is the indicator of a set X: δ_X(x) = 0 if x ∈ X, +∞ otherwise.
II. PROBLEM STATEMENT
We will assume that the random variable ξ is finitely supported, so that without loss of generality, we may write Ξ = {1, . . . , d}. This allows us to define the probability mass vector p = (P[ξ = i])_{i=1}^d, and enumerate the cost realizations ℓ_i := ℓ(· , i), i ∈ [d].
Furthermore, it will be convenient to introduce the mapping L : IR^n → IR^d as L(x) = (ℓ_1(x), . . . , ℓ_d(x)). We will pose the following (mostly standard) regularity assumption on the cost function.
Assumption II.1 (Problem regularity). For all i ∈ [d]: (i) ℓ_i is continuous on X; (ii) ℓ̄_i := ℓ_i + δ_X is level-bounded.
Since any continuous function is lower semicontinuous (lsc), Assumption II.1 combined with the closedness of X implies inf-compactness, which ensures attainment of the minimum [17, Thm. 1.9]. Continuity of ℓ_i is used mainly in Lemma A.5 to establish continuity of the solution mapping V, defined below in (2). A similar result could be obtained by replacing condition (i) with lower semicontinuity and uniform level-boundedness on X; for ease of exposition, however, we will not cover this modification explicitly.
Let p⋆ ∈ ∆_d := {p ∈ IR^d_+ | Σ_{i=1}^d p_i = 1} denote the true-but-unknown probability mass vector, and define V : IR^n × ∆_d → IR : (x, p) ↦ ⟨p, L(x)⟩, to obtain the parametric optimization problem with optimal cost and solution set
V(p) = min_{x∈X} V(x, p) and X⋆(p) = argmin_{x∈X} V(x, p).  (2)
The solution of (1) is retrieved by solving (2) with p = p⋆.
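For finitely supported ξ, evaluating (2) is elementary: V(x, p) is a matrix-vector product and V(p) a minimum over the candidate decisions. A minimal sketch over a finite decision set, with an arbitrary illustrative cost table (not from the paper):

```python
import numpy as np

# Cost table: ell[i, j] = ell_i(x_j), for a finite decision set
# X = {x_0, ..., x_4} and d = 3 scenarios.
ell = np.array([[1.0, 2.0, 0.5, 3.0, 1.5],
                [2.0, 0.5, 2.5, 1.0, 1.5],
                [0.5, 3.0, 1.0, 2.0, 1.5]])
p = np.array([0.2, 0.5, 0.3])  # a probability mass vector in the simplex

expected = p @ ell             # V(x_j, p) = <p, L(x_j)> for every candidate
V_opt = expected.min()         # optimal value V(p) in (2)
X_opt = np.flatnonzero(np.isclose(expected, V_opt))  # solution set

assert np.isclose(p.sum(), 1.0)
```

With this data, `expected` reads [1.35, 1.55, 1.65, 1.7, 1.5], so the optimum is attained by the first candidate decision.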
Assume we have access to a dataset Ξ̂ := {ξ_1, . . . , ξ_m} ∈ Ξ^m collected i.i.d. from p⋆. In order to avoid the aforementioned downward bias of SAA, our goal is to obtain a data-driven decision x̂_m along with an estimate V̂_m such that
P[V(x̂_m, p⋆) ≤ V̂_m] ≥ 1 − β,  (3)
where β ∈ (0, 1) is a user-specified confidence level. We address this problem by means of distributionally robust optimization, where instead of (2), one solves the surrogate problem
V̂_m = min_{x∈X} max_{p∈A_m} V(x, p).  (DRO)
Here, A_m ⊆ ∆_d is a (typically data-dependent, and thus, random) set of probability distributions that is designed to contain the true distribution p⋆ with probability 1 − β, ensuring that (3) holds. Trivially, (3) is satisfied with β = 0 by taking A_m ≡ ∆_d. This recovers a robust optimization method, i.e., min_{x∈X} max_{i∈[d]} ℓ_i(x). Although it satisfies (3), this robust approach tends to be overly conservative, as it neglects all available statistical data. The aim of distributionally robust optimization is to additionally ensure that V̂_m is a consistent estimator, i.e.,
lim_{m→∞} V̂_m = V(p⋆), a.s.  (4)
We will say that a class of ambiguity sets is admissible if the solution V̂_m of the resulting DRO problem (DRO) satisfies (3) and (4). Our objective is to develop a methodology for constructing admissible ambiguity sets that take into account the structure of (DRO) and, in doing so, provide tighter estimates of the cost while maintaining (3) with a given confidence level β.
III. COST-AWARE DRO
In this section, we describe the proposed DRO framework, which we will refer to as cost-aware DRO (CADRO). The overall method is summarized in Alg. 1.
A. Motivation
We start by providing some intuitive motivation. Consider the problem (DRO). In order to provide a guarantee of the form (3), it obviously suffices to design A_m such that
P[p⋆ ∈ A_m] ≥ 1 − β.  (5)
However, this condition alone still leaves a considerable amount of freedom to the designer. A common approach is to select A_m to be a ball (expressed in some statistical metric/divergence) around an empirical estimate p̂ of the distribution. Depending on the choice of metric/divergence (e.g., total variation [18], Kullback-Leibler [6], Wasserstein [9], . . . ), several possible variants may be obtained. Using concentration inequalities, one can then select the radius of this ball such that (5) is satisfied. A drawback of this approach, however, is that the construction of A_m is decoupled from the original problem (1). Indeed, given that A_m takes the form of a ball, (5) essentially requires the deviation of p̂ from p⋆ to be small along every direction. If one could instead enlarge the ambiguity set without increasing the worst-case cost, then (5) could be guaranteed for smaller values of β without introducing additional conservatism. This idea is illustrated in Fig. 1.
Conversely, for a fixed confidence level β, one could thus construct a smaller upper bound V̂_m by restricting the choice of p only in a judiciously selected direction. Particularly, we may set A_m = {p ∈ ∆_d | ⟨L(x̄), p⟩ ≤ α_m} for some candidate solution x̄ ∈ X, where α_m is the smallest (potentially data-dependent) quantity satisfying (5). This directly yields an upper bound on the estimate V̂_m. Namely, for x⋆ ∈ X⋆(p⋆), we have with probability 1 − β,
V(x⋆, p⋆) ≤(a) V(x̂_m, p⋆) ≤ max_{p∈A_m} V(x̂_m, p) = V̂_m = min_{x∈X} max_{p∈A_m} V(x, p) ≤(b) max_{p∈A_m} V(x̄, p) = α_m.
Here, inequalities (a) and (b) become equalities when x̂_m = x⋆ = x̄. Thus, a reasonable aim would be to select x̄ to be a good approximation of x⋆. We will return to the matter of selecting x̄ in §III-C. First, however, we will assume x̄ to be given and focus on establishing the coverage condition (5).
B. Ambiguity set parameterization and coverage
Motivated by the previous discussion, we propose a family of ambiguity sets parameterized as follows. Let v ∈ IR^d be a fixed vector (we will discuss the choice of v in §III-C). Given a sample Ξ̂ = {ξ_1, . . . , ξ_m} of size |Ξ̂| = m drawn i.i.d. from p⋆, we consider ambiguity sets of the form
A_Ξ̂(v) := {p ∈ ∆_d | ⟨p, v⟩ ≤ α_Ξ̂(v)},  (6)
where α : Ξ^m × IR^d : (Ξ̂, v) ↦ α_Ξ̂(v) ∈ IR is a data-driven estimator for ⟨p⋆, v⟩, selected to satisfy the following assumption, which implies that (5) holds for A_m = A_Ξ̂(v).
Assumption III.1. P[⟨p⋆, v⟩ ≤ α_Ξ̂(v)] ≥ 1 − β, ∀v ∈ IR^d.
Note that the task of selecting α to satisfy Assumption III.1 is equivalent to finding a high-confidence upper bound on the mean of the scalar random variable ⟨v, e_ξ⟩, ξ ∼ p⋆. It is straightforward to derive such bounds by bounding the deviation of a random variable from its empirical mean using classical concentration inequalities like Hoeffding's inequality.
Proposition III.2 (Hoeffding bound). Fix v ∈ IR^d and let Ξ̂ with |Ξ̂| = m be an i.i.d. sample from p⋆ ∈ ∆_d, with empirical distribution p̂_Ξ̂ = (1/m) Σ_{ξ∈Ξ̂} e_ξ. Consider the bound
α_Ξ̂(v) = ⟨v, p̂_Ξ̂⟩ + r_m rg(v).  (7)
This bound satisfies Assumption III.1 if r_m satisfies
r_m = min{1, √(log(1/β)/(2m))}.  (8)
Proof. Define y_k := ⟨p⋆ − e_{ξ_k}, v⟩, so that (1/m) Σ_{k=1}^m y_k = ⟨p⋆ − p̂_Ξ̂, v⟩. Since v is fixed, the y_k, k ∈ [m], are i.i.d., and we have IE[y_k] = 0 and (by Lemma A.1) |y_k| ≤ rg(v), ∀k ∈ IN.
This establishes the (vacuous) case r_m = 1 in (8). For the nontrivial case, we apply Hoeffding's inequality [19, eq. 2.11]:
P[(1/m) Σ_{k=1}^m y_k > t] ≤ exp(−2mt² / rg(v)²).  (9)
Setting t = r m rg(v), equating the right-hand side of (9) to the desired confidence level β, and solving for r m yields the desired result.
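Proposition III.2 translates directly into code. Below is a sketch with illustrative data; the helper name `hoeffding_alpha` is ours, not from the paper:

```python
import numpy as np

def hoeffding_alpha(v, sample, beta):
    """Mean bound (7): empirical mean of <v, e_xi> plus r_m * rg(v),
    with r_m = min(1, sqrt(log(1/beta) / (2 m))) as in (8)."""
    m = len(sample)
    emp_mean = np.mean(v[sample])   # <v, p_hat>
    rg = v.max() - v.min()          # rg(v) = v_max - v_min
    r_m = min(1.0, np.sqrt(np.log(1.0 / beta) / (2 * m)))
    return emp_mean + r_m * rg

rng = np.random.default_rng(1)
v = np.array([0.3, 1.2, 0.8, 2.0])          # fixed direction, d = 4
p_true = np.array([0.4, 0.3, 0.2, 0.1])
sample = rng.choice(4, size=200, p=p_true)  # i.i.d. scenarios

alpha = hoeffding_alpha(v, sample, beta=0.05)
# By construction, the bound can never fall below the empirical mean.
assert alpha >= np.mean(v[sample])
```

Note the offset `r_m * rg(v)` depends only on the sample size, which is exactly the conservatism discussed next.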
Although attractive for its simplicity, this type of bound has the drawback that it applies a constant offset (depending only on the sample size, not the data) to the empirical mean, which may be conservative, especially for small samples. Considerably sharper bounds can be obtained through a more direct approach. In particular, we will focus our attention on the following result due to Anderson [20], which is a special case of the framework presented in [21]. We provide an experimental comparison between the bounds in Appendix B.
Proposition III.3 (Ordered mean bound [21]). Let η_k := ⟨v, e_{ξ_k}⟩, k ∈ [m], so that IE[η_k] = ⟨v, p⋆⟩. Let η_(1) ≤ η_(2) ≤ · · · ≤ η_(m) ≤ η̄ denote the sorted sequence, with ties broken arbitrarily, where η̄ := max_{i∈[d]} v_i. Then, there exists a γ ∈ (0, 1) such that Assumption III.1 holds for
α_Ξ̂(v) = (κ/m − γ) η_(κ) + Σ_{i=κ+1}^{m} η_(i)/m + γ η̄,  κ = ⌈mγ⌉.  (10)
Furthermore, it can be shown that the result holds for
γ = √(log(1/β)/(2m)),  (11)
for sufficiently large m.
This asymptotic expression will be useful when establishing theoretical guarantees in Section IV.
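The bound (10), with the asymptotic choice (11) of γ, can be sketched as follows (for finite m the smallest valid γ would instead be obtained by root-finding, which we omit; the helper name is ours):

```python
import numpy as np

def ordered_mean_alpha(v, sample, beta):
    """Ordered mean bound (10): sort eta_k = <v, e_xi_k>, reweight the
    kappa-th order statistic, and place mass gamma at eta_bar = max_i v_i."""
    m = len(sample)
    gamma = np.sqrt(np.log(1.0 / beta) / (2 * m))   # asymptotic choice (11)
    eta = np.sort(v[sample])                        # eta_(1) <= ... <= eta_(m)
    eta_bar = v.max()
    kappa = int(np.ceil(m * gamma))
    return ((kappa / m - gamma) * eta[kappa - 1]    # (kappa/m - gamma) eta_(kappa)
            + eta[kappa:].sum() / m                 # sum_{i > kappa} eta_(i) / m
            + gamma * eta_bar)                      # gamma * eta_bar

rng = np.random.default_rng(2)
v = np.array([0.3, 1.2, 0.8, 2.0])
sample = rng.choice(4, size=200, p=[0.4, 0.3, 0.2, 0.1])

alpha = ordered_mean_alpha(v, sample, beta=0.05)
# As in the proof of Theorem IV.3, alpha dominates the empirical mean.
assert alpha >= np.mean(v[sample])
```

In contrast with the Hoeffding bound, the offset here adapts to the sorted data, which is what yields the sharper estimates reported in Appendix B.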
C. Selection of v
The proposed ambiguity set (6) depends on a vector v. As discussed in §III-A, we would ideally take v = L(x⋆) with x⋆ ∈ X⋆(p⋆). However, since this ideal is obviously out of reach, we instead look for suitable approximations. In particular, we propose to use the available dataset Ξ̂ in part to select v to approximate L(x⋆), and in part to calibrate the mean bound α.
To this end, we will partition the available dataset Ξ̂ into a training set and a calibration set. Let τ : IN → IN be a user-specified function determining the size of the training set, which satisfies
τ(m) ≤ cm for some c ∈ (0, 1); and  (12a)
τ(m) → ∞ as m → ∞.  (12b)
Correspondingly, let {Ξ̂_T, Ξ̂_C} be a partition of Ξ̂, i.e., Ξ̂_T ∩ Ξ̂_C = ∅ and Ξ̂_T ∪ Ξ̂_C = Ξ̂. Given that |Ξ̂| = m, we ensure that |Ξ̂_T| = τ(m) and thus |Ξ̂_C| = m′ := m − τ(m). Note that by construction, m′ ≥ (1 − c)m, with c ∈ (0, 1), and thus both |Ξ̂_T| → ∞ and |Ξ̂_C| → ∞ as m → ∞. Due to the statistical independence of the elements in Ξ̂, it is inconsequential how exactly the individual data points are divided into Ξ̂_T and Ξ̂_C. Therefore, without loss of generality, we may take
Ξ̂_T = {ξ_1, . . . , ξ_τ(m)} and Ξ̂_C = {ξ_τ(m)+1, . . . , ξ_m}.
With an independent dataset Ξ̂_T at our disposal, we may use it to design a mapping v_τ(m) : Ξ^τ(m) → IR^d, whose output will be a data-driven estimate of L(x⋆). For ease of notation, we will omit the explicit dependence on the data, i.e., we write v_τ(m) instead of v_τ(m)(Ξ̂_T). We propose the following construction. Let p̂_τ(m) = (1/τ(m)) Σ_{k=1}^{τ(m)} e_{ξ_k} denote the empirical distribution of Ξ̂_T and set
v_τ(m) = L(x_τ(m)), with x_τ(m) ∈ argmin_{x∈X} V(x, p̂_τ(m)).  (13)
Remark III.4. We underline that although (13) is a natural choice, several alternatives for the training vector could in principle be considered. To guide this choice, Lemma IV.2 provides sufficient conditions on the combination of α and v_τ(m) to ensure consistency of the method.
Given v_τ(m) as in (13), we will from hereon use the following shorthand notation whenever convenient:
A_m := A_Ξ̂C(v_τ(m)), α_m := α_Ξ̂C(v_τ(m)),  (14)
with A_Ξ̂C(v_τ(m)) as in (6). We correspondingly obtain the cost estimate V̂_m according to (DRO).
D. Selection of τ
Given the conditions in (12), there is still some flexibility in the choice of τ(m), which defines a trade-off between the quality of v_τ(m) as an approximator of L(x⋆) and the size of the ambiguity set A_m.
An obvious choice is to reserve a fixed fraction of the available data for the training set, i.e., to set τ(m)/m equal to some constant. However, for low sample counts m, the mean bound α_m will typically be large, and thus A_m will not be substantially smaller than the unit simplex ∆_d, regardless of v_τ(m). As a result, the obtained solution will also be rather insensitive to v_τ(m). In this regime, it is therefore preferable to reduce the conservativeness of α_m quickly by using small values of τ(m)/m (i.e., large values of m′ = m − τ(m)).
Conversely, for large sample sizes, α_m is typically a good approximation of ⟨p⋆, v_τ(m)⟩ and the solution to (DRO) will be more strongly biased to align with v_τ(m). Thus, the marginal benefit of improving the quality of v_τ(m) takes priority over reducing α_m, and large fractions τ(m)/m become preferable. Based on this reasoning, we propose the heuristic
τ(m) = µν m(m+1)/(µm + ν),  µ, ν ∈ (0, 1).  (15)
Note that µ and ν are the limits of τ(m)/m as m → 0 and m → ∞, respectively. Eq. (15) then interpolates between these extremes, depending on the total amount of data available. We have found µ = 0.01, ν = 0.8 to be suitable choices for several test problems.
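The heuristic (15) is straightforward to implement. In the sketch below, rounding to an integer and clipping so that both parts of the partition stay nonempty are our own additions, as the text does not specify them:

```python
def tau(m, mu=0.01, nu=0.8):
    """Training-set size heuristic (15): tau(m)/m interpolates
    between mu (small m) and nu (large m)."""
    t = mu * nu * m * (m + 1) / (mu * m + nu)
    return min(max(int(round(t)), 1), m - 1)  # keep both sets nonempty

# tau(m)/m tends to nu = 0.8 for large m ...
assert abs(tau(10**6) / 10**6 - 0.8) < 0.01
# ... while for small m most samples go to the calibration set.
assert tau(20) < 10
```

For example, with the suggested µ = 0.01 and ν = 0.8, only 3 out of m = 20 samples are used for training, while for m = 10^6 roughly 80% are.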
E. Tractable reformulation
The proposed ambiguity set takes the form of a polytope, and thus standard reformulations based on conic ambiguity sets apply directly [23]. Nevertheless, as we will now show, a tractable reformulation of (DRO) specialized to the ambiguity set (6) may be obtained, which requires fewer auxiliary variables and constraints.
Proposition III.5 (Tractable reformulation of (DRO)). Fix parameters p̂ ∈ ∆_d, v ∈ IR^d, and α ∈ IR, and let A = {p ∈ ∆_d | ⟨p, v⟩ ≤ α} be an ambiguity set of the form (6). Denoting V_A := min_{x∈X} max_{p∈A} V(x, p), we have
V_A = min_{x∈X, λ≥0} λα + max_{i∈[d]} {ℓ_i(x) − λv_i}.  (16)
Proof. Let g(z) := max_{p∈∆_d} {⟨p, z⟩ | ⟨p, v⟩ ≤ α}, where z ∈ IR^d and α are constants with respect to p. By strong duality of linear programming [24],
g(z) = min_{λ≥0} max_{p∈∆_d} ⟨p, z⟩ − λ(⟨p, v⟩ − α) = min_{λ≥0} λα + max_{p∈∆_d} ⟨p, z − λv⟩.
Noting that max_{p∈∆_d} ⟨p, y⟩ = max_{i∈[d]} y_i, ∀y ∈ IR^d, and that V_A = min_{x∈X} g(L(x)), we obtain (16).
If the functions {ℓ_i}_{i∈[d]} are convex, then (16) is a convex optimization problem, which can be solved efficiently using off-the-shelf solvers. In particular, if they are convex, piecewise affine functions, then it reduces to a linear program (LP). For instance, introducing a scalar epigraph variable, one may further rewrite (16) as
min_{x∈X, λ≥0, z∈IR} {λα + z | L(x) − λv ≤ z1},  (17)
which avoids the non-smoothness of the pointwise maximum in (16) at the cost of a scalar auxiliary variable. Even for general (possibly nonconvex) choices of ℓ_i, (16) is a standard nonlinear program, which can be handled by existing solvers. We conclude the section by summarizing the described steps in Alg. 1.
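For affine costs ℓ_i(x) = ⟨a_i, x⟩ + b_i on a box X, the epigraph form (17) is a small LP. A sketch using SciPy, with arbitrary illustrative data of our own choosing:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (not from the paper): affine costs
# ell_i(x) = <a[i], x> + b[i] on the box X = [0, 1]^2.
a = np.array([[1.0, -1.0], [-1.0, 1.0], [0.5, 0.5]])
b = np.array([0.2, 0.1, 0.0])
v = np.array([1.0, 0.8, 1.5])   # direction L(x_bar)
alpha = 1.0                      # mean bound

n, d = 2, 3
# Decision vector: [x_1, x_2, lambda, z]; minimize lambda * alpha + z.
c = np.concatenate([np.zeros(n), [alpha, 1.0]])
# Constraints of (17): <a_i, x> - lambda * v_i - z <= -b_i, i in [d].
A_ub = np.hstack([a, -v[:, None], -np.ones((d, 1))])
b_ub = -b
bounds = [(0, 1)] * n + [(0, None), (None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
assert res.success
x_opt, lam, z = res.x[:n], res.x[n], res.x[n + 1]
# At the optimum, z is tight: the objective matches form (16).
assert np.isclose(res.fun, lam * alpha + (a @ x_opt + b - lam * v).max(),
                  atol=1e-6)
```

Only one auxiliary variable λ (plus the epigraph scalar z) is needed, regardless of d, which is the economy Proposition III.5 refers to.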
Algorithm 1 CADRO
IV. THEORETICAL PROPERTIES
We will now show that the proposed scheme possesses the required theoretical properties, namely to provide (i) an upper bound to the out-of-sample cost, with high probability (cf. (3)); and (ii) a consistent estimate of the true optimal cost (cf. (4)). Let us start with the first guarantee, which follows almost directly by construction.
Theorem IV.1 (Out-of-sample guarantee). Fix m > 0, and let V̂_m, x̂_m be generated by Alg. 1. Then,
P[V(x̂_m, p⋆) ≤ V̂_m] ≥ 1 − β.  (18)
Proof. If p⋆ ∈ A_m, then
V̄_m(x) := max_{p∈A_m} V(x, p) ≥ V(x, p⋆), ∀x ∈ X.  (19)
Since x̂_m ∈ argmin_{x∈X} V̄_m(x), (19) implies that V(x̂_m, p⋆) ≤ V̄_m(x̂_m) = V̂_m, where the last equality holds by definition (DRO). Consequently, p⋆ ∈ A_m ⟹ V(x̂_m, p⋆) ≤ V̂_m, and thus P[V(x̂_m, p⋆) ≤ V̂_m] ≥ P[p⋆ ∈ A_m].
Since v_τ(m) is constructed independently from Ξ̂_C, Assumption III.1 ensures that (5) holds with respect to A_m = A_Ξ̂C(v_τ(m)), establishing the claim. We now turn our attention to the matter of consistency. That is, we will show that under suitable conditions on the mean bound α and the training vector v in (6), V̂_m converges almost surely to the true optimal value as the sample size m grows to infinity. We will then conclude the section by demonstrating that for the choices proposed in §III-B and III-C, the aforementioned conditions hold.
If v_τ(m) = L(x_τ(m)), with x_τ(m) as in (13), and α_m = α_Ξ̂C(v_τ(m)) is chosen to ensure (i) ⟨p̂_m′, v_τ(m)⟩ ≤ α_Ξ̂C(v_τ(m)), a.s.; (ii) lim sup_{m→∞} α_Ξ̂C(v_τ(m)) ≤ V(p⋆), then for all x ∈ X, ⟨p̂_m′, L(x)⟩ ≤ V̄_m(x) ≤ α_m + ‖ε_m(x)‖_∞.
Minimizing with respect to x yields that for all m,
V̂_m^SAA ≤ V̂_m ≤ α_m,  (20)
where V̂_m^SAA := V(p̂_m′) (cf. (2)). By the law of large numbers, p̂_m′ → p⋆, a.s. Furthermore, under Assumption II.1, Lemma A.5 states that the optimal value mapping V(p) is continuous, which implies that also V̂_m^SAA → V(p⋆), a.s. The claim then follows directly from condition (ii).
Informally, Lemma IV.2 requires that the mean bound is bounded from below by the empirical mean, and from above by a consistent estimator of the optimal cost. The latter excludes choices such as the robust minimizer x_τ(m) ∈ argmin_{x∈X} max_{i∈[d]} ℓ_i(x) in the construction of v_τ(m). However, besides (13), one could consider alternatives, such as a separate DRO scheme to select v_τ(m). A more extensive study of such alternatives, however, is left for future work. We now conclude the section by showing that the construction (13) satisfies the requirements of Lemma IV.2, so that V̂_m → V(p⋆), a.s.
Proof. It suffices to show that conditions (i) and (ii) of Lemma IV.2 are satisfied by α_Ξ̂C(v_τ(m)).
Condition (i): Consider α_Ξ̂C(v) as in (10) for an arbitrary v ∈ IR^d, and let (η_(i))_{i∈[m′]} denote (⟨v, e_ξ⟩)_{ξ∈Ξ̂_C}, sorted in increasing order. Then, we may write
⟨p̂_m′, v⟩ = (1/m′) Σ_{i=1}^{m′} η_(i),  (21)
and thus,
α_Ξ̂C(v) − ⟨p̂_m′, v⟩ = (κ/m′ − γ) η_(κ) − (1/m′) Σ_{i=1}^{κ} η_(i) + γ η̄ ≥(a) (κ/m′ − γ) η_(κ) − (κ/m′) η_(κ) + γ η̄ = γ(η̄ − η_(κ)) ≥ 0, ∀v ∈ IR^d,
where (a) follows from the fact that the η_(i) are sorted, and the final inequality from γ ≥ 0 and η̄ ≥ η_(κ).
Condition (ii): By Lemma A.4, there exists a constant v̄ ≥ ‖v_τ(m)‖_∞, ∀m > 0, a.s. Therefore, using (10) and (21),
α_m − ⟨p̂_m′, v_τ(m)⟩ ≤ (κ/m′ − γ) v̄ + (κ/m′) v̄ + γ v̄ = 2v̄ (κ/m′) ≤(b) 2v̄ (γ + 1/m′),  (22)
for all m > 0, where (b) follows from κ = ⌈m′γ⌉ ≤ m′γ + 1. By construction (see (12) and below), we have that both τ(m) → ∞ and m′ → ∞. Thus, using (11), γ + 1/m′ = √(log(1/β)/(2m′)) + 1/m′ → 0. Combined with (22), this yields that
lim sup_{m→∞} (α_Ξ̂C(v_τ(m)) − ⟨v_τ(m), p̂_m′⟩) ≤ 0.  (23)
Finally, by the law of large numbers, p̂_m′ → p⋆ and p̂_τ(m) → p⋆, a.s. Thus (under Assumption II.1), Corollary A.6 ensures that lim_{m→∞} ⟨p̂_m′, L(x_τ(m))⟩ = V(p⋆), which, combined with (23), yields the required result.
Theorem IV.4 (Consistency - Hoeffding bound). Let V̂_m be generated by Alg. 1, for m > 0. If α_m = α_Ξ̂C(v_τ(m)) is selected according to Proposition III.2, with v_τ(m) as in (13), then V̂_m → V(p⋆), a.s.
Proof. We show that condition (i) and condition (ii) of Lemma IV.2 are satisfied.
Condition (i): Trivial, noting that r_m > 0 by (8).
Condition (ii): By the law of large numbers, we have that p̂_m′ → p⋆, a.s., and thus, by Corollary A.6, ⟨p̂_m′, v_τ(m)⟩ → V(p⋆). Furthermore, by Lemma A.4, there exists a constant v̄ such that rg(v_τ(m)) ≤ 2v̄ for all m ∈ IN. Thus, for r_m given by (8), we have
lim sup_{m→∞} α_Ξ̂C(v_τ(m)) ≤ lim sup_{m→∞} ⟨p̂_m′, v_τ(m)⟩ + 2v̄ r_m = V(p⋆).
This concludes the proof.
V. ILLUSTRATIVE EXAMPLE
As an illustrative example, we consider the following facility location problem, adapted from [25, Sec. 8.7.3]. Consider a bicycle sharing service setting out to determine locations x^(i) ∈ X_i ⊆ IR², i ∈ [n_x], at which to build stalls where bikes can be taken out or returned. We will assume that the X_i are given (polyhedral) sets, representing areas within the city suitable for constructing a new bike stall. Let z^(k) ∈ IR², k ∈ [d], be given points of interest (public buildings, tourist attractions, parks, etc.). Suppose that a person located in the vicinity of some point z^(k) decides to rent a bike. Depending on the availability at the locations x^(i), this person may be required to traverse a distance
ℓ_k(x) = max_{i∈[n_x]} ‖x^(i) − z^(k)‖_2, where x = (x^(i))_{i∈[n_x]}.
With this choice of cost, (16) can be cast as a second-order cone program. Thus, if the demand is distributed over (z^(k))_{k∈[d]} according to the probability mass vector p⋆ ∈ ∆_d, then the average cost to be minimized over X = X_1 × · · · × X_{n_x} is given by V(x, p⋆) as in (2). We will solve a randomly generated instance of the problem, illustrated in Fig. 2. As p⋆ is unknown, one has to collect data, e.g., by means of counting passersby at the locations z^(k). As this may be a costly operation, it is important that the acquired data is used efficiently. Furthermore, in order to ensure that the potentially large up-front investment is justified, we are required to provide a certificate stating that, with high confidence, the quality of the solution will be no worse than what is predicted. Thus, given our collected sample of size m, our aim is to compute estimates V̂_m satisfying (3).
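Evaluating the cost of this example is a short computation; the sketch below uses made-up coordinates (the max over stalls models the worst-case availability described above):

```python
import numpy as np

rng = np.random.default_rng(3)
n_x, d = 3, 50
x = rng.uniform(0, 10, size=(n_x, 2))    # candidate stall locations
z = rng.uniform(0, 10, size=(d, 2))      # points of interest
p = rng.dirichlet(np.ones(d))            # demand distribution over z

# ell_k(x) = max_i ||x_i - z_k||_2: worst-case distance for a person at z_k.
dist = np.linalg.norm(x[None, :, :] - z[:, None, :], axis=-1)  # (d, n_x)
L = dist.max(axis=1)                     # L(x) = (ell_1(x), ..., ell_d(x))
V = p @ L                                # expected cost V(x, p)

assert L.shape == (d,)
assert L.min() <= V <= L.max()           # V is a convex combination
```

With L(x) computed this way, any of the mean bounds from §III-B can be calibrated along the direction v = L(x) exactly as in Alg. 1.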
We compare the following data-driven methods.
CADRO: Solves (DRO) according to Alg. 1, setting τ(m) as in (15), with µ = 0.01, ν = 0.8.
SAA: Using the same data partition {Ξ̂_T, Ξ̂_C} as CADRO, we use Ξ̂_T to compute x̂_m = x_τ(m) as in (13), and we use Ξ̂_C to obtain a high-confidence upper bound V̂_m = α_Ξ̂C(L(x_τ(m))), utilizing Proposition III.3.
D-DRO: Solves (DRO), with an ambiguity set of the form A_m = {p ∈ ∆_d | D(p̂_m, p) ≤ r_m^D}, with D a statistical distance (e.g., the total variation distance), and the radius r_m^D chosen to ensure that (5) is satisfied [28].
Note that D-DRO does not require an independent data sample in order to satisfy (3).
Remark V.1. Other methods could be used to validate SAA (e.g., cross-validation [2], replications [7]), but these methods only guarantee the required confidence level asymptotically. In order to obtain a fair comparison, we instead use the same mean bound, namely (10), for both CADRO and SAA, so that both methods provide the same theoretical guarantees. Moreover, we note that a different data partition could be used for SAA. However, preliminary experiments have indicated that significantly increasing or decreasing τ(m) resulted in deteriorated bounds on the cost.
We set n_x = 3, d = 50, β = 0.01, and apply each method to 100 independently drawn datasets of size m. In Fig. 3, we plot the estimated costs V̂_m and the achieved out-of-sample cost V(x̂_m, p⋆) for increasing values of m. We observe that CADRO provides a sharper cost estimate V̂_m than the other approaches. In particular, the classical DRO formulations require relatively large amounts of data before obtaining a non-vacuous upper bound on the cost. The right-hand panel in Fig. 3 shows that, additionally, CADRO returns solutions which exhibit superior out-of-sample performance compared to the other approaches, illustrating that it does not rely on conservative solutions to obtain better upper bounds.
VI. CONCLUSION AND FUTURE WORK
We proposed a DRO formulation, named cost-aware DRO (CADRO), in which the ambiguity set is designed to only restrict errors in the distribution that are predicted to have significant effects on the worst-case expected cost. We proved out-of-sample performance bounds and consistency of the resulting DRO scheme, and demonstrated empirically that this approach may be used to robustify against poor distribution estimates at small sample sizes, while remaining considerably less conservative than existing DRO formulations. In future work, we aim to extend the work to continuous distributions.
Lemma A.1. |⟨p − e_i, v⟩| ≤ rg(v), for all i ∈ [d], v ∈ IR^d and p ∈ ∆_d.
Proof. For any i ∈ [d], v ∈ IR^d and p ∈ ∆_d,
|⟨p − e_i, v⟩| ≤ max{max_{i∈[d]} (⟨p, v⟩ − v_i), max_{i∈[d]} (v_i − ⟨p, v⟩)} = max{⟨p, v⟩ − v_min, v_max − ⟨p, v⟩} ≤(a) v_max − v_min = rg(v),
where (a) follows from the fact that max_{p∈∆_d} ⟨p, v⟩ = v_max and max_{p∈∆_d} (−⟨p, v⟩) = max_{i∈[d]} {−v_i} = −v_min.
Then, for all x ∈ X, we have V̄(x) ≤ α_Ξ̂(v) + ‖L(x) − v‖_∞, a.s.
Proof. Define ε(x) = L(x) − v for x ∈ X. We have
V̄(x) = max_{p∈A_Ξ̂(v)} (⟨p, v⟩ + ⟨p, ε(x)⟩) ≤(6) α_Ξ̂(v) + max_{p∈A_Ξ̂(v)} ⟨p, ε(x)⟩.
The claim directly follows because A_Ξ̂(v) ⊆ ∆_d and max_{p∈∆_d} ⟨p, z⟩ = max_i {z_i}, ∀z ∈ IR^d [29, Ex. 4.10].
Lemma A.3 (Uniform level-boundedness). If Assumption II.1(ii) holds, then V(x, p) = ⟨p, L(x)⟩ + δ_{X×∆_d}(x, p) is level-bounded in x locally uniformly in p.
Proof. Since V(x, p) is a convex combination of the ℓ_i(x), i ∈ [d], V(x, p) ≤ α implies that ∃i ∈ [d] : ℓ_i(x) ≤ α. Therefore, lev_{≤α} V(· , p) ⊆ ∪_{i∈[d]} lev_{≤α} ℓ̄_i =: U_α, for all p ∈ ∆_d. By Assumption II.1(ii), lev_{≤α} ℓ̄_i is bounded for all i ∈ [d]. Since the union of a finite number of bounded sets is bounded, U_α is bounded. Furthermore, for p ∉ ∆_d, V(x, p) = ∞, and thus lev_{≤α} V(· , p) = ∅ ⊆ U_α, ∀p ∉ ∆_d. Thus, lev_{≤α} V(· , p) ⊆ U_α for all p ∈ IR^d.
Lemma A.4 (Uniform boundedness of v_τ(m)). Let v_τ(m) be defined as in (13). Then, there exists a v̄ ∈ IR_+ such that ‖v_τ(m)‖_∞ ≤ v̄, ∀m ∈ IN, a.s.
Proof. By Assumption II.1, there exists
r̄ := min_{x∈X} max_{i∈[d]} ℓ_i(x) ≥ min_{x∈X} V(x, p), ∀p ∈ ∆_d,
so that, by (13),
x_τ(m) ∈ lev_{≤r̄} V(· , p̂_τ(m)), ∀m ∈ IN.
Since V(x, p) is level-bounded in x locally uniformly in p (cf. Lemma A.3), there exists a compact set C ⊆ IR^n satisfying
x_τ(m) ∈ lev_{≤r̄} V(· , p̂_τ(m)) ⊆ C, ∀m ∈ IN.  (24)
The statements of Lemma A.5 read: (i) the optimal value V(p) defined by (2) is continuous at p⋆ relative to ∆_d; (ii) for any p̂_m → p⋆, and for any x̂_m ∈ X⋆(p̂_m), {x̂_m}_{m∈IN} is bounded and all its cluster points lie in X⋆(p⋆).
Proof. If L is continuous, then V(x, p) can be written as the composition V ≡ g ∘ F of the lsc function g : IR^{2d} → IR : (y, z) ↦ ⟨y, z⟩ + δ_{∆_d}(z), and F : IR^{n+d} → IR^{2d} : (x, p) ↦ (L(x), p). By [17, Ex. 1.40(a)], this implies that V is lsc, and so is (x, p) ↦ V(x, p) + δ_X(x). Moreover, by Lemma A.3, it is level-bounded in x locally uniformly in p. Furthermore, p ↦ V(x, p) is continuous relative to ∆_d for every fixed x ∈ X.
On the other hand, since the sequence {x_τ(m) ∈ X}_m is bounded and L is continuous on X, ⟨p̂_m, L(x_τ(m))⟩ has at least one cluster point and lim sup_{m→∞} ⟨p̂_m, L(x_τ(m))⟩ < ∞. Assume then, for the sake of contradiction, that there exists a cluster point V̄ = lim sup_{m→∞} ⟨p̂_m, L(x_τ(m))⟩ > V(p⋆). Since p̂_m → p⋆, this implies, by continuity of L, that there must exist a limit point x ∉ X⋆(p⋆) of {x_τ(m)}_m, contradicting Lemma A.5. We conclude that lim sup_{m→∞} ⟨p̂_m, L(x_τ(m))⟩ ≤ V(p⋆).  (26)
Combining (25) and (26) completes the proof.
B. Comparison with the Hoeffding bound
We consider another instance of the example set-up from §V, and compare CADRO using the Hoeffding bound (Proposition III.2) and the ordered mean bound (Proposition III.3). Figure 4 shows the cost estimate V̂_m and the out-of-sample cost V(x̂_m, p⋆) for the TV-DRO method and the aforementioned versions of CADRO. We note that the radius of the ambiguity set for TV-DRO is computed using the Bretagnolle-Huber-Carol inequality [27, Prop. A.6.6] with slightly improved constants. As this result is based on the same Hoeffding-type inequality as Proposition III.2, the apparent performance gains of CADRO with the Hoeffding bound are to be attributed primarily to the geometry of the ambiguity set. However, unlike divergence-based ambiguity sets, which rely on concentration inequalities to bound deviations of the distribution from the empirical mean, (6) does not require the use of concentration inequalities. Rather, any high-confidence upper bound on the mean of a scalar random variable satisfying the conditions of Lemma IV.2 may be used, allowing the use of more sophisticated approaches (e.g., Proposition III.3). This results in the improvements visible in Fig. 4, without requiring alterations to the DRO method itself.
M. Schuurmans and P. Patrinos are with the Department of Electrical Engineering (ESAT-STADIUS), KU Leuven, Kasteelpark Arenberg 10, 3001 Leuven, Belgium. Email: {mathijs.schuurmans, panos.patrinos}@esat.kuleuven.be. This work was supported by the Research Foundation Flanders (FWO) research projects G081222N, G033822N, G0A0920N; European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 953348; Ford KU Leuven Research Alliance Project KUL0075.
Fig. 1. Conceptual motivation for the structure of the ambiguity set (6). The cost contour lines {p ∈ ∆_3 | ⟨L(x), p⟩ = α} corresponding to some x ∈ X are shown for increasing values of α (dark to light), together with the sets A_TV := ∆_3 ∩ IB_1(p̂, ε) and A := {p ∈ ∆_3 | ⟨L(x), p⟩ ≤ α}. Here, ε > 0 is determined to satisfy (5) and α = max_{p∈A_TV} ⟨L(x), p⟩. Since A_TV ⊂ A, A satisfies (5) with a higher confidence level 1 − β, but nevertheless, max_{p∈A} V(x, p) = max_{p∈A_TV} V(x, p).
Require: i.i.d. dataset Ξ̂ = {ξ_1, . . . , ξ_m}; τ(m) (cf. (12)); confidence parameter β ∈ (0, 1).
Ensure: (V̂_m, x̂_m) satisfy (3)-(4)  ▷ cf. §IV
Ξ̂_T ← {ξ_1, . . . , ξ_τ(m)}, Ξ̂_C ← {ξ_τ(m)+1, . . . , ξ_m}
v_τ(m) ← evaluate (13)
(V̂_m, x̂_m) ← solve (DRO) with A_m = A_Ξ̂C(v_τ(m))  ▷ use (16)
Lemma IV.2 (Consistency conditions). Let Ξ̂_T, Ξ̂_C be two independent samples from p⋆, with sizes |Ξ̂_T| = τ(m) and |Ξ̂_C| = m′ := m − τ(m). Let p̂_m′ := (1/m′) Σ_{ξ∈Ξ̂_C} e_ξ denote the empirical distribution of the calibration set Ξ̂_C.
Then V̂_m → V(p⋆), a.s., where V̂_m is given by (DRO).
Proof. Let V̄_m(x) := max_{p∈A_m} ⟨p, L(x)⟩. It is clear from condition (i) and (6) that p̂_m′ ∈ A_m. Let us furthermore define ε_m(x) = L(x) − L(x_τ(m)). Then, by Lemma A.2, we have for all
satisfy the requirements of Lemma IV.2.
Theorem IV.3 (Consistency - Ordered mean bound). Let V̂_m be generated by Alg. 1, for m > 0. If α_m = α_Ξ̂C(v_τ(m)) is selected according to Proposition III.3, with v_τ(m) as in
Fig. 2. Illustration of the facility location problem. The colors of the points z^(k) represent their probability p⋆_k.
Fig. 3. Results of the facility location problem of Section V. (left): The cost estimates V_m satisfying (3) and (4); (right): true out-of-sample cost V(x_m, p⋆). The points indicate the sample mean, the solid errorbars indicate the empirical 0.95 (upper and lower) quantiles, and the semi-transparent errorbars indicate the largest and smallest values over 100 independent runs.
Let e_i denote the i-th standard basis vector.
Lemma A.2 (Upper bound). Fix v ∈ IR^d and consider a sample Ξ. For an ambiguity set A_Ξ(v), given by (6) with mean bound α_Ξ(v), define V(x) := max_{p∈A_Ξ(v)} V(x, p).
Thus, [17, Thm. 1.17(b),(c)] applies, translating directly to statements (i) and (ii).

Corollary A.6. Let {x_τ(m)}_{m∈IN} be generated by (13) and let {p_m ∈ ∆_d}_{m∈IN} be some sequence with p_m → p⋆. Then, lim_{m→∞} ⟨p_m, L(x_τ(m))⟩ = V(p⋆).

Proof. By definition of V, we have ⟨p_m, L(x_τ(m))⟩ ≥ V(p_m), and by Lemma A.5, lim_{m→∞} V(p_m) = V(p⋆). Therefore, lim inf_{m→∞} ⟨p_m, L(x_τ(m))⟩ ≥ V(p⋆).
Fig. 4. Results for a problem instance as described in Section V. (left): The cost estimates V_m satisfying (3) and (4); (right): true out-of-sample cost V(x_m, p⋆). The points indicate the sample mean, the solid errorbars indicate the empirical 0.95 (upper and lower) quantiles, and the semi-transparent errorbars indicate the largest and smallest values over 100 independent runs.
¹ For finite m, the smallest value of γ ensuring that Proposition III.3 holds can be computed efficiently by solving a scalar root-finding problem [21, Rem. IV.3]. Furthermore, it can be shown that the result holds for [22, Thm. 11.6.2]
Hence, since L_i, i ∈ [d], are continuous, they attain their minima v_i and maxima v̄_i on X ∩ C. Using (24), combined with (13), we thus have ‖v_τ(m)‖_∞ ≤ max_{i∈[d]} {|v_i|, |v̄_i|} =: v̄ for all m ∈ IN, as required.

Lemma A.5 (Parametric stability). If Assumption II.1 is satisfied, then the following statements hold:
We use K_ij = ‖z^(i) − z^(j)‖_2, i, j ∈ [d], as the transportation cost.
² This is a slightly improved version of the classical Bretagnolle-Huber-Carol inequality [27, Prop. A.6.6].
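The transportation cost above is simply the matrix of pairwise Euclidean distances between the support points; a small NumPy sketch (illustrative only):

```python
import numpy as np

def transport_cost(z):
    """Matrix of pairwise Euclidean distances K[i, j] = ||z_i - z_j||_2
    between support points z_1, ..., z_d (illustrative sketch of the
    transportation cost used in the Wasserstein comparison)."""
    z = np.asarray(z, dtype=float)
    # Broadcasting: (d, 1, n) - (1, d, n) -> (d, d, n), then norm over axis -1.
    return np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
```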
[1] A. Shapiro, D. Dentcheva, and A. Ruszczynski, Lectures on Stochastic Programming: Modeling and Theory. MOS-SIAM Series on Optimization, Society for Industrial and Applied Mathematics, 3rd ed., July 2021.
[2] T. Hastie, R. Tibshirani, and J. H. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics, New York, NY: Springer, 2nd ed., 2009.
[3] A. Mesbah, "Stochastic Model Predictive Control: An Overview and Perspectives for Future Research," IEEE Control Systems Magazine, vol. 36, pp. 30-44, Dec. 2016.
[4] J. O. Royset and R. J.-B. Wets, An Optimization Primer. Springer Series in Operations Research and Financial Engineering, Cham, Switzerland: Springer, 2021.
[5] J. E. Smith and R. L. Winkler, "The Optimizer's Curse: Skepticism and Postdecision Surprise in Decision Analysis," Management Science, vol. 52, pp. 311-322, Mar. 2006.
[6] B. P. G. Van Parys, P. M. Esfahani, and D. Kuhn, "From Data to Decisions: Distributionally Robust Optimization Is Optimal," Management Science, vol. 67, pp. 3387-3402, June 2021.
[7] G. Bayraksan and D. P. Morton, "Assessing solution quality in stochastic programs," Mathematical Programming, vol. 108, pp. 495-514, Sept. 2006.
[8] E. Delage and Y. Ye, "Distributionally Robust Optimization Under Moment Uncertainty with Application to Data-Driven Problems," Operations Research, vol. 58, pp. 595-612, June 2010.
[9] P. Mohajerin Esfahani and D. Kuhn, "Data-driven distributionally robust optimization using the Wasserstein metric: Performance guarantees and tractable reformulations," Mathematical Programming, vol. 171, pp. 115-166, Sept. 2018.
[10] A. Hakobyan and I. Yang, "Distributionally Robust Risk Map for Learning-Based Motion Planning and Control: A Semidefinite Programming Approach," IEEE Transactions on Robotics, pp. 1-20, 2022.
[11] M. Schuurmans, A. Katriniok, C. Meissen, H. E. Tseng, and P. Patrinos, "Safe, learning-based MPC for highway driving under lane-change uncertainty: A distributionally robust approach," Artificial Intelligence, vol. 320, p. 103920, July 2023.
[12] M. Schuurmans and P. Patrinos, "A General Framework for Learning-Based Distributionally Robust MPC of Markov Jump Systems," IEEE Transactions on Automatic Control, pp. 1-16, 2023.
[13] G. Bayraksan and D. K. Love, "Data-Driven Stochastic Programming Using Phi-Divergences," in The Operations Research Revolution (D. Aleman, A. Thiele, J. C. Smith, and H. J. Greenberg, eds.), pp. 1-19, INFORMS, Sept. 2015.
[14] P. Coppens, M. Schuurmans, and P. Patrinos, "Data-driven distributionally robust LQR with multiplicative noise," in Learning for Dynamics and Control, pp. 521-530, PMLR, July 2020.
[15] H. Rahimian and S. Mehrotra, "Frameworks and Results in Distributionally Robust Optimization," Open Journal of Mathematical Optimization, vol. 3, pp. 1-85, 2022.
[16] F. Lin, X. Fang, and Z. Gao, "Distributionally Robust Optimization: A review on theory and applications," Numerical Algebra, Control & Optimization, vol. 12, no. 1, p. 159, 2022.
[17] R. T. Rockafellar and R. J. B. Wets, Variational Analysis, vol. 317 of Grundlehren Der Mathematischen Wissenschaften. Berlin, Heidelberg: Springer, 1998.
[18] H. Rahimian, G. Bayraksan, and T. Homem-de-Mello, "Identifying effective scenarios in distributionally robust stochastic programs with total variation distance," Mathematical Programming, vol. 173, pp. 393-430, Jan. 2019.
[19] M. Wainwright, High-Dimensional Statistics: A Non-Asymptotic Viewpoint. No. 48 in Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge; New York, NY: Cambridge University Press, 2019.
[20] T. Anderson, "Confidence limits for the expected value of an arbitrary bounded random variable with a continuous distribution function," Technical Report AD0696676, Stanford University CA Dept. of Statistics, Oct. 1969.
[21] P. Coppens and P. Patrinos, "Robustified Empirical Risk Minimization with Law-Invariant, Coherent Risk Measures," arXiv:2303.09196, Mar. 2023.
[22] S. S. Wilks, Mathematical Statistics. A Wiley Publication in Mathematical Statistics, New York: Wiley, 2nd printing, 1963.
[23] P. Sopasakis, M. Schuurmans, and P. Patrinos, "Risk-averse risk-constrained optimal control," in 2019 18th European Control Conference (ECC), pp. 375-380, June 2019.
[24] A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. Society for Industrial and Applied Mathematics, Jan. 2001.
[25] S. P. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, UK; New York: Cambridge University Press, 2004.
[26] T. Weissman, E. Ordentlich, G. Seroussi, S. Verdu, and M. J. Weinberger, "Inequalities for the L1 Deviation of the Empirical Distribution," tech. rep., Information Theory Research Group, HP Laboratories Palo Alto, Palo Alto, California, 2003.
[27] A. W. van der Vaart and J. A. Wellner, Weak Convergence and Empirical Processes: With Applications to Statistics. New York: Springer, 2000.
[28] A. L. Gibbs and F. E. Su, "On Choosing and Bounding Probability Metrics," International Statistical Review, vol. 70, no. 3, pp. 419-435, 2002.
[29] A. Beck, First-Order Methods in Optimization. MOS-SIAM Series on Optimization, Society for Industrial and Applied Mathematics, Oct. 2017.
Approximate Killing Fields as an Eigenvalue Problem
12 Aug 2008 (Dated: July 13, 2007)
Christopher Beetle
Department of Physics, Florida Atlantic University, Boca Raton, Florida 33431
Approximate Killing vector fields are expected to help define physically meaningful spins for nonsymmetric black holes in general relativity. However, it is not obvious how such fields should be defined geometrically. This paper relates a definition suggested recently by Cook and Whiting to an older proposal by Matzner, which seems to have been overlooked in the recent literature. It also describes how to calculate approximate Killing fields based on these proposals using an efficient scheme that could be of immediate practical use in numerical relativity.
Spacetime symmetries are essential for defining physically important conserved quantities such as energy and angular momentum in general relativity. For example, when a vacuum spacetime admits a rotational symmetry generated by a Killing vector field ϕ^a, then a Komar-type integral [1,2] over an arbitrary 2-sphere in spacetime can reproduce the physically well-defined angular momentum measured at infinity. When spacetime is axi-symmetric, but not vacuum, the difference between these integrals for a pair of different 2-spheres is precisely the ordinary angular momentum computed from the stress-energy of matter in the intervening space. If spacetime is not axisymmetric, however, such formulae become rather ambiguous. They depend not only on the 2-sphere S over which one integrates, but also on the vector field ϕ^a used to define the integrand.
These difficulties can be partially avoided under physically favorable conditions. For instance, angular momentum is well-defined at infinity in asymptotically flat spacetimes (see [3] for a recent review), or on an appropriate isolated horizon [4,5] modeling an isolated, nondynamical black hole in a spacetime that may describe interesting dynamics in other regions. Essentially, these techniques identify preferred 2-spheres S (infinity, horizon, etc.) over which to integrate, and thereby reduce the ambiguity in defining the angular momentum. The resulting quasi-local formulae have the general Brown-York [6] form
J[ϕ] := (1/8πG) ∮_S ϕ^a K_ab dS^b,   (1)
where S is a (perhaps preferred) 2-sphere, K_ab is the extrinsic curvature of a spatial slice Σ containing it, ϕ^a is a vector field on S, and dS^b is the area element of S within Σ. The basic problem remains: the vector field ϕ^a is arbitrary unless S has an intrinsic symmetry that can be used to select it. (Now, at least, that symmetry need not extend into the bulk of spacetime.) The horizons of black holes resulting from numerical simulations of astrophysical processes generally have no symmetry of any kind and therefore, seemingly, no preferred vector field ϕ^a. The problem is not that such surfaces have no reasonable definition for the angular momentum, but rather that they have infinitely many.
There is one for every vector field ϕ^a tangent to the horizon. What is needed is a technique to pick a preferred vector field, and the obvious thing to do is to seek a ϕ^a that, in some sense, is as close as possible to a Killing field, even if none is present. This leads intuitively to the idea of an approximate Killing field.
Motivated by the general issues discussed above, several groups have recently proposed elegant definitions of approximate Killing fields on 2-spheres. These include schemes based on Killing transport [7], conformal Killing vectors [8], and most recently a minimization scheme by Cook and Whiting [9]. This paper revives an older approach [10] due to Matzner based on solving an eigenvalue problem. It also suggests a novel adaptation of Matzner's approach to the specific problem of computing a preferred angular momentum for black holes in numerical relativity, and elucidates the intimate relationship between this scheme and that of Cook and Whiting.
Let us begin with a brief review of Matzner's definition [10] of an approximate Killing field on a compact manifold M of dimension n equipped with a Riemannian metric g_ab. A continuous symmetry of g_ab is generated by a Killing vector field ξ^a satisfying

L_ξ g_ab = 2 ∇_(a ξ_b) = 0   (2)

throughout M, where L_ξ denotes the Lie derivative along ξ^a and ∇_a is the unique torsion-free connection on M determined by g_ab. Taking a divergence, we see that any geometry with a continuous symmetry admits at least one non-trivial solution to the equation
−2 ∇^b ∇_(b ξ_a) = 0.   (3)
In principle, even when the geometry is not symmetric, we are still free to seek solutions of this equation. But generically we will not find any. Consider the eigenvalue problem
∆_K ξ_a := −2 ∇^b ∇_(b ξ_a) = κ ξ_a   (4)
on a generic geometry. The operator ∆_K appearing here arises naturally in the transverse decomposition of symmetric tensor fields on Riemannian manifolds [12]. A related operator, denoted ∆_L, arises in the same way from the conformal Killing equation, and plays a similar role in the transverse-traceless decomposition of such fields. Its application to the initial-data problem in general relativity is very well known indeed [13]. Eq. (4) of course admits solutions ξ^a only for a certain spectrum of eigenvalues κ, and zero may or may not be among these. Matzner [10] establishes the following four results: the spectrum of eigenvalues κ of Eq. (4) on a compact manifold is (a) discrete, (b) non-negative, (c) corresponds to a complete set of real vector eigenfields ξ^a, and (d) contains κ = 0 if and only if the corresponding eigenfield ξ^a is a genuine Killing field. That is, the zero eigenspace of Eq. (4) is precisely the finite-dimensional vector space of Killing fields of the metric g_ab on M. Therefore, Eq. (3) admits no solution if the metric g_ab on M has no continuous symmetries, as claimed above. However, we assert that the best approximation to a Killing field on a manifold with no actual symmetry is the unique vector eigenfield ξ^a of Eq. (4) with the minimum eigenvalue κ > 0. This is Matzner's definition of an approximate Killing field, and it has several desirable features. It exists generically, reduces to the correct answer when symmetries do exist, and, like a true Killing field on a symmetric manifold, is naturally defined only up to an overall (i.e., constant over M) scaling.
Like any eigenvalue problem, Eq. (4) admits a variational formulation. Recall the natural L² inner product
⟨ζ, ξ⟩ := ∫_M ζ̄^a ξ_a ǫ   (5)
on the space of (complex) vector fields over M. Here, ǫ denotes the canonical n-form volume element induced on M by the metric g_ab. We minimize the quadratic form
Q_K[ξ; κ) := (1/2)⟨ξ, ∆_K ξ⟩ − (1/2)κ (⟨ξ, ξ⟩ − 1),   (6)
where κ is constant over M and here plays the role of a Lagrange multiplier. Minimizing this functional produces the Euler-Lagrange equations

∆_K ξ_a = κ ξ_a and ⟨ξ, ξ⟩ = 1,   (7)
the solutions of which are clearly the vector eigenfields of Eq. (4), normalized to unity in Hilbert space. Many variational problems are solved by initially solving the first, differential equation in Eq. (7) for ξ^a as a function of an arbitrary Lagrange multiplier κ, and then using that result in the second, algebraic equation to impose the constraint and determine κ. This does not happen for Eq. (7) because the differential equation is linear in ξ^a, and therefore leaves the overall scaling of ξ^a undetermined. The second equation serves only to fix that scaling, and cannot also determine κ. The Lagrange multiplier therefore must be fixed when we solve the first equation; for general κ, no solution exists. This is hardly surprising since of course only true eigenvalues κ allow us to solve Eq. (4) for ξ^a. However, it does make an approach to Matzner's eigenvalue problem via a variational principle like Eq. (6) rather complicated. There is no algebraic equation to determine the Lagrange multiplier. Indeed, κ is determined in this problem precisely by the condition that it be an eigenvalue of ∆_K, and there is no algebraic equation giving these. Minimizing Q_K[ξ; κ) in Eq. (6) by solving the associated Euler-Lagrange equations is neither easier nor harder than solving the eigenvalue problem in Eq. (4).
Cook and Whiting's recent definition [9] of an approximate Killing field uses a variational principle based on a quadratic form closely related to that of Eq. (6). However, it differs in two important details. First, it focuses on the case where M ≃ S is topologically a 2-sphere, and restricts ξ^a to be area-preserving:
L_ξ ǫ_ab = (∇_c ξ^c) ǫ_ab = 0.   (8)
The motivation for this restriction arises from the technical details of an eventual application to calculating the angular momentum of a non-symmetric black hole [4]. Second, it is based on a non-standard inner product
⟨ζ, ξ⟩_R := ⟨ζ, R ξ⟩ = ∫_S ζ̄^a ξ_a R ǫ.   (9)
These choices change the form, but not the basic content, of the resulting equations. They still describe a sort of self-adjoint eigenvalue problem.
To restrict to area-preserving vector fields, it is easiest simply to recall that any divergenceless vector field ξ̃^a on a 2-sphere topology is described by a unique scalar potential Θ such that

ξ̃^a = (*dΘ)^a := −ǫ^ab ∇_b Θ and ∫_S Θ ǫ = 0.   (10)

Now consider the restricted eigenvalue problem

∆̃_K ξ̃_a := P̂ ∆_K P̂ ξ̃_a = κ̃ ξ̃_a,   (11)

where P̂ denotes the orthogonal projection onto the subspace of area-preserving vector fields within the Hilbert space of Eq. (5). A given ξ̃^a = (*dΘ)^a solves this equation if and only if, for all other ζ̃^a = (*dΦ)^a, we have

⟨*dΦ, ∆_K *dΘ⟩ = κ̃ ⟨*dΦ, *dΘ⟩.   (12)
Integrating both sides by parts, and using positivity of the standard L² inner product of scalar functions over S, we find that Eq. (11) is equivalent to
∆∆_K Θ := ∆²Θ + ∇_a(R ∇^a Θ) = κ̃ ∆Θ,   (13)
where ∆ := −∇ a ∇ a denotes the standard scalar Laplacian. We have shown that the scalar functions Θ solving Eq. (13) generate, via Eq. (10), solutionsξ a of the restricted eigenvalue problem of Eq. (11). Because the projectionP does not typically commute with ∆ K , these ξ a do not generally solve Eq. (4), and the restricted eigenvaluesκ are generally distinct from the eigenvalues κ in the full Hilbert space. In fact, we should generally expect thatκ min > κ min . However, the area-preserving vector eigenfield corresponding to this minimum restricted eigenvalue can also be considered a best approximation to a Killing field, albeit within a restricted class.
To recover the Cook-Whiting approximate Killing field, we must repeat the previous calculation in the inner product of Eq. (9). The operator ∆_K is then no longer self-adjoint, but R⁻¹ times ∆_K is. Accordingly, we seek vector fields ξ̃^a_R = (*dΘ)^a satisfying

⟨*dΦ, R⁻¹ ∆_K *dΘ⟩_R = κ̃_R ⟨*dΦ, *dΘ⟩_R   (14)
for all ζ̃^a = (*dΦ)^a. Integrating by parts, and once again invoking positivity of the standard inner product of scalar functions, we find that Eq. (14) is equivalent to
∆∆_K Θ = −κ̃_R ∇_a(R ∇^a Θ).   (15)
Although the notation here differs slightly, this is precisely the Euler-Lagrange equation that Cook and Whiting find [9] by minimizing a quadratic form similar to Eq. (6). Once again, the solutions (ξ̃^a_R, κ̃_R) of this eigenvalue problem generally differ from the solutions (ξ^a, κ) of Eq. (4) and from the solutions (ξ̃^a, κ̃) of Eq. (11).
Let us now make two technical comments. First, any constant function Θ = c will give zero on both sides of Eqs. (11) and (15) for all values of κ̃ or κ̃_R, respectively. These are spurious solutions. They arise only because we have used potentials to describe the subspace of area-preserving vector fields. They are ruled out by the second condition in Eq. (10), which makes the correspondence between ξ̃^a and Θ an isomorphism.
Second, the Cook-Whiting inner product in Eq. (9) looks a little odd, but it is not immediately clear whether there is anything technically wrong with it. There certainly can be problems. Recall that the scalar curvature in two dimensions varies as
δ ²R = ∇^b ∇^[a δg_b]^a − (1/2) ²R δg^a_a   (16)
under a perturbation δg_ab of the metric. If this perturbation varies sufficiently rapidly over S, then the first term here can easily dominate the second, as well as the unperturbed, background value ²R. The result is that a generic spherical geometry, even if perturbatively close to a round sphere in the sense that δg_ab has small amplitude, can have regions of negative scalar curvature. (This is intuitively obvious if we imagine "pinching" the surface of a round sphere to create a small, saddle-shaped region of negative curvature.) On such geometries, the "inner product" of Eq. (9) is not positive-definite, and does not define a Hilbert space. However, in the space of all spherical geometries, there should be some finite region of "sufficiently smooth" perturbations of the round sphere for which the total scalar curvature remains everywhere positive. In this region, there is no obvious problem with the Cook-Whiting scheme, but nothing particular to recommend it either. The question could presumably be settled [11] by comparing qualitative features of the approximate Killing fields computed from Eqs. (11) and (15).

Matzner's eigenvalue definition of an approximate Killing field is unambiguous, universally applicable, and reproduces the usual Killing fields on a symmetric manifold. But it is not necessarily efficient in practice. Indeed, it would be prohibitively expensive to solve any of the eigenvalue problems in Eqs. (4), (11) or (15) on the apparent horizon at every moment of time of a black hole in a numerical simulation. Fortunately, however, there is a simple approximation to speed the process up on a generic geometry. This approximation is based on the Rayleigh-Ritz method [15], and works so long as we only want to find the lowest eigenvalue and the corresponding vector eigenfield.
Consider the Rayleigh-Ritz functional
F[ξ] := ⟨ξ, ∆_K ξ⟩ / ⟨ξ, ξ⟩ = [2 ∫_M ∇^(a ξ^b) · ∇_(a ξ_b) ǫ] / [∫_M ξ̄^a ξ_a ǫ]   (17)
on the full Hilbert space of Eq. (5), with the zero vector removed. The local extrema of Eq. (17) occur when ξ^a is a vector eigenfield of ∆_K, and the value of F[ξ] at each such extremum is the corresponding eigenvalue κ. Note that the numerator here, which arises via integration by parts of the second-order operator ∆_K in Eq. (4), is precisely one half the square integral of L_ξ g_ab from Eq. (2). Thus, among all vector fields with fixed L²-norm on S, diffeomorphisms along the approximate Killing field modify the metric least in a quantifiable, L² sense. It is still not practicable to find the genuine absolute minimum of Eq. (17) on the computer, which of course would yield Matzner's approximate Killing field. But one can approximate that minimum by minimizing F[ξ] within an appropriate space of trial vector fields. This idea is familiar from elementary quantum mechanics, where just such a variational principle is routinely used to approximate the ground-state wave-function of a complicated system. Unless the subspace of trial fields one chooses happens to be orthogonal, or nearly so, to the true minimum ξ^a_true of F[ξ] in all of Hilbert space, the dominant component of the minimizing trial field ξ^a_trial should lie along ξ^a_true in Hilbert space. Most randomly chosen trial spaces will not be orthogonal to ξ^a_true. This idea allows us to approximate Matzner's approximate Killing field.
There is a natural candidate for the trial space of vector fields in which to minimize Eq. (17) in the specific case M ≃ S of a 2-sphere horizon of a quiescent black hole in numerical relativity. One striking feature of many recent numerical simulations (e.g., [14]) is that the horizons at late times often look fairly regular in the fiducial spacetime coordinates used to do the evolution. Therefore, it is natural to try a space of trial fields ξ^a based simply on those coordinates. A specific proposal follows.
Use the fiducial spacetime coordinates in which the numerical evolution occurs to induce spherical coordinates (θ, φ) on the black-hole horizon in some more-or-less natural, but fundamentally ad hoc, way. Then, take the space of scalar trial potentials
Θ(θ, φ) := Σ_{l=1}^{l_max} Σ_{m=−l}^{l} Θ_lm Ŷ_lm(θ, φ),   (18)
where Ŷ_lm(θ, φ) are the ordinary scalar spherical harmonic functions on a round sphere, Θ_lm are arbitrary constants, and l_max is a chosen cut-off. Each of these potentials generates an area-preserving vector field via Eq. (10), and this will be our trial space [16] within the full Hilbert space of Eq. (5). Therefore, minimize
F[Θ] := ⟨*dΘ, ∆_K *dΘ⟩ / ⟨*dΘ, *dΘ⟩ = ⟨Θ, ∆∆_K Θ⟩ / ⟨Θ, ∆Θ⟩   (19)
= [∫_S (2 g^ac g^bd − g^ab g^cd) ∇_a ∇_b Θ · ∇_c ∇_d Θ ǫ] / [− ∫_S Θ · ∇_a ∇^a Θ · ǫ]
within the trial space of potentials given by Eq. (18). Generally, we should expect that the minimizing potential will generate a vector field ξ^a_trial fairly close to the minimum-eigenvalue area-preserving vector eigenfield ξ̃^a_true of Eq. (11). This, in turn, should approximate Matzner's approximate Killing field from Eq. (4). To check the approximation, one could imagine increasing l_max until ξ^a_trial doesn't vary much with the cut-off. Equivalently, one could use a fairly large cut-off (perhaps l_max = 5 would be enough) from the start, and check that the amplitudes Θ_lm are small for large l. If one prefers to approximate the Cook-Whiting approximate Killing field, one need only insert a factor of the scalar curvature R between the gradients in the denominator of Eq. (19).
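As a sanity check of this proposal (our own, under the assumption of a round unit sphere, where ²R = 2 and ∆Ŷ_lm = l(l+1)Ŷ_lm with the sign convention ∆ := −∇_a∇^a): both quadratic forms in Eq. (19) are diagonal in the harmonic basis, with ⟨Θ, ∆∆_K Θ⟩ = l(l+1)(l(l+1) − 2) and ⟨Θ, ∆Θ⟩ = l(l+1), so F[Ŷ_lm] = l(l+1) − 2. The minimum over l ≥ 1 is zero, attained at l = 1: exactly the potentials generating the three rotational Killing fields.

```python
def F_round_sphere(l: int) -> int:
    """Rayleigh quotient (19) evaluated on a single spherical harmonic
    Y_lm of the round unit sphere: numerator l(l+1)(l(l+1) - 2),
    denominator l(l+1).  (Consistency check only.)"""
    assert l >= 1
    lam = l * (l + 1)                      # eigenvalue of the scalar Laplacian
    return (lam * lam - 2 * lam) // lam    # simplifies to l(l+1) - 2
```

So the variational principle recovers the exact rotational Killing fields in the symmetric limit, as it must.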
There is one significant issue that has not been addressed in this discussion. Even once an approximate Killing field ξ^a has been found from the eigenvalue approach, it is still determined only up to overall normalization on S. For a proper rotational Killing field on a symmetric apparent horizon, the correct normalization would demand that the affine length of each Killing orbit should be 2π. It is not immediately clear what convention might be used in the general case, without symmetry, to fix a normalization that goes over to this correct one in the limit of a symmetric manifold. This issue will be discussed more thoroughly, in the context of practical applications, in a forthcoming paper [11].
Acknowledgements. The author would like to thank Ivan Booth, Manuela Campanelli, Greg Cook, Stephen Fairhurst, Greg Galloway, Carlos Lousto, Charles Torre, Bernard Whiting and Yosef Zlochower for stimulating discussions related to this question. This work has been supported by NSF grants PHY 0400588 and PHY 0555644, and by NASA grant ATP03-0001-0027.
The space of potentials in Eq. (18) is usually not orthogonal, in the sense of Eq. (10), to the space of constant functions on S. However, the key point is that standard properties of the ordinary spherical harmonics show that this space of trial potentials contains no actual constant functions. This is why we have taken lmin = 1 in Eq. (18).
Therefore, Eq. (10) maps our space of trial potentials faithfully to a space of trial vector fields with the same dimension, lmax (lmax + 2).
A Wheeler-DeWitt Equation with Time
November 4, 2022
Marcello Rotondo
The equation for canonical gravity produced by Wheeler and DeWitt in the late 1960s still presents difficulties, both in terms of its mathematical solution and its physical interpretation. One of these issues is, notoriously, the absence of an explicit time. In this short note, we suggest one simple and straightforward way to avoid this occurrence. We go back to the classical equation that inspired Wheeler and DeWitt (namely, the Hamilton-Jacobi-Einstein equation) and make explicit, before quantization, the presence of a known, classically meaningful notion of time. We do this by allowing Hamilton's principal function to depend explicitly and locally on this time. This choice results in a Wheeler-DeWitt equation with time. A working solution for the de Sitter minisuperspace is shown.

arXiv:2201.00809v4 [gr-qc] 3 Nov 2022
1 Introduction
One traditional avenue to the quantization of gravity is the geometrodynamical one, represented by the infamous Wheeler-DeWitt (WDW) equation [1,2]. The equation is expected to describe the quantum evolution of the spatial components of the metric tensor of General Relativity (GR), but its solution and interpretation are long-standing problems [3]. In particular, a problem with time occurs when we try to interpret the WDW equation as a Schrödinger-type equation for gravity, because the state it describes appears to be stationary.
To begin with, the absence of time from the WDW equation is a consequence of the fact that the first-class Hamiltonian constraint of GR, of which the WDW equation intends to be the quantization, enforces invariance under time diffeomorphisms. In other words, it ensures that the dynamical laws are valid independently of our choice of time coordinate. When we consider the so-called Hamilton-Jacobi-Einstein (HJE) equation developed by Peres [4], which expresses the constraint arising from the 00 component of the Einstein field equations in the Hamilton-Jacobi formalism, it is clear that time is absent where one would expect it to appear, even though the theory is classical. We know, however, that the HJE equation does not describe a timeless geometry. The reason the HJE equation is not problematic can be traced back to the fact that a notion of time exists for the evolution of the spatial geometry, as long as the classical notion of trajectory in superspace holds. In that sense, it appears that the actual problem with time is not that it is absent from the Schrödinger-type equation itself, but that we cannot introduce it as we do in the classical case, since space no longer evolves along classical trajectories. It is not clear what becomes of this time beyond the semi-classical level, when gravity does not act as a mere stage for matter fields, but rather partakes in the quantum dance.
The absence of an external time in the description of GR, which is inherited by its quantization, is sometimes referred to as the "frozen formalism problem", and constitutes only one among several difficulties in the definition of time in classical and quantum physics [5]. In the present work, we will address only this particular aspect of the problem, ignoring its relations to the others (a strictly related one being the definition of time-evolving observables for quantum gravity). Two classic reviews on this subject are Isham's [6] and Kuchař's [7]. We invite the reader to consult these reviews for detailed references, and to gain a general idea of the wide variety of approaches. The study of this aspect of the problem of time has certainly evolved significantly since the time of these reviews, with some issues of each approach being successfully addressed, but it ultimately remains open. Faced with the menace represented by the loss of a useful notion of time, three alternative reactions have been adopted by researchers: flight, fight, or freeze (corresponding to Isham's tempus ante quantum, tempus post quantum, and tempus nihil est).
1. Flight: Time is recognized as a fundamental element of our description of physical phenomena, and attempts are made to define it before quantization, as a functional of the canonical variables. This is a conservative approach that tries to obtain an "external" notion of time as that appearing in Schrödinger's equation.

2. Fight: Time is recognized as a fundamental element of our description of physical phenomena, but it is retrieved only after the quantization. This type of approach fights against the interpretative problems presented by the quantum theory to obtain a novel definition of time.

3. Freeze: Timelessness is accepted, time is forsaken as a fundamental notion for the description of quantum gravity, and attempts are made to provide a complete quantum theory otherwise.
The present work adopts a definition of time resulting from the semi-classical approximation of the WDW equation, an approach that, as such, falls within the second category above. However, we do not limit the definition of time identified by the semi-classical approximation to the semi-classical regime. The point of the present note is in fact to suggest that the "frozen formalism" could be avoided by retaining the use of a classical notion of time suggested by the semi-classical approximation of the theory, even though quantum space does not evolve along classical trajectories. Therefore, our proposal ultimately belongs to the first category, in that it carries over to the quantum regime a definition of time justified by the semi-classical approximation. The definition of time that we adopt as a starting point is that naturally resulting from the semi-classical approximation of canonical quantum gravity obtained by expanding the total wave functional in inverse squares of the Planck mass [8,9,10]. Classical GR and the Schrödinger equation for non-gravitational fields are straightforwardly recovered in this approximation. Time, for the matter fields, is a multi-fingered (i.e., space-dependent) functional parameter generated by the classical evolution of the background geometry along its trajectory in superspace. The procedure has formal analogies with the Wentzel-Kramers-Brillouin (WKB) approximation of quantum mechanics, and with the Born-Oppenheimer approximation from molecular physics [11,12,13]. In recent years, special attention was drawn to the problem of re-establishing the unitarity of time evolution in this approach (see, for example, [14,15,16,17]). For a recent work by this author which is related to the present one, see [18].
One key observation that motivated this approach is that this multi-fingered (or "WKB") functional time can be applied to the HJE equation itself: we can make the presence of that classical time explicit in the HJE equation, whose timeless form alone was the starting point of Wheeler and DeWitt, as the following memoir by one of the authors recalls.
One day in 1965, John Wheeler had a two-hour stopover between flights at the Raleigh-Durham airport in North Carolina. He called Bryce DeWitt, then at the University of North Carolina in Chapel Hill, proposing to meet at the airport during the wait. Bryce showed up with the Hamilton-Jacobi equation of general relativity, published by Asher Peres not long before [...] Bryce mumbled the idea of repeating what Schrödinger did for the hydrogen atom: obtaining a wave equation by replacing the square of derivatives with (i times) a second derivative, a manner of undoing the optical approximation. [...] Wheeler was tremendously excited (he was often enthusiastic) and declared on the spot that the equation for quantum gravity had been found. [19, 20]

What if DeWitt had presented Wheeler with the HJE equation together with the notion of multi-fingered time? The resulting equation does present the functional time variable just as time appears in the Schrödinger equation, and preserves the correct classical limit for gravity.
In Section 2, we briefly review the definition of this multi-fingered time. In Section 3, we rewrite the HJE equation, allowing an explicit local dependence of Hamilton's principal function on that time, and write the associated WDW equation with time. Finally, in Section 4, we discuss a simple realization in the de Sitter minisuperspace that is of special interest to quantum cosmology.
2 Classical Time Evolution from the Hamiltonian Constraint
For a straightforward introduction to the emergence of time in the semi-classical approximation of canonical quantum gravity, we refer the reader to the mentioned work by Isham [6], Section 5.4 and references therein. Here, we follow, with some variation, the notation adopted by Kiefer [9]. Consider a generic spacetime with line element
ds² = −N² dt² + N_i dt dx^i + h_ij dx^i dx^j .    (1)
Here, h_ij = g_ij is the spatial metric, with Latin indices i, j ∈ {1, 2, 3}, N_i = g_0i is the shift function, and N = (−g⁰⁰)^(−1/2) is the lapse function. In the Hamiltonian formalism of GR, time parametrization invariance is enforced by a first-class Hamiltonian constraint (i.e., a constraint imposed on the Hamiltonian only after the equations of motion are satisfied):
d 3 x (2M ) −1 G AB π A π B + V(h A ) + H φ (φ; h A ) = 0 .(2)
(This constraint can be obtained from the variation of the super-Hamiltonian of GR with respect to the lapse function N in the Arnowitt-Deser-Misner (ADM) formalism [21]. Here, we intend to recall only the elements strictly necessary to follow our discussion.)
In Equation (2), the capital indices A, B = {ij} represent pairs of Latin indices, and G_AB is the DeWitt metric

G_AB ≡ G_ijkl = (1/(2√h)) ( h_ik h_jl + h_il h_jk − h_ij h_kl ) ,    (3)
underlying superspace, i.e., the space of spatial metrics up to diffeomorphism invariance.
The physical scale of quantum gravity (we have set ℏ = c = 1) is set by the "geometrodynamical mass" M, which is proportional to the square of the Planck mass m_P:

M = (m_P/2)² ,  m_P = (8πG)^(−1/2) .    (4)
The geometrodynamical potential density V is

V = 2M √h ( 2Λ − ⁽³⁾R ) ,    (5)
where h and ⁽³⁾R are the determinant and the Ricci scalar of the spatial metric, respectively. The Hamiltonian density operator H_φ is taken to describe bosonic matter. The WDW equation results from an attempt to quantize the Hamiltonian constraint (2) straightforwardly, applying it to the "wave functional of the universe" Ψ[h_A, φ], thus obtaining

∫ d³x [ (2M)⁻¹ G_AB ∂_A ∂_B + V(h_A) + H_φ(h_A, φ) ] Ψ[h_A, φ] = 0 ,    (6)
where all variables are promoted to the respective operators. For the sake of simplicity, in this section, we have adopted the trivial ordering, and the symbols ∂ A are used to indicate functional derivatives with respect to the metric component indicated by the double index.
The evolution of bosonic quantum fields in classical curved spacetime is obtained by making the ansatz

Ψ[h_A, φ] = χ[h_A] ψ[φ; h_A]    (7)
for the total wave functional, and considering a WKB-like expansion in inverse powers of M [9,10]. In doing so, one aims at wave functionals χ and ψ that describe the "heavy" (i.e., the spatial metric components) and "light" degrees of freedom (i.e., matter), respectively. Notice that ψ depends on the geometry only parametrically, which is indicated by the use of the semicolon. The method consists of substituting the expansion
The method consists of substituting the expansion

Ψ[h_A, φ] = exp( i Σ_{n=0}^∞ M^(1−n) S_n[h_A, φ] )    (8)

in the WDW equation, and equating contributions at equal powers of M.
To order M², one obtains S₀ = S₀[h_A]: the leading contribution is purely geometrodynamical. To order M¹, one obtains the vacuum HJE equation

∫ d³x [ (2M)⁻¹ G_AB ∂_A S_G ∂_B S_G + V ] = 0 ,    (9)
S_G = M S₀ being the leading contribution to the phase of the wave functional (8). The HJE Equation (9) appears to be timeless due to the vanishing of the RHS. However, the time evolution of space can still be obtained from the Hamiltonian constraint (2) by expressing the canonical momenta (defined from the Lagrangian as π_A = ∂L/∂ḣ_A, and identified with π_A = ∂_A S_G in the Hamilton-Jacobi formalism) in terms of the geometrodynamical velocities. From the Hamiltonian equations of motion in the ADM formalism, these are given by

ḣ_A = N π_A + 2N_(A) ,    (10)

N_(A) being a shorthand for N_(i;j). The fact that time evolution is retrieved by such a substitution, even though the Hamiltonian is constrained, is an important point.
At this point, define ψ₁[φ; h_A] = exp( i S₁[φ, h_A] ) and require conservation of the current associated with χ. Then, to order M⁰, one gets the following functional equation for matter:

∫ d³x [ i M⁻¹ G_AB ∂_A S_G ∂_B − H_φ ] ψ₁[φ; h_A] = 0 .    (11)
By using as time the local "multi-fingered" time τ = τ(x) of the parametrization generated by the classical momenta along the classical trajectory in superspace,

G_AB π_A (δ/δh_B(x)) τ(y) = δ(y − x) ,    (12)
Equation (11) gives the Schrödinger-type functional equation

∫ d³x [ i ψ̊₁[φ; h_A] − H_φ ψ₁[φ; h_A] ] = 0 ,    (13)
where we employ a circle over the variable to indicate the functional derivative with respect to τ:

δ/δτ = M⁻¹ G_AB π_A ∂_B .    (14)
Notice that in normal coordinates (N = 1, N_0i = 0), Equation (13) reduces to the functional Schrödinger equation for bosonic fields. In this case, the passage to the partial derivative with respect to time is granted by the fact that the matter wave functional ψ₁[φ; h_A] depends explicitly on time only through the background metric, which appears as a set of local parameters; employing the resulting relation (10) between momenta and velocities, (14) reduces to an application of the chain rule.
3 The Time Evolution of Quantum Space
The functional derivative with respect to τ of a metric component, intended as the functional

h_A(x) = ∫ d³y h_A(y) δ(y − x) ,    (15)

gives the relation between velocities and momenta,

h̊_A = M⁻¹ G_AB π_A .    (16)
Putting the relation (16) back into the Hamiltonian equation gives the equations of motion with respect to τ . In other words, we can use τ not only to describe the evolution of quantum fields with respect to the background metric, but also to describe the evolution of the background metric itself. It is a favorable parametrization in that it makes the form of the equations of motion simpler and not explicitly dependent on the coordinate choice (1). Incidentally, notice that, in Peres' HJE equation, the spatial metric components are defined as functions of space alone. How they become a function of time seems to be a problem that is not addressed in the literature. In the present treatment, they become functions of time precisely by defining their dependence on time according to (16).
The main objection to extending the use of multi-fingered time to the quantum evolution is that classical trajectories are lost in that regime. While this is true, we still know what the classical trajectory is, and we can use the "natural" parametrization along it (i.e., the parametrization in which all equations of motion appear as simple as (16)) to describe evolution along non-classical trajectories between wave fronts of constant multi-fingered time.
Working with the vacuum model, we can make multi-fingered time explicit in the HJE Equation (9) by rewriting it as

∫ d³x [ ∂_τ S_G + (2M)⁻¹ G_AB ∂_A S_G ∂_B S_G + V ] = 0 ,    (17)
and requiring that

∫ d³x ∂_τ S_G = 0 .    (18)
Notice that the derivative with respect to τ is partial. As we previously observed, the WKB time (14) only takes into account the dependency on τ through the geometrodynamical degrees of freedom, but S_G could, in principle, also depend on it explicitly. What we are doing is simply allowing for this possibility. Adding and subtracting the τ-derivative of S_G, and substituting the velocities (16), integration of the HJE Equation (17) tells us that S_G is indeed Hamilton's principal function, defined as a functional integral of the Lagrangian.
Notwithstanding the fact that the condition (18) ensures that the Hamiltonian constraint still holds, it allows the action to remain explicitly dependent on τ locally. We may then try to quantize the HJE Equation (17) à la Schrödinger, and require the global condition (18) to hold only in the classical limit. What we obtain is a WDW equation with time:

∫ d³x [ −i ∂/∂τ + (2M)⁻¹ G_AB ∂_A ∂_B + V ] Ψ = 0 .    (19)
Both the introduction of the coordinate independent functional time derivative and the (classically redundant) condition (18) are necessary to obtain Equation (19). The second condition is somehow reminiscent of another approach [22], where time is recovered by weakening the classical Hamiltonian constraint, required to hold only on average in the quantum regime. In that work, the problem of which time to use to describe the evolution is not addressed, and the condition on time dependence is stricter than ours (see (6) in the referenced paper), resulting in a wave function whose phase does not depend on time even locally.
4 The de Sitter Minisuperspace
The results of the previous section, i.e., the time evolution Equation (19) combined with the global condition (18), are only formal. Equation (19) still presents the same issues as the "timeless" WDW Equation (6). Besides that, using a spatial-dependent τ to parametrize the spatial geometry is more easily said than done. In the following, we will consider, by way of example, the solution for the spatially flat de Sitter universe, described by the line element
ds² = −dt² + a(t)² δ_ij dx^i dx^j .    (20)
Here, a(t) is the scale factor. We will set M = 1/12. As a geometrodynamical variable, it will be convenient to adopt, rather than the scale factor itself,

q = ( (2a)^(3/2) / 3 ) ( ∫ d³x )^(1/2) ,    (21)

which is proportional to the square root of the co-moving spatial volume considered.
The Ricci scalar of the spatial metric vanishes in this model, and the Einstein-Hilbert Lagrangian reads simply

L = −(1/2) ( q̇² + ω² q² ) .    (22)
Here, we have defined the constant ω = √(3Λ)/2. The canonical momentum is π_q = −q̇, and the Hamiltonian is

H = −(1/2) π_q² + (1/2) ω² q² .    (23)
The Hamiltonian constraint then gives the Friedmann equation

( q̇ / q )² = ω² .    (24)
This allows us to obtain the classical time evolution of the spatial volume,

q(t) = q(0) exp(ωt) .    (25)
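As a quick numerical cross-check (not part of the paper; the values ω = 1 and q(0) = 1 are illustrative choices), one can integrate the expanding branch q̇ = ωq of Eq. (24) with a fourth-order Runge-Kutta scheme and compare with the closed-form solution (25):

```python
import math

def integrate_q(q0, omega, t_end, n_steps):
    """RK4 integration of dq/dt = omega * q from t = 0 to t = t_end."""
    h = t_end / n_steps
    q = q0
    f = lambda x: omega * x
    for _ in range(n_steps):
        k1 = f(q)
        k2 = f(q + 0.5 * h * k1)
        k3 = f(q + 0.5 * h * k2)
        k4 = f(q + h * k3)
        q += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return q

# Illustrative parameters: omega = 1, q(0) = 1, evolved to t = 2.
q_num = integrate_q(1.0, 1.0, 2.0, 1000)
q_exact = math.exp(2.0)  # q(0) * exp(omega * t), Eq. (25)
assert abs(q_num - q_exact) / q_exact < 1e-8
```

The same check works for the contracting branch by flipping the sign of ω.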
The HJE Equation (9) reduces to

−(1/2) (∂S/∂q)² + (1/2) ω² q² = 0 .    (26)
Classically, Hamilton's principal function depends on time only implicitly, and is of the form

S = ∓ (ω/2) ( q² − q_i² ) .    (27)
Using Equation (25), one can check that the (one-fingered) time associated with this action indeed coincides with forward coordinate time when we choose the negative sign.
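A short sanity check (the parameter values below are illustrative assumptions) confirms numerically that the lower-sign branch of Eq. (27) satisfies the reduced HJE (26), and that π_q = ∂S/∂q = −q̇ then gives q̇ = ωq > 0, i.e., the expanding solution:

```python
# Illustrative parameters; q_i is the initial value of q.
omega, q_i = 0.7, 1.0

def S(q):
    """Hamilton's principal function, Eq. (27), lower sign."""
    return -0.5 * omega * (q**2 - q_i**2)

def dS_dq(q, h=1e-6):
    """Central finite-difference derivative of S."""
    return (S(q + h) - S(q - h)) / (2.0 * h)

for q in [0.5, 1.0, 2.0, 5.0]:
    # Reduced HJE, Eq. (26): -(1/2)(dS/dq)^2 + (1/2) omega^2 q^2 = 0
    residual = -0.5 * dS_dq(q)**2 + 0.5 * omega**2 * q**2
    assert abs(residual) < 1e-6
    # q_dot = -dS/dq = +omega*q > 0: the expanding branch
    assert -dS_dq(q) > 0.0
```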
Moving on to the quantization, notice that with our choice of variable the DeWitt metric, which in terms of the scale factor reads G_aa = −2a, is now

G_qq = −(3q)^(2/3) .    (28)
This simplifies the measure,

√(−G_(a)) da → dq ,    (29)

and, adopting the Laplace-Beltrami operator ordering for the kinetic operator, we have

(−G_(a))^(−1/2) ∂_a ( √(−G_(a)) G^aa ∂_a ) → ∂²_q .    (30)
Then, in this minisuperspace, the WDW Equation (19) reads

i Ψ̇ = (1/2) ∂²_q Ψ + (1/2) ω² q² Ψ ,    (31)
which is essentially the one-dimensional Schrödinger equation for the so-called inverted harmonic oscillator. See [23] for a recent review, and [24] for an application to the study of a scalar field in slow-roll inflation. The difference in our case is that the variable is constrained to the positive axis only, and time appears with the opposite sign. As in [24], we make the Gaussian ansatz

ψ(q, t) = A(t) exp( −B(t) q² ) .    (32)
Equating in (31) the terms of zeroth and second order in q imposes

−i Ȧ = A B    (33)

and

−i Ḃ = (1/2) ω² + 2B² .    (34)
From the evolution Equation (34), we have

B(t) = (ω/2) tan(φ + iωt) .    (35)
Here, φ is the real part of the constant of integration. The imaginary part of the constant is instead absorbed into the choice of initial time, so that the width of the state is minimized at t = 0. The value of φ determines whether the distribution at t = 0 is more sharply peaked in q (φ < π/4) or in the conjugate momentum (φ > π/4).
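One can verify directly that Eq. (35) solves the Riccati Equation (34); the sketch below (with the illustrative choices ω = 1, φ = π/4) compares a finite-difference derivative of B(t) with the right-hand side:

```python
import cmath

omega, phi = 1.0, cmath.pi / 4  # illustrative parameters

def B(t):
    """Gaussian width parameter, Eq. (35)."""
    return 0.5 * omega * cmath.tan(phi + 1j * omega * t)

h = 1e-6
for t in [-1.0, 0.0, 0.5, 2.0]:
    dB = (B(t + h) - B(t - h)) / (2.0 * h)  # central finite difference
    lhs = -1j * dB
    rhs = 0.5 * omega**2 + 2.0 * B(t)**2    # right-hand side of Eq. (34)
    assert abs(lhs - rhs) < 1e-5
```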
Substituting B(t) in Equation (33), for a normalized solution we obtain

A(t) = ( 2ω sin(2φ)/π )^(1/4) ( cos(φ + iωt) )^(−1/2) .    (36)
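As a further consistency check, the full Gaussian ansatz of Eq. (32), with B(t) from Eq. (35) and A(t) from Eq. (36), should satisfy the minisuperspace Equation (31) identically. The sketch below tests the residual by finite differences; the values ω = 1 and φ = π/3 are illustrative assumptions:

```python
import cmath

omega, phi = 1.0, cmath.pi / 3  # illustrative parameters

def B(t):
    return 0.5 * omega * cmath.tan(phi + 1j * omega * t)   # Eq. (35)

def A(t):
    return (2.0 * omega * cmath.sin(2.0 * phi) / cmath.pi) ** 0.25 \
        * cmath.cos(phi + 1j * omega * t) ** -0.5          # Eq. (36)

def psi(q, t):
    return A(t) * cmath.exp(-B(t) * q**2)                  # Eq. (32)

h = 1e-4
for q in [0.3, 1.0, 1.7]:
    for t in [0.0, 0.4]:
        dpsi_dt = (psi(q, t + h) - psi(q, t - h)) / (2.0 * h)
        d2psi_dq2 = (psi(q + h, t) - 2.0 * psi(q, t) + psi(q - h, t)) / h**2
        # Residual of Eq. (31): i psi_dot - (1/2) psi'' - (1/2) omega^2 q^2 psi
        res = 1j * dpsi_dt - 0.5 * d2psi_dq2 \
            - 0.5 * omega**2 * q**2 * psi(q, t)
        assert abs(res) < 1e-4
```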
The expectation value for q is

⟨q⟩ = ( 2πω sin(2φ) )^(−1/2) ( cos(2φ) + cosh(2ωt) ) / | cos(φ + iωt) | .    (37)
At late times, we correctly recover the classical inflationary expansion q ∝ exp(ωt) of Equation (25). In this limit, the phase of ψ approximates the classical action (27) (up to a global phase, which fixes the initial value) and loses its explicit dependence on time, thus satisfying the classical condition (18). On the other hand, when we approach the time t = 0, where the state is maximally contracted for the given value of φ, the expectation value of the scale variable deviates from the classical one, which converges asymptotically to zero: we observe instead the state of the de Sitter universe bouncing back from an earlier phase of contraction (see Figure 1).

Figure 1: Time evolution of the expectation value of the scale variable q for φ = π/4. We set ω = 1.
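Equation (37) predicts a time-symmetric bounce: ⟨q⟩ is even in t, minimal at t = 0, and grows like exp(ωt) at late times, matching the classical solution (25). The snippet below evaluates ⟨q⟩(t) for the illustrative choice ω = 1, φ = π/4 (the values used in Figure 1) and checks these properties:

```python
import math
import cmath

omega, phi = 1.0, math.pi / 4  # illustrative parameters, as in Figure 1

def q_exp(t):
    """Expectation value <q>(t) of Eq. (37)."""
    norm = (2.0 * math.pi * omega * math.sin(2.0 * phi)) ** -0.5
    return norm * (math.cos(2.0 * phi) + math.cosh(2.0 * omega * t)) \
        / abs(cmath.cos(phi + 1j * omega * t))

assert abs(q_exp(1.3) - q_exp(-1.3)) < 1e-12   # even in t: the bounce
assert q_exp(0.0) < q_exp(0.5) < q_exp(1.0)    # minimal contraction at t = 0
ratio = q_exp(8.0) / q_exp(7.0)
assert abs(ratio - math.exp(omega)) < 1e-3     # late-time classical expansion
```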
5 Conclusions
In this short note, we have proposed to extend to the quantum regime the use of the multi-fingered time originating from the semi-classical WKB approximation of geometrodynamics. We have done so by rewriting the classical HJE equation for vacuum space to include an explicit time dependence of Hamilton's principal function, and requiring this dependence to vanish globally. The quantization provides a WDW equation that describes the evolution of the state with respect to classical multi-fingered time. Quantum matter fields can be included by appropriately augmenting the Hamiltonian operator. The classical limit will still be granted.
The main purpose of this work was to provide a formal WDW equation, (19), with a clear and working notion of time. This result, however, does not help with the original mathematical difficulties of the WDW equation, such as the indefiniteness of the superspace metric, or the divergence of functional derivatives in the full theory. Nevertheless, we showed, with the example of a minisuperspace model of the flat de Sitter universe, that exact normalizable solutions can be easily found and interpreted. In particular, for the de Sitter universe, we found a well-behaved Gaussian solution, which shows a state that bounces back at the time of maximal contraction, thus avoiding the classical asymptotic regression to nihil.
The next step along this line of research could be the formalization of our heuristic approach both in the Hamiltonian and the Lagrangian formalism, where the wave functional could be constructed in terms of a path integral over foliations of constant multi-fingered time. On the side of application, it would be fundamental to work out exact solutions that include quantum matter degrees of freedom. Expanding on the de Sitter model introduced here by adding perturbations could be relevant for early inflationary cosmology. Furthermore, beyond cosmology, an application to time evolution during the last stage of gravitational collapse could be of interest.
References

[1] DeWitt, B.S. Quantum Theory of Gravity. I. The Canonical Theory. Phys. Rev. 1967, 160, 1113-1148.
[2] Wheeler, J.A. Superspace and the nature of quantum geometrodynamics. In Battelle Rencontres; DeWitt, B.S., Wheeler, J.A., Eds.; Benjamin: New York, NY, USA, 1968; pp. 242-307.
[3] Rovelli, C. Quantum Gravity; Cambridge University Press: Cambridge, UK, 2004.
[4] Peres, A. On Cauchy's problem in General Relativity. Nuovo Cimento 1962, 26, 53-62.
[5] Anderson, E. The Problem of Time; Springer: New York, NY, USA, 2017.
[6] Isham, C.J. Canonical Quantum Gravity and the Problem of Time. In Integrable Systems, Quantum Groups, and Quantum Field Theories; Ibort, L.A., Rodríguez, M.A., Eds.; Springer: Dordrecht, Germany, 1993.
[7] Kuchař, K.V. Time and interpretations of quantum gravity. In Proceedings of the 4th Canadian Conference on General Relativity and Relativistic Astrophysics; Kunstatter, G., Vincent, D., Williams, J., Eds.; World Scientific: Singapore, 1992.
[8] Kiefer, C.; Singh, T.P. Quantum gravitational corrections to the functional Schrödinger equation. Phys. Rev. D 1991, 44, 1067.
[9] Kiefer, C. The semiclassical approximation to quantum gravity. In Canonical Gravity: From Classical to Quantum; Ehlers, J., Friedrich, H., Eds.; Springer: Berlin/Heidelberg, Germany, 1994; pp. 170-212.
[10] Kiefer, C. Quantum Gravity; Oxford University Press: Oxford, UK, 2007.
[11] Bertoni, C.; Finelli, F.; Venturi, G. The Born-Oppenheimer approach to the matter-gravity system and unitarity. Class. Quant. Grav. 1996, 13, 2375.
[12] Kamenshchik, A.Y.; Tronconi, A.; Venturi, G. The Born-Oppenheimer method, quantum gravity and matter. Class. Quant. Grav. 2017, 35, 015012.
[13] Kamenshchik, A.Y.; Tronconi, A.; Vardanyan, T.; Venturi, G. Time in quantum theory, the Wheeler-DeWitt equation and the Born-Oppenheimer approximation. Int. J. Mod. Phys. D 2019, 28, 1950073.
[14] Kiefer, C.; Wichmann, D. Semiclassical approximation of the Wheeler-DeWitt equation: Arbitrary orders and the question of unitarity. Gen. Relativ. Gravit. 2018, 50, 66.
[15] Chataignier, L. Gauge Fixing and the Semiclassical Interpretation of Quantum Cosmology. Z. Für Nat. A 2019, 74, 1069.
[16] Chataignier, L. Construction of quantum Dirac observables and the emergence of WKB time. Phys. Rev. D 2020, 101, 086001.
[17] Di Gioia, F.; Maniccia, G.; Montani, G.; Niedda, J. Nonunitarity problem in quantum gravity corrections to quantum field theory with Born-Oppenheimer approximation. Phys. Rev. D 2022, 103, 103511.
[18] Rotondo, M. The Functional Schrödinger Equation in the Semiclassical Limit of Quantum Gravity with a Gaussian Clock Field. Universe 2020, 6, 176.
[19] Rovelli, C. The strange equation of quantum gravity. Class. Quant. Grav. 2015, 32, 124005.
[20] DeWitt-Morette, C. The Pursuit of Quantum Gravity; Springer: Berlin/Heidelberg, Germany, 2011; p. 58.
[21] Arnowitt, R.; Deser, S.; Misner, C.W. Republication of: The dynamics of general relativity. Gen. Relativ. Gravit. 2008, 40, 1997-2027.
[22] Nikolic, H. Time in quantum gravity by weakening the Hamiltonian constraint. arXiv preprint gr-qc/0312063, 2003.
[23] Subramanyan, V.; Hegde, S.S.; Vishveshwara, S.; Bradlyn, B. Physics of the Inverted Harmonic Oscillator: From the lowest Landau level to event horizons. Ann. Phys. 2021, 435, 168470.
[24] Guth, A.H.; Pi, S.-Y. Quantum mechanics of the scalar field in the new inflationary universe. Phys. Rev. D 1985, 32, 1889-1920.
Self-Supervised Representation Learning from Temporal Ordering of Automated Driving Sequences
Christopher Lang
University of Freiburg
Robert Bosch GmbH
Alexander Braun
Robert Bosch GmbH
Lars Schillingmann
Robert Bosch GmbH
Karsten Haug
Robert Bosch GmbH
Abhinav Valada
University of Freiburg
Self-supervised feature learning enables perception systems to benefit from the vast amount of raw data being recorded by vehicle fleets all over the world. However, their potential to learn dense representations from sequential data has been relatively unexplored. In this work, we propose TempO, a temporal ordering pretext task for pre-training region-level feature representations for perception tasks. We embed each frame by an unordered set of proposal feature vectors, a representation that is natural for instance-level perception architectures, and formulate the sequential ordering prediction by comparing similarities between sets of feature vectors in a transformer-based multi-frame architecture. Extensive evaluation in automated driving domains on the BDD100K and MOT17 datasets shows that our TempO approach outperforms existing self-supervised single-frame pre-training methods as well as supervised transfer learning initialization strategies on standard object detection and multi-object tracking benchmarks.
Figure 1. We introduce TempO, a self-supervised learning pretext task based on the temporal ordering of frames, which pre-trains perception models for both frame-level tasks, like object detection, and multi-frame tasks, like multi-object tracking. We propose a transformer-based architecture that is designed to scale less than quadratically w.r.t. sequence length and enables richer temporal context during pre-training.

The benefits of self-supervised learning methods as pre-training for image classification tasks have been demonstrated by image-level feature learning using contrastive methods [4,5,6,16]. These approaches were later extended to region-level representation learning, where the contrastive approach is applied to image patches tracked over a set of augmentations [8,52]. In robotic domains, such as automated driving, the perception system observes the environment from a continuous stream of sensor data. Adding such temporal context allows learning from undistorted images by exploiting object permanence and dynamical constraints. Recent methods exploit these by enforcing linear motion models and static scene assumptions for depth estimation [14], or consistent appearance constraints when tracking patches along a temporal cycle [23,46,47]. However, such methods fail once these underlying assumptions are violated (e.g., by dynamic objects in the scene or large camera movements), resulting in inconsistent predictions or indefinite loss values. The task of ordering a set of shuffled frames into their original temporal succession by video-level feature representations [26,33] has shown promising results, as it is less ambiguous and never ill-defined, while encouraging an understanding of both local object semantics and global scene composition.
In this work, we propose TempO, a self-supervised representation learning approach that extends temporal ordering to region-level feature learning for object detection and tracking. We achieve this synergy by defining the temporal ordering task as a sequence estimation problem and constructing the method based on the tracking-by-detection paradigm, as depicted in Figure 1. This design of a single-frame (spatial) network and a light multi-frame head requires the network to learn consistent representations and perform the bulk of the semantic reasoning for each frame separately. We evaluate the performance of TempO pre-training on the downstream tasks of object detection on the BDD100K dataset and multi-object tracking on the BDD100K and MOT17 datasets, comparing with pre-training on supervised datasets and existing unsupervised pre-training methods. Furthermore, we study the utility of the representations learned from TempO pre-training for the frame retrieval task, without additional training.
The main contributions of this work are:
• We propose TempO, a self-supervised pre-training pipeline for object detection models and multi-object tracking models from a temporal ordering pretext.
• We design a transformer-based multi-frame sorting head whose computational complexity scales less than quadratically with the sequence length. This allows us to pre-train on longer sequences compared to combinatorial approaches [33,53].
• We perform extensive evaluations of TempO pre-training for the downstream tasks of object detection and multi-object tracking that demonstrate the utility of our approach.
Related Work
Our work is related to the field of self-supervised visual representation learning from sequential data. In the following, we embed our proposed method in these areas.
Self-Supervised Image Feature Learning: Self-supervised learning has been studied extensively on single images, where the field can be broadly categorized into image-level, region-level, and pixel-level approaches. Image-level approaches learn a global embedding vector per frame and are typically evaluated on image classification tasks. Contrastive methods learn an image feature vector that is invariant to a set of augmentations while being distinct from the embeddings of other images. The choice of negative examples ranges over instance-based discrimination [6,7], clustering-based pretext tasks [4], and bootstrapping approaches [5,16]. The temporal consistency in video clips is also used as a source of positive pairs from subsequent frames [11]. Enforcing such temporally persistent features benefits both instance-based discrimination and clustering-based methods [11].
Nevertheless, many computer vision tasks, including object detection [24,25] and segmentation [15,34,45], require dense representations that also encode local image information. Region-level approaches [8,9,57], therefore, rely on pretext tasks that operate on patches of an image or the feature map. Patch discrimination methods [29,35] utilize these image patches to learn local augmentation-invariant embeddings, analogously to the image-level methods described above. The patch re-identification pretext task [10,19], on the other hand, emphasizes the localization task by regressing the image coordinates of the local patch in the global image. Combining patch discrimination and re-identification pretext tasks has shown substantial gains for downstream object detection [8,10,51,52], as the losses include regression- and classification-based terms, analogously to the downstream task. In our experiments, we compare against UP-DETR [8] as a baseline, which is trained on localizing random crops in an image and contrasting the feature embeddings of crops within an image. It is therefore less sensitive to the choice of image augmentations, which makes it comparable to our proposed method. Dense self-supervised feature learning approaches on the level of pixels [35,38,52] or features [29,48,54] are evaluated on segmentation tasks. The pretext tasks rely on unsupervised segmentation masks [19,20] or the construction of new images [48,54,55].
In recent years, consistency constraints in sequential data have been used as a supervisory signal to learn dense feature representations. Tracking approaches exploit object permanence constraints in appearance by searching for image patches by their representation in a feature map. The pretext task is to predict the offset of an image region forward and then backward along a cycle in time [21,47], whereby the difference between start and end points is used as a training signal. However, such methods operate only on pairwise frame contexts and depend on data domains where the presence of an object over a certain time window can be ensured. Cross-stream approaches [17,43,44] estimate the optical flow between two images, from which they derive an informed selection of positive and negative patches [43], instances [17], or prototypes [44] for contrastive learning.
Self-Supervised Video Feature Learning: Video feature learning exploits the temporal consistency in sequential data as a supervision signal to generate video-level embeddings. Self-supervised learning approaches can be broadly categorized into three types of pretext tasks:
1. Video discrimination learns contrastive video features [9,11,39] that are invariant to a set of temporally consistent augmentations.
2. Sequential verification performs a binary classification of whether a sequence is in its correct or in a shuffled temporal order [12,33]. Other formulations discriminate between the forward and backward order of frames [49]. Such methods encode video clips all at once but are outperformed by temporal ordering methods, as the network needs to reason about and understand the statistical temporal structure of image sequences.
3. Temporal ordering on video classification tasks [26,37,49] is formulated as a classification problem over all frame permutations, which limits its usage to sequence lengths of usually not more than six frames due to the combinatorial explosion of orderings.

While the aforementioned methods learn video-level embeddings, they are widely evaluated on clip retrieval and action recognition tasks. We define the temporal ordering pretext task as a sequence estimation problem, estimating image transition probabilities instead of solving a multi-class classification problem, which allows learning frame-level feature representations that support a larger variety of downstream tasks, as described in the following section.

Figure 2. Illustration of the multi-frame head in the TempO architecture. Each frame n is represented by a set of proposal feature vectors T_n extracted by the same frame-level network. The proposal features are concatenated and encoded by a transformer encoder into a set of history tokens H, using a future frame masking that allows each proposal to aggregate temporal context from past frames only. An additive attention mechanism then computes the association scores between the history tokens and the proposal feature vectors. We next map all scores of proposal features corresponding to the same frame onto scalar image transition probabilities. Our proposed temporal ordering task maximizes these probabilities for the correct temporal order during pre-training.
Technical Approach
We next detail our proposed TempO pretext task for region-level visual representation learning. In the remainder of this section, we first introduce the pretext task in Section 3.1, followed by the network architecture in Section 3.2 and Section 3.3, and finally describe the transfer learning technique to perform downstream task evaluations in Section 3.4.
Sequence Ordering Task Definition
For the TempO pretext task, we consider image sequences of length N . We chose the starting index such that n = 1, . . . , N for a more concise notation. A training sample is composed of the first frame in the sequence I 1 as an anchor in time, and the remaining sequence frames I 2:N in arbitrary order. At first, a single-frame network extracts an unordered set of proposal feature vectors from each frame independently. The multi-frame transformer head then processes the concatenated proposal feature vectors over all N frames in a training sample and maps them onto next-image probabilities given a sequence of images as described in Section 3.3.
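The sample construction above can be sketched as follows (a minimal illustration; the helper name and frame identifiers are ours, and, as noted in Section 3.4, the explicit shuffling can be omitted when the multi-frame head is permutation invariant):

```python
import random

def make_training_sample(frames, seed=None):
    """Build a TempO training sample: the first frame is kept as a
    temporal anchor, the remaining frames are returned in arbitrary order."""
    rng = random.Random(seed)
    anchor, rest = frames[0], list(frames[1:])
    rng.shuffle(rest)
    return anchor, rest
```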
Our training objective is to maximize the next-image probability $\rho(I_n \mid I_{1:n-1})$ for the observed temporal orderings in the video data, using the ranking loss formulation

$$\mathcal{L} = \sum_{m \neq n} \max\left\{\rho(I_m \mid I_{1:n-1}) - \rho(I_n \mid I_{1:n-1}) + \Delta,\ 0\right\}, \qquad (1)$$
where ∆ ≥ 0 is a scalar margin.
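As a sketch, Eq. (1) can be evaluated for a single step n as follows (NumPy; `probs[m]` stands in for ρ(I_m | I_{1:n−1}), and the function name is illustrative):

```python
import numpy as np

def tempo_ranking_loss(probs, true_next, margin=0.1):
    """Ranking loss of Eq. (1): the probability of the true next frame
    `true_next` should exceed every other candidate by `margin`."""
    p_true = probs[true_next]
    return float(sum(max(probs[m] - p_true + margin, 0.0)
                     for m in range(len(probs)) if m != true_next))
```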
Single-Frame Network
Our approach adapts to network architectures that process images and produce a set of P proposal feature vectors Q ∈ R^{P×D} of dimension D per frame. This includes common region-proposal-based [42] and transformer-based [3,58] architectures. The majority of our experiments are conducted on the Sparse R-CNN [42] object detection architecture, using a ResNet-50 [18] as the feature extractor. It learns a sparse set of P proposal boxes and features, from which classification scores and bounding boxes are generated. The initial proposal features are extracted from learned proposal box regions in the feature map. The proposal features are then iteratively refined by a sequence of dynamic heads that each allow interaction between proposal features via self-attention modules.
We implement two distinct branches of dynamic heads: a detection branch that extracts object proposal features, consisting of six iterative heads, and a tracking branch that extracts tracking proposal features from two iterative dynamic heads, which are used to associate object identities throughout a sequence. The model under pre-training uses two dynamic heads, whose parameters are cloned to initialize the parameters of both the detection and tracking branches (see Section 3.3) during fine-tuning. The motivation for separating the tracking and detection branches is that their feature representations pursue competing objectives: while detection features should learn to generalize across an object type (e.g., car), tracking features should learn to discriminate between object instances.
Multi-Frame Sequence Ordering Network
The overall setup of our sequence ordering network is depicted in Figure 2. We express the image transition probability with respect to the track features as $\rho(I_n \mid I_{1:n-1}) = \rho(T_n \mid H_{n-1})$, given the sequence history up to frame $n-1$ as $H_{n-1} \in \mathbb{R}^{(n-1)P \times D}$. The history tokens $H_{n-1}$ are the output sequence tokens of a transformer encoder that takes as input the track feature vectors up to frame $n-1$. This is implemented by masking track features of future frames in the attention matrix.
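The future-frame masking can be sketched as a block-causal mask over the concatenated proposal tokens (a hedged illustration; we assume P proposals per frame and that tokens may attend within their own frame):

```python
import numpy as np

def future_frame_mask(n_frames, p):
    """Block-causal attention mask: entry (a, b) is True iff token a may
    attend to token b, i.e. iff token b belongs to the same or an earlier
    frame than token a."""
    frame_of_token = np.repeat(np.arange(n_frames), p)
    return frame_of_token[:, None] >= frame_of_token[None, :]
```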
Finally, we compute additive attention between the encoded sequence history tokens and the track features given by
$$\operatorname{Att}(t_n^i, h_m^j) = v^{\top} \tanh\left(W_1 t_n^i + W_2 h_m^j\right), \qquad (2)$$

where $\operatorname{Att}(t_n^i, h_m^j)$ denotes the attention score between $t_n^i$, i.e., the $i$-th proposal vector in frame $n$, and $h_m^j$, i.e., the $j$-th history feature vector up to frame $m$; $v$, $W_1$, and $W_2$ are learnable parameter matrices. In the next step, we employ a reduction function that computes a scalar ordering score from the track-to-history association score matrix.
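Equation (2) can be sketched as follows (NumPy; the hidden dimension K of the learnable parameters is an assumption of ours):

```python
import numpy as np

def additive_attention(T, H, W1, W2, v):
    """Pairwise additive attention of Eq. (2):
    scores[i, j] = v^T tanh(W1 @ T[i] + W2 @ H[j])."""
    left = T @ W1.T                    # (P, K)
    right = H @ W2.T                   # (P', K)
    # Broadcast the pairwise sum before the nonlinearity.
    return np.tanh(left[:, None, :] + right[None, :, :]) @ v  # (P, P')
```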
Proposal-to-Frame Reduction Function: The final stage in our multi-frame head is the reduction from an association score matrix to a next-frame transition probability. As the associations are derived from unordered sets of proposals, this function is required to be permutation invariant with respect to the elements of the association matrix. One naive candidate is the mean over all matrix elements (AvgPool) that encourage similarity between all track features in temporally subsequent frames. Since tracking requires a consistent embedding of an object across subsequent frames, we also experiment to enforce a one-to-one matching among track features of two frames. Therefore, we propose an approximation of the linear sum assignment, as follows:
$$\rho(T_n \mid H_m) = \sum_{j} \max_{i=1,\dots,P} \operatorname{softmax}_{i}\left(\operatorname{Att}(t_n^i, h_m^j)\right), \qquad (3)$$

where the softmax normalizes over the proposals $i$; we ablate over these choices of the reduction function.
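The two reduction functions can be sketched as follows (NumPy; the softmax axis reflects our reading of Eq. (3)):

```python
import numpy as np

def reduce_avgpool(att):
    """Naive reduction: mean over all association scores."""
    return float(att.mean())

def reduce_soft_assignment(att):
    """Soft approximation of a linear sum assignment, Eq. (3): softmax
    over proposals i per history token j, best match per j, summed over j."""
    e = np.exp(att - att.max(axis=0, keepdims=True))
    soft = e / e.sum(axis=0, keepdims=True)   # softmax over rows i
    return float(soft.max(axis=0).sum())
```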
Downstream Task Architectures
Our models use the ResNet-50 [18] architecture as the backbone and the Sparse R-CNN [42] architecture with two iterative dynamic heads for generating 100 object proposals per frame. The ResNet-50 parameters are initialized with weights pre-trained on the ImageNet dataset and all the other model parameters are randomly initialized. For the pretext task, we collect feature vectors over all sequence frames and feed them to a masked transformer encoder described in Section 3.3. Since we exclusively use permutation invariant operations and no positional embedding on the multi-frame level, we omit the explicit shuffling of frames in our implementation.
Object Detection: For the object detection fine-tuning, we adapt the originally proposed Sparse R-CNN configuration [42]. Therefore, we build upon the pre-trained Sparse R-CNN architecture and stack four iterative dynamic heads on top of the pre-trained heads, called the detection branch. The final object proposal vectors are mapped onto classification scores and bounding box regression coordinates using separate linear layers.
Multi-Object Tracking: During the Multi-Object Tracking (MOT) downstream task, we associate proposals based on the additive attention between track features T_i and the history features of the previous frame, H_{i−1}, generated by the transformer encoder. We follow the setup in QDTrack [36], applying their bidirectional softmax matching in feature space, tracker logic, and training pipeline. We further extend the model into an object detector, as described above. We, therefore, clone the pre-trained iterative heads, such that they have distinct parameter sets for the tracking and detection branches.
Experimental Results
We pre-train the networks using our proposed TempO approach on the train splits of the Berkeley Deep Drive [56] (BDD100K) as well as MOT17 [32] (MOT17) datasets. In this section, we compare the performance of TempO pretrained models with other initialization strategies from the literature for single-frame as well as multi-frame downstream perception tasks.
Datasets
The Berkeley Deep Drive [56] (BDD100K) dataset contains crowdsourced videos of driving scenes such as city streets and highways. It covers different weather conditions and times of the day. For MOT fine-tuning, we use the BDD100K MOT annotations at 5 frames per second (FPS), which consist of 1400 videos of 40 s length. The annotations cover eight object categories with overall 131k identities. For object detection fine-tuning, we use annotations from the object detection challenge, which provides eight annotated categories for evaluation. The MOT17 [32] challenge consists of 14 video sequences (7 training, 7 test) at varying frame rates (>14 FPS) and sequence lengths (>20 s) in unconstrained environments, filmed with both static and moving cameras. It provides MOT annotations that feature a people class with at least three persons visible per frame. For our pre-training, we sample frames at 5 FPS from videos in the training split, from which we generate non-overlapping training sequences.
Training Protocol
We train the models on NVIDIA V100 GPUs with a batch size of 8 for 6 epochs on the pretext tasks. We use the AdamW optimizer with an initial learning rate of 2.5 · 10 −5 and weight decay of 10 −4 . A step scheduler further reduces the learning rate by a factor of 10 every 3 epochs. We fine-tune the models for another 6 epochs on the respective downstream tasks. The baselines were trained for 12 epochs on the downstream tasks, such that all models have seen each frame at most 12 times during training. We resize the images to a resolution of 800 pixels on the longer side and perform random cropping (spatial augmentation) or photometric augmentations such as color jitter, random gray scaling, and brightness change, only when stated.
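The step schedule described above can be sketched as follows (the helper name is illustrative):

```python
def step_lr(base_lr, epoch, step_size=3, gamma=0.1):
    """Step schedule: the learning rate is reduced by a factor of 10
    every `step_size` epochs, matching the pre-training setup above."""
    return base_lr * gamma ** (epoch // step_size)

# With the initial rate 2.5e-5: epochs 0-2 use 2.5e-5, epochs 3-5 use 2.5e-6.
```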
Evaluations on Downstream Tasks
We evaluate how the region-level feature representations learned from TempO pre-training impact the performance of single-frame and multi-frame downstream tasks, i.e., object detection and multi-object tracking on the BDD100K as well as the MOT17 dataset. We follow the evaluation protocol as in [5,7,53], where we fine-tune all models for the same fixed number of epochs (12 in our case). The TempO pre-training consists of ordering sequences of length N = 8, using two layers in the transformer encoder, for 6 epochs, followed by fine-tuning on the downstream task for another 6 epochs.

Object Detection Results

Table 1 shows the mean average precision over all classes in the BDD100K object detection benchmark. The average precision measures the area under the precision-recall curve for detections, averaged over thresholds IoU ∈ [0.5 : 0.05 : 0.95] with the ground truth bounding boxes. We compare our proposed SSL pre-training approach against various initialization strategies using the Sparse R-CNN [42] object detector, including the common practice of pre-training the feature extractor as a classifier on the ImageNet dataset, as well as pre-training the model parameters on the supervised Common Objects in Context [28] (COCO 2017) object detection dataset. Additionally, we compare against the single-frame self-supervised pre-training task of random query patch detection (RPD) as described in [8]. For our adaptation to the Sparse R-CNN detector, we add the patch image features to the initial proposal features of the Region Proposal Network (RPN) neck.

Table 1. Object detection results on the BDD100K val dataset. We use a sequence length of 8 frames in the TempO pre-training, two layers in the multi-frame network, and AvgPool, motivated by the ablation results presented in Table 3.

We observe that our TempO initialization outperforms supervised pre-training on the COCO 2017 dataset by +0.7% mAP, while the performance gain compared to the single-frame pretext task of random query patch detection is as large as +2% mAP, evaluated using the Sparse R-CNN detector. Moreover, TempO pre-training results in faster convergence of the detectors. Please refer to the supplementary material for a comparison of convergence plots. In Figure 3, we present qualitative results of (a) a Sparse R-CNN object detector trained with ImageNet pre-trained weights and (b) the TempO pre-trained Sparse R-CNN detector. Compared to the fully supervised training strategy, the TempO pre-trained Sparse R-CNN detector improves detection accuracy, as it suppresses a ghost detection of a motorcycle within the pile of garbage bags on the right side of the image shown in Figure 3 (b). It also detects the poorly illuminated rider on top of the moving bicycle in Figure 3 (b).
Multi-Object Tracking Results
We evaluate our TempO pre-trained models on the MOT downstream task using the standard evaluation metrics [22]. We build on QDTrack [36] to extend our model into a multi-object tracker, as it associates detected objects in the feature space and is the current state of the art on the BDD100K MOT benchmark for models with fewer than 100M parameters. As baselines, we trained this method in its standard configuration using the Faster R-CNN and Sparse R-CNN models as detectors, pre-trained on the BDD100K detection dataset. Table 2 shows the performance on the BDD100K MOT val dataset for fine-tuning with various parameter initialization strategies on the BDD100K MOT training annotations for 4 epochs. We compare the performance with the higher-order tracking accuracy [31] (HOTA) metric, which combines the detection, association, and localization accuracy into a single value. Additionally, we present the benchmark metrics of MOTA, IDF1, and ID switches. We observe that our tracking approach with TempO pre-training and prior fine-tuning as an object detector achieves the highest tracking accuracy in both the HOTA and MOTA scores, outperforming the baseline QDTrack using a Faster R-CNN detector by +0.9% in the HOTA score. Interestingly, the Sparse R-CNN approach achieves lower association accuracy and IDF1 scores compared to the Faster R-CNN as a base detector when only pre-training the object detector, while detection and localization accuracy are at comparable levels. This results in lower overall tracking accuracy when the network parameters are only initialized with TempO pre-training, and fine-tuning of both the detection and association tasks is required during the fine-tuning epochs.

Figure 4 shows an example tracking sequence using the tracking architecture described in Section 3.4. We see that the network reliably tracks the car with high confidence, even under heavy occlusion and changing shape from opening the car door.
The pedestrian entering the car is successfully tracked over the front, side, and rear views, and while partially occluded when stepping into the backseat.
MOT17 Results: We further evaluate our TempO pre-training strategy on the popular MOT17 benchmark. We pre-trained our model on the MOT17 dataset for 50 epochs and compare it against various baselines that are initialized with object detectors pre-trained on the COCO 2017 dataset and with unsupervised pre-training on the CrowdHuman dataset via an RPD pretext task. We follow the training setting described in QDTrack and fine-tune our models for 12 epochs on a mixed dataset of CrowdHuman [40] and the MOT17 train set.
The results in Table 4 show that the TempO pre-trained Sparse R-CNN model outperforms both its supervised COCO and unsupervised RPD initialized counterparts by +1.4% HOTA and +2.3% MOTA, which mainly traces back to increased detection accuracy (DetA) and fewer ID switches.

Ablation Study

Table 3 shows the benchmarking results for object detection and MOT downstream tasks on the BDD100K val dataset for models pre-trained with our proposed TempO training under varying hyperparameter settings. In particular, we ablate over the sequence length, the attention hierarchies of the multi-frame transformer encoder, as well as the choice of temporally varying frame augmentations.
Sequence Length: By varying the number of subsequent frames from 4 to 8, the bounding box AP increases from 31.0% to 31.4%, which suggests that a longer temporal context allows the model to learn more distinctive object attributes to reliably detect object types. For MOT performance, the gain from observing longer sequences, and therefore more frame-wise comparisons, is as high as +3% in HOTA.

Hierarchical Attention: Another vital design aspect is the size of the multi-frame network, and especially the hierarchy of associations, which can be increased by stacking multiple encoder layers. The ablation study shows that this hyperparameter has a larger impact on downstream task performance than the sequence length. Surprisingly, we find that the complexity saturates at two encoder layers, while a higher number of layers decreases performance, especially on the single-frame object detection task. We initially hypothesized that the multi-frame model could be incapable of generalizing across all the dynamic interactions that inform about temporal sequences in traffic scenes. However, the results show that a lighter multi-frame head loads more of the semantic reasoning onto the single-frame model, which thereby learns more expressive features. Moreover, the dampened performance can result from slower convergence due to the high parallelism in multi-head attention modules, such that longer pre-training schedules may be required. Furthermore, using (3) as a reduction function for the association score matrices resulted in a drop in performance compared to average pooling by −0.7% mAP, which can be attributed to the sparser and attenuated gradients.
Spatial and Photometric Augmentations: In Table 3, we evaluate spatial and appearance-based augmentations of the input sequence. These force the network to learn object representations that are invariant to the global image location or lighting effects. Interestingly, photometric augmentations during pre-training resulted in lower performance on the downstream task, reducing object detection performance by −0.1% mAP and tracking performance by −1.4% HOTA for identical TempO settings. This shows that the network exploits internal appearance-consistency assumptions. Spatial augmentations such as random cropping also negatively affect the pre-training, which indicates that the network relies on consistent spatial cues to solve the temporal ordering task.
Frame Retrieval Results
We further evaluate the utility of the learned representations from our TempO pre-training approach for the frame retrieval task on the UCF101 dataset [41], without additional fine-tuning on the retrieval task. We trained models for 100 epochs on the UCF101 videos using the TempO pretext task described in Section 3.1. For frame retrieval, we follow the experimental setup described in [23]: we extract 10 equally spaced frames from each video in the UCF101 dataset and use the frames extracted from videos in the test set as class representatives. The frames are then classified using nearest-neighbor (NN) search in the embedding space, where closeness is defined by set similarities as given by average pooling. The frames extracted from the train split clips are queried for similarity against all these class representatives and marked as correctly classified if a representative of the same action class is within the k nearest neighbors.
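The retrieval protocol can be sketched as follows (NumPy; we use Euclidean distance for illustration in place of the set-similarity measure described above, and all names are ours):

```python
import numpy as np

def topk_retrieval_accuracy(queries, q_labels, refs, r_labels, k):
    """A query frame counts as correctly classified if any of its k
    nearest class representatives shares its action label."""
    correct = 0
    for q, ql in zip(queries, q_labels):
        dists = np.linalg.norm(refs - q, axis=1)
        nearest = np.argsort(dists)[:k]
        correct += int(any(r_labels[i] == ql for i in nearest))
    return correct / len(queries)
```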
We compare against the baseline performances reported by Kong et al. [23]. Table 5 shows the retrieval performance measured as the accuracy for k = 1 to k = 20. The results demonstrate that the TempO pre-trained embeddings show strong consistency across videos of the same action class. Interestingly, the similarity is higher than that obtained from other frame-level pre-training strategies, improving the Top-1 accuracy by +3.9% compared to the cycle-consistency (CCL) pretext task. In Figure 5, we present examples of misclassified frames and their top three most similar class representatives. We observe that the learned TempO representations in these examples focus primarily on the similarity of scene attributes, for instance, the number and size of objects or the camera perspective. Especially in the example in the bottom row of Figure 5, the large variety of backgrounds indicates that the learned representation relies more on tracking features of foreground objects.
Discussion of Limitations
Our analysis, and the SSL video feature learning field in general, focuses on relatively short clips (< 2 s). Many actions in automated driving or human activity recognition, however, extend over longer time periods, and our ablation study also suggests that longer sequences can benefit the pre-trained models. The integration of more efficient video architectures, e.g., operating on compressed videos [50], would be an important enabler for considering longer time intervals. In our design, however, the association of objects over an increased number of frames can become computationally expensive, which can be alleviated by sampling a subset of frames to be compared. Secondly, we perform most of our evaluation on driving sequences, where the camera moves smoothly in a dynamic environment. We chose this domain as the BDD100K dataset provides many hours of videos with high variability, as well as object detection and multi-object tracking annotations without domain shift. Even though we also evaluate on the MOT17 and UCF101 [41] datasets, which are more human-centric and recorded from predominantly static cameras, future work could evaluate how TempO pre-training behaves on a mixture of domains or on highly repetitive videos, e.g., from an indoor service robot. Thirdly, the design of our proposed pretext task directly applies to models that output a set of proposal features. Keypoint-based detection methods or other models would require an additional tokenization strategy.
Conclusion
In this work, we proposed an SSL pretext task based on the temporal ordering of video frames that allows learning region-level image representations. Models initialized with TempO pre-trained weights demonstrated faster convergence and superior performance on object detection and multi-object tracking downstream tasks compared to other self-supervised as well as supervised initialization strategies. The qualitative results also show how TempO pre-training helps to suppress ghost detections and to recognize dark objects at night from semantic context. In our multi-object tracking experiments, TempO pre-training improves the tracking accuracy while using a pre-trained object detector in a tracking-by-detection paradigm.
We, therefore, conclude that a temporal ordering pretext task can boost performance compared to single-frame or supervised pre-training strategies in instance-level perception systems. We also evaluated the learned representations for a frame retrieval task, where we found consistent representations across videos of the same action class.
Convergence Experiments
In Table 1, we compare the object detection performance of different initialization strategies over the number of fine-tuning epochs. All models were trained on NVIDIA V100 GPUs with a batch size of 16 for 12 epochs on the BDD100K train split. We use the AdamW [30] optimizer with an initial learning rate of 2.5 × 10^-5 and a weight decay of 10^-4, and reduce the learning rate by a factor of 10 after 8 epochs.
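The step schedule above (a single 10x learning-rate drop after epoch 8) can be captured by a small helper. The function name and its closed form are our own illustration of the stated schedule, not code from the paper:

```python
def lr_at_epoch(epoch, base_lr=2.5e-5, milestones=(8,), gamma=0.1):
    """Learning rate at a given fine-tuning epoch under a multi-step
    schedule: the base rate is multiplied by `gamma` once for every
    milestone the epoch has reached."""
    drops = sum(1 for m in milestones if epoch >= m)
    return base_lr * gamma ** drops
```

With the paper's settings this yields 2.5e-5 for epochs 0-7 and 2.5e-6 from epoch 8 onwards.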
We observe that the TempO pre-trained initialization yields the fastest convergence and achieves the largest mean average precision among the initialization strategies from 3 fine-tuning epochs onwards. As a result, TempO pre-trained models fine-tuned for 6 epochs achieve comparable results to COCO 2017 pre-trained methods fine-tuned for 12 epochs, and surpass the COCO 2017 pre-trained initialization by more than +0.7% after 12 fine-tuning epochs. Noticeably, the detection performance for large and medium-sized objects increases at a high rate during the early fine-tuning epochs of TempO, but slows down after 6 epochs compared to COCO 2017 initialized detectors. However, the performance at this stage is already higher than that achieved by the COCO 2017 initialized object detectors. Figure 1 presents the loss per epoch for fine-tuning a Sparse R-CNN [42] object detector on the BDD100K dataset using varying initialization strategies. We observe that the TempO pre-trained method's detection performance increases the fastest during the early training epochs and is the highest throughout the second half of the fine-tuning epochs. Analogous to Table 1, the improvement in the mAP metric slows down for TempO pre-trained methods after 6 fine-tuning epochs, while the remaining initialization strategies have still not converged. These results demonstrate that the TempO pre-trained models converge faster than other pre-training strategies.
Table 1. Object detection results on the BDD100k val set for an increasing number of fine-tuning epochs. The TempO pre-training uses a sequence length of 8 frames, two layers in the multi-frame network, and AvgPool, motivated by Table 3.
Object Detection Performance of Scene-level SSL Methods
We found pre-training methods that learn scene-level feature descriptors only partially comparable with our approach, since the parameters of the object detector neck and head are not initialized with the SSL pre-trained weights. Moreover, we cannot quantify the effect that the choice of initialization strategy for those parameters has on the evaluation after fine-tuning.
In Table 2, we present such comparisons with SSL methods that incorporate self-supervised scene-level feature learning strategies from frame-based [5,7] and sequence-based [33,53] pretext tasks. We trained the single-frame methods on double bounding box crops from the BDD100K detection train set, and the sequence-based methods on the same dataset as our TempO method, using the training protocol described in Sec. 4.2 of the manuscript. For frame order verification and classification, we used sequence lengths of 3 and 5, respectively.
The experiments in Table 2 therefore use a random parameter initialization for the detector neck and head. We observe that our TempO approach outperforms all the strategies using scene-level SSL methods for initializing the ResNet-50 backbone and random initialization for the remaining network parameters by ≥ +3.3% in mAP on the BDD100k validation set. This complements the experiments in the main paper, which showed that our proposed approach also outperforms initialization strategies that pre-train all detector parameters, e.g., from supervised training on the COCO 2017 dataset or self-supervised training on re-localization of random image patches.
TempO pre-trained, fine-tuned for 6 epochs on BDD100K.
Figure 3. Qualitative object detection results on the BDD100K val set using the Sparse R-CNN detector with varying training schedules. Observe that the TempO pre-trained detector avoids a ghost detection of a motorcycle within the garbage bags and detects the poorly lit rider on top of the moving bicycle.
Figure 4. Qualitative MOT results on the BDD100K validation set for our proposed transformer tracking architecture pre-trained on a sequential ordering task. The network tracks the pedestrian reliably over changing body orientations and handles the changing shape of the green car.
Figure 5. Frame retrieval demonstrations of misclassifications in the Top-20 setting. Scores are normalized similarities of the Top-20 nearest neighbors. Retrieved frames resemble each other in scene attributes, for instance the number and size of foreground objects or the camera perspective, while backgrounds vary widely.
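Normalized Top-k similarity scores like those shown in Figure 5 can be produced with a standard cosine-similarity retrieval loop. The sketch below is illustrative only: the function and variable names are our own, and the softmax normalization over the Top-k scores is an assumption about how the displayed scores were normalized, not a detail taken from the paper.

```python
import numpy as np

def topk_retrieval(query, gallery, k=20):
    """Nearest-neighbor frame retrieval by cosine similarity.

    query:   (D,) feature vector of the query frame.
    gallery: (N, D) feature matrix of candidate frames.
    Returns the indices of the k most similar gallery frames and their
    softmax-normalized similarity scores (assumed normalization).
    """
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                       # cosine similarity to each frame
    idx = np.argsort(-sims)[:k]        # indices of the k nearest neighbors
    w = np.exp(sims[idx])
    return idx, w / w.sum()            # scores sum to 1 over the Top-k
```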
Algorithm 1: TempO pretext task pipeline.
Data: Image sequence I_1, ..., I_N in correct temporal order, margin m
Result: L_TempO
1  forall I_i do
2      T_i = SpatialNetwork(I_i)
3  end
4  # per-frame track tokens T_i ∈ R^{Q×D}
5  H_{1:N-1} = HistoryEncoder([T_1, ..., T_{N-1}], M_future)
6  # history tokens H_{1:N-1} ∈ R^{(N-1)·Q×D}
7  L_TempO = 0
8  for i ∈ [2, ..., N] do
9      # matching scores S_i ∈ R^{(N-i-1)×P×P}
10     S_{i,i-1} = AdditiveAttention(T_i, H_{i-1})
11     p_{i,i-1} = ReductionFunction(S_{i,i-1})
12     for j ∈ [1, ..., N-1] and j ≠ i-1 do
13         S_{i,j} = AdditiveAttention(T_i, H_j)
14         n_{i,j} = ReductionFunction(S_{i,j})
15         L_TempO += max{n_{i,j} − p_{i,i-1} + m, 0}
16     end
17 end
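The accumulation in lines 8-15 of Algorithm 1 amounts to a margin ranking (hinge) loss between each frame's matching score with its true predecessor history and its scores with all other histories. A minimal sketch with illustrative names, assuming the P×P matching matrices have already been reduced to scalar scores:

```python
def tempo_loss(pos_scores, neg_scores, margin=1.0):
    """Margin ranking loss over frame-order matching scores.

    pos_scores[i]: scalar score of frame i matched with its true
                   predecessor history (p_{i,i-1} in Algorithm 1).
    neg_scores[i]: iterable of scores of frame i matched with all
                   other histories (n_{i,j} in Algorithm 1).
    Names and the reduction to scalars are illustrative assumptions.
    """
    loss = 0.0
    for p, negs in zip(pos_scores, neg_scores):
        for n in negs:
            # hinge: penalize negatives that score within `margin` of the positive
            loss += max(n - p + margin, 0.0)
    return loss
```

When the positive match outscores every negative by at least the margin, the loss is zero; otherwise each violating pair contributes linearly.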
Figure 1. Training loss and validation set mAP for varying initialization strategies for a Sparse R-CNN object detector over 6 fine-tuning epochs on the BDD100K dataset. We train with a batch size of 16, and decrease the learning rate by a factor of 10 after 8 epochs (35k iterations).
Table 2. Multi-object tracking performance evaluation on the BDD100k val set.

Table 3. Ablation experiments on down-stream tasks for various TempO pre-training settings. All experiments use a Sparse R-CNN object detector with a ResNet-50 backbone and 100 proposals per frame. Columns AP-APl report BDD100k val object detection; columns HOTA-IDSw. report BDD100k val MOT.

Nseq | L{enc,dec} | f_sim   | Augment. | AP   | AP50 | AP75 | APs  | APm  | APl  | HOTA↑ | sMOTA↑ | IDF1↑ | IDSw.↓
-----|------------|---------|----------|------|------|------|------|------|------|-------|--------|-------|-------
4    | 2          | AvgPool | -        | 31.0 | 56.5 | 29.0 | 15.2 | 34.8 | 51.5 | 33.6  | 25.5   | 37.5  | 102261
6    | 2          | AvgPool | -        | 31.2 | 56.8 | 29.3 | 15.3 | 35.0 | 51.4 | 34.9  | 25.0   | 36.4  | 101723
8    | 2          | AvgPool | -        | 31.4 | 57.3 | 29.5 | 15.7 | 35.2 | 51.6 | 36.6  | 26.3   | 40.5  | 90388
8    | 1          | AvgPool | -        | 29.1 | 54.1 | 26.9 | 14.2 | 33.1 | 48.0 | 35.1  | 24.4   | 36.1  | 92529
8    | 4          | AvgPool | -        | 30.9 | 56.5 | 28.7 | 15.2 | 34.8 | 50.7 | 37.2  | 27.2   | 42.9  | 89083
8    | 2          | AvgPool | P        | 31.3 | 56.9 | 29.3 | 15.4 | 35.1 | 51.2 | 35.5  | 15.8   | 41.5  | 92053
8    | 2          | MSA     | -        | 30.7 | 55.9 | 29.0 | 15.1 | 34.4 | 25.1 | 35.3  | 25.9   | 40.4  | 91600
Table 4. Multi-object tracking performance evaluation on the MOT17 test set.

Method       | Detector     | Initialization        | MOTA↑ | IDF1↑ | HOTA↑ | DetA | AssA | LocA | FP↓   | FN↓    | IDs↓
-------------|--------------|-----------------------|-------|-------|-------|------|------|------|-------|--------|-----
QDTrack [36] | Faster R-CNN | COCO                  | 68.7  | 66.3  | 53.9  | -    | -    | -    | 26589 | 146643 | 3378
QDTrack      | Sparse R-CNN | COCO                  | 69.5  | 63.4  | 52.9  | 56.2 | 49.1 | 82.2 | 21963 | 147291 | 3305
QDTrack      | Sparse R-CNN | RPD [8] + Crowdhuman  | 70.8  | 65.9  | 52.1  | 56.7 | 48.4 | 80.7 | 42855 | 117402 | 4563
QDTrack      | DDETR        | TempO + Crowdhuman    | 72.1  | 63.9  | 53.2  | 57.9 | 49.1 | 82.4 | 18513 | 135687 | 3180
QDTrack      | Sparse R-CNN | TempO + Crowdhuman    | 72.8  | 65.9  | 54.3  | 58.5 | 50.3 | 82.5 | 17646 | 133059 | 3093
Table 5. Frame retrieval experiments on the UCF101 [41] dataset.

Method     | Model        | Top-1 | Top-5 | Top-10 | Top-20
-----------|--------------|-------|-------|--------|-------
MSE        | -            | 13.1  | 20.2  | 23.4   | 28.6
JigSaw [1] | 3D CNN       | 19.7  | 28.5  | 33.5   | 40.0
OPN [27]   | 3D CNN       | 19.9  | 28.7  | 34.0   | 40.6
CCL [23]   | 3D ResNet    | 32.7  | 42.5  | 50.8   | 61.2
TempO      | Faster R-CNN | 34.9  | 46.1  | 53.6   | 58.9
TempO      | Sparse R-CNN | 35.6  | 49.5  | 58.2   | 68.3
TempO      | DDETR        | 33.1  | 18.4  | 56.4   | 65.9
Self-Supervised Representation Learning from Temporal Ordering of Automated Driving Sequences - Supplementary Material -
Christopher Lang 1,2, Alexander Braun 2, Lars Schillingmann 2, Karsten Haug 2, Abhinav Valada 1
1 University of Freiburg, 2 Robert Bosch GmbH
In this supplementary material, we provide additional insights and experimental results on knowledge embedding-based object detection.
Model      | Pretrain | Epoch | AP   | AP50 | AP75 | APs  | APm  | APl
-----------|----------|-------|------|------|------|------|------|-----
SparseRCNN | COCO     | 2     | 25.3 | 47.6 | 23.1 | 12.5 | 28.4 | 40.3
SparseRCNN | COCO     | 4     | 27.7 | 51.0 | 25.8 | 13.7 | 31.2 | 44.5
SparseRCNN | COCO     | 6     | 27.5 | 49.9 | 25.8 | 12.6 | 30.5 | 47.8
SparseRCNN | COCO     | 12    | 30.7 | 55.8 | 28.9 | 15.2 | 34.3 | 50.8
SparseRCNN | TempO    | 2     | 23.4 | 46.0 | 20.8 | 11.0 | 27.0 | 36.2
SparseRCNN | TempO    | 4     | 28.6 | 52.5 | 26.9 | 13.6 | 32.4 | 48.3
SparseRCNN | TempO    | 6     | 30.7 | 55.5 | 28.6 | 15.4 | 34.6 | 49.6
SparseRCNN | TempO    | 12    | 31.4 | 57.2 | 29.3 | 15.3 | 35.2 | 52.4
DDETR      | COCO     | 2     | 9.3  | 20.9 | 6.9  | 4.5  | 11.6 | 17.3
DDETR      | COCO     | 4     | 16.8 | 34.7 | 14.0 | 7.6  | 19.8 | 30.6
DDETR      | COCO     | 6     | 28.3 | 52.2 | 26.1 | 11.2 | 32.1 | 53.6
DDETR      | COCO     | 12    | 30.2 | 56.0 | 27.6 | 14.2 | 34.0 | 51.3
DDETR      | TempO    | 2     | 13.9 | 29.9 | 11.3 | 6.1  | 16.6 | 26.6
DDETR      | TempO    | 4     | 30.6 | 55.9 | 28.3 | 12.7 | 34.3 | 55.0
DDETR      | TempO    | 6     | 31.3 | 57.9 | 28.9 | 15.2 | 34.9 | 55.3
DDETR      | TempO    | 12    | 32.5 | 59.2 | 30.4 | 15.7 | 36.9 | 55.3
Table 2. Object detection results using the Sparse R-CNN model on the BDD100k val dataset. All methods use a ResNet-50 backbone.

Pre-train Method                | AP   | AP50 | AP75 | APs  | APm  | APl
--------------------------------|------|------|------|------|------|-----
COCO (supervised)               | 27.5 | 49.9 | 25.8 | 12.6 | 30.5 | 47.8
MoCo v2 [7]                     | 24.8 | 46.7 | 22.4 | 12.7 | 28.3 | 39.1
DINO [5]                        | 26.7 | 50.0 | 24.3 | 13.0 | 29.9 | 46.1
Frame order verification [33]   | 27.4 | 51.1 | 25.0 | 13.8 | 31.3 | 43.5
Frame order classification [53] | 27.0 | 49.7 | 24.8 | 12.3 | 30.5 | 46.2
TempO (Ours)                    | 30.7 | 55.5 | 28.6 | 15.4 | 34.6 | 49.6
References
[1] Unaiza Ahsan, Rishi Madhok, and Irfan Essa. Video jigsaw: Unsupervised learning of spatiotemporal context for video action recognition. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 179-189. IEEE, 2019.
[2] Borna Bešić, Nikhil Gosala, Daniele Cattaneo, and Abhinav Valada. Unsupervised domain adaptation for lidar panoptic segmentation. IEEE Robotics and Automation Letters, 7(2):3404-3411, 2022.
[3] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In European Conference on Computer Vision, pages 213-229. Springer, 2020.
[4] Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. Advances in Neural Information Processing Systems, 33:9912-9924, 2020.
[5] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the International Conference on Computer Vision (ICCV), 2021.
[6] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597-1607. PMLR, 2020.
[7] Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020.
[8] Zhigang Dai, Bolun Cai, Yugeng Lin, and Junying Chen. UP-DETR: Unsupervised pre-training for object detection with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1601-1610, June 2021.
[9] Jian Ding, Enze Xie, Hang Xu, Chenhan Jiang, Zhenguo Li, Ping Luo, and Gui-Song Xia. Deeply unsupervised patch re-identification for pre-training object detectors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
[10] Jian Ding, Enze Xie, Hang Xu, Chenhan Jiang, Zhenguo Li, Ping Luo, and Gui-Song Xia. Deeply unsupervised patch re-identification for pre-training object detectors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
[11] Christoph Feichtenhofer, Haoqi Fan, Bo Xiong, Ross Girshick, and Kaiming He. A large-scale study on unsupervised spatiotemporal representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3299-3309, 2021.
[12] Basura Fernando, Hakan Bilen, Efstratios Gavves, and Stephen Gould. Self-supervised video representation learning with odd-one-out networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3636-3645, 2017.
[13] Whye Kit Fong, Rohit Mohan, Juana Valeria Hurtado, Lubing Zhou, Holger Caesar, Oscar Beijbom, and Abhinav Valada. Panoptic nuScenes: A large-scale benchmark for lidar panoptic segmentation and tracking. IEEE Robotics and Automation Letters, 7(2):3795-3802, 2022.
[14] Clément Godard, Oisin Mac Aodha, Michael Firman, and Gabriel J. Brostow. Digging into self-supervised monocular depth estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3828-3838, 2019.
[15] Nikhil Gosala and Abhinav Valada. Bird's-eye-view panoptic segmentation using monocular frontal view images. IEEE Robotics and Automation Letters, 7(2):1968-1975, 2022.
[16] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. Advances in Neural Information Processing Systems, 33:21271-21284, 2020.
[17] Tengda Han, Weidi Xie, and Andrew Zisserman. Self-supervised co-training for video representation learning. Advances in Neural Information Processing Systems, 33:5679-5690, 2020.
[18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[19] Olivier J. Hénaff, Skanda Koppula, Jean-Baptiste Alayrac, Aäron van den Oord, Oriol Vinyals, and João Carreira. Efficient visual pretraining with contrastive detection. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 10066-10076, 2021.
[20] Olivier J. Hénaff, Skanda Koppula, Evan Shelhamer, Daniel Zoran, Andrew Jaegle, Andrew Zisserman, João Carreira, and Relja Arandjelović. Object discovery and representation networks. arXiv preprint arXiv:2203.08777, 2022.
[21] Allan Jabri, Andrew Owens, and Alexei Efros. Space-time correspondence as a contrastive random walk. Advances in Neural Information Processing Systems, 33:19545-19560, 2020.
[22] Jonathon Luiten and Arne Hoffhues. TrackEval. https://github.com/JonathonLuiten/TrackEval, 2020.
[23] Quan Kong, Wenpeng Wei, Ziwei Deng, Tomoaki Yoshinaga, and Tomokazu Murakami. Cycle-contrast for self-supervised video representation learning. Advances in Neural Information Processing Systems, 33:8089-8100, 2020.
[24] Christopher Lang, Alexander Braun, Lars Schillingmann, and Abhinav Valada. On hyperbolic embeddings in object detection. In Pattern Recognition: 44th DAGM German Conference, DAGM GCPR 2022, Konstanz, Germany, September 27-30, 2022, Proceedings, pages 462-476. Springer, 2022.
[25] Christopher Lang, Alexander Braun, and Abhinav Valada. Robust object detection using knowledge graph embeddings. In Pattern Recognition: 44th DAGM German Conference, DAGM GCPR 2022, Konstanz, Germany, September 27-30, 2022, Proceedings, pages 445-461. Springer, 2022.
[26] Hsin-Ying Lee, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. Unsupervised representation learning by sorting sequences. In Proceedings of the IEEE International Conference on Computer Vision, pages 667-676, 2017.
[27] Hsin-Ying Lee, Jia-Bin Huang, Maneesh Kumar Singh, and Ming-Hsuan Yang. Unsupervised representation learning by sorting sequences. In IEEE International Conference on Computer Vision, 2017.
[28] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740-755. Springer, 2014.
[29] Songtao Liu, Zeming Li, and Jian Sun. Self-EMD: Self-supervised object detection without ImageNet. arXiv preprint arXiv:2011.13677, 2020.
[30] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
[31] Jonathon Luiten, Aljosa Osep, Patrick Dendorfer, Philip Torr, Andreas Geiger, Laura Leal-Taixé, and Bastian Leibe. HOTA: A higher order metric for evaluating multi-object tracking. International Journal of Computer Vision, pages 1-31, 2020.
[32] A. Milan, L. Leal-Taixé, I. Reid, S. Roth, and K. Schindler. MOT16: A benchmark for multi-object tracking. arXiv:1603.00831 [cs], March 2016.
[33] Ishan Misra, C. Lawrence Zitnick, and Martial Hebert. Shuffle and learn: Unsupervised learning using temporal order verification. In European Conference on Computer Vision, pages 527-544. Springer, 2016.
[34] Rohit Mohan and Abhinav Valada. Amodal panoptic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21023-21032, 2022.
[35] Pedro O. O. Pinheiro, Amjad Almahairi, Ryan Benmalek, Florian Golemo, and Aaron C. Courville. Unsupervised learning of dense visual representations. Advances in Neural Information Processing Systems, 33:4489-4500, 2020.
[36] Jiangmiao Pang, Linlu Qiu, Xia Li, Haofeng Chen, Qi Li, Trevor Darrell, and Fisher Yu. Quasi-dense similarity learning for multiple object tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 164-173, 2021.
[37] AJ Piergiovanni, Anelia Angelova, and Michael S. Ryoo. Evolving losses for unsupervised video representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 133-142, 2020.
[38] Pedro H. O. Pinheiro, Amjad Almahairi, Ryan Y. Benmalek, Florian Golemo, and Aaron C. Courville. Unsupervised learning of dense visual representations. arXiv, abs/2011.05499, 2020.
[39] Rui Qian, Tianjian Meng, Boqing Gong, Ming-Hsuan Yang, Huisheng Wang, Serge Belongie, and Yin Cui. Spatiotemporal contrastive video representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6964-6974, 2021.
[40] Shuai Shao, Zijian Zhao, Boxun Li, Tete Xiao, Gang Yu, Xiangyu Zhang, and Jian Sun. CrowdHuman: A benchmark for detecting human in a crowd. arXiv preprint arXiv:1805.00123, 2018.
[41] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
[42] Peize Sun, Rufeng Zhang, Yi Jiang, Tao Kong, Chenfeng Xu, Wei Zhan, Masayoshi Tomizuka, Lei Li, Zehuan Yuan, Changhu Wang, and Ping Luo. Sparse R-CNN: End-to-end object detection with learnable proposals. arXiv preprint arXiv:2011.12450, 2020.
[43] Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. arXiv preprint arXiv:1906.05849, 2019.
[44] Martine Toering, Ioannis Gatopoulos, Maarten Stol, and Vincent Tao Hu. Self-supervised video representation learning with cross-stream prototypical contrasting. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022.
[45] Abhinav Valada, Ankit Dhall, and Wolfram Burgard. Convoluted mixture of deep experts for robust semantic segmentation. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Workshop on State Estimation and Terrain Perception for All Terrain Mobile Robots, 2016.
[46] Francisco Rivera Valverde, Juana Valeria Hurtado, and Abhinav Valada. There is more than meets the eye: Self-supervised multi-object detection and tracking with sound by distilling multimodal knowledge. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11612-11621, 2021.
[47] Xiaolong Wang, Allan Jabri, and Alexei A. Efros. Learning correspondence from the cycle-consistency of time. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2566-2576, 2019.
[48] Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong, and Lei Li. Dense contrastive learning for self-supervised visual pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3024-3033, 2021.
[49] Donglai Wei, Joseph J. Lim, Andrew Zisserman, and William T. Freeman. Learning and using the arrow of time. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8052-8060, 2018.
[50] Olivia Wiles, Joao Carreira, Iain Barr, Andrew Zisserman, and Mateusz Malinowski. Compressed vision for efficient video understanding. arXiv preprint arXiv:2210.02995, 2022.
[51] Enze Xie, Jian Ding, Wenhai Wang, Xiaohang Zhan, Hang Xu, Zhenguo Li, and Ping Luo. DetCo: Unsupervised contrastive learning for object detection. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 8372-8381, 2021.
[52] Zhenda Xie, Yutong Lin, Zheng Zhang, Yue Cao, Stephen Lin, and Han Hu. Propagate yourself: Exploring pixel-level consistency for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16684-16693, 2021.
[53] Dejing Xu, Jun Xiao, Zhou Zhao, Jian Shao, Di Xie, and Yueting Zhuang. Self-supervised spatiotemporal learning via video clip order prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10334-10343, 2019.
[54] Ceyuan Yang, Zhirong Wu, Bolei Zhou, and Stephen Lin. Instance localization for self-supervised detection pretraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3987-3996, 2021.
[55] Junwei Yang, Ke Zhang, Zhaolin Cui, Jinming Su, Junfeng Luo, and Xiaolin Wei. InsCon: Instance consistency feature representation via self-supervised learning. arXiv preprint arXiv:2203.07688, 2022.
[56] Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, and Trevor Darrell. BDD100K: A diverse driving dataset for heterogeneous multitask learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2636-2645, 2020.
Self-supervised visual representations learning by contrastive mask prediction. Yucheng Zhao, Guangting Wang, Chong Luo, Wenjun Zeng, Zhengjun Zha, IEEE/CVF International Conference on Computer Vision (ICCV). Yucheng Zhao, Guangting Wang, Chong Luo, Wenjun Zeng, and Zhengjun Zha. Self-supervised visual representations learning by contrastive mask prediction. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 10140-10149, 2021. 2
Deformable detr: Deformable transformers for end-to-end object detection. Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai, arXiv:2010.04159arXiv preprintXizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable detr: Deformable trans- formers for end-to-end object detection. arXiv preprint arXiv:2010.04159, 2020. 3
| zyda_arxiv-0127000 |
On the existence of (H, A)-stable sheaves on K3 or abelian surfaces
20 Feb 2013
Markus Zowislok
We give an existence result on (H, A)-stable sheaves on a K3 or abelian surface X with primitive triple of invariants (rank, first Chern class, Euler characteristic) in the integral cohomology lattice. Such a result yields the existence of singular projective Q-factorial symplectic terminalisations of certain moduli spaces of sheaves on X that are Gieseker semistable with respect to a nongeneral ample divisor.
Introduction
After the paper [KLS06] appeared, the hope of constructing new examples of irreducible (holomorphically) symplectic manifolds out of moduli spaces of sheaves on K3 or abelian surfaces almost died: the authors showed that in general, i.e. for general ample divisors, there is no symplectic resolution of these moduli spaces except for the nonsingular and O'Grady-like cases. In [Zow12] I investigated the case of a nongeneral ample divisor. In particular, I could exclude the existence of new examples of projective irreducible symplectic manifolds lying birationally over components of the moduli spaces of one-dimensional semistable sheaves on K3 surfaces, and over components of many of the moduli spaces of two-dimensional sheaves on K3 surfaces, in particular, of those for rank two sheaves.
In order to answer the question of symplectic resolvability, as explained in [Zow12], constructing a projective Q-factorial symplectic terminalisation M̃ → M of a component M of the moduli space, i.e. a symplectic Q-factorial projective variety M̃ with at most terminal singularities together with a projective birational morphism f : M̃ → M, yields the following facts:
(1) If M̃ can be chosen to be an irreducible symplectic manifold then M̃ is unique up to deformation by a result of Huybrechts [Huy99].
(2) If M̃ is singular, M admits no projective symplectic resolution by [Nam06, Corollary 1].
To be more precise we need some notation. Let X be a nonsingular projective irreducible surface over C, K_X its canonical divisor, H an ample divisor on X, and E a coherent sheaf on X. We associate the element

u(E) := (rk E, c_1(E), χ(E)) ∈ Λ(X) := N_0 ⊕ NS(X) ⊕ Z ⊂ H^{2*}(X, Z)

of sheaf invariants to E. We avoid the elegant notion of a Mukai vector in favour of keeping torsion inside NS(X). For an element u := (r, c, χ) ∈ Λ(X) we define

∆(u) := c² − 2rχ + 2r²χ(O_X) − r c.K_X

and

χ(u, u) := χ(O_X)r² − ∆(u).
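These invariants are plain integer expressions, so they can be sanity-checked numerically. The following is a small illustrative computation; the numbers are made up and not tied to any particular sheaf, and we use that for a K3 surface K_X = 0 and χ(O_X) = 2.

```python
def Delta(r, c_sq, c_dot_K, chi, chi_O):
    # ∆(u) = c² − 2rχ + 2r²χ(O_X) − r c.K_X
    return c_sq - 2*r*chi + 2*r*r*chi_O - r*c_dot_K

def chi_uu(r, c_sq, c_dot_K, chi, chi_O):
    # χ(u, u) = χ(O_X) r² − ∆(u)
    return chi_O*r*r - Delta(r, c_sq, c_dot_K, chi, chi_O)

# Hypothetical K3 example: K_X = 0, χ(O_X) = 2, and u = (2, c, 3) with c² = 4.
print(Delta(2, 4, 0, 3, 2), chi_uu(2, 4, 0, 3, 2))  # → 8 0
```

Here ∆(u) = 8 and χ(u, u) = 0 ≥ −2, so such a u satisfies the numerical hypothesis of Corollary 3.2.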
If E satisfies u(E) = u, then, by Riemann-Roch, its discriminant¹ is ∆(u), and

χ(E, E) := Σ_{k=0}^{2} (−1)^k ext^k(E, E) = χ(u, u),
where ext^k(E, E) := dim Ext^k(E, E). We will also write hom(E, F) := dim Hom(E, F) for two coherent sheaves E, F. We denote the moduli space of sheaves E on X with u(E) = u that are semistable with respect to an ample divisor H on X by M_H(u).
(2) Let m ≥ 2 and χ(mu, mu) ≠ 8. If H is mu-general or r = 1 or χ(u, u) > ϕ(r) with ϕ as in [Zow12, Theorem 6.5], then there is a singular Q-factorial projective symplectic terminalisation of M^s_H(mu), and in particular, there is no projective symplectic resolution of M^s_H(mu). The proof of (2) is based on the existence of a singular Q-factorial projective symplectic terminalisation M̃ → M_{H,A}(mu) established by item 2.b.ii of [Zow12, Theorem 5.3] using the existence of an (H, A)-stable sheaf E with u(E) = u. This existence is ensured by the assumption of (2); see the proof of the above theorem in [Zow12]. Of course, instead one can also just assume this existence. Our main result of this article is another existence result, which in turn implies the existence of a singular Q-factorial projective symplectic terminalisation as in the above theorem:
Theorem 3.1. Let X be a projective surface with torsion canonical bundle, u ∈ Λ(X) primitive, and H and A two ample divisors on X such that H is contained in at most one wall and A is u-general. Then the nonemptiness of M_{H,A}(u) is independent of the choice of the pair (H, A).
In particular, one has:
Corollary 3.2. Let X be a projective K3 or abelian surface, u ∈ Λ(X) primitive with χ(u, u) ≥ −2, and H and A two ample divisors on X such that H is contained in at most one wall and A is u-general. Then M H,A (u) is nonempty.
As M H,A (u) = M s H,A (u), in the situation of the corollary there is an (H, A)-stable sheaf E with u(E) = u.
Twisted and (H, A)-stability
In this section we recall three notions of stability of sheaves and establish a relation between twisted stability and (H, A)-stability for positive rank. In my PhD thesis [Zow10], this relation was discussed in Chapter 6 for K3 surfaces. We assume familiarity with the material presented in [HL10] and use the notation therein.
Let X still be a nonsingular projective irreducible surface over C. In this case, twisted stability and (H, A)-stability, which are two generalisations of Gieseker stability, have an overlap. We briefly recall the definitions. To this end, let H be an ample divisor on X and E a nontrivial coherent sheaf on X.
(1) Gieseker stability, see e.g. [HL10, Section 1.2]. The Hilbert polynomial of E is P_H(E)(n) := χ(E ⊗ O_X(nH)). Its leading coefficient multiplied by (dim E)! is called the multiplicity of E and denoted here by α_H(E). It is always positive, and

p_H(E)(n) := χ(E ⊗ O_X(nH)) / α_H(E)

is called the reduced Hilbert polynomial of E. E is said to be H-(semi)stable if E is pure and for all nontrivial proper subsheaves F ⊂ E one has that p_H(F) (≤) p_H(E), i.e. one has p_H(F)(n) (≤) p_H(E)(n) for n ≫ 0.
In order to avoid case differentiation for stable and semistable sheaves we here follow the Notation 1.2.5 in [HL10] using bracketed inequality signs, e.g. an inequality with (≤) for (semi)stable sheaves means that one has ≤ for semistable sheaves and < for stable sheaves.
If rk E > 0, then E is H-(semi)stable if E is pure and for all nontrivial proper subsheaves F ⊂ E one has that μ_H(F) ≤ μ_H(E) and, in the case of equality,

χ(F)/rk F (≤) χ(E)/rk E.

Here μ_H(E) := c_1(E).H / rk E is the slope of E (with respect to H).
(2) Twisted stability. Let D ∈ NS(X)_Q := NS(X) ⊗ Q. We call χ_D(E) := ∫_X ch(E).exp(D).td(X) the D-twisted Euler characteristic of E, and we say that E is D-twisted H-(semi)stable if E is pure and for all nontrivial saturated proper subsheaves F ⊂ E one has that

χ_{D+nH}(F)/α_H(F) (≤) χ_{D+nH}(E)/α_H(E)

as polynomials in n.
If rk E > 0, then E is D-twisted H-(semi)stable if E is pure and for all nontrivial proper subsheaves F ⊂ E one has that μ_H(F) ≤ μ_H(E) and, in the case of equality,

μ_D(F) + χ(F)/rk F (≤) μ_D(E) + χ(E)/rk E.

It is enough to restrict to saturated proper nontrivial subsheaves F ⊂ E in the definition.
(3) (H, A)-stability, introduced in [Zow12].
The case of Gieseker stability can be regained by D = 0 from twisted stability and by H = A from (H, A)-stability.
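The comparisons in these definitions boil down to inequalities of rational numbers, so they are easy to test mechanically. A toy check with made-up invariants (the numbers do not come from an actual sheaf):

```python
from fractions import Fraction as Fr

def twisted_sides(rkF, muD_F, chiF, rkE, muD_E, chiE):
    # μ_D(F) + χ(F)/rk F  versus  μ_D(E) + χ(E)/rk E
    lhs = Fr(muD_F) + Fr(chiF, rkF)
    rhs = Fr(muD_E) + Fr(chiE, rkE)
    return lhs, rhs

# Made-up numbers: E of rank 2 with χ(E) = 3 and μ_D(E) = 1/2,
# a rank-1 subsheaf F with χ(F) = 1 and μ_D(F) = 0.
lhs, rhs = twisted_sides(1, 0, 1, 2, Fr(1, 2), 3)
print(lhs < rhs)  # 1 < 2, so the strict (stable) inequality holds for this F
```

Using exact rational arithmetic (Fraction) avoids the rounding issues a floating-point comparison of slopes could introduce.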
We briefly recall the notion of a general ample divisor for positive rank. The ample cone of X carries a chamber structure for a given triple u = (r, c, χ) ∈ Λ(X) of invariants. The definition depends on r. In the case of r = 1 we agree that the whole ample cone is the only chamber. For r > 1, we follow the definition in [HL10, Section 4.C]. Let Num(X) := Pic(X)/≡, where ≡ denotes numerical equivalence, and ∆ := ∆(u) > 0. From now on, let r > 0, let H be an ample divisor lying on exactly one u-wall W, and let A be a u-general ample divisor lying in a chamber touching H.
Definition 2.1. Let

W(r, ∆) := { ξ^⊥ ∩ Amp(X)_Q | ξ ∈ Num(X) with −(r²/4)∆ ≤ ξ² < 0 },
Definition 2.2. For a nontrivial saturated subsheaf F ⊂ E of a μ_H-semistable sheaf E with u(E) = u, μ_H(F) = μ_H(E), and c_1(F)/rk F ≢ c_1(E)/rk E, we call the hyperplane

{ z ∈ NS(X)_Q | χ_z(F)/rk F = χ_z(E)/rk E }

a u-miniwall. The connected components of the complement of all u-miniwalls are called u-minichambers.²
In the following we omit the u-prefix as it is fixed for the whole section.
χ_{D+nH}(F)/rk F = χ_{D+nH}(E)/rk E

(as polynomials in n) one has that

c_1(F)/rk F ≡ c_1(E)/rk E and χ(F)/rk F = χ(E)/rk E.

Proof. Let F ⊂ E be such a nontrivial saturated subsheaf. Equating the coefficients of the above polynomials yields μ_H(F) = μ_H(E) and

χ_D(F)/rk F = χ_D(E)/rk E.

As D is not contained in a miniwall, one has that c_1(F)/rk F ≡ c_1(E)/rk E and thus also χ(F)/rk F = χ(E)/rk E.
Lemma 2.5. Let L be in a minichamber C, L′ in its closure C̄, and E a coherent sheaf on X with u(E) = u.
(1) If E is L-twisted H-semistable then it is also L ′ -twisted H-semistable.
(2) If E is L ′ -twisted H-stable then it is also L-twisted H-stable.
Proof. Let F ⊂ E be a nontrivial saturated proper subsheaf. As for μ_H(F) < μ_H(E) one has

χ_{nH+D}(F)/rk F < χ_{nH+D}(E)/rk E

(as polynomials in n) for any D ∈ NS(X)_Q, we can restrict to μ_H(F) = μ_H(E). We define the map

f : C̄ → Q, D ↦ (c_1(F)/rk F − c_1(E)/rk E).D + χ(F)/rk F − χ(E)/rk E.

If c_1(F)/rk F ≡ c_1(E)/rk E then f is independent of D. So let c_1(F)/rk F ≢ c_1(E)/rk E. Then f ≠ 0 on the whole minichamber C by the definition of a minichamber. We distinguish the two cases from above.
(1) Let E be L-twisted H-semistable. Then f(L) ≤ 0; as f ≠ 0 on C, this gives f < 0 on all of C, hence f ≤ 0 on C̄ and in particular at L′.
(2) Let E be L′-twisted H-stable. Then f(L′) < 0, hence f < 0 on an open subset containing L′, which in turn yields f < 0 on C.
Proposition 2.6. Let L be in a minichamber C, L ′ in its boundary ∂C, and E a coherent sheaf on X with u(E) = u.
χ_{nH+L′}(F)/rk F = χ_{nH+L′}(E)/rk E (as polynomials in n) one has that μ_A(F) (≥) μ_A(E).
Proof. Let F ⊂ E be a nontrivial saturated proper subsheaf. As for μ_H(F) < μ_H(E) one has

χ_{nH+D}(F)/rk F < χ_{nH+D}(E)/rk E

(as polynomials in n) for any D ∈ NS(X)_Q, we can again restrict to μ_H(F) = μ_H(E). Then

(χ_{nH+L}(F)/rk F − χ_{nH+L}(E)/rk E) − (χ_{nH+L′}(F)/rk F − χ_{nH+L′}(E)/rk E) = (c_1(F)/rk F − c_1(E)/rk E).(L − L′).   (1)

If c_1(E)/rk E ≡ c_1(F)/rk F then χ_{nH+L}(F)/rk F − χ_{nH+L}(E)/rk E = χ_{nH+L′}(F)/rk F − χ_{nH+L′}(E)/rk E and μ_A(F) = μ_A(E), so we assume

c_1(F)/rk F − c_1(E)/rk E ≢ 0,

which thus defines the wall W. In particular, the sign of

(c_1(F)/rk F − c_1(E)/rk E).(L − L′) ≠ 0

is opposite to the sign of μ_A(F) − μ_A(E) due to the choice of A.
(1) Assume that E is L-twisted H-semistable and thus also L′-twisted H-semistable by Lemma 2.5. If furthermore

χ_{nH+L′}(F)/rk F = χ_{nH+L′}(E)/rk E,

then equation (1) yields

χ_{nH+L}(F)/rk F − χ_{nH+L}(E)/rk E = (c_1(F)/rk F − c_1(E)/rk E).(L − L′),

which is negative, hence μ_A(F) > μ_A(E).
(2) Assume that E is L′-twisted H-semistable, i.e. in particular

χ_{nH+L′}(F)/rk F ≤ χ_{nH+L′}(E)/rk E.

If one has strict inequality then by the same argument as in Lemma 2.5 one has that

χ_{nH+L}(F)/rk F < χ_{nH+L}(E)/rk E.

So let us assume equality. Then μ_A(F) ≥ μ_A(E) and thus

χ_{nH+L}(F)/rk F − χ_{nH+L}(E)/rk E = (c_1(F)/rk F − c_1(E)/rk E).(L − L′) < 0.
The following statement, at least the part on semistability, is already known to Matsuki and Wentworth, as it can be found in [MW97, Theorem 4.1, part i]. Proof. Clearly a coherent sheaf E is L-twisted H-(semi)stable if and only if E⊗L is H-(semi)stable. Thus the claim follows from Proposition 2.6 and the description of (H, A)-stability at the beginning of this section.
3 Existence of (H, A)-stable sheaves

Hence it is enough to prove nonemptiness for one suitable special choice of ample divisors. In particular, one has the
Corollary 3.2. Let X be a projective K3 or abelian surface, u ∈ Λ(X) primitive with χ(u, u) ≥ −2, and H and A two ample divisors on X such that H is contained in at most one wall and A is u-general. Then M_{H,A}(u) is nonempty.
Proof. This follows from the above Theorem 3.1 as M H (u) = M H,H (u) is well-known to be nonempty for general H and χ(u, u) ≥ −2, see e.g. [KLS06].
X by M_H(u) and the open subscheme of stable sheaves by M^s_H(u). The corresponding spaces for (H, A)-(semi)stable sheaves introduced in [Zow12] are denoted by M_{H,A}(u) and M^s_{H,A}(u). The main result of [Zow12] on the case of positive rank was the extension of the result of [KLS06] to the following Theorem [Zow12] 1.1. Let X be a projective K3 or abelian surface, u = (r, c, χ) ∈ Λ(X) primitive with r > 0 and χ(u, u) ≥ 0, m ∈ N and H an ample divisor on X, and assume that M^s_H(mu) is nonempty. (1) Let m = 1 or χ(mu, mu) = 8. Then there is a projective symplectic resolution M̃ → M^s_H(mu). If H is not mu-general then M̃ can be chosen to be a symplectic resolution of M_{H,A}(mu), where A is an mu-general ample divisor.
whose elements are called u-walls. The connected components of the complement of the union of all u-walls are called u-chambers. An ample divisor is called u-general if it is not contained in a u-wall. The set W(r, ∆) is locally finite in Amp(X)_Q by [HL10, Lemma 4.C.2].
Proposition 2.3. The number of miniwalls is finite and the miniwalls are parallel to W. For D, D′ ∈ NS(X)_Q one has that the set of D-twisted H-semistable sheaves is the same as the set of D′-twisted H-semistable sheaves if and only if D and D′ belong to the same u-minichamber or u-miniwall.
Proof. [MW97, Proposition 3.5].
Lemma 2.4. Let D be contained in a minichamber and E a D-twisted H-semistable sheaf with u(E) = u. Then for every nontrivial saturated subsheaf F ⊂ E with
The vector space generated by the wall W divides NS(X)_Q into two open half spaces, one of them containing L − L′. Choose A in the neighbouring chamber of W contained in the other half space. Then E is L-twisted H-(semi)stable if and only if it is L′-twisted H-semistable and for all nontrivial saturated proper subsheaves F ⊂ E with
Corollary 2.7. Let A be an ample divisor in a chamber touching H and L ∈ Pic(X) lying on a miniwall. The vector space generated by the wall W divides NS(X)_Q into two open half spaces, one of them containing A. Choose D in one of the minichambers touching L such that D − L is in the other half space. Then a coherent sheaf E with u(E) = u is D-twisted H-(semi)stable if and only if E ⊗ L is (H, A)-(semi)stable.
Theorem 3.1. Let X be a projective surface with torsion canonical bundle, u ∈ Λ(X) primitive, and H and A two ample divisors on X such that H is contained in at most one wall and A is u-general. Then the nonemptiness of M_{H,A}(u) is independent of the choice of the pair (H, A).
Proof. As a direct consequence of [Yos03, Proposition 4.1], the nonemptiness of the moduli space M^D_H(u) of D-twisted H-stable sheaves is independent of the choice of the pair (H, D) if (H, D) is u-general, where D is any Q-line bundle. The claim now follows from Corollary 2.7.
Be aware of different conventions of the discriminant's definition.
Both notions are inspired by the work of Ellingsrud and Göttsche.
Acknowledgements. The author would like to express his gratitude to Richard Thomas for his support and valuable discussions. Moreover, he thanks the Imperial College London for its hospitality and the Deutsche Forschungsgemeinschaft (DFG) for supporting the stay there by a DFG research fellowship (Az.: ZO 324/1-1).
[HL10] Daniel Huybrechts and Manfred Lehn, The geometry of moduli spaces of sheaves, second ed., Cambridge Mathematical Library, Cambridge University Press, Cambridge, 2010.
[Huy99] Daniel Huybrechts, Compact hyper-Kähler manifolds: basic results, Invent. Math. 135 (1999), no. 1, 63-113.
[KLS06] D. Kaledin, M. Lehn, and Ch. Sorger, Singular symplectic moduli spaces, Invent. Math. 164 (2006), no. 3, 591-614.
[MW97] Kenji Matsuki and Richard Wentworth, Mumford-Thaddeus principle on the moduli space of vector bundles on an algebraic surface, Internat. J. Math. 8 (1997), no. 1, 97-148.
[Nam06] Yoshinori Namikawa, On deformations of Q-factorial symplectic varieties, J. Reine Angew. Math. 599 (2006), 97-110.
[Yos03] Kōta Yoshioka, Twisted stability and Fourier-Mukai transform. I, Compositio Math. 138 (2003), no. 3, 261-288.
[Zow10] Markus Zowislok, On moduli spaces of semistable sheaves on K3 surfaces, Dissertation, Südwestdeutscher Verlag für Hochschulschriften, 2010, http://ubm.opus.hbz-nrw.de/volltexte/2010/2287/.
[Zow12] Markus Zowislok, On moduli spaces of sheaves on K3 or abelian surfaces, Math. Z. 272 (2012), no. 3-4, 1195-1217.
THE NIELSEN AND REIDEMEISTER NUMBERS OF MAPS ON INFRA-SOLVMANIFOLDS OF TYPE (R)
12 Jan 2014
Alexander Fel'shtyn
Jong Bum Lee
We prove the rationality, the functional equations and calculate the radii of convergence of the Nielsen and the Reidemeister zeta functions of continuous maps on infra-solvmanifolds of type (R). We find a connection between the Reidemeister and Nielsen zeta functions and the Reidemeister torsions of the corresponding mapping tori. We show that if the Reidemeister zeta function is defined for a homeomorphism on an infra-solvmanifold of type (R), then this manifold is an infra-nilmanifold. We also prove that a map on an infra-solvmanifold of type (R) induced by an affine map minimizes the topological entropy in its homotopy class and has a rational Artin-Mazur zeta function. Finally we prove the Gauss congruences for the Reidemeister and Nielsen numbers of any map on an infra-solvmanifold of type (R) whenever all the Reidemeister numbers of iterates of the map are finite. Our main technical tools are the averaging formulas for the Lefschetz, the Nielsen and the Reidemeister numbers on infra-solvmanifolds of type (R).
8. The Reidemeister zeta function is never defined for any homeomorphism of an infra-solvmanifold of type (R) that is not an infra-nilmanifold
9. The Artin-Mazur zeta functions on infra-solvmanifolds of type (R)
10. The Nielsen numbers of virtually unipotent maps on infra-solvmanifolds of type (R)
11. Gauss congruences for the Nielsen and Reidemeister numbers
References

0. Introduction
We assume everywhere X to be a connected, compact polyhedron and f : X → X to be a continuous map. Let p : X̃ → X be the universal cover of X and f̃ : X̃ → X̃ a lifting of f, i.e., p ∘ f̃ = f ∘ p. Two lifts f̃ and f̃′ are called conjugate if there is a γ ∈ Γ ≅ π_1(X) such that f̃′ = γ ∘ f̃ ∘ γ^{-1}. The subset p(Fix(f̃)) ⊂ Fix(f) is called the fixed point class of f determined by the lifting class [f̃]. A fixed point class is called essential if its index is nonzero. The number of lifting classes of f (and hence the number of fixed point classes, empty or not) is called the Reidemeister number of f, denoted by R(f). This is a positive integer or infinity. The number of essential fixed point classes is called the Nielsen number of f, denoted by N(f) [36].
The Nielsen number is always finite. R(f ) and N (f ) are homotopy invariants. In the category of compact, connected polyhedra the Nielsen number of a map is, apart from in certain exceptional cases, equal to the least number of fixed points of maps with the same homotopy type as f .
Let G be a group and φ : G → G an endomorphism. Two elements α, α ′ ∈ G are said to be φ-conjugate if and only if there exists γ ∈ G such that α ′ = γαφ(γ) −1 . The number of φ-conjugacy classes is called the Reidemeister number of φ, denoted by R(φ). This is a positive integer or infinity.
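For a finite group, R(φ) can be computed by brute force directly from this definition, since the φ-conjugacy classes are just the orbits of the twisted conjugation action. A toy sketch (the group and the endomorphism are chosen only for illustration):

```python
def reidemeister_number(G, op, inv, phi):
    # φ-conjugacy: α ~ γ·α·φ(γ)^{-1}; collect the orbits as frozensets and count them.
    orbits = {frozenset(op(op(g, a), inv(phi(g))) for g in G) for a in G}
    return len(orbits)

# Hypothetical toy case: G = Z/6 written additively, with the endomorphism x ↦ 4x.
Z6 = range(6)
add = lambda x, y: (x + y) % 6
neg = lambda x: (-x) % 6
print(reidemeister_number(Z6, add, neg, lambda x: (4 * x) % 6))  # → 3
print(reidemeister_number(Z6, add, neg, lambda x: x))            # identity: → 6
```

For x ↦ 4x the orbit of a is {a, a − 3}, giving the three classes {0, 3}, {1, 4}, {2, 5}; for the identity every element is its own class.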
Taking a dynamical point of view, we consider the iterates of f and φ, and we may define, following [17, 53, 18, 19], several zeta functions connected with the Nielsen fixed point theory. The Reidemeister zeta functions of f and φ and the Nielsen zeta function of f are defined as power series:
R_φ(z) = exp( Σ_{n=1}^{∞} R(φ^n)/n · z^n ),
R_f(z) = exp( Σ_{n=1}^{∞} R(f^n)/n · z^n ),
N_f(z) = exp( Σ_{n=1}^{∞} N(f^n)/n · z^n ).
L f (z) := exp ∞ n=1 L(f n ) n z n ,
where
L(f n ) := dim X k=0 (−1) k tr f n * k : H k (X; Q) → H k (X; Q)
is the Lefschetz number of the iterate f n of f . The Lefschetz zeta function is a rational function of z and is given by the formula:
L f (z) = dim X k=0 det I − f * k .z (−1) k+1 .
The following problem was investigated: for which spaces and maps and for which groups and endomorphisms are the Nielsen and Reidemeister zeta functions rational functions? Are these functions algebraic functions?
The knowledge that a zeta function is a rational function is important because it shows that the infinite sequence of coefficients of the corresponding power series is closely interconnected, and is given by the finite set of zeros and poles of the zeta function.
In [19,21,22,45,20], the rationality of the Reidemeister zeta function R φ (z) was proven in the following cases: the group is finitely generated and an endomorphism is eventually commutative; the group is finite; the group is a direct sum of a finite group and a finitely generated free Abelian group; the group is finitely generated, nilpotent and torsion free. In [60,Theorem 4] the rationality of the Reidemeister and Nielsen zeta functions was proven for infra-nilmanifold under some (rather technical) sufficient conditions. It is also known that the Reidemeister numbers of the iterates of an automorphism of an almost polycyclic group satisfy remarkable Gauss congruences [23,24].
In this paper we investigate the Reidemeister and the Nielsen zeta functions on infra-solvmanifolds of type (R). Our main technical tools are the averaging formulas for the Lefschetz numbers, the Nielsen numbers and the Reidemeister numbers on infra-nilmanifolds and on infra-solvmanifolds of type (R).
Recently, using these averaging formulas, K. Dekimpe and G.-J. Dugardein [11,16] calculated the Nielsen numbers via Lefschetz numbers and proved the rationality of the Nielsen zeta functions on infra-nilmanifolds.
We prove in this paper the rationality, the functional equations and calculate the radii of convergence of the Nielsen and the Reidemeister zeta functions of continuous maps on infra-solvmanifolds of type (R). We find a connection between the Reidemeister and Nielsen zeta functions and the Reidemeister torsions of the corresponding mapping tori. We show that if the Reidemeister zeta function is defined for a homeomorphism on an infra-solvmanifold of type (R), then this manifold is an infra-nilmanifold. We also prove that a map on an infra-solvmanifold of type (R) induced by an affine map minimizes the topological entropy in its homotopy class and has a rational Artin-Mazur zeta function. Finally we prove the Gauss congruences for the Reidemeister and Nielsen numbers of any map on an infra-solvmanifold of type (R) whenever all the Reidemeister numbers of iterates of the map are finite.
Let us present the contents of the paper in more detail. In Section 1 we describe the averaging formulas for the Lefschetz numbers, the Nielsen numbers and the Reidemeister numbers on infra-nilmanifolds and Dekimpe-Dugardein's formula for the Nielsen numbers. In Section 2, we obtain a partial generalization of K. Dekimpe and G.-J. Dugardein's formula from fixed points on infra-nilmanifolds to coincidences on infra-solvmanifolds of type (R) when the holonomy group is a cyclic group. The rationality and the functional equations for the Reidemeister and the Nielsen zeta functions on infra-solvmanifolds of type (R) are proven in Sections 3 and 7. After studying the asymptotic Nielsen numbers on infra-solvmanifolds of type (R) in Section 4, we discuss the relationship between the topological entropies, the asymptotic Nielsen numbers and the radius of convergence of the Nielsen and the Reidemeister zeta functions in Section 5. We also prove in Section 5 that a map on an infra-solvmanifold of type (R) induced by an affine map minimizes the topological entropy in its homotopy class. In Section 6, we find a connection between the Nielsen and the Reidemeister zeta functions and the Reidemeister torsions of the corresponding mapping tori. In Section 7, we obtain the averaging formula for the Reidemeister numbers on infra-solvmanifolds of type (R) and we are able to show that the Reidemeister zeta functions on infra-solvmanifolds of type (R) coincide with the Nielsen zeta functions. In Section 8, we show that if the Reidemeister zeta function is defined for a homeomorphism on an infra-solvmanifold of type (R), then this manifold is an infra-nilmanifold. In Section 9 we prove that the Artin-Mazur zeta function coincides with the Nielsen zeta function and is a rational function with functional equation for a continuous map on an infra-solvmanifold of type (R) induced by an affine map.
In Section 11 we prove the Gauss congruences for the Reidemeister and Nielsen numbers of any map on infra-solvmanifolds of type (R) whenever all the Reidemeister numbers of iterates of the map are finite.

the possibility of the present research during his visit there. The authors are grateful to Karel Dekimpe and Gert-Jan Dugardein for helpful comments and valuable discussions. The authors would like to thank the referee for making careful corrections to a few expressions and suggesting some relevant references in the original version of the article. This helped improve some results.
1. Averaging formulas and Dekimpe-Dugardein's formula
We consider almost Bieberbach groups Π ⊂ G ⋊ Aut(G), where G is a connected, simply connected nilpotent Lie group, and infra-nilmanifolds M = Π\G. It is known that these are exactly the class of almost flat Riemannian manifolds [55]. It is L. Auslander's result (see, for example, [44]) that Γ := Π ∩ G is a lattice of G, and is the unique maximal normal nilpotent subgroup of Π. The group Φ = Π/Γ is the holonomy group of Π or M . Thus we have the following commutative diagram:
1 → G → G ⋊ Aut(G) → Aut(G) → 1
1 → Γ → Π −p→ Φ → 1
Thus Φ sits naturally in Aut(G). Denote ρ : Φ → Aut(G), A ↦ A_* = the differential of A.
Let M = Π\G be an infra-nilmanifold. Any continuous map f : M → M induces a homomorphism φ : Π → Π. Due to [43, Theorem 1.1], we can choose an affine element (d, D) ∈ G ⋊ Endo(G) such that
(1) φ(α) • (d, D) = (d, D) • α, ∀α ∈ Π.
This implies that the affine map (d, D) : G → G induces a continuous map on the infra-nilmanifold M = Π\G, which is homotopic to f . That is, f has an affine homotopy lift (d, D). By [41, Lemma 3.1], we can choose a fully invariant subgroup Λ ⊂ Γ of Π which is of finite index. Therefore φ(Λ) ⊂ Λ and so φ induces the following commutative diagram
1 → Λ → Π → Ψ → 1
1 → Λ → Π → Ψ → 1
with vertical maps φ′ : Λ → Λ, φ : Π → Π and the induced map on Ψ, where Ψ = Π/Λ is finite. Applying (1) for λ ∈ Λ ⊂ Π, we see that φ(λ) = dD(λ)d^{-1} = (τ_d D)(λ)
where τ_d is the conjugation by d. The homomorphism φ′ : Λ → Λ induces a unique Lie group homomorphism F = τ_d D : G → G, and hence a homomorphism F_* of the Lie algebra of G. On the other hand, since φ(Λ) ⊂ Λ, f has a lift f̄ : N → N on the nilmanifold N := Λ\G which finitely and regularly covers M and has Ψ as its group of covering transformations. Theorem 1.1 (Averaging Formula [41, Theorem 3.4], [33, Theorem 6.11]). Let f be a continuous map on an infra-nilmanifold Π\G with holonomy group Φ. Let f have an affine homotopy lift (d, D) and let φ : Π → Π be the homomorphism induced by f. Then we have
L(f) = (1/|Φ|) Σ_{A∈Φ} det(I − A_* F_*) = (1/|Φ|) Σ_{A∈Φ} det(I − A_* D_*),
N(f) = (1/|Φ|) Σ_{A∈Φ} |det(I − A_* F_*)| = (1/|Φ|) Σ_{A∈Φ} |det(I − A_* D_*)|,
R(f) = R(φ) = (1/|Φ|) Σ_{A∈Φ} σ(det(A_* − F_*)) = (1/|Φ|) Σ_{A∈Φ} σ(det(A_* − D_*)),
where σ : R → R ∪ {∞} is defined by σ(0) = ∞ and σ(x) = |x| for all x ≠ 0.
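As an illustration of the averaging formula, here is a toy computation with hypothetical holonomy data Φ = {I, A}, A = diag(1, −1), and an affine homotopy lift whose linear part is D_* = diag(2, 3) (diagonal matrices commute, so the compatibility φ̄(A)D = DA of (2) below holds trivially):

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def mul2(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def sub2(P, Q):
    return [[P[i][j] - Q[i][j] for j in range(2)] for i in range(2)]

I2 = [[1, 0], [0, 1]]
A  = [[1, 0], [0, -1]]   # generator of the hypothetical holonomy group Φ = {I, A}
D  = [[2, 0], [0, 3]]    # linear part D_* of a hypothetical affine homotopy lift

Phi = [I2, A]
L = sum(det2(sub2(I2, mul2(B, D))) for B in Phi) / len(Phi)
N = sum(abs(det2(sub2(I2, mul2(B, D)))) for B in Phi) / len(Phi)
print(L, N)  # det(I-D) = 2, det(I-AD) = -4, so L(f) = -1.0 and N(f) = 3.0
```

The two summands 2 and −4 average to L(f) = −1, while their absolute values average to N(f) = 3; the example shows how the holonomy summation can make N(f) ≠ |L(f)|.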
Recently, Dekimpe and Dugardein in [11] showed the following: Let f : M → M be a continuous map on an infra-nilmanifold M . Then the Nielsen number N (f ) is either equal to |L(f )| or equal to the expression |L(f ) − L(f + )|, where f + is a lift of f to a 2-fold covering of M . By exploiting the exact nature of this relationship for all powers of f , they proved that the Nielsen zeta function N f (z) is always a rational function.
Let M = Π\G be an infra-nilmanifold with the holonomy group Φ and let f : M → M be a continuous map with an affine homotopy lift (d, D). Let A ∈ Φ. Then we can choose g ∈ G so that α = (g, A) ∈ Π. Write φ(α) = (g′, A′). By (1), we have (g′, A′)(d, D) = (d, D)(g, A) ⇒ A′D = DA. Thus φ induces a function φ̄ : Φ → Φ given by φ̄(A) = A′ so that it satisfies that
(2) φ̄(A)D = DA, φ̄(A)_* D_* = D_* A_* for all A ∈ Φ.
In what follows, we shall give a brief description of the main results in [11]. We can choose a linear basis of the Lie algebra of G so that ρ(Φ) = Φ_* ⊂ Aut(G) can be expressed by diagonal block matrices
[ Φ_1 0 ; 0 Φ_2 ] ⊂ GL(n_1, R) × GL(n_2, R) ⊂ GL(n, R),
and D_* can be written in the block triangular form
[ D_1 0 ; * D_2 ],
where D_1 and D_2 have eigenvalues of modulus ≤ 1 and > 1, respectively. We can assume Φ = Φ_1 × Φ_2. Every element α ∈ Π is of the form (a, A) ∈ G ⋊ Aut(G) and α is mapped to A = (A_1, A_2). We define
Π + = {α ∈ Π | det A 2 = 1}.
Then Π_+ is a subgroup of Π of index at most 2. If [Π : Π_+] = 2, then Π_+ is also an almost Bieberbach group and the corresponding infra-nilmanifold M_+ = Π_+\G is a double covering of M, to which f lifts as a map f_+.

Theorem 1.2 ([11]).
N(f^k) = (−1)^{p+(k+1)n} L(f^k), when Π = Π_+;
N(f^k) = (−1)^{p+(k+1)n} ( L(f_+^k) − L(f^k) ), when Π ≠ Π_+,
where p is the number of real eigenvalues of D_* which are > 1 and n is the number of real eigenvalues of D_* which are < −1.
Remark 1.3. 1) In [11, Theorem 4.4] the Nielsen numbers N(f^n) are expressed in terms of the Lefschetz numbers L(f^n) and L(f_+^n) via a table given by the parity of n.
2) The proof of our Theorem 3.5 covers the case when Π = Π + in Theorem 1.2 above because in this case N (f ) = |L(f )|.
2. Coincidences on infra-solvmanifolds of type (R) with a cyclic holonomy group
In this section, we will be concerned with a generalization of Theorem 1.2 when k = 1 (that is, N (f ) = |L(f )| or |L(f + ) − L(f )|) from fixed points on infra-nilmanifolds to coincidences on infra-solvmanifolds of type (R). We obtain a partial result for coincidences on infra-solvmanifolds of type (R) when the holonomy group is a cyclic group.
Let S be a connected, simply connected solvable Lie group of type (R), and let C be a compact subgroup of Aut(S). Let Π ⊂ S ⋊ C be torsion free and discrete which is a finite extension of the lattice Γ = Π ∩ S of S. Such a group Π is called an SB-group modeled on S. The quotient space Π\S is called an infra-solvmanifold of type (R) with holonomy group Φ = Π/Γ. When Π ⊂ S, Π\S is a special solvmanifold of type (R). Thus the infra-solvmanifold Π\S is finitely and regularly covered by the special solvmanifold Γ\S with the group of covering transformations Φ. For more details, we refer to [42].
Let M = Π\S be an infra-solvmanifold of type (R) with the holonomy group Φ. Then Φ sits naturally in Aut(S). Write ρ : Φ → Aut(S), A → A * . Let f, g : M → M be maps with affine homotopy lifts (d, D), (e, E) : S → S, respectively. Then f and g induce homomorphisms φ, ψ : Π → Π by the following rules:
φ(α) • (d, D) = (d, D) • α, ψ(α) • (e, E) = (e, E) • α ∀α ∈ Π.
In turn, we obtain functions φ̂, ψ̂ : Φ → Φ satisfying
φ̂(A)D = DA and ψ̂(A)E = EA for all A ∈ Φ. Thus
(3) φ̂(A)_* D_* = D_* A_* and ψ̂(A)_* E_* = E_* A_* for all A ∈ Φ.
Recall the following well-known facts from representation theory:
Theorem 2.1 (H. Maschke). Let ρ : Φ → GL(n, R) be a representation.
Then there exist irreducible representations ρ_i : Φ → GL(n_i, R) such that ρ is similar to ρ_1 ⊕ ⋯ ⊕ ρ_s.

Theorem 2.2. Let Φ = ⟨A⟩ be a cyclic group of order n and let ρ : Φ → GL(m, R) be a faithful R-irreducible representation. If n = 1 then ρ is the trivial representation ρ_triv. If n = 2, then m = 1 and ρ(A) = −1. In this case, we denote ρ by τ. If n > 2, then there exists k ∈ Z with gcd(n, k) = 1 such that ρ is similar to the irreducible rotation
Φ → GL(2, R),  A ↦ [ cos(2kπ/n)  −sin(2kπ/n) ; sin(2kπ/n)  cos(2kπ/n) ].
Consider the case where the infra-solvmanifold M of type (R) is orientable (for coincidences) with holonomy group Φ a cyclic group with generator A_0. By Theorem 2.1, the natural representation ρ : Φ → Aut(S) is similar to a sum of irreducible representations. If σ : Φ → GL(m, R) is irreducible, then the induced representation σ̄ : Φ/ker σ → GL(m, R) is faithful and irreducible. By Theorem 2.2, σ̄ is similar to ρ_triv, τ or a rotation. Thus we may assume that ρ = m ρ_triv ⊕ k τ ⊕ ρ_1 ⊕ ⋯ ⊕ ρ_t, where each ρ_i : Φ → GL(2, R) is an irreducible rotation. That is, there is a linear basis of the Lie algebra of S so that ρ(A_0) ∈ Aut(S) can be represented as the diagonal block matrix
ρ(A_0) = diag( I_m, −I_k, Φ_1, …, Φ_t ), where Φ_i = ρ_i(A_0) ∈ GL(2, R).
Remark that if k > 0 then the order of Φ is even, and det(ρ i (A 0 )) = 1 for all i. Hence det(ρ(A 0 )) = 1 if and only if k is even. This is the only case when the infra-solvmanifold is orientable and hence k is even. Using the identities (3), we can write D * and E * as block matrices
D_* = [ D_triv 0 0 ; 0 D_τ 0 ; * * D̂ ],  E_* = [ E_triv 0 0 ; 0 E_τ 0 ; * * Ê ]
where D_triv, E_triv are m × m, D_τ, E_τ are k × k, and D̂, Ê are 2t × 2t.
For A ∈ Φ, we have A = A_0^p for some p, and
A_* = diag( I_m, (−1)^p I_k, Â_* ).
Write ρ̂ = ρ_1 ⊕ ⋯ ⊕ ρ_t : Φ → GL(2t, R), A ↦ ρ̂(A) = Â_* (abusing the notation ρ(A) = A_*). Then the identities (3) induce
φ̂(A)_* D̂ = D̂ Â_*,  ψ̂(A)_* Ê = Ê Â_*.
Hence, for all A = A_0^p and B = A_0^q in Φ, we have
(4) det(E_* − A_* D_*) det(E_* − B_* D_*)
= det(E_triv − D_triv)^2 det(E_τ − (−1)^p D_τ) det(E_τ − (−1)^q D_τ) × det(Ê − Â_* D̂) det(Ê − B̂_* D̂).
Note here that det(Ê − Â_* D̂) det(Ê − B̂_* D̂) ≥ 0, where Φ = ⟨A_0⟩. If ρ(A_0) has no eigenvalue −1, i.e., if k = 0, then N(f, g) = |L(f, g)|.
Assume k > 0 (necessarily even); then Φ = ⟨A_0⟩ is of even order. Let Φ_0 = ⟨A_0^2⟩ and let Π_0 be the subgroup of Π induced by the inclusion Φ_0 ↪ Φ. Remark also that if D_τ = 0 or E_τ = 0, then we still have N(f, g) = |L(f, g)|. We therefore also assume that D_τ ≠ 0 and E_τ ≠ 0. Proof. It is clear that [Π : Π_0] = 2 and that Π_0 is also an SB-group and the corresponding infra-solvmanifold Π_0\S is a double covering of Π\S.
To prove the last assertion, we may consider and assume that (d, D) : S → S induces f and that φ : Π → Π is a homomorphism such that
φ(α)(d, D) = (d, D)α, ∀α ∈ Π.
We need to show that (d, D) also induces a map on Π_0\S. For this purpose, it is enough to show that φ(Π_0) ⊂ Π_0. For any β = (a, A) ∈ Π_0, let φ(β) = (b, φ̂(A)). Since (a, A) ∈ Π_0, we have A ∈ Φ_0. The above identity implies that
φ̂(A)_* D_* = D_* A_* ⇒ D_τ = 0 or φ̂(A) ∈ Φ_0.
Since D_τ ≠ 0, this finishes the proof of the last assertion.
For any A = A_0^p ∈ Φ, we recall from (4) that
det(E_* − A_* D_*) = det(E_triv − D_triv) det(E_τ − (−1)^p D_τ) det(Ê − Â_* D̂)
and det(Ê − D̂) det(Ê − Â_* D̂) ≥ 0. Let
ε_o = sign det(E_τ − D_τ),  ε_e = sign det(E_τ + D_τ).
Then ε_o = ±ε_e. Notice that the values ε_o and ε_e depend on both f and g. When ε_o = ε_e, we still have N(f, g) = |L(f, g)|. When ε_o = −ε_e, we have that
N(f, g) = (1/|Φ|) Σ_{A∈Φ} |det(E_* − A_* D_*)|
= (1/|Φ|) [ Σ_{A∈Φ_0} |det(E_* − A_* D_*)| + Σ_{A∉Φ_0} |det(E_* − A_* D_*)| ]
= (ε_o/|Φ|) [ Σ_{A∈Φ_0} det(E_* − A_* D_*) − Σ_{A∉Φ_0} det(E_* − A_* D_*) ]
= (ε_o/|Φ|) [ 2 Σ_{A∈Φ_0} det(E_* − A_* D_*) − Σ_{A∈Φ} det(E_* − A_* D_*) ]
= ε_o [ (1/|Φ_0|) Σ_{A∈Φ_0} det(E_* − A_* D_*) − (1/|Φ|) Σ_{A∈Φ} det(E_* − A_* D_*) ]
= ε_o ( L(f_0, g_0) − L(f, g) ).
Therefore, we can summarize what we have observed as follows:
Theorem 2.5. Let M = Π\S be an orientable infra-solvmanifold of type (R) with cyclic holonomy group Φ = ⟨A_0⟩. Let ρ : Φ → Aut(S) be the natural representation. Then ρ is similar to the sum of irreducible representations m ρ_triv ⊕ k τ ⊕ ρ_1 ⊕ ⋯ ⊕ ρ_t, where ρ_triv : Φ → GL(1, R) is the trivial representation, τ : Φ → GL(1, R) is the representation given by τ(A_0) = −1, and each ρ_i : Φ → GL(2, R) is an irreducible rotation. Let f, g : M → M be continuous maps with affine homotopy lifts (d, D), (e, E) respectively. Then D_* and E_* can be expressed as block matrices
D_* = [ D_triv 0 0 ; 0 D_τ 0 ; * * D̂ ],  E_* = [ E_triv 0 0 ; 0 E_τ 0 ; * * Ê ]
where D_triv, E_triv are m × m, D_τ, E_τ are k × k, and D̂, Ê are 2t × 2t. Moreover, we have that:
(1) If k = 0, then N (f, g) = |L(f, g)|.
(2) If k > 0 and det(E τ −D τ ) det(E τ +D τ ) ≥ 0, then N (f, g) = |L(f, g)|.
(3) If k > 0 and det(E τ − D τ ) det(E τ + D τ ) < 0, then the maps f, g lift to maps f 0 , g 0 : M 0 → M 0 on a double covering M 0 of M which have the same homotopy lifts as f, g respectively so that the following formula holds
N (f, g) = |L(f 0 , g 0 ) − L(f, g)|.
Proof. We are left to notice only one thing: if D_τ = 0 or E_τ = 0, then, as k > 0 is even, det(E_τ − D_τ) det(E_τ + D_τ) ≥ 0.
3. The rationality and the functional equation
We start with an example showing how different the Nielsen, the Reidemeister and the Lefschetz zeta functions can be.

Example 3.1 ([20]). Let f : S^2 ∨ S^4 → S^2 ∨ S^4 be a continuous map of the bouquet of spheres such that the restriction f|_{S^4} = id_{S^4} and the degree of the restriction f|_{S^2} : S^2 → S^2 equals −2. Then L(f) = 0, hence N(f) = 0 since S^2 ∨ S^4 is simply connected. For k > 1 we have L(f^k) = 2 + (−2)^k ≠ 0, therefore N(f^k) = 1. R(f^k) = 1 for all k ≥ 1 since S^2 ∨ S^4 is simply connected. From this we have by direct calculation that
N_f(z) = exp(−z) · 1/(1 − z);  R_f(z) = 1/(1 − z);  L_f(z) = 1/( (1 − z)^2 (1 + 2z) ).
Hence N f (z) is a meromorphic function, and R f (z) and L f (z) are rational and different.
We now give some other examples of the Nielsen and the Reidemeister zeta functions on infra-nilmanifolds. For the explicit computation of the zeta functions, the following is useful.

Proposition 3.2. Let f be a continuous map on an infra-nilmanifold with holonomy group Φ and an affine homotopy lift (d, D). Then
N_f(z) = ∏_{A∈Φ} [ exp( Σ_{n=1}^∞ |det(A_* − D_*^n)| z^n / n ) ]^{1/|Φ|}.
When R_f(z) is defined, R_f(z) = R_φ(z) = N_f(z).
Proof. We may assume R_f(z) is defined. By Theorem 1.1, we have that R_f(z) = R_φ(z) = N_f(z) and
R_φ(z) = exp( Σ_{n=1}^∞ R(φ^n) z^n / n )
= exp( Σ_{n=1}^∞ (1/|Φ|) Σ_{A∈Φ} |det(A_* − F_*^n)| z^n / n )
= ∏_{A∈Φ} [ exp( Σ_{n=1}^∞ |det(A_* − F_*^n)| z^n / n ) ]^{1/|Φ|}.
Example 3.3. This is an example used by Anosov to show that the Anosov relation does not hold when the manifold is not a nilmanifold [1].
Let α = (a, A) and t_i = (e_i, I_2) be elements of R^2 ⋊ Aut(R^2), where
a = (1/2, 0)^T,  A = [ 1 0 ; 0 −1 ],  e_1 = (1, 0)^T,  e_2 = (0, 1)^T.
Then A has period 2, (a, A)^2 = (a + Aa, I_2) = (e_1, I_2), and t_2 α = α t_2^{−1}. Let Γ be the subgroup generated by t_1 and t_2. Then it forms a lattice in R^2 and the quotient space Γ\R^2 is the 2-torus. It is easy to check that the subgroup
Π = ⟨ Γ, (a, A) ⟩ ⊂ R^2 ⋊ Aut(R^2)
generated by the lattice Γ and the element (a, A) is discrete and torsion free. Furthermore, Γ is a normal subgroup of Π of index 2. Thus Π is an (almost) Bieberbach group, which is the Klein bottle group, and the quotient space Π\R 2 is the Klein bottle. Thus Γ\R 2 → Π\R 2 is a double covering projection.
Let K : R^2 → R^2 be the linear automorphism given by
K = [ −1 0 ; 0 2 ].
It is not difficult to check that K induces f̄ : Γ\R^2 → Γ\R^2 and f : Π\R^2 → Π\R^2 so that the following diagram is commutative:
R^2    --K-->  R^2
  ↓              ↓
Γ\R^2  --f̄-->  Γ\R^2
  ↓              ↓
Π\R^2  --f-->  Π\R^2
Note that all the vertical maps are the natural covering maps. In particular, Γ\R^2 → Π\R^2 is a double covering with the holonomy group Φ = Π/Γ = {I, A} ≅ Z_2. By Theorem 1.1, we have
L(f^n) = (1/2)( det(I − K^n) + det(I − AK^n) ) = 1 − (−1)^n,
N(f^n) = 2^n ( 1 − (−1)^n ).
In particular, R(f^n) = 2^{n+1} when n is odd; otherwise, R(f^n) = ∞. Therefore, the Reidemeister zeta function R_f(z) is not defined, and
L_f(z) = exp( Σ_{n=1}^∞ (2/(2n−1)) z^{2n−1} ) = (1 + z)/(1 − z),
N_f(z) = exp( Σ_{n=1}^∞ (2^{2n}/(2n−1)) z^{2n−1} ) = exp( Σ_{n=1}^∞ (2/(2n−1)) (2z)^{2n−1} ) = (1 + 2z)/(1 − 2z).
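As a numerical cross-check (ours, not from the paper), the averaging-formula values of this example and the closed forms of L_f(z) and N_f(z) can be compared directly:

```python
import math

# Example 3.3 data: K = diag(-1, 2) on the Klein bottle, holonomy {I, A}.
def L_and_N(n):
    """L(f^n), N(f^n) from the averaging formula over the two holonomy elements."""
    dI = (1 - (-1) ** n) * (1 - 2 ** n)   # det(I - K^n)
    dA = (1 - (-1) ** n) * (1 + 2 ** n)   # det(I - A K^n) with A = diag(1, -1)
    return (dI + dA) // 2, (abs(dI) + abs(dA)) // 2

for n in range(1, 9):
    L, N = L_and_N(n)
    assert L == 1 - (-1) ** n
    assert N == 2 ** n * (1 - (-1) ** n)

# Compare partial sums of the defining series with the closed forms at z = 0.1.
z = 0.1
logL = sum(L_and_N(n)[0] * z ** n / n for n in range(1, 60))
logN = sum(L_and_N(n)[1] * z ** n / n for n in range(1, 60))
assert abs(logL - math.log((1 + z) / (1 - z))) < 1e-12
assert abs(logN - math.log((1 + 2 * z) / (1 - 2 * z))) < 1e-12
```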
Example 3.4. Consider Example 3.5 of [41] in which an infra-nilmanifold M modeled on the 3-dimensional Heisenberg group Nil has the holonomy group of order 2 generated by A and a self-map f on M is induced by the automorphism D : Nil → Nil given by D :
D : [ 1 x z ; 0 1 y ; 0 0 1 ] ↦ [ 1 −4x−y z′ ; 0 1 6x+2y ; 0 0 1 ], where z′ = −2z − (12x^2 + 10xy + y^2).
Then with respect to the ordered (linear) basis for the Lie algebra of Nil
e_1 = [ 0 0 1 ; 0 0 0 ; 0 0 0 ],  e_2 = [ 0 1 0 ; 0 0 0 ; 0 0 0 ],  e_3 = [ 0 0 0 ; 0 0 1 ; 0 0 0 ],
the differentials of A and D are
A_* = [ 1 0 0 ; 0 −1 0 ; 0 0 −1 ],  D_* = [ −2 0 0 ; 0 −4 −1 ; 0 6 2 ].
By Proposition 3.2, we have
R_φ(z) = [ exp( Σ_{n=1}^∞ |det(I − D_*^n)| z^n / n ) ]^{1/2} · [ exp( Σ_{n=1}^∞ |det(A_* − D_*^n)| z^n / n ) ]^{1/2}.
Remark that A_* is a block diagonal matrix with 1 × 1 block I_1 = (1) and 2 × 2 block −I_2, while D_* is block diagonal with D_1 = (−2) and D_2 = [ −4 −1 ; 6 2 ]. We have
|det(A_* − D_*^n)| = |det(I_1 − D_1^n) det(−I_2 − D_2^n)| = |det(I_1 − D_1^n)| · |det(I_2 + D_2^n)|
= (2^n − (−1)^n) · (−1)^n det(I_2 + D_2^n) = (2^n − (−1)^n)(−1)^n Σ_i tr(Λ^i D_2^n).
Consequently, we obtain
exp( Σ_{n=1}^∞ |det(A_* − D_*^n)| z^n / n )
= exp( Σ_{n=1}^∞ (2^n − (−1)^n)(−1)^n Σ_i tr(Λ^i D_2^n) z^n / n )
= exp( Σ_i Σ_{n=1}^∞ tr(Λ^i D_2^n) (−2z)^n / n − Σ_i Σ_{n=1}^∞ tr(Λ^i D_2^n) z^n / n )
= ∏_i det(I − z Λ^i D_2) / det(I + 2z Λ^i D_2)
= (1 − z)/(1 + 2z) · (1 + 2z − 2z^2)/(1 − 4z − 8z^2) · (1 + 2z)/(1 − 4z).
In a similar fashion, we compute
exp( Σ_{n=1}^∞ |det(I − D_*^n)| z^n / n )
= exp( Σ_{n=1}^∞ |det(I_1 − D_1^n) det(I_2 − D_2^n)| z^n / n )
= exp( Σ_{n=1}^∞ (2^n − (−1)^n)(−1)^{n+1} Σ_i (−1)^i tr(Λ^i D_2^n) z^n / n )
= ∏_i [ det(I + 2z Λ^i D_2) / det(I − z Λ^i D_2) ]^{(−1)^i}
= (1 + 2z)/(1 − z) · (1 + 2z − 2z^2)/(1 − 4z − 8z^2) · (1 − 4z)/(1 + 2z).
The last identity of the above computations follows from the definition of Λ^i D_2 (see [32, Lemma 3.2]). Namely, we have
Λ^0 D_2 = 1,  Λ^1 D_2 = D_2,  Λ^2 D_2 = det(D_2) = −2.
THE NIELSEN AND REIDEMEISTER NUMBERS ON INFRA-SOLVMANIFOLDS
In all, we obtain that
N_f(z) = R_f(z) = (1 + 2z − 2z^2)/(1 − 4z − 8z^2).
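The rational expression can be sanity-checked numerically (our own sketch): recompute R(φ^n) = (1/2)( |det(I − D_*^n)| + |det(A_* − D_*^n)| ) from the 3 × 3 matrices above and compare the partial sum of log R_φ(z) with the closed form at a point inside the radius of convergence.

```python
import math

A  = [[1, 0, 0], [0, -1, 0], [0, 0, -1]]
D  = [[-2, 0, 0], [0, -4, -1], [0, 6, 2]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(3)] for i in range(3)]

z = 0.05                      # well inside the radius of convergence (~0.183)
Dn, log_sum = I3, 0.0
for n in range(1, 40):
    Dn = mul(Dn, D)           # Dn = D_*^n
    Rn = (abs(det3(sub(I3, Dn))) + abs(det3(sub(A, Dn)))) / 2
    log_sum += Rn * z ** n / n

closed = math.log((1 + 2 * z - 2 * z ** 2) / (1 - 4 * z - 8 * z ** 2))
assert abs(log_sum - closed) < 1e-9
```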
Theorem 3.5. Let f be a continuous map on an infra-nilmanifold with an affine homotopy lift (d, D). Assume N(f) = |L(f)|. Then the Nielsen zeta function N_f(z) is a rational function and is equal to
N_f(z) = L_f( (−1)^q z )^{(−1)^r}
where q is the number of real eigenvalues of D_* which are < −1 and r is the number of real eigenvalues of D_* of modulus > 1. When the Reidemeister zeta function R_f(z) is defined, we have R_f(z) = R_φ(z) = N_f(z).

Proof. By [52, Theorem 8.2.2], N(f) = |L(f)| implies N(f^n) = |L(f^n)| for all n.
Let ε_n be the sign of det(I − D_*^n). Let q be the number of real eigenvalues of D_* which are less than −1 and r be the number of real eigenvalues of D_* of modulus > 1. Then ε_n = (−1)^{r+qn}. By Theorem 1.1, we have that
ε_1 det(I − A_* D_*) ≥ 0 for all A ∈ Φ. In particular, we have
det(I − A_* D_*) det(I − B_* D_*) ≥ 0 for all A, B ∈ Φ.
Choose arbitrary n > 0. By [52, Lemma 8.2.1],
det(I − A_* D_*^n) det(I − D_*^n) ≥ 0 for all A ∈ Φ.
Hence we have N(f^n) = ε_n L(f^n) = (−1)^{r+qn} L(f^n). Consequently,
N_f(z) = exp( Σ_{n=1}^∞ N(f^n) z^n / n ) = exp( Σ_{n=1}^∞ (−1)^{r+qn} L(f^n) z^n / n )
= [ exp( Σ_{n=1}^∞ L(f^n) ((−1)^q z)^n / n ) ]^{(−1)^r} = L_f( (−1)^q z )^{(−1)^r}
is a rational function. Assume R_f(z) is defined. So, R(f^n) = R(φ^n) < ∞ for all n > 0.
On infra-nilmanifolds, by Theorem 1.1, this is equivalent to saying that det(A_* − D_*^n) ≠ 0 for all A ∈ Φ and all n, and hence σ( det(A_* − D_*^n) ) = |det(A_* − D_*^n)|. Thus
R(f^n) = R(φ^n) = (1/|Φ|) Σ_{A∈Φ} σ( det(A_* − D_*^n) ) = (1/|Φ|) Σ_{A∈Φ} |det(A_* − D_*^n)| = N(f^n).
This implies that R_f(z) = R_φ(z) = N_f(z).
Therefore, for those classes of maps on infra-nilmanifolds for which Anosov relation N (f ) = |L(f )| holds [40,46,8] and for those classes of infranilmanifolds for which Anosov relation N (f ) = |L(f )| holds for ALL maps [1,7,8,9], the Nielsen zeta functions and the Reidemeister zeta functions are rational functions.
In general case, using the results of Theorem 1.2, Dekimpe and Dugardein described the Nielsen zeta function of f as follows:
Theorem 3.6 ([11, Theorem 4.5]). Let f be a continuous map on an infranilmanifold Π\G with an affine homotopy lift (d, D). Then the Nielsen zeta function is a rational function and is equal to
N_f(z) = L_f( (−1)^n z )^{(−1)^{p+n}}, when Π = Π_+;
N_f(z) = [ L_{f_+}( (−1)^n z ) / L_f( (−1)^n z ) ]^{(−1)^{p+n}}, when Π ≠ Π_+,
where p is the number of real eigenvalues of D_* which are > 1 and n is the number of real eigenvalues of D_* which are < −1.
When the Reidemeister zeta function R_f(z) is defined, we have R_f(z) = R_φ(z) = N_f(z).

Remark 3.7. In [11, Theorem 4.5] the Nielsen zeta function is expressed in terms of the Lefschetz zeta functions L_f(z) and L_{f_+}(z) via a table given by the parity of p and n. The class of infra-solvmanifolds of type (R) contains and shares a lot of properties of the class of infra-nilmanifolds, such as the averaging formula for Nielsen numbers, see [32, 42]. Therefore, Theorem 1.2 and the statement about N_f(z) in Theorem 3.6 can be generalized directly to the class of infra-solvmanifolds of type (R), see the Remark in [11, Sec. 4].
To write down a functional equation for the Reidemeister and the Nielsen zeta functions, we recall the following functional equation for the Lefschetz zeta function.

Lemma 3.8 ([25, Proposition 8], see also [14]). Let M be a closed orientable manifold of dimension m and let f : M → M be a continuous map of degree d. Then
L_f( α/(dz) ) = ε (−αdz)^{(−1)^m χ(M)} L_f(αz)^{(−1)^m}
where α = ±1 and ε ∈ C is a non-zero constant such that if |d| = 1 then ε = ±1.
Proof. In the Lefschetz zeta function formula (0), we may replace f_* by f^* : H^*(M; Q) → H^*(M; Q). Let β_k = dim H^k(M; Q) be the kth Betti number of M and let λ_{k,j} be the (complex) eigenvalues of f^*_k : H^k(M; Q) → H^k(M; Q), counted with multiplicity. Via the natural non-singular pairing H^k(M; Q) ⊗ H^{m−k}(M; Q) → Q, the operators f^*_{m−k} and d (f^*_k)^{−1} are adjoint to each other. Hence, since λ_{k,j} is an eigenvalue of f^*_k, μ_{ℓ,j} = d/λ_{k,j} is an eigenvalue of f^*_{m−k} = f^*_ℓ, where ℓ = m − k. Furthermore, β_k = β_{m−k} = β_ℓ. Consequently, we have
L_f( α/(dz) ) = ∏_{k=0}^m ∏_{j=1}^{β_k} ( 1 − λ_{k,j} α/(dz) )^{(−1)^{k+1}}
= ∏_{k=0}^m ∏_{j=1}^{β_k} ( 1 − (d/λ_{k,j}) αz )^{(−1)^{k+1}} ( −αdz/λ_{k,j} )^{(−1)^k}
= ∏_{ℓ=0}^m ∏_{j=1}^{β_{m−ℓ}} ( 1 − μ_{ℓ,j} αz )^{(−1)^{m−ℓ+1}} · ∏_{k=0}^m ∏_{j=1}^{β_k} ( −αdz/λ_{k,j} )^{(−1)^k}
= [ ∏_{ℓ=0}^m ∏_{j=1}^{β_ℓ} ( 1 − μ_{ℓ,j} αz )^{(−1)^{ℓ+1}} ]^{(−1)^m} · (−αdz)^{Σ_ℓ (−1)^ℓ β_ℓ} · ∏_{k=0}^m ∏_{j=1}^{β_k} λ_{k,j}^{(−1)^{k+1}}
= L_f(αz)^{(−1)^m} · ε (−αdz)^{(−1)^m χ(M)},
where in the last step we use that χ(M) = (−1)^m χ(M), since χ(M) = 0 when m is odd. Here,
ε = ∏_{k=0}^m ∏_{j=1}^{β_k} λ_{k,j}^{(−1)^{k+1}} = ± ∏_{k=0}^m det(f^*_k)^{(−1)^{k+1}}.
We obtain:
Theorem 3.9 (Functional Equation)
. Let f be a continuous map on an orientable infra-nilmanifold M = Π\G with an affine homotopy lift (d, D). Then the Reidemeister zeta function, whenever it is defined, and the Nielsen zeta function have the following functional equations:
R_f( 1/(dz) ) = R_f(z)^{(−1)^m} ε^{(−1)^{p+n}}, when Π = Π_+;
R_f( 1/(dz) ) = R_f(z)^{(−1)^m} ε^{−1}, when Π ≠ Π_+,
and
N_f( 1/(dz) ) = N_f(z)^{(−1)^m} ε^{(−1)^{p+n}}, when Π = Π_+;
N_f( 1/(dz) ) = N_f(z)^{(−1)^m} ε^{−1}, when Π ≠ Π_+,
where d is the degree of f, m = dim M, ε is a constant in C^×, σ = (−1)^n, p is the number of real eigenvalues of D_* which are > 1 and n is the number of real eigenvalues of D_* which are < −1. If |d| = 1 then ε = ±1.
Proof. Assume Π = Π_+. Then R_f(z) = N_f(z) = L_f(σz)^{(−1)^{p+n}}. By Lemma 3.8, we have
R_f( 1/(dz) ) = N_f( 1/(dz) ) = L_f( σ/(dz) )^{(−1)^{p+n}}
= [ ε (−σdz)^{(−1)^m χ(M)} L_f(σz)^{(−1)^m} ]^{(−1)^{p+n}}
= N_f(z)^{(−1)^m} ε^{(−1)^{p+n}} (−σdz)^{(−1)^{m+p+n} χ(M)}
= R_f(z)^{(−1)^m} ε^{(−1)^{p+n}} (−σdz)^{(−1)^{m+p+n} χ(M)}.
Assume now that Π ≠ Π_+. First we claim that f and f_+ have the same degree. Let π : M_+ → M be the natural double covering projection. Then Π/Π_+ ≅ Z_2 is the group of covering transformations of π. By [4, III.2], the homomorphism π^* : H^m(M; Q) → H^m(M_+; Q) induces an isomorphism π^* : H^m(M; Q) → H^m(M_+; Q)^{Π/Π_+}. In particular, π^* is injective. If x is the nontrivial covering transformation, we have the commutative diagram
M_+  --x-->  M_+
   π ↘     ↙ π
       M
which induces the commutative diagram
H^m(M_+; Q)  --x^*-->  H^m(M_+; Q)
        π^* ↖        ↗ π^*
          H^m(M; Q)
Since π ∘ f_+ = f ∘ π, we also have f_+^* ∘ π^* = π^* ∘ f^* on H^m, and the injectivity of π^* yields deg(f_+) = deg(f) = d. By Theorem 3.6 and Lemma 3.8, we have
R_f( 1/(dz) ) = N_f( 1/(dz) ) = L_{f_+}( σ/(dz) )^{(−1)^{p+n}} · L_f( σ/(dz) )^{(−1)^{p+n+1}}
= [ ε (−σdz)^{(−1)^m χ(M)} L_{f_+}(σz)^{(−1)^m} ]^{(−1)^{p+n}} × [ ε (−σdz)^{(−1)^m χ(M)} L_f(σz)^{(−1)^m} ]^{(−1)^{p+n+1}}
= N_f(z)^{(−1)^m} [ ε (−σdz)^{(−1)^m χ(M)} ]^{−1}
= R_f(z)^{(−1)^m} [ ε (−σdz)^{(−1)^m χ(M)} ]^{−1}.
On the other hand, it is known that χ(M ) = 0, e.g. see the remark below, which finishes our proof.
Remark 3.10. Let G be a torsion-free polycyclic group. Then χ(G) = 0. For, by induction, we may assume that G is an extension of Z^m by Z^n; then, as χ(Z) = 0, we have χ(G) = χ(Z^m) χ(Z^n) = 0, [6, Theorem 6.4.2].
Another proof: A solvmanifold is aspherical and its fundamental group contains a nontrivial Abelian normal subgroup. By Gottlieb's theorem, its Euler characteristic is zero. If S is a torsion-free extension of G by a finite group of order k, then k · χ(S) = χ(G) = 0 ⇒ χ(S) = 0.
Remark 3.11. As it is mentioned above, since Theorem 3.6 is true for the Nielsen zeta functions on infra-solvmanifolds of type (R), the functional equation for the Nielsen zeta functions in Theorem 3.9 is true on infra-solvmanifolds of type (R) (see Theorem 7.9 for the Reidemeister zeta functions).
4. Asymptotic Nielsen numbers
The growth rate of a sequence a_n of complex numbers is defined by
Growth(a_n) := max{ 1, lim sup_{n→∞} |a_n|^{1/n} }.
We define the asymptotic Nielsen number [34] and the asymptotic Reidemeister number to be the growth rates N^∞(f) := Growth(N(f^n)) and R^∞(f) := Growth(R(f^n)) correspondingly. These asymptotic numbers are homotopy type invariants. We denote by sp(A) the spectral radius of the matrix or the operator A, sp(A) = lim_n ||A^n||^{1/n}, which coincides with the largest modulus of an eigenvalue of A. We denote by ∧F_* := ⊕_{i=0}^m ∧^i F_* the operator induced on the exterior algebra.

By [42, Theorem 2.2], we may assume that f is induced by a Lie group homomorphism D : S → S. Let {λ_1, …, λ_m} be the eigenvalues of D_*, counted with multiplicities. First we note from the definition that
sp(∧D_*) = ∏_{|λ_j|>1} |λ_j| when sp(D_*) > 1;  sp(∧D_*) = 1 when sp(D_*) ≤ 1.
In the case when sp(D_*) ≤ 1, the eigenvalues of ⊕_{q≥1} ∧^q D_* are products of eigenvalues of D_*, each of modulus ≤ 1. On the other hand ∧^0 D_* = id, and hence sp(∧D_*) = 1.
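As a toy illustration (ours, not from the paper), take the 2 × 2 block D_2 = [ −4 −1 ; 6 2 ] from Example 3.4, with eigenvalues −1 ± √3: the eigenvalues of ∧D_* = Λ^0 ⊕ Λ^1 ⊕ Λ^2 are 1, the λ_j, and det D_* = −2, so sp(∧D_*) equals the single eigenvalue modulus exceeding 1.

```python
import math

tr, dt = -2, -2                               # trace, det of the sample D_*
disc = math.sqrt(tr * tr - 4 * dt)
eig = [(tr + disc) / 2, (tr - disc) / 2]      # eigenvalues -1 +/- sqrt(3)

# Product of the eigenvalues of modulus > 1:
big = 1.0
for lam in eig:
    if abs(lam) > 1:
        big *= abs(lam)

# Eigenvalues of wedge(D_*): 1 (Lambda^0), eig (Lambda^1), det (Lambda^2).
sp_wedge = max([1.0, abs(dt)] + [abs(l) for l in eig])

assert abs(big - (1 + math.sqrt(3))) < 1e-12
assert abs(sp_wedge - big) < 1e-12
```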
Recalling that
N(f^n) = |det(I − D_*^n)| = ∏_{j=1}^m |1 − λ_j^n|,
we estimate each factor. If |λ| ≤ 1 then lim sup_n (1/n) log|1 − λ^n| = 0, for log|1 − λ^n| ≤ log 2. If |λ| > 1 then, using L'Hôpital's rule, we have
|λ|^n − 1 ≤ |1 − λ^n| ≤ |λ|^n + 1 ⇒ lim_{n→∞} (1/n) log|1 − λ^n| = log|λ|.
Hence
N^∞(f) = max{ 1, lim sup_{n→∞} N(f^n)^{1/n} } = max{ 1, ∏_{|λ|>1} |λ| } = sp(∧D_*).
Next we consider the case where N(f^n) = 0 for some n. Thus some λ_j is an nth root of unity. For each such λ_j, consider all k's for which |1 − λ_j^k| ≠ 0. Since by assumption λ_j ≠ 1, there are infinitely many such k's. Furthermore, there are infinitely many k's for which |1 − λ_j^k| ≠ 0 simultaneously for all such (finitely many) λ_j. Therefore, when sp(D_*) > 1 we have
log lim sup_{n→∞} N(f^n)^{1/n} = lim sup_{k→∞} (1/k) log N(f^k) = lim sup_{k→∞} (1/k) Σ_{j=1}^m log|1 − λ_j^k| = Σ_{|λ|>1} lim sup_{k→∞} (1/k) log|1 − λ^k| = log( ∏_{|λ|>1} |λ| );
when sp(D_*) ≤ 1 we have log lim sup_n N(f^n)^{1/n} = 0. This completes the proof.
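To see the growth rate concretely (an illustration of ours, reusing the sequence of Example 3.3): N(f^n) = 2^n(1 − (−1)^n) vanishes for every even n, yet its growth rate is 2, the product of the eigenvalue moduli of K exceeding 1.

```python
def N(n):
    return 2 ** n * (1 - (-1) ** n)   # 0 for even n, 2^(n+1) for odd n

def growth_tail(seq, lo, hi):
    """Approximate max(1, limsup |a_n|^(1/n)) by a maximum over a tail window."""
    return max([1.0] + [abs(seq(n)) ** (1.0 / n) for n in range(lo, hi + 1)])

g = growth_tail(N, 151, 200)
assert abs(g - 2.0) < 0.05   # Growth(N(f^n)) = 2 despite N(f^n) = 0 for even n
```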
In fact, what we have shown in the above proof is the following:

Corollary 4.2. Growth( |det(I − D_*^n)| ) = sp(∧D_*).

Recall that if f : M → M is a continuous map on an infra-nilmanifold M = Π\G with the holonomy group Φ and if f has an affine homotopy lift (d, D), then f induces a homomorphism φ : Π → Π defined by the rule:
∀α ∈ Π, φ(α) • (d, D) = (d, D) • α.
Furthermore, the homomorphism φ induces a function φ̂ : Φ → Φ satisfying the identity (2):
φ̂(A)D = DA for all A ∈ Φ.
For any n ≥ 1, we can observe that:
(1) f n has an affine homotopy lift (d, D) n = ( * , D n ),
(2) f^n induces the homomorphism φ^n : Π → Π, (3) the homomorphism φ^n induces the function φ̂^n : Φ → Φ.
Recall from Theorem 1.1 the averaging formula:
N(f^n) = (1/|Φ|) Σ_{A∈Φ} |det(I − A_* D_*^n)|.
Since (1/|Φ|) |det(I − D_*^n)| ≤ N(f^n), we have
(1/n) log N(f^n) ≥ (1/n)( log|det(I − D_*^n)| − log|Φ| ) ⇒ lim sup (1/n) log N(f^n) ≥ lim sup (1/n) log|det(I − D_*^n)|.
By Corollary 4.2, this induces that sp(∧D_*) = Growth( |det(I − D_*^n)| ) ≤ N^∞(f).
Next we recall [8, Lemma 3.1]: given A ∈ Φ, we can choose a sequence (B_i)_{i∈N} of elements in Φ by taking B_1 = A and B_{i+1} = φ̂(B_i), associated to f. Since Φ is finite, this sequence becomes periodic from a certain point onwards; namely, there exist j, k ≥ 1 such that B_{j+k} = B_j. It is shown in [8, Lemma 3.1] that
(1) for all i ∈ N, det(I − φ̂(B_i)_* D_*) = det(I − φ̂(B_{i+1})_* D_*),
(2) there exists ℓ ∈ N such that (φ̂(B_j)_* D_*)^ℓ = D_*^ℓ.
Since A is of finite order, det A_* = ±1.
Let λ_1, …, λ_m be the eigenvalues of D_* counted with multiplicities, and let μ_1, …, μ_m be the eigenvalues of φ̂(B_j)_* D_* counted with multiplicities. Since (φ̂(B_j)_* D_*)^ℓ = D_*^ℓ, the matrix (φ̂(B_j)_* D_*)^ℓ has the eigenvalues {λ_1^ℓ, …, λ_m^ℓ} = {μ_1^ℓ, …, μ_m^ℓ}. We may assume that λ_i^ℓ = μ_i^ℓ for all i = 1, …, m. Thus |μ_i| = |λ_i|. Now,
|det(I − A_* D_*)| = |det A_* · det(A_*^{−1} − D_*)| = |det(I − D_* A_*)| = |det(I − φ̂(B_j)_* D_*)| (by (1))
= ∏_{i=1}^m |1 − μ_i| ≤ ∏_{i=1}^m (1 + |μ_i|) (by the triangle inequality) = ∏_{i=1}^m (1 + |λ_i|).
Applying the above argument to D^n, we obtain that
|det(I − A_* D_*^n)| ≤ ∏_{i=1}^m (1 + |λ_i|^n).
By the averaging formula, we have
N(f^n) = (1/|Φ|) Σ_{A∈Φ} |det(I − A_* D_*^n)| ≤ (1/|Φ|) Σ_{A∈Φ} ∏_{i=1}^m (1 + |λ_i|^n) = ∏_{i=1}^m (1 + |λ_i|^n),
which induces
lim sup (1/n) log N(f^n) ≤ Σ_{i=1}^m lim sup (1/n) log(1 + |λ_i|^n) = Σ_{|λ|>1} log|λ| = log( ∏_{|λ|>1} |λ| ).
Hence it follows that N^∞(f) ≤ sp(∧D_*).
Because the above (algebraic) properties [8] and the averaging formula for the Nielsen number [42] on infra-nilmanifolds can be generalized to infra-solvmanifolds of type (R), we have proven in all that N^∞(f) = sp(∧D_*) for any continuous map f with an affine homotopy lift (d, D) on an infra-solvmanifold of type (R).
5. Topological entropy and the radius of convergence
The most widely used measure for the complexity of a dynamical system is the topological entropy. For the convenience of the reader, we include its definition. Let f : X → X be a self-map of a compact metric space. For given ε > 0 and n ∈ N, a subset E ⊂ X is said to be (n, ε)-separated under f if for each pair x ≠ y in E there is 0 ≤ i < n such that d(f^i(x), f^i(y)) > ε. Let s_n(ε, f) denote the largest cardinality of any (n, ε)-separated subset E under f. Thus s_n(ε, f) is the greatest number of orbit segments x, f(x), …, f^{n−1}(x) of length n that can be distinguished from one another provided we can only distinguish between points of X that are at least ε apart. Now let
h(f, ε) := lim sup_{n→∞} (1/n) log s_n(ε, f),  h(f) := lim_{ε→0} h(f, ε).
The number 0 ≤ h(f) ≤ ∞, which turns out to be independent of the metric d used, is called the topological entropy of f. If h(f, ε) > 0 then, up to resolution ε > 0, the number s_n(ε, f) of distinguishable orbit segments of length n grows exponentially with n. So h(f) measures the growth rate in n of the number of orbit segments of length n with arbitrarily fine resolution. A basic relation between the topological entropy h(f) and Nielsen numbers was found by N. Ivanov [34]. We present here a very short proof by B. Jiang of Ivanov's inequality.

Lemma 5.1 ([34]). Let f be a continuous map on a compact connected polyhedron X. Then
h(f) ≥ log N^∞(f).
Proof. Let δ be such that every loop in X of diameter < 2δ is contractible. Let ε > 0 be a smaller number such that d(f(x), f(y)) < δ whenever d(x, y) < 2ε. Let E_n ⊂ X be a set consisting of one point from each essential fixed point class of f^n. Thus |E_n| = N(f^n). By the definition of h(f), it suffices to show that E_n is (n, ε)-separated. Suppose it is not so. Then there would be two points x ≠ y ∈ E_n such that d(f^i(x), f^i(y)) ≤ ε for 0 ≤ i < n, hence for all i ≥ 0. Pick a path c_i from f^i(x) to f^i(y) of diameter < 2ε for each i ≥ 0. By the choice of δ and ε, f ∘ c_i and c_{i+1} are then homotopic rel endpoints, hence f^n ∘ c_0 ≃ c_n ≃ c_0 rel endpoints, and x, y would belong to the same fixed point class of f^n, contradicting the construction of E_n.

By sp(f) we denote the spectral radius of H_*(f), which is a homotopy invariant. In 1974 Michael Shub asked [57] to what extent the inequality h(f) ≥ log sp(f) holds. Since then this inequality has usually been called the Entropy Conjecture. Later A. Katok conjectured [37] that the Entropy Conjecture holds for every continuous map of a manifold M whose universal cover is homeomorphic to R^m. In [49], this was confirmed for every continuous map on an infra-nilmanifold, by means of the entropy estimate given in [54] (see [48] for another reference on this estimate). Next we observe that for an affine map f = (d, D) the latter reduces to ||∧D|| = ||∧D_*|| = sp(∧D_*). This implies (passing to a lift under a finite regular cover) that
log sp(f) ≤ log sp(∧D_*) = log N^∞(f) ≤ h(f).
The last inequality follows from Ivanov's inequality, Lemma 5.1. We denote by R the radius of convergence of the zeta functions N f (z) or R f (z).
Theorem 5.5. Let f be a continuous map on an infra-nilmanifold with an affine homotopy lift (d, D). Then the Nielsen zeta function N_f(z) and the Reidemeister zeta function R_f(z), whenever it is defined, have the same positive radius of convergence R, which admits the estimate
R ≥ exp(−h) > 0, where h = inf{ h(g) | g ≃ f }.
If 1 is not in the spectrum of D_*, the radius R of convergence of R_f(z) is
R = 1/N^∞(f) = 1/exp(h(f)) = 1/sp(∧D_*).
Proof. When R_f(z) is defined, as was observed before, R(f^n) < ∞ and so R(f^n) = N(f^n) > 0 for all n > 0 on infra-nilmanifolds. In particular, R_f(z) = N_f(z). By the Cauchy–Hadamard formula,
1/R = lim sup_{n→∞} ( N(f^n)/n )^{1/n} = lim sup_{n→∞} N(f^n)^{1/n}.
Since N (f n ) ≥ 1 for all n > 0, it follows that lim sup n→∞ N (f n ) 1/n ≥ 1.
Thus 1 R = N ∞ (f ) ≤ exp h(f ).
This induces the inequality R ≥ exp(−h) by the homotopy invariance of the radius R of the Reidemeister zeta function R f (z). We consider a smooth map g : M → M which is homotopic to f . As it is known in [54], the entropy h(g) is finite. Thus exp(−h) ≥ exp(−h(g)) > 0. Now the identities in our theorem follow from Theorem 5.2. Consider next the Nielsen zeta function N f (z). If lim sup n→∞ N (f n ) 1/n ≥ 1, then we obtain the same inequality for R as for R f (z). Thus, we assume lim sup n→∞ N (f n ) 1/n < 1. This happens only when N (f n ) = 0 for all but finitely many n. In this case, 1/R = lim sup n→∞ N (f n ) 1/n = 0 and so R = ∞ and N ∞ (f ) = 1.
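For instance (our own check, reusing Example 3.3): N_f(z) = (1 + 2z)/(1 − 2z) has its pole nearest the origin at z = 1/2, and the Cauchy–Hadamard formula recovers 1/R = N^∞(f) = 2 from the sequence N(f^n) = 2^n(1 − (−1)^n).

```python
def N(n):
    return 2 ** n * (1 - (-1) ** n)

# 1/R = limsup (N(f^n)/n)^(1/n); approximated over a tail window of odd n.
inv_R = max((N(n) / n) ** (1.0 / n) for n in range(501, 600, 2))
assert abs(inv_R - 2.0) < 0.05   # so R = 1/2, the pole of (1+2z)/(1-2z)
```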
6. Zeta functions and the Reidemeister torsion of the mapping torus
The Reidemeister torsion is a graded version of the absolute value of the determinant of an isomorphism of vector spaces.
Let C^* be a cochain complex of finite dimensional vector spaces over C, with coboundary maps d^i : C^i → C^{i+1} and C^i = 0 for i < 0 and for large i. If the cohomology H^i = 0 for all i, we say that C^* is acyclic. If one is given positive densities ∆_i on C^i, then the Reidemeister torsion τ(C^*, ∆_i) ∈ (0, ∞) for acyclic C^* is defined as follows:

Definition 6.1. Consider a chain contraction δ^i : C^i → C^{i−1}, i.e., a linear map such that d ∘ δ + δ ∘ d = id. Then d + δ determines a map (d + δ)_+ : C^+ := ⊕ C^{2i} → C^− := ⊕ C^{2i+1} and a map (d + δ)_− : C^− → C^+. Since the map (d + δ)^2 = id + δ^2 is unipotent, (d + δ)_+ must be an isomorphism. One defines τ(C^*, ∆_i) := |det(d + δ)_+|.
Reidemeister torsion is defined in the following geometric setting. Suppose K is a finite complex and E is a flat, finite dimensional, complex vector bundle with base K. We recall that a flat vector bundle over K is essentially the same thing as a representation of π 1 (K) when K is connected. If p ∈ K is a base point then one may move the fibre at p in a locally constant way around a loop in K. This defines an action of π 1 (K) on the fibre E p of E above p. We call this action the holonomy representation ρ : π → GL(E p ).
Conversely, given a representation ρ : π → GL(V ) of π on a finite dimensional complex vector space V , one may define a bundle E = E ρ = (K × V )/π. HereK is the universal cover of K, and π acts onK by covering transformations and on V by ρ. The holonomy of E ρ is ρ, so the two constructions give an equivalence of flat bundles and representations of π.
If K is not connected then it is simpler to work with flat bundles. One then defines the holonomy as a representation of the direct sum of π 1 of the components of K. In this way, the equivalence of flat bundles and representations is recovered.
Suppose now that one has on each fibre of E a positive density which is locally constant on K. In terms of ρ E this assumption just means | det ρ E | = 1. Let V denote the fibre of E. Then the cochain complex C i (K; E) with coefficients in E can be identified with the direct sum of copies of V associated to each i-cell σ of K. The identification is achieved by choosing a basepoint in each component of K and a basepoint from each i-cell. By choosing a flat density on E we obtain a preferred density ∆ i on C i (K, E). A case of particular interest is when E is an acyclic bundle, meaning that the twisted cohomology of E is zero (H i (K; E) = 0). In this case one defines the Rtorsion of (K, E) to be τ (K; E) = τ (C * (K; E), ∆ i ) ∈ (0, ∞). It does not depend on the choice of flat density on E.
The Reidemeister torsion of an acyclic bundle E on K has many nice properties. Suppose that A and B are subcomplexes of K. Then we have a multiplicative law:
(5) τ (A ∪ B; E) · τ (A ∩ B; E) = τ (A; E) · τ (B; E)
that is interpreted as follows. If three of the bundles E|A ∪ B, E|A ∩ B, E|A, E|B are acyclic then so is the fourth and the equation (5) holds. Another property is the simple homotopy invariance of the Reidemeister torsion. In particular τ is invariant under subdivision. This implies that for a smooth manifold, one can unambiguously define τ (K; E) to be the torsion of any smooth triangulation of K.
In the case K = S 1 is a circle, let A be the holonomy of a generator of the fundamental group π 1 (S 1 ). One has that E is acyclic if and only if I − A is invertible and then
τ (S 1 ; E) = | det(I − A)|
Note that the choice of generator is irrelevant, as I − A^{−1} = (−A^{−1})(I − A) and |det(−A^{−1})| = 1.
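This generator-independence is easy to confirm numerically. The matrix below is an arbitrary illustration, normalized so that |det A| = 1, which is the flat-density condition:

```python
import numpy as np

def torsion_circle(A):
    # tau(S^1; E) for a flat bundle with holonomy A, defined when I - A is invertible
    return abs(np.linalg.det(np.eye(A.shape[0]) - A))

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
A = B / abs(np.linalg.det(B)) ** (1 / 3)   # normalize: |det A| = 1
assert abs(abs(np.linalg.det(A)) - 1) < 1e-12

# Replacing the generator by its inverse does not change the torsion,
# since I - A^{-1} = (-A^{-1})(I - A) and |det(-A^{-1})| = 1.
t1 = torsion_circle(A)
t2 = torsion_circle(np.linalg.inv(A))
print(t1, t2)
```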
These three properties of the Reidemeister torsion are the analogues of the properties of Euler characteristic (cardinality law, homotopy invariance and normalization on a point), but there are differences. Since a point has no acyclic representations (H 0 = 0) one cannot normalize τ on a point as we do for the Euler characteristic, and so one must use S 1 instead. The multiplicative cardinality law for the Reidemeister torsion can be made additive just by using log τ , so the difference here is inessential. More important for some purposes is that the Reidemeister torsion is not an invariant under a general homotopy equivalence: as mentioned earlier this is in fact why it was first invented.
It might be expected that the Reidemeister torsion counts something geometric (like the Euler characteristic). D. Fried [26] showed that it counts the periodic orbits of a flow and the periodic points of a map. We will show that the Reidemeister torsion counts the periodic point classes of a map (fixed point classes of the iterations of the map).
Some further properties of τ describe its behavior under bundles. Let p : X → B be a simplicial bundle with fiber F, where F, B, X are finite complexes and p^{−1} sends subcomplexes of B to subcomplexes of X. We assume here that E is a flat, complex vector bundle over B. We form its pullback p^*E over X. Note that the vector spaces H^i(p^{−1}(b), C) with b ∈ B form a flat vector bundle over B, which we denote H^i_F. The integral lattice in H^i(p^{−1}(b), R) determines a flat density by the condition that the covolume of the lattice is 1. We suppose that the bundle E ⊗ H^i_F is acyclic for all i. Under these conditions D. Fried [26] has shown that the bundle p^*E is acyclic, and
τ(X; p^*E) = Π_i τ(B; E ⊗ H^i_F)^{(−1)^i}.
Let f : X → X be a homeomorphism of a compact polyhedron X. Let T_f := (X × I)/(x, 0) ∼ (f(x), 1) be the mapping torus of f. We shall consider the bundle p : T_f → S^1 over the circle S^1. We assume here that E is a flat, complex vector bundle with finite dimensional fibre and base S^1. We form its pullback p^*E over T_f. Note that the vector spaces H^i(p^{−1}(b), C) with b ∈ S^1 form a flat vector bundle over S^1, which we denote H^i_F. The integral lattice in H^i(p^{−1}(b), R) determines a flat density by the condition that the covolume of the lattice is 1. We suppose that the bundle E ⊗ H^i_F is acyclic for all i. Under these conditions D. Fried [26] has shown that the bundle p^*E is acyclic, and we have

(6) τ(T_f; p^*E) = Π_i τ(S^1; E ⊗ H^i_F)^{(−1)^i}.
Let g be the preferred generator of the group π_1(S^1) and let A = ρ(g), where ρ : π_1(S^1) → GL(V). Then the holonomy around g of the bundle E ⊗ H^i_F is A ⊗ (f^*)^i. Since τ(S^1; E) = |det(I − A)|, it follows from (6) that

τ(T_f; p^*E) = Π_i |det(I − A ⊗ (f^*)^i)|^{(−1)^i}.
We now consider the special case in which E is one-dimensional, so A is just a complex scalar λ of modulus one. Then in terms of the rational function L f (z) we have :
(7) τ(T_f; p^*E) = Π_i |det(I − λ(f^*)^i)|^{(−1)^i} = |L_f(λ)|^{−1}.
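The identity (7) can be checked numerically. Below we take, purely as an assumed example, the torus map induced by the matrix [[2, 1], [1, 1]] acting on H^0, H^1, H^2 of T^2, and compare the series definition of the Lefschetz zeta function with its rational expression at a small real value of λ (the series only converges there; the geometric statement concerns |λ| = 1, where the rational function is evaluated by continuation):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
f_on_H = [np.eye(1), A, np.eye(1) * np.linalg.det(A)]   # action on H^0, H^1, H^2

def lefschetz(n):
    # L(f^n) = sum_i (-1)^i tr of the induced map on H^i, for the n-th iterate
    return sum((-1) ** i * np.trace(np.linalg.matrix_power(M, n))
               for i, M in enumerate(f_on_H))

lam = 0.1
series = np.exp(sum(lefschetz(n) * lam ** n / n for n in range(1, 300)))
rational = np.prod([np.linalg.det(np.eye(len(M)) - lam * M) ** ((-1) ** (i + 1))
                    for i, M in enumerate(f_on_H)])
assert abs(series - rational) < 1e-9

# Formula (7): the torsion of the mapping torus is the reciprocal of |L_f(lam)|.
tau = np.prod([abs(np.linalg.det(np.eye(len(M)) - lam * M)) ** ((-1) ** i)
               for i, M in enumerate(f_on_H)])
assert abs(tau - 1 / abs(rational)) < 1e-9
```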
This means that the special value of the Lefschetz zeta function is given by the Reidemeister torsion of the corresponding mapping torus.

Let us consider an infra-nilmanifold M = Π\G and a continuous map f on M. As in Section 1, we consider the subgroup Π_+ of Π of index at most 2. Then Π_+ is also an almost Bieberbach group, as Π itself is, and the corresponding infra-nilmanifold M_+ = Π_+\G is a double covering of the infra-nilmanifold M = Π\G; the map f lifts to a map f_+ : M_+ → M_+ which has the same affine homotopy lift (d, D) as f. Let T_f and T_{f_+} be the mapping tori of f and f_+ respectively. We shall consider two bundles p : T_f → S^1 and p_+ : T_{f_+} → S^1 over the circle S^1. We assume here that E is a flat, complex vector bundle with one dimensional fibre and base S^1. We form its pullback p^*E over T_f and its pullback p_+^*E over T_{f_+}. We suppose that the bundles E ⊗ H^i_M and E ⊗ H^i_{M_+} are acyclic for all i. Then Theorem 3.6 and the formula (7) imply the following result about special values of the Reidemeister and Nielsen zeta functions.

Theorem 6.2. Let f be a homeomorphism on an infra-nilmanifold Π\G with an affine homotopy lift (d, D). Then
|R_f((−1)^n λ)^{(−1)^{p+n}}| = |R_φ((−1)^n λ)^{(−1)^{p+n}}| = |N_f((−1)^n λ)^{(−1)^{p+n}}| = |L_f(λ)| = τ(T_f; p^*E)^{−1} when Π = Π_+;
|R_f((−1)^n λ)^{(−1)^{p+n}}| = |R_φ((−1)^n λ)^{(−1)^{p+n}}| = |N_f((−1)^n λ)^{(−1)^{p+n}}| = |L_{f_+}(λ) L_f(λ)^{−1}| = τ(T_f; p^*E) τ(T_{f_+}; p_+^*E)^{−1} when Π ≠ Π_+,
where p is the number of real eigenvalues of D * which are > 1 and n is the number of real eigenvalues of D * which are < −1.
7. Jiang-type spaces and averaging formula for the Reidemeister numbers on infra-solvmanifolds of type (R)
A closed manifold M is called a Jiang-type space if for all continuous maps f : M → M ,
L(f) = 0 ⇒ N(f) = 0; L(f) ≠ 0 ⇒ N(f) = R(f).
A closed orientable manifold M is called a Jiang-type space for coincidences ([28]) if for any continuous maps f, g : N → M where N is any closed orientable manifold of equal dimension,
L(f, g) = 0 ⇒ N(f, g) = 0; L(f, g) ≠ 0 ⇒ N(f, g) = R(f, g).
It is well-known that Jiang spaces are of Jiang-type for coincidences. When N = M is a nilmanifold and φ, ψ are homomorphisms on the group of covering transformations induced by self-maps f, g on N , it is proven in [27,Theorem 2.3] that
N (f, g) > 0 ⇔ coin(φ, ψ) = 1 ⇔ R(f, g) < ∞
Further if one of the above holds then
R(f, g) = N (f, g) = |L(f, g)|.
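These identities are easy to check on a torus, the simplest nilmanifold. If f, g are induced by integer matrices D_1, D_2 with det(D_2 − D_1) ≠ 0, then L(f, g) = det(D_2 − D_1) and the coincidence points can be counted directly. The helper below is our own illustration; its name and the sample matrices are hypothetical, not from the text:

```python
import itertools
import numpy as np

def coincidence_count(D1, D2):
    # Coincidences of x -> D1 x and x -> D2 x on the torus R^n/Z^n:
    # solutions of (D2 - D1) x in Z^n with x in [0,1)^n.
    M = np.array(D2) - np.array(D1)
    d = round(abs(np.linalg.det(M)))
    assert d != 0, "coincidence set is infinite"
    Minv = np.linalg.inv(M)
    n = M.shape[0]
    pts = set()
    # Every solution is M^{-1} k mod 1; residues k mod d already cover all
    # classes, and each coordinate of a solution is a multiple of 1/d.
    for k in itertools.product(range(d), repeat=n):
        x = (Minv @ np.array(k)) % 1.0
        pts.add(tuple(np.round(x * d).astype(int) % d))
    return len(pts)

D1 = [[1, 0], [0, 1]]
D2 = [[3, 1], [0, 2]]
L = round(np.linalg.det(np.array(D2) - np.array(D1)))  # Lefschetz coincidence number
print(coincidence_count(D1, D2), abs(L))   # both equal |det(D2 - D1)| = 2
```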
Furthermore, nilmanifolds are Jiang-type spaces for coincidences, see [28]. Recall that if N is a finite connected complex and M is a nilmanifold then N(f, g) ≠ 0 ⇒ R(f, g) < ∞; if both N and M are nilmanifolds of equal dimension, then the two conditions are equivalent and in that case we have N(f, g) = R(f, g).

Recall what C. McCord proved in [51, Sec. 2]. Let S_i be simply connected solvable Lie groups of type (E) of equal dimension, and let Γ_i be lattices of S_i. Let D_i : S_1 → S_2 be Lie group homomorphisms such that D_i(Γ_1) ⊂ Γ_2. Write φ_i = D_i|_{Γ_1} : Γ_1 → Γ_2. Thus the D_i induce maps f_i between the orbit spaces M_i = Γ_i\S_i, special solvmanifolds of type (E). When the S_i are of type (R), we can always assume that each f_i is induced from a Lie group homomorphism D_i, see [42, Theorem 2.2] or [31, Theorem 4.2].
Denote C_γ := coin(γ ∘ D_1, D_2) and S_γ := p_1(coin(γ ∘ D_1, D_2)) for each γ ∈ Γ_2. We also consider the map D : S_1 → S_2 defined by D(s) = D_1(s)^{−1} D_2(s) for s ∈ S_1.

Lemma 7.1. The following are equivalent:
(1) coin(φ_1, φ_2) = 1.
(2) dim(C_1) = 0.
(3) D is injective.
(4) C_1 = S_1.
(5) ind(S_1) = ±1.
(6) ind(S_1) ≠ 0.
These statements are also valid for any other coincidence class S_γ, and all ind(S_γ) have the same sign. Hence N(f_1, f_2) = |L(f_1, f_2)|.
We generalize [27, Theorem 2.3] from nilmanifolds to special solvmanifolds of type (R).
Theorem 7.2. Let f 1 and f 2 be maps on a special solvmanifold Γ\S of type (R). Let φ 1 , φ 2 : Γ → Γ be homomorphisms induced by f 1 , f 2 respectively. Then the following are equivalent:
(a) N (f 1 , f 2 ) > 0. (b) coin(φ 1 , φ 2 ) = 1. (c) R(f 1 , f 2 ) < ∞.
Further if one of the above holds then
R(f 1 , f 2 ) = N (f 1 , f 2 ) = |L(f 1 , f 2 )|.
Proof. By Lemma 7.1, (a) ⇔ (b). Now we will show (b) ⇒ (c), and (c) ⇒ (a) together with the identity R(f_1, f_2) = N(f_1, f_2).
Let S be a simply connected solvable Lie group of type (R). Let N = [S, S] and Λ = S/N . Then N is nilpotent and Λ ∼ = R k for some k > 0. A lattice Γ of S yields a lattice N ∩ Γ of N . Moreover, the lattice Γ induces a short exact sequence 1 → N ∩ Γ → Γ → Γ/N ∩ Γ ∼ = Γ · N/N → 1 so that the following diagram is commutative
1 → N → S → Λ = S/N → 0
1 → N ∩ Γ → Γ → Γ·N/N → 0

(the vertical maps being the inclusions).
This gives rise to the fibration, called a Mostow fibration,
N ∩ Γ\N −→ M = Γ\S −→ Γ · N \S
over a torus base Γ·N\S with compact nilmanifold fiber N ∩ Γ\N. It is known that this fibration is orientable if and only if the solvmanifold M is a nilmanifold. Let E : S → S be a homomorphism. Then E induces a homomorphism E′ : N → N and hence a homomorphism Ē : Λ → Λ so that the following diagram is commutative:
1 → N → S → Λ → 0
    ↓E′  ↓E  ↓Ē
1 → N → S → Λ → 0
Hence the following diagram is commutative:
1 → N ∩ Γ → Γ → Γ·N/N → 0
     ↓φ′_i   ↓φ_i   ↓φ̄_i
1 → N ∩ Γ → Γ → Γ·N/N → 0
Denote Γ′ = N ∩ Γ and let Γ̄ = Γ·N/N. By [42, Theorem 2.2] or [31, Theorem 4.2], we may assume that f_1, f_2 are induced by Lie group homomorphisms D_1, D_2 : S → S respectively. Then
φ_i(γ) ∘ D_i = D_i ∘ γ for all γ ∈ Γ.
Evaluating at the identity of S, we obtain that φ i (γ) = D i (γ) for all γ ∈ Γ. So, φ i is the restriction of D i on Γ.
Assume (b): coin(φ_1, φ_2) = 1. Then coin(D_1, D_2) = 1 by Lemma 7.1. By taking differentials, we see that coin(D_1*, D_2*) = 0, or D_2* − D_1* is a linear isomorphism. We can write D_2* − D_1* as

D_2* − D_1* = [[ D̄_2* − D̄_1*, 0 ], [ *, D′_2* − D′_1* ]]

with respect to some linear basis of the Lie algebra of S. This implies that D̄_2* − D̄_1* is an isomorphism, and so coin(D̄_2*, D̄_1*) = 0, or coin(D̄_1, D̄_2) = 1 = coin(φ̄_1, φ̄_2). This happens on Λ ≅ R^k with the lattice Γ̄, and so on the torus Γ·N\S = Γ̄\Λ. Hence coin(φ̄_1, φ̄_2) = 1 implies R(φ̄_1, φ̄_2) < ∞.
On the other hand, since coin(φ′_1, φ′_2) = 1 follows from coin(φ_1, φ_2) = 1, by [27, Theorem 2.3] we have R(φ′_1, φ′_2) < ∞. Now the above commutative diagram induces a short exact sequence of the sets of Reidemeister classes

R(φ′_1, φ′_2) −→ R(φ_1, φ_2) −→ R(φ̄_1, φ̄_2) −→ 1.

Because both sets R(φ′_1, φ′_2) and R(φ̄_1, φ̄_2) are finite, it follows that the middle set R(φ_1, φ_2) is also finite. Hence R(φ_1, φ_2) < ∞.
Assume (c): R(φ_1, φ_2) < ∞. Then R(φ̄_1, φ̄_2) < ∞ on the torus Γ̄\Λ. We already know that this implies 0 < N(f̄_1, f̄_2) = R(φ̄_1, φ̄_2) and coin(φ̄_1, φ̄_2) = 1. Assume that R(φ′_1, φ′_2) = ∞. By [27, Theorem 2.3], coin(φ′_1, φ′_2) ≠ 1, and then by Lemma 7.1, coin(D′_1, D′_2) ≠ 1, and hence D′_2* − D′_1* is singular, which implies that D_2* − D_1* is also singular and so contradicts coin(φ_1, φ_2) = 1. Hence
R(φ′_1, φ′_2) < ∞ on the nilmanifold Γ′\N. This implies that 0 < N(f′_1, f′_2) = R(φ′_1, φ′_2). Hence we have

N(f_1, f_2) = |L(f_1, f_2)| ([51, Theorem 2.1])
= |det(D_2* − D_1*)| ([32, Theorem 3.1])
= |det(D̄_2* − D̄_1*)| · |det(D′_2* − D′_1*)|
= N(f̄_1, f̄_2) N(f′_1, f′_2)
= R(φ̄_1, φ̄_2) R(φ′_1, φ′_2)
≥ R(φ_1, φ_2) (exactness and finiteness of each Reidemeister set).
Consequently, since it is always true that N(f_1, f_2) ≤ R(φ_1, φ_2), we have the identity N(f_1, f_2) = R(φ_1, φ_2).

Immediately, from Theorem 7.2 we obtain the following: for any maps f_1, f_2 : M → M on a special solvmanifold M of type (R), we have
L(f_1, f_2) = 0 ⇒ N(f_1, f_2) = 0; L(f_1, f_2) ≠ 0 ⇒ N(f_1, f_2) = R(f_1, f_2).
Example 7.3. Consider the closed 3-manifolds with Sol-geometry. We refer to [31,Sec.6] for details about the Reidemeister numbers on these manifolds. These are infra-solvmanifolds Π\Sol of type (R). When Π = Π 0 or Π ± 2 , the corresponding manifold is a torus bundle over S 1 , and when Π = Π 3 or Π 6 , the manifold is a sapphire space. Only Π 0 \Sol is the special solvmanifold and the remaining manifolds are non-special, infra-solvmanifolds. For any homeomorphism f : Π\Sol → Π\Sol, let F * be its linearization. Then the following can be found in [31,Sec. 6]:
(1) When Π = Π 0 or Π + 2 , L(f ) = N (f ) = R(f ) = 4 only when F * is of type (II) with det F * = −1; otherwise, L(f ) = N (f ) = 0 and R(f ) = ∞.
(2) When Π = Π^−_2, F_* is always of type (I) and L(f) = N(f) = 0, but R(f) = ∞. (3) When Π = Π_3, L(f) = N(f) = 0, but R(f) = ∞.
(4) When Π = Π_6, L(f) = N(f), which is 0 or 2 according as det F_* = 1 or −1, but R(f) = ∞.

These results show that Theorem 7.2 (i.e., N(f) > 0 ⇔ R(f) < ∞; in this case, N(f) = R(f)) is true for the special solvmanifold Π_0\Sol and the infra-solvmanifolds Π^±_2\Sol and Π_3\Sol, but is not true anymore for the infra-solvmanifold Π_6\Sol.

Now we can state a practical formula for the Reidemeister number of a pair of continuous maps on an infra-solvmanifold of type (R). This is a straightforward generalization of [33, Theorem 6.11] and its proof from infra-nilmanifolds. On the other hand, we may assume that f, g are induced by the affine maps (d, D), (e, E) respectively. Then f̄, ḡ are induced by the Lie group homomorphisms μ(d) ∘ D, μ(e) ∘ E : S → S, where μ(·) is conjugation. If (a, A) ∈ Π is a preimage of ᾱ ∈ Π/Λ, then the transformation ᾱ on Λ\S is induced by the Lie group automorphism μ(a) ∘ A. By [32, Theorem 3.1] and Lemma 7.1, we have that
N(ᾱf̄, ḡ) = |det(Ad(e)E_* − Ad(a)A_* Ad(d)D_*)| = |det_Λ(E_* − A_*D_*)|.

The desired formula R(f, g) = (1/|Φ|) Σ_{A∈Φ} σ(det(E_* − A_*D_*)) will then follow. Here the determinant det_Λ is taken
with respect to any preferred basis of Λ. If we regard this as a basis of Γ, then we can see that
[Γ : Λ] det_Λ(E_* − A_*D_*) = det_Γ(E_* − A_*D_*),
for example see the proof of [33, Theorem 6.11]. Hence

R(f, g) = (1/[Π : Λ]) Σ_{ᾱ∈Π/Λ} σ(N(ᾱf̄, ḡ)) = (1/[Π : Λ]) Σ_{A∈Φ} [Γ : Λ] σ(det_Λ(E_* − A_*D_*)) = (1/|Φ|) Σ_{A∈Φ} σ(det_Γ(E_* − A_*D_*)).
The following corollaries generalize [13, Theorems 5.1 and 5.2] from infra-nilmanifolds to infra-solvmanifolds of type (R).

Proof. Because M is orientable, the Nielsen number N(f, g) is defined and is equal to, by [31, Theorem 4.5],
N(f, g) = (1/|Φ|) Σ_{A∈Φ} |det(E_* − A_*D_*)|.
Since R(f, g) < ∞, by Theorem 7.4, σ(det(E_* − A_*D_*)) is finite for all A ∈ Φ. By the definition of σ, we have σ(det(E_* − A_*D_*)) = |det(E_* − A_*D_*)| for all A ∈ Φ. This finishes the proof.

Corollary 7.6. Let f be a continuous map on an infra-solvmanifold Π\S of type (R) with holonomy group Φ and an affine homotopy lift (d, D). Then
R(f) = (1/|Φ|) Σ_{A∈Φ} σ(det(I − A_*D_*)),
and if R(f ) < ∞ then R(f ) = N (f ).
By Remarks 3.7 and 3.11, since the averaging formulas for the Lefschetz number and the Nielsen number are generalized from infra-nilmanifolds to infra-solvmanifolds of type (R) (see [32,42]), all results and proofs concerning the Nielsen number and the Nielsen zeta function in this article directly generalize to the class of infra-solvmanifolds of type (R). By Corollary 7.6 and [42,Theorem 4.3], the averaging formulas for the Reidemeister number and the Nielsen number on infra-solvmanifolds of type (R), we can generalize all results and proofs concerning the Reidemeister zeta function, whenever it is defined, to the class of infra-solvmanifolds of type (R). If R f (z) is defined, then R(f n ) < ∞ and so by Corollary 7.6 R(f n ) = N (f n ) > 0 for all n > 0 and thus R f (z) = N f (z). For example, we can generalize Theorems 3.5, 3.6, 3.9, and 6.2, and their proofs from infra-nilmanifolds to infra-solvmanifolds of type (R) to obtain the following:
N_f(z) = L_f((−1)^q z)^{(−1)^r},

where q is the number of real eigenvalues of D_* which are < −1 and r is the number of real eigenvalues of D_* of modulus > 1. When the Reidemeister zeta function R_f(z) is defined, we have R_f(z) = R_φ(z) = N_f(z).
Theorem 7.8. Let f be a continuous map on an infra-solvmanifold Π\S of type (R) with an affine homotopy lift (d, D). Then the Reidemeister zeta function, whenever it is defined, is a rational function and is equal to

R_f(z) = N_f(z) = L_f((−1)^n z)^{(−1)^{p+n}} when Π = Π_+;
R_f(z) = N_f(z) = (L_{f_+}((−1)^n z) L_f((−1)^n z)^{−1})^{(−1)^{p+n}} when Π ≠ Π_+,

where p is the number of real eigenvalues of D_* which are > 1 and n is the number of real eigenvalues of D_* which are < −1.

Theorem 7.9 (Functional Equation). Let f be a continuous map on an orientable infra-solvmanifold M = Π\S of type (R) with an affine homotopy lift (d, D). Then the Reidemeister zeta function, whenever it is defined, and the Nielsen zeta function have the following functional equations:

R_f(1/(dz)) = R_f(z)^{(−1)^m} ǫ^{(−1)^{p+n}} when Π = Π_+; R_f(1/(dz)) = R_f(z)^{(−1)^m} ǫ^{−1} when Π ≠ Π_+,

and

N_f(1/(dz)) = N_f(z)^{(−1)^m} ǫ^{(−1)^{p+n}} when Π = Π_+; N_f(1/(dz)) = N_f(z)^{(−1)^m} ǫ^{−1} when Π ≠ Π_+,

where d is the degree of f, m = dim M, ǫ is a constant in C^×, and σ = (−1)^n.

Theorem 7.10. Let f be a continuous map on an infra-solvmanifold of type (R) with an affine homotopy lift (d, D). Then the Nielsen zeta function N_f(z) and the Reidemeister zeta function R_f(z), whenever it is defined, have the same positive radius of convergence R which admits the estimation

R ≥ exp(−h) > 0, where h = inf{h(g) | g ≃ f}.

If 1 is not in the spectrum of D_*, the radius R of convergence of R_f(z) is

R = 1/N^∞(f) = 1/exp h(f̄) = 1/sp(⋀D_*).
Theorem 7.11. Let f be a homeomorphism on an infra-solvmanifold Π\S of type (R) with an affine homotopy lift (d, D). Then
|N_f((−1)^n λ)^{(−1)^{p+n}}| = |L_f(λ)| = τ(T_f; p^*E)^{−1} when Π = Π_+;
|N_f((−1)^n λ)^{(−1)^{p+n}}| = |L_{f_+}(λ) L_f(λ)^{−1}| = τ(T_f; p^*E) τ(T_{f_+}; p_+^*E)^{−1} when Π ≠ Π_+,
where p is the number of real eigenvalues of D * which are > 1 and n is the number of real eigenvalues of D * which are < −1.
Remark 7.12. One may formulate the above theorem also for the Reidemeister zeta function of a homeomorphism f on an infra-solvmanifold of type (R). However, it will be seen in Theorem 8.2 that in the case of R_f(z) such a manifold must be an infra-nilmanifold.

Remark 7.13. For any map f on an infra-solvmanifold of type (R), Theorem 1.2 states the relation between the Lefschetz numbers and the Nielsen numbers of iterates of f, and Corollary 7.6 states the relation of the Nielsen numbers with the Reidemeister numbers of iterates of f when these are finite. Via these relations some of the arithmetic, analytic, and asymptotic properties of the sequences N(f^n) and R(f^n) can be determined from the corresponding properties of the sequence L(f^n). For the sequence L(f^n), all these properties were thoroughly discussed in [35, Sect. 3.1]; see also [2].

8. The Reidemeister zeta function is never defined for any homeomorphism of an infra-solvmanifold of type (R) which is not an infra-nilmanifold

Consider now as an example the closed 3-manifolds with Sol-geometry. We refer to [31, Sec. 6] for details about the Reidemeister numbers on these manifolds. These are infra-solvmanifolds Π\Sol of type (R). Let Π_1 be a lattice of Sol:
Π_1 = Γ_A = ⟨ a_1, a_2, τ | [a_1, a_2] = 1, τ a_i τ^{−1} = A(a_i) ⟩,
where A is a 2 × 2 integer matrix of determinant 1 and trace > 2. Consider a homomorphism φ on Π_1 of type (III), i.e., φ is given by the form φ(a_1) = φ(a_2) = 1, φ(τ) = a_1^p a_2^q τ^r, r ≠ ±1.
Then it is shown in [31, Theorem 6.1] that R(φ) = |1 − r|. We can observe easily that φ n is also of type (III) and R(φ n ) = |1 − r n | for all n > 0. Hence
R_φ(z) = exp( Σ_{n=1}^∞ |1 − r^n| z^n / n ) = 1/(1 − z) when r = 0; = (1 − (r/|r|)z)/(1 − |r|z) when |r| > 1.
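The closed forms above can be confirmed against the defining series (a direct numerical check; the truncation length and the sample values of r and z are arbitrary):

```python
import math

def R_series(r, z, terms=400):
    # exp( sum_{n>=1} |1 - r^n| z^n / n ), truncated; converges for |z| < 1/max(1, |r|)
    return math.exp(sum(abs(1 - r ** n) * z ** n / n for n in range(1, terms)))

def R_closed(r, z):
    if r == 0:
        return 1 / (1 - z)
    return (1 - (r / abs(r)) * z) / (1 - abs(r) * z)

for r in (0, 2, -3):
    for z in (0.05, 0.1):
        assert abs(R_series(r, z) - R_closed(r, z)) < 1e-9
print("ok")
```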
It can be seen also that if φ is not of type (III), then R(φ) = ∞ or R(φ^2) = ∞. Thus the associated Reidemeister zeta function is not defined. A similar phenomenon happens for the infra-solvmanifolds Π^±_2\Sol. For the remaining infra-solvmanifolds Π_3\Sol and Π_6\Sol, it is shown that only the trivial map has a finite Reidemeister number, which is 1. That is, only the trivial map defines the Reidemeister zeta function. The homomorphisms above are eventually commutative, and in fact, for every eventually commutative homomorphism the Reidemeister zeta function, whenever it is defined, is a rational function; see Theorem 9 and Theorem 10 in [20].
We will show now that if the Reidemeister zeta function is defined for a homeomorphism on an infra-solvmanifold of type (R), then the manifold must be an infra-nilmanifold.
Recall the following
R(f^n) = (1/[Π : Λ]) Σ_{ᾱ∈Π/Λ} R(ᾱf̄^n).
Assume now that f defines the Reidemeister zeta function. Then R(f^n) < ∞ for all n > 0. The above averaging formula implies that R(f̄^n) < ∞ for all n. By Theorem 7.2, we must have

R(f̄^n) = N(f̄^n) = |L(f̄^n)| > 0.

Since L(f̄^n) = det(I − D_*^n) ≠ 0 for all n > 0 by [32, Theorem 3.1], this would imply that the differential D_* of D has no eigenvalues that are roots of unity. By Proposition 8.1, S must be nilpotent.
Remark 8.3. Let A be an Anosov diffeomorphism on an infra-nilmanifold. Then an iterate A^n is also an Anosov diffeomorphism for every n ≥ 1. The Reidemeister number of an Anosov diffeomorphism is always finite [10]. Hence the Reidemeister zeta function R_A(z) is well-defined. From Theorem 3.6 and Theorem 3.9 it follows that the Reidemeister zeta function R_A(z) of an Anosov diffeomorphism on an infra-nilmanifold is a rational function with a functional equation. It is known that a nilmanifold modelled on a free c-step nilpotent Lie group on r generators admits an Anosov diffeomorphism if and only if r > c [5]. Hence the Reidemeister zeta function of an Anosov diffeomorphism on such a nilmanifold is well-defined if r > c and is a rational function with a functional equation.

9. The Artin-Mazur zeta functions on infra-solvmanifolds of type (R)
Let f be a continuous map on a topological space X. Then the Artin-Mazur zeta function of f is defined as follows:
AM_f(z) = exp( Σ_{n=1}^∞ (F(f^n)/n) z^n ),
where F (f ) is the number of isolated fixed points of f . Remark 9.2. The above proposition is a straightforward generalization of [40, Proposition 1] from infra-nilmanifolds to infra-solvmanifolds of type (E). Further, the linear part of the affine map F need not be an automorphism. Proposition 9.1 is proved when the manifold is a special solvmanifold of type (E) and the map is induced by a homomorphism in Lemma 7.1, [51]. In fact, the converse is also proved. That is, every essential fixed point class consists of a single element. We will prove the converse of the proposition on infra-solvmanifolds of type (R). Then we have an averaging formula, [42,Theorem 4.2],
N(f) = (1/[Π : Λ]) Σ_{ᾱ∈Π/Λ} N(ᾱ ∘ f̄).
Assume that f has an essential fixed point class. The averaging formula tells us that this essential fixed point class of f is lifted to an essential fixed point class of some ᾱ ∘ f̄. That is, there is α = (a, A) ∈ Π such that the fixed point class p′(Fix(α ∘ f̃)) of ᾱ ∘ f̄ is essential (and so p(Fix(α ∘ f̃)) is an essential fixed point class of f), where f̃ denotes the lift of f̄ to S. It suffices to show that the fixed point class p′(Fix(α ∘ f̃)) consists of only one point.
Let F = α ∘ f̃ = (a, A)(d, D) := (e, E) be the affine map on S, and let F̄ = ᾱ ∘ f̄. Then p′(Fix(F)) is essential and N(F̄) = |det(I − E_*)| ≠ 0. Choose x ∈ Fix(F) = Fix((e, E)). Then the left multiplication by x^{−1},
ℓ_{x^{−1}} : y ∈ Fix((e, E)) ↦ x^{−1}y ∈ Fix(E),
is a bijection. Further, since exp : 𝔰 → S is a diffeomorphism, where 𝔰 is the Lie algebra of S, it follows that Fix(E) ↔ fix(E_*) = ker(I − E_*). Since I − E_* is invertible, we see that Fix(F), and hence p′(Fix(F)) and p(Fix(F)), consist of a single element.

By the main result in [50], if f is a map on an infra-solvmanifold of type (R) which is induced by an affine map and is homotopically periodic, then we have AM_f(z) = N_f(z) = L_f(z) as N(f^n) = L(f^n). According to Theorem 10.3, if f is a virtually unipotent affine diffeomorphism on an infra-solvmanifold of type (R), then we still have AM_f(z) = N_f(z) = L_f(z).

Example 10.1. Let g = ((x, y), t) ∈ Sol. Then the conjugation τ_g : Sol → Sol can be computed explicitly. Let Sol_Φ = {((x, y), t) ∈ Sol | x = y = 0}. Fix g = ((0, 0), t) ∈ Sol_Φ with t ≠ 0 and consider (g, τ_{g^{−1}}) ∈ Aff(Sol). Then (g, τ_{g^{−1}}) centralizes Π^+_2, and (g, τ_{g^{−1}}) induces an affine diffeomorphism f on Π^+_2\Sol given by x̄ ↦ x̄ḡ. Hence the affine diffeomorphism f is homotopic to the identity map. However, f is not virtually unipotent since (τ_{g^{−1}})_* = Ad(g^{−1}) is not virtually unipotent.
Remark 10.2. Recall [46, Lemma 3.6], which states that if an affine diffeomorphism f on an infra-nilmanifold M is homotopic to a virtually unipotent affine diffeomorphism on M, then f is virtually unipotent. However, the above example shows that this statement is not true in general for infra-solvmanifolds of type (R). Namely, there is an affine diffeomorphism on an infra-solvmanifold of type (R) which is not virtually unipotent, but is homotopic to a virtually unipotent affine diffeomorphism.
Furthermore, in the above example, f ≃ id is a homotopically periodic map which is not virtually unipotent. Therefore [46,Proposition 3.11] is not true in general for infra-solvmanifolds of type (R). Note also that there is a unipotent affine diffeomorphism on the torus which is not homotopically periodic, see [46,Remark 3.12].
Consequently, on infra-nilmanifolds homotopically periodic maps are virtually unipotent maps. But on infra-solvmanifolds of type (R), there is no relation between homotopically periodic maps and virtually unipotent maps.

Theorem 10.3. Let f be a virtually unipotent affine diffeomorphism on an infra-solvmanifold of type (R). Then AM_f(z) = N_f(z) = L_f(z).

Proof. Let M be an infra-solvmanifold of type (R) with holonomy group Φ. Then we can assume f is an affine diffeomorphism induced by an affine map (d, D) such that D_* is virtually unipotent. This implies that (d, D) normalizes Π, and hence it follows that D normalizes the holonomy group Φ. By the previous observation (2), since D_* is virtually unipotent, so are all A_*D_* where A ∈ Φ, and hence by [46,
Gauss congruences for the Nielsen and Reidemeister numbers
In number theory, the following Gauss congruence for integers holds:

Σ_{d|n} μ(d) a^{n/d} ≡ 0 mod n

for any integer a and any natural number n. Here μ is the Möbius function.
In the case of a prime power n = p^r, the Gauss congruence turns into the Euler congruence. Indeed, for n = p^r the Möbius function μ(n/d) = μ(p^r/d) is different from zero only in two cases: when d = p^r and when d = p^{r−1}. Therefore, from the Gauss congruence we obtain the Euler congruence

a^{p^r} ≡ a^{p^{r−1}} mod p^r.

This congruence is equivalent to the following classical Euler's theorem:

a^{φ(n)} ≡ 1 mod n where (a, n) = 1.

These congruences have been generalized from integers a to some other mathematical invariants such as the traces of integer matrices A and the Lefschetz numbers of iterates of a map, see [47, 61]:
(8) Σ_{d|n} μ(d) tr(A^{n/d}) ≡ 0 mod n,

(9) tr(A^{p^r}) ≡ tr(A^{p^{r−1}}) mod p^r.
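Both congruences are easy to verify experimentally with exact integer arithmetic (the matrix below is an arbitrary sample, not from the text):

```python
def mobius(n):
    # Moebius function by trial division
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            mu = -mu
        p += 1
    return -mu if n > 1 else mu

def trace_power(A, n):
    # tr(A^n) for a small integer matrix, exact arithmetic
    m = len(A)
    P = A
    for _ in range(n - 1):
        P = [[sum(P[i][k] * A[k][j] for k in range(m)) for j in range(m)]
             for i in range(m)]
    return sum(P[i][i] for i in range(m))

A = [[2, 1], [1, 1]]
# Gauss congruence (8):
for n in range(1, 40):
    s = sum(mobius(d) * trace_power(A, n // d) for d in range(1, n + 1) if n % d == 0)
    assert s % n == 0
# Euler congruence (9) for p = 2, 3 and r = 1, 2, 3:
for p in (2, 3):
    for r in (1, 2, 3):
        assert (trace_power(A, p ** r) - trace_power(A, p ** (r - 1))) % p ** r == 0
print("ok")
```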
A. Dold in [15] proved by a geometric argument the following congruence for the fixed point index of iterates of a map f on a compact ANR X and any natural number n:

Σ_{d|n} μ(d) ind(f^{n/d}, X) ≡ 0 mod n,

and consequently, by using the Hopf theorem,

(DL) Σ_{d|n} μ(d) L(f^{n/d}) ≡ 0 mod n.

These congruences are now called the Dold congruences. It is also shown in [47] (see also [61, Theorem 9]) that the above congruences (8) and (9) are equivalent.

Consider a self-map f of the circle of degree d. When d ≠ ±1, all R(f^n) < ∞ and so the congruences hold. When d = 1, the congruence for the Nielsen number is obviously true. We assume d = −1. So N(f^n) = 2 for odd n and 0 for even n. For n = 2 · 3² · 5, we have

Σ_{d|n} μ(d) N(f^{n/d}) = Σ_{d|n, d even} μ(d) · 2 = 2(μ(2) + μ(2·3) + μ(2·3²) + μ(2·3·5) + μ(2·3²·5)) = 2((−1) + 1 + 0 + (−1) + 0) = −2 ≢ 0 mod 2 · 3² · 5.

Thus the congruence (DN) fails.
Next we consider the congruences (EN) and (ER). If d ≥ 0, then L(f^n) = 1 − d^n = −N(f^n) = −R(f^n). The congruence (EL), L(f^{p^r}) ≡ L(f^{p^{r−1}}) mod p^r, implies the other congruences (EN) and (ER). Assume d < 0. The congruence (EL) L(f^{p^r}) ≡ L(f^{p^{r−1}}) mod p^r is exactly 1 − d^{p^r} ≡ 1 − d^{p^{r−1}} mod p^r, which implies that d^{p^r} ≡ d^{p^{r−1}} mod p^r and so d^{p^r} ± 1 ≡ d^{p^{r−1}} ± 1 mod p^r. Thus the other congruences (EN) and (ER) hold true.
In summary, (EN) and (ER) are true, but (DN) is not true.
The congruence (DR) was previously known for automorphisms of almost polycyclic groups ([23, p. 195]) and, for all continuous maps, only on nilmanifolds ([20, Theorem 58]), provided that all Reidemeister numbers of iterates of the maps are finite. We generalize these to infra-solvmanifolds of type (R).
Theorem 2.4. Then Π_0 is a subgroup of Π of index 2, Π_0 is also an SB-group, and the corresponding infra-solvmanifold M_0 = Π_0\S is a double covering of M = Π\S; the maps f, g lift to maps f_0, g_0 : M_0 → M_0 which have the same affine homotopy lifts (d, D), (e, E) as f and g.
Proposition 3.2. Let f be a continuous map on an infra-nilmanifold Π\G with holonomy group Φ. Let f have an affine homotopy lift (d, D) and let φ : Π → Π be the homomorphism induced by f. Then
We denote generators of H^m(M; Q) and H^m(M_+; Q) by [M] and [M_+], respectively. The above diagram shows that x^*(π^*([M])) = π^*([M]), which induces that x^*([M_+]) = [M_+] as π^* is injective, and hence x acts trivially on H^m(M_+; Q). In other words, H^m(M_+; Q) = H^m(M_+; Q)^{Π/Π_+} and π^* : H^m(M; Q) → H^m(M_+; Q) is an isomorphism. This implies that f and f_+ have the same degree.
induced on the exterior algebra ⋀R^m := ⊕_{ℓ=0}^m ⋀^ℓ R^m of G considered as the linear space R^m.
Theorem 4.1 (see also the proof of [48, Theorem 1.5]). Let M = Γ\S be a special solvmanifold of type (R) and let f be a continuous map on M with a Lie group homomorphism D : S → S as a homotopy lift. Then we have N^∞(f) = sp(⋀D_*) provided that 1 is not in the spectrum of D_*.

Proof. We give a very elementary proof of this theorem. Compare with the proof of [48, Theorem 1.5], in which the authors are interested only in the case of positive topological entropy, which excludes the case N^∞(f) = 1. We first consider the case where N(f^n) ≠ 0 for all n.
Corollary 4.2. Let D be a matrix with eigenvalues λ_1, ..., λ_m, counted with multiplicities. Let L(D^n) = det(I − D^n). If 1 is not in the spectrum of D, then Growth(L(D^n)) = sp(⋀D).
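Corollary 4.2 can be checked directly with exact integer arithmetic (the matrix below is an arbitrary sample with 1 not in its spectrum; sp(⋀D) is the product of max(1, |λ_i|) over the eigenvalues):

```python
import math

def mat_mul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

D = [[3, 1], [1, 1]]                      # eigenvalues 2 +/- sqrt(2)
n = 200
P = [[1, 0], [0, 1]]
for _ in range(n):
    P = mat_mul(P, D)                     # exact integer power D^n
L_n = (1 - P[0][0]) * (1 - P[1][1]) - P[0][1] * P[1][0]   # det(I - D^n)
growth = math.exp(math.log(abs(L_n)) / n)  # |L(D^n)|^(1/n)

eigs = (2 + math.sqrt(2), 2 - math.sqrt(2))
sp_wedge = 1.0
for lam in eigs:
    sp_wedge *= max(1.0, abs(lam))        # sp(wedge D) = prod max(1, |lambda_i|)
assert abs(growth - sp_wedge) < 1e-6
print(growth, sp_wedge)
```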
Theorem 4.3. Let f be a continuous map on an infra-solvmanifold of type (R) with an affine homotopy lift (d, D). Then we have N^∞(f) = sp(⋀D_*) provided that 1 is not in the spectrum of D_*.
Remark 4.4. The above theorem was proved when M is a special solvmanifold of type (R); see Theorem 4.1 and the proof of [48, Theorem 1.5]. In the paper [48], it is assumed that sp(D_*) > 1. Since N(f) = |det(I − D_*)|, 1 is not in the spectrum of D_* if and only if f is not homotopic to a fixed point free map. Now, we provide an example of the asymptotic Nielsen numbers.
Example 4.5. Let f : Π\R² → Π\R² be any continuous map on the Klein bottle Π\R² of type (r, ℓ, q). Recall from [38, Theorem 2.3] and its proof that r is odd or q = 0, and

N(f^n) = |q^n(1 − r^n)| when r is odd and q ≠ 0; N(f^n) = |1 − r^n| when q = 0.

Assume q = 0. If |r| ≤ 1, then N(f^n) ≤ 2 and so N^∞(f) = 1; if |r| > 1, then

log lim sup_{n→∞} N(f^n)^{1/n} = lim sup_{n→∞} (1/n) log |1 − r^n| = log |r|.

Thus N^∞(f) = max{1, |r|}. Assume q ≠ 0 and r is odd. If r = 1, then N(f^n) = 0 ⇒ N^∞(f) = 1. If r ≠ 1, then

lim sup_{n→∞} (1/n) log |q^n(1 − r^n)| = log |q| when |r| ≤ 1, i.e., r = −1; = log |qr| when |r| > 1.

Thus

N^∞(f) = 1 when q ≠ 0 and r = 1; N^∞(f) = max{1, |q|, |qr|} when q ≠ 0 and r ≠ 1 is odd.

On the other hand, since sp(⋀D_*) is the largest modulus of an eigenvalue of ⋀D_*, it follows that sp(⋀D_*) = max{1, |r|, |q|, |qr|}. Hence:

(1) If r = 1, then N^∞(f) = 1 and sp(⋀D_*) = |q| ≥ 1 (since r = 1 is odd and so q ≠ 0).
(2) If r = 0 (even), then q must be 0 and so N^∞(f) = sp(⋀D_*) = 1.
(3) Otherwise, N^∞(f) = sp(⋀D_*).

We observe explicitly in this example that the condition that 1 is not in the spectrum of D_* induces the identity N^∞(f) = sp(⋀D_*). If q = 0, then sp(D_*) = |r| and so N^∞(f) = max{1, |r|} = sp(⋀D_*). If q ≠ 0, then r is odd and sp(D_*) = max{|r|, |q|} > 1; if sp(D_*) = |r| ≥ |q|, then |r| > 1 and so N^∞(f) = |qr| = sp(⋀D_*); if sp(D_*) = |q| ≥ |r|, then |q| > 1 and |r| > 1 or r = −1 (because r cannot be 1), so N^∞(f) = |qr| = sp(⋀D_*).
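The identity N^∞(f) = sp(⋀D_*) in this example can be checked numerically; the sample values of (r, q) below are our own choices and avoid the excluded case r = 1:

```python
import math

def N_iter(r, q, n):
    # Nielsen number of f^n on the Klein bottle for a map of type (r, l, q)
    # (the parameter l does not enter the formula)
    return abs((q ** n if q else 1) * (1 - r ** n))

def growth_estimate(r, q, nmax=999):
    # approximate N^infty(f) = max(1, limsup_n N(f^n)^(1/n)) from a tail of iterates
    best = 1.0
    for n in range(nmax - 10, nmax + 1):
        v = N_iter(r, q, n)
        if v:
            best = max(best, math.exp(math.log(v) / n))
    return best

def sp_wedge(r, q):
    # sp(wedge D_*) = max{1, |r|, |q|, |qr|} as computed in the example
    return max(1, abs(r), abs(q), abs(q * r))

for r, q in [(3, 2), (-1, 3), (2, 0), (-2, 0)]:
    assert abs(growth_estimate(r, q) - sp_wedge(r, q)) < 0.02
print("ok")
```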
h(f, ǫ) := lim sup_{n→∞} (1/n) log s_n(ǫ, f), h(f) := lim sup_{ǫ→0} h(f, ǫ).
for 0 ≤ i < n, and let c_n = c_0. By the choice of δ and ǫ, f ∘ c_i ≃ c_{i+1} for all i, so f^n ∘ c_0 ≃ c_n = c_0. This means x, y are in the same fixed point class of f^n, contradicting the construction of E_n.

This inequality is remarkable in that it does not require smoothness of the map and provides a common lower bound for the topological entropy of all maps in a homotopy class. Let H^*(f) : H^*(M; R) → H^*(M; R) be the linear map induced by f on the total cohomology H^*(M; R) of M with real coefficients.
Theorem 5.2. Let f be a continuous map on an infra-solvmanifold M of type (R) with an affine homotopy lift (d, D). If 1 is not in the spectrum of D_*, then h(f) ≥ log sp(f). If f̄ is the map on M induced by the affine map (d, D), then

h(f) ≥ h(f̄) ≥ log sp(f),  h(f̄) = log sp(D̂_*) = log N^∞(f̄) = log N^∞(f).

Hence f̄ minimizes the entropy in the homotopy class of f.

Proof. Let f̄ be the map on M induced by the affine map (d, D); thus f is homotopic to f̄. By [42, Lemma 2.1], there is a special solvmanifold which regularly and finitely covers the infra-solvmanifold M so that f can be lifted to a map f′ on the solvmanifold. We also remark that the Lie group homomorphism τ_d ∘ D induces a map f̄′ on the solvmanifold so that f̄′ lifts f̄, f′ is homotopic to f̄′, the linearization D_* of f is also a linearization of the lift f′, and the topological entropies of f, f̄ and of their lifts f′, f̄′ are the same, i.e., h(f) = h(f′) and h(f̄) = h(f̄′). Moreover, since the spectral radius is a homotopy invariant, sp(f) = sp(f̄) and sp(f′) = sp(f̄′). It is also known that sp(f) ≤ sp(f′); see, for example, [49, Proposition 2]. Now observe that

log sp(D̂_*) = log N^∞(f̄) (Theorem 4.3)
= log N^∞(f) (homotopy invariance of N^∞(·))
≤ h(f̄) (Lemma 5.1)
= h(f̄′) (lift under a finite regular cover)
≤ log sp(D̂_*).

The fundamental step is the last inequality. It follows from the estimate of the topological entropy of a C¹ self-map of a compact manifold M,

h(f) ≤ lim sup_{n→∞} (1/n) log sup_{x∈M} ‖Df^n(x)‖,

together with

log sp(D̂_*) ≥ log sp(f̄) ([48, Theorem 2.1]) = log sp(f) (homotopy invariance of sp(·)).
Remark 5.3. If sp(D_*) ≤ 1, then sp(D̂_*) = 1 and so we have the obvious inequality h(f) ≥ 0 = log sp(D̂_*) ≥ log sp(f).
Theorem 7.4. Let M = Π\S be an infra-solvmanifold of type (R) with holonomy group Φ. Let f, g : M → M be continuous maps with affine homotopy lifts (d, D), (e, E), respectively. Then

where A_*, D_* and E_* induced by A, D and E are expressed with respect to a preferred basis of Π ∩ S, and where σ : R → R ∪ {∞} is given by σ(0) = ∞ and σ(x) = |x| for all x ≠ 0.

Proof. Choose a fully invariant subgroup Λ ⊂ Γ := Π ∩ S of Π with finite index ([42, Lemma 2.1]). Then f, g lift to maps f̄, ḡ on the special solvmanifold Λ\S of type (R), and by [30, Corollary 1.3] we have

R(f, g) = (1/[Π : Λ]) Σ_{ᾱ∈Π/Λ} R(ᾱf̄, ḡ).

By Theorem 7.2, R(ᾱf̄, ḡ) = σ(N(ᾱf̄, ḡ)) for all ᾱ ∈ Π/Λ.
Corollary 7.5. Let M = Π\S be an orientable infra-solvmanifold of type (R). Let f, g : M → M be continuous maps. If R(f, g) < ∞, then R(f, g) = N(f, g).
Corollary 7.6. Let M = Π\S be an infra-solvmanifold of type (R) with holonomy group Φ. Let f : M → M be a continuous map with an affine homotopy lift (d, D).
Theorem 7.7. Let f be a continuous map on an infra-solvmanifold of type (R) with an affine homotopy lift (d, D). Assume N(f) = |L(f)|. Then the Nielsen zeta function N_f(z) is a rational function and is equal to

where p is the number of real eigenvalues of D_* which are > 1 and n is the number of real eigenvalues of D_* which are < −1. If |d| = 1, then ǫ = ±1.

Theorem 7.10. Let f be a continuous map on an infra-solvmanifold of type (R) with an affine homotopy lift (d,
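The rationality in Theorem 7.7 can be made concrete in the simplest setting: for a degree-d circle map with d ≥ 2 (see Example 11.3 below), N(f^n) = d^n − 1 and exp(Σ_{n≥1} N(f^n) z^n/n) = (1 − z)/(1 − dz). The following sketch (ours, not from the paper) checks this identity on Taylor coefficients with exact rational arithmetic.

```python
from fractions import Fraction

def zeta_coeffs(nielsen, K):
    """Taylor coefficients of N_f(z) = exp(sum_{n>=1} N(f^n) z^n / n) up to z^K."""
    a = [Fraction(0)] + [Fraction(nielsen(n), n) for n in range(1, K + 1)]
    c = [Fraction(1)] + [Fraction(0)] * K
    # exp of a power series: k*c_k = sum_{j=1}^{k} j*a_j*c_{k-j}
    for k in range(1, K + 1):
        c[k] = sum(j * a[j] * c[k - j] for j in range(1, k + 1)) / k
    return c

def rational_coeffs(d, K):
    """Taylor coefficients of (1 - z) / (1 - d z)."""
    g = [Fraction(d) ** k for k in range(K + 1)]  # expansion of 1/(1 - d z)
    return [g[0]] + [g[k] - g[k - 1] for k in range(1, K + 1)]
```

For d = 3 the first coefficients on both sides are 1, 2, 6, 18, ..., as a quick hand computation confirms.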
Proposition 9.1 ([40, Proposition 1]). Let f be a continuous map on an infra-solvmanifold Π\S of type (E) induced by an affine map F : S → S. For any α ∈ Π, Fix(α ∘ F) is an empty set or path connected. Hence every nonempty fixed point class of f is path connected, and every isolated fixed point class forms an essential fixed point class.

Proof. Let x, y ∈ Fix(α ∘ F), so that the affine map α ∘ F fixes x and y. Writing α ∘ F = (d, D) ∈ S ⋊ Endo(S), we see that
• (d, D)(x) = x ⇒ D(x) = d⁻¹x,
• (d, D)(y) = y ⇒ D(y) = d⁻¹y,
• (x, I)⁻¹(α ∘ F)(x, I) = (x, I)⁻¹(d, D)(x, I) = (x⁻¹dD(x), D) = (1, D),
and D fixes 1 and x⁻¹y. Since S is of type (E), exp : S → S is a diffeomorphism with inverse log. Let X = log(x⁻¹y) in the Lie algebra of S. Then the 1-parameter subgroup {exp(tX) | t ∈ R} of S is fixed by the endomorphism D. Consequently, the affine map α ∘ F fixes the 'line' connecting the points x and y.

In particular, p(Fix(α ∘ F)) is the isolated class {x} if and only if Fix(α ∘ F) is the isolated set {x}, where p : S → Π\S is the covering projection. Further, the index of the fixed point class p(Fix(α ∘ F)) = {x} is det(I − df_x) = ± det(I − d(α ∘ F)_x) = ± det(I − D_*), where the second identity follows from the fact that x⁻¹(α ∘ F)x = D. Since D fixes only the identity element of S, D_* fixes only the zero element of the Lie algebra of S and so I − D_* is nonsingular. Hence the fixed point class p(Fix(α ∘ F)) is essential.
Proposition 9.3. Let f be a continuous map on an infra-solvmanifold of type (R) induced by an affine map. Then every essential fixed point class of f consists of a single element.

Proof. Let f̃ = (d, D) be the affine map on the connected, simply connected solvable Lie group S of type (R) which induces f : Π\S → Π\S. Then f induces a homomorphism φ : Π → Π. By [42, Lemma 2.1], we can choose a fully invariant subgroup Λ ⊂ Π ∩ S of Π with finite index. Hence φ(Λ) ⊂ Λ. This implies that f̃ induces a map f̄ on Λ\S.
Remark 9.4. In Propositions 9.1 and 9.3, we have shown that for any continuous map f on an infra-solvmanifold of type (R) induced by an affine map, the isolated fixed points of f are the essential fixed point classes of f. That is, F(f) = N(f). Similarly, F(f^n) = N(f^n) for all n. Therefore, by Theorem 7.8 and Theorem 7.9 we have:

Theorem 9.5. Let f be a continuous map on an infra-solvmanifold of type (R) induced by an affine map. Then AM_f(z) = N_f(z), i.e., AM_f(z) is a rational function with a functional equation.
10. The Nielsen numbers of virtually unipotent maps on infra-solvmanifolds of type (R)

A square matrix is unipotent if all of its eigenvalues are 1. A square matrix is called virtually unipotent if some power of it is unipotent. Let M = Π\S be an infra-solvmanifold of type (R). Let f : M → M be a continuous map with an affine homotopy lift (d, D) ∈ Aff(S). Then f is homotopic to the diffeomorphism on M induced by the affine map (d, D), called an affine diffeomorphism. If, in addition, D_* is virtually unipotent, then we say that f is a virtually unipotent map. Now we observe the following:
(1) A matrix is virtually unipotent if and only if all of its eigenvalues have absolute value 1; see [59, Lemma 11.6].
(2) Let Φ be a finite subgroup of GL(n, R) and let D ∈ GL(n, R) normalize Φ. If D is virtually unipotent, then for all A ∈ Φ, AD is virtually unipotent; see [46, Lemma 3.2].

Example 10.1. Consider the 3-dimensional Lie group Sol = R² ⋊_σ R, where

σ(t) = ( e^t  0 ; 0  e^{−t} ).
τ_g : ((u, v), s) ↦ ((e^t u − e^s x + x, e^{−t} v − e^{−s} y + y), s),

and Ad(g) : sol → sol is given, with respect to a basis of sol, by a matrix with eigenvalues e^t, e^{−t} and 1. Hence Ad(g) is not virtually unipotent unless t = 0. Now consider the infra-solvmanifold Π₂⁺\Sol. Its holonomy group is Φ = {((x
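The notion "some power is unipotent" can be tested mechanically for small integer matrices. The following sketch (ours, not from the paper) checks unipotency via the nilpotency criterion (P − I)^dim = 0 and searches a bounded range of powers; a 90-degree rotation matrix is virtually unipotent (its 4th power is the identity), while diag(2, 1) is not.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_unipotent(M):
    """An n x n matrix M is unipotent iff (M - I)^n = 0."""
    n = len(M)
    N = [[M[i][j] - (1 if i == j else 0) for j in range(n)] for i in range(n)]
    P = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(n):
        P = matmul(P, N)
    return all(v == 0 for row in P for v in row)

def is_virtually_unipotent(M, max_power=12):
    """Search whether M^k is unipotent for some 1 <= k <= max_power."""
    P = M
    for _ in range(max_power):
        if is_unipotent(P):
            return True
        P = matmul(P, M)
    return False
```

The bound max_power is an artifact of the sketch; for integer matrices with all eigenvalues of modulus 1 a unipotent power always exists, but its exponent can be large.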
Theorem 10.3. If f is a virtually unipotent map on an infra-solvmanifold of type (R), then L(f) = N(f).
Lemma 4.2], det(I − A_*D_*) ≥ 0. Using the averaging formula [42, Theorem 4.3], we obtain

N(f) = (1/|Φ|) Σ_{A∈Φ} |det(I − A_*D_*)| = (1/|Φ|) Σ_{A∈Φ} det(I − A_*D_*) = L(f).
and (DL) are equivalent. For example, (8) ⇒ (DL) follows easily from the following observation: let A_i be an integer matrix obtained from the homomorphism f_{i*} : H_i(X; Q) → H_i(X; Q); then L(f^{p^r}) ≡ L(f^{p^{r−1}}) mod p^r. Now we shall consider the following congruences for the Nielsen numbers and Reidemeister numbers:

Σ_{d|n} μ(d) R(f^{n/d}) ≡ 0 mod n, (DR)
R(f^{p^r}) ≡ R(f^{p^{r−1}}) mod p^r, (ER)
Σ_{d|n} μ(d) N(f^{n/d}) ≡ 0 mod n, (DN)
N(f^{p^r}) ≡ N(f^{p^{r−1}}) mod p^r, (EN)

and find the relations between them and the conditions on spaces, groups and/or maps for which the congruences hold true.

Example 11.1. Let f be a map on an infra-solvmanifold of type (R) which is homotopically periodic or virtually unipotent. Then N(f^n) = L(f^n) for all n > 0. The congruence (DL) immediately implies the congruences (DN) and (EN).

Example 11.2. Let f : S² ∨ S⁴ → S² ∨ S⁴ be the map considered in Example 3.1. Then

L(f) = N(f) = 0, L(f^k) = 2 + (−2)^k, N(f^k) = 1 for all k > 1, R(f^k) = 1 for all k ≥ 1.

Thus we have no congruence (DN), while we do have the nice congruences (DR) and (DL).
Example 11.3. Let f be a map on the circle S¹ of degree d. Then N(f^n) = |L(f^n)| = |1 − d^n| (= R(f^n) when d ≠ ±1).
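For this circle example, the Gauss-type congruence (DN), Σ_{d|n} μ(d) N(f^{n/d}) ≡ 0 mod n, can be verified numerically. The sketch below is ours (standard library only), not code from the paper.

```python
def mobius(n):
    """Moebius function via trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:  # squared prime factor
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def gauss_congruence_holds(nielsen, n):
    """Check sum_{d|n} mu(d) * N(f^(n/d)) == 0 (mod n)."""
    total = sum(mobius(d) * nielsen(n // d)
                for d in range(1, n + 1) if n % d == 0)
    return total % n == 0
```

For instance, with deg = 2 and n = 6 the sum is 63 − 7 − 3 + 1 = 54 ≡ 0 (mod 6); the divisor sum counts points of least period 6, which come in orbits of length 6.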
Theorem 11.4. Let f be any continuous map on an infra-solvmanifold of type (R) such that all R(f^n) are finite. Then we have

Σ_{d|n} μ(d) R(f^{n/d}) = Σ_{d|n} μ(d) N(f^{n/d}) ≡ 0 mod n

for all n > 0.

Proof. We define

P_n(f) = the set of isolated periodic points of f with period n,
P̄_d(f) = P_d(f) − ∪_{k|d, k≠d} P_k(f) = the set of isolated periodic points of f with least period d.

Then we have P_n(f) = ∪_{d|n} P̄_d(f), or #P_n(f) = Σ_{d|n} #P̄_d(f). By the Möbius inversion formula, when all terms are finite, we have

#P̄_n(f) = Σ_{d|n} μ(d) #P_{n/d}(f).
is a double covering of M = Π\G; the map f lifts to a map f⁺ : M⁺ → M⁺ which has the same affine homotopy lift (d, D) as f. If D_* has no eigenvalues of modulus > 1, then for any A ∈ Φ, A = A₁, and in this case we take Π⁺ = Π. Now, a main result, Theorem 4.4, of [11] is the following:

Theorem 1.2 ([11, Theorem 4.4]; when Π = Π⁺ see also the proof of Theorem 3.5). Let f be a continuous map on an infra-nilmanifold Π\G with an affine homotopy lift (d, D). Then the Nielsen numbers of f^k are
Theorem 2.3 (Compare with [12, Theorem 6.1]). Let f and g be continuous maps on an orientable infra-solvmanifold of type (R) with cyclic holonomy group; see [12, Lemma 6.3]. From [31, Theorem 4.5], we immediately have:
Proposition 8.1 ([3, Ex. 21(b), p. 97], [58, Proposition 3.6]). Let σ be a Lie algebra automorphism. If none of the eigenvalues of σ is a root of unity, then the Lie algebra must be nilpotent.

Theorem 8.2. If the Reidemeister zeta function R_f(z) is defined for a homeomorphism f on an infra-solvmanifold M of type (R), then M is an infra-nilmanifold.

Proof. Let f be a homeomorphism on an infra-solvmanifold M = Π\S of type (R). By [42, Theorem 2.2], we may assume that f has an affine map as a homotopy lift. By [42, Lemma 2.1], there is a special solvmanifold N = Λ\S which covers M finitely and on which f has a lift f̄, which is induced by a Lie group automorphism D on the solvable Lie group S. From [30, Corollary 1.3], we have an averaging formula for Reidemeister numbers:
Acknowledgments. The first author is indebted to the Institut des Hautes Études Scientifiques (Bures-sur-Yvette) for the support and hospitality and
On the other hand, if x ∈ P̄_n(f), then f(x) ∈ P̄_n(f); for if x has least period n, then so does f(x). Hence P̄_n(f) decomposes into f-orbits, each of length n. So, when #P̄_n(f) is finite, it is a multiple of n.

Let M be an infra-solvmanifold of type (R) and let f be a map on M. Since we are working with the Nielsen numbers and the Reidemeister numbers of iterates of f, we may assume that f is induced by an affine map on S. Assume R(f^n) < ∞; then N(f^n) = R(f^n) > 0 by Corollary 7.6. By Remark 9.4, N(f^n) is the number of isolated periodic points of f with period n:

N(f^n) = #P_n(f).

Consequently, what we have shown is that if all R(f^n) < ∞, then
Corollary 11.5. Let f be a map on an infra-solvmanifold of type (R) which is homotopic to an affine diffeomorphism induced by an affine map (d, D). If D_* has no eigenvalue that is a root of unity, then all R(f^n) are finite. Hence the Gauss congruences (DR) for the Reidemeister numbers and (DN) for the Nielsen numbers hold true.
Proof. This follows from a straightforward generalization of [10, Proposition 4.3] from infra-nilmanifolds to infra-solvmanifolds of type (R).
Let M = Π\S be the infra-solvmanifold of type (R) with holonomy group Φ. Recall that f induces a homomorphism φ : Π → Π given by φ(α) ∘ (d, D) = (d, D) ∘ α for all α ∈ Π. That is, φ = τ_{(d,D)}. This implies that (d, D) normalizes Π and hence D normalizes Φ. So AD^n normalizes Φ for all A ∈ Φ and all n. Assume that R(f^n) = ∞. By Corollary 7.6, there exists A ∈ Φ such that A_*D_*^n has eigenvalue 1. By [46, Lemma 3.2], D_*^n = A_*^{−1}(A_*D_*^n) is virtually unipotent. Thus D_* is virtually unipotent, a contradiction.
Example 11.6. Let f be an Anosov diffeomorphism on an infra-nilmanifold. By [10, Lemma 4.2], f has an affine homotopy lift (d, D) with hyperbolic D_*. From the above corollary, the Gauss congruences (DR) and (DN) hold true.
Example 11.7 ([20, Example 11], [41]). Let f : M → M be an expanding smooth map on a closed smooth manifold. It is known in [29] that f is topologically conjugate to an expanding map on an infra-nilmanifold. Thus we can assume that M is an infra-nilmanifold and f is a map induced by an affine map (d, D), where all the eigenvalues of D_* are of modulus > 1. Since (d, D) satisfies the conditions of Corollary 11.5, all R(f^n) are finite and so the congruences (DR) and (DN) hold true. On the other hand, by [56], the set Fix(f^n) of fixed points of the expanding
References

[1] D. V. Anosov, The Nielsen numbers of maps of nil-manifolds, Uspekhi Mat. Nauk, 40 (1985), 133-134 (in Russian); English transl.: Russian Math. Surveys, 40 (1985), 149-150.
[2] I. K. Babenko and S. A. Bogatyǐ, The behavior of the index of periodic points under iterations of a mapping, Izv. Akad. Nauk SSSR Ser. Mat., 55 (1991), 3-31 (in Russian); translation in Math. USSR-Izv., 38 (1992), 1-26.
[3] N. Bourbaki, Lie Groups and Lie Algebras, Chapters 1-3, translated from the French, reprint of the 1989 English translation, Elements of Mathematics (Berlin), Springer-Verlag, Berlin, 1998.
[4] G. E. Bredon, Introduction to Compact Transformation Groups, Pure and Applied Mathematics, Vol. 46, Academic Press, New York-London, 1972.
[5] S. G. Dani, Nilmanifolds with Anosov automorphism, J. London Math. Soc. (2), 18 (1978), 553-559.
[6] K. Dekimpe, Almost-Bieberbach Groups: Affine and Polynomial Structures, Lecture Notes in Mathematics, 1639, Springer-Verlag, Berlin, 1996.
[7] K. Dekimpe, B. de Rock and W. Malfait, The Anosov theorem for flat generalized Hantzsche-Wendt manifolds, J. Geom. Phys., 52 (2004), 174-185.
[8] K. Dekimpe, B. de Rock and W. Malfait, The Anosov relation for Nielsen numbers of maps of infra-nilmanifolds, Monatsh. Math., 150 (2007), 1-10.
[9] K. Dekimpe, B. de Rock and P. Penninckx, The Anosov theorem for infra-nilmanifolds with a 2-perfect holonomy group, Asian J. Math., 15 (2011), 539-548.
[10] K. Dekimpe, B. de Rock and P. Penninckx, The R∞ property for infra-nilmanifolds, Topol. Methods Nonlinear Anal., 34 (2009), 353-373.
[11] K. Dekimpe and G.-J. Dugardein, Nielsen zeta functions for maps on infra-nilmanifolds are rational, arXiv:1302.5512.
[12] K. Dekimpe and P. Penninckx, Coincidence theory for infra-nilmanifolds, Topology Appl., 157 (2010), 1815-1832.
[13] K. Dekimpe and P. Penninckx, The finiteness of the Reidemeister number of morphisms between almost-crystallographic groups, J. Fixed Point Theory Appl., 9 (2011), 257-283.
[14] P. Deligne, La conjecture de Weil, Inst. Hautes Études Sci. Publ. Math., 43 (1974), 273-307.
[15] A. Dold, Fixed point indices of iterated maps, Invent. Math., 74 (1983), 419-435.
[16] G.-J. Dugardein, Nielsen zeta functions on infra-nilmanifolds up to dimension three, this special issue.
[17] A. L. Fel'shtyn, New zeta function in dynamic, in Tenth Internat. Conf. on Nonlinear Oscillations (Varna), Abstracts of Papers, B, 1984.
[18] A. L. Fel'shtyn, New zeta functions for dynamical systems and Nielsen fixed point theory, Lecture Notes in Math., 1346, Springer, 1988, 33-55.
[19] A. L. Fel'shtyn, The Reidemeister zeta function and the computation of the Nielsen zeta function, Colloq. Math., 62 (1991), 153-166.
[20] A. Fel'shtyn, Dynamical zeta functions, Nielsen theory and Reidemeister torsion, Mem. Amer. Math. Soc., 699, Amer. Math. Soc., Providence, R.I., 2000.
[21] A. L. Fel'shtyn and R. Hill, The Reidemeister zeta function with applications to Nielsen theory and a connection with Reidemeister torsion, K-theory, 8 (1994), 367-393.
[22] A. L. Fel'shtyn, R. Hill and P. Wong, Reidemeister numbers of equivariant maps, Topology Appl., 67 (1995), 119-131.
[23] A. Fel'shtyn and E. Troitsky, Twisted Burnside-Frobenius theory for discrete groups, J. reine Angew. Math., 613 (2007), 193-210.
[24] A. Fel'shtyn and E. Troitsky, Geometry of Reidemeister classes and twisted Burnside theorem, J. K-Theory, 2 (2008), 463-506.
[25] D. Fried, The zeta functions of Ruelle and Selberg. I, Ann. Sci. École Norm. Sup. (4), 19 (1986), 491-517.
[26] D. Fried, Lefschetz formula for flows, in The Lefschetz Centennial Conference, Part III (Mexico City, 1984), 19-69, Contemp. Math., 58, III, Amer. Math. Soc., Providence, RI, 1987.
[27] D. Gonçalves, Coincidence Reidemeister classes on nilmanifolds and nilpotent fibrations, Topology Appl., 83 (1998), 169-186.
[28] D. Gonçalves and P. Wong, Nilmanifolds are Jiang-type spaces for coincidences, Forum Math., 13 (2001), 133-141.
[29] M. Gromov, Groups of polynomial growth and expanding maps, Inst. Hautes Études Sci. Publ. Math., 53 (1981), 53-73.
[30] K. Y. Ha and J. B. Lee, The R∞ property for crystallographic groups of Sol, preprint.
[31] K. Y. Ha and J. B. Lee, Nielsen numbers of maps on infra-solvmanifolds of type (R), preprint.
[32] K. Y. Ha, J. B. Lee and P. Penninckx, Anosov theorem for coincidences on special solvmanifolds of type (R), Proc. Amer. Math. Soc., 139 (2011), 2239-2248.
[33] K. Y. Ha, J. B. Lee and P. Penninckx, Formulas for the Reidemeister, Lefschetz and Nielsen coincidence number of maps between infra-nilmanifolds, Fixed Point Theory Appl., 2012, 2012:39.
[34] N. V. Ivanov, Entropy and the Nielsen numbers, Dokl. Akad. Nauk SSSR, 265 (1982), 284-287 (in Russian); English transl.: Soviet Math. Dokl., 26 (1982), 63-66.
[35] J. Jezierski and W. Marzantowicz, Homotopy Methods in Topological Fixed and Periodic Points Theory, Topological Fixed Point Theory and Its Applications, 3, Springer, Dordrecht, 2006.
[36] B. Jiang, Lectures on Nielsen Fixed Point Theory, Contemporary Math., 14, Amer. Math. Soc., Providence, R.I., 1983.
[37] A. B. Katok, The entropy conjecture, in Smooth Dynamical Systems (in Russian), Mir Publishing, Moscow, 1977, 182-203.
[38] H. J. Kim, J. B. Lee and W. S. Yoo, Computation of the Nielsen type numbers for maps on the Klein bottle, J. Korean Math. Soc., 45 (2008), 1483-1503.
[39] S. W. Kim, J. B. Lee and K. B. Lee, Averaging formula for Nielsen numbers, Nagoya Math. J., 178 (2005), 37-53.
[40] S. Kwasik and K. B. Lee, The Nielsen numbers of homotopically periodic maps of infranilmanifolds, J. London Math. Soc. (2), 38 (1988), 544-554.
[41] J. B. Lee and K. B. Lee, Lefschetz numbers for continuous maps, and periods for expanding maps on infra-nilmanifolds, J. Geom. Phys., 56 (2006), 2011-2023.
[42] J. B. Lee and K. B. Lee, Averaging formula for Nielsen numbers of maps on infra-solvmanifolds of type (R), Nagoya Math. J., 196 (2009), 117-134.
[43] K. B. Lee, Maps on infra-nilmanifolds, Pacific J. Math., 168 (1995), 157-166.
[44] K. B. Lee and F. Raymond, Rigidity of almost crystallographic groups, Contemporary Math. Amer. Math. Soc., 44 (1985), 73-78.
[45] L. Li, On the rationality of the Nielsen zeta function, Adv. in Math. (China), 23 (1994), 251-256.
[46] W. Malfait, The Nielsen numbers of virtually unipotent maps on infra-nilmanifolds, Forum Math., 13 (2001), 227-237.
[47] W. Marzantowicz and P. M. Przygodzki, Finding periodic points of a map by use of a k-adic expansion, Discrete Contin. Dyn. Syst., 5 (1999), 495-514.
[48] W. Marzantowicz and F. Przytycki, Entropy conjecture for continuous maps of nilmanifolds, Israel J. Math., 165 (2008), 349-379.
[49] W. Marzantowicz and F. Przytycki, Estimates of the topological entropy from below for continuous self-maps on some compact manifolds, Discrete Contin. Dyn. Syst., 21 (2008), 501-512.
[50] C. K. McCord, Nielsen numbers of homotopically periodic maps on infrasolvmanifolds, Proc. Amer. Math. Soc., 120 (1994), 311-315.
[51] C. K. McCord, Lefschetz and Nielsen coincidence numbers on nilmanifolds and solvmanifolds II, Topology Appl., 75 (1997), 81-92.
[52] P. Penninckx, Fixed point theory and coincidence theory for infra-nilmanifolds, Ph.D. thesis, Katholieke Universiteit Leuven, Leuven, Belgium, December 2009.
[53] V. B. Pilyugina and A. L. Fel'shtyn, The Nielsen zeta function, Funktsional. Anal. i Prilozhen., 19 (1985), 61-67 (in Russian); English transl.: Functional Anal. Appl., 19 (1985), 300-305.
[54] F. Przytycki, An upper estimation for topological entropy, Invent. Math., 59 (1980), 205-213.
[55] E. A. Ruh, Almost flat manifolds, J. Differential Geom., 17 (1982), no. 1, 1-14.
[56] M. Shub, Endomorphisms of compact differentiable manifolds, Amer. J. Math., 91 (1969), 175-199.
[57] M. Shub, Dynamical systems, filtrations and entropy, Bull. Amer. Math. Soc., 80 (1974), 27-41.
[58] S. Smale, Differentiable dynamical systems, Bull. Amer. Math. Soc., 73 (1967), 747-817.
[59] I. Stewart and D. Tall, Algebraic Number Theory and Fermat's Last Theorem, third edition, A K Peters, Ltd., Natick, MA, 2002.
[60] P. Wong, Reidemeister zeta function for group extensions, J. Korean Math. Soc., 38 (2001), 1107-1116.
[61] A. V. Zarelua, On congruences for the traces of powers of some matrices, Tr. Mat. Inst. Steklova, 263 (2008), 85-105, Geometriya, Topologiya i Matematicheskaya Fizika. I (in Russian); translation in Proc. Steklov Inst. Math., 263 (2008), 78-98.

Instytut Matematyki, Uniwersytet Szczecinski, ul. Wielkopolska 15, 70-451 Szczecin, Poland, and Institut des Hautes Études Scientifiques, Le Bois-Marie 35, route de Chartres, 91440 Bures-sur-Yvette, France
E-mail address: [email protected], [email protected]

Department of Mathematics, Sogang University, Seoul 121-742, KOREA
E-mail address: [email protected]
Improving the Generalizability of Trajectory Prediction Models with Frenét-Based Domain Normalization
Luyao Ye
Zikang Zhou
Jianping Wang
Predicting the future trajectories of robots' nearby objects plays a pivotal role in applications such as autonomous driving. While learning-based trajectory prediction methods have achieved remarkable performance on public benchmarks, the generalization ability of these approaches remains questionable. The poor generalizability on unseen domains, a well-recognized defect of data-driven approaches, can potentially harm the real-world performance of trajectory prediction models. We are thus motivated to improve models' generalization ability instead of merely pursuing high accuracy on average. Due to the lack of benchmarks for quantifying the generalization ability of trajectory predictors, we first construct a new benchmark called argoverse-shift, where the data distributions of domains are significantly different. Using this benchmark for evaluation, we identify that the domain shift problem seriously hinders the generalization of trajectory predictors, since state-of-the-art approaches suffer from severe performance degradation when facing those out-of-distribution scenes. To enhance the robustness of models against the domain shift problem, we propose a plug-and-play strategy for domain normalization in trajectory prediction. Our strategy utilizes the Frenét coordinate frame for modeling and can effectively narrow the domain gap of different scenes caused by the variety of road geometry and topology. Experiments show that our strategy noticeably boosts the prediction performance of the state-of-the-art in domains that were previously unseen to the models, thereby improving the generalization ability of data-driven trajectory prediction methods.
I. INTRODUCTION
The task of trajectory prediction is one of the indispensable components in safety-critical robotic applications, e.g., autonomous driving and robot obstacle avoidance. Given objects' past trajectories and the associated scene context, such as the high-definition (HD) map, the goal of trajectory prediction is to predict objects' future movements and thereby enable safe motion planning of robots. Recent research in trajectory prediction has witnessed the huge success of deep learning. With their strong capability of fusing heterogeneous information in the scene, deep learning approaches have dominated the public benchmarks for trajectory prediction [1], [2], [3], [4]. However, whether these data-driven models can be generalized to out-of-distribution (OOD) scenes is still undetermined. As a well-known issue of learning-based approaches, the performance depends heavily on the distribution of training data and is prone to be affected by the distribution shift problem. Namely, these models usually have satisfactory accuracy in scenes that appear frequently in the training data but may have trouble making correct predictions when facing less frequent ones. For example, a trajectory predictor fully trained on a highway dataset can perform well on straight roads that were unseen before but is very likely to fail when tested on roundabouts due to the different data distributions on straight roads and roundabouts. Collecting sufficient data in all domains for training is not an affordable or feasible solution. If a model cannot make reliable predictions in unseen situations, catastrophic accidents may happen in the real world. For this reason, there is an urgent need to enhance the generalization ability of trajectory prediction models.

In daily traffic scenarios, traffic participants do not move in free space but need to obey traffic rules, e.g., driving in lanes or walking on the sidewalk. These traffic rules are mostly conveyed by the map.
Therefore, many advanced trajectory prediction approaches [5], [6], [7] focus on modeling the interactions between objects and HD maps to assist trajectory prediction. However, the geometry and topology of the map elements (e.g., the curvature of lanes) vary dramatically in different scenes, which brings the distribution shift problem and makes it difficult for the model to generalize in OOD scenes [8]. To address the above issues, we propose a domain normalization method, termed Frenét+, for eliminating the differences among scenes by utilizing the Frenét frame [9]. This method explicitly moderates the distribution shift problem caused by the diversity of HD maps and enables trajectory prediction models to focus more on domain-independent features (e.g., motion patterns of objects and social interactions among traffic participants) rather than overfitting the training data by memorizing domain-specific map features. Specifically, we calculate the relative coordinates of the target object with respect to the centerline and use the relative coordinates for modeling. Here, we use the Frenét coordinate (i.e., the arc length along and the perpendicular offset from the centerline) to represent the relative position of the target object. As shown in Fig. 1, converting to the Frenét coordinate has the significant advantage of reducing the difference among the road shapes of different traffic scenes. We expect that existing trajectory prediction models combined with this domain normalization technique will be able to perform almost identically well on the seen and unseen domains.
To verify the existence of the domain shift problem and the effectiveness of our method, we first propose an automatic domain split schema based on a clustering algorithm to construct a new benchmark named argoverse-shift for model evaluation. After splitting different domains, we divide these domains into training set, seen validation set, and unseen validation set. Then, we evaluate several state-of-the-art models on this benchmark and observe that their performance on the unseen validation set is much worse than that on the seen validation set. After integrating with our domain normalization approach, the performance of these models on unseen domains is substantially improved.
The key contributions are summarised as follows:
• We propose an automatic domain split schema and construct a new benchmark for evaluating the generalization ability of trajectory prediction models against the distribution shift problem.
• We design validation experiments to explicitly quantify the generalizability of learning-based trajectory prediction models using our benchmark.
• We introduce a plug-and-play strategy for domain normalization based on the Frenét frame to help the model recover from distribution shift problems. Experiments show that this strategy noticeably enhances the prediction performance of state-of-the-art methods in the unseen domains.
II. RELATED WORK
A. Trajectory prediction
Trajectory prediction is a classic problem of autonomous driving and has been widely studied in recent years. Early approaches to trajectory prediction were based only on the historical trajectories of the ego vehicle and its neighbors. While the models were rapidly updated, using LSTM networks [10], [11], [12], convolutional neural networks [13], generative adversarial networks [14], [15], [16], and graph neural networks [5], [17], [18], all of these models neglect the influence of map information on trajectory prediction, making it difficult to break through in terms of prediction accuracy. Thanks to the development of high-definition maps and the release of new trajectory prediction datasets [1], [3], [2], recent works focus on capturing scene representations from HD maps in order to improve performance. Some suggested using the image learning ability of CNNs to represent traffic scenes on the map [19], [20]. Another line of work rasterized map elements from the HD map as model inputs [21], [22], [20], [23], [24], [25]. While raster map representations are popular, they have largely been replaced by vectorized map data due to its efficiency. Vectorized methods [5], [6], [7] learn the relationships among entities in the scene as a set of vectorized entities with semantic and geometric features. Despite the good performance achieved by these methods, researchers have paid limited attention to the generalizability of trajectory prediction models to new domains. Our work proposes new solutions and an evaluation benchmark for this problem.
B. Coping with distribution shift in autonomous driving
Prior work by Angelos et al. [26] highlights the necessity of out-of-training-distribution scene detection in autonomous driving. They proposed a robust imitative planning method for detecting distribution shift and generating a safe plan, and provided online supervision to efficiently query expert guidance for a safe course of action under extreme uncertainty. Another recent work demonstrated that sixty percent of existing scenes could be modified to make trajectory prediction models fail [8]. They presented a scene generation model to provide richer and sufficient scenes for model training. To further evaluate the generalizability of trajectory prediction models, Thomas et al. [27] studied the performance of baseline models across four different datasets and used heat maps to measure model uncertainty. However, no direct and effective method has yet been proposed to enhance the generalizability of trajectory prediction models. In contrast to the work mentioned above, we present a practical and effective solution to mitigate the effect of distribution shift in trajectory prediction, which does not require manual supervision, is model-independent, and can be directly used in current trajectory prediction models.
C. Frenét frame
The Frenét frame, based on the Frenet-Serret formulas [28], locally describes a point on a curve via a moving coordinate system determined by the curvature and the tangent along the curve. In autonomous driving, many studies [9], [29], [30], [31] have used the Frenét frame for safe and optimal motion planning. They assumed that the centerline is the ideal path along a free road; therefore, they chose the centerline as the reference path and solved the motion planning problem in Frenét coordinates rather than Cartesian coordinates. The Frenét coordinates use the arc length along the centerline and the perpendicular offset from it to indicate the relative position of points on the trajectory with respect to the centerline.
Inspired by the Frenét frame method of motion planning, we apply the Frenét frame to alleviate the impact of distribution shift in trajectory prediction. Finding the reference path in a complex scene and figuring out the projection of a given point are the two critical challenges in transferring a trajectory from the Cartesian to the Frenét frame. We propose the Frenét+ strategy to solve these problems.
III. PRELIMINARY STUDY
In this section, we first present an automatic domain split schema since very few cross-domain datasets and benchmarks are available. Then, we demonstrate that the distribution shift problem exists in the state-of-the-art trajectory prediction models using our constructed benchmark.
A. Domain split schema
In this work, we define the domain as a cluster of similar samples. For example, tracks sampled from the straight road and bend can be regarded as two domains. Dividing the dataset into multiple domains needs to ensure that the data within each domain has unique features as much as possible. In other words, the gap between the data distribution of different domains should be large. In this way, the partitioned domains are more conducive to verifying the distribution shift problem. However, it is a challenge to find out the domain boundaries precisely. Designing the boundaries manually is not only a large workload but also easily influenced by individual subjectivity, leading to inconsistent split criteria. For this reason, we propose a clustering-based automatic domain split schema.
Specifically, we first perform feature extraction for each record in the dataset. We pre-define 21 features, such as the lane deflection angle, the differences of lane coordinates, and the lane boundary. We extract these 21 features for the j-th record to formulate a feature vector d_j ∈ R^21. These 21-dimensional vectors are then reduced to 2-dimensional vectors d̂_j ∈ R^2 using the PCA algorithm [32]. Experiments show that the 2-dimensional features are good enough to characterize a sample and are also convenient for visualization; intuitively, these 2-dimensional vectors have a stronger characterization capability. The K-Means algorithm [33] is then used to cluster these 2-dimensional vectors. In this way, each record is assigned to a cluster, and we regard these clusters as domains. Based on the clustering results, we retrace the corresponding data and match them with the corresponding domains. We further plot the distribution between each pair of divided domains to investigate the overlap among them. Fig. 2 shows no overlap between each domain pair and suggests that the split schema achieves the expectation.
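The split schema above can be sketched in a few lines. The following is a minimal numpy-only illustration under stated assumptions: the 21 hand-crafted features are represented by a generic feature matrix, and PCA (via SVD) and a plain K-Means loop are implemented directly, since the paper does not prescribe a particular library.

```python
import numpy as np

def split_domains(features, n_domains=10, n_iters=20, seed=0):
    """Sketch of the split schema: project per-record feature vectors to
    2-D via PCA (SVD on centered data), then run a basic K-Means loop;
    each cluster id is treated as one domain."""
    rng = np.random.default_rng(seed)
    X = features - features.mean(axis=0)
    # PCA to 2 dimensions: right singular vectors give the components
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ vt[:2].T
    # plain K-Means on the 2-D embedding, initialized from data points
    centers = Z[rng.choice(len(Z), n_domains, replace=False)]
    for _ in range(n_iters):
        labels = np.linalg.norm(Z[:, None] - centers[None], axis=-1).argmin(axis=1)
        for k in range(n_domains):
            if np.any(labels == k):
                centers[k] = Z[labels == k].mean(axis=0)
    return labels

# toy usage: 200 records with 21 features each (random stand-ins)
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 21))
domains = split_domains(feats, n_domains=10)
```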
B. Argoverse-shift benchmark
We construct a benchmark, referred to as argoverse-shift, to evaluate the generalization ability of trajectory prediction models against the distribution shift problem.
Following the domain split schema proposed in Section III-A, we partition the Argoverse dataset [3] into ten domains. We take the first seven domains as seen domains and the remaining three domains as unseen domains. The training set and validation set are sampled from all seen domains with a ratio of 8:2. All of the unseen domains are taken to formulate the test set. The detailed statistics are shown in Table I. On the one hand, the new cross-domain dataset, argoverse-shift, can be used to verify the existence of the distribution shift problem in the trajectory prediction task; on the other hand, it can be used as a new benchmark to evaluate the trajectory prediction model's generalizability.
To verify that current trajectory prediction models suffer from the distribution shift problem, we select several state-of-the-art models and observe whether their performances degrade when the train and test sets are taken from different domains. Experimental results show that the performance of well-trained models decreases in the unseen domains. The analysis and implementation details for the domain shift verification experiment are described in Section VI.
IV. PROBLEM DEFINITION
We take X ∈ 𝒳 to denote the input sampled from the input space 𝒳 and Y ∈ 𝒴 to denote the output from the output space 𝒴. As recent works take many features into consideration, e.g., neighbors' coordinates and the HD map, trajectory prediction is not necessarily a self-regression task, i.e., X ≠ Y.
We suppose that the data in a given dataset S can be divided into M independent domains, i.e., S = {D_1, · · · , D_M}, where D_i = {(X_j^i, Y_j^i)}_{j=1}^{n_i}
denotes the i-th domain. The distributions of each pair of domains are different. The first K domains of S are taken as the seen domains S_seen = {D_1, · · · , D_K}, which models can access during training, and the rest are taken as the unseen domains S_unseen = {D_{K+1}, · · · , D_M}, which are used to simulate new scenes appearing on the road. The training set is sampled from the seen domains, i.e., S_train ⊂ S_seen; the validation set used during training is also from the seen domains and does not overlap with the training set, i.e., S_val ⊂ (S_seen \ S_train). The test set is sampled from the unseen domains: S_test ⊂ S_unseen.
The goal of domain generalization in trajectory prediction task is to learn a robust and generalizable prediction function h : X → Y from the seen domains to achieve a minimum prediction error on the unseen test domain [34]:
min_h  E_{(X,Y) ∈ S_test} [L(h(X), Y)],    (1)
where E is the expectation and L is the loss function.
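The seen/unseen partition described above can be sketched as follows. The 7/3 domain split and the 8:2 train/validation ratio follow Section III-B; the function name and the list-of-domains data representation are illustrative assumptions.

```python
import random

def make_splits(domains, n_seen=7, train_ratio=0.8, seed=0):
    """Partition domain-indexed records into train/val (from seen domains)
    and test (all records of the unseen domains), mirroring the paper's
    8:2 split of seen-domain data."""
    rng = random.Random(seed)
    seen, unseen = domains[:n_seen], domains[n_seen:]
    train, val = [], []
    for dom in seen:
        recs = dom[:]             # copy so the shuffle is non-destructive
        rng.shuffle(recs)
        cut = int(len(recs) * train_ratio)
        train += recs[:cut]
        val += recs[cut:]
    test = [r for dom in unseen for r in dom]
    return train, val, test
```

With ten domains of ten records each, this yields 56 training, 14 validation, and 30 test records.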
V. FRENÉT+ STRATEGY
In this section, we first show the strategy to select the reference path. Then we illustrate the pipeline that transfers a trajectory point from the Cartesian frame to the Frenét frame based on the selected reference path. Finally, we provide the solution to determine the projections for those trajectory points located in the non-differential areas.
A. The selection of the reference path
In complex scenes, such as intersections, there are multiple centerlines, as shown in Fig. 3, which increases the difficulty of finding the appropriate reference path. To solve this problem, we propose a method to determine a proper centerline as the reference path based on the vehicle's historical trajectory. Specifically, we first calculate the Euclidean distance between the vehicle's historical trajectory and each centerline and take the reciprocal of the distance as the similarity:
S_j^1 = [ (1/n) Σ_{t=1}^{n} ∥x_t − x̂_t^j∥_2 ]^{−1},    (2)
where x_t ∈ R^2 and x̂_t^j ∈ R^2 are the coordinates of the trajectory and of its projection on centerline j at time step t, respectively. Intuitively, a closer centerline is a better choice for the reference path.
Next, we compare the shape similarity between centerlines and historical trajectory. Specifically, we translate candidate centerlines towards the vehicle and then calculate the Euclidean distance. The translation direction and length are defined by the current location of the vehicle with its projections on centerlines:
Δx^j = x_T − x̂_T^j,    (3)
where T indicates the current time step. Then the shape similarity is defined as: After that, we take the average of the S 1 j and S 2 j . Finally, we choose the centerline j * with the largest value as the reference line.
S_j^2 = [ (1/n) Σ_{t=1}^{n} ∥x_t − (x̂_t^j + Δx^j)∥_2 ]^{−1}.    (4)

After that, we combine S_j^1 and S_j^2 and choose the centerline j* with the largest value as the reference path:

j* = arg max_j (S_j^1 + S_j^2).    (5)
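As a sketch of Eqs. (2)-(5), the following assumes the projections of the history onto each candidate centerline have already been computed (the paper derives them from the map geometry); the function name and input layout are illustrative.

```python
import numpy as np

def select_reference_path(traj, projections):
    """Score each candidate centerline by distance similarity S1 (Eq. 2)
    and shape similarity S2 (Eq. 4), then pick the argmax (Eq. 5).
    `traj`: (n, 2) history; `projections[j]`: (n, 2) precomputed
    projections of the history onto centerline j."""
    best_j, best_score = -1, -np.inf
    for j, proj in enumerate(projections):
        s1 = 1.0 / np.mean(np.linalg.norm(traj - proj, axis=1))            # Eq. (2)
        shift = traj[-1] - proj[-1]                                        # Eq. (3)
        s2 = 1.0 / np.mean(np.linalg.norm(traj - (proj + shift), axis=1))  # Eq. (4)
        if s1 + s2 > best_score:                                           # Eq. (5)
            best_j, best_score = j, s1 + s2
    return best_j
```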
B. Coordinate transfer
Suppose that a reference path is composed of m segments and is saved as a list of coordinate points following the direction of the trajectory, i.e., [p_1, · · · , p_{m+1}], and that the projection of the vehicle on the reference path at time step t falls on the J-th segment, i.e., the segment bounded by the points p_J and p_{J+1}. Based on these two points, we can easily calculate the expression of the line on which the J-th segment lies, i.e., y = k_J x + b_J. Then the projection of the trajectory point x_t = (x_t, y_t) at time step t can be represented as:

x̂_t = (x̂_t, ŷ_t),

where

x̂_t = (k_J (y_t − b_J) + x_t) / (k_J^2 + 1),    ŷ_t = k_J x̂_t + b_J.    (6)
The offset of the trajectory point from the reference path, i.e., the d coordinate in the Frenét frame, can be derived from the distance between the trajectory point and its projection:
d_t = I_t · ∥x̂_t − x_t∥_2,  with  I_t = −1 if p_J p_{J+1} × p_J x_t < 0 and I_t = 1 otherwise,    (7)

where p_J p_{J+1} and p_J x_t denote the vectors from p_J to p_{J+1} and from p_J to x_t.
Here, I_t is an indicator of the side of the trajectory point relative to the reference path in a right-handed system. We initially set the first point of the reference path as the starting point. Then the arc length s of the trajectory point in the Frenét frame can be expressed as a sum of segment lengths:
s_t = Σ_{j=2}^{J} ∥p_j − p_{j−1}∥ + ∥x̂_t − p_J∥.    (8)
We take the difference between s_t and s_1 as the final arc length, i.e., s_t ← (s_t − s_1). In this way, the projection of the first trajectory point is set as the starting point, so the arc length of the first trajectory point in the Frenét frame is s_1 = 0. So far, we have presented the whole process of converting a trajectory point from the Cartesian coordinate (x_t, y_t) to the Frenét coordinate (s_t, d_t). This process is unrelated to model design and can be easily adapted to most current trajectory prediction models: it can be placed between the data pipeline and the model, i.e., the coordinate transformation is performed before feeding trajectories into the model.
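The conversion of Eqs. (6)-(8) can be sketched as below. It computes the projection with a vector formulation (equivalent to Eq. (6) for non-vertical segments, and also valid for vertical ones), the signed offset via a 2-D cross product (Eq. (7)), and the cumulative arc length (Eq. (8)). The 0-based segment index J and the function name are implementation choices.

```python
import numpy as np

def to_frenet(point, polyline, J):
    """Convert a Cartesian point to Frenét (s, d) relative to a polyline
    reference path [p_1, ..., p_{m+1}], given the index J (0-based) of
    the segment containing the point's projection."""
    point = np.asarray(point, dtype=float)
    p1 = np.asarray(polyline[J], dtype=float)
    p2 = np.asarray(polyline[J + 1], dtype=float)
    seg = p2 - p1
    # foot of the perpendicular on segment J (equivalent to Eq. (6))
    t = np.dot(point - p1, seg) / np.dot(seg, seg)
    proj = p1 + t * seg
    # signed offset d (Eq. (7)): the 2-D cross product gives the side
    cross = seg[0] * (point - p1)[1] - seg[1] * (point - p1)[0]
    d = np.sign(cross) * np.linalg.norm(point - proj)
    # arc length s (Eq. (8)): preceding segments plus distance along segment J
    s = sum(np.linalg.norm(np.subtract(polyline[k + 1], polyline[k]))
            for k in range(J))
    s += np.linalg.norm(proj - p1)
    return s, d
```

For a straight reference path [(0,0), (1,0), (2,0)], the point (1.5, 0.5) on segment J = 1 maps to s = 1.5 and d = +0.5.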
C. Projections on the non-differentiable area
Another challenge is to find the correct projections on the non-differentiable area of the reference line. From the engineering perspective, centerlines are often stored as a list of coordinate points. Connecting these points in order gives a series of line segments. In this case, the centerline is not a smooth curve but a polyline. Therefore, not all points have projections on the reference path.
As shown in Fig. 4, we cannot find a projection for points in the red area since the path is non-differentiable at joint points. We represent the red area in Fig. 4 at the joint (x*, y*) as:

p̂ = {(x, y) | −1/k_1 < (y − y*)/(x − x*) < −1/k_2},    (9)
where k_1 > k_2 are the slopes of the segments that intersect at the joint (x*, y*).
We artificially set the projections of the trajectory points (x, y) ∈ p̂ to the nearest endpoint on the reference line. Moreover, we set the angle bisector as the vertical direction at the joint point:
y = (1/2)(1/k_1 + 1/k_2)(x* − x) + y*.    (10)
In this way, multiple points in the area are converted to the same Frenét coordinate, which introduces extra error when converting them back to the Cartesian frame. However, we experimentally verified that the error is less than 10^−4 meters on average, which has a negligible effect on the final results. Fig. 3 shows that our Frenét+ strategy can find the proper reference path in a complex scene and obtain correct projections on non-differentiable centerlines.
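A direct transcription of the wedge test of Eq. (9) together with the snap-to-joint rule might look like this; the function name and the None convention for "not in the wedge" are illustrative, and the dx > 0 half-plane is assumed for brevity.

```python
def wedge_projection(point, joint, k1, k2):
    """If `point` lies in the non-differentiable wedge of Eq. (9) at
    `joint` (adjacent segment slopes k1 > k2), its projection is set to
    the joint itself; otherwise return None so the regular per-segment
    projection applies."""
    dx = point[0] - joint[0]
    dy = point[1] - joint[1]
    if dx == 0:
        return joint          # on the vertical ray through the joint
    slope = dy / dx
    if -1.0 / k1 < slope < -1.0 / k2:
        return joint          # inside the wedge: snap to the joint
    return None
```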
VI. EXPERIMENTS
In this section, we conduct comparative experiments by combining the Frenét+ strategy with baseline models to demonstrate the effectiveness of our proposed strategy. We also perform result analysis and visualize several prediction results to give an insight into the domain shift problem in trajectory prediction.
A. Experimental settings
The experiments are conducted on the argoverse-shift dataset. Following the same setting as Argoverse motion forecasting challenge [3], we require models to predict the position of the agents in the future 3 seconds, given the initial 2 seconds observations.
Baselines: We take NN+map [3], LSTM ED+map [3], WIMP [35], LaneGCN [5], and HiVT [7], five representative state-of-the-art trajectory prediction models, as baselines.
Metrics: We employ three standard metrics for trajectory prediction, including the Minimum Average Displacement Error (minADE), Minimum Final Displacement Error (minFDE) and Miss Rate (MR). Models predict six trajectories, and we report the best result with minimum errors.
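For a single sample, the three metrics can be sketched as below. Selecting the "best" of the K predicted modes by minimum endpoint error and the 2 m miss threshold follow the common Argoverse convention and are assumptions here, as is the function name.

```python
import numpy as np

def argoverse_metrics(preds, gt, miss_threshold=2.0):
    """minADE/minFDE/miss indicator for K predicted trajectories against
    one ground truth. preds: (K, T, 2); gt: (T, 2). MR is obtained by
    averaging the miss indicator over samples."""
    ades = np.linalg.norm(preds - gt, axis=-1).mean(axis=-1)  # ADE per mode
    fdes = np.linalg.norm(preds[:, -1] - gt[-1], axis=-1)     # FDE per mode
    best = int(fdes.argmin())          # mode with minimum endpoint error
    min_ade, min_fde = float(ades[best]), float(fdes[best])
    missed = float(min_fde > miss_threshold)
    return min_ade, min_fde, missed
```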
We train each baseline on the training set with the hyperparameters reported as best practice in the original papers and select the best parameter group via the validation set. After training, we separately evaluate each model on the test set (the unseen domains) and the validation set (the seen domains) for comparison. We then apply the Frenét+ strategy to each model, and train and re-evaluate them following the same process. The comparison results are reported in Table II, which shows the performance of the five models on the seen and unseen domains, respectively, as well as their performance after combining with the Frenét+ strategy. A smaller value means better performance. Due to changes in the dataset volume and split scheme, the results of the baseline models differ slightly from the performance reported in the original papers. The value in parentheses is the performance deterioration on unseen domains relative to the performance on seen domains.
B. Result analysis
Domain shift problem verification: As shown in the top half of Table II, the performance of well-trained models decreases on the unseen domains in terms of all evaluation metrics. This result illustrates that the domain shift problem indeed exists in current models. For naive models, e.g., NN + MAP, this problem has a more serious negative impact. By comparison, LaneGCN is more robust against domain shift and has the smallest drop in performance on the unseen domains; because LaneGCN takes into account a large amount of information about the relative positions of centerlines, its predictions are less susceptible to topographic changes. Frenét+ effectiveness evaluation: Comparing the upper and lower parts of Table II, we find that our Frenét+ strategy gives a significant improvement across all baselines on unseen domains in terms of all metrics. Though the results are still not as good as those on seen domains, the difference is no longer significant. Frenét+ brings more improvement to simple models, like NN + MAP, since strong models are more robust by design.
We also find a slight decrease in the performance of the models with Frenét+ on seen domains compared with the originals, e.g., the minADE of HiVT increased from 0.7642 to 0.7756. We believe this is a necessary cost of improving the generalization ability of the baseline models, since we make the models focus on general features rather than overfit the training data with domain-specific details in order to obtain better generalizability.
C. Visualization of prediction results and error distribution
In Fig. 5, we visualize several prediction results of HiVT compared with Frenét+ HiVT. In the four scenes, the trajectories predicted by HiVT deviate from the ground truth and even cross three lanes in the third scene, which is abnormal driving behavior. With the help of the Frenét+ strategy, the predicted trajectory is corrected toward the reference path and is closer to the ground truth. This shows that the Frenét+ strategy provides a strong reference for models and has a significant effect on normalizing abnormal predictions.
As for the error distributions in the Frenét frame and the Cartesian frame, they are exactly the same when the reference path is a straight line. We show a case with a curved reference path in Fig. 6. Compared with the Cartesian frame, the error distribution in the Frenét frame is not a standard circle: when the projection falls on a part of the reference path with a large gradient, the loss changes more dramatically. The third picture shows the difference between the loss in the Frenét frame and the Cartesian frame. Though the error distributions are not the same in the two systems, the difference is limited. Hence, we claim that training models in the Frenét frame does not introduce significant optimization gaps, and the optimization objectives, i.e., the 0-error points, are the same.
VII. CONCLUSIONS
In this paper, we have introduced a new benchmark called argoverse-shift to verify that the domain shift problem does exist in data-driven trajectory prediction models. Then, we have proposed a Frenét-based strategy, Frenét+, to enhance the robustness of models against domain shift. Our approach diminishes the variation of trajectory coordinates across domains by exploiting the local coordinates of trajectory waypoints relative to the lane centerlines. Experiments and visualization results show that the Frenét+ strategy significantly mitigates the domain shift problem and makes state-of-the-art models generalize better on unseen domains. In the future, we plan to explore the domain shift problem on more datasets, such as the Waymo Open Dataset [2] and the nuScenes Dataset [36].
Fig. 1. The solid black line represents the road boundary, the dotted gray line represents the centerline, and the orange arrow represents the vehicle trajectory.
Fig. 2. The 10-by-10 scatter plot matrix of domain overlaps. The cell in the i-th row and j-th column shows the overlap between the i-th and j-th domains. In each scatter plot, points of the same color belong to the same domain. The plots on the principal diagonal show the point distributions of the corresponding domains.
Fig. 3. Trajectories with their projections. The solid black lines with points are the centerlines that appear in this scene; the centerline in red is selected as the reference path. The blue points are the trajectory points of the vehicle, the orange points are their projections on the reference path, and the gray lines connect the trajectory points with their corresponding projections.
Fig. 4. Finding the correct projections on the non-differentiable area. The red points are trajectory points, the black points are their projections on the reference path, and the blue line is the angle bisector.
Fig. 5. Visualized trajectory prediction results of HiVT and Frenét+ HiVT. The green line is the ground truth, the red line is the trajectory predicted by HiVT, and the yellow line is the one predicted by Frenét+ HiVT.
Fig. 6. Contour maps of the error distribution in the Frenét frame, compared with the Cartesian frame. The black point indicates the true trajectory point, i.e., the point with zero error. The red curve is the reference path. The lighter the color, the smaller the error, and vice versa.
All authors are with the Department of Computer Science, City University of Hong Kong, Hong Kong SAR, and City University of Hong Kong Shenzhen Research Institute, Shenzhen, China. Emails: {luyaoye2c, zikanzhou2-c}@my.cityu.edu.hk; [email protected]. This work was partially supported by the Hong Kong Research Grant Council under GRF 11200220, and the Science and Technology Innovation Committee Foundation of Shenzhen under Grant No. JCYJ20200109143223052.
TABLE I
ARGOVERSE-SHIFT DATASET STATISTICS

Subset      | Domain ID                     | Ratio (rel.) | Volume
------------|-------------------------------|--------------|--------
Train       | 0, 1, 2, 3, 4, 5, 6           | 58.35%       | 143,202
Validation  | 0, 1, 2, 3, 4, 5, 6           | 14.59%       | 35,800
Test        | 7, 8, 9                       | 27.06%       | 66,412
All         | 0, 1, 2, 3, 4, 5, 6, 7, 8, 9  | 100%         | 245,414
TABLE II
QUANTITATIVE RESULTS OF 5 BASELINE MODELS IN SEEN AND UNSEEN DOMAINS, COMPARED WITH FRENÉT+ MODELS

Model                  | Seen domains: minADE / minFDE / MR | Unseen domains: minADE / minFDE / MR
-----------------------|------------------------------------|-----------------------------------------------------------
NN + MAP               | 0.6342 / 1.3887 / 0.1515           | 1.9689 (+210.45%) / 3.7502 (+170.05%) / 0.5501 (+263.10%)
LSTM ED + MAP          | 2.0870 / 4.4180 / 0.6485           | 2.2622 (+8.39%) / 4.7286 (+7.03%) / 0.6830 (+5.32%)
WIMP                   | 0.7507 / 1.1189 / 0.1092           | 0.8311 (+10.71%) / 1.2525 (+11.94%) / 0.1364 (+24.91%)
LaneGCN                | 0.7152 / 1.0974 / 0.1065           | 0.7653 (+7.01%) / 1.1554 (+5.29%) / 0.1148 (+7.79%)
HiVT                   | 0.7642 / 1.2081 / 0.1263           | 0.8595 (+12.47%) / 1.3836 (+14.53%) / 0.1505 (+19.16%)
Frenét+ NN + MAP       | 0.8284 / 1.7193 / 0.2114           | 0.9984 (+20.52%) / 2.0223 (+17.62%) / 0.2625 (+24.17%)
Frenét+ LSTM ED + MAP  | 2.0918 / 4.4296 / 0.6467           | 2.1058 (+0.67%) / 4.4537 (+0.54%) / 0.6498 (+0.48%)
Frenét+ WIMP           | 0.7596 / 1.1263 / 0.1167           | 0.7718 (+1.61%) / 1.1475 (+1.88%) / 0.1183 (+1.37%)
Frenét+ LaneGCN        | 0.7218 / 1.1039 / 0.1064           | 0.7330 (+1.55%) / 1.1215 (+1.59%) / 0.1091 (+2.54%)
Frenét+ HiVT           | 0.7756 / 1.2370 / 0.1344           | 0.7882 (+1.62%) / 1.2602 (+1.88%) / 0.1402 (+4.32%)
REFERENCES

[1] W. Zhan, L. Sun, D. Wang, H. Shi, A. Clausse, M. Naumann, J. Kummerle, H. Konigshof, C. Stiller, A. de La Fortelle, et al., "Interaction dataset: An international, adversarial and cooperative motion dataset in interactive driving scenarios with semantic maps," arXiv preprint arXiv:1910.03088, 2019.
[2] S. Ettinger, S. Cheng, B. Caine, C. Liu, H. Zhao, S. Pradhan, Y. Chai, B. Sapp, C. R. Qi, Y. Zhou, et al., "Large scale interactive motion forecasting for autonomous driving: The Waymo Open Motion Dataset," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 9710-9719.
[3] M.-F. Chang, J. Lambert, P. Sangkloy, J. Singh, S. Bak, A. Hartnett, D. Wang, P. Carr, S. Lucey, D. Ramanan, and J. Hays, "Argoverse: 3D tracking and forecasting with rich maps," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
[4] B. Wilson, W. Qi, T. Agarwal, J. Lambert, J. Singh, S. Khandelwal, B. Pan, R. Kumar, A. Hartnett, J. K. Pontes, et al., "Argoverse 2: Next generation datasets for self-driving perception and forecasting," in Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.
[5] M. Liang, B. Yang, R. Hu, Y. Chen, R. Liao, S. Feng, and R. Urtasun, "Learning lane graph representations for motion forecasting," in European Conference on Computer Vision. Springer, 2020, pp. 541-556.
[6] J. Gao, C. Sun, H. Zhao, Y. Shen, D. Anguelov, C. Li, and C. Schmid, "VectorNet: Encoding HD maps and agent dynamics from vectorized representation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11525-11533.
[7] Z. Zhou, L. Ye, J. Wang, K. Wu, and K. Lu, "HiVT: Hierarchical vector transformer for multi-agent motion prediction," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022, pp. 8823-8833.
[8] M. Bahari, S. Saadatnejad, A. Rahimi, M. Shaverdikondori, A. H. Shahidzadeh, S.-M. Moosavi-Dezfooli, and A. Alahi, "Vehicle trajectory prediction works, but not everywhere," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 17123-17133.
[9] M. Werling, J. Ziegler, S. Kammel, and S. Thrun, "Optimal trajectory generation for dynamic street scenarios in a Frenét frame," in 2010 IEEE International Conference on Robotics and Automation. IEEE, 2010, pp. 987-993.
[10] A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese, "Social LSTM: Human trajectory prediction in crowded spaces," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 961-971.
[11] F. Altché and A. de La Fortelle, "An LSTM network for highway trajectory prediction," in 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2017, pp. 353-359.
[12] N. Deo and M. M. Trivedi, "Convolutional social pooling for vehicle trajectory prediction," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 1468-1476.
Convolutional neural network for trajectory prediction. N Nikhil, B. Tran Morris, Proceedings of the European Conference on Computer Vision (ECCV) Workshops. the European Conference on Computer Vision (ECCV) WorkshopsN. Nikhil and B. Tran Morris, "Convolutional neural network for trajectory prediction," in Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018, pp. 0-0.
Social gan: Socially acceptable trajectories with generative adversarial networks. A Gupta, J Johnson, L Fei-Fei, S Savarese, A Alahi, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)A. Gupta, J. Johnson, L. Fei-Fei, S. Savarese, and A. Alahi, "Social gan: Socially acceptable trajectories with generative adversarial net- works," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
Social-bigat: Multimodal trajectory forecasting using bicycle-gan and graph attention networks. V Kosaraju, A Sadeghian, R Martín-Martín, I Reid, H Rezatofighi, S Savarese, Advances in Neural Information Processing Systems. 32V. Kosaraju, A. Sadeghian, R. Martín-Martín, I. Reid, H. Rezatofighi, and S. Savarese, "Social-bigat: Multimodal trajectory forecasting us- ing bicycle-gan and graph attention networks," Advances in Neural Information Processing Systems, vol. 32, 2019.
Sophie: An attentive gan for predicting paths compliant to social and physical constraints. A Sadeghian, V Kosaraju, A Sadeghian, N Hirose, H Rezatofighi, S Savarese, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. the IEEE/CVF conference on computer vision and pattern recognitionA. Sadeghian, V. Kosaraju, A. Sadeghian, N. Hirose, H. Rezatofighi, and S. Savarese, "Sophie: An attentive gan for predicting paths compliant to social and physical constraints," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 1349-1358.
Socialstgcnn: A social spatio-temporal graph convolutional neural network for human trajectory prediction. A Mohamed, K Qian, M Elhoseiny, C Claudel, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern Recognition14A. Mohamed, K. Qian, M. Elhoseiny, and C. Claudel, "Social- stgcnn: A social spatio-temporal graph convolutional neural network for human trajectory prediction," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 14 424-14 432.
Gsan: Graph self-attention network for learning spatial-temporal interaction representation in autonomous driving. L Ye, Z Wang, X Chen, J Wang, K Wu, K Lu, IEEE Internet of Things Journal. L. Ye, Z. Wang, X. Chen, J. Wang, K. Wu, and K. Lu, "Gsan: Graph self-attention network for learning spatial-temporal interaction repre- sentation in autonomous driving," IEEE Internet of Things Journal, 2021.
Rules of the road: Predicting driving behavior with a convolutional model of semantic interactions. J Hong, B Sapp, J Philbin, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)J. Hong, B. Sapp, and J. Philbin, "Rules of the road: Predicting driving behavior with a convolutional model of semantic interactions," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
. H Cui, V Radosavljevic, F.-C Chou, T.-H Lin, T Nguyen, T.-K , H. Cui, V. Radosavljevic, F.-C. Chou, T.-H. Lin, T. Nguyen, T.-K.
Multimodal trajectory predictions for autonomous driving using deep convolutional networks. J Huang, N Schneider, Djuric, 2019 International Conference on Robotics and Automation (ICRA). IEEEHuang, J. Schneider, and N. Djuric, "Multimodal trajectory predictions for autonomous driving using deep convolutional networks," in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 2090-2096.
Rules of the road: Predicting driving behavior with a convolutional model of semantic interactions. J Hong, B Sapp, J Philbin, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionJ. Hong, B. Sapp, and J. Philbin, "Rules of the road: Predicting driving behavior with a convolutional model of semantic interactions," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 8454-8462.
Multipath: Multiple probabilistic anchor trajectory hypotheses for behavior prediction. Y Chai, B Sapp, M Bansal, D Anguelov, arXiv:1910.05449arXiv preprintY. Chai, B. Sapp, M. Bansal, and D. Anguelov, "Multipath: Multiple probabilistic anchor trajectory hypotheses for behavior prediction," arXiv preprint arXiv:1910.05449, 2019.
. F.-C Chou, T.-H Lin, H Cui, V Radosavljevic, T Nguyen, T.-K , F.-C. Chou, T.-H. Lin, H. Cui, V. Radosavljevic, T. Nguyen, T.-K.
Predicting motion of vulnerable road users using high-definition maps and efficient convnets. M Huang, J Niedoba, N Schneider, Djuric, 2020 IEEE Intelligent Vehicles Symposium (IV). IEEEHuang, M. Niedoba, J. Schneider, and N. Djuric, "Predicting motion of vulnerable road users using high-definition maps and efficient convnets," in 2020 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2020, pp. 1655-1662.
Uncertainty-aware short-term motion prediction of traffic actors for autonomous driving. N Djuric, V Radosavljevic, H Cui, T Nguyen, F.-C Chou, T.-H Lin, N Singh, J Schneider, Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. the IEEE/CVF Winter Conference on Applications of Computer VisionN. Djuric, V. Radosavljevic, H. Cui, T. Nguyen, F.-C. Chou, T.-H. Lin, N. Singh, and J. Schneider, "Uncertainty-aware short-term motion prediction of traffic actors for autonomous driving," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2020, pp. 2095-2104.
Home: Heatmap output for future motion estimation. T Gilles, S Sabatini, D Tsishkou, B Stanciulescu, F Moutarde, 2021 IEEE International Intelligent Transportation Systems Conference (ITSC). T. Gilles, S. Sabatini, D. Tsishkou, B. Stanciulescu, and F. Moutarde, "Home: Heatmap output for future motion estimation," in 2021 IEEE International Intelligent Transportation Systems Conference (ITSC).
. IEEE. IEEE, 2021, pp. 500-507.
Can autonomous vehicles identify, recover from, and adapt to distribution shifts. A Filos, P Tigkas, R Mcallister, N Rhinehart, S Levine, Y Gal, in International Conference on Machine Learning. PMLR, 2020. A. Filos, P. Tigkas, R. McAllister, N. Rhinehart, S. Levine, and Y. Gal, "Can autonomous vehicles identify, recover from, and adapt to distribution shifts?" in International Conference on Machine Learning. PMLR, 2020, pp. 3145-3153.
Uncertainty estimation for cross-dataset performance in trajectory prediction. T Gilles, S Sabatini, D Tsishkou, B Stanciulescu, F Moutarde, T. Gilles, S. Sabatini, D. Tsishkou, B. Stanciulescu, and F. Moutarde, "Uncertainty estimation for cross-dataset performance in trajectory prediction," 2022. [Online]. Available: https://arxiv.org/abs/2205. 07310
Journal de mathématiques pures et appliquées. F Frenet, Sur les courbes a double courbureF. Frenet, "Sur les courbes a double courbure." Journal de mathématiques pures et appliquées, pp. 437-447, 1852.
On-line trajectory generation for safe and optimal vehicle motion planning. D Althoff, M Buss, A Lawitzky, M Werling, D Wollherr, Autonomous Mobile Systems. SpringerD. Althoff, M. Buss, A. Lawitzky, M. Werling, and D. Wollherr, "On-line trajectory generation for safe and optimal vehicle motion planning," in Autonomous Mobile Systems 2012. Springer, 2012, pp. 99-107.
Timeoptimal motion planning for n-dof robot manipulators using a pathparametric system reformulation. R Verschueren, N Van Duijkeren, J Swevers, M Diehl, 2016 American Control Conference (ACC). IEEER. Verschueren, N. van Duijkeren, J. Swevers, and M. Diehl, "Time- optimal motion planning for n-dof robot manipulators using a path- parametric system reformulation," in 2016 American Control Confer- ence (ACC). IEEE, 2016, pp. 2092-2097.
Baidu apollo em motion planner. H Fan, F Zhu, C Liu, L Zhang, L Zhuang, D Li, W Zhu, J Hu, H Li, Q Kong, arXiv:1807.08048arXiv preprintH. Fan, F. Zhu, C. Liu, L. Zhang, L. Zhuang, D. Li, W. Zhu, J. Hu, H. Li, and Q. Kong, "Baidu apollo em motion planner," arXiv preprint arXiv:1807.08048, 2018.
Genetic k-means algorithm. K Krishna, M N Murty, IEEE Transactions on Systems, Man, and Cybernetics. 293Part B (Cybernetics)K. Krishna and M. N. Murty, "Genetic k-means algorithm," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 29, no. 3, pp. 433-439, 1999.
Principal component analysis. H Abdi, L J Williams, Wiley interdisciplinary reviews: computational statistics. 24H. Abdi and L. J. Williams, "Principal component analysis," Wiley interdisciplinary reviews: computational statistics, vol. 2, no. 4, pp. 433-459, 2010.
Generalizing to unseen domains: A survey on domain generalization. J Wang, C Lan, C Liu, Y Ouyang, T Qin, W Lu, Y Chen, W Zeng, P Yu, IEEE Transactions on Knowledge and Data Engineering. J. Wang, C. Lan, C. Liu, Y. Ouyang, T. Qin, W. Lu, Y. Chen, W. Zeng, and P. Yu, "Generalizing to unseen domains: A survey on domain generalization," IEEE Transactions on Knowledge and Data Engineering, 2022.
What-if motion prediction for autonomous driving. S Khandelwal, W Qi, J Singh, A Hartnett, D Ramanan, arXiv:2008.10587arXiv preprintS. Khandelwal, W. Qi, J. Singh, A. Hartnett, and D. Ramanan, "What-if motion prediction for autonomous driving," arXiv preprint arXiv:2008.10587, 2020.
nuscenes: A multimodal dataset for autonomous driving. H Caesar, V Bankiti, A H Lang, S Vora, V E Liong, Q Xu, A Krishnan, Y Pan, G Baldan, O Beijbom, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. the IEEE/CVF conference on computer vision and pattern recognitionH. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom, "nuscenes: A multimodal dataset for autonomous driving," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 11 621-11 631.
| zyda_arxiv-0137000 |
Thermodynamic Limit and Dispersive Regularisation in Matrix Models

Costanza Benassi and Antonio Moro

Department of Mathematics, Physics and Electrical Engineering, Northumbria University Newcastle, Newcastle upon Tyne, United Kingdom

(Dated: April 5, 2019)
We show that Hermitian Matrix Models support the occurrence of a new type of phase transition characterised by dispersive regularisation of the order parameter near the critical point. Using the identification of the partition function with a solution of a reduction of the Toda hierarchy, known as the Volterra system, we argue that the singularity is resolved via the onset of a multi-dimensional dispersive shock described by an integrable flow in the space of coupling constants. This analysis explains the origin and mechanism leading to the emergence of chaotic behaviours observed in $M^6$ matrix models and extends its validity to even nonlinearities of arbitrary order.
Random Matrix Models, originally introduced to describe spectra of heavy nuclei, became a universal paradigm for modelling complex phenomena. They naturally arise in connection with different areas of mathematics and physics, from quantum field theory to the theory of integrable systems [1-3]. A celebrated conjecture of Witten [4], proven by Kontsevich [5], established the identification of the free energy of 2D quantum gravity with the tau-function of a particular solution of the Korteweg-de Vries hierarchy. Thereafter, similar relations between specific matrix models on Hermitian, Unitary and Symplectic ensembles and integrable hierarchies have been discovered (see e.g. [6-9] and references therein). Furthermore, extensive studies of the properties of matrix model partition functions unravelled intriguing connections between the theory of integrable systems, statistical mechanics, quantum field theory, algebraic and enumerative geometry [4, 6, 8-11]. For the sake of simplicity, we focus on the case of Hermitian Matrix Models with even nonlinear interaction terms and their connection with the Toda hierarchy, but our considerations can in principle be extended to other matrix ensembles. We also note that asymptotic properties of partition functions in the thermodynamic limit of one-matrix models with even and odd nonlinearity and their connection with the Toda hierarchy have been previously considered in [13, 14]. A key point is that the sequence of partition functions $Z_n$ for the one-matrix model of n × n matrices can be expressed in terms of the tau-function of the one-dimensional Toda chain restricted to the even times of the hierarchy, where the index n labels points on the positive semi-axis of the chain. The identification of the Toda system with the matrix model is based on a one-to-one correspondence between the coupling constants of the model and the times of the hierarchy.
The partition function $Z_n$ for fixed coupling constants is therefore specified by the state of the n-th particle of the chain at the corresponding times. Most importantly, the dynamics of the even hierarchy is uniquely specified by the initial conditions, which are given in terms of the partition function of the free model, i.e. where all coupling constants vanish. In this respect, the model is simpler than the case of 2D gravity studied in [4], where the initial condition is specified by additional symmetries that are compatible with the hierarchy, namely the Virasoro constraints [6, 11]. In his pioneering work [15], Jurkiewicz observed that a natural order parameter can be introduced, using orthogonal polynomial decompositions and combinatorial considerations [16]. Such an order parameter develops, in the thermodynamic limit, a singularity that is regularised by apparently chaotic oscillations. A rigorous proof of the occurrence of asymptotic oscillations of the partition function has been given in [17, 18]. We argue that the oscillations are the result of the dispersive regularisation of the shock in the continuum limit of the Toda-Volterra system [19-21]. The chaotic phase is therefore interpreted as a dispersive shock propagating through the chain in the continuum/thermodynamic limit. In this regime, the natural order parameter is given by interpolation of Flaschka's coordinates, and its behaviour in the space of coupling constants is described by a solution of a scalar integrable hierarchy of hydrodynamic type. The considerations above outline the following scenario: when a thermodynamic system undergoes a phase transition, some specific quantities, the order parameters, develop singularities. In the context of conservation laws of hydrodynamic type, when a singularity (hydrodynamic catastrophe) occurs, viscosity and dispersion underpin two different mechanisms of regularisation of such a singularity.
In the presence of small viscosity the solution develops a sharp but smooth wavefront [22]. If small viscosity is replaced by small dispersion, when the wavefront approaches the point of gradient catastrophe the dispersion induces initially small oscillations that further evolve into a dispersive shock [23-25]. In classical mean field fluid and spin models, phase transitions are associated with classical shocks of the order parameters in the space of thermodynamic parameters [26-28]. In this work we show that the chaotic behaviour observed in [15] is indeed a phase transition where the order parameter develops a singularity that is resolved via dispersion rather than viscosity, as in classical spin models. This observation paves the way to a classification programme of phase transitions based on the normal forms of the differential identities satisfied by the free energy and the order parameters.
arXiv:1903.11473v2 [math-ph] 4 Apr 2019
Hermitian Matrix models. We study the Hermitian Matrix Model defined by the partition function

$$Z_n(\mathbf{t}) = \int_{\mathcal{H}_n} e^{H(M)} \, dM, \qquad (1)$$
where M are Hermitian matrices of order n, $H(M) = \operatorname{Tr}\bigl(-M^2/2 + \sum_{j=1}^{\infty} t_{2j} M^{2j}\bigr)$ is the Hamiltonian, with $\mathbf{t} = \{t_{2j}\}_{j \ge 1}$ the coupling constants, and dM is the Haar measure on the space of Hermitian matrices $\mathcal{H}_n$. Based on a classical result by Weil [29], the partition function (1) is proportional to an integral over the eigenvalues of the matrix M, that is $Z_n(\mathbf{t}) = c_n \tau_n(\mathbf{t})$ where $c_n$ is a constant and
$$\tau_n(\mathbf{t}) = \frac{1}{n!} \int_{\mathbb{R}^n} \Delta_n(\lambda)^2 \prod_{i=1}^{n} e^{H(\lambda_i)} \, d\lambda_i \qquad (2)$$
with $\Delta_n(\lambda) = \prod_{1 \le i < j \le n} (\lambda_i - \lambda_j)$ denoting the Vandermonde determinant. A theorem by Adler and van Moerbeke [6] implies that the quantity (2) can be interpreted as a tau-function of the Toda hierarchy restricted to the even flows
$$\frac{\partial L}{\partial t_{2k}} = \frac{1}{2} \left[ \left( L^{2k} \right)_s , \, L \right], \qquad k = 1, 2, \dots \qquad (3)$$
with L the tridiagonal symmetric Lax matrix of the form

$$L = \begin{pmatrix} 0 & b_1 & 0 & 0 & \cdots \\ b_1 & 0 & b_2 & 0 & \cdots \\ 0 & b_2 & 0 & b_3 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix} \qquad (4)$$
where $b_i^2 = \tau_{i+1}\tau_{i-1}/\tau_i^2$ and $(L^{2k})_s$ denotes the skew-symmetric part of the matrix $L^{2k}$ (see e.g. [6]). The solution of interest is specified by the initial conditions $b_n(0) = \sqrt{n}$, obtained via a direct calculation of Gaussian integrals for the quantities $\tau_n(0) = (2\pi)^{n/2} \prod_{j=1}^{n} j! \,/\, n!$. We note that the Lax matrix of the type (4), originally considered in [12], and more recently in [13], corresponds to a reduction of the even Toda hierarchy known as the Volterra hierarchy. Incidentally, we mention that the model with odd nonlinearities is different from the present case and its relation with the Toda hierarchy has been considered in [14]. We observe that the hierarchy (3) can be written in the form
$$\frac{\partial B_n}{\partial t_{2k}} = B_n \left( V^{(2k)}_{n+1} - V^{(2k)}_{n-1} \right), \qquad k = 1, 2, \dots \qquad (5)$$
where $B_n = b_n^2$ and $V^{(2k)}_n$ are suitable functions of the variables $B_n$. For instance, the first three non-trivial flows are given by

$$V^{(2)}_n = B_n, \qquad V^{(4)}_n = V^{(2)}_n \left( V^{(2)}_{n-1} + V^{(2)}_n + V^{(2)}_{n+1} \right),$$
$$V^{(6)}_n = V^{(2)}_n \left( V^{(2)}_{n-1} V^{(2)}_{n+1} + V^{(4)}_{n-1} + V^{(4)}_n + V^{(4)}_{n+1} \right).$$
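The structure of these flows can be checked with a few lines of code. The sketch below is illustrative only (not the authors' code; `V2`, `V4`, `V6` are our own helper names, with $V^{(2j)}_n$ taken as a degree-j polynomial in the $B$'s): on a homogeneous chain $B_n \equiv u$ the flows reduce to $u$, $3u^2$ and $10u^3$, the degree-homogeneous values that resurface as coefficients in the continuum limit.

```python
# Illustrative check of the first Volterra flows on a homogeneous chain.
def V2(B, n):
    return B[n]

def V4(B, n):
    return B[n] * (B[n - 1] + B[n] + B[n + 1])

def V6(B, n):
    return B[n] * (B[n - 1] * B[n + 1] + V4(B, n - 1) + V4(B, n) + V4(B, n + 1))

u = 0.37
B = [u] * 7          # homogeneous chain; n = 3 is an interior site
assert abs(V2(B, 3) - u) < 1e-12
assert abs(V4(B, 3) - 3 * u**2) < 1e-12
assert abs(V6(B, 3) - 10 * u**3) < 1e-12
```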
We conjecture that the required solution to the above reduction of the even Toda hierarchy is given by the recursive formula (string equation)

$$n = B_n - \sum_{j=1}^{\infty} 2j \, t_{2j} \, V^{(2j)}_n. \qquad (6)$$
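At vanishing coupling constants the string equation reduces to $B_n = n$, which can be cross-checked against the Gaussian tau-function of the free model. A minimal numerical sketch (ours, not the authors' code):

```python
import math

def tau0(n):
    # Free model: tau_n(0) = (2 pi)^(n/2) * prod_{j=1}^n j! / n!
    prod = 1.0
    for j in range(1, n + 1):
        prod *= math.factorial(j)
    return (2 * math.pi) ** (n / 2) * prod / math.factorial(n)

def B0(n):
    # B_n = b_n^2 = tau_{n+1} tau_{n-1} / tau_n^2
    return tau0(n + 1) * tau0(n - 1) / tau0(n) ** 2

# String equation (6) with all t_{2j} = 0 reads n = B_n:
for n in range(1, 10):
    assert abs(B0(n) - n) < 1e-8
```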
We proved that Eq. (6) gives the exact solution of the equations (5) for $t_2, t_4, \dots, t_{10}$, hence the conjecture. Eq. (6) allows one to evaluate the order parameter of the $M^{2q}$ model for arbitrary q and generalises the formula obtained by Jurkiewicz for q = 3 [15, 30]. We analyse the Matrix Model in the large n (thermodynamic) regime via the continuum limit of the solution (6) of the reduced Toda hierarchy. Introducing the scale given by a suitable large integer N and the rescaled variables $u_n = B_n/N$, $T_{2k} = N^{k-1} t_{2k}$, Eq. (6) reads as follows

$$\frac{n}{N} = u_n - \sum_{j=1}^{\infty} 2j \, T_{2j} \, W^{(2j)}_n \qquad (7)$$
where $W^{(2j)}_n = V^{(2j)}_n / N^j$. We then define the interpolating function u(x) such that $u_n = u(x)$ for $x = n/N$ and $u_{n \pm 1} = u(x \pm \epsilon)$ with the notation $\epsilon = 1/N$. Using this substitution in the equations (7) and expanding in Taylor series for small $\epsilon$, at the leading order we have the polynomial equation in u of the form
$$\Omega := -x + (1 - 2T_2) u - 12 T_4 u^2 - 60 T_6 u^3 + \cdots = 0 \qquad (8)$$
where the dots denote terms in the higher times $T_{2k}$ with $k > 3$. Formula (8) can be viewed as a solution of the Hopf hierarchy of PDEs $u_{T_{2k}} = C_k u^{k} u_x$ obtained from the leading order of the continuum limit expansion of the Volterra hierarchy (5). It is well known [22] that the generic solution of the Hopf hierarchy develops singular behaviour at finite values of the "time" variables $T_{2k}$. In the following, we study these singularities and their relation to the occurrence of phase transitions. Eq. (7) is expected to reproduce quasi-trivial deformations of the Hopf hierarchy, and the behaviour near the singularity is expected to be universally described by a solution of the fourth order analogue of the Painlevé I equation [31, 32].

Dispersive regularisation. We illustrate the general phenomenology by considering the particular case $T_{2k} = t_{2k} = 0$ for all $k > 3$, so that $T_2$, $T_4$ and $T_6$ are the only non-zero coupling constants. This choice allows for a simple but sufficiently general analysis demonstrating that the chaotic behaviours observed in [30] correspond to a type of phase transition comprised by a dispersive shock of the order parameters. The shock occurs as a dispersive regularisation mechanism of a particular solution of the hierarchy (5) in the continuum limit. In Fig. 1 we compare the order parameter u(x) obtained as the solution of the recurrence equation (7) with the solution of the limit equation (8). The values of $T_2$, $T_4$ and $T_6$ are chosen in such a way that the solution of the cubic equation (8) is single-valued. Fig. 1a shows that the two solutions fully overlap for sufficiently small $\epsilon$, but, as shown in Fig. 1b, a relevant deviation is observed in the vicinity of the point of gradient catastrophe of the solution to Eq. (8). We observe that equation (8) provides, for the above choices of coupling constants, the condition for extremising the free energy functional of density
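The location of the gradient catastrophe can be read off Eq. (8) directly: it sits where $dx/du$ vanishes. The sketch below (illustrative, ours; it uses the Fig. 1b parameters $T_2 = 0$, $T_4 = 0.1$, $T_6 = -0.008$) recovers the critical point $x = 5/18 \approx 0.28$.

```python
import math

T2, T4, T6 = 0.0, 0.1, -0.008   # Fig. 1b parameters

def x_of_u(u):
    # Eq. (8) solved for x: x = (1 - 2 T2) u - 12 T4 u^2 - 60 T6 u^3
    return (1 - 2 * T2) * u - 12 * T4 * u**2 - 60 * T6 * u**3

# dx/du = (1 - 2 T2) - 24 T4 u - 180 T6 u^2 vanishes at the catastrophe
a, b, c = -180 * T6, -24 * T4, 1 - 2 * T2
disc = max(b * b - 4 * a * c, 0.0)   # clip round-off; here the root is double
u_c = (-b - math.sqrt(disc)) / (2 * a)
x_c = x_of_u(u_c)
# u_c = 5/6 and x_c = 5/18 = 0.2777..., the value quoted in the Fig. 1 caption
```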
$$F[u] = -xu + \frac{1}{2} (1 - 2T_2) u^2 - 4 T_4 u^3 - 15 T_6 u^4. \qquad (9)$$
In particular, local minima and maxima depend on the sign of the discriminant $\Delta(x, T_2, T_4, T_6)$ of the cubic equation (8). If $\Delta > 0$ the free energy has two local minima and one local maximum; if $\Delta < 0$ the free energy has one local minimum only. The set in the space of parameters such that $\Delta = 0$ is the critical set where a phase transition occurs. The analysis of the sign of $\Delta$ shows that different scenarios need to be considered depending on whether the coefficients of the cubic equation (8) are negative or positive. In order to ensure convergence of the integral (1), necessarily $T_6 < 0$. Hence, we have four distinct cases, depending on the signs of the coefficients $1 - 2T_2$ and $-12 T_4$ in Eq. (8). Fig. 2 illustrates a case where $T_2 < 1/2$ and $T_4 > 0$. A similar analysis can be carried out for the remaining cases. This choice corresponds to the case analysed in [15, 30], hence it allows for a direct comparison. In Fig. 2a we plot the set $\Delta = 0$ in the x-$T_6$ plane for a given choice of $T_2$ and $T_4$. The convex sector corresponds to the region where the equation of state (8) admits three real solutions, which correspond to the stationary points of the free energy density plotted in Fig. 2b. We compare the free energy for a given value of x and two different values of $T_6$. For $T_6 = -0.0051$ the difference of the values of the free energy density at its local minima is particularly pronounced compared with the case $T_6 = -0.0067$. Figs. 2c and 2d show a comparison between the cubic solution (8) and the exact solution (7) for different values of $T_6$ within the convex region shown in Fig. 2a, where the solution of (8) is multivalued. Both figures demonstrate the onset of a dispersive shock wave. This behaviour is intriguing as, unlike classical statistical mechanical systems, e.g.
magnetic and fluid models [33], the order parameter u(x) develops oscillations in conjunction with the existence of additional stationary points of the free energy, such as unstable and metastable states. Fig. 3 shows two examples where $T_2 > 1/2$, with $T_4 < 0$ (Fig. 3a and Fig. 3b) and $T_4 > 0$ (Fig. 3c and Fig. 3d). In both cases the solution to Eq. (8) is three-valued, but one root associated with a local minimum is negative and therefore does not correspond to a state of the system (by definition $u \ge 0$). However, two concurrent states, although of different nature, one stable and one unstable, underlie a dispersive shock. Notice that for x > 0 the solution to Eq. (8) has one non-negative branch only. Nonetheless, u(x) develops a dispersive shock profile at positive x, although this is originated by a catastrophe located at x < 0. In both scenarios the solution to Eq. (8) is multivalued with two non-negative branches for a small interval of negative values of x. It is therefore natural to study cases where the cubic solution is multivalued but only one branch is positive, so that only one solution corresponds to a state accessible by the system. Such a case is shown in Fig. 4, where the solution of the recurrence equation (7) converges to the cubic solution and no dispersive shocks occur. The above analysis suggests that dispersive regularisation in the form of a dispersive shock of the order parameter is related to the existence of accessible (meta-)stable/unstable states. In particular, the behaviour of the order parameter, specifically the form of the envelope, appears to be highly sensitive to the choice of the parameters $T_{2k}$. For instance, Figs. 2c, 2d, 3b and 3d show a dispersive shock whose envelope displays very distinctive features which require further investigations.
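The root structure behind these scenarios is easy to probe numerically. The sketch below (ours, not the authors' code; parameter values taken from the Fig. 1 and Fig. 2 captions) counts the real roots of Eq. (8), i.e. the stationary points of the free-energy density (9): three inside the convex sector of Fig. 2a, one when $\Delta < 0$.

```python
import numpy as np

def stationary_points(x, T2, T4, T6):
    # Real roots of Eq. (8): -60 T6 u^3 - 12 T4 u^2 + (1 - 2 T2) u - x = 0,
    # i.e. the stationary points of the free-energy density (9).
    roots = np.roots([-60 * T6, -12 * T4, 1 - 2 * T2, -x])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# Inside the convex sector of Fig. 2a: two local minima and one local maximum.
assert len(stationary_points(0.22, 0.0, 0.1, -0.0067)) == 3
# Fig. 1a parameters (Delta < 0 for all x): a single minimum.
assert len(stationary_points(0.22, 0.0, 0.1, -0.01)) == 1
```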
A detailed study of this intriguing behaviour will require the construction of the asymptotic genus expansion of the solution (7) and Whitham's modulation theory for solutions of Eq. (5). We also point out that the rich phenomenology described reflects the fact that the dispersive shock given by the solution (7) is an intrinsically multidimensional object arising from the simultaneous solution of the equations of the hierarchy (5).
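The dispersive dynamics itself can be explored with a direct integration of the first flow of the hierarchy (5), $\partial B_n/\partial t_2 = B_n(B_{n+1} - B_{n-1})$. The following is a toy sketch (periodic chain, standard RK4 stepping — our own setup, not the authors' scheme); on a periodic chain the total mass $\sum_n B_n$ is an exact invariant, which provides a built-in check on the integration.

```python
import numpy as np

def rhs(B):
    # First Volterra flow dB_n/dt2 = B_n (B_{n+1} - B_{n-1}), periodic chain
    return B * (np.roll(B, -1) - np.roll(B, 1))

def rk4_step(B, dt):
    k1 = rhs(B)
    k2 = rhs(B + 0.5 * dt * k1)
    k3 = rhs(B + 0.5 * dt * k2)
    k4 = rhs(B + dt * k3)
    return B + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

N = 64
B = 1.0 + 0.1 * np.cos(2 * np.pi * np.arange(N) / N)  # smooth initial datum
mass0 = B.sum()
for _ in range(500):
    B = rk4_step(B, 1e-3)
assert abs(B.sum() - mass0) < 1e-8   # linear invariant preserved by RK4
assert (B > 0).all()                 # B_n stays positive on this timescale
```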
FIG. 1: Comparison of the order parameter evaluated using Eq. (7) and Eq. (8) at $T_2 = 0$, $T_4 = 0.1$. In Fig. 1a $T_6 = -0.01$ and $\Delta < 0$ for all x. In Fig. 1b $T_6 = -0.008$ and $\Delta = 0$ at $x = 5/18 \approx 0.28$.

FIG. 2: In all figures $T_2 = 0$ and $T_4 = 0.1$. Fig. 2a: critical set $\Delta = 0$ in the x-$T_6$ plane; the dashed lines correspond to the specific values of $T_6$ and x analysed in the following figures. Fig. 2b: free energy for different values of $T_6$ at $x = 0.22$. Figs. 2c and 2d: comparison of the order parameter evaluated using Eq. (7) and Eq. (8) at $T_6 = -0.0067$ and $T_6 = -0.0051$ respectively, with $\epsilon = 0.01$.

FIG. 3: In all figures $T_2 = 1$ and $T_6 = -0.25$. Figs. 3a and 3c: multivalued solution of Eq. (8) for $T_4 = -0.25$ and $T_4 = 0.25$ respectively. Figs. 3b and 3d: comparison of the order parameter evaluated using Eq. (7) and Eq. (8) for $T_4 = -0.25$ and $T_4 = 0.25$ respectively, with $\epsilon = 0.01$.

FIG. 4: In all figures $T_2 = 0.25$, $T_4 = -1$, $T_6 = -0.5$. Fig. 4a: solution of Eq. (8). Fig. 4b: comparison of the solution of Eq. (7) with the non-negative branch of the solution to Eq. (8).
Acknowledgements. This work is dedicated to the memory of Boris Dubrovin (1950-2019), whose magnificent scientific foresight and generosity have inspired the original ideas of this research. The authors are grateful to The Leverhulme Trust grant RPG-2017-228 "Integrable Dressed Networks" for supporting this research project.
[1] E.P. Wigner, On the statistical distribution of the widths and spacings of nuclear resonance levels, Math. Proc. Cambr. Phil. Soc. 47, 790-798 (1951).
[2] F. Dyson, Statistical theory of the energy levels of complex systems. I, II, III, J. Math. Phys. 3:1, 140-156, 157-165, 166-175 (1962).
[3] M.L. Mehta, Random matrices, 2nd ed., Boston: Acad. Press, 1991.
[4] E. Witten, Two-dimensional gravity and intersection theory on moduli space, Surv. Diff. Geom. 1, 243-310 (1991).
[5] M. Kontsevich, Intersection Theory on the Moduli Space of Curves and the Matrix Airy Function, Commun. Math. Phys. 147, 1-23 (1992).
[6] M. Adler and P. van Moerbeke, Matrix integrals, Toda symmetries, Virasoro constraints, and orthogonal polynomials, Duke Math. J. 80 (3), 863-911 (1995).
[7] M. Adler and P. van Moerbeke, Integrals over Classical Groups, Random Permutations, Toda and Toeplitz Lattices, Commun. Pure Appl. Math. 54, 153-205 (2001).
[8] A. Okounkov, N. Reshetikhin, C. Vafa, Quantum Calabi-Yau and Classical Crystals, in The Unity of Mathematics, P. Etingof, V. Retakh, I.M. Singer (eds), 597-618, Springer, 2006.
[9] T. Nakatsu, K. Takasaki, Melting Crystal, Quantum Torus and Toda Hierarchy, Commun. Math. Phys. 285, 445-468 (2009).
[10] B. Dubrovin, Integrable systems in topological field theory, Nucl. Phys. B 379, 627-689 (1992).
[11] B. Dubrovin, Gromov-Witten invariants and integrable hierarchies of topological type, Amer. Math. Transl. 234, 141-171 (2014).
[12] M. Kac, P. van Moerbeke, On an explicitly soluble system of nonlinear differential equations related to certain Toda lattices, Adv. Math. 16, 160-169 (1975).
[13] N.M. Ercolani, K.T.-R. McLaughlin, V.U. Pierce, Random Matrices, Graphical Enumeration and the Continuum Limit of Toda Lattices, Commun. Math. Phys. 278, 31-81 (2008).
[14] N.M. Ercolani, V.U. Pierce, The continuum limit of Toda lattices for random matrices with odd weights, Commun. Math. Sci. 10 (1), 267-305 (2012).
[15] J. Jurkiewicz, Chaotic behaviour in one-matrix models, Phys. Lett. B 261 (3), 260-268 (1991).
[16] E. Brézin, C. Itzykson, G. Parisi and J.B. Zuber, Planar Diagrams, Commun. Math. Phys. 59 (1), 35-51 (1978).
[17] P. Deift, T. Kriecherbauer, K.T.-R. McLaughlin, S. Venakides, and X. Zhou, Strong asymptotics of orthogonal polynomials with respect to exponential weights, Comm. Pure Appl. Math. 52, 1491-1552 (1999).
[18] T. Claeys, T. Grava, K.D.T.-R. McLaughlin, Asymptotics for the Partition Function in Two-Cut Random Matrix Models, Commun. Math. Phys. 339 (2), 513-587 (2015).
[19] P. Deift, K.T.-R. McLaughlin, A continuum limit of the Toda Lattice, Memoirs of the AMS 131 (624), 1998.
Dispersive regularization of the Whitham equation for the Toda lattice. A M Bloch, Y Kodama, SIAM J. Appl. Math. 524A.M. Bloch, Y. Kodama, Dispersive regularization of the Whitham equation for the Toda lattice, SIAM J. Appl. Math., 52 (4), 909-928 (1992).
Wave phenomena of the Toda lattice with steplike initial data. J Michor, Phys. Lett. A. 380J. Michor, Wave phenomena of the Toda lattice with step- like initial data, Phys. Lett. A, 380, 1110-1116 (2016).
Linear and Nonlinear Waves. G B Whitham, WileyNew YorkG.B. Whitham, Linear and Nonlinear Waves, 1974, Wi- ley, New York.
Nonstationary structure of a collisionless shock wave. A V Gurevich, L P Pitaevskiy, Sov. Phys. JETP. 382A.V. Gurevich, L.P. Pitaevskiy, Nonstationary structure of a collisionless shock wave, Sov. Phys. JETP, 38(2), 291-297 (1974).
Resolution of a shock in hyperbolic systems modified by weak dispersion. G El, Chaos. 1537103G. El, Resolution of a shock in hyperbolic systems modi- fied by weak dispersion, Chaos 15, 037103 (2005).
Dispersive and diffusivedispersive shock waves for nonconvex conservation laws. G El, M Hoefer, M Shearer, SIAM Rev. 591G. El, M. Hoefer, M. Shearer, Dispersive and diffusive- dispersive shock waves for nonconvex conservation laws, SIAM Rev., 59(1), 3-61 (2017).
Shock dynamics of phase diagrams. A Moro, Annals of Phys. 343A. Moro, Shock dynamics of phase diagrams, Annals of Phys., 343, 49-60 (2014).
On quantum and relativistic mechanical analogues in mean-field spin models. A Barra, A Di Lorenzo, F Guerra, A Moro, Proc. Roy. Soc. A. 47020140589A. Barra, A. Di Lorenzo, F. Guerra, A. Moro, On quan- tum and relativistic mechanical analogues in mean-field spin models, Proc. Roy. Soc. A, 470, 20140589 (2014).
A mechanical approach to mean field spin models. G Genovese, A Barra, J. Math. Phys. 5053303G. Genovese, A. Barra, A mechanical approach to mean field spin models, J. Math. Phys., 50, 053303 (2009).
H Weil, The Classical Groups: Their Invariants And Their Representations. Princeton University Press2nd edH. Weil, The Classical Groups: Their Invariants And Their Representations, 2nd ed. Princeton University Press (1946).
Regularization of one-matrix models. J Jurkiewicz, Phys. Lett. B. 2452J. Jurkiewicz, Regularization of one-matrix models, Phys. Lett. B, 245 (2), 178-184 (1990).
On universality of critical behaviour in Hamiltonian PDEs. B Dubrovin, Amer. Math. Soc. Transl. 224B. Dubrovin, On universality of critical behaviour in Hamiltonian PDEs, Amer. Math. Soc. Transl., 224, 59- 109 (2008).
On critical behaviour in systems of Hamiltonian partial differential equations. B Dubrovin, T Grava, C Klein, A Moro, J. Nonlinear Sci. 25B. Dubrovin, T. Grava, C. Klein, A. Moro, On critical behaviour in systems of Hamiltonian partial differential equations, J. Nonlinear Sci., 25, 631-707 (2015).
Exactly Solved Models in Statistical Mechanics. R Baxter, Academic PressR. Baxter Exactly Solved Models in Statistical Mechanics, Academic Press (1982).
| zyda_arxiv-0187000 |
ON THE COMPACTNESS OF WEAK SOLUTIONS TO THE NAVIER-STOKES-KORTEWEG EQUATIONS FOR CAPILLARY FLUIDS
10 Aug 2018
Paolo Antonelli
Stefano Spirito
In this paper we consider the Navier-Stokes-Korteweg equations for a viscous compressible fluid with capillarity effects in three space dimensions. We prove compactness of finite energy weak solutions for large initial data. In contrast with previous results regarding this system, vacuum regions are allowed in the definition of weak solutions and no additional damping terms are considered. The compactness is obtained by introducing suitable truncations of the velocity field and the mass density at different scales, using only the a priori bounds given by the energy and the BD entropy.
Introduction
This paper is concerned with the following Navier-Stokes-Korteweg system

∂_t ρ + div(ρu) = 0,  ρ ≥ 0,  (1.1)
∂_t(ρu) + div(ρu ⊗ u) + ∇ρ^γ − 2ν div(ρDu) − 2κ^2 ρ∇∆ρ = 0,  (1.2)

in a three dimensional periodic domain, so that (t, x) ∈ (0, T) × T^3. We endow system (1.1)-(1.2) with initial data

ρ(0, x) = ρ_0(x),  (ρu)(0, x) = ρ_0(x)u_0(x).  (1.3)
The positive scalar function ρ represents the density of the fluid and the three dimensional vector field u is the velocity. The positive constants ν and κ, respectively, are the viscosity and the capillarity constants. The aim of this paper is to prove the compactness of solutions to (1.1)-(1.2). More precisely, given a sequence of solutions to (1.1)-(1.2), we show there exists a subsequence converging to a weak solution of the same system. This is one of the key steps in studying the existence of solutions for fluid dynamical systems like (1.1)-(1.2), the other one being the construction of a suitable sequence of approximate solutions.
The system (1.1)-(1.2) falls in the class of Navier-Stokes-Korteweg equations, which in their general form read

∂_t ρ + div(ρu) = 0,
∂_t(ρu) + div(ρu ⊗ u) + ∇p = 2ν div S + 2κ^2 div K,  (1.4)

where S is the viscosity stress tensor given by

S = h(ρ)Du + g(ρ) div u I,  (1.5)

the coefficients h and g satisfying h ≥ 0, h + 3g ≥ 0, and the capillarity term K satisfies

div K = ∇( ρ div(k(ρ)∇ρ) − (1/2)(ρk′(ρ) − k(ρ))|∇ρ|^2 ) − div(k(ρ)∇ρ ⊗ ∇ρ).  (1.6)
The system (1.1)-(1.2) is then obtained from (1.4)-(1.6) by choosing k(ρ) = 1, h(ρ) = ρ and g(ρ) = 0. Systems of Korteweg type arise in modeling several physical phenomena, e.g. capillarity phenomena in fluids with diffuse interface, where the density experiences a steep but still smooth change of values. K is called the Korteweg tensor and is derived rigorously from thermodynamic considerations by Dunn and Serrin in [17].
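With the choice k(ρ) = 1 the Korteweg term in (1.6) collapses to ρ∇∆ρ, which is how (1.2) is recovered from (1.4)-(1.6). This cancellation can be checked symbolically; the sketch below (an illustrative verification with sympy, not part of the paper) does the computation in two space dimensions:

```python
import sympy as sp

x, y = sp.symbols('x y')
rho = sp.Function('rho')(x, y)

grad = lambda f: sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)

g = grad(rho)
# with k = 1, (1.6) reads: div K = grad(rho*lap(rho) + |grad rho|^2 / 2)
#                                  - div(grad rho (x) grad rho)
scalar_part = rho * lap(rho) + g.dot(g) / 2
tensor_div = sp.Matrix([
    sp.diff(g[0] * g[0], x) + sp.diff(g[0] * g[1], y),
    sp.diff(g[1] * g[0], x) + sp.diff(g[1] * g[1], y),
])
divK = grad(scalar_part) - tensor_div

# expected: div K = rho * grad(lap(rho))
residual = (divK - rho * grad(lap(rho))).applyfunc(sp.expand)
assert residual == sp.zeros(2, 1)
```

The cancellation of the ∇ρ ∆ρ and (1/2)∇|∇ρ|^2 terms is purely algebraic and holds verbatim in three dimensions.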
Local existence of smooth solutions and global existence with small data for the system (1.1)-(1.2) have been proved in [22,23]. Regarding the theory of weak solutions, few results are available. By exploiting some novel a priori estimates yielded by the so-called Bresch-Desjardins (BD) entropy [9], the authors of [10] prove the global existence of weak solutions for the system (1.1)-(1.2) by considering test functions of the type ρφ, with φ smooth and compactly supported. This particular notion of weak solution has the advantage of avoiding some mathematical difficulties which arise in the definition of the velocity field in the vacuum region. The result was later extended in [21] to the case of the quantum Navier-Stokes equations, namely when k(ρ) = 1/ρ in (1.6). When system (1.1)-(1.2) is augmented by a damping term in the equation for the momentum density, it is possible to prove the existence of global solutions by using the standard notion of weak solutions [9]. Indeed, the presence of the damping term allows one to define the velocity field everywhere in the domain.
However, when dealing with general finite energy weak solutions to (1.1)-(1.2), a major mathematical difficulty arises in defining the velocity field in the vacuum region, due to the degeneracy of the viscosity coefficient h(ρ) = ρ. The momentum density is always well defined, but unfortunately the standard a priori estimates given by the physical energy (and by the BD entropy) do not rule out a possible concentration, which would prevent the convergence of the convective term in the compactness argument. Furthermore, due to the presence of the capillarity term, a Mellet-Vasseur type estimate [27] does not seem to be available for the system (1.1)-(1.2). This problem was overcome in the quantum case, where the viscosity coefficients are chosen to be h(ρ) = ρ and g(ρ) = 0. In [4], by defining a suitable velocity it is possible to consider an alternative formulation of the system where the third order term vanishes, thus allowing the derivation of a Mellet-Vasseur type estimate for the new velocity. Alternatively, in [24] the authors replace the Mellet-Vasseur argument by a truncation method, so that they can recover the necessary compactness. In both the results of [4] and [24] it is crucial that the viscosity and capillarity coefficients satisfy
k(ρ) = h′(ρ)^2 / ρ.  (1.7)
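As a quick symbolic check (illustrative only, with sympy): for h(ρ) = ρ the right-hand side of (1.7) equals 1/ρ, so the quantum choice k(ρ) = 1/ρ satisfies the relation, while the constant capillarity k(ρ) = 1 of (1.1)-(1.2) does not:

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
h = rho                                  # viscosity coefficient h(rho) = rho
rhs = sp.diff(h, rho) ** 2 / rho         # right-hand side of (1.7)

# quantum Navier-Stokes capillarity k(rho) = 1/rho satisfies (1.7) ...
assert sp.simplify(rhs - 1 / rho) == 0
# ... while the constant capillarity k(rho) = 1 of (1.1)-(1.2) does not
assert sp.simplify(rhs - 1) != 0
```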
Note that this relation (1.7) plays a crucial role in the theory; see for example [12], where the authors study the vanishing viscosity limit for the quantum Navier-Stokes equations, or [5], where (1.7) is extensively exploited to construct the approximating system, and [8], where numerical methods are performed. We stress that in (1.1)-(1.2) the viscosity and capillarity coefficients do not satisfy the relation (1.7), and hence in this paper we cannot rely on a similar analysis. In order to prove our compactness result, we also exploit a truncation argument. In contrast with [24], here it is not sufficient to truncate only the velocity field, because of the lack of control on the third order term. To overcome this issue we also perform an additional truncation of the density. Unfortunately, this approach is not as straightforward as it would appear at first glance. Indeed, when truncating for example the convective term, some remainders cannot be simply controlled by the a priori estimates. Thus we need to introduce several scales of truncations in order to control all the error terms.
As already remarked, inferring compactness properties for solutions to fluid dynamical systems like (1.1)-(1.2) is only the first step towards an existence result for global in time finite energy weak solutions. Usually this is combined with the construction of a suitable sequence of smooth approximate solutions. Potentially, this latter step could be achieved by considering the following approximating system
∂_t ρ_ε + div(ρ_ε u_ε) = 0,
∂_t(ρ_ε u_ε) + div(ρ_ε u_ε ⊗ u_ε) − 2ν div(ρ_ε Du_ε) + ∇ρ_ε^γ + ερ_ε|u_ε|^2 u_ε + εu_ε = 2κ^2 ρ_ε ∇∆ρ_ε + 2ε^2 ρ_ε ∇( ∆√ρ_ε / √ρ_ε ),
and by adapting, probably in a non trivial way, the regularisation procedure in [24] in order to rigorously derive the truncated formulation of the momentum equations. On the other hand, providing a smooth approximating system as in [5] seems to be very challenging, due to the very rigid structure of the approximation procedure. We plan to attack this problem in future works. We conclude this introduction by describing the state of the art of the analysis of the Cauchy problem for the general system (1.4)-(1.6). In the case κ = 0, (1.4) reduces to the system of compressible Navier-Stokes equations. When the viscosity coefficient h(ρ) is chosen to degenerate on the vacuum region {ρ = 0}, the Lions-Feireisl theory [26,18] and the recent approach in [11] cannot be used, because it is not possible to define the velocity in the vacuum regions. The global existence of weak solutions has been proved independently in [28] and [25] in the case h(ρ) = ρ and g(ρ) = 0. In both cases, non trivial approximation procedures are required to prove the BD entropy and the Mellet-Vasseur inequality.
When the viscosity ν = 0, the system (1.4) is called the Euler-Korteweg system and it has also been extensively studied. In [7] local well-posedness has been proved, while in [6] the global existence of smooth solutions with small data is established. Moreover, when k(ρ) = 1/ρ the system (1.4) is called the Quantum Hydrodynamic system (QHD) and arises, for example, in the description of quantum fluids. The global existence of finite energy weak solutions for the QHD system has been proved in [2,3] without restrictions on the regularity or the size of the initial data. Non-uniqueness results obtained by convex integration methods have been proved in [14].
Moreover, relative entropy methods to study singular limits for the equations (1.4)-(1.6) have been exploited in [12,14,20,16], in particular we mention the incompressible limit in [1] in the quantum case, the quasineutral limit [15] for the constant capillarity case and the vanishing viscosity limit in [12]. Finally, the analysis of the long time behaviour for the isothermal Quantum-Navier-Stokes equations has been performed in [13].
Organization of the paper. The paper is organized as follows. In Section 2 we fix the notations and give the precise definition of weak solutions of (1.1)-(1.2). In Section 3 we recall the formal a priori estimates for solutions of the system (1.1)-(1.2), namely the energy estimate and the BD entropy. Finally, in Section 4 we prove Theorem 2.2.
Preliminaries
Notations.
Given Ω ⊂ R^3, the space of compactly supported smooth functions with values in R^d is denoted by D((0,T) × Ω; R^d). We denote by L^p(Ω) the standard Lebesgue spaces and by ‖·‖_{L^p} their norms. The Sobolev space of L^p functions with k distributional derivatives in L^p is W^{k,p}(Ω), and in the case p = 2 we write H^k(Ω). The spaces W^{−k,p}(Ω) and H^{−k}(Ω) denote the dual spaces of W^{k,p′}(Ω) and H^k(Ω), where p′ is the Hölder conjugate of p. Given a Banach space X, we use the classical Bochner spaces for time dependent functions with values in X, namely L^p(0,T; X), W^{k,p}(0,T; X) and W^{−k,p}(0,T; X); when X = L^p(Ω), the norm of the space L^q(0,T; L^p(Ω)) is denoted by ‖·‖_{L^q_t(L^p_x)}. We denote by Du = (∇u + (∇u)^T)/2 the symmetric part of the gradient and by Au = (∇u − (∇u)^T)/2 the antisymmetric one. Finally, given a matrix M ∈ R^{3×3}, we denote by Symm M its symmetric part and by Asymm M its antisymmetric part. (1) Integrability conditions.
ρ ∈ L^∞(0,T; H^1(T^3)) ∩ L^2(0,T; H^2(T^3)),
√ρ u ∈ L^∞(0,T; L^2(T^3)),
ρ^{γ/2} ∈ L^∞(0,T; L^2(T^3)) ∩ L^2(0,T; H^1(T^3)),
∇√ρ ∈ L^∞(0,T; L^2(T^3)).
(2) Equations.
For any φ ∈ C^∞_c([0,T); C^∞(T^3); R):

∫ ρ_0 φ(0) dx + ∫∫ ( ρ φ_t + √ρ (√ρ u) · ∇φ ) dxdt = 0.
For any fixed l = 1, 2, 3 and ψ ∈ C^∞_c([0,T); C^∞(T^3); R):

∫ ρ_0 u_{0,l} ψ(0) dx + ∫∫ √ρ (√ρ u_l) ψ_t dxdt + ∫∫ √ρ u (√ρ u_l) · ∇ψ dxdt + ν ∫∫ √ρ (√ρ u_l) ∆ψ dxdt + ν ∫∫ √ρ (√ρ u) · ∇∇_l ψ dxdt + 2ν ∫∫ ∇_l √ρ (√ρ u) · ∇ψ dxdt + 2ν ∫∫ (√ρ u_l) ∇√ρ · ∇ψ dxdt − 2 ∫∫ ρ^{γ/2} ∇_l ρ^{γ/2} ψ dxdt − 2κ^2 ∫∫ ∇_l ρ ∆ρ ψ dxdt − 2κ^2 ∫∫ ρ ∆ρ ∇_l ψ dxdt = 0.

(3) Energy inequality. There exists S ∈ L^2((0,T) × T^3) such that √ρ S = Symm(∇(ρu)) − 2∇√ρ ⊗ √ρ u in D′
and Λ such that ρ u = √ρ Λ, satisfying the following energy inequality:

sup_{t∈(0,T)} ∫_{T^3} ( |Λ(t,x)|^2/2 + ρ(t,x)^γ/(γ−1) + κ^2 |∇ρ(t,x)|^2 ) dx + ∫∫ |S(s,x)|^2 dxds ≤ ∫_{T^3} ( ρ_0(x)|u_0(x)|^2/2 + ρ_0(x)^γ/(γ−1) + κ^2 |∇ρ_0(x)|^2 ) dx.

(4) BD entropy. There exists A ∈ L^2((0,T) × T^3) such that √ρ A = Asymm(∇(ρu)) − 2∇√ρ ⊗ √ρ u in D′ and such that

sup_{t∈(0,T)} ∫_{T^3} ( |Λ(t,x) + 2ν∇√ρ(t,x)|^2/2 + ρ(t,x)^γ/(γ−1) + κ^2 |∇ρ(t,x)|^2 ) dx + ∫∫ |A(s,x)|^2 dxds + (8ν/γ) ∫∫ |∇ρ^{γ/2}(s,x)|^2 dxds + 4κ^2 ν ∫∫ |∆ρ(s,x)|^2 dxds ≤ ∫_{T^3} ( |√ρ_0(x) u_0(x) + 2ν∇√ρ_0(x)|^2/2 + ρ_0(x)^γ/(γ−1) + κ^2 |∇ρ_0(x)|^2 ) dx.
In order to state our main result, we first specify the assumptions on the initial data. Let {ρ^0_n}_n be a sequence of smooth and strictly positive functions and ρ^0 a strictly positive function such that

ρ^0_n > 0,  ρ^0_n → ρ^0 strongly in L^1(T^3),
{ρ^0_n}_n is uniformly bounded in L^1 ∩ L^γ(T^3),
{∇ρ^0_n}_n is uniformly bounded in L^2(T^3).  (2.1)
Regarding the initial velocity, let {u^0_n}_n be a sequence of smooth vector fields and u^0 a smooth vector field such that

{√ρ^0_n u^0_n}_n is uniformly bounded in L^2(T^3),
ρ^0_n u^0_n → ρ^0 u^0 in L^p(T^3) with p < 2.  (2.2)
The main theorem of this paper is the following. Remark 2.4. We stress that (2.3) does not imply the convergence of the convective term, which is instead obtained through the truncation arguments.
Remark 2.5. The notion of weak solution in Definition 2.1 is weaker than the one used in the quantum case in [4,5]. Indeed, in [4,5] it can be proved that Λ = √ρ u, because √ρ_n u_n → √ρ u strongly in L^2((0,T) × T^3). As a consequence, the energy inequality and the entire weak formulation can be written only in terms of ρ, Λ, S and A. On the contrary, in the proof of Theorem 2.2 we are not able to prove that Λ = √ρ u, but only that √ρ Λ = ρ u. Indeed, it is not clear whether
√ ρ n u n ⇀ √ ρ u weakly in L 2 ((0, T ) × T 3 ),
since we do not know that Λ = 0 on {ρ = 0}.
A priori estimates
In this section we recall the two formal a priori estimates available for solutions of (1.1)-(1.2). The first is the basic energy estimate for the system (1.1)-(1.2).
sup_{t∈(0,T)} ∫ ( ρ_n|u_n|^2/2 + ρ_n^γ/(γ−1) + κ^2 |∇ρ_n|^2 ) dx + 2ν ∫∫ ρ_n |Du_n|^2 dxdt = ∫ ( ρ^0_n|u^0_n|^2/2 + (ρ^0_n)^γ/(γ−1) + κ^2 |∇ρ^0_n|^2 ) dx.  (3.1)
The second main a priori estimate is the so-called BD entropy. Although this estimate is well-known, see [9], we give a sketch of the proof for completeness.
sup_{t∈(0,T)} ∫ ( ρ_n|w_n|^2/2 + ρ_n^γ/(γ−1) + κ^2 |∇ρ_n|^2 ) dx + (8ν/γ) ∫∫ |∇ρ_n^{γ/2}|^2 dxdt + 2ν ∫∫ ρ_n |Au_n|^2 dxdt + 4κ^2 ν ∫∫ |∆ρ_n|^2 dxdt = ∫ ( ρ^0_n|w^0_n|^2/2 + (ρ^0_n)^γ/(γ−1) + κ^2 |∇ρ^0_n|^2 ) dx.  (3.2)
Proof. We first perform the effective velocity transformation. Let c ∈ R be a constant to be chosen later and consider w_n = u_n + c∇ log ρ_n. Then

∂_t ρ_n + div(ρ_n w_n) = ∂_t ρ_n + div(ρ_n(u_n + c∇ log ρ_n)) = c∆ρ_n.
We recall the following elementary identities:

c(ρ_n ∇ log ρ_n)_t = −c∇ div(ρ_n u_n),
c div(ρ_n u_n ⊗ ∇ log ρ_n + ρ_n ∇ log ρ_n ⊗ u_n) = c∆(ρ_n u_n) − 2c div(ρ_n Du_n) + c∇ div(ρ_n u_n),
c^2 div(ρ_n ∇ log ρ_n ⊗ ∇ log ρ_n) = c^2 ∆(ρ_n ∇ log ρ_n) − c^2 div(ρ_n ∇^2 log ρ_n).

By using these identities it is easy to prove that

∂_t(ρ_n w_n) + div(ρ_n w_n ⊗ w_n) + ∇ρ_n^γ − c∆(ρ_n w_n) = 2(ν−c) div(ρ_n Dw_n) − (c^2 + 2(ν−c)c) div(ρ_n ∇^2 log ρ_n) + 2κ^2 ρ_n ∇∆ρ_n.
Then, by choosing c = 2ν we obtain the following system:

∂_t ρ_n + div(ρ_n w_n) = 2ν∆ρ_n,  (3.3)
∂_t(ρ_n w_n) + div(ρ_n w_n ⊗ w_n) + ∇ρ_n^γ − 2ν∆(ρ_n w_n) + 2ν div(ρ_n Dw_n) = 2κ^2 ρ_n ∇∆ρ_n.  (3.4)
The BD entropy (3.2) is nothing else than the energy estimate associated with the system (3.3)-(3.4). By multiplying (3.4) by w_n, integrating in space and using (3.3), we get

(d/dt) ∫ ρ_n |w_n|^2/2 dx + ∫ ∇ρ_n^γ · w_n dx + 2ν ∫ ρ_n |Au_n|^2 dx − 2κ^2 ∫ ρ_n ∇∆ρ_n · w_n dx = 0.  (3.5)-(3.6)

Finally, by multiplying (3.3) by −2κ^2 ∆ρ_n we have

(d/dt) ∫ κ^2 |∇ρ_n|^2 dx + 4νκ^2 ∫ |∆ρ_n|^2 dx − 2κ^2 ∫ div(ρ_n w_n) ∆ρ_n dx = 0.  (3.7)
By summing up (3.5), (3.6) and (3.7) and integrating by parts we get (3.2).
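The algebra behind (3.3) — that the effective velocity w_n = u_n + 2ν∇ log ρ_n turns the continuity equation into a drift-diffusion equation — can be verified symbolically. The sketch below is a one-dimensional sympy check (illustrative only, not part of the proof):

```python
import sympy as sp

x, t, nu = sp.symbols('x t nu')
rho = sp.Function('rho')(x, t)
u = sp.Function('u')(x, t)

# effective velocity: w = u + 2*nu * d/dx (log rho)
w = u + 2 * nu * sp.diff(sp.log(rho), x)

# substitute the continuity equation rho_t = -(rho*u)_x
rho_t = -sp.diff(rho * u, x)

# rho_t + div(rho*w) should equal 2*nu*rho_xx, i.e. equation (3.3)
lhs = rho_t + sp.diff(rho * w, x)
assert sp.simplify(lhs - 2 * nu * sp.diff(rho, x, 2)) == 0
```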
Compactness
In this Section we are going to prove the main result of our paper.
4.1. Bounds independent of n.
First of all we collect the a priori bounds that we can deduce from Propositions 3.1 and 3.2. By the energy estimate in Proposition 3.1 and the assumptions (2.1), (2.2) we have the following uniform bounds:

‖√ρ_n u_n‖_{L^∞_t L^2_x} ≤ C,  ‖∇ρ_n‖_{L^∞_t L^2_x} ≤ C,  ‖ρ_n‖_{L^∞_t(L^1_x ∩ L^γ_x)} ≤ C,  ‖√ρ_n Du_n‖_{L^2_{t,x}} ≤ C.  (4.1)
The uniform bounds obtained by the BD entropy, Proposition 3.2, are the following:

‖√ρ_n w_n‖_{L^∞_t L^2_x} ≤ C,  ‖√ρ_n Au_n‖_{L^2_{t,x}} ≤ C,  ‖∇ρ_n^{γ/2}‖_{L^2_{t,x}} ≤ C,  ‖∆ρ_n‖_{L^2_{t,x}} ≤ C.  (4.2)

Combining some of the bounds in (4.1) and in (4.2) we obtain

‖∇√ρ_n‖_{L^∞_t L^2_x} ≤ C,  ‖√ρ_n ∇u_n‖_{L^2_{t,x}} ≤ C.  (4.3)
Of course, additional bounds can be easily obtained by interpolation and Sobolev embeddings.
Here we list only the ones that will be used in the sequel. By Sobolev embeddings and interpolation inequalities we get

‖ρ_n‖_{L^2_t L^∞_x} ≤ C,  ‖∇ρ_n‖_{L^{10/3}_{t,x}} ≤ C,  ‖ρ_n^{γ/2}‖_{L^{10/3}_{t,x}} ≤ C.  (4.4)
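The L^{10/3} bounds in (4.4) follow from the standard parabolic interpolation L^∞_t L^2_x ∩ L^2_t L^6_x ↪ L^{10/3}_{t,x}, applied for instance to ∇ρ_n (which lies in L^2_t L^6_x thanks to ∆ρ_n ∈ L^2_{t,x} and the three dimensional Sobolev embedding H^1 ↪ L^6). The Hölder exponent bookkeeping behind this embedding can be checked mechanically; the snippet below is an illustrative verification only:

```python
from fractions import Fraction as F

# Split |f|^(10/3) = |f|^a * |f|^b with a = 4/3 and b = 2,
# then apply Holder's inequality in the space variable.
a, b = F(4, 3), F(2)
assert a + b == F(10, 3)

# Holder exponents: |f|^a is estimated in L^(2/a) = L^(3/2),
# |f|^b in L^(6/b) = L^3, and these are conjugate: 2/3 + 1/3 = 1.
p, q = F(2) / a, F(6) / b
assert 1 / p + 1 / q == 1

# In time: ||f||_{L^2_x}^(4/3) is in L^inf_t and ||f||_{L^6_x}^2 is in
# L^1_t, so the space-time integral of |f|^(10/3) is finite.
```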
By using (4.1), (4.2), (4.4) and the Hölder inequality we have

‖ρ_n u_n‖_{L^2_{t,x}} ≤ C,  ‖∇(ρ_n u_n)‖_{L^2_t(L^1_x)} ≤ C.  (4.5)
Finally, by using the continuity equation (1.1) we have that

‖∂_t ρ_n‖_{L^2_t L^1_x} ≤ C.  (4.6)
4.2. Convergence Lemma. By using the above uniform bounds we are now able to prove the following convergences.

(1) Up to subsequences there exist ρ, m, S, A and Λ such that

ρ_n → ρ strongly in L^2(0,T; H^1(T^3)),  (4.7)
ρ_n u_n → m strongly in L^p(0,T; L^p(T^3)) with p ∈ [1,2),  (4.8)
√ρ_n D(u_n) ⇀ S weakly in L^2((0,T) × T^3),  (4.9)
√ρ_n A(u_n) ⇀ A weakly in L^2((0,T) × T^3),  (4.10)
√ρ_n u_n ⇀* Λ weakly* in L^∞(0,T; L^2(T^3)).  (4.11)
Moreover, Λ is such that √ ρΛ = m.
(2) The following additional convergences hold for the density. Then, since {ρ_n}_n is uniformly bounded in L^2(0,T; H^2(T^3)), by using the Aubin-Lions Lemma we get (4.7). Next, by using the momentum equations and the bounds (4.1)-(4.4), it is easy to prove that {∂_t(ρ_n u_n)}_n is uniformly bounded in L^2(0,T; W^{−2,3/2}(T^3)). Then, by using (4.4), (4.5) and the Aubin-Lions Lemma, (4.8) follows. The convergences (4.9), (4.10) and (4.11) follow by standard weak compactness theorems, and the equality √ρ Λ = m follows easily from (4.7) and (4.11). Next, the convergences (4.12), (4.13) follow from the uniform bounds (4.1)-(4.3) and standard weak compactness arguments. Finally, the convergence (4.14) is easily obtained by using (4.7) and the bound (4.3), and the convergence (4.15) follows from (4.2) and (4.7).
∇√ρ_n ⇀ ∇√ρ weakly in L^2((0,T) × T^3),  (4.12)
Lemma 4.2.
Let f ∈ C(R^3; R) ∩ L^∞(R^3; R), let (ρ_n, u_n) be a solution of (1.1)-(1.2), and define u as follows:

u(t,x) = m(t,x)/ρ(t,x) = Λ(t,x)/√ρ(t,x) if (t,x) ∈ {ρ > 0},  u(t,x) = 0 if (t,x) ∈ {ρ = 0}.  (4.16)
Then, the following convergences hold.
ρ_n f(u_n) → ρ f(u) strongly in L^p((0,T) × T^3) for any p < 6,  (4.17)
∇ρ_n f(u_n) → ∇ρ f(u) strongly in L^p((0,T) × T^3) for any p < 10/3,  (4.18)
ρ_n u_n f(u_n) → ρ u f(u) strongly in L^p((0,T) × T^3) for any p < 2,  (4.19)
ρ_n^{γ/2} f(u_n) → ρ^{γ/2} f(u) strongly in L^p((0,T) × T^3) for any p < 10/3.  (4.20)
Proof. We first note that, up to a subsequence not relabelled, (4.7) and (4.8) imply that

ρ_n → ρ a.e. in (0,T) × T^3,  ρ_n u_n → m a.e. in (0,T) × T^3,
∇ρ_n → ∇ρ a.e. in (0,T) × T^3.  (4.21)

supp β̂ ⊂ (−2,2) and 0 ≤ β̂ ≤ 1. Given β̂, we define β : R → R as follows:
β(z) = ∫_0^z β̂(s) ds.
For y ∈ R^3 and any δ > 0 we define the functions

β^1_δ(y) := (1/δ) β(δy_1) β̂(δy_2) β̂(δy_3),
β^2_δ(y) := (1/δ) β̂(δy_1) β(δy_2) β̂(δy_3),
β^3_δ(y) := (1/δ) β̂(δy_1) β̂(δy_2) β(δy_3).
Note that for fixed l = 1, 2, 3 the function β^l_δ : R^3 → R is a truncation of the function f(y) = y_l. Finally, for any δ > 0 we define β̂_δ : R^3 → R as

β̂_δ(y) := β̂(δy_1) β̂(δy_2) β̂(δy_3),

and for any λ > 0 we define β̂_λ : R → R as β̂_λ(s) = β̂(λs).
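The scalings of these truncations can also be checked numerically. In the sketch below the particular bump β̂ is our own illustrative choice (any even bump with 0 ≤ β̂ ≤ 1 and support in (−2,2) would do); the check confirms the boundedness of β, which drives the C/δ bound on β^l_δ, and the C/√λ bound used later for √ρ_n β̂_λ(ρ_n):

```python
import numpy as np

def beta_hat(s):
    # One admissible bump (our choice, not fixed by the paper):
    # even, 0 <= beta_hat <= 1, beta_hat(0) = 1, supported in (-2, 2).
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    inside = np.abs(s) < 2
    out[inside] = np.exp(1.0 - 4.0 / (4.0 - s[inside] ** 2))
    return out

# beta(z) = int_0^z beta_hat(s) ds is odd and bounded by int_0^2 beta_hat <= 2,
# which gives ||beta^l_delta||_inf <= C/delta after the 1/delta scaling.
grid = np.linspace(0.0, 2.0, 20001)
vals = beta_hat(grid)
beta_sup = 0.5 * ((vals[1:] + vals[:-1]) * np.diff(grid)).sum()
assert 0.0 < beta_sup < 2.0

# sqrt(s) * beta_hat(lambda*s) <= sqrt(2/lambda), since the rescaled bump
# is supported in {s < 2/lambda}: the C/sqrt(lambda) bound of (4.24).
for lam in (0.1, 0.01):
    s = np.linspace(0.0, 2.0 / lam, 50001)
    peak = (np.sqrt(s) * beta_hat(lam * s)).max()
    assert peak <= np.sqrt(2.0 / lam) + 1e-9
```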
In the next lemma we collect some of the main properties of β^l_δ, β̂_δ and β̂_λ. These properties are elementary and can be deduced directly from the definitions.

(1) For any δ > 0 and l = 1, 2, 3:

‖β^l_δ‖_{L^∞} ≤ C/δ,  ‖∇β^l_δ‖_{L^∞} ≤ C,  ‖∇^2 β^l_δ‖_{L^∞} ≤ Cδ.  (4.23)

(2) For any λ > 0:

‖β̂_λ‖_{L^∞} ≤ 1,  ‖β̂′_λ‖_{L^∞} ≤ Cλ,  |s|^{1/2} β̂_λ(s) ≤ C/λ^{1/2}.  (4.24)

(3) For any δ > 0:

‖β̂_δ‖_{L^∞} ≤ 1,  ‖∇β̂_δ‖_{L^∞} ≤ Cδ,  |y| |β̂_δ(y)| ≤ C/δ.  (4.25)

(4)
The following convergences hold for l = 1, 2, 3, pointwise on R^3, as δ → 0:

β^l_δ(y) → y_l,  (∇_y β^l_δ)(y) → ∇_y y_l,  β̂_δ(y) → 1.  (4.26)

By using (4.7), (4.8) and (2.1) it is straightforward to prove that

∫ ρ^0_n φ(0,x) dx + ∫∫ ρ_n φ_t dxdt + ∫∫ ρ_n u_n · ∇φ dxdt

converges to

∫ ρ^0 φ(0,x) dx + ∫∫ ρ φ_t dxdt + ∫∫ ρ u · ∇φ dxdt,

for any φ ∈ C^∞_c([0,T) × T^3)
. Let us consider the momentum equations. Let l ∈ {1, 2, 3} be fixed. By multiplying (1.2) by ∇_y β^l_δ(u_n) and using the continuity equation (1.1) we have that

∂_t(ρ_n β^l_δ(u_n)) + div(ρ_n u_n β^l_δ(u_n)) − 2ν div(ρ_n D(u_n)) · ∇_y β^l_δ(u_n) + ∇ρ_n^γ · ∇_y β^l_δ(u_n) − 2κ^2 ρ_n ∇∆ρ_n · ∇_y β^l_δ(u_n) = 0.  (4.28)
Let ψ ∈ C^∞_c([0,T) × T^3; R). By multiplying (4.28) by β̂_λ(ρ_n)ψ and integrating by parts we get

∫ ρ^0_n β^l_δ(u^0_n) β̂_λ(ρ^0_n) ψ(0,x) dx + ∫∫ ρ_n β^l_δ(u_n) β̂_λ(ρ_n) ∂_t ψ dxdt − ∫∫ ρ_n u_n β^l_δ(u_n) β̂_λ(ρ_n) · ∇ψ dxdt − 2ν ∫∫ √ρ_n Du_n : √ρ_n ∇_y β^l_δ(u_n) β̂_λ(ρ_n) ⊗ ∇ψ dxdt − 2 ∫∫ ρ_n^{γ/2} ∇ρ_n^{γ/2} · ∇_y β^l_δ(u_n) β̂_λ(ρ_n) ψ dxdt − 2κ^2 ∫∫ ∇ρ_n ∆ρ_n · ∇_y β^l_δ(u_n) β̂_λ(ρ_n) ψ dxdt − 2κ^2 ∫∫ ρ_n ∆ρ_n ∇_y β^l_δ(u_n) β̂_λ(ρ_n) · ∇ψ dxdt + ∫∫ R^{δ,λ}_n ψ dxdt = 0,  (4.29)

where the remainder is

R^{δ,λ}_n = Σ_{i=1}^{6} R^{δ,λ}_{n,i} = ρ_n β^l_δ(u_n) β̂′_λ(ρ_n) ∂_t ρ_n + ρ_n u_n β^l_δ(u_n) β̂′_λ(ρ_n) · ∇ρ_n − 2ν √ρ_n Du_n : √ρ_n ∇_y β^l_δ(u_n) ⊗ ∇ρ_n β̂′_λ(ρ_n) + 2κ^2 ρ_n ∆ρ_n ∇^2_y β^l_δ(u_n) : ∇u_n β̂_λ(ρ_n) + 2κ^2 ρ_n ∆ρ_n ∇_y β^l_δ(u_n) β̂′_λ(ρ_n) · ∇ρ_n − 2ν ρ_n Du_n ∇^2_y β^l_δ(u_n) ∇u_n β̂_λ(ρ_n).  (4.30)
We first perform the limit as n goes to ∞ for δ and λ fixed. Notice that, since β̂_λ ∈ L^∞(R) and {ρ_n}_n converges almost everywhere, we have that

β̂_λ(ρ_n) → β̂_λ(ρ) strongly in L^q((0,T) × T^3) for any q < ∞.  (4.31)

By using (4.17) with p = 2 and choosing q = 2 in (4.31) we have that

∫∫ ρ_n β^l_δ(u_n) β̂_λ(ρ_n) ∂_t ψ dxdt → ∫∫ ρ β^l_δ(u) β̂_λ(ρ) ∂_t ψ dxdt.
Next, by (4.19) with p = 3/2 and choosing q = 3 in (4.31) we get

∫∫ ρ_n u_n β^l_δ(u_n) β̂_λ(ρ_n) · ∇ψ dxdt → ∫∫ ρ u β^l_δ(u) β̂_λ(ρ) · ∇ψ dxdt.

By using (4.9), (4.17) with p = 4 and (4.31) with q = 4 it follows that

∫∫ √ρ_n Du_n : √ρ_n ∇_y β^l_δ(u_n) β̂_λ(ρ_n) ⊗ ∇ψ dxdt → ∫∫ √ρ S : ∇_y β^l_δ(u) β̂_λ(ρ) ⊗ ∇ψ dxdt.

By using (4.15), (4.20) with p = 3 and (4.31) with q = 6 it follows that

∫∫ ρ_n^{γ/2} ∇ρ_n^{γ/2} · ∇_y β^l_δ(u_n) β̂_λ(ρ_n) ψ dxdt → ∫∫ ρ^{γ/2} ∇ρ^{γ/2} · ∇_y β^l_δ(u) β̂_λ(ρ) ψ dxdt.

By using (4.13), (4.18) with p = 3 and (4.31) with q = 6 it follows that

∫∫ ∇ρ_n ∆ρ_n · ∇_y β^l_δ(u_n) β̂_λ(ρ_n) ψ dxdt → ∫∫ ∇ρ ∆ρ · ∇_y β^l_δ(u) β̂_λ(ρ) ψ dxdt.

Next, by using (4.13), (4.17) with p = 3 and (4.31) with q = 6 it follows that

∫∫ ρ_n ∆ρ_n ∇_y β^l_δ(u_n) β̂_λ(ρ_n) · ∇ψ dxdt → ∫∫ ρ ∆ρ ∇_y β^l_δ(u) β̂_λ(ρ) · ∇ψ dxdt.

Finally, by using (2.1) the convergence of the term involving the initial data can be easily proved. It remains to study the remainder R^{δ,λ}_n. We claim that there exists a C > 0 independent of n, δ and λ such that
‖R^{δ,λ}_n‖_{L^1_{t,x}} ≤ C ( δ/√λ + λ/δ + λ + δ ).  (4.32)
In order to prove (4.32) we estimate all the terms in (4.30) separately. By using (4.4), (4.6), (4.23) and (4.24) we have

‖R^{δ,λ}_{n,1}‖_{L^1_{t,x}} ≤ ‖ρ_n‖_{L^2(L^∞)} ‖∂_t ρ_n‖_{L^2(L^1)} ‖β^l_δ(u_n)‖_{L^∞_{t,x}} ‖β̂′_λ(ρ_n)‖_{L^∞_{t,x}} ≤ C λ/δ.

By using (4.1), (4.4), (4.23) and (4.24) it holds that

‖R^{δ,λ}_{n,2}‖_{L^1_{t,x}} ≤ ‖ρ_n u_n‖_{L^2_{t,x}} ‖∇ρ_n‖_{L^2_{t,x}} ‖β^l_δ(u_n)‖_{L^∞_{t,x}} ‖β̂′_λ(ρ_n)‖_{L^∞_{t,x}} ≤ C λ/δ.

By using (4.1), (4.4), (4.23) and (4.24) we get

‖R^{δ,λ}_{n,3}‖_{L^1_{t,x}} ≤ ‖ρ_n‖_{L^2(L^∞)} ‖√ρ_n Du_n‖_{L^2_{t,x}} ‖∇ρ_n‖_{L^∞(L^2)} ‖∇_y β^l_δ(u_n)‖_{L^∞_{t,x}} ‖β̂′_λ(ρ_n)‖_{L^∞_{t,x}} ≤ Cλ.

By using (4.1), (4.2), (4.23) and (4.24) we have that

‖R^{δ,λ}_{n,4}‖_{L^1_{t,x}} ≤ ‖∆ρ_n‖_{L^2_{t,x}} ‖√ρ_n Du_n‖_{L^2_{t,x}} ‖∇^2_y β^l_δ(u_n)‖_{L^∞_{t,x}} ‖√ρ_n β̂_λ(ρ_n)‖_{L^∞_{t,x}} ≤ C δ/√λ.

Similarly,

‖R^{δ,λ}_{n,5}‖_{L^1_{t,x}} ≤ ‖ρ_n‖_{L^2(L^∞)} ‖∆ρ_n‖_{L^2_{t,x}} ‖∇ρ_n‖_{L^∞(L^2)} ‖∇_y β^l_δ(u_n)‖_{L^∞_{t,x}} ‖β̂′_λ(ρ_n)‖_{L^∞_{t,x}} ≤ Cλ.

Finally, by using (4.1), (4.23) and (4.24) we have

‖R^{δ,λ}_{n,6}‖_{L^1_{t,x}} ≤ ‖√ρ_n ∇u_n‖^2_{L^2_{t,x}} ‖∇^2_y β^l_δ(u_n)‖_{L^∞_{t,x}} ‖β̂_λ(ρ_n)‖_{L^∞_{t,x}} ≤ Cδ.
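Summing the six bounds above yields the claim (4.32). As a purely illustrative sanity check on the exponent bookkeeping — the coupling λ = δ^{4/3} below is our own assumption, while the paper passes to the limit via (4.26), (4.27) and dominated convergence — every rate appearing in (4.32) vanishes along this coupling as δ → 0:

```python
import sympy as sp

delta = sp.symbols('delta', positive=True)
lam = delta ** sp.Rational(4, 3)   # illustrative coupling: lambda = delta^(4/3)

# the four rates appearing on the right-hand side of (4.32)
rates = [delta / sp.sqrt(lam),     # = delta^(1/3)
         lam / delta,              # = delta^(1/3)
         lam,                      # = delta^(4/3)
         delta]

for r in rates:
    # each rate tends to 0 as delta -> 0+
    assert sp.limit(r, delta, 0, '+') == 0
```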
where the remainder is

R^δ_{n,j} = ρ_n u^l_n ∇_{y_k} β̂_δ(u_n) ∇_j u^k_n + ρ_n u^j_n ∇_{y_k} β̂_δ(u_n) ∇_l u^k_n.  (4.36)
For fixed δ, by using the convergence (4.9) and (4.17) with p = 4, we have that

2 ∫∫ √ρ_n β̂_δ(u_n) √ρ_n (D(u_n))_{l,j} ∂_j φ dxdt → 2 ∫∫ √ρ S_{l,j} β̂_δ(u) ∂_j φ dxdt.
Next, we have that

∫∫ ρ_n u^l_n β̂_δ(u_n) ∆φ dxdt → ∫∫ ρ u^l β̂_δ(u) ∆φ dxdt,
∫∫ ρ_n u^j_n β̂_δ(u_n) ∇^2_{j,l} φ dxdt → ∫∫ ρ u^j β̂_δ(u) ∇^2_{j,l} φ dxdt,

because of (4.19) with p = 1. By using (4.25), (4.17) with p = 2 and the weak convergence of ∇√ρ_n in L^2_{t,x} we get

∫∫ ∇_l √ρ_n (√ρ_n u_n) β̂_δ(u_n) · ∇φ dxdt → ∫∫ ∇_l √ρ (√ρ u) β̂_δ(u) · ∇φ dxdt,
∫∫ ∇√ρ_n (√ρ_n u^l_n) β̂_δ(u_n) · ∇φ dxdt → ∫∫ ∇√ρ (√ρ u^l) β̂_δ(u) · ∇φ dxdt.
Finally, by using (4.1), (4.2) and (4.25) we have that

‖R^δ_n‖_{L^1_{t,x}} ≤ C ‖√ρ_n u_n‖_{L^∞(L^2_x)} ‖√ρ_n D(u_n)‖_{L^2_{t,x}} ‖∇_y β̂_δ(u_n)‖_{L^∞_{t,x}} ≤ Cδ,

and then there exists a measure μ̂_δ such that

∫∫ R^δ_n · ∇φ dxdt → ⟨μ̂_δ, ∇φ⟩,  (4.37)

and its total variation satisfies |μ̂_δ|(T^3) ≤ Cδ.
Collecting the previous convergences, we have

2 ∫∫ √ρ S_{l,j} β̂_δ(u) ∇_j φ dxdt = − ∫∫ ρ u^l β̂_δ(u) ∆φ dxdt − ∫∫ ρ u^j β̂_δ(u) ∇^2_{j,l} φ dxdt − 2 ∫∫ ∇_l √ρ (√ρ u) β̂_δ(u) · ∇φ dxdt − 2 ∫∫ ∇√ρ (√ρ u^l) β̂_δ(u) · ∇φ dxdt − ⟨μ̂_δ, ∇φ⟩.
Finally, by using (4.26), the Dominated Convergence Theorem and (4.37), we get that

2 ∫∫ √ρ S_{l,j} ∇_j φ dxdt = − ∫∫ ρ u^l ∆φ dxdt − ∫∫ ρ u^j ∇^2_{j,l} φ dxdt − 2 ∫∫ ∇_l √ρ (√ρ u) · ∇φ dxdt − 2 ∫∫ ∇√ρ (√ρ u^l) · ∇φ dxdt.
By the very same arguments we identify also the tensor A. Finally, the energy inequality and the BD Entropy follow from the lower semicontinuity of the norms.
2.2. Definition of weak solutions and statement of the main result. The definition of weak solution for the system (1.1)-(1.2) is the following.

Definition 2.1. A pair (ρ, u) with ρ ≥ 0 is said to be a weak solution of the Cauchy problem (1.1)-(1.2)-(1.3) if the following conditions are satisfied.
Theorem 2.2. Assume {ρ^0_n}_n and {ρ^0_n u^0_n}_n are sequences of initial data for (1.1)-(1.2) satisfying (2.1) and (2.2). Let {(ρ_n, u_n)}_n with ρ_n > 0 be a sequence of smooth solutions of (1.1)-(1.2) with initial data {ρ^0_n}_n and {ρ^0_n u^0_n}_n. Then, up to subsequences not relabelled, there exists (ρ, u) such that

ρ_n → ρ strongly in L^2((0,T); H^1(T^3)),
ρ_n u_n → ρ u strongly in L^p((0,T) × T^3) for any p < 2,  (2.3)

and (ρ, u) is a weak solution of (1.1)-(1.2)-(1.3) in the sense of Definition 2.1.

Remark 2.3. We stress that the velocity field is not uniquely defined on the vacuum region {ρ = 0}.
Proposition 3.1. Let (ρ_n, u_n) be a smooth solution of (1.1)-(1.2). Then the energy identity (3.1) holds, with sup_{t∈(0,T)} taken in front of the first integral.
Proposition 3.2. Let (ρ_n, u_n) be a smooth solution of (1.1)-(1.2). Then w_n = u_n + 2ν∇ log ρ_n and ρ_n satisfy the BD entropy identity (3.2), with sup_{t∈(0,T)} taken in front of the first integral.
Lemma 4.1. Let {(ρ_n, u_n)}_n be a sequence of solutions of (1.1)-(1.2).
∆ρ_n ⇀ ∆ρ weakly in L^2((0,T) × T^3),  (4.13)
ρ_n^γ → ρ^γ strongly in L^1((0,T) × T^3),  (4.14)
∇ρ_n^{γ/2} ⇀ ∇ρ^{γ/2} weakly in L^2((0,T) × T^3).  (4.15)

Proof. By using (1.1) and (4.4), we have that {∂_t ρ_n}_n is uniformly bounded in L^2(0,T; H^{−1}(T^3)).
that m = 0 on {ρ = 0} and √ρ u ∈ L^∞(0,T; L^2(T^3)). Moreover, m = ρ u = √ρ Λ. Let us prove (4.17). On {ρ > 0}, by using (4.21) we have that ρ_n f(u_n) → ρ f(u) a.e. in {ρ > 0}. On the other hand, since f ∈ L^∞(R^3; R) we have |ρ_n f(u_n)| ≤ |ρ_n| ‖f‖_∞ → 0 a.e. in {ρ = 0}. Then ρ_n f(u_n) → ρ f(u) a.e. in (0,T) × T^3, and the convergence in (4.17) follows by the uniform bound ‖ρ_n‖_{L^6_{t,x}} ≤ C. Regarding (4.18), from Lemma 4.1 we have that ρ is a Sobolev function, hence ∇ρ = 0 a.e. in {ρ = 0}. From (4.21) we have that ∇ρ_n f(u_n) → ∇ρ f(u) a.e. in {ρ > 0} and |∇ρ_n f(u_n)| ≤ |∇ρ_n| ‖f‖_∞ → 0 a.e. in {ρ = 0}. Then ∇ρ_n f(u_n) → ∇ρ f(u) a.e. in (0,T) × T^3, and (4.18) follows from the uniform bound (4.4). Concerning (4.19), again (4.21) implies the convergences ρ_n u_n f(u_n) → m f(u) a.e. in {ρ > 0} and |ρ_n u_n f(u_n)| ≤ |ρ_n u_n| ‖f‖_∞ → 0 a.e. in {ρ = 0}, which, together with (4.4), imply (4.19). Finally, (4.20) follows by the same arguments used to prove (4.17) and the uniform bounds on the pressure in (4.1) and (4.2).

4.3. The Truncations. Let β̂ : R → R be an even, positive, compactly supported function with
Lemma 4.3. Let λ, δ > 0 and K := ‖β‖ W 2,∞ . Then, there exists C = C(K) such that the following bounds hold.
(5). The following convergence holds pointwise on R as λ → 0: β λ (s) → . . .

Proof of the main Theorem. We are now ready to prove Theorem 2.2.

Proof of Theorem 2.2. Let (ρ n , u n ) be a solution of (1.1)-(1.2). By Lemma 4.1 there exist ρ, m, Λ such that the convergences (4.7), (4.8) and (4.11) hold. Moreover, by defining the velocity u as in Lemma 4.2 we have that √ρ u ∈ L ∞ (0, T ; L 2 (T 3 )), m = √ρ Λ = ρ u.
Then, (4.32) is proved and, when n goes to infinity, we have that (ρ, u) satisfies the following integral equality [...] where µ δ,λ is a measure such that R δ,λ n → µ δ,λ in M(T 3 ; R) and its total variation satisfies [...] and by (4.26), (4.27) and the Lebesgue Dominated Convergence Theorem we have that (4.33) converges to [...] Next, we need to identify the tensor S. Let φ ∈ C ∞ c ([0, T ) × T 3 ; R) and l = 1, 2, 3 be fixed. Then the following equality holds: 2 β δ (u n )ρ n (D(u n )) l,j ∇ j φ dxdt = ∇(ρ n u l n )β δ (u n )∇φ dxdt. By integrating by parts we get
(P. Antonelli) GSSI - Gran Sasso Science Institute, Viale Francesco Crispi 7, 67100, L'Aquila, Italy. E-mail address: [email protected]

(S. Spirito) DISIM - Dipartimento di Ingegneria e Science dell'Informazione e Matematica, Via Vetoio, 67100, L'Aquila, Italy. E-mail address: [email protected]
Stochastic Langevin Differential Inclusions with Applications to Machine Learning
January 4, 2023
Fabio V Difonzo
Vyacheslav Kungurtsev
Jakub Mareček
Stochastic differential equations of Langevin-diffusion form have received significant attention, thanks to their foundational role in both Bayesian sampling algorithms and optimization in machine learning. In the latter, they serve as a conceptual model of the stochastic gradient flow in training over-parametrized models. However, the literature typically assumes smoothness of the potential, whose gradient is the drift term. Nevertheless, there are many problems, for which the potential function is not continuously differentiable, and hence the drift is not Lipschitz continuous everywhere. This is exemplified by robust losses and Rectified Linear Units in regression problems. In this paper, we show some foundational results regarding the flow and asymptotic properties of Langevin-type Stochastic Differential Inclusions under assumptions appropriate to the machine-learning settings. In particular, we show strong existence of the solution, as well as asymptotic minimization of the canonical free-energy functional.
Introduction
In this paper, we study the following stochastic differential inclusion,
dX t ∈ −F (X t )dt + √ 2σ dB t(1)
wherein F (x) : R n ⇒ R n is a set-valued map. We are particularly interested in the case where F (x) is the Clarke subdifferential of some continuous tame function f (x). This is motivated by the recent interest in studying Langevintype diffusions in the context of machine learning applications, both as a scheme for sampling in a Bayesian framework (as spurred by the seminal work [Welling and Teh, 2011]) and as a model of the trajectory of stochastic gradient descent, with a view towards understanding the asymptotic properties of training deep neural networks [Hu et al., 2017]. It is typically assumed that F (x) above is Lipschitz, and as such its potential f (x) is continuously differentiable. In many problems of relevance, including empirical risk minimization with a robust (e.g., l1 or Huber) loss and neural networks with ReLU activations, this is not the case, and yet there is at least partial empirical evidence suggesting the long-run behavior of a numerically similar operation is similar in its capacity to generate a stochastic process which minimizes a Free Energy associated with the learning problem. This paper is organized as follows. In Section 2 we study the functional analytical properties of F (x) as it appears in (1) when it represents a noisy estimate of a subgradient element of an empirical loss function that itself satisfies the conditions of a definable potential, especially as it appears in the context of deep learning applications. Note that the research program undertaken relates to a recent conjecture of Bolte and Pauwels [Bolte and Pauwels, 2021, Remark 12] which suggests the strong convergence of iterates in a stochastic subgradient type sequence to stationary points for this class of potentials. Subsequently, in Section 3 we prove that there exists a strong solution to (1), confirming the existence of a trajectory in the general case. 
In this sense, we extend the work of [Leobacher and Szölgyenyi, 2017, Leobacher and Steinicke, 2021] studying diffusions with discontinuous drift to set-valued drift. Next in Section 4 we prove the correspondence of a Fokker-Planck type equation to modeling the probability law associated with this stochastic process, and show that it asymptotically minimizes a free-energy functional corresponding to the loss function of interest, extending the seminal work of [Jordan et al., 1998] which had proven the same result in the case of continuous F (x). We present some numerical results confirming the expected asymptotic behavior of (1) in Section 5 and summarize our findings and their implications in Section 6.
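As a concrete illustration of the dynamics (1) with a nonsmooth potential, the following sketch (not taken from the paper; the choice f (x) = |x|, the particular selection of the Clarke subdifferential, the step size, and the random seed are all illustrative assumptions) runs an Euler-Maruyama discretization of the inclusion:

```python
import math
import random

def subgrad_abs(x):
    # A measurable selection of the Clarke subdifferential of f(x) = |x|:
    # {-1} for x < 0, {+1} for x > 0, and we pick 0 from [-1, 1] at x = 0.
    if x > 0.0:
        return 1.0
    if x < 0.0:
        return -1.0
    return 0.0

def euler_maruyama(x0, sigma, dt, n_steps, rng):
    # X_{k+1} = X_k - s(X_k) dt + sqrt(2 sigma dt) N(0, 1), an explicit
    # Euler-Maruyama step for dX_t in -∂f(X_t) dt + sqrt(2 sigma) dB_t.
    x = x0
    path = [x]
    for _ in range(n_steps):
        x = x - subgrad_abs(x) * dt + math.sqrt(2.0 * sigma * dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

rng = random.Random(0)
path = euler_maruyama(x0=3.0, sigma=0.5, dt=0.01, n_steps=200_000, rng=rng)
# Discard a burn-in and look at the empirical distribution; the Gibbs law
# exp(-|x|/sigma)/Z is Laplace with scale sigma = 0.5, concentrated near 0.
tail = path[50_000:]
frac_near_origin = sum(1 for x in tail if abs(x) < 2.0) / len(tail)
```

For the Laplace stationary law with scale σ = 0.5, P(|x| < 2) ≈ 1 − e^{−4} ≈ 0.98, so the empirical fraction above should be close to one after burn-in.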
Related Work
Stochastic differential inclusions are the topic of the monograph [Kisielewicz, 2013], which presents the associated background of stochastic differential equations (SDEs) as well as set-valued analysis and differential inclusions, providing a notion of a weak and strong solution to equations of the form (1), and ones with more general expressions, especially with respect to the noise term. In this work, and others studying stochastic differential inclusions in the literature, such as, e.g., [Kisielewicz, 2009, Kisielewicz, 2020], it is assumed that the set-valued drift term F (x) is Lipschitz continuous. In this paper we are interested in the more general case, where F (x) may not be Lipschitz.
Langevin diffusions have had three distinct significant periods of development. To begin with, the original elegant correspondence of SDEs with semi-groups associated with parabolic differential operators was explored in depth by [Stroock and Varadhan, 2007] (first edition published in the 1970s), following the seminal paper of [Itô, 1953]. See also [Kent, 1978].
Later, they served as a canonical continuous Markov process during the height of activity on ergodicity theory, the long-run behavior of stochastic processes, associated with the famous monograph of [Meyn and Tweedie, 2012], whose first edition appeared in the 1990s. See, e.g., [Meyn and Tweedie, 1993].
Most recently, the Langevin diffusion has been a model to study the distributional dynamics of noisy training of contemporary machine learning models [Hu et al., 2017]. At this point, we have the works closest to ours. In Deep Neural Networks (DNNs), Rectified Linear Units (ReLUs), which involve a component-wise maximum of zero and a linear expression, are standard functional components in predictors appearing in both regression and classification models. Mean-field analyses seek to explain the uncanny generalization abilities of DNNs through the distributional dynamics of idealized networks with infinitely many hidden layer neurons, considering both the stochastic equations corresponding to said dynamics and Fokker-Planck-type PDEs modeling the flow of the distribution of the weights in the network. Analyses along these lines in the contemporary literature include [Luo et al., 2021, Shevchenko et al., 2021]. Although this line of work does use limiting arguments involving stochastic and distributional dynamics, it does not directly consider the Langevin differential inclusion and the potential PDE solution for the distribution, as we do here.
We finally note that the Langevin diffusion as minimizing free energy even in the case of nonsmooth potentials should not be surprising. In fact, in the seminal book of [Ambrosio et al., 2005], it is shown that a Clarke subgradient of a regular function still exhibits a locally minimizing flow structure for a functional potential on a probability space. Thus, at least locally, the necessary properties appear to have some promise to exist.
Tame functions, o-minimal structures and the exceptional set
Deep learning raises a number of important questions on the interface of optimization theory and stochastic analysis [Bottou et al., 2018]. In particular, there are still major gaps in our understanding of the applications of plain stochastic gradient descent (SGD) in deep learning, leaving aside the numerous related recent optimization algorithms for deep learning [Schmidt et al., 2021, Davis et al., 2020]. A particularly challenging aspect of deep learning is the composition of functions that define the objective landscape. These functions are typically recursively evaluated piecewise non-linear maps. The nonlinearities are due to the sigmoidal function, exponentiation and related operations, which are neither semialgebraic nor semianalytic, and the piece-wise nature of the landscape is due to the common appearance of component-wise max. We consider Euclidean space R n with the canonical Euclidean scalar product ⟨·, ·⟩ and a locally Lipschitz continuous function f : R n → R. For any x ∈ R n , the Clarke subgradient of f is defined as [Clarke, 1990]:
∂ c f (x) = conv{ v ∈ R n : ∃ y k → x with y k ∈ R, v k = ∇f (y k ) → v },

where R denotes the full-measure set on which f is differentiable (which exists by Rademacher's theorem).
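The limiting-gradient construction behind ∂ c f can be visualized numerically. The hypothetical helper below (a 1-D sketch, not part of the paper) samples ordinary gradients of f (x) = |x| at nearby differentiability points and records their interval hull, recovering [−1, 1] at the kink and a singleton elsewhere:

```python
import random

def grad_abs(x):
    # Ordinary gradient of f(x) = |x|, defined wherever x != 0.
    return 1.0 if x > 0.0 else -1.0

def approx_clarke_interval(f_grad, x, radius, n_samples, rng):
    # Sample gradients at nearby points where f is differentiable and return
    # the interval hull of the observed values, mimicking
    # conv{ v : exists y_k -> x with grad f(y_k) -> v }.
    vals = []
    for _ in range(n_samples):
        y = x + rng.uniform(-radius, radius)
        if y != 0.0:  # f(x) = |x| is differentiable off the origin
            vals.append(f_grad(y))
    return min(vals), max(vals)

rng = random.Random(1)
hull_at_kink = approx_clarke_interval(grad_abs, 0.0, 1e-3, 1000, rng)  # both slopes seen
hull_smooth = approx_clarke_interval(grad_abs, 2.0, 1e-3, 1000, rng)   # single slope
```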
Following a long history of work [Macintyre and Sontag, 1993, e.g.], we utilize a lesser known function class of definable functions, which are known [van den Dries et al., 1994] to include restricted analytic fields with exponentiation. We refer to Macintyre, McKenna, and van den Dries [Macintyre et al., 1983] and Knight, Pillay, and Steinhorn [Knight et al., 1986] for the original definitions, and to van den Dries-Miller [Van den Dries and Miller, 1996, van den Dries, 1998] and Coste [Coste, 1999] for excellent book-length surveys. In particular, we use the following:
Definition 2.1 (Structure, cf. [Pillay and Steinhorn, 1986]). A structure on (R, +, ·) is a collection of sets O = (O p ) p∈N , where each O p is a family of subsets of R p such that for each p ∈ N:
1. O p contains the family of real-algebraic subsets of R p , that is, all sets of the form {x ∈ R p : g(x) = 0}, where g is a polynomial on R p , i.e., g ∈ R[X 1 , X 2 , . . . , X p ];
2. O p is stable under finite unions, finite intersections, and complementation;
3. if A ∈ O p and B ∈ O q , then A × B ∈ O p+q ;
4. if π : R p+1 → R p denotes the canonical projection onto the first p coordinates and A ∈ O p+1 , then π(A) ∈ O p .

The elements of O p are called the definable subsets of R p . A structure O is said to be o-minimal if, moreover, the elements of O 1 are exactly the finite unions of points and intervals. A subset of R n is called tame if there exists an o-minimal structure such that the subset is definable in the o-minimal structure. Notice that this notion of tame geometry goes back to the topologie modérée of [Grothendieck, 1997].
Next, we consider two more regularity properties. First, we consider functions that have conservative set-valued fields of [Bolte and Pauwels, 2021], or equivalently, are path differentiable [Bolte and Pauwels, 2021]. This includes convex, concave, Clarke regular, and Whitney stratifiable functions.
Definition 2.3 (Conservative set-valued fields of [Bolte and Pauwels, 2021]). Let D : R n ⇒ R n be a set-valued map. D is a conservative (set-valued) field whenever it has closed graph, nonempty compact values and, for any absolutely continuous loop γ : [0, 1] → R n , that is, γ(0) = γ(1), we have

∫ 0 1 max v∈D(γ(t)) ⟨γ̇(t), v⟩ dt = 0

in the Lebesgue sense.
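The defining loop condition can be checked numerically for a concrete conservative field. The sketch below (assuming the toy potential f (x, y) = |x| + y², an illustrative choice not from the paper) approximates the loop integral of D = {∇f } along a circle that stays away from the kink {x = 0}, where D is single valued; the integral vanishes up to quadrature rounding:

```python
import math

def grad_f(x, y):
    # Gradient of the potential f(x, y) = |x| + y^2, valid away from {x = 0}.
    return (1.0 if x > 0.0 else -1.0, 2.0 * y)

def loop_integral(center, radius, n):
    # Midpoint-rule approximation of int_0^1 <grad f(gamma(t)), gamma'(t)> dt
    # along the circle gamma(t) = center + radius (cos 2 pi t, sin 2 pi t).
    cx, cy = center
    total = 0.0
    dt = 1.0 / n
    for k in range(n):
        t = (k + 0.5) * dt
        gx = cx + radius * math.cos(2 * math.pi * t)
        gy = cy + radius * math.sin(2 * math.pi * t)
        dgx = -2 * math.pi * radius * math.sin(2 * math.pi * t)
        dgy = 2 * math.pi * radius * math.cos(2 * math.pi * t)
        vx, vy = grad_f(gx, gy)
        total += (vx * dgx + vy * dgy) * dt
    return total

# A loop kept away from the kink {x = 0}: x ranges over [2, 4] > 0.
integral = loop_integral(center=(3.0, 0.0), radius=1.0, n=2000)
```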
Definition 2.4 (Potential function of [Bolte and Pauwels, 2021]). Let D be a conservative (set-valued) field. For any γ absolutely continuous with γ(0) = 0 and γ(1) = x, any function f defined as

f (x) = f (0) + ∫ 0 1 max v∈D(γ(t)) ⟨γ̇(t), v⟩ dt (2)
= f (0) + ∫ 0 1 min v∈D(γ(t)) ⟨γ̇(t), v⟩ dt (3)
= f (0) + ∫ 0 1 ⟨γ̇(t), D(γ(t))⟩ dt (4)

is called a potential function for D. We shall also say that D admits f as a potential or that D is a conservative field for f . Let us note that a given conservative field D admits a potential f that is unique up to a constant.
The second notion of regularity, which we consider, is the notion of piecewise Lipschitzianity on R n of [Leobacher and Szölgyenyi, 2017]. The associated exception(al) set [Leobacher and Steinicke, 2021] is the subset of that function's domain where the function is not Lipschitz. We recall the definition here for convenience, and we state it for set-valued maps.
Definition 2.5. A function f : R n → R n is piecewise Lipschitz continuous if there exists a hypersurface Θ, which we call the exceptional set for f , such that Θ has finitely many connected components θ i , i = 1, 2, . . ., and such that the restriction f | R n \Θ is intrinsic Lipschitz (cf. [Leobacher and Szölgyenyi, 2017, Definition 3.2]).
A set-valued function F : R n ⇒ R n is piecewise Lipschitz continuous if there exists an exceptional set Θ, defined analogously as for single-valued functions, such that F is Lipschitz on R n \ Θ with respect to Haudsdorff H-metric (cf. [Baier and Farkhi, 2013]).
Let us recall that, given a set-valued map with compact convex values F : R n ⇒ R n , we say that F is H-Lipschitz, or just Lipschitz, if and only if there exists a constant l > 0 such that for any x, y ∈ R n we have

ρ H (F (x), F (y)) ≤ l ‖x − y‖,

where, for arbitrary compact sets A, B ⊆ R n , ρ H (A, B) := max{ max a∈A dist(a, B), max b∈B dist(b, A) }, with dist(a, B) := min b∈B ‖a − b‖ 2 .
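The Hausdorff semi-metric ρ H can be computed directly for finite sets. The following minimal helper (an illustration on finite subsets of R, not from the paper) mirrors the displayed formulas:

```python
def dist_point_set(a, B):
    # dist(a, B) = min_{b in B} |a - b| for a finite subset B of R.
    return min(abs(a - b) for b in B)

def hausdorff(A, B):
    # rho_H(A, B) = max( max_{a in A} dist(a, B), max_{b in B} dist(b, A) ).
    return max(max(dist_point_set(a, B) for a in A),
               max(dist_point_set(b, A) for b in B))

# The one-sided maxima can differ; rho_H symmetrizes them:
d = hausdorff([0.0, 1.0], [0.0, 2.0])
```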
Definition 2.6 ([Van den Dries and Miller, 1996, Bolte et al., 2007]). A C r stratification of a closed (sub)manifold M of R n is a locally finite partition (M i ) i∈I of M into C r submanifolds (called strata) having the property that, for i ≠ j, cl(M i ) ∩ M j ≠ ∅ implies that M j is entirely contained in cl(M i ) \ M i (called the frontier of M i ) and dim(M j ) < dim(M i ). Moreover, a C r stratification (M i ) i∈I of M has the Whitney-(a) property if, for each x ∈ cl(M i ) ∩ M j (with i ≠ j) and for each sequence {x k } ⊆ M i such that

lim k→∞ x k = x, lim k→∞ T x k M i = T ,

it holds that T x M j ⊆ T , where T x M j (respectively, T x k M i ) denotes the tangent space of the manifold M j at x (respectively, of M i at x k ), and the convergence in the second limit is in the sense of the standard topology of the Grassmannian bundle of the dim M i -planes in T x M , x ∈ M i (see [Mather, 2012]). If, moreover, a Whitney stratification (M i ) i∈I satisfies, for all i ∈ I and x ∈ M i , the transversality condition

e n+1 ∉ T x M i , (5)

where e n+1 = (0, . . . , 0, 1) ∈ R n+1 , then it is called a nonvertical Whitney stratification. A function f : R n → R is said to be stratifiable, in any of its connotations, if the graph of f , denoted by Graph(f ), admits a corresponding C r stratification.
Remark 2.1. Let us note that, from Definition 2.6, if (M i ) i∈I is a stratification of a stratifiable submanifold M , then it follows that M = ∪ i∈I M i . Therefore, by the Hausdorff maximality principle, there must exist ī ∈ I such that dim M ī = max i∈I dim M i and, thus, M = cl(M ī ), since all the other strata M i , i ≠ ī, have zero Lebesgue measure. The same argument holds, a fortiori, for every finite subset J ⊆ I.
In this paper, we establish the existence guarantees of strong solutions to the equation (1) and study the PDE describing the flow of the probability mass of X(t). To set up the subsequent exposition, we must first establish the groundwork of linking Whitney stratifiable potential functions, which describe the loss landscape of neural networks and other statistical models on the one hand, to set valued piecewise Lipschitz continuous maps, which describe the distributional flow of stochastic subgradient descent training, modeled by (1), on the other.
However, in order to do so, we must make an additional assumption that limits the local oscillation and derivative growth of the potential; specifically, we assume that f , the potential of F , has bounded variation. It can be seen that the standard activation and loss functions that appear in neural network training satisfy this condition.
Theorem 2.1. Let F : R n ⇒ R n be a definable conservative field that admits a tame potential f : R n → R with bounded variation. Let F be Lipschitz on R n \B δ (0) with constant L, for some L, δ > 0. Then F is piecewise Lipschitz continuous.
Proof. As f is tame, by definition, it is also definable. Since F is conservative, from [Bolte and Pauwels, 2021], f is locally Lipschitz and, since it is tame, [Bolte et al., 2009, Theorem 1] implies that f is semismooth. Therefore, by [Bolte et al., 2007, Corollary 9], letting r ≥ 2 be an arbitrarily fixed integer, f admits a nonvertical C r Whitney stratification (M i ) i∈I . With abuse of notation, let (M i ) i∈I denote the stratification relative to the domain of f . Due to local finiteness, there must exist a maximal finite subset of indices J ⊆ I such that B δ (0) ∩ M j ≠ ∅, j ∈ J. Since f is semismooth, we deduce that its directional derivative f ′ (·; v) is C r−1 for all v ∈ R n [Bolte et al., 2007] and, since f ∈ BV(R n ), f ′ (·; v) is bounded on B δ (0); for all j ∈ J, all x ∈ B δ (0) ∩ M j and v ∈ T x (B δ (0) ∩ M j ), it holds that f ′ (x; v) is Lipschitz continuous with respect to x restricted to B δ (0) ∩ M j . Hence, the Riemannian gradient ∇ R f (x) is Lipschitz on B δ (0) ∩ M j , x ∈ M j , j ∈ J, since ∇ R f (x) is the restriction of the directional derivative to tangent directions onto M j . Let us note that the Clarke subgradient of f at x, denoted by ∂ • f (x), is such that Proj T x M j ∂ • f (x) ⊆ {∇ R f (x)}, where M j is the stratum such that x ∈ M j , j ∈ J (see [Bolte et al., 2007, Proposition 4]). Now, on the account of Remark 2.1, for some j x ∈ J we have dim M j x = dim B δ (0) = n. By compactness, there exists a finite covering of B δ (0) of maximal dimension, denoted by (M j ) j∈J ′ for some J ′ ⊆ J, such that B δ (0) ⊆ M ′ , where M ′ := ∪ j∈J ′ M j . Therefore, ∇ R f (x) = ∇f (x) on B δ (0).
Then, as a consequence of [Bolte and Pauwels, 2021, Theorem 1], there exists a zero-measure set S ⊆ R n such that F (x) = {∇f (x)} for all x ∈ B δ (0) \ S. Now, for x ∈ B δ (0) \ S, ∂ • f (x) ⊆ conv F (x) = conv{∇f (x)} = {∇f (x)}, so that Proj T x M j ∂ • f (x) ⊆ Proj T x M j {∇f (x)} = {∇f (x)}.
It then follows that ∇f (x), and so F , is Lipschitz on
B δ (0)\S. Since B δ (0) is compact, there exists a finite family {θ k } k∈K such that B δ (0) ∩ S ⊆ ∪ k∈K θ k .
Thus F is Lipschitz on B δ (0) \ ∪ k∈K θ k , and the claim is proved.
Example 2.1. If F : R n ⇒ R n is a definable conservative field such that none of its tame potentials f : R n → R has bounded variation, then Theorem 2.1 does not hold. In fact, let us consider

F (x) := { x sin(1/x) }, x ∈ R \ {0}; F (0) := [−1, 1].

A simple computation provides that F is a definable conservative field. Moreover, its unique potential, up to constants, is

f (x) := sin(1/x) − (1/x) cos(1/x),

which is not of bounded variation. In this case, Theorem 2.1 does not hold, as it is clear that F is not piecewise Lipschitz.
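The failure of bounded variation for this potential can be observed numerically: evaluating f at the points x k = 1/(kπ), where f (x k ) = (−1)^{k+1} kπ, the partial sums of successive jumps grow without bound. The snippet below is a numerical illustration of this example, not a proof:

```python
import math

def f(x):
    # The candidate potential from Example 2.1.
    return math.sin(1.0 / x) - math.cos(1.0 / x) / x

def partial_variation(n_peaks):
    # Evaluate f at x_k = 1/(k pi), where f(x_k) = (-1)^{k+1} k pi, and sum
    # the successive jumps |f(x_{k+1}) - f(x_k)| = (2k + 1) pi; the sum
    # diverges as n_peaks -> infinity, so f is not of bounded variation.
    xs = [1.0 / (k * math.pi) for k in range(1, n_peaks + 1)]
    return sum(abs(f(xs[i + 1]) - f(xs[i])) for i in range(len(xs) - 1))

v10 = partial_variation(10)    # about 99 pi
v100 = partial_variation(100)  # about 9999 pi, roughly 100x larger
```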
Existence and Uniqueness of Solution to the SDI
In this section we will prove that (1) admits a strong solution. More precisely, it will be proven that there exists a suitable selection of F (X t ) such that the corresponding SDE has a strong solution: this will in turn imply that the original stochastic differential inclusion has a solution as well. Our result will rely on an existence and uniqueness result for SDEs with discontinuous drift from [Leobacher and Szölgyenyi, 2017].
Piecewise Lipschitz selections of upper semicontinuous set-valued maps
In our setting, assuming that F is the Clarke subdifferential of some continuous tame function guarantees that F is an upper semi-continuous set-valued map. We further assume that F is bounded with compact convex values.
Our aim is to prove that, under these assumptions, F has a piecewise Lipschitz selection.
We are going to need the following results:
Theorem 3.1 (Theorem 9.4.3 in [Aubin and Frankowska, 1990]). Consider a Lipschitz set-valued map F from a metric space to nonempty closed convex subsets of R n . Then F has a Lipschitz selection, called Steiner selection.
Theorem 3.2 (Kirszbraun's Theorem, cf. [Federer, 1996]). If S ⊆ R n and f : S → R n is Lipschitz, then f has a Lipschitz extension g : R n → R n with the same Lipschitz constant.
In the next Theorem, which is the main result of this section, we prove that, under some mild assumptions, the set-valued map F in (1) has a piecewise Lipschitz selection for any suitable compact covering of R n .
Theorem 3.3. Let F : R n ⇒ R n be an upper semi-continuous set-valued map with closed convex values, and piecewise Lipschitz continuous, with exceptional set Θ. Let us assume that there exists b > 0 such that F (x) ⊆ bB n for all x ∈ R n . Then F has a piecewise Lipschitz selection with exceptional set Θ, arbitrarily smooth on the interior of each connected component.
Proof. Let {R i } r i=1 be the finite family of closed subsets of R n such that ∪ r i=1 R i = R n \ Θ. For each i = 1, . . . , r, applying Theorem 3.1 to F | R i implies that there exists a finite sequence of equi-Lipschitz, and thus continuous, selection functions {f i }, where each f i is defined on R i , that is, f i : R i → R n .
Let us note that, since F (x) ⊆ bB n for all x ∈ R n , {f i } r i=1 is uniformly bounded. We can now extend each f i to the whole R n , to some function, still denoted by f i , which is Lipschitz with the same constant as the original function's on the account of Kirszbraun's Theorem 3.2; moreover, the family {f i } can be assumed to retain uniform boundedness. Let now ε > 0 be given and let i = 1, . . . , r be fixed. Let us consider a partition of unity {ϕ ε } as in [Shubin, 1990, Lemma 1.2]. Now, following a classical construction (e.g., see [Azagra et al., 2007]), we let

g ε i (x) := ∫ R n f i (y)ϕ ε (y − x) dy.

It then follows that g ε i ∈ C ∞ (R n ) is a Lipschitz function, with the same Lipschitz constant as f i 's, and is such that g ε i (x) ∈ F (x) by straightforward computations. Let us stress that the Lipschitz constant of each g ε i is independent of ε, so that {g ε i } ε>0 is equi-Lipschitz on R i . Let now x ∈ R n and let i x ∈ N be the unique index such that x ∈ R ix . We then define f (x) := g ε ix (x), x ∈ R n . It then follows that f : R n → R n is countably piecewise Lipschitz on R n according to Definition 2.5 and its exceptional set is Θ. Obviously f (x) ∈ F (x), and this proves the claim.
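The mollification step in the proof can be sketched in one dimension: smoothing the discontinuous selection sign(y) of F (x) = ∂|x| with a Gaussian bump of width ε produces a C ∞ function taking values in the convex hull [−1, 1] = conv F (0). The kernel choice, ε, and the quadrature below are illustrative assumptions, not the paper's construction:

```python
import math

def selection(y):
    # A bounded selection: sign(y) in [-1, 1], discontinuous across the
    # exceptional set {0}.
    return 1.0 if y > 0.0 else (-1.0 if y < 0.0 else 0.0)

def mollify(g, x, eps, n=4000):
    # g_eps(x) = int g(y) phi_eps(y - x) dy with a Gaussian bump of width eps,
    # approximated by a midpoint rule on a grid symmetric around x.
    lo, hi = x - 6.0 * eps, x + 6.0 * eps
    h = (hi - lo) / n
    total, mass = 0.0, 0.0
    for k in range(n):
        y = lo + (k + 0.5) * h
        w = math.exp(-((y - x) / eps) ** 2 / 2.0)
        total += g(y) * w
        mass += w
    return total / mass  # dividing by the mass normalizes the kernel

smooth_right = mollify(selection, 1.0, eps=0.1)   # ~ +1 away from the kink
smooth_left = mollify(selection, -1.0, eps=0.1)   # ~ -1 away from the kink
smooth_mid = mollify(selection, 0.0, eps=0.1)     # ~ 0 at the kink, by symmetry
```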
Remark 3.1. Theorem 3.3 still holds if we replace R n with any A ⊆ R n : in fact, we can always extend F to an upper semi-continuous set-valued map defined on the whole R n by virtue of [Smirnov, 2002, Theorem 2.6].
Finally, we have the following.
Corollary 3.1. Let F : R n ⇒ R n be an upper semi-continuous set-valued map with closed convex values, and piecewise Lipschitz continuous with a C 3 exceptional set Θ. Assume that there exists b > 0 such that F (x) ⊆ bB n for all x ∈ R n . Then the SDI (1) admits a strong solution. In particular, for every piecewise Lipschitz selection of F , there exists a unique strong solution to the SDI (1).
Proof. From Theorem 3.3 there exists a drift µ : R n → R n , a piecewise Lipschitz selection of the set-valued map F , which satisfies Assumptions 3.4 − 3.6 from [Leobacher and Szölgyenyi, 2017]. Therefore, on the account of [Leobacher and Szölgyenyi, 2017, Theorem 3.21], we obtain that the SDE
dX t = −µ(X t )dt + √ 2σ dB t(6)
has a unique global strong solution, which in turn represents a strong solution to the stochastic differential inclusion (1), and this proves the claim.
Remark 3.2. Let us stress that the Lipschitz selection of the set-valued map F may not be Lipschitz on Θ s \Θ. This does not affect the result of Corollary 3.1 since, in order for it to hold, it suffices to provide a strong solution to (6).
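As an illustrative sketch (not the construction used in [Leobacher and Szölgyenyi, 2017]), an SDE of the form (6) with a piecewise Lipschitz drift can be simulated with a plain Euler-Maruyama scheme. Here µ is a hypothetical selection sign(x) of the subdifferential of |x|, and σ is constant:

```python
import numpy as np

def euler_maruyama(mu, x0, sigma, dt, n_steps, rng):
    """Simulate dX_t = -mu(X_t) dt + sqrt(2*sigma) dB_t via Euler-Maruyama."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        # mu may be any measurable selection of the set-valued drift F
        x[k + 1] = x[k] - mu(x[k]) * dt + np.sqrt(2.0 * sigma * dt) * rng.standard_normal()
    return x

# Hypothetical discontinuous drift: a selection of the subdifferential of |x|
def mu(x):
    return np.sign(x) if x != 0 else 1.0  # any value in [-1, 1] is admissible at 0

rng = np.random.default_rng(0)
path = euler_maruyama(mu, x0=2.0, sigma=1.0, dt=1e-3, n_steps=10_000, rng=rng)
```

The discontinuity of µ at 0 poses no difficulty for running the scheme; the cited results are what guarantee that it approximates a well-defined strong solution.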
4 Fokker-Planck Equation and Free Energy Minimization

4.1 Fokker-Planck Equation
The Fokker-Planck (FP) equation describes the evolution of the probability density associated with the random process modeled by the diffusion flow. Classical results deriving the Fokker-Planck equation from SDEs can be found in [Eklund, 1971, Itô, 1953, Kent, 1978, Stroock and Varadhan, 2007]. In particular, for a drift diffusion of the form dX_t = −∇f(X_t) dt + √(2σ) dB_t it holds that, from any initial distribution ρ(0) = ρ_0, the density ρ(x, t) of X_t evolves by
∂ρ/∂t = ∇ · (∇f(x) ρ) + σ∆ρ    (7)
and has a limiting stationary distribution of the Gibbs form exp{−f(x)/σ}/Z. In [Jordan et al., 1998], it was shown that the Fokker-Planck evolution corresponds to the gradient flow of a variational problem: the minimization of a free-energy functional composed of the potential f(x) and an entropy regularization. These classical results require smoothness of f even for the objects involved to be well defined. The FP equation can be derived by applying integration by parts after Itô's Lemma on the diffusion process. Following [Gardiner, 2009, 4.3.5], for arbitrary g we have
d/dt E[g(X_t)] = E[ −∇f(X_t) · ∇g(X_t) + σ∆g(X_t) ] = ∫ ( −∇f(x) · ∇g(x) + σ∆g(x) ) ρ(x, t|x_0, t_0) dx = ∫ g(x) ∂_t ρ(x, t|x_0, t_0) dx.
Replacing the expression ∇f(x) with an arbitrary coefficient function a(x), we see that when a(x) is a selection of the Clarke subdifferential it may fail to be a continuous function of x. In that case not even the weak distributional derivative of a(x) exists, and integration by parts cannot be applied.
Generic Solution Existence Results
Nevertheless, a solution ρ(x, t) can be shown to exist for the system as stated in weak form. To this end, we follow [Bogachev et al., 2001]. Let Ω T ⊆ R n × [0, T ] be open, and
L_{A,b} ϕ = Σ_{i=1}^n a_i ∂ϕ/∂x_i + Σ_{i,j=1}^n σ_ij ∂²ϕ/(∂x_i ∂x_j),  ϕ ∈ C_0^∞(Ω_T).
Moreover, for q > 1 let q′ denote its Hölder conjugate, i.e., 1/q + 1/q′ = 1. The following holds:
Theorem 4.1 (Corollary 3.2, [Bogachev et al., 2001]). Let µ be a locally finite Borel measure on Ω_T such that a_i, σ_ij ∈ L¹_loc(Ω_T, µ) and

∫_{Ω_T} [ ∂ϕ/∂t + Σ_{i=1}^n a_i ∂ϕ/∂x_i + Σ_{i,j=1}^n σ_ij ∂²ϕ/(∂x_i ∂x_j) ] dµ = 0  for all nonnegative ϕ ∈ C_0^∞(Ω_T).

If furthermore the σ_ij are uniformly bounded, nondegenerate and Hölder continuous, then µ = ρ(x, t) dx dt with ρ ∈ L^p_loc(Ω_T) for every p ∈ [1, (n + 2)′).
Remark 4.1. As observed in [Bogachev et al., 2001], one cannot expect that the density of µ is continuous even for infinitely differentiable σ ij under these conditions. However, we note that in [Portenko, 1990] a continuous solution is shown under the assumption that a i ∈ L p (Ω T , µ), i.e. it is globally integrable. Again, however, this is very restrictive in the case of studying the evolution of diffusion operators on tame nonsmooth potentials of interest.
Existence of an invariant measure µ for the probability flow, i.e., a solution of the purely elliptic part, is guaranteed by [Bogachev and Röckner, 2001, Theorem 1.6] and [Albeverio et al., 1999, Theorem 1.2]. The strong regularity conditions, and the bounded open set Ω_T, however, clearly limit both the applicability and informativeness of these results for our SDI.
Fokker-Planck With Boundary Conditions
Note, however, that we have an expectation as to what the stationary distribution for (1) is: namely, one of Gibbs form, proportional to exp{−f(x)/σ}. This measure is even absolutely continuous, and thus has higher regularity than Theorem 4.1 suggests.
To this end, consider [Gardiner, 2009, Chapter 5.1.1], which treats boundary conditions at a discontinuity for the FP equation. We can consider the space as partitioned into connected components R_i; in each region the continuous SDE and the associated FP equation (7) hold. However, for boundaries S_ij ⊆ Θ between regions we have
n · J_i(x, t)|_{S_ij^i} = n · J_j(x, t)|_{S_ij^j},  ρ(x, t)|_{S_ij^i} = ρ(x, t)|_{S_ij^j}    (8)
where the probability current J is defined as,
J_i(x, t) = F_i(x) ρ(x, t) + σ ∂ρ(x, t)/∂x_i
as defined on region R_i. Note that in one dimension this is simple: at a point of nondifferentiability x_c we have
ρ(x, t)|_{x_c^+} = ρ(x, t)|_{x_c^-},  [ F(x)ρ(x, t) + σ ∂ρ(x, t)/∂x ]|_{x_c^+} = − [ F(x)ρ(x, t) + σ ∂ρ(x, t)/∂x ]|_{x_c^-}
where we abuse notation to indicate the one-sided directional derivative of ρ from either side of x_c. We can write down a stationary solution π(x) of this system, since we have a suspected ansatz, much as is done in [Gardiner, 2009, Section 5.3]. Generically, stationarity implies ∇ · J(x) = 0, i.e., J is divergence-free with respect to x. We have
Σ_{i=1}^n ∂/∂x_i [ F_i(x) ρ(x, t) + σ ∂ρ(x, t)/∂x_i ] = 0.    (9)

Indeed, let π(x) = exp{−f(x)/σ}/Z.
On the domain R n \ Θ where ∇f (x) is well-defined, this can be seen immediately to solve (9). Since Θ is of measure zero with respect to the ambient space, we can define,
Z = ∫_{R^n} π(x) dx = Σ_i ∫_{R_i} π(x) dx.
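For instance, with a hypothetical piecewise-linear potential f(x) = ||x| − 1| (nonsmooth exactly on Θ = {−1, 0, 1}), the normalizing constant Z can be computed region by region, since Θ is a null set:

```python
import numpy as np
from scipy.integrate import quad

sigma = 1.0
f = lambda x: abs(abs(x) - 1.0)            # hypothetical potential; Theta = {-1, 0, 1}
unnorm = lambda x: np.exp(-f(x) / sigma)

# Integrate region by region: the union of the R_i misses only the null set Theta
breakpoints = [-np.inf, -1.0, 0.0, 1.0, np.inf]
Z = sum(quad(unnorm, a, b)[0] for a, b in zip(breakpoints[:-1], breakpoints[1:]))

pi = lambda x: unnorm(x) / Z               # stationary density ansatz
```

For this particular f the integral can be done in closed form, Z = 2(2 − e^{−1}) ≈ 3.264, which the quadrature reproduces.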
Of course, a constructive ansatz for the stationary solution of the elliptic problem shows neither its uniqueness, nor the existence, uniqueness and regularity of the parabolic evolution of the probability mass flow ρ(x, t). To the best of our knowledge, no such result exists for the network of parabolic systems under consideration, and at the same time, we shall see that the continuity of ρ(x, t) is important in the next section for the variational conception of the FP equation as a minimizing flow for the free energy.
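While well-posedness is deferred to the results below, the expected behavior can be checked numerically. The following sketch discretizes the 1-D equation ∂_t ρ = ∂_x(Fρ + σ∂_x ρ) with an explicit finite-volume scheme, using the hypothetical kink potential f(x) = ||x| − 1| and the a.e. derivative F = f′; an arbitrary initial density is driven toward the Gibbs ansatz:

```python
import numpy as np

# Grid and hypothetical kink potential f(x) = ||x| - 1| (nonsmooth on Theta)
sigma, L, n = 1.0, 6.0, 301
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
f = np.abs(np.abs(x) - 1.0)
F = np.gradient(f, dx)                      # a.e. derivative; arbitrary at the kinks

rho = np.full(n, 1.0 / (2.0 * L))           # arbitrary (uniform) initial density
dt = 0.2 * dx**2 / sigma                    # explicit-scheme stability bound
for _ in range(40_000):
    # interface flux J = F*rho + sigma * d(rho)/dx, with zero-flux boundaries
    Fm = 0.5 * (F[1:] + F[:-1])
    rm = 0.5 * (rho[1:] + rho[:-1])
    J = Fm * rm + sigma * np.diff(rho) / dx
    rho[1:-1] += dt * np.diff(J) / dx
    rho[0] += dt * J[0] / dx
    rho[-1] -= dt * J[-1] / dx

gibbs = np.exp(-f / sigma)
gibbs /= gibbs.sum() * dx                   # normalized Gibbs ansatz on the grid
```

The zero-flux boundaries conserve mass, and after integrating to a moderate final time the computed density agrees with e^{−f/σ}/Z to discretization accuracy, despite the drift being discontinuous on Θ.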
To this end, we can consider two lines of work as a foundation for formulating the requisite results. In particular, [Nittka, 2011] gives regularity conditions for solutions of second-order parabolic equations on a Lipschitz domain. As seen in Section 2, for the problems of interest the exceptional set can be parameterized in a smooth way, which implies that for compact sets the boundary is Lipschitz. We must, however, take care to translate the results appropriately to the potentially unbounded domains that a region R_i may correspond to.
The closest to our line of work is the literature on graph-structured networks of PDEs, such as models of Kirchhoff's laws. A prominent and representative work along these lines is [von Below, 1988]. In this setting there is a network of one-dimensional paths embedded in an ambient space R^n, connected at a set of vertices, with boundary conditions coupling a collection of linear parabolic PDEs that govern the flow of a quantity across the network. The spirit of a network of PDEs coupled through boundary conditions is thus analogous to our problem, with the caveat that the domains there are one-dimensional, even if embedded in a larger space. Existence and smoothness regularity conditions are shown.
We instead take an approach using domain decomposition methods for the solution of PDEs based on optimal control theory [Gunzburger et al., 1999]; see also, e.g., [Dolean et al., 2015]. We reformulate the boundary conditions as a control in order to establish a variational formulation.
First let δ > 0 be a regularization parameter and consider the optimization problem
min over {ρ_i ∈ H¹(R_i)}, {g_ij ∈ L²(S_ij)} of J_δ({ρ_i}, {g_ij})

subject to  ∂ρ_i/∂t + F(x) · ∇ρ_i − σ∆ρ_i = 0 on R_i, ∀i,
            n · J_i = g_ij on S_ij,
            n · J_j = −g_ij on S_ij,    (10)
where
J_δ({ρ_i}, {g_ij}) := Σ_{ij} [ ∫_{S_ij} (ρ_i − ρ_j)² dS_ij + (δ/2) ∫_{S_ij} g_ij² dS_ij ].
Let us also note the weak form of the PDE constraints,
∫_{R_i} [ ∂_t ρ_i v + σ∇ρ_i · ∇v + (F · ∇ρ_i) v ] dx = Σ_j ∫_{S_ij} g_ij v dS_ij − Σ_j ∫_{S_ji} g_ji v dS_ji,  ∀v ∈ H¹(R_i).    (11)
We have the following, akin to [Gunzburger et al., 1999, Theorem 2.1]. Let

U = { ({ρ_i} ⊂ H¹(R_i), {g_ij} ⊂ L²(S_ij)) satisfying (11) with J_δ({ρ_i}, {g_ij}) < ∞ }.

Theorem 4.2. There exists a unique solution ({ρ_i ∈ H¹(R_i)}, {g_ij ∈ L²(S_ij)}) to (10) in U.

Proof. Let {ρ_i^{(n)}, g_ij^{(n)}} be a minimizing sequence in U, i.e.,

lim_{n→∞} J_δ({ρ_i^{(n)}, g_ij^{(n)}}) = inf_{ {ρ_i, g_ij} ∈ U } J_δ({ρ_i, g_ij}).
By the definition of U, the g_ij^{(n)} are uniformly bounded in L²(S_ij). Now, we argue that by [Ladyzhenskaya et al., 1967, Theorem IV.5.3] the PDEs given by (11) have unique solutions ρ_i, continuous with respect to the inputs, i.e.,
‖ρ_i‖_{H^{l+2, l/2+1}} ≤ C Σ_j ( ‖g_ij‖_{H^{l+1,(l+1)/2}(S_ij)} + ‖g_ji‖_{H^{l+1,(l+1)/2}(S_ji)} )    (12)
for some non-integral l, whenever the norms on the right-hand side are well defined. Checking the conditions for the application of this theorem, we see that the only unsatisfied assumption is the integrability of the coefficients with respect to an appropriate dual Sobolev space. However, one sees from the proof of the result, in particular [Ladyzhenskaya et al., 1967, Equation (IV.7.1)], that this assumption is used only to show that the operators
∂_t + σ∇² + F · ∇  and  [ σ∇ + F · ]
are bounded. However, with σ constant and F ∈ L^∞(R_i) this also clearly holds, and these are bounded operators from H^{l+2, l/2+1} to H^{l, l/2+1} on R_i and to H^{l+1, l/2+1} on S_ij, respectively.
For any l, we can apply the Poincaré inequality on the left and Sobolev embedding on the right of (12) to obtain,
‖ρ_i‖_{H¹(R_i)} ≤ C Σ_j ( ‖g_ij‖_{L²(S_ij)} + ‖g_ji‖_{L²(S_ji)} ).    (13)
Thus, by the uniform boundedness of the g_ij^{(n)} from the definition of U, we get the boundedness of the ρ_i^{(n)} and the existence of a subsequence {ρ_i^{(n_k)}, g_ij^{(n_k)}} convergent to ({ρ̂_i}, {ĝ_ij}), with every ρ̂_i ∈ H¹(R_i) and every ĝ_ij ∈ L²(S_ij).
Passing to the limit we see that they also satisfy (11) and by the lower semicontinuity of J δ we have that
inf_{ {ρ_i}, {g_ij} } J_δ({ρ_i}, {g_ij}) = lim inf_k J_δ({ρ_i^{(n_k)}}, {g_ij^{(n_k)}}) ≥ J_δ({ρ̂_i}, {ĝ_ij}),
so ({ρ̂_i}, {ĝ_ij}) is optimal. Since J_δ is convex and U is linear, it is uniquely optimal. Now consider the following weak system of PDEs, now for a single ρ:
∫_{R_i} [ ∂_t ρ v + σ∇ρ · ∇v + (F · ∇ρ) v ] dx = Σ_j ∫_{S_ij} ( n · (Fρ + σ∇ρ) ) v dS_ij − Σ_j ∫_{S_ji} ( n · (Fρ + σ∇ρ) ) v dS_ji,  ∀v ∈ H¹(R_i).    (14)
Then, we prove the following.
Theorem 4.3. For each δ > 0, denote by ({ρ_i^δ}, {g_ij^δ}) the solution to (10) given by Theorem 4.2. Then for any convergent subsequence as δ → 0 there exists ρ such that, with ρ_i = ρ|_{R_i} and g_ij = n · (Fρ + σ∇ρ)|_{S_ij},

Σ_i ‖ρ_i^δ − ρ_i‖_{H¹(R_i)} + Σ_{ij} ‖g_ij^δ − g_ij‖_{L²(S_ij)} → 0,

and ρ solves (14).
Proof. From the definition of ({ρ_i^δ}, {g_ij^δ}) we have that J_δ({ρ_i^δ}, {g_ij^δ}) ≤ J_δ({ρ_i}, {g_ij}), that is,

Σ_{ij} [ ∫_{S_ij} (ρ_i^δ − ρ_j^δ)² dS_ij + (δ/2) ∫_{S_ij} (g_ij^δ)² dS_ij ] ≤ (δ/2) Σ_{ij} ∫_{S_ij} g_ij² dS_ij,
implying, by the uniform boundedness of g_ij in L²(S_ij), that ‖ρ_i^δ − ρ_j^δ‖_{L²(S_ij)} → 0 as δ → 0. Furthermore, from (13) we get that the ‖ρ_i^δ‖_{H¹(R_i)} are uniformly bounded. Thus, as δ → 0 there is a subsequence converging to {ρ_i^*, g_ij^*} in {H¹(R_i), L²(S_ij)}, and passing to the limit implies they satisfy (11).
Furthermore, ‖ρ_i^δ − ρ_j^δ‖_{L²(S_ij)} → 0 implies that ρ_i^*|_{S_ij} = ρ_j^*|_{S_ij}. Defining ρ ∈ H¹(R^n) by ρ|_{R_i ∪ (∪_j S_ij)} = ρ_i^*, we obtain the unique solution to (14).
We have now proven the existence of ρ ∈ H 1 (R n ) satisfying the weak form of the PDE (7) over ∪ i R i , corresponding to where the coefficients are smooth almost everywhere, and with the boundary conditions (8).
Variational Flow for the Free Energy
In [Jordan et al., 1998], the FP equation (7) is shown to be the gradient flow of the functional,
F(ρ) = E(ρ) + S(ρ; σ) = ∫ f(x) ρ dx + σ ∫ ρ log ρ dx    (15)
when F (x) = ∇f (x) and the stationary distribution is given by its minimizer. To this effect, one can consider a scheme,
ρ^{(k)} := arg min_ρ { (1/2) W(ρ^{(k−1)}, ρ)² + h F(ρ) }    (16)
for some small h, where W refers to the Wasserstein distance. Let M be defined as
M := { ρ : R^n → [0, ∞) measurable : ∫ ρ(x) dx = 1, ∫ |x|² ρ(x) dx < ∞ }.
We have the following proposition, whose proof is unchanged in our setting.

Proposition 4.1 ([Jordan et al., 1998, Proposition 4.1]). Given ρ ∈ M, there exists a unique solution to (16).

Now we extend the classical main result, Theorem 5.1 in [Jordan et al., 1998], to our setting, which requires a few modifications to account for the nonsmooth potential in the free energy.

Theorem 4.4. Let ρ_0 ∈ M with F(ρ_0) < ∞, let ρ_h^{(k)} be the solutions of (16), and define the interpolation ρ_h : (0, ∞) × R^n → [0, ∞) by

ρ_h(t) = ρ_h^{(k)} for t ∈ [kh, (k + 1)h), k ∈ N ∪ {0}.

Then as h → 0, ρ_h(t) → ρ(t) weakly in L¹(R^n) for all t ∈ (0, ∞), where ρ ∈ H¹((0, ∞) × R^n) solves (9) with initial condition ρ(t) → ρ_0 strongly in L¹(R^n) as t → 0.
Proof. Let ξ ∈ C_0^∞(R^n, R^n) be a smooth vector field with bounded support, and define its flux Φ_τ by ∂_τ Φ_τ = ξ ∘ Φ_τ for all τ ∈ R, with Φ_0 = id. Let ρ_τ(y) dy be the push-forward of ρ^{(k)}(y) dy under Φ_τ. This means that ∫_{R^n} ρ_τ(y) ζ(y) dy = ∫_{R^n} ρ^{(k)}(y) ζ(Φ_τ(y)) dy for all ζ ∈ C_0^0(R^n), which implies det(∇Φ_τ) (ρ_τ ∘ Φ_τ) = ρ^{(k)}. By the minimality in (16) we have

(1/τ) [ ( (1/2) W(ρ^{(k−1)}, ρ_τ)² + h F(ρ_τ) ) − ( (1/2) W(ρ^{(k−1)}, ρ^{(k)})² + h F(ρ^{(k)}) ) ] ≥ 0.    (17)

Now, taking ζ = f gives

∫_{R^n} ρ_τ(y) f(y) dy = ∫_{R^n} ρ^{(k)}(y) f(Φ_τ(y)) dy,  and hence  (1/τ) ( E(ρ_τ) − E(ρ^{(k)}) ) = ∫_{R^n} (1/τ) ( f(Φ_τ(y)) − f(y) ) ρ^{(k)}(y) dy.
Recall the notion of a conservative vector field, cf. Definition 2.3 above. In the original argument the term ∇f(y) · ξ(y) appears; here, instead, we have a vector χ(y) that represents the directional change of f. Specifically, we can write
d/dτ [E(ρ_τ)]_{τ=0} = ∫_{R^n} χ(y) · ξ(y) ρ^{(k)}(y) dy.
We can continue similarly as in the proof of [Jordan et al., 1998, Theorem 5.1] to obtain,
d/dτ [S(ρ_τ; σ)]_{τ=0} = −σ ∫_{R^n} ρ^{(k)} div ξ dy
and subsequently the following a priori estimates: for any T < ∞ there exists C such that for all N ∈ N and h ∈ [0, T] with Nh ≤ T,

M(ρ_h^{(N)}) ≤ C,  ∫_{R^n} max{ ρ_h^{(N)} log ρ_h^{(N)}, 0 } dx ≤ C,  E(ρ_h^{(N)}) ≤ C,  Σ_{k=1}^N W(ρ_h^{(k−1)}, ρ_h^{(k)})² ≤ Ch.    (18)
This implies the existence of a convergent subsequence, ρ_h ⇀ ρ weakly in L¹((0, T) × R^n) for all T < ∞, with ρ(t) ∈ M and E(ρ) ∈ L^∞((0, T)), and ρ satisfying

−∫_{(0,∞)×R^n} ρ ( ∂_t ζ − χ · ∇ζ + σ∆ζ ) dx dt = ∫_{R^n} ρ_0 ζ(0) dx  for all ζ ∈ C_0^∞(R × R^n).    (19)
We can proceed as in the proof of Theorem 5.1 in [Jordan et al., 1998], using appropriate test functions, to conclude that ρ ∈ L^p_loc((0, ∞) × R^n). Now, clearly this ρ solves (14), which has a unique solution in H¹((0, T) × R^n) by Theorem 4.3.
The final statements of the Theorem follow as in Theorem 5.1 in [Jordan et al., 1998].
Numerical Illustration
One-Dimensional Example
For the first illustration, we take the one-dimensional function

f(x) := { −x − 1 for x < −1;  x + 1 for −1 ≤ x < 0;  1 − x for 0 ≤ x < 1;  x − 1 for x ≥ 1 },

F(x) := { −1 for x < −1;  [−1, 1] at x = −1;  1 for −1 < x < 0;  [−1, 1] at x = 0;  −1 for 0 < x < 1;  [−1, 1] at x = 1;  1 for x > 1 }.    (20)
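The pair (20) can be coded directly; here F is represented by a single-valued selection that picks an arbitrary admissible element of [−1, 1] on the exceptional set Θ = {−1, 0, 1}:

```python
def f(x):
    """Piecewise-linear double-well potential from (20); equals ||x| - 1|."""
    if x < -1:
        return -x - 1
    if x < 0:
        return x + 1
    if x < 1:
        return 1 - x
    return x - 1

def F_sel(x):
    """A measurable selection of the Clarke subdifferential F(x) of f."""
    if x < -1:
        return -1.0
    if x < 0:
        return 1.0    # any value in [-1, 1] is admissible at x = -1
    if x < 1:
        return -1.0   # likewise at x = 0
    return 1.0        # and at x = 1
```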
The probability density function associated with minimizing the free energy is shown in Figure 1 and the result of one hundred thousand samples generated by the Metropolis Algorithm in Figure 2.
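A sketch of the random-walk Metropolis sampler used for the reference histogram (Gaussian proposals are an assumption here; the exact settings of our experiments may differ):

```python
import numpy as np

def metropolis(logp, x0, n_samples, step, rng):
    """Random-walk Metropolis targeting the density proportional to exp(logp)."""
    x, lp = x0, logp(x0)
    out = np.empty(n_samples)
    for k in range(n_samples):
        prop = x + step * rng.standard_normal()
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        out[k] = x
    return out

rng = np.random.default_rng(1)
logp = lambda x: -abs(abs(x) - 1.0)     # log of exp(-f(x)) for (20), up to Z
samples = metropolis(logp, x0=0.0, n_samples=50_000, step=1.0, rng=rng)
```

Only the unnormalized density is needed, so the region-wise constant Z never enters the sampler.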
As an illustration, we ran unadjusted Langevin dynamics with the Euler-Maruyama discretization, i.e., generated samples with the iteration

x_{k+1} = x_k − ε g_k + √(2ε) B_k,  g_k ∈ F(x_k),  B_k ∼ N(0, 1).    (21)
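A direct sketch of this subgradient Langevin iteration with σ = 1, where the step size ε multiplies the subgradient and √(2ε) scales the noise, and the subgradient selection for (20) is a compact closed-form expression:

```python
import numpy as np

def subgrad_langevin(F_sel, x0, eps, n_samples, rng):
    """Iterate x_{k+1} = x_k - eps*g_k + sqrt(2*eps)*B_k with g_k in F(x_k)."""
    x = x0
    out = np.empty(n_samples)
    for k in range(n_samples):
        g = F_sel(x)                                   # subgradient selection
        x = x - eps * g + np.sqrt(2.0 * eps) * rng.standard_normal()
        out[k] = x
    return out

# A selection of the Clarke subdifferential of f(x) = ||x| - 1| from (20);
# it returns 0, an element of [-1, 1], on the exceptional set {-1, 0, 1}
F_sel = lambda x: float(np.sign(x) * np.sign(abs(x) - 1.0))

rng = np.random.default_rng(2)
chain = subgrad_langevin(F_sel, x0=0.0, eps=0.01, n_samples=100_000, rng=rng)
```

Most of the chain's mass should concentrate near the two wells at ±1, in line with the stationary density proportional to e^{−f}.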
We generated ten million samples with ε ∈ {0.01, 0.001, 0.0001}. We plot the histograms of the final sample count in Figure 3 and the Wasserstein distance (computed with SciPy) in Figure 4. We observe that the iteration does indeed recreate the posterior, i.e., it appears to be ergodic, although the number of samples required is fairly large: it converges in probability distance only after about 3 million samples, suggesting that geometric ergodicity is unlikely.
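The empirical 1-D Wasserstein distance between two sample sets can be computed directly with SciPy; this sketch compares a shifted pair for which the exact distance is known:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(3)
a = rng.standard_normal(10_000)
b = a + 0.5                          # pure shift: the exact W1 distance is 0.5

d = wasserstein_distance(a, b)       # 1-D W1 from the sorted empirical samples
```

In one dimension this distance reduces to the L¹ distance between the empirical quantile functions, which is why the computation is cheap even for millions of samples.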
Bayesian ReLU Neural Network
Gradient-based samplers were introduced for their improved mixing rates with respect to dimension. Although we do not derive quantitative mixing rates in this work, one would naturally suspect that such behavior carries over to the nonsmooth case. For this purpose, we run Metropolis-Hastings, (subgradient) unadjusted Langevin, and Metropolis-corrected Langevin on a ReLU network. To avoid complications associated with inexactness, we use moderately sized datasets and perform backpropagation on the entire data sample, rather than a stochastic variant.
Specifically, we consider the E2006 and YearPredictionMSD datasets from the LIBSVM dataset repository [Chang and Lin, 2011]. The datasets have feature dimension 150000 and 90, and 16k and 460k training samples, respectively. We use a ReLU network with three hidden layers, each with 10 neurons. Given the high dimension, rather than attempting to visualize the posterior we plot the test-set loss averaged across 20 runs. See Figure 5 for the results of the Langevin and Metropolis-adjusted Langevin approaches on the test accuracy for dataset E2006. In this case, Metropolis resulted in a completely static, high-variance mean test loss across the samples, being entirely uninformative. Next, Figure 6 shows the test loss for a ReLU network using unadjusted Langevin; note that the noisy initial plateau is followed by a decrease. For both Metropolis and the Metropolis-corrected variant of Langevin there is no improvement, due to repeated rejection of the initial sample or minimal change in the error. Figure 6: Test loss on the sampled parameters generated by the unadjusted Langevin method on a ReLU neural network on a regression task for the dataset YearPredictionMSD. The discretization rate is 1e-05.
Discussion and Implications
The standard potential gradient diffusion process has featured prominently in the theoretical analysis meant to give insight as to the approximation and generalization performance associated with the long-term behavior of SGD as applied to neural networks, for example, [Hu et al., 2017]. It has also featured prominently in algorithms for sampling high-dimensional data sets, e.g. [Bussi and Parrinello, 2007]. It is standard for these studies to require that the drift term is Lipschitz and, as such, that the potential is continuously differentiable. In many contemporary applications, for example in (Bayesian, in the case of sampling) Neural Networks with ReLU or convolutional layers, this is not the case, and the presence of points wherein the loss function is not continuously differentiable is endemic. Therefore, the primary results of this paper, the existence of a solution to the stochastic differential inclusion drift as well as the existence of a Fokker-Planck equation characterizing the evolution of the probability distribution which asymptotically converges to a Gibbs distribution of the free energy, provide some basic insights into these processes without unrealistic assumptions. Specifically, even with these nonsmooth elements as typically arising in Whitney-stratifiable compositions of model and loss criteria, the overall understanding of the asymptotic macro behavior of the algorithmic processes remains as expected. Additional insights gathered from studying an approximating SDE are not as straightforward to extend to the differential-inclusion setting. The issues of wide and shallow basins around minima seem minor, when one considers that a Hessian of a loss function may not exist at certain points. Thus, considerations of mixing rate for sampling and qualitative properties of limiting distributions are interesting topics for further study in specific cases of specific stochastic differential inclusions.
Figure 1: Stationary distribution e^{−f(x)} associated with (20).

Figure 2: Histogram of the Metropolis-generated samples.

Figure 3: Histograms for (21) at ε ∈ {0.01, 0.001, 0.0001}, in this order.

Figure 4: Wasserstein distance between the Langevin-type iteration (21) and Metropolis-generated samples for ε ∈ {0.01, 0.001, 0.0001}, in this order.

Figure 5: Test loss on the sampled parameters generated by the unadjusted as well as the Metropolis-adjusted Langevin method on a ReLU neural network on a regression task for the dataset E2006. The discretization rate is 1e-05.
2. if A belongs to O_p, then A × R and R × A belong to O_{p+1};

3. if A belongs to O_{p+1}, then π(A) ∈ O_p, where π : R^{p+1} → R^p is the coordinate or "canonical" projection onto R^p, i.e., the projection onto the first p coordinates;

4. O_p is stable under complementation, finite union and finite intersection, and contains R^p, which defines a Boolean algebra of subsets of R^p.

Definition 2.2 (Definable functions, cf. [Van den Dries and Miller, 1996]). A structure on (R, +, ·) is called o-minimal when the elements of O_1 are exactly the finite unions of (possibly infinite) intervals and points. The sets A ⊆ R^p belonging to an o-minimal structure O_p, for some p ∈ N, are called definable in the o-minimal structure O. One often shortens this to definable if the structure O is clear from the context. A set-valued mapping is said to be definable in O whenever its graph is definable in O.
Acknowledgements. We would like to thank Thomas Surowiec for helpful discussions regarding the PDE theory related to the arguments in this paper. FVD has been supported by the REFIN Project, grant number 812E4967, funded by Regione Puglia; he is also a member of the INdAM-GNCS research group. VK and JM were supported by the OP RDE project "Research Center for Informatics" (CZ.02.1.01/0.0/0.0/16 019/0000765) and the Czech Science Foundation (22-15524S).
References

[Albeverio et al., 1999] Albeverio, S., Bogachev, V., and Röckner, M. (1999). On uniqueness of invariant measures for finite- and infinite-dimensional diffusions. Communications on Pure and Applied Mathematics, 52(3):325-362.

[Ambrosio et al., 2005] Ambrosio, L., Gigli, N., and Savaré, G. (2005). Gradient Flows: in Metric Spaces and in the Space of Probability Measures. Springer Science & Business Media.

[Aubin and Frankowska, 1990] Aubin, J. and Frankowska, H. (1990). Set-Valued Analysis. Birkhäuser, Boston.

[Azagra et al., 2007] Azagra, D., Ferrera, J., López-Mesas, F., and Rangel Oliveros, Y. (2007). Smooth approximation of Lipschitz functions on Riemannian manifolds. Journal of Mathematical Analysis and Applications, 326:1370-1378.

[Baier and Farkhi, 2013] Baier, R. and Farkhi, E. (2013). Regularity of set-valued maps and their selections through set differences. Part 1: Lipschitz continuity. Serdica Mathematical Journal, 39:365-390.

[Bogachev et al., 2001] Bogachev, V., Krylov, N., and Röckner, M. (2001). On regularity of transition probabilities and invariant measures of singular diffusions under minimal conditions. Communications in Partial Differential Equations, 26(11-12):2037-2080.

[Bogachev and Röckner, 2001] Bogachev, V. and Röckner, M. (2001). A generalization of Khasminskii's theorem on the existence of invariant measures for locally integrable drifts. Theory of Probability and Its Applications, 45:363-378.

[Bolte et al., 2009] Bolte, J., Daniilidis, A., and Lewis, A. (2009). Tame functions are semismooth. Mathematical Programming, 117(1):5-19.

[Bolte et al., 2007] Bolte, J., Daniilidis, A., Lewis, A., and Shiota, M. (2007). Clarke subgradients of stratifiable functions. SIAM Journal on Optimization, 18(2):556-572.

[Bolte and Pauwels, 2021] Bolte, J. and Pauwels, E. (2021). Conservative set valued fields, automatic differentiation, stochastic gradient methods and deep learning. Mathematical Programming, 188(1):19-51.

[Bottou et al., 2018] Bottou, L., Curtis, F. E., and Nocedal, J. (2018). Optimization methods for large-scale machine learning. SIAM Review, 60(2):223-311.

[Bussi and Parrinello, 2007] Bussi, G. and Parrinello, M. (2007). Accurate sampling using Langevin dynamics. Physical Review E, 75(5):056707.

[Chang and Lin, 2011] Chang, C.-C. and Lin, C.-J. (2011). LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1-27.

[Clarke, 1990] Clarke, F. H. (1990). Optimization and Nonsmooth Analysis. SIAM.

[Coste, 1999] Coste, M. (1999). An Introduction to o-minimal Geometry. Univ. de Rennes.

[Davis et al., 2020] Davis, D., Drusvyatskiy, D., Kakade, S., and Lee, J. D. (2020). Stochastic subgradient method converges on tame functions. Foundations of Computational Mathematics, 20(1):119-154.

[Dolean et al., 2015] Dolean, V., Jolivet, P., and Nataf, F. (2015). An Introduction to Domain Decomposition Methods: Algorithms, Theory, and Parallel Implementation. Other Titles in Applied Mathematics. Society for Industrial and Applied Mathematics.

[Eklund, 1971] Eklund, N. A. (1971). Boundary behavior of solutions of parabolic equations with discontinuous coefficients. Bulletin of the American Mathematical Society, 77(5):788-792.

[Federer, 1996] Federer, H. (1996). Geometric Measure Theory. Classics in Mathematics. Springer.

[Gardiner, 2009] Gardiner, C. (2009). Stochastic Methods, volume 4. Springer, Berlin.

[Grothendieck, 1997] Grothendieck, A. (1997). Around Grothendieck's Esquisse d'un Programme, volume 1. Cambridge University Press.

[Gunzburger et al., 1999] Gunzburger, M., Peterson, J., and Kwon, H. (1999). An optimization based domain decomposition method for partial differential equations. Computers & Mathematics with Applications, 37(10):77-93.

[Hu et al., 2017] Hu, W., Li, C. J., Li, L., and Liu, J.-G. (2017). On the diffusion approximation of nonconvex stochastic gradient descent. arXiv preprint arXiv:1705.07562.

[Itô, 1953] Itô, S. (1953). The fundamental solution of the parabolic equation in a differentiable manifold. Osaka Mathematical Journal, 5(1):75-92.

[Jordan et al., 1998] Jordan, R., Kinderlehrer, D., and Otto, F. (1998). The variational formulation of the Fokker-Planck equation. SIAM Journal on Mathematical Analysis, 29(1):1-17.

[Kent, 1978] Kent, J. (1978). Time-reversible diffusions. Advances in Applied Probability, 10(4):819-835.

[Kisielewicz, 2009] Kisielewicz, M. (2009). Stochastic representation of partial differential inclusions. Journal of Mathematical Analysis and Applications, 353(2):592-606.

[Kisielewicz, 2013] Kisielewicz, M. (2013). Stochastic Differential Inclusions and Applications. Springer.

[Kisielewicz, 2020] Kisielewicz, M. (2020). Set-Valued Stochastic Integrals and Applications. Springer.

[Knight et al., 1986] Knight, J. F., Pillay, A., and Steinhorn, C. (1986). Definable sets in ordered structures. II. Transactions of the American Mathematical Society, 295(2):593-605.

[Ladyzhenskaya et al., 1967] Ladyzhenskaya, O. A., Solonnikov, V. A., and Ural'ceva, N. N. (1967). Linejnye i kvazilinejnye uravneniâ paraboličeskogo tipa. Izdatel'stvo "Nauka", Glavnaâ redakciâ fiziko-matematičeskoj literatury.

[Leobacher and Steinicke, 2021] Leobacher, G. and Steinicke, A. (2021). Exception sets of intrinsic and piecewise Lipschitz functions. arXiv preprint arXiv:2105.12004.

[Leobacher and Szölgyenyi, 2017] Leobacher, G. and Szölgyenyi, M. (2017). A strong order 1/2 method for multidimensional SDEs with discontinuous drift. The Annals of Applied Probability, 27(4):2383-2418.

[Luo et al., 2021] Luo, T., Xu, Z.-Q. J., Ma, Z., and Zhang, Y. (2021). Phase diagram for two-layer ReLU neural networks at infinite-width limit. Journal of Machine Learning Research, 22(71):1-47.

[Macintyre et al., 1983] Macintyre, A., McKenna, K., and van den Dries, L. (1983). Elimination of quantifiers in algebraic structures. Advances in Mathematics, 47(1):74-87.

[Macintyre and Sontag, 1993] Macintyre, A. and Sontag, E. D. (1993). Finiteness results for sigmoidal "neural" networks. In Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing, pages 325-334.

[Mather, 2012] Mather, J. (2012). Notes on topological stability. Bulletin (New Series) of the American Mathematical Society, 49.

[Meyn and Tweedie, 1993] Meyn, S. P. and Tweedie, R. L. (1993). Stability of Markovian processes III: Foster-Lyapunov criteria for continuous-time processes. Advances in Applied Probability, pages 518-548.

[Meyn and Tweedie, 2012] Meyn, S. P. and Tweedie, R. L. (2012). Markov Chains and Stochastic Stability. Springer Science & Business Media.

[Nittka, 2011] Nittka, R. (2011). Regularity of solutions of linear second order elliptic and parabolic boundary value problems on Lipschitz domains. Journal of Differential Equations, 251(4-5):860-880.

[Pillay and Steinhorn, 1986] Pillay, A. and Steinhorn, C. (1986). Definable sets in ordered structures. I. Transactions of the American Mathematical Society, 295(2):565-592.

[Portenko, 1990] Portenko, N. I. (1990). Generalized Diffusion Processes, volume 83. American Mathematical Society.

[Schmidt et al., 2021] Schmidt, R. M., Schneider, F., and Hennig, P. (2021). Descending through a crowded valley - benchmarking deep learning optimizers. In International Conference on Machine Learning, pages 9367-9376. PMLR.
Mean-field analysis of piecewise linear solutions for wide relu networks. [ Shevchenko, arXiv:2111.02278Accepted to Journal of Machine Learning Research. arXiv preprint[Shevchenko et al., 2021] Shevchenko, A., Kungurtsev, V., and Mondelli, M. (2021). Mean-field analysis of piecewise linear solutions for wide relu net- works. arXiv preprint arXiv:2111.02278. Accepted to Journal of Machine Learning Research.
Weak Bloch property and weight estimates for elliptic operators. SéminaireÉquations aux dérivées partielles (Polytechnique) dit aussi. M A Shubin, Séminaire Goulaouic-Schwartz. Shubin, 1990[Shubin, 1990] Shubin, M. A. (1989-1990). Weak Bloch property and weight estimates for elliptic operators. SéminaireÉquations aux dérivées par- tielles (Polytechnique) dit aussi "Séminaire Goulaouic-Schwartz".
Introduction to the Theory of Differential Inclusions. G Smirnov, American Mathematical SocietyRhode IslandSmirnov, 2002[Smirnov, 2002] Smirnov, G. (2002). Introduction to the Theory of Differ- ential Inclusions. American Mathematical Society, Rhode Island.
Tame topology and ominimal structures. D W Stroock, S S Varadhan, London Mathematical Society Lecture Note Series. 248Cambridge University PressMultidimensional diffusion processes[Stroock and Varadhan, 2007] Stroock, D. W. and Varadhan, S. S. (2007). Multidimensional diffusion processes. Springer, Berlin Heidelberg. [van den Dries, 1998] van den Dries, L. (1998). Tame topology and o- minimal structures, volume 248 of London Mathematical Society Lecture Note Series. Cambridge University Press, United Kingdom.
The elementary theory of restricted analytic fields with exponentiation. Den Dries, Annals of Mathematics. 1401den Dries et al., 1994] van den Dries, L., Macintyre, A., and Marker, D. (1994). The elementary theory of restricted analytic fields with expo- nentiation. Annals of Mathematics, 140(1):183-205.
Geometric categories and o-minimal structures. L Van Den Dries, C Miller, Duke Mathematical Journal. 842Van den Dries and Miller[Van den Dries and Miller, 1996] Van den Dries, L. and Miller, C. (1996). Geometric categories and o-minimal structures. Duke Mathematical Jour- nal, 84(2):497-540.
Classical solvability of linear parabolic equations on networks. J Von Below, Journal of Differential Equations. 722von Below, 1988[von Below, 1988] von Below, J. (1988). Classical solvability of linear parabolic equations on networks. Journal of Differential Equations, 72(2):316-337.
Bayesian learning via stochastic gradient Langevin dynamics. Teh ; Welling, M Welling, Y W Teh, Proceedings of the 28th international conference on machine learning (ICML-11). the 28th international conference on machine learning (ICML-11)Citeseer[Welling and Teh, 2011] Welling, M. and Teh, Y. W. (2011). Bayesian learn- ing via stochastic gradient Langevin dynamics. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 681-688. Citeseer.
Snowmass White Paper: The Quest to Define QFT
10 Jan 2023
Mykola Dedushenko
Simons Center for Geometry and Physics
Stony Brook University
11794-3636Stony BrookNYUSA
This article provides a review of the literature on rigorous definitions and constructions in Quantum Field Theory, spanning a period of seven decades. Comparing with the ideas and constructions found in the modern physics literature, we conclude that none of the existing systems of QFT axioms can cover all the physical situations. Therefore, it is still an outstanding open problem to formulate a complete definition of QFT. We argue that the question is of relevance for both physicists and mathematicians.

Announcement: comments are highly appreciated!

arXiv:2203.08053v2 [hep-th] 10 Jan 2023

1 Introduction

The subject of Quantum Field Theory is nearing its centennial, with its inception dating back to the papers [1,2], followed by [3,4] and many others. Growing mostly out of the need to reconcile special relativity with quantum mechanics, both young subjects at that time, it led to the development of the early version of perturbative QFT over the following two decades. An interesting historical account of those early years can be found in the first chapter of Weinberg's excellent textbook [5].

One of the biggest challenges that had to be overcome was the UV divergences, which eventually led to the development of renormalization techniques by the end of the forties in the works of Dyson, Feynman, Schwinger, and Tomonaga [6-19].

A new field, with all its strange renormalization machinery, desperately needed a clear set of rules, or axioms, from which everything else would follow in a logical manner. Such rules would "distill" the subject into a mathematical subfield, but they were also necessary due to the limitations of the perturbative Lagrangian techniques. Thus, starting from the fifties, various axiomatics for QFT began to appear. For the purpose of this paper, we will consider Wightman's axioms [20-23] as the start of that process, which will inevitably miss some earlier attempts, such as the S-matrix program of Heisenberg [24], the closely related extended (off-shell) S-matrix approach of Bogolyubov-Medvedev-Polivanov [25], or the axioms of Lehmann-Symanzik-Zimmermann [26,27] (the LSZ reduction formula, however, became part of the standard QFT formalism). Some of the other approaches are still being developed nowadays. Here we would like to give a brief overview of these issues and discuss the range of applicability of the various axiom systems. A point that we want to make is that none of the existing definitions covers the full range of notions of Quantum Field Theory that appears in the physics literature. Having a rigorous system of axioms for a physics subfield has two philosophical motivations.

On the one hand, it provides a starting point for mathematical investigations, sort of extracting the abstract truth from the messy reality. On the other, it indicates that the physical understanding of the subject has matured enough. Indeed, most other physics subfields (all the "non-quantum" physics and nonrelativistic quantum mechanics) have already undergone this process. The fact that QFT does not have (as we will see) one clear and universal set of axioms likely shows that the physical understanding is still lacking. Hence, we argue, it is a challenge both for physicists and for mathematicians to define QFT. Below we will provide a brief overview of the existing approaches.

2 Existing axiomatics

2.1 Correlator-focused approaches

Wightman axioms. One of the older axiom systems that remains relevant to date is that due to Wightman [20-23] (see also the books [25,28,29]). The axioms view fields as operator-valued tempered distributions and formalize the notion of expectation values of their products ("Wightman functions"). One starts with the assumption (W0) of relativistic invariance (as in Wigner's classification [30]; see also [31-34]): the physical Hilbert space is a unitary representation of the Poincare group. This assumption is supplemented by the spectral condition (the energy-momentum spectrum lies in the closed upper light-cone) and by the uniqueness and Poincare invariance of the vacuum state Ψ_0 ∈ H. Axiom W1 states that there is a set of fields φ_i[f], given by tempered distributions valued in the operators defined on (and preserving) a dense subset D (which includes the vacuum) of the Hilbert space H. The subset D is assumed to be Poincare-invariant. Then W2 states covariance of the fields with respect to the Poincare group, and W3 requires locality (also called microcausality) in the form of (anti)commutativity of spacelike-separated fields. A quantum field theory is said to satisfy W0-W3 and, in addition, to obey cyclicity of the vacuum: the span of vectors of the form φ_1[f_1] · · · φ_n[f_n]Ψ_0 (for all possible n and f_i) is dense in H.

The latter condition guarantees that there are enough fields in the theory.

Axioms of Euclidean QFT. The Osterwalder-Schrader (OS) axioms [65-67], as well as their modifications by Glimm-Jaffe (GJ) [68] and the axioms of Nelson [69], provide, roughly, the Euclidean version of the Wightman axioms. The OS axioms are also based on formalizing the notion of correlation functions, known in the Euclidean case as Schwinger functions S_n(x_1, ..., x_n) = ⟨φ_1(x_1) · · · φ_n(x_n)⟩. They include: OS0, temperedness of the S_n as distributions; OS1, Euclidean covariance; OS2, reflection positivity; OS3, (anti)symmetry under permutations ("anti" in the fermionic case); OS4, cluster decomposition. Under a subtle additional property, the linear growth condition, the OS theorem [66] (sometimes called the OS reconstruction theorem) states that the Schwinger functions can be analytically continued to Minkowski signature so as to obey the Wightman axioms there (see also the modification by Zinoviev [77]). The Glimm-Jaffe axioms similarly formalize the generating functional

S[f] = ⟨exp(φ[f])⟩ ≡ ∫ exp(φ[f]) dµ,

where dµ is a measure on the space of distributions φ. They demand its analyticity, regularity (in the form of a certain growth bound), Euclidean covariance, reflection positivity, and ergodicity of the time translations. These axioms imply OS with the growth condition, and thus also the Wightman axioms. Finally, the Nelson axioms [69,78-80] similarly take the measure-theoretic approach seriously and, importantly, require the Markov property, which essentially captures locality and implies that the state of a system in some region is assigned to the boundary of this region (in particular, the Hilbert space is assigned to the boundary). He also requires ergodicity and proves that the Wightman axioms follow upon analytic continuation to Minkowski space ("Nelson's reconstruction theorem"; see the book [81] by B. Simon for a review of this and other topics, and also [82]). Also note the result of [83] extending the OS reconstruction to equilibrium statistical mechanics (see also [84]), and the result of [85] studying the reconstruction of representations of the Poincare group in the same context of Euclidean QFT. A recent work [86] also provides generalizations of the W and OS axioms (and reconstruction theorems) that are supposed to be suitable for gauge theories.
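As a concrete illustration of the key axiom OS2, reflection positivity can be written schematically as follows (a standard textbook form, with smearing and domain details suppressed): for any finite family of test functions f_n supported at positive Euclidean times,

```latex
\sum_{n,m} \int \overline{f_n(x_1,\dots,x_n)}\, f_m(y_1,\dots,y_m)\,
S_{n+m}(\theta x_n,\dots,\theta x_1,\, y_1,\dots,y_m)\, dx\, dy \;\ge\; 0,
\qquad \theta(t,\vec{x}) = (-t,\vec{x}),
```

where θ denotes Euclidean time reflection. It is precisely this positivity that furnishes the Hilbert space inner product in the OS reconstruction theorem.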
Constructive Field Theory. The axioms discussed so far are completely non-constructive; one has to do extra work to provide examples. At first, only free fields (including the generalized free fields defined in [87]) and various solvable models related to free fields were known to satisfy the Wightman axioms (and, naturally, the other axiom systems). This led to the subject of Constructive Field Theory (CQFT), emerging in the 1960s [88-92], whose main goal was to provide rigorous interacting examples of QFTs. An extensive review (as of 1987) can be found in the book [68] (it focuses on the Euclidean path integral approach; see also other books on the subject: [93-96]), and also [97-101]. An early vision of the field can be found in [102,103], as well as a slightly later review [104]; a review summarizing some successes of CQFT as of 2000 is in [105], as well as a slightly more detailed review in [106]. More recent reviews include [107-109] and a talk [110]. Through a lot of work starting from the late sixties, success has been achieved in rigorously constructing and studying 2d scalar theories with arbitrary polynomial interaction (the so-called P(φ)_2 theories) [91,111-130] (see also the book [81], the recent papers [131,132], and [133-136] on other potentials), the three-dimensional λφ^4 theory [137-145] (see also the more recent [146-150]), the Gross-Neveu model [151-156], and the Thirring model [157-161] (in particular, all these theories were shown to obey the Wightman axioms); other theories with fermions include the 2D and 3D Yukawa models [65, (see also [187]) and some supersymmetric models [188-194].
Random walk representations of Euclidean theories were introduced in [73,75] and developed later [195-198], resulting in various applications [145,199], most prominently the proof of triviality of the φ^4 theory in d ≥ 5 spacetime dimensions [200-202] (see the book [203]). The four-dimensional case turned out to be much more subtle [204-213] and was resolved only recently in [214] (see also the lecture notes [215]), confirming triviality of the φ^4_4 model. ("Trivial" means "free" or "Gaussian", and the statements are about UV-complete models, i.e., with cutoffs removed, in precisely integral dimensions. Of course, nothing prevents models with cutoffs from being nontrivial effective field theories, and furthermore d = 4 − ε [216] is not covered by such statements.) Lattice regularization has played a role, especially in gauge theories [217-219]; see for example [220] and many references therein, especially the works [221-240] of Balaban, and see also [241-243], which revisit Balaban's approach to the renormalization group (illustrated with the φ^4 theory), and [244-246].

The results of [232,234] and [247] (using different methods) provide significant progress towards solving the Millennium problem on four-dimensional Yang-Mills; see [248,249] for discussions. (Curiously, after two decades of rapid progress in the 70s and 80s, the field of Constructive Field Theory has moved so far from the mainstream that, even though it hosts one of the most famous problems in theoretical physics, many young people nowadays do not even know that this field exists. In the author's opinion, this state of affairs will change in the future, as more mathematicians are starting to think about QFT again.)

Algebraic QFT

Haag-Kastler axioms. Algebraic QFT (AQFT) is another approach to axiomatizing QFT that de-emphasizes the notion of fields and instead formalizes the algebra of observables, without referring to any Hilbert space at first. The subject was initiated by the formulation of the Haag-Kastler (HK) axioms in [250] (with some elements appearing in earlier works, such as [39,40,53,251-254]; see also reference [2] in [250]). There exist a number of books and monographs on AQFT [255-261] and on operator algebras [262-280], which should be consulted for details. Among the more recent literature, we mention a collection [281], a concise review [282], the books and monographs [283-285] and a related book [286]. The key points and references are also summarized in [287]. The HK axioms (sometimes called the Araki-Haag-Kastler axioms) are about relativistic, local, unitary QFTs in flat Minkowski space-time. To every causally closed subset U of Minkowski space (see the discussion of causal closedness in [288]) one assigns a C*-algebra of observables A(U). (In short, a C*-algebra is a C-algebra with an involution * obeying natural properties (a *-algebra), equipped with a norm ‖·‖ obeying ‖AB‖ ≤ ‖A‖ ‖B‖ and ‖A*‖ = ‖A‖, complete with respect to the topology induced by ‖·‖ (a Banach *-algebra, or B*-algebra), and satisfying the C*-property ‖A*A‖ = ‖A‖ ‖A*‖. C*-algebras were introduced in the works of Gelfand and Naimark; see the block of references above and [256].) Under an inclusion U_1 ⊂ U_2 one has an inclusion A(U_1) ⊂ A(U_2) of C*-algebras (this property is called isotony), which is functorial, i.e., respects compositions (this data is often called a local net of algebras; we could say that A(·) is an isotonic pre-cosheaf, except that it is defined on causally closed subsets rather than on open sets). The requirement of causal locality says that A(U_1) and A(U_2) commute with each other (inside A(U), where U_i ⊂ U) if U_1 and U_2 are spacelike separated. Furthermore, one usually imposes Poincare covariance in the form of a morphism α(p) : A(U) → A(pU) for any Poincare transformation p. Another requirement, due to the existence of linear dynamics, known as the time slice axiom, states that if U_1 ⊂ U_2 and U_1 contains a Cauchy surface of U_2, then A(U_1) → A(U_2) is an isomorphism. One also often requires positivity of the energy spectrum (i.e., of the operator of time translations).
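The combinatorial content of isotony and causal locality can be illustrated in a finite-dimensional toy model (our own hypothetical sketch, not a genuine Haag-Kastler net: there is no Poincare group and no C*-limit here). To a "region", i.e., a set of sites on a small qubit chain, we assign the operators acting nontrivially only on those sites; operators of disjoint regions then commute:

```python
import numpy as np

# Toy "net of algebras" on a chain of N qubits. A region U is a set of
# sites; A(U) consists of operators acting nontrivially only on U.
N = 3
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli X
Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli Z

def localize(op, site):
    """Embed a single-qubit operator at `site` into the global N-qubit algebra.

    This embedding is the toy analog of isotony: an element of A({site})
    is realized inside A({0,...,N-1}) by tensoring with identities.
    """
    out = np.array([[1.0]])
    for s in range(N):
        out = np.kron(out, op if s == site else I2)
    return out

A = localize(X, 0)  # an observable localized in U1 = {0}
B = localize(Z, 2)  # an observable localized in U2 = {2}

# Causal locality (toy version): observables of disjoint regions commute.
comm = A @ B - B @ A
print(np.allclose(comm, 0))  # True
```

The point of the sketch is only structural: `localize` plays the role of the inclusion morphism A(U_1) ⊂ A(U_2), and commutativity of disjointly localized operators mimics the spacelike commutativity axiom.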
In the algebraic formulation, quantum states are understood as linear maps ω : A → C satisfying the positivity condition ω(A*A) ≥ 0 for all A ∈ A, where ω(A) is the "expectation value of A". One can consider faithful representations π_ω of the algebras A by bounded operators π_ω(A(U)) ⊂ B(H) on the Hilbert space H obtained via the GNS construction from the state ω [289,290]. The algebra B(H) has two useful notions of closed *-subalgebras: the above-mentioned C*-algebras (closed in the norm topology), and von Neumann algebras [291] (which are closed in any one of three topologies: strong operator, weak*, and weak operator); see for example [255] or any other textbook referred to above. Sometimes A(U) is taken to be a C*-algebra, but quite often one focuses on R(U) = (A(U))″, the minimal von Neumann algebra containing A(U), where (·)′ denotes the commutant inside B(H). (In the subject of operator algebras, the "commutant" of X means everything that commutes with X, while in the rest of mathematics this notion would be called a centralizer.) One often talks about the net R(U) forming a vacuum representation; see, e.g., the review [292]. This then connects to the rich theory of von Neumann algebras [293-299] (see the collection [300] and the textbooks cited previously), in particular such topics as: the decomposition [299] into factors of Type I, II, III [293], depending on whether the spectrum of dimensions of projectors onto the invariant subspaces in H is, respectively, discrete containing all integers in an interval, continuous, or consisting of just 0 and ∞, together with the deep result that in QFT we deal with Type III factors [301-303]; and modular, or Tomita-Takesaki, theory (introduced by Tomita [304] and clarified by Takesaki [305]; see also [275] and the expositions by Borchers [306] and Summers [307], or the book [255]), which provides the structural theory of the Type III factors and has connections to other topics, such as KMS states (see [308-310]).

There has been a lot of interest in modular theory recently due to its connection to entanglement properties in QFT (see [311] as the entrance point into this portion of the literature).

Similar to other approaches, it is possible to study general structural properties within the AQFT system of axioms, such as the existence of scattering states [40,252], superselection sectors [315,316] (see also, e.g., [317-320]), the spin-statistics and CPT theorems (see [321-323], and also the topological version in [324]), the Reeh-Schlieder theorem [56] (it was already mentioned earlier, but traditionally this theorem is viewed as part of the AQFT machinery), and the Goldstone theorem [325,326]. (The concept of superselection sectors [312,313], as is apparent from [250], was from the beginning important in the development of AQFT; see also the review [314]. The idea is that different superselection sectors arise from inequivalent representations of one algebraic structure.)
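The GNS construction invoked above can be summarized in two lines (a standard sketch, ignoring completions and domain subtleties): a state ω on A defines a pre-inner product and a null space,

```latex
\langle A, B\rangle \;=\; \omega(A^* B), \qquad
\mathcal{N}_\omega \;=\; \{A \in \mathcal{A} \,:\, \omega(A^*A) = 0\},
```

and the GNS Hilbert space H_ω is the completion of A/N_ω, with the representation π_ω(A)[B] = [AB] and the cyclic vector Ω_ω = [1], so that ω(A) = ⟨Ω_ω, π_ω(A) Ω_ω⟩ recovers the expectation values.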
One difference from the Wightman axioms should be clear: the latter did not require boundedness of the operators (e.g., the momentum operator has an unbounded spectrum), but this was something of an idealization. Any realistic experiment involves devices with finite ranges of possible values, so any outcome should be predictable with arbitrary precision by a theory dealing with bounded operators only, as in the AQFT framework. In discussing the connection to the Wightman axioms, one asks two questions: whether, starting with a Wightman field smeared with a compactly supported test function, one can find a self-adjoint bounded operator, and whether, starting with a net of algebras of bounded operators, one can obtain Wightman fields by a limiting process shrinking the regions to points (such questions were studied, e.g., in [327-333]).
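A standard device for trading an unbounded smeared field for bounded operators (a textbook fact for the free scalar field, added here for illustration rather than taken from the works cited above) is to pass to the Weyl operators

```latex
W(f) \;=\; e^{\,i\varphi[f]}, \qquad
W(f)\,W(g) \;=\; e^{-\frac{i}{2}\sigma(f,g)}\, W(f+g),
```

where σ is the symplectic form on the space of solutions; the W(f) are unitary, hence bounded, and generate the Weyl C*-algebra underlying the free-field net.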
Perturbative AQFT. The requirements of boundedness (C* or von Neumann structure) are dropped in perturbative AQFT, where one instead deals with star-algebras of formal power series. Reviews include [284,334,335]; see also the book [336] and the expositions [337,338]. A few references on causal perturbation theory relevant in this context are [339-345], the books [346-348] and a review [349]. A more recent block of papers on the formalism of perturbative AQFT is [350-362] (including [353] on the 1/N expansion) and [363] (see also the comments in [364]). See also [365-369] and [284, Chapter 7] on the role of the Batalin-Vilkovisky formalism [370,371], and [372-376] on the relation to deformation quantization.
AQFT in curved space. Quantum field theory on curved space pushes the limits of applicability of the QFT machinery. It comes with new physical phenomena (such as particle production effects [377]; the Hawking effect [378], with earlier precursors [379-381]; and the Fulling-Davies-Unruh effect [382-384]). These generally follow from the absence of Poincare invariance and, as a result, the absence of a distinguished vacuum, of a particle interpretation, of a momentum space representation, etc. Continuation between Minkowskian and Euclidean signature is also not generically available, and, relatedly, there is no unique choice of the Feynman propagator. All these subtleties put traditional particle-based techniques in danger, and it was recognized early on that the AQFT framework, extended to general curved backgrounds, must be the right way to proceed. By the 80s, some version of such an approach was available [386-391], but it had shortcomings: it could only describe free fields, and there were also problems in subtracting singularities when renormalizing composite operators, such as the stress-energy tensor [394] or general Wick polynomials, where the answer depended on the choice of a reference quasi-free Hadamard state. This prevented both analyzing backreaction and building a consistent perturbation theory in the interacting case. Imposing locality and covariance [394] would eventually help to fix these issues. Real progress, however, began in the 90s, when it became clear that microlocal analysis gave a more refined control over the singularities of distributions and allowed one to overcome these issues in a more systematic way [395-397].
The works [398-407] studied the microlocal aspects; in particular, they formulated the microlocal spectral condition, developed a proper (local and covariant) notion of Wick polynomials, including constructions of the covariantly conserved stress tensor, and reduced the renormalization ambiguity to that generated by local gravity counterterms. The gravity counterterms (and more general background counterterms) are generic in the discussion of QFT on classical backgrounds; they lead to fundamental ambiguities and regularization scheme-dependencies that will be mentioned later.
In [322,429,430] superselection sectors and the spin-statistics on curved spaces were considered, for further aspects of the theory see: [431][432][433][434][435][436][437][438][439].
An alternative approach to AQFT formalizing the OPE on curved spacetime is presented in [440].
Constructions of concrete interacting models proceed via the perturbation theory and renormalization, see the original papers and the reviews [284,334,335,363,399,401,405,406,441]. Note also a construction of quantum Yang-Mills (YM) as the perturbative AQFT in [442], see also [366]. Other references on gauge theories include [443][444][445][446][447][448].
Dynamical C * algebras. A novel C * -algebraic approach to QFT is being developed in the recent series of papers [449][450][451][452][453][454][455][456]. It is based on the Lagrangian formulation of field theory, and could probably be called constructive AQFT. Indeed, given a Lagrangian L, this approach produces a concrete C * -algebra A L called the dynamical C * -algebra in this context. The output obeys the 15 A pseudo-Riemannian spacetime (of Lorentzian signature) is globally hyperbolic if it has no closed causal curves, and for any two points, the intersection of the causal past of one with the causal future of the other is compact. 16 Further readings on the principle of "same physics in all spacetimes" (SPASs) include [425][426][427][428].
Haag-Kastler axioms and at the same time incorporates ideas from perturbative QFT.
Homotopical AQFT. Perturbative gauge theories live in the topologically trivial sector. The inclusion of topological effects like instantons, however, breaks some of the axioms of LCQFT: the isotony is violated, as well as the ability to reconstruct global algebras from the local ones. This fact is explained, for example, in the talks [457-459]; see also [445,460]. One way to address this problem replaces the space of gauge orbits (the configuration space) by a stack given by the corresponding gauge groupoid (a category whose objects are the bundle-connection pairs and whose morphisms are gauge equivalences). Correspondingly, the "quantized algebra of functions on fields" typical of the usual approach is now replaced by an appropriate homotopy dg-algebra. In such a generalized approach (called by its practitioners homotopical LCQFT) one obtains, instead of locally covariant nets of C*-algebras, their homotopic dg-versions. Such structures are currently under investigation; see the reviews [461,462], the monograph [463] and the papers [464-476] in which the subject is being developed (see also [477]).
Haag duality and DHR. Global symmetries and their role in AQFT (in particular, superselection sectors) were studied by Doplicher, Haag, and Roberts (DHR) [315,316,478,479]. To include gauge theories, a modification of the local QFT rules was proposed [259,480,481], suggesting to consider, in addition to the bounded regions in Minkowski space, infinite cones. Another approach is being developed in [482-484], where the violation of Haag duality A(U′)′ = A(U) [255,485] is at the heart of the issue (these authors also consider generalized symmetries and the associated extended operators that are responsible for the breakdown of Haag duality; see also [320,486]).
Factorization algebras and Euclidean perturbative AQFT. Another approach to Euclidean perturbative QFT, which spiritually fits into the AQFT philosophy, is that of factorization algebras (FA). The notion of FA goes back to [487,488]. The perturbative renormalization in QFT and its formulation via factorization algebras were developed in [489-491] (see also [492]). The idea of FA looks superficially similar to the nets of algebras in AQFT, and indeed [493] made a comparison between the FA approach and perturbative AQFT, concluding that the two are closely related. At least for free theories, they are shown to be equivalent. In a later paper [494] the same authors relate observables in perturbative AQFT and in the FA framework. A general result of [495] (where FAs are considered on Lorentzian, oriented, time-oriented, globally hyperbolic spaces) abstractly shows their equivalence, modulo natural hypotheses. Therefore, the likely status of the FA approach is that it provides an alternative viewpoint and a technically quite different approach to constructing concrete models. Some papers that use this framework include [496-511], also [512-515]. It is also suggested by [516] that this approach has close relations to [517].
Atiyah-Segal-like approach, or Functorial Field Theory
Following Atiyah and Segal's axiomatization of TQFT [518] (and its many successes, e.g., the classification of fully extended TQFTs [488]), as well as earlier ideas from the work on CFT [519,520], G. Segal, in a series of lectures [521], proposed another set of axioms that are supposed to define a general Euclidean QFT. Similar axioms have been used by Stolz and Teichner [522,523] and, apparently, were also considered by Kontsevich (unpublished). These are sometimes referred to as Functorial Field Theory (FFT) [524], though the name is slightly abused, since the locally covariant AQFTs discussed earlier are also defined as functors between appropriate categories (of globally hyperbolic spaces and C*-algebras). We will nevertheless use the term FFT here for concreteness, but we should mention that some authors [525-527] call it geometric field theory, because it depends on some geometric data on the spacetime, such as the metric. These latter authors seem to have seriously undertaken the task of developing the geometric FFT ideas, and claim to have a definition and even a classification (as in the cobordism hypothesis of [488]) of fully extended geometric FFTs [526,527]. Recently, the FFT framework was used by Segal and Kontsevich [528],
where the definition of QFT on Riemannian manifolds was extended to "allowable" complex metrics (a notion serving as a bridge between the Euclidean and the Lorentzian cases). In general, the FFT philosophy for non-topological QFTs has been gaining momentum in the past decade, 19 even though the number of papers devoted to non-topological FFTs is still relatively small. 20 We should note that an approach to FFTs on CW-complexes (serving as a discretization of spacetime) was developed in [530][531][532][533]. The main idea of FFT is that the field theory is a functor from the category of geometric bordisms (i.e., decorated by some geometric structure) to the category of topological vector spaces (see [528]). The functoriality here encodes the gluing axiom that follows from locality: spacetime can be glued from pieces, and these pieces talk to each other only through their boundaries. Namely, FFT on each piece produces a state (co)vector in the tensor product of vector spaces assigned to its boundary components, and gluing (at least in the absence of corners) is done by composing vectors and covectors. In essence, this is the very same Markov property of Euclidean path integrals that was noticed by Nelson in the 70s [69], as we discussed earlier (see also [534]). The relation between FFT and AQFT, and how the former implies the latter, is proposed in [535]. See also a discussion of the physics and formal properties of the gluing axiom in [536].
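The gluing-as-composition idea above can be caricatured in finite dimensions: assign a vector space to each boundary, a linear map (state) to each piece, and let gluing be composition, with a closed spacetime yielding a number via a trace. This is only an illustrative toy, not a construction from the references:

```python
# Toy finite-dimensional caricature of the FFT gluing axiom (illustration only).
# A "piece" of spacetime with in- and out-boundaries is a linear map between
# the vector spaces assigned to those boundaries; gluing two pieces along a
# shared boundary is composition of the maps; closing up a cylinder-like piece
# (no boundary left) contracts in- and out-states, giving a number (a trace).

def compose(A, B):
    """Glue piece A (m x k) to piece B (k x n) along their shared k-dim boundary."""
    m, k, n = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(n)]
            for i in range(m)]

def trace(A):
    """Close up a piece whose in- and out-boundaries coincide: contract states."""
    return sum(A[i][i] for i in range(len(A)))

# Two "pieces" sharing a 2-dimensional boundary space.
A = [[1, 2],
     [0, 1]]
B = [[3, 0],
     [1, 1]]

glued = compose(A, B)   # state assigned to the glued piece
Z = trace(glued)        # "partition function" of the resulting closed spacetime
```

Note that `trace(compose(A, B)) == trace(compose(B, A))`: the number assigned to a closed spacetime does not depend on where one cuts it, which is the content of the gluing axiom in this toy setting.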
Conformal Field Theory (CFT)
CFTs form a special subclass of QFTs as they correspond to fixed points of the RG flows. Due to the constraints of conformal symmetry, the operator product expansion (OPE) of their local observables becomes much more concrete and tangible than in generic QFTs. Axiomatically, we could start with any of the approaches reviewed above and specialize them to CFTs. Wightman's and Osterwalder-Schrader axioms in the presence of conformal invariance (really, scale-invariance is enough) are supplied with the OPE relations under the correlators. Historically, this has been the most popular approach to CFT, see recent works [537,538] reviewing, among other things, Euclidean CFT axioms and their relation to (W) and (OS) axioms. Functorial QFT in the presence of conformal symmetry in two dimensions leads to the definition of conformal field theory by Segal [519,520], which actually predates the FQFT axioms. Finally, AQFT axioms supplied by conformal invariance in 2D lead to the notion of conformal nets [539][540][541][542][543][544][545][546] (they are known to be related to Vertex Operator Algebras (VOAs), see [546][547][548][549], the last two of which also mention the relation to FQFT), see also series of works [550][551][552][553][554][555], [556] and [557].
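The OPE data mentioned above is particularly concrete for scalar primaries; schematically (the standard CFT convention, not taken from any specific reference here):

```latex
O_i(x)\, O_j(0) \;\sim\; \sum_k \frac{C_{ijk}}{|x|^{\Delta_i + \Delta_j - \Delta_k}}\, O_k(0),
```

where the $\Delta$'s are scaling dimensions and the $C_{ijk}$ are structure constants; in a CFT this expansion converges inside correlation functions, which is what makes the OPE "tangible" compared to generic QFTs.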
Most results on CFTs appear in the physics literature, but they are often mathematically rigorous (or there are no conceptual obstacles to making them rigorous). CFTs are usually characterized by the spectrum of local observables and their OPE data, and in two-dimensional spacetime, the enhanced Virasoro symmetry often affords exact solutions [558], connecting to the theory of Vertex Operator Algebras (VOAs). Higher-dimensional CFTs are the subject of an active subfield reviewed in a separate Snowmass paper [559], and there is another one reviewing some aspects of the VOAs [560]. Here, we have only scratched the surface of the subject, mostly because the CFT literature is vast and cannot be done justice in this review.
Discussion
As is apparent from this review and an inevitably incomplete yet huge list of references, the amount of intellectual resources invested into understanding QFT is enormous. Despite that, it is also clear that we are still lacking a single satisfactory unifying viewpoint on the subject. To some extent, the LCQFT axioms of Brunetti-Fredenhagen-Verch-Fewster and the FFT axioms of the last section present the most general and advanced attempts to axiomatize QFT, but even the oldest Wightman axioms still play a role in the modern literature (see, for example, [537,538]). There are, however, some obvious issues with these axioms:
• The fact that LCQFT faces difficulties in gauge theories and has to be replaced by homotopic AQFT teaches us something. Over the past decades we have learned about dualities in field theories, and understood that "being a gauge theory" is not an intrinsic property of a QFT but merely a construction. Indeed, there are known cases when a gauge theory admits a dual non-gauge formulation. Therefore, a model-independent formalism such as AQFT should not treat gauge theories separately. In fact, topological effects occur not only in gauge theories.
This suggests that perhaps homotopic AQFT is the right arena for the general AQFT machinery, not only for gauge theories (this goes in line with derived mathematics playing an ever larger role in physics, starting, perhaps, with the Batalin-Vilkovisky formalism). On the other hand, some progress on the issue of gauge theories is being made in [482,483].
• One can ask a few obvious questions about the FFT approach. It builds the notion of state spaces directly into the axioms, while the AQFT paradigm emphasizes that the Hilbert space of states is a secondary object, not part of the axioms. Furthermore, in case the spatial slices are non-compact, as was emphasized recently in [416], the Hilbert space does not even have to exist. Of course, one may overcome this in FFT by allowing only compact spatial slices, but the situation seems a bit uncomfortable.
• Another issue that does not seem to be addressed in the FFT framework is that of ambiguities.
As we mentioned in the text, QFTs on generic backgrounds have ambiguities due to the background counterterms that render partition functions regularization-dependent. In some cases, in the presence of extra symmetries, such ambiguities make partition functions valued in bundles, like the S 2 partition function in 2d (2, 2) SCFTs, which is valued in the Kähler bundle [561] over the moduli space. In more generic cases (like 2d theories with (1, 1) SUSY or less), such an interpretation is lost and the partition function appears completely ambiguous.
Thus it might be too naive to assume that the FFT functor always produces a unique answer, in particular always assigns a complex number to a closed spacetime. However, this might be just a normalization issue.
• Currently available axiomatic approaches to non-topological QFT do not take extended operators and defects very seriously. That is not to say it is impossible: one can include extended operators in the nets of local observables, and it is possible to include defects by modifying the algebras assigned to regions that intersect the defect. 21 One can also incorporate all sorts of (extended or not) observables in the FFT formalism by excising a tubular neighborhood of the observable and assigning the corresponding state to the boundary. However, these possibilities do not appear to have been explored in much detail. Even less understood and more mysterious is the case of corners (i.e., extending the theory to codimension ≥ 2 in the non-topological case).
Additionally, we should note that there exists a philosophy, typical of the condensed matter literature, that QFT describes small perturbations around a critical point of some lattice, many-body, or other finite system. We did not include this in the main text as it does not provide a system of axioms. 21 It is not very clear how to relate such a philosophy to any of the axiomatic approaches we have, especially to the AQFT. For example, a lattice system usually comes with a well-defined unique Hilbert space, while the QFT that should emerge from it must, somehow, lose this property.

21 We thank O. Gwilliam for a discussion on this point.
These are of course old questions, some of them have been partially answered in the Constructive Field Theory program for concrete models. We also note a recent increased interest in the lattice approach to QFTs, in particular papers [562][563][564], where a specially designed continuum limit is supposed to address the above questions.
Besides the issues mentioned before, a number of QFTs studied in the modern literature do not fit into any of the axiom systems currently available. QFT was originally introduced to marry quantum mechanics with special relativity, but today we know that this was more of a historic accident. For instance, QFTs exist outside the Lorentz-invariant setting, examples including Lifshitz field theories (see a review [565] and references therein), Horava gravity [566] (which, however, is a gravitational theory), and many others. Surely, when placed on curved spaces, such theories are expected to obey some modified version of local covariance, if any at all. Hence they do not fit into any of the axiom systems described above.
More generally, it has been recently appreciated in the hep-th community [567] that our understanding of QFT is incomplete. The standard techniques are very limited, and a number of physically acceptable theories do not fit the old profile of QFT. Such theories include field theories on non-commutative spaces, little string theories, and various exotic theories such as those from [568][569][570][571] and references therein. Combining everything we said, there is a clear problem: we do not have the general definition of QFT.
While it is not known how to generalize the notion of QFT yet, one idea is worth mentioning.
In [572] A. Losev and S. Hu made a bold proposal that one should modify the geometry on which the QFT is defined. Instead of working on ordinary manifolds, Riemannian or Lorentzian, one should consider a certain generalization that captures the algebraic operations used in constructing QFTs. The authors of [572] coined the name "Feynman geometry," and suggested that it should be described by an A ∞ -algebra with trace-class operations (such a definition covers many UV regulators: momentum cut-offs, lattices, non-commutativity). In this respect, one should also mention the work of Kontsevich-Soibelman on the A ∞ approach to non-commutative geometry [573], which perhaps can be of use. It is also possible that the correct notion of "Feynman geometry"
should be even more general to cover all the instances of exotic QFTs, if this is the right approach.
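For orientation, a minimal sketch of what an A ∞ -algebra involves (the standard relations, not a definition specific to [572]): a graded vector space with multilinear operations $m_n$ of arity $n$, satisfying for every $n \ge 1$

```latex
\sum_{\substack{r+s+t=n \\ s \ge 1}} (-1)^{r+st}\,
m_{r+1+t}\!\left(a_1,\dots,a_r,\; m_s(a_{r+1},\dots,a_{r+s}),\; a_{r+s+1},\dots,a_n\right) = 0 .
```

For $n=1$ this says $m_1$ squares to zero (a differential); for $n=2$, that $m_1$ is a derivation of the product $m_2$; for $n=3$, that $m_2$ is associative up to the homotopy $m_3$. Presumably, the trace-class condition of [572] further equips these operations with traces, allowing loop-type contractions of the kind Feynman diagrams require.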
See [70][71][72][73][74][75], as well as references in [76], for the origins of the Euclidean QFT.
We did not describe the latter property in this review, but it is discussed in most of the references.

18 In this context, the word "homotopy" means that various relations like commutativity or associativity hold up to higher homotopies.
19 For example, it is apparent from reading [529, Section 2] that the authors of that article think about QFT in terms of the FFT paradigm. One can find more examples like this in the literature.

20 On the other hand, the field of topological FFT (often called TQFT or Atiyah-Segal TQFT) is thriving, so we omit talking about it, for the same reason that we had to skip CFTs above: the subfield is simply too huge and deserves a separate review. Unfortunately, the related topic of cohomological field theories also has to be skipped.
M. Born, W. Heisenberg, and P. Jordan, "Zur Quantenmechanik. II.," Z. Phys. 35 no. 8-9, (1926) 557-615.
P. A. M. Dirac, "Quantum theory of emission and absorption of radiation," Proc. Roy. Soc. Lond. A 114 (1927) 243.
W. Heisenberg and W. Pauli, "On Quantum Field Theory. (In German)," Z. Phys. 56 (1929) 1-61.
W. Heisenberg and W. Pauli, "On Quantum Field Theory. 2. (In German)," Z. Phys. 59 (1930) 168-190.
S. Weinberg, The Quantum Theory of Fields. Vol. 1: Foundations. Cambridge University Press, 2005.
J. S. Schwinger, "On Quantum electrodynamics and the magnetic moment of the electron," Phys. Rev. 73 (1948) 416-417.
J. S. Schwinger, "Quantum electrodynamics. 2. Vacuum polarization and selfenergy," Phys. Rev. 75 (1948) 651.
J. S. Schwinger, "Quantum electrodynamics. I. A covariant formulation," Phys. Rev. 74 (1948) 1439.
J. S. Schwinger, "Quantum electrodynamics. III: The electromagnetic properties of the electron: Radiative corrections to scattering," Phys. Rev. 76 (1949) 790-817.
R. P. Feynman, "Space-time approach to nonrelativistic quantum mechanics," Rev. Mod. Phys. 20 (1948) 367-387.
R. P. Feynman, "Relativistic cutoff for quantum electrodynamics," Phys. Rev. 74 (1948) 1430-1438.
R. P. Feynman, "A Relativistic cutoff for classical electrodynamics," Phys. Rev. 74 (1948) 939-946.
S. Tomonaga, "On a relativistically invariant formulation of the quantum theory of wave fields," Prog. Theor. Phys. 1 (1946) 27-42.
Z. Koba, T. Tati, and S.-i. Tomonaga, "On a Relativistically Invariant Formulation of the Quantum Theory of Wave Fields. II: Case of Interacting Electromagnetic and Electron Fields," Prog. Theor. Phys. 2 no. 3, (1947) 101-116.
Z. Koba, T. Tati, and S.-i. Tomonaga, "On a Relativistically Invariant Formulation of the Quantum Theory of Wave Fields. III: Case of Interacting Electromagnetic and Electron Fields," Prog. Theor. Phys. 2 no. 4, (1947) 198-208.
S. Kanesawa and S.-i. Tomonaga, "On a Relativistically Invariant Formulation of the Quantum Theory of Wave Fields. IV: Case of Interacting Electromagnetic and Meson Fields," Prog. Theor. Phys. 3 no. 1, (1948) 1-13.
Z. Koba and S.-i. Tomonaga, "On Radiation Reactions in Collision Processes. I: Application of the "Self-Consistent" Subtraction Method to the Elastic Scattering of an Electron," Prog. Theor. Phys. 3 no. 3, (1948) 290-303.
S.-I. Tomonaga and J. R. Oppenheimer, "On Infinite Field Reactions in Quantum Field Theory," Phys. Rev. 74 (1948) 224-225.
F. J. Dyson, "The Radiation theories of Tomonaga, Schwinger, and Feynman," Phys. Rev. 75 (1949) 486-502.
A. S. Wightman, "Quantum Field Theory in Terms of Vacuum Expectation Values," Phys. Rev. 101 (1956) 860-866.
A. S. Wightman, "Quelques problèmes mathématiques de la théorie quantique relativiste," Colloq. Int. CNRS 75 (1959) 1-38.
A. S. Wightman and L. Garding, "Fields as operator-valued distributions in relativistic quantum theory," Arkiv Fys. https://www.osti.gov/biblio/4606723.
A. Wightman, "Recent Achievements of Axiomatic Field Theory," in Theoretical Physics, pp. 11-58. IAEA, Vienna, 1963.
W. Heisenberg, "Die "beobachtbaren Größen" in der Theorie der Elementarteilchen," Zeitschrift für Physik 120 (1943) 513-538.
N. N. Bogolyubov, A. A. Logunov, A. I. Oksak, and I. T. Todorov, General Principles of Quantum Field Theory. Mathematical Physics and Applied Mathematics. Kluwer Academic Publishers, 1990.
H. Lehmann, K. Symanzik, and W. Zimmermann, "On the formulation of quantized field theories," Nuovo Cim. 1 (1955) 205-225.
H. Lehmann, K. Symanzik, and W. Zimmermann, "On the formulation of quantized field theories. II," Nuovo Cim. 6 (1957) 319-333.
R. F. Streater and A. S. Wightman, PCT, Spin and Statistics, and All That. Princeton University Press, 1964. https://doi.org/10.1515/9781400884230.
N. Bogoliubov, A. Logunov, and I. Todorov, Introduction to Axiomatic Quantum Field Theory. Mathematical physics monograph series. W. A. Benjamin, Advanced Book Program, 1975.
E. P. Wigner, "On Unitary Representations of the Inhomogeneous Lorentz Group," Annals Math. 40 (1939) 149-204.
V. Bargmann, "Irreducible Unitary Representations of the Lorentz Group," Annals of Mathematics 48 no. 3, (1947) 568-640. http://www.jstor.org/stable/1969129.
V. Bargmann, "On Unitary Ray Representations of Continuous Groups," Annals of Mathematics 59 no. 1, (1954) 1-46. http://www.jstor.org/stable/1969831.
Representations of the Rotation and Lorentz Groups and Their Applications.
Y. Ohnuki, Unitary Representations of the Poincare Group and Relativistic Wave Equations. World Scientific Pub Co Inc, 1988.
G. Kallen, "Properties of Vacuum Expectation Values of Field Operators," Les Houches Lect. Notes 10 (1960) 387-454.
A. Wightman, "Quantum Field Theory and Analytic Functions of several Complex Variables," J. Indian Math. Soc. 24 no. 3-4, (1960) 625.
H. Araki, "On asymptotic behavior of vacuum expectation values at large space-like separation," Annals of Physics 11 no. 2, (1960) 260-274.
K. Hepp, R. Jost, D. Ruelle, and O. Steinmann, "Necessary restriction on Wightman functions," Helv. Phys. Acta 34 no. V, (1961) 542-544.
R. Haag and B. Schroer, "Postulates of Quantum Field Theory," Journal of Mathematical Physics 3 no. 2, (1962) 248-256.
D. Ruelle, "On the asymptotic condition in quantum field theory," Helv. Phys. Acta 35 no. III, (1962) 147-163.
R. Jost and K. Hepp, "Über die Matrixelemente des Translationsoperators," Helv. Phys. Acta 35 no. I, (1962) 34-46.
H. Araki, K. Hepp, and D. Ruelle, "On the asymptotic behaviour of Wightman functions in space-like directions," Helv. Phys. Acta 35 no. III, (1962) 164-174.
W. Schmidt and K. Baumann, "Quantentheorie der Felder als Distributionstheorie," Il Nuovo Cimento (1955-1965) 4 (1956) 860-886.
G. Luders, "On the Equivalence of Invariance under Time Reversal and under Particle-Antiparticle Conjugation for Relativistic Field Theories," Kong. Dan. Vid. Sel. Mat. Fys. Med. 28N5 no. 5, (1954) 1-17.
W. Pauli, Exclusion Principle, Lorentz Group and Reflection of Space-Time and Charge. Pergamon Press, New York, 1955.
M. Fierz, "Force-free particles with any spin," Helv. Phys. Acta 12 (1939) 3-37.
W. Pauli, "The Connection Between Spin and Statistics," Phys. Rev. 58 (1940) 716-722.
R. Jost, "A remark on the C.T.P. theorem," Helv. Phys. Acta 30 (1957) 409-416.
F. J. Dyson, "Connection between local commutativity and regularity of Wightman functions," Phys. Rev. 110 (1958) 579-581.
G. Luders and B. Zumino, "Connection between Spin and Statistics," Phys. Rev. 110 (1958) 1450-1453.
"On the connection of spin with statistics," Nuovo Cimento 8 (1958) 607-609.
G. Dell'Antonio, "On the connection between spin and statistics," Annals of Physics 16 no. 2, (1961) 153-157.
R. Haag, Discussion des "axiomes" et des propriétés asymptotiques d'une théorie des champs locales avec particules composées. Colloque Internationaux du CNRS LXXV (Lille 1957). CNRS Paris, 1959.
R. Jost, "Properties of Wightman functions," in Lectures on the Many-body Problems, E. Caianiello, ed., pp. 127-145. Academic Press, 1961.
H. J. Borchers, "On structure of the algebra of field operators," Nuovo Cim. 24 no. 2, (1962) 214-236.
H. Reeh and S. Schlieder, "Bemerkungen zur Unitäräquivalenz von lorentzinvarianten Feldern," Nuovo Cim. 22 no. 5, (1961) 1051-1068.
R. Haag, "On quantum field theories," Kong. Dan. Vid. Sel. Mat. Fys. Med. 29N12 (1955) 1-37.
D. Hall and A. Wightman, "A Theorem on invariant analytic functions with applications to relativistic quantum field theory," Mat. Fys. Medd. Dan. Vid. Selsk. 31 (1957).
O. W. Greenberg, "Haag's Theorem and Clothed Operators," Phys. Rev. 115 (1959) 706-710.
P. G. Federbush and K. A. Johnson, "Uniqueness Property of the Twofold Vacuum Expectation Value," Phys. Rev. 120 (1960) 1926.
H.-J. Borchers, "Über die Mannigfaltigkeit der interpolierenden Felder zu einer kausalen S-Matrix," Il Nuovo Cimento 15 (1960) 784-794.
S. Kamefuchi, L. O'Raifeartaigh, and A. Salam, "Change of variables and equivalence theorems in quantum field theories," Nucl. Phys. 28 (1961) 529-549.
H. Epstein, "On the Borchers class of a free field," Il Nuovo Cimento 27 (1963) 886-893.
R. Jost, The General Theory of Quantized Fields. Lectures in Applied Mathematics. American Mathematical Society, Providence, Rhode Island.
K. Osterwalder and R. Schrader, "Axioms for Euclidean Green's Functions," Commun. Math. Phys. 31 (1973) 83-112.
K. Osterwalder and R. Schrader, "Axioms for Euclidean Green's Functions. 2.," Commun. Math. Phys. 42 (1975) 281.
V. Glaser, "On the equivalence of the Euclidean and Wightman formulation of field theory," Commun. Math. Phys. 37 (1974) 257-272.
J. Glimm and A. M. Jaffe, Quantum Physics. A Functional Integral Point of View. Springer, 1987.
E. Nelson, "Construction of quantum fields from Markoff fields," Journal of Functional Analysis 12 no. 1, (1973) 97-112.
J. Schwinger, "On the Euclidean Structure of Relativistic Field Theory," Proc. Nat. Acad. Sci. 44 no. 9, (1958) 956-965.
J. Schwinger, "Euclidean quantum electrodynamics," Phys. Rev. 115 (1959) 721-731.
T. Nakano, "Quantum Field Theory in Terms of Euclidean Parameters," Progress of Theoretical Physics 21 no. 2, (1959) 241-259.
K. Symanzik, "A Modified Model of Euclidean Quantum Field Theory." http://www.arthurjaffe.com/Assets/pdf/Symanzik-ModifiedModel.pdf.
K. Symanzik, "Euclidean quantum field theory. I. Equations for a scalar model," Journal of Mathematical Physics 7 no. 3, (1966) 510-525.
K. Symanzik, "Euclidean quantum field theory," in Local Quantum Theory (Varenna, 1968), R. Jost, ed. Academic Press, New York, 1969.
A. Jaffe, "Euclidean quantum field theory," Nuclear Physics B 254 (1985) 31-43.
Y. M. Zinoviev, "Equivalence of Euclidean and Wightman field theories," Communications in Mathematical Physics 174 no. 1, (1995) 1-27.
E. Nelson, "Quantum fields and Markoff fields," in Partial Differential Equations, pp. 413-420. American Mathematical Society, Providence, R.I., 1973.
E. Nelson, "The free Markoff field," Journal of Functional Analysis 12 no. 2, (1973) 211-227.
E. Nelson, "Probability theory and Euclidean field theory," in International School of Mathematical Physics, Ettore Majorana: 1st course: Constructive Quantum Field Theory, pp. 94-124. 1973.
B. Simon, The P(φ)_2 Euclidean (Quantum) Field Theory. Princeton Univ. Press, 1974.
From Euclidean to relativistic fields and on the notion of Markoff fields. G C Hegerfeldt, Communications in Mathematical Physics. 35G. C. Hegerfeldt, "From Euclidean to relativistic fields and on the notion of Markoff fields," Communications in Mathematical Physics 35 (1974) 155-171.
Unbounded, symmetric semigroups on a separable Hilbert space are essentially selfadjoint. Fr, 10.1016/0196-8858(80)90012-3Advances in Applied Mathematics. 13Fr "Unbounded, symmetric semigroups on a separable Hilbert space are essentially selfadjoint," Advances in Applied Mathematics 1 no. 3, (1980) 237-256.
J. Frohlich, J. Osterwalder, and E. Seiler, "On Virtual Representations of Symmetric Spaces and Their Analytic Continuation," Annals of Mathematics 118 no. 3, (1983) 461-489. http://www.jstor.org/stable/2006979.
M. C. Lee and J. Glimm, "Axioms for Quantum Gauge Fields," arXiv:2112.08575 [math-ph].
O. W. Greenberg, "Generalized Free Fields and Models of Local Field Theory," Annals Phys. 16 (1961) 158-176.
A. Jaffe, "Existence Theorems for a Cut-off λϕ^4 Field Theory," in Conference on the Mathematical Theory of Elementary Particles. MIT Press, Cambridge, Massachusetts, 1966.
O. Lanford, Construction of quantum fields interacting by a cutoff Yukawa coupling. PhD thesis, 1966.
E. Nelson, "A quadratic interaction in two dimensions," in Conference on the Mathematical Theory of Elementary Particles. MIT Press, Cambridge, Massachusetts, 1966.
J. Glimm and A. M. Jaffe, "A λϕ^4 quantum field theory without cutoffs. 1," Phys. Rev. 176 (1968) 1945-1951.
A. M. Jaffe, O. E. Lanford, and A. S. Wightman, "A General class of cut-off model field theories," Commun. Math. Phys. 15 (1969) 47-68.
B. Simon, Functional Integration and Quantum Physics: Second Edition. AMS Chelsea Publishing, 2nd ed., 2005.
J. R. Klauder, A Modern Approach to Functional Integration. Applied and Numerical Harmonic Analysis. Birkhäuser Basel, 1st ed., 2011.
S. Albeverio, R. Høegh-Krohn, and S. Mazzucchi, Mathematical Theory of Feynman Path Integrals. Lecture Notes in Mathematics. Springer, Berlin, Heidelberg, 2nd ed., 2008.
S. Mazzucchi, Mathematical Feynman Path Integrals and Their Applications. World Scientific, 2nd ed., 2021.
A. Jaffe and G. Ritter, "Quantum field theory on curved backgrounds. I. The Euclidean functional integral," Commun. Math. Phys. 270 (2007) 545-572, arXiv:hep-th/0609003.
A. Jaffe and G. Ritter, "Quantum field theory on curved backgrounds. II. Spacetime symmetries," arXiv:0704.0052 [hep-th].
V. Rivasseau, From Perturbative to Constructive Renormalization. Princeton Series in Physics. Princeton University Press, 2014.
J. C. Baez, I. E. Segal, and Z. Zhou, Introduction to Algebraic and Constructive Quantum Field Theory. Princeton Series in Physics. Princeton University Press, 1992.
"Constructive Physics: Results in Field Theory, Statistical Mechanics and Condensed Matter Physics," Lecture Notes in Physics. Springer, 1995.
A. S. Wightman, "Constructive field theory - introduction to the problems," Stud. Nat. Sci. 3 (1973) 1-85.
"Constructive Quantum Field Theory," Lecture Notes in Physics. Springer, Berlin, Heidelberg, 1973.
T. Balaban and A. M. Jaffe, "Constructive Gauge Theory," NATO Sci. Ser. B 141 (1986) 207-263.
A. M. Jaffe, "Constructive quantum field theory," in Mathematical Physics 2000, A. Fokas, A. Grigorian, T. Kibble, and B. Zegarlinski, eds., pp. 111-127. 2000.
V. Rivasseau, "Constructive field theory and applications: Perspectives and open problems," Journal of Mathematical Physics 41 no. 6, (2000) 3764-3775, math-ph/0006017.
A. Jaffe, "Quantum Theory and Relativity," 2007. https://www.arthurjaffe.com/Assets/pdf/Quantum-Theory_Relativity.pdf.
S. J. Summers, "A Perspective on Constructive Quantum Field Theory," 2016.
S. Summers, "Constructive Quantum Field Theory." https://people.clas.ufl.edu/sjs/constructive-quantum-field-theory/.
A. Jaffe, "Is relativity compatible with quantum theory?," CMSA/YMSC literature talk, December 2, 2020. https://mathpicture.fas.harvard.edu/news/arthur-jaffe-presents-cmsaymsc-talk-relativity-compatible-quantum-theory.
J. Glimm and A. M. Jaffe, "A λϕ^4 quantum field theory without cutoffs, II. The field operators and the approximate vacuum," Ann. of Math. 91 no. 2, (1970) 362-401.
J. Glimm and A. Jaffe, "The λ(ϕ^4)_2 quantum field theory without cutoffs, III. The physical vacuum," Acta Mathematica 125 (1970) 203-267.
J. Glimm and A. M. Jaffe, "The λφ^4 in Two-dimensions Quantum Field Theory Without Cutoffs. 4. Perturbations of the Hamiltonian," J. Math. Phys. 13 (1972) 1568-1584.
J. T. Cannon and A. M. Jaffe, "Lorentz covariance of the λ(φ^4)_2 quantum field theory," Commun. Math. Phys. 17 (1970) 261-321.
F. Guerra, "Uniqueness of the vacuum energy density and van Hove phenomenon in the infinite-volume limit for two-dimensional self-coupled Bose fields," Phys. Rev. Lett. 28 (1972) 1213-1215.
F. Guerra, L. Rosen, and B. Simon, "The P(φ)_2 Euclidean quantum field theory as classical statistical mechanics," Annals of Mathematics 101 no. 1, (1975) 111-189.
J. Glimm, A. Jaffe, and T. Spencer, "The particle structure of the weakly coupled P(φ)_2 model and other applications of high temperature expansions, Part II: The cluster expansion," in Constructive Quantum Field Theory, vol. 25 of Springer Lecture Notes in Physics. Springer, 1973.
J. Glimm and A. M. Jaffe, "Absolute Bounds on Vertices and Couplings," Ann. Inst. H. Poincare Phys. Theor. A 22 (1975) 97.
J. Glimm, A. Jaffe, and T. Spencer, "The Wightman axioms and particle structure in the P(φ)_2 quantum field model," Annals of Mathematics 100 no. 3, (1974) 585-632.
T. Spencer, "The Absence of Even Bound States for λ(φ^4) in Two-Dimensions," Commun. Math. Phys. 39 (1974) 77-79.
T. Spencer and F. Zirilli, "Scattering States and Bound States in λ℘(ϕ)_2 in Two-Dimensions," Commun. Math. Phys. 49 (1976) 1.
J. Fröhlich, "Verification of axioms for euclidean and relativistic fields and Haag's theorem in a class of P(ϕ)_2-models," Annales de l'I.H.P. Physique théorique 21 no. 4, (1974) 271-317. http://www.numdam.org/item/AIHPA_1974__21_4_271_0/.
J. Fröhlich, "Schwinger functions and their generating functionals I," Helv. Phys. Acta 47 no. 3, (1974) 265-306.
J. Dimock and J. P. Eckmann, "On the Bound State in Weakly Coupled λ(φ^6 − φ^4) in Two-Dimensions," Commun. Math. Phys. 51 (1976) 41-54.
J. Glimm, A. Jaffe, and T. Spencer, "Existence of phase transitions for φ^4_2 quantum fields," in Mathematical Methods of Quantum Field Theory. CNRS, Paris, 1976.
J. Frohlich and B. Simon, "Pure states for general P(φ)_2 theories: Construction, regularity and variational equality," Annals of Mathematics 105 no. 3, (1977) 493-526.
J. Glimm, A. M. Jaffe, and T. Spencer, "Phase Transitions for ϕ^4_2 Quantum Fields," Commun. Math. Phys. 45 (1975) 203.
S. J. Summers, "A New Proof of the Asymptotic Nature of Perturbation Theory in P(φ)_2 Models," Helv. Phys. Acta 53 (1980) 1.
Z. Haba, "Fluctuations of P(φ) Random Fields," J. Math. Phys. 22 (1981) 1687.
E. Dynkin, "Markov processes as a tool in field theory," Journal of Functional Analysis 50 no. 2, (1983) 167-187.
H. Shen, R. Zhu, and X. Zhu, "An SPDE approach to perturbation theory of Φ^4_2: asymptoticity and short distance behavior," arXiv:2108.11312 [math.PR].
J. Fröhlich, A. Knowles, B. Schlein, and V. Sohinger, "The Euclidean φ^4_2 theory as a limit of an interacting Bose gas," arXiv:2201.07632 [math-ph].
J. Frohlich and Y. M. Park, "Remarks on Exponential Interactions and the Quantum Sine-Gordon Equation in Two Space-Time Dimensions," Helv. Phys. Acta 50 (1977) 315-329.
Y. M. Park, "Massless Quantum Sine-Gordon Equation in Two Space-Time Dimensions: Correlation Inequalities and Infinite Volume Limit," J. Math. Phys. 18 (1977) 2423-2426.
E. P. Osipov, "Possible Approach to the Construction of the :exp ξ:_d quantum field theory in a finite volume," J. Math. Phys. 25 (1984) 633.
N. Barashkov and F. C. De Vecchi, "Elliptic stochastic quantization of Sinh-Gordon QFT," arXiv:2108.12664 [math.PR].
J. Glimm and A. M. Jaffe, "Positivity of the φ^4_3 Hamiltonian," Fortsch. Phys. 21 (1973).
J. S. Feldman and K. Osterwalder, "The Wightman Axioms and the Mass Gap for Weakly Coupled (φ^4)_3 Quantum Field Theories," Annals Phys. 97 (1976) 80-135.
J. Magnen and R. Seneor, "The Infinite Volume Limit of the φ^4_3 Model," Ann. Inst. H. Poincare Phys. Theor. 24 (1976) 95-159.
J. Glimm, A. M. Jaffe, and T. Spencer, "A Convergent Expansion About Mean Field Theory. 1. The Expansion," Annals Phys. 101 (1976) 610.
J. Glimm, A. M. Jaffe, and T. Spencer, "A Convergent Expansion About Mean Field Theory. 2. Convergence of the Expansion," Annals Phys. 101 (1976) 631-669.
J. Frohlich, B. Simon, and T. Spencer, "Infrared Bounds, Phase Transitions and Continuous Symmetry Breaking," Commun. Math. Phys. 50 (1976) 79-95.
J. Magnen and R. Seneor, "Phase Space Cell Expansion and Borel Summability for the Euclidean φ^4_3 Theory," Commun. Math. Phys. 56 (1977) 237.
Y. M. Park, "Convergence of Lattice Approximations and Infinite Volume Limit in the (λφ^4 − σφ^2 − τφ)_3 Field Theory," J. Math. Phys. 18 (1977) 354-366.
D. C. Brydges, J. Fröhlich, and A. D. Sokal, "A new proof of the existence and nontriviality of the continuum ϕ^4_2 and ϕ^4_3 quantum field theories," Communications in Mathematical Physics 91 (1983) 141-186.
M. Gubinelli and M. Hofmanová, "A PDE Construction of the Euclidean Φ^4_3 Quantum Field Theory," Commun. Math. Phys. 384 no. 1, (2021) 1-75, arXiv:1810.01700 [math-ph].
S. Albeverio and S. Kusuoka, "Construction of a non-Gaussian and rotation-invariant Φ^4-measure and associated flow on R^3 through stochastic quantization," arXiv:2102.08040 [math.PR].
M. Hairer and R. Steele, "The Φ^4_3 Measure Has Sub-Gaussian Tails," J. Statist. Phys. 186 no. 3, (2022) 38, arXiv:2102.11685 [math.PR].
T. Oh, M. Okamoto, and L. Tolomeo, "Stochastic quantization of the Φ^3_3-model," arXiv:2108.06777 [math.PR].
A. Jagannath and N. Perkowski, "A simple construction of the dynamical Φ^4_3 model," arXiv:2108.13335 [math.PR].
K. Gawedzki and A. Kupiainen, "Exact Renormalization for the Gross-Neveu Model of Quantum Fields," Phys. Rev. Lett. 54 (1985) 2191-2194.
K. Gawedzki and A. Kupiainen, "Gross-Neveu Model Through Convergent Perturbation Expansions," Commun. Math. Phys. 102 (1985) 1.
J. Feldman, J. Magnen, V. Rivasseau, and R. Seneor, "Massive Gross-Neveu Model: a Rigorous Perturbative Construction," Phys. Rev. Lett. 54 (1985) 1479-1481.
J. Feldman, J. Magnen, V. Rivasseau, and R. Seneor, "A Renormalizable Field Theory: The Massive Gross-Neveu Model in Two-dimensions," Commun. Math. Phys. 103 (1986) 67-103.
M. Disertori and V. Rivasseau, "Continuous constructive fermionic renormalization," Annales Henri Poincare 1 (2000) 1-57, arXiv:hep-th/9802145.
M. Salmhofer and C. Wieczerkowski, "Construction of the renormalized GN_{2−} trajectory," pp. 1-19. 2002.
J. Frohlich and E. Seiler, "The Massive Thirring-Schwinger Model (QED in Two-Dimensions): Convergence of Perturbation Theory and Particle Structure," Helv. Phys. Acta 49 (1976) 889.
G. Benfatto, P. Falco, and V. Mastropietro, "Functional Integral Construction of the Thirring model: Axioms verification and massless limit," Commun. Math. Phys. 273 (2007) 67-118, arXiv:hep-th/0606177.
G. Benfatto, P. Falco, and V. Mastropietro, "Massless sine-Gordon and massive Thirring models: Proof of the Coleman's equivalence," Commun. Math. Phys. 285 (2009) 713-762, arXiv:0711.5010 [hep-th].
V. Mastropietro, "Schwinger functions in Thirring and Luttinger models," Nuovo Cim. B 108 (1993) 1095-1107.
P. Falco, Rigorous construction of the Thirring model: Ward-Takahashi Identities, Schwinger-Dyson Equations and New Anomalies. PhD thesis, Rome U., La Sapienza, 2005. arXiv:hep-th/0703274.
K. Osterwalder and R. Schrader, "Feynman-Kac formula for euclidean Fermi and Bose fields," Phys. Rev. Lett. 29 (1972) 1423-1425.
R. Schrader, "Yukawa quantum field theory in two space-time dimensions without cutoffs," Annals Phys. 70 (1972) 412-457.
K. Osterwalder and R. Schrader, "Euclidean Fermi fields and a Feynman-Kac formula for boson-fermion models," Helv. Phys. Acta 46 (1973) 277-302.
J. Glimm and A. M. Jaffe, "Quantum field theory models," in Les Houches Summer School of Theoretical Physics: Statistical mechanics and quantum field theory, pp. 1-108. 1971.
J. Glimm and A. Jaffe, "The Yukawa_2 quantum field theory without cutoffs," J. Funct. Anal. 7 no. 2, (1971) 323-357.
E. Seiler, "Schwinger Functions for the Yukawa Model in Two-Dimensions with Space-Time Cutoff," Commun. Math. Phys. 42 (1975) 163.
E. Seiler and B. Simon, "On Finite Mass Renormalizations in the Two-Dimensional Yukawa Model," J. Math. Phys. 16 (1975) 2289.
E. Seiler and B. Simon, "Bounds in the Yukawa in Two-Dimensions Quantum Field Theory: Upper Bound on the Pressure, Hamiltonian Bound and Linear Lower Bound," Commun. Math. Phys. 45 (1975) 99.
E. Seiler and B. Simon, "Nelson's Symmetry and All That in the Yukawa-2 and φ^4_3 Field Theories," Annals Phys. 97 (1976) 470-518.
O. A. McBryan, "Higher Order Estimates for the Yukawa Two-Dimensional Quantum Field Theory," Commun. Math. Phys. 42 (1975) 1.
O. A. McBryan, "Recent Progress on the Yukawa-2 Quantum Field Theory," in Symposium on Mathematical Problems of Quantum Dynamics - Models and Mathematics. 1975.
O. A. McBryan, "Volume Dependence of Schwinger Functions in the Yukawa-2 Quantum Field Theory," Commun. Math. Phys. 45 (1975) 279.
O. A. McBryan, "Finite Mass Renormalizations in the Euclidean Yukawa-2 Field Theory," Commun. Math. Phys. 44 (1975) 237.
O. A. McBryan, "Convergence of the Vacuum Energy Density, phi-Bounds and Existence of Wightman Functions for the Yukawa-2 Model," in International Colloquium on Mathematical Methods of Quantum Field Theory, pp. 237-252. 1975.
O. A. McBryan and Y. M. Park, "Lorentz covariance of the Yukawa_2 quantum field theory," Journal of Mathematical Physics 16 no. 1, (1975) 104-110.
J. Magnen and R. Seneor, "The Wightman Axioms for the Weakly Coupled Yukawa Model in Two-Dimensions," Commun. Math. Phys. 51 (1976) 297-313.
A. Cooper and L. Rosen, "The Weakly Coupled Yukawa_2 Field Theory: Cluster Expansion and Wightman Axioms," Trans. Am. Math. Soc. 234 no. 1, (1977) 1-88.
E. P. Osipov, "The Yukawa-2 Quantum Field Theory: Linear n(τ) Bound, Locally Fock Property," Ann. Inst. H. Poincare Phys. Theor. 30 (1979) 159-192.
E. P. Osipov, "The Yukawa-2 Quantum Field Theory: Lorentz Invariance," Annals Phys. 125 (1980) 53-66.
P. Renouard, "Analyticity and Borel Summability of Schwinger Functions in the Two-Dimensional Yukawa Model. 1. Finite Volume Approximation," Ann. Inst. H. Poincare Phys. Theor. 27 (1977) 237-277.
P. Renouard, "Analyticity and Borel Summability of Schwinger Functions in the Two-Dimensional Yukawa Model. II. The 'Adiabatic Limit' (in French)," Ann. Inst. H. Poincare Phys. Theor. 31 (1979) 235-318.
J. Magnen and R. Seneor, "Yukawa Quantum Field Theory in Three Dimensions (Y_3)," Annals of the New York Academy of Sciences 337 no. 1, (1980) 13-43.
T. Balaban and K. Gawedzki, "A Low Temperature Expansion for the Pseudoscalar Yukawa Model of Quantum Fields in Two Space-time Dimensions," Ann. Inst. H. Poincare Phys. Theor. 36 (1982) 271.
A. Lesniewski, "Effective Action for the Yukawa(2) Quantum Field Theory," Commun. Math. Phys. 108 (1987) 437-467.
J. Frohlich and K. Osterwalder, "Is There a Euclidean Field Theory for Fermions," Helv. Phys. Acta 47 (1975) 781.
J. Frohlich and P. A. Marchetti, "Bosonization, Topological Solitons and Fractional Charges in Two-dimensional Quantum Field Theory," Commun. Math. Phys. 116 (1988) 127.
H. Nicolai, "A Possible Constructive Approach to Super φ^3 in Four-dimensions. 2. Regularization of the Model," Nucl. Phys. B 156 (1979) 157.
H. Nicolai, "On the Normalization of Schwinger Functions in the Euclidean Wess-Zumino Model," thesis, 1978.
A. M. Jaffe, A. Lesniewski, and J. Weitsman, "The Two-dimensional N = 2 Wess-Zumino Model on a Cylinder," Commun. Math. Phys. 114 (1988) 147.
A. M. Jaffe and A. Lesniewski, "A Priori Estimates for N = 2 Wess-Zumino Models on a Cylinder," Commun. Math. Phys. 114 (1988) 553-575.
A. M. Jaffe and A. Lesniewski, "Supersymmetric Quantum Fields and Infinite Dimensional Analysis," in NATO Advanced Summer Institute on Nonperturbative Quantum Field Theory (Cargese Summer Institute), 1987. https://lib-extopc.kek.jp/preprints/PDF/2000/0031/0031810.pdf.
A. M. Jaffe, A. Lesniewski, and J. Weitsman, "The Loop Space S(1) → R and Supersymmetric Quantum Fields," Annals Phys. 183 (1988) 337.
S. A. Janowsky and J. Weitsman, "The Phase structure of the two-dimensional N=2 Wess-Zumino model," Commun. Math. Phys. 142 (1991) 25-66.
D. Brydges, J. Fröhlich, and T. Spencer, "The random walk representation of classical spin systems and correlation inequalities," Communications in Mathematical Physics 83 no. 1, (1982) 123-150.
D. C. Brydges, J. Fröhlich, and A. D. Sokal, "The random-walk representation of classical spin systems and correlation inequalities. II. The skeleton inequalities," Communications in Mathematical Physics 91 no. 1, (1983) 117-139.
J. Frohlich, "Quantum Field Theory in Terms of Random Walks and Random Surfaces," NATO Sci. Ser. B 115 (1984) 169-233.
M. Aizenman, "The Intersection of Brownian Paths as a Case Study of a Renormalization Group Method for Quantum Field Theory," pp. 91-110. Springer Berlin Heidelberg, Berlin, Heidelberg, 1985.
J. T. Chayes, L. Chayes, and J. Frohlich, "The Low Temperature Behavior of Disordered Magnets," Commun. Math. Phys. 100 (1985) 399.
M. Aizenman, "Proof of the Triviality of φ^4_D Field Theory and Some Mean Field Features of Ising Models for D > 4," Phys. Rev. Lett. 47 (1981) 1-4.
M. Aizenman, "Geometric Analysis of φ^4 Fields and Ising Models (Parts 1 & 2)," Commun. Math. Phys. 86 (1982) 1.
J. Frohlich, "On the Triviality of λφ^4_d Theories and the Approach to the Critical Point in d ≥ 4 Dimensions," Nucl. Phys. B 200 (1982) 281-296.
R. Fernandez, J. Fröhlich, and A. D. Sokal, Random Walks, Critical Phenomena, and Triviality in Quantum Field Theory. Texts and Monographs in Physics. Springer, Berlin, Heidelberg, 1992.
A. D. Sokal, "An Alternate Constructive Approach to the φ^4_3 Quantum Field Theory, and a Possible Destructive Approach to φ^4_4," Ann. Inst. H. Poincare Phys. Theor. A 37 (1982) 317-398.
K. Gawedzki and A. Kupiainen, "Triviality of φ^4_4 and All That in a Hierarchical Model Approximation," J. Statist. Phys. 29 (1982) 683-698.
C. Aragao de Carvalho, S. Caracciolo, and J. Frohlich, "Polymers and gφ^4 Theory in Four-Dimensions," Nucl. Phys. B 215 (1983) 209-248.
On the renormalized coupling constant and the susceptibility in φ 4 4 field theory and the Ising model in four dimensions. M Aizenman, R Graham, 10.1016/0550-3213(83)90053-6Nuclear Physics B. 2252M. Aizenman and R. Graham, "On the renormalized coupling constant and the susceptibility in φ 4 4 field theory and the Ising model in four dimensions," Nuclear Physics B 225 no. 2, (1983) 261-288.
Masless Lattice φ 4 4 Theory: a Nonperturbative Control of a Renormalizable Model. K Gawedzki, A Kupiainen, 10.1103/PhysRevLett.54.92Phys. Rev. Lett. 54K. Gawedzki and A. Kupiainen, "Masless Lattice φ 4 4 Theory: a Nonperturbative Control of a Renormalizable Model," Phys. Rev. Lett. 54 (1985) 92-94.
Nontrivial Continuum Limit of a φ 4. K Gawedzki, A Kupiainen, K. Gawedzki and A. Kupiainen, "Nontrivial Continuum Limit of a φ 4
Model With Negative Coupling Constant. 10.1016/0550-3213(85)90359-1Nucl. Phys. B. 257Model With Negative Coupling Constant," Nucl. Phys. B 257 (1985) 474-504.
Massless lattice φ 4 4 theory: Rigorous control of a renormalizable asymptotically free model. K Gawedzki, A Kupiainen, 10.1007/BF01212281Communications in Mathematical Physics. 99K. Gawedzki and A. Kupiainen, "Massless lattice φ 4 4 theory: Rigorous control of a renormalizable asymptotically free model," Communications in Mathematical Physics 99 (1985) 197-252.
A rigorous control of logarithmic corrections in four-dimensional ϕ 4 spin systems. T Hara, H Tasaki, 10.1007/BF01009036Journal of Statistical Physics. 47T. Hara and H. Tasaki, "A rigorous control of logarithmic corrections in four-dimensional ϕ 4 spin systems," Journal of Statistical Physics 47 (1987) 99-121.
Construction and Borel summability of infrared Φ 4 4 by a phase space expansion. J Feldman, J Magnen, V Rivasseau, R Sénéor, Communications in Mathematical Physics. 1093J. Feldman, J. Magnen, V. Rivasseau, and R. Sénéor, "Construction and Borel summability of infrared Φ 4 4 by a phase space expansion," Communications in Mathematical Physics 109 no. 3, (1987) 437 -480.
Scaling limits and critical behaviour of the 4-dimensional n-component |ϕ| 4 spin model. R Bauerschmidt, D C Brydges, G Slade, arXiv:1403.7424math-phR. Bauerschmidt, D. C. Brydges, and G. Slade, "Scaling limits and critical behaviour of the 4-dimensional n-component |ϕ| 4 spin model," arXiv:1403.7424 [math-ph].
Marginal triviality of the scaling limits of critical 4D Ising and φ 4 4 models. M Aizenman, H Duminil-Copin, 10.4007/annals.2021.194.1.3arXiv:1912.07973Annals Math. 1941math-phM. Aizenman and H. Duminil-Copin, "Marginal triviality of the scaling limits of critical 4D Ising and φ 4 4 models," Annals Math. 194 no. 1, (2021) , arXiv:1912.07973 [math-ph].
A geometric perspective on the scaling limits of critical Ising and ϕ 4 d models. M Aizenman, arXiv:2112.04248122021math-phM. Aizenman, "A geometric perspective on the scaling limits of critical Ising and ϕ 4 d models," 12, 2021. arXiv:2112.04248 [math-ph].
Critical exponents in 3.99 dimensions. K G Wilson, M E Fisher, 10.1103/PhysRevLett.28.240Phys. Rev. Lett. 28K. G. Wilson and M. E. Fisher, "Critical exponents in 3.99 dimensions," Phys. Rev. Lett. 28 (1972) 240-243.
Quarks and Strings on a Lattice. K G Wilson, 13th International School of Subnuclear Physics: New Phenomena in Subnuclear Physics. 11. K. G. Wilson, "Quarks and Strings on a Lattice," in 13th International School of Subnuclear Physics: New Phenomena in Subnuclear Physics. 11, 1975.
Finite-lattice approximations to renormalization groups. T L Bell, K G Wilson, 10.1103/PhysRevB.11.3431Phys. Rev. B. 1193431T. L. Bell and K. G. Wilson, "Finite-lattice approximations to renormalization groups," Phys. Rev. B 11 no. 9, (1975) 3431.
Quantum Chromodynamics on a Lattice. K G Wilson, Cargese Summer Institute: New Developments in Quantum Field Theory and Statistical Mechanics. 1. K. G. Wilson, "Quantum Chromodynamics on a Lattice," in Cargese Summer Institute: New Developments in Quantum Field Theory and Statistical Mechanics. 1, 1977.
Canonical quantization of 1+1-dimensional Yang-Mills theory: An operator-algebraic approach. A Brothier, A Stottmeister, arXiv:1907.05549math-phA. Brothier and A. Stottmeister, "Canonical quantization of 1+1-dimensional Yang-Mills theory: An operator-algebraic approach," arXiv:1907.05549 [math-ph].
Higgs) 2,3 QUANTUM FIELDS IN A FINITE VOLUME. 1. A LOWER BOUND. T Balaban, 10.1007/BF01403506Commun. Math. Phys. 85T. Balaban, "(Higgs) 2,3 QUANTUM FIELDS IN A FINITE VOLUME. 1. A LOWER BOUND," Commun. Math. Phys. 85 (1982) 603-636.
Higgs) 2,3 QUANTUM FIELDS IN A FINITE VOLUME. 2. AN UPPER BOUND. T Balaban, 10.1007/BF01214890Commun. Math. Phys. 86T. Balaban, "(Higgs) 2,3 QUANTUM FIELDS IN A FINITE VOLUME. 2. AN UPPER BOUND," Commun. Math. Phys. 86 (1982) 555-594.
Higgs) 2,3 Quantum Field in a Finite Volume. 3. Renormalization. T Balaban, 10.1007/BF01213217Commun. Math. Phys. 88411T. Balaban, "(Higgs) 2,3 Quantum Field in a Finite Volume. 3. Renormalization," Commun. Math. Phys. 88 (1983) 411.
Propagators and Renormalization Transformations for Lattice Gauge Theories. I. T Balaban, 10.1007/BF01215753Commun. Math. Phys. 95T. Balaban, "Propagators and Renormalization Transformations for Lattice Gauge Theories. I," Commun. Math. Phys. 95 (1984) 17-40.
Regularity and Decay of Lattice Green's Functions. T Balaban, 10.1007/BF01214744Commun. Math. Phys. 89571T. Balaban, "Regularity and Decay of Lattice Green's Functions," Commun. Math. Phys. 89 (1983) 571.
Propagators and Renormalization Transformations for Lattice Gauge Theories. 2. T Balaban, 10.1007/BF01240221Commun. Math. Phys. 96223T. Balaban, "Propagators and Renormalization Transformations for Lattice Gauge Theories. 2.," Commun. Math. Phys. 96 (1984) 223.
The Mass Gap for Higgs Models on a Unit Lattice. T Balaban, J Imbrie, A M Jaffe, D Brydges, 10.1016/0003-4916(84)90121-0Annals Phys. 158281T. Balaban, J. Imbrie, A. M. Jaffe, and D. Brydges, "The Mass Gap for Higgs Models on a Unit Lattice," Annals Phys. 158 (1984) 281.
T Balaban, 10.1007/BF01466594Spaces of Regular Gauge Field Configurations on a Lattice and Gauge Fixing Conditions. 9975T. Balaban, "Spaces of Regular Gauge Field Configurations on a Lattice and Gauge Fixing Conditions," Commun. Math. Phys. 99 (1985) 75.
T Balaban, J Imbrie, A M Jaffe, 10.1007/BF01206191Renormalization of the Higgs Model: Minimizers, Propagators and the Stability of Mean Field Theory. 97299T. Balaban, J. Imbrie, and A. M. Jaffe, "Renormalization of the Higgs Model: Minimizers, Propagators and the Stability of Mean Field Theory," Commun. Math. Phys. 97 (1985) 299.
Averaging Operations for Lattice Gauge Theories. T Balaban, 10.1007/BF01211042Commun. Math. Phys. 98T. Balaban, "Averaging Operations for Lattice Gauge Theories," Commun. Math. Phys. 98 (1985) 17-51.
Propagators for Lattice Gauge Theories in a Background Field. T Balaban, 10.1007/BF01240355Commun. Math. Phys. 99389T. Balaban, "Propagators for Lattice Gauge Theories in a Background Field," Commun. Math. Phys. 99 (1985) 389.
Ultraviolet Stability of Three-Dimensional Lattice Pure Gauge Field Theories. T Balaban, 10.1007/BF01229380Commun. Math. Phys. 102255T. Balaban, "Ultraviolet Stability of Three-Dimensional Lattice Pure Gauge Field Theories," Commun. Math. Phys. 102 (1985) 255.
The Variational Problem and Background Fields in Renormalization Group Method for Lattice Gauge Theories. T Balaban, 10.1007/BF01229381Commun. Math. Phys. 102277T. Balaban, "The Variational Problem and Background Fields in Renormalization Group Method for Lattice Gauge Theories," Commun. Math. Phys. 102 (1985) 277.
Renormalization Group Approach to Lattice Gauge Fields Theories. 1. Generation of Effective Actions in a Small Fields Approximation and a Coupling Constant Renormalization in Four-dimensions. T Balaban, 10.1007/BF01215223Commun. Math. Phys. 109249T. Balaban, "Renormalization Group Approach to Lattice Gauge Fields Theories. 1. Generation of Effective Actions in a Small Fields Approximation and a Coupling Constant Renormalization in Four-dimensions," Commun. Math. Phys. 109 (1987) 249.
Effective Action and Cluster Properties of the Abelian Higgs Model. T Balaban, J Z Imbrie, A M Jaffe, 10.1007/BF01225038Commun. Math. Phys. 114257T. Balaban, J. Z. Imbrie, and A. M. Jaffe, "Effective Action and Cluster Properties of the Abelian Higgs Model," Commun. Math. Phys. 114 (1988) 257.
Renormalization group approach to lattice gauge field theories. II. Cluster expansions. T Balaban, Communications in Mathematical Physics. 1161T. Balaban, "Renormalization group approach to lattice gauge field theories. II. Cluster expansions," Communications in Mathematical Physics 116 no. 1, (1988) 1 -22.
Convergent Renormalization Expansions for Lattice Gauge Theories. T Balaban, 10.1007/BF01217741Commun. Math. Phys. 119T. Balaban, "Convergent Renormalization Expansions for Lattice Gauge Theories," Commun. Math. Phys. 119 (1988) 243-285.
Block Averaging Renormalization Group for Lattice and Continuum Euclidean Fermions: Expected and Unexpected Results. T Balaban, M O'carroll, R Schor, 10.1007/BF00401587Lett. Math. Phys. 17T. Balaban, M. O'Carroll, and R. Schor, "Block Averaging Renormalization Group for Lattice and Continuum Euclidean Fermions: Expected and Unexpected Results," Lett. Math. Phys. 17 (1989) 209-214.
Large Field Renormalization. 1: The Basic Step of the R Operation. T Balaban, 10.1007/BF01257412Commun. Math. Phys. 122T. Balaban, "Large Field Renormalization. 1: The Basic Step of the R Operation," Commun. Math. Phys. 122 (1989) 175-202.
Large Field Renormalization. 2: Localization, Exponentiation, and Bounds for the R Operation. T Balaban, 10.1007/BF01238433Commun. Math. Phys. 122T. Balaban, "Large Field Renormalization. 2: Localization, Exponentiation, and Bounds for the R Operation," Commun. Math. Phys. 122 (1989) 355-392.
The Renormalization Group According to Balaban, I. Small fields. J Dimock, 10.1142/S0129055X13300100arXiv:1108.1335Rev. Math. Phys. 2571330010math-phJ. Dimock, "The Renormalization Group According to Balaban, I. Small fields," Rev. Math. Phys. 25 no. 7, (2013) 1330010, arXiv:1108.1335 [math-ph].
The Renormalization Group According to Balaban -II. Large fields. J Dimock, 10.1063/1.4821275arXiv:1212.5562J. Math. Phys. 54992301math-phJ. Dimock, "The Renormalization Group According to Balaban -II. Large fields," J. Math. Phys. 54 no. 9, (2013) 092301, arXiv:1212.5562 [math-ph].
The Renormalization Group According to Balaban III. Convergence. J Dimock, 10.1007/s00023-013-0303-3arXiv:1304.0705Annales Henri Poincare. 1511math-phJ. Dimock, "The Renormalization Group According to Balaban III. Convergence," Annales Henri Poincare 15 no. 11, (2014) 2133-2175, arXiv:1304.0705 [math-ph].
Nonperturbative renormalization of scalar quantum electrodynamics in d=3. J Dimock, 10.1063/1.4933224arXiv:1502.02946J. Math. Phys. 5610102304math-phJ. Dimock, "Nonperturbative renormalization of scalar quantum electrodynamics in d=3," J. Math. Phys. 56 no. 10, (2015) 102304, arXiv:1502.02946 [math-ph].
Ultraviolet regularity for QED in d=3. J Dimock, 10.1063/1.5009458arXiv:1512.04373J. Math. Phys. 59112301math-phJ. Dimock, "Ultraviolet regularity for QED in d=3," J. Math. Phys. 59 no. 1, (2018) 012301, arXiv:1512.04373 [math-ph].
Multiscale block averaging for QED in d = 3. J Dimock, 10.1063/1.5134439arXiv:1712.10029J. Math. Phys. 61332302math-phJ. Dimock, "Multiscale block averaging for QED in d = 3," J. Math. Phys. 61 no. 3, (2020) 032302, arXiv:1712.10029 [math-ph].
Construction of YM(4) with an infrared cutoff. J Magnen, V Rivasseau, R Seneor, 10.1007/BF02097397Commun. Math. Phys. 155J. Magnen, V. Rivasseau, and R. Seneor, "Construction of YM(4) with an infrared cutoff," Commun. Math. Phys. 155 (1993) 325-384.
Quantum Yang-Mills theory. A M Jaffe, E Witten, A. M. Jaffe and E. Witten, "Quantum Yang-Mills theory,". https://www.claymath.org/sites/default/files/yangmills.pdf.
Report on the Status of the Yang-Mills Millenium Prize Problem. M R Douglas, M. R. Douglas, "Report on the Status of the Yang-Mills Millenium Prize Problem,". http://www.claymath.org/sites/default/files/ym2.pdf.
An Algebraic approach to quantum field theory. R Haag, D Kastler, 10.1063/1.1704187J. Math. Phys. 5R. Haag and D. Kastler, "An Algebraic approach to quantum field theory," J. Math. Phys. 5 (1964) 848-861.
Einführung in die axiomatische quantenfeldtheorie. H Araki, H. Araki, "Einführung in die axiomatische quantenfeldtheorie.".
Quantum field theories with composite particles and asymptotic conditions. R Haag, 10.1103/PhysRev.112.669Phys. Rev. 112R. Haag, "Quantum field theories with composite particles and asymptotic conditions," Phys. Rev. 112 (1958) 669-673.
Postulates for General Quantum Mechanics. I E Segal, 10.2307/1969387Annals of Mathematics. 484I. E. Segal, "Postulates for General Quantum Mechanics," Annals of Mathematics 48 no. 4, (1947) 930-948.
Mathematical Problems of Relativistic Physics. I Segal, 10.1002/zamm.19630431215ZAMM -Journal of Applied Mathematics and Mechanics / Zeitschrift für Angewandte Mathematik und Mechanik. 4312I. Segal, "Mathematical Problems of Relativistic Physics," ZAMM -Journal of Applied Mathematics and Mechanics / Zeitschrift für Angewandte Mathematik und Mechanik 43 no. 12, (1963) 572-572.
R Haag, 10.1007/978-3-642-61458-3Local Quantum Physics: Fields, Particles, Algebras. Texts and Monographs in Physics. Berlin, HeidelbergSpringerR. Haag, Local Quantum Physics: Fields, Particles, Algebras. Texts and Monographs in Physics. Springer, Berlin, Heidelberg, 1996.
H Araki, Mathematical theory of quantum fields. OxfordOxford University Press101H. Araki, Mathematical theory of quantum fields., vol. 101. Oxford: Oxford University Press, 1999.
Causal nets of operator algebras: Mathematical aspects of algebraic quantum field theory. H Baumgaertel, M Wollenberg, Akademie VerlagH. Baumgaertel and M. Wollenberg, Causal nets of operator algebras: Mathematical aspects of algebraic quantum field theory. Akademie Verlag, 1992.
Operator algebraic methods in quantum field theory: A Series of lectures. H Baumgaertel, Akademie VerlagH. Baumgaertel, Operator algebraic methods in quantum field theory: A Series of lectures. Akademie Verlag, 1995.
S Horuzhy, Introduction to Algebraic Quantum Field Theory. Kluwer Academic PublishersS. Horuzhy, Introduction to Algebraic Quantum Field Theory. Kluwer Academic Publishers, 1990.
G G Emch, Algebraic Methods in Statistical Mechanics and Quantum Field Theory. Interscience Monographs and Texts in Physics and Astronomy. R.E. MarshakG. G. Emch, Algebraic Methods in Statistical Mechanics and Quantum Field Theory. Interscience Monographs and Texts in Physics and Astronomy (Edited by R.E. Marshak).
. Wiley-Interscience, Wiley-Interscience, 1972.
H J Borchers, 10.1007/978-3-540-49954-1Translation group and particle representations in quantum field theory. 40H. J. Borchers, Translation group and particle representations in quantum field theory, vol. 40. 1996.
Operator algebras and quantum statistical mechanics, I. O Bratteli, D Robinson, 10.1007/978-3-662-02520-8SpringerO. Bratteli and D. Robinson, Operator algebras and quantum statistical mechanics, I. Springer, 1979.
Operator algebras and quantum statistical mechanics II. O Bratteli, D Robinson, 10.1007/978-3-662-09089-3SpringerO. Bratteli and D. Robinson, Operator algebras and quantum statistical mechanics II. Springer, 1981.
Von Neumann Algebras. J Dixmier, North HollandJ. Dixmier, Von Neumann Algebras. North Holland, 1981, 2011.
. J Dixmier, C *-Algebras, North HollandJ. Dixmier, C*-Algebras. North Holland, 2011.
Fundamentals of the Theory of Operator Algebras. R V Kadison, J R Ringrose, American Mathematical SocietyVolumes I-IVR. V. Kadison and J. R. Ringrose, Fundamentals of the Theory of Operator Algebras. American Mathematical Society, Volumes I-IV, 1997.
Normed Algebras. M A Naimark, SpringerNetherlands1 ed.M. A. Naimark, Normed Algebras. Springer Netherlands, 1 ed., 1972.
G K Pedersen, C*-algebras and their automorphism groups. Academic Press2 ed.G. K. Pedersen, C*-algebras and their automorphism groups. Academic Press, 2 ed., 2018.
. S Sakai, C *-Algebras, W*-Algebras , Classics in Mathematics. Springerspringer ed.S. Sakai, C*-Algebras and W*-Algebras. Classics in Mathematics. Springer, springer ed., 1997.
S Sakai, Operator algebras in dynamical systems. Encyclopedia of Mathematics and its Applications. Cambridge University PressS. Sakai, Operator algebras in dynamical systems. Encyclopedia of Mathematics and its Applications. Cambridge University Press, 1991.
An invitation to von Neumann algebras. V Sunder, Springer-VerlagUniversitext1 ed.V. Sunder, An invitation to von Neumann algebras. Universitext. Springer-Verlag, 1 ed., 1987.
S V Stratila, L Zsido, 10.1017/9781108654975Lectures on von Neumann Algebras. Cambridge IISc Series. Cambridge University Press2 ed.S. V. Stratila and L. Zsido, Lectures on von Neumann Algebras. Cambridge IISc Series. Cambridge University Press, 2 ed., 2019.
S V Stratila, Modular Theory in Operator Algebras. Cambridge Univ Press US2 ed.S. V. Stratila, Modular Theory in Operator Algebras. Cambridge Univ Press US, 2 ed., 2021.
Theory of operator algebras I. Encyclopaedia of mathematical sciences, Operator algebras and non-commutative geometry. M Takesaki, SpringerM. Takesaki, Theory of operator algebras I. Encyclopaedia of mathematical sciences, Operator algebras and non-commutative geometry. Springer, 2001.
Theory of Operator Algebras II. M Takesaki, Enc.Math.Sci. 125Springer1 ed.M. Takesaki, Theory of Operator Algebras II. Enc.Math.Sci.125. Springer, 1 ed., 2002.
Theory of Operator Algebras III. M Takesaki, Enc.Math.Sci. 127Springer1 ed.M. Takesaki, Theory of Operator Algebras III. Enc.Math.Sci.127. Springer, 1 ed., 2002.
Von Neumann Algebras. V F Jones, V. F. Jones, Von Neumann Algebras. 2009. https://math.berkeley.edu/~vfr/VonNeumann2009.pdf.
. D E Evans, M Takesaki, Operator Algebras and Applications. 1Structure TheoryD. E. Evans and M. Takesaki, Operator Algebras and Applications: Volume 1, Structure Theory;
. K-Theory, Topology Geometry, Cambridge University PressK-theory, Geometry and Topology. London Mathematical Society Lecture Note Series. Cambridge University Press, 1989.
D E Evans, M Takesaki, Operator Algebras and Applications. Cambridge University Press2D. E. Evans and M. Takesaki, Operator Algebras and Applications: Volume 2. London Mathematical Society Lecture Note Series. Cambridge University Press, 1989.
Tomita-Takesaki Theory in Algebras of Unbounded Operators. A Inoue, Lecture Notes in Mathematics. Springer-Verlag1 ed.A. Inoue, Tomita-Takesaki Theory in Algebras of Unbounded Operators. Lecture Notes in Mathematics. Springer-Verlag Berlin Heidelberg, 1 ed., 1998.
Mathematical Physics Studies. 10.1007/978-3-319-21353-8Advances in Algebraic Quantum Field Theory. Springer International Publishing"Advances in Algebraic Quantum Field Theory," Mathematical Physics Studies. Springer International Publishing, 2015.
C J Fewster, K Rejzner, arXiv:1904.04051Algebraic Quantum Field Theory -an introduction. hep-thC. J. Fewster and K. Rejzner, "Algebraic Quantum Field Theory -an introduction," arXiv:1904.04051 [hep-th].
Algebraic quantum field theory. H Halvorson, M Muger, 10.1016/B978-044451560-5/50011-7arXiv:math-ph/0602036Philosophy of physics. J. Butterfield and J. EarmanH. Halvorson and M. Muger, "Algebraic quantum field theory," in Philosophy of physics, J. Butterfield and J. Earman, eds., pp. 731-864. 2007. arXiv:math-ph/0602036.
K Rejzner, 10.1007/978-3-319-25901-7Perturbative Algebraic Quantum Field Theory: An Introduction for Mathematicians. Mathematical Physics Studies. Springer International PublishingK. Rejzner, Perturbative Algebraic Quantum Field Theory: An Introduction for Mathematicians. Mathematical Physics Studies. Springer International Publishing, 2016.
M Dütsch, From Classical Field Theory to Perturbative Quantum Field Theory. M. Dütsch, "From Classical Field Theory to Perturbative Quantum Field Theory,".
Entanglement Measures and Their Properties in Quantum Field Theory. S Hollands, K Sanders, S. Hollands and K. Sanders, "Entanglement Measures and Their Properties in Quantum Field Theory,".
Haag-kastler axioms. "Haag-kastler axioms." http://ncatlab.org/nlab/show/Haag-Kastler+axioms.
Causal complement. "Causal complement." http://ncatlab.org/nlab/show/causal+complement.
On the imbedding of normed rings into the ring of operators in Hilbert space. I Gelfand, M Neumark, Rec. Math. [Mat. Sbornik] N.S. 1254I. Gelfand and M. Neumark, "On the imbedding of normed rings into the ring of operators in Hilbert space," Rec. Math. [Mat. Sbornik] N.S. 12(54) (1943) 197-217.
Irreducible representations of operator algebras. I Segal, 10.1090/S0002-9904-1947-08742-5Bull. Amer. Math. Soc. 53I. Segal, "Irreducible representations of operator algebras," Bull. Amer. Math. Soc. 53 (1947) .
Zur Algebra der Funktionaloperationen und Theorie der normalen Operatoren. J Neumann, 10.1007/BF01782352Mathematische Annalen. 1021J. von Neumann, "Zur Algebra der Funktionaloperationen und Theorie der normalen Operatoren," Mathematische Annalen 102 no. 1, (1930) 370-427.
Haag-kastler vacuum representation. "Haag-kastler vacuum representation." https://ncatlab.org/nlab/show/Haag-Kastler+vacuum+representation.
On Rings of Operators. F J Murray, J Von Neumann, 10.2307/1968693Annals of Mathematics. 371F. J. Murray and J. von Neumann, "On Rings of Operators," Annals of Mathematics 37 no. 1, (1936) 116-229.
On rings of operators. II. F J Murray, J Von Neumann, 10.1090/S0002-9947-1937-1501899-4Trans. Amer. Math. Soc. 41F. J. Murray and J. von Neumann, "On rings of operators. II," Trans. Amer. Math. Soc. 41 (1937) 208-248.
On infinite direct products. J Neumann, Compositio Mathematica. 6J. von Neumann, "On infinite direct products," Compositio Mathematica 6 (1939) 1-77. http://eudml.org/doc/88704.
On Rings of Operators. III. J Neumann, 10.2307/1968823Annals of Mathematics. 411J. von Neumann, "On Rings of Operators. III," Annals of Mathematics 41 no. 1, (1940) 94-161.
On Rings of Operators. IV. F J Murray, J Von Neumann, 10.2307/1969107Annals of Mathematics. 444F. J. Murray and J. von Neumann, "On Rings of Operators. IV," Annals of Mathematics 44 no. 4, (1943) 716-808.
On Some Algebraical Properties of Operator Rings. J Neumann, 10.2307/1969106Annals of Mathematics. 444J. von Neumann, "On Some Algebraical Properties of Operator Rings," Annals of Mathematics 44 no. 4, (1943) 709-715.
On Rings of Operators. Reduction Theory. J V Neumann, 10.2307/1969463Annals of Mathematics. 502J. V. Neumann, "On Rings of Operators. Reduction Theory," Annals of Mathematics 50 no. 2, (1949) 401-485.
Rings of Operators. J , Von Neumann, Collected Works. IIIPergamon PressJ. Von Neumann, "Collected Works, Volume III: Rings of Operators," Pergamon Press, 1961.
Type of von Neumann Algebra Associated with Free Field. H Araki, 10.1143/PTP.32.956Progress of Theoretical Physics. 326H. Araki, "Type of von Neumann Algebra Associated with Free Field," Progress of Theoretical Physics 32 no. 6, (12, 1964) 956-965.
Algebraic And Modular Structure of Von Neumann Algebras of Physics. R Longo, 38R. Longo, "Algebraic And Modular Structure of Von Neumann Algebras of Physics," vol. 38. 1982.
On the modular structure of local algebras of observables. K Fredenhagen, Communications in Mathematical Physics. 971-2K. Fredenhagen, "On the modular structure of local algebras of observables," Communications in Mathematical Physics 97 no. 1-2, (1985) 79 -89.
Quasi-standard von neumann algebras. M Tomita, M. Tomita, "Quasi-standard von neumann algebras," 1967.
Tomita's Theory of Modular Hilbert Algebras and its Applications. M Takesaki, 10.1007/bfb0065832Lecture Notes in Mathematics. Springer-VerlagM. Takesaki, Tomita's Theory of Modular Hilbert Algebras and its Applications. Lecture Notes in Mathematics. Springer-Verlag, 1970.
On revolutionizing quantum field theory with Tomita's modular theory. H J Borchers, 10.1063/1.533323J. Math. Phys. 41H. J. Borchers, "On revolutionizing quantum field theory with Tomita's modular theory," J. Math. Phys. 41 (2000) 3604-3673.
S J Summers, arXiv:math-ph/0511034Tomita-Takesaki modular theory. S. J. Summers, "Tomita-Takesaki modular theory," arXiv:math-ph/0511034.
On the Equilibrium states in quantum statistical mechanics. R Haag, N M Hugenholtz, M Winnink, 10.1007/BF01646342Commun. Math. Phys. 5R. Haag, N. M. Hugenholtz, and M. Winnink, "On the Equilibrium states in quantum statistical mechanics," Commun. Math. Phys. 5 (1967) 215-236.
Local normality in quantum statistical mechanics. M Takesaki, M Winnink, 10.1007/BF01645976Commun. Math. Phys. 30M. Takesaki and M. Winnink, "Local normality in quantum statistical mechanics," Commun. Math. Phys. 30 (1973) 129-152.
Modular groups of quantum fields in thermal states. H J Borchers, J Yngvason, 10.1063/1.532678arXiv:math-ph/9805013J. Math. Phys. 40H. J. Borchers and J. Yngvason, "Modular groups of quantum fields in thermal states," J. Math. Phys. 40 (1999) 601-624, arXiv:math-ph/9805013.
APS Medal for Exceptional Achievement in Research: Invited article on entanglement properties of quantum field theory. E Witten, 10.1103/RevModPhys.90.045003arXiv:1803.04993Rev. Mod. Phys. 90445003hep-thE. Witten, "APS Medal for Exceptional Achievement in Research: Invited article on entanglement properties of quantum field theory," Rev. Mod. Phys. 90 no. 4, (2018) 045003, arXiv:1803.04993 [hep-th].
The intrinsic parity of elementary particles. G C Wick, A S Wightman, E P Wigner, 10.1103/PhysRev.88.101Phys. Rev. 88G. C. Wick, A. S. Wightman, and E. P. Wigner, "The intrinsic parity of elementary particles," Phys. Rev. 88 (1952) 101-105.
Superselection rule for charge. G C Wick, A S Wightman, E P Wigner, 10.1103/PhysRevD.1.3267Phys. Rev. D. 1G. C. Wick, A. S. Wightman, and E. P. Wigner, "Superselection rule for charge," Phys. Rev. D 1 (1970) 3267-3269.
Outline of Axiomatic Relativistic Quantum Field Theory. R F Streater, 10.1088/0034-4885/38/7/001Rept. Prog. Phys. 38R. F. Streater, "Outline of Axiomatic Relativistic Quantum Field Theory," Rept. Prog. Phys. 38 (1975) 771-846.
Local observables and particle statistics. 1. S Doplicher, R Haag, J E Roberts, 10.1007/BF01877742Commun. Math. Phys. 23S. Doplicher, R. Haag, and J. E. Roberts, "Local observables and particle statistics. 1," Commun. Math. Phys. 23 (1971) 199-230.
Local observables and particle statistics. 2. S Doplicher, R Haag, J E Roberts, 10.1007/BF01646454Commun. Math. Phys. 35S. Doplicher, R. Haag, and J. E. Roberts, "Local observables and particle statistics. 2," Commun. Math. Phys. 35 (1974) 49-85.
Quantization and Superselection Sectors. 1. Transformation Group C* Algebras. N P Landsman, 10.1142/S0129055X9000003XRev. Math. Phys. 2N. P. Landsman, "Quantization and Superselection Sectors. 1. Transformation Group C* Algebras," Rev. Math. Phys. 2 (1990) 45-72.
Quantization and Superselection Sectors. 2. Dirac Monopole and Aharonov-Bohm Effect. N P Landsman, 10.1142/S0129055X90000041Rev. Math. Phys. 2N. P. Landsman, "Quantization and Superselection Sectors. 2. Dirac Monopole and Aharonov-Bohm Effect," Rev. Math. Phys. 2 (1990) 73-104.
Algebraic theory of superselection sectors and the measurement problem in quantum mechanics. N P Landsman, 10.1142/S0217751X91002513Int. J. Mod. Phys. A. 6N. P. Landsman, "Algebraic theory of superselection sectors and the measurement problem in quantum mechanics," Int. J. Mod. Phys. A 6 (1991) 5349-5372.
Entanglement entropy and superselection sectors. Part I. Global symmetries. H Casini, M Huerta, J M Magán, D Pontello, 10.1007/JHEP02(2020)014arXiv:1905.10487JHEP. 0214hep-thH. Casini, M. Huerta, J. M. Magán, and D. Pontello, "Entanglement entropy and superselection sectors. Part I. Global symmetries," JHEP 02 (2020) 014, arXiv:1905.10487 [hep-th].
An Algebraic spin and statistics theorem. D Guido, R Longo, 10.1007/BF02101806arXiv:funct-an/9406005Commun. Math. Phys. 172517D. Guido and R. Longo, "An Algebraic spin and statistics theorem," Commun. Math. Phys. 172 (1995) 517, arXiv:funct-an/9406005.
R. Verch, "A spin statistics theorem for quantum fields on curved space-time manifolds in a generally covariant framework," Commun. Math. Phys. 223 (2001) 261-288, arXiv:math-ph/0102035.
H. J. Borchers and J. Yngvason, "On the PCT theorem in the theory of local observables," Fields Inst. Commun. 30 (2001) 39-64, arXiv:math-ph/0012020.
T. Johnson-Freyd, "Spin, statistics, orientations, unitarity," Algebr. Geom. Topol. 17 no. 2, (2017) 917-956, arXiv:1507.06297 [math-ph].
D. Kastler, D. W. Robinson, and A. Swieca, "Conserved currents and associated symmetries; Goldstone's theorem," Commun. Math. Phys. 2 no. 1, (1966) 108-120.
H. Ezawa and J. A. Swieca, "Spontaneous Breakdown of Symmetries and Zero-Mass States," Commun. Math. Phys. 5 (1967) 330-336.
H. J. Borchers and W. Zimmermann, "On the self-adjointness of field operators," Il Nuovo Cimento 31 (1964) 1047-1059.
W. Driessler, S. J. Summers, and E. H. Wichmann, "On the Connection Between Quantum Fields and Von Neumann Algebras of Local Operators," Commun. Math. Phys. 105 (1986) 49-84.
H. J. Borchers and J. Yngvason, "Positivity of Wightman Functionals and the Existence of Local Nets," Commun. Math. Phys. 127 (1990) 607.
H. J. Borchers and J. Yngvason, "Local nets and selfadjoint extensions of quantum field operators," Lett. Math. Phys. 21 (1991) 151-155.
H. J. Borchers and J. Yngvason, "From quantum fields to local von Neumann algebras," Rev. Math. Phys. 4 no. spec01, (1992) 15-47.
K. Fredenhagen and J. Hertel, "Local Algebras of Observables and Point-Like Localized Fields," Commun. Math. Phys. 80 (1981) 555.
S. J. Summers, "From Algebras of Local Observables to Quantum Fields: Generalized H Bonds," Helv. Phys. Acta 60 (1987) 1004-1023.
K. Fredenhagen and K. Rejzner, "Perturbative algebraic quantum field theory," in Winter School in Mathematical Physics: Mathematical Aspects of Quantum Field Theory, pp. 17-55. Springer, 2012. arXiv:1208.1428 [math-ph].
K. Fredenhagen and K. Rejzner, "Perturbative Construction of Models of Algebraic Quantum Field Theory," pp. 31-74. 2015. arXiv:1503.07814 [math-ph].
M. Dütsch, From Classical Field Theory to Perturbative Quantum Field Theory, vol. 74 of Progress in Mathematical Physics. Springer, 2019.
U. Schreiber, "Mathematical Quantum Field Theory," 2017. http://ncatlab.org/nlab/files/Schreiber_pQFT20211103.pdf.
"Perturbative algebraic quantum field theory." http://ncatlab.org/nlab/show/perturbative%20algebraic%20quantum%20field%20theory.
E. C. G. Stueckelberg, "Relativistic quantum theory for finite time intervals," Phys. Rev., II. Ser. 81 (1951) 130-133.
E. C. G. Stueckelberg and D. Rivier, "Causalité et structure de la matrice S," Helv. Phys. Acta 23 (1950) 215-222.
E. C. G. Stueckelberg and A. Petermann, "Normalization of constants in the quanta theory," Helv. Phys. Acta 26 (1953) 499-520.
A. Salam, "Overlapping divergences and the S-matrix," Phys. Rev., II. Ser. 82 (1951) 217-227.
N. N. Bogoliubow and O. S. Parasiuk, "Über die Multiplikation der Kausalfunktionen in der Quantentheorie der Felder," Acta Math. 97 (1957) 227-266.
H. Epstein and V. Glaser, "The role of locality in perturbation theory," Annales de l'I.H.P. Physique théorique 19 no. 3, (1973) 211-295.
V. A. Ilyin and D. A. Slavnov, "Observable Algebras in the S Matrix Approach," Teor. Mat. Fiz. 36 (1978) 32-41.
G. Scharf, Finite Quantum Electrodynamics. Texts and Monographs in Physics. Springer-Verlag, 1989.
G. Scharf, Finite Quantum Electrodynamics: The Causal Approach. Dover, 3rd ed., 2014.
G. Scharf, Gauge Field Theories: Spin One and Spin Two: 100 Years After General Relativity. Dover Publications, 2016.
A. Aste, C. von Arx, and G. Scharf, "Regularization in quantum field theory from the causal point of view," Prog. Part. Nucl. Phys. 64 (2010) 61-119, arXiv:0906.1952 [hep-th].
M. Dütsch and K. Fredenhagen, "A Local (perturbative) construction of observables in gauge theories: The Example of QED," Commun. Math. Phys. 203 (1999) 71-105, arXiv:hep-th/9807078.
M. Dütsch and K. Fredenhagen, "Algebraic quantum field theory, perturbation theory, and the loop expansion," Commun. Math. Phys. 219 (2001) 5-30, arXiv:hep-th/0001129.
M. Dütsch and K. Fredenhagen, "The Master Ward Identity and generalized Schwinger-Dyson equation in classical field theory," Commun. Math. Phys. 243 (2003) 275-314, arXiv:hep-th/0211242.
S. Hollands, "Algebraic Approach to the 1/N Expansion in Quantum Field Theory," Rev. Math. Phys. 16 no. 04, (2003) 509-558.
R. Brunetti and K. Fredenhagen, "Algebraic approach to quantum field theory," arXiv:math-ph/0411072.
M. Dütsch and K. Fredenhagen, "Causal perturbation theory in terms of retarded products, and a proof of the action Ward identity," Rev. Math. Phys. 16 (2004) 1291-1348, arXiv:hep-th/0403213.
M. Dütsch and K. Fredenhagen, "Action Ward identity and the Stuckelberg-Petermann renormalization group," Prog. Math. 251 (2007) 113-124, arXiv:hep-th/0501228.
F. Brennecke and M. Dütsch, "Removal of violations of the Master Ward Identity in perturbative QFT," Rev. Math. Phys. 20 (2008) 119-172, arXiv:0705.3160 [hep-th].
K. J. Keller, Dimensional Regularization in Position Space and a Forest Formula for Regularized Epstein-Glaser Renormalization. PhD thesis, Hamburg U., 2010. arXiv:1006.2148 [math-ph].
K. Rejzner, "Fermionic fields in the functional approach to classical field theory," Rev. Math. Phys. 23 (2011) 1009-1033, arXiv:1101.5126 [math-ph].
M. Dütsch, K. Fredenhagen, K. J. Keller, and K. Rejzner, "Dimensional Regularization in Position Space, and a Forest Formula for Epstein-Glaser Renormalization," J. Math. Phys. 55 (2014) 122303, arXiv:1311.5424 [hep-th].
S. Crawford, K. Rejzner, and B. Vicedo, "Lorentzian 2D CFT from the pAQFT Perspective," Annales Henri Poincare 23 no. 10, (2022) 3525-3585, arXiv:2107.12347 [math-ph].
S. Crawford, K. Rejzner, and B. Vicedo, "Chirality in 2d pAQFT," arXiv:2205.01003 [math-ph].
R. Brunetti, M. Dütsch, and K. Fredenhagen, "Perturbative Algebraic Quantum Field Theory and the Renormalization Groups," Adv. Theor. Math. Phys. 13 no. 5, (2009) 1541-1599, arXiv:0901.2038 [math-ph].
R. Stora, "Renormalized Perturbation Theory: A Missing Chapter," Int. J. Geom. Meth. Mod. Phys. 5 (2008) 1345-1360, arXiv:0901.3426 [hep-th].
K. Fredenhagen and K. Rejzner, "Batalin-Vilkovisky formalism in the functional approach to classical field theory," Commun. Math. Phys. 314 (2012) 93-127, arXiv:1101.5112 [math-ph].
K. Fredenhagen and K. Rejzner, "Batalin-Vilkovisky Formalism in Perturbative Algebraic Quantum Field Theory," Commun. Math. Phys. 317 (2011) 697-725, arXiv:1110.5232.
K. A. Rejzner, Batalin-Vilkovisky formalism in locally covariant field theory. PhD thesis, Hamburg U., 2011. arXiv:1111.5130 [math-ph].
K. Rejzner, "Remarks on Local Symmetry Invariance in Perturbative Algebraic Quantum Field Theory," Annales Henri Poincare 16 no. 1, (2015) 205-238, arXiv:1301.7037 [math-ph].
K. Rejzner, "BV quantization in perturbative algebraic QFT: Fundamental concepts and perspectives," 2020. arXiv:2004.14272 [math-ph].
I. A. Batalin and G. A. Vilkovisky, "Gauge Algebra and Quantization," Phys. Lett. B 102 (1981) 27-31.
I. A. Batalin and G. A. Vilkovisky, "Quantization of Gauge Theories with Linearly Dependent Generators," Phys. Rev. D 28 (1983) 2567-2582. [Erratum: Phys. Rev. D 30, 508 (1984)].
J. Dito, "Star product approach to quantum field theory: The free scalar field," Lett. Math. Phys. 20 (1990) 125-134.
M. Dütsch and K. Fredenhagen, "Perturbative algebraic field theory, and deformation quantization," Fields Inst. Commun. 30 (2001) 151-160, arXiv:hep-th/0101079.
A. C. Hirshfeld and P. Henselder, "Star products and perturbative quantum field theory," Annals Phys. 298 (2002) 382-393, arXiv:hep-th/0208194.
G. Collini, "Fedosov Quantization and Perturbative Quantum Field Theory," arXiv:1603.09626 [math-ph].
E. Hawkins and K. Rejzner, "The star product in interacting quantum field theory," Lett. Math. Phys. 110 (2020) 1257-1313, arXiv:1612.09157.
L. Parker, "Quantized fields and particle creation in expanding universes. 1.," Phys. Rev. 183 (1969) 1057-1068.
S. W. Hawking, "Particle Creation by Black Holes," Commun. Math. Phys. 43 (1975) 199-220. [Erratum: Commun. Math. Phys. 46, 206 (1976)].
Y. B. Zeldovich and A. A. Starobinsky, "Particle production and vacuum polarization in an anisotropic gravitational field," Zh. Eksp. Teor. Fiz. 61 (1971) 2161-2175.
A. A. Starobinsky, "Amplification of waves reflected from a rotating 'black hole'," Sov. Phys. JETP 37 no. 1, (1973) 28-32.
W. G. Unruh, "Second quantization in the Kerr metric," Phys. Rev. D 10 (1974) 3194-3205.
S. A. Fulling, "Nonuniqueness of canonical field quantization in Riemannian space-time," Phys. Rev. D 7 (1973) 2850-2862.
P. C. W. Davies, "Scalar particle production in Schwarzschild and Rindler metrics," J. Phys. A 8 (1975) 609-616.
W. G. Unruh, "Notes on black hole evaporation," Phys. Rev. D 14 (1976) 870.
F. J. Dyson, "Missed opportunities," Bull. Am. Math. Soc. 78 (1972) 635-639.
A. Ashtekar and A. Magnon, "Quantum Fields in Curved Space-Times," Proc. Roy. Soc. Lond. A 346 (1975) 375-394.
J. Dimock, "Algebras of local observables on a manifold," Commun. Math. Phys. 77 (1980) 219-228.
J. Dimock, "Dirac quantum fields on a manifold," Trans. Amer. Math. Soc. 269 (1982) 133-147.
G. L. Sewell, "Quantum fields on manifolds: PCT and gravitationally induced thermal states," Annals Phys. 141 (1982) 201-224.
B. S. Kay, "Linear Spin 0 Quantum Fields in External Gravitational and Scalar Fields. 1. A One Particle Structure for the Stationary Case," Commun. Math. Phys. 62 (1978) 55-70.
B. S. Kay, "The Double Wedge Algebra for Quantum Fields on Schwarzschild and Minkowski Space-times," Commun. Math. Phys. 100 (1985) 57.
S. A. Fulling, M. Sweeny, and R. M. Wald, "Singularity Structure of the Two Point Function in Quantum Field Theory in Curved Space-Time," Commun. Math. Phys. 63 (1978) 257-264.
S. A. Fulling, F. J. Narcowich, and R. M. Wald, "Singularity Structure of the Two Point Function in Quantum Field Theory in Curved Space-time. II," Annals Phys. 136 (1981) 243-272.
R. M. Wald, Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics. Univ. of Chicago Press, Chicago, IL, 1994.
M. Radzikowski, The Hadamard condition and Kay's conjecture in (axiomatic) quantum field theory on curved space-time. PhD thesis, Princeton University, 1992.
M. J. Radzikowski, "Micro-local approach to the Hadamard condition in quantum field theory on curved space-time," Commun. Math. Phys. 179 (1996) 529-553.
M. J. Radzikowski and R. Verch, "A Local to global singularity theorem for quantum field theory on curved space-time," Commun. Math. Phys. 180 (1996) 1-22.
R. Brunetti, K. Fredenhagen, and M. Kohler, "The Microlocal spectrum condition and Wick polynomials of free fields on curved space-times," Commun. Math. Phys. 180 (1996) 633-652, arXiv:gr-qc/9510056.
R. Brunetti and K. Fredenhagen, "Microlocal analysis and interacting quantum field theories: Renormalization on physical backgrounds," Commun. Math. Phys. 208 (2000) 623-661, arXiv:math-ph/9903028.
R. Brunetti, K. Fredenhagen, and R. Verch, "The Generally covariant locality principle: A New paradigm for local quantum field theory," Commun. Math. Phys. 237 (2003) 31-68, arXiv:math-ph/0112041.
S. Hollands and R. M. Wald, "Local Wick polynomials and time ordered products of quantum fields in curved space-time," Commun. Math. Phys. 223 (2001) 289-326, arXiv:gr-qc/0103074.
S. Hollands and R. M. Wald, "Existence of local covariant time ordered products of quantum fields in curved space-time," Commun. Math. Phys. 231 (2002) 309-345, arXiv:gr-qc/0111108.
H. Sahlmann and R. Verch, "Microlocal spectrum condition and Hadamard form for vector valued quantum fields in curved space-time," Rev. Math. Phys. 13 (2001) 1203-1246, arXiv:math-ph/0008029.
A. Strohmaier, R. Verch, and M. Wollenberg, "Microlocal analysis of quantum fields on curved space-times: Analytic wavefront sets and Reeh-Schlieder theorems," J. Math. Phys. 43 (2002) 5514-5530, arXiv:math-ph/0202003.
S. Hollands and R. M. Wald, "On the renormalization group in curved space-time," Commun. Math. Phys. 237 (2003) 123-160, arXiv:gr-qc/0209029.
S. Hollands and R. M. Wald, "Conservation of the stress tensor in perturbative interacting quantum field theory in curved spacetimes," Rev. Math. Phys. 17 (2005) 227-312, arXiv:gr-qc/0404074.
K. Sanders, "Equivalence of the (Generalised) Hadamard and Microlocal Spectrum Condition for (Generalised) Free Fields in Curved Spacetime," Commun. Math. Phys. 295 (2010) 485-501, arXiv:0903.1021.
B. S. DeWitt, "Quantum Field Theory in Curved Space-Time," Phys. Rept. 19 (1975) 295-357.
N. D. Birrell and P. C. W. Davies, Quantum Fields in Curved Space. Cambridge Monographs on Mathematical Physics. Cambridge Univ. Press, Cambridge, UK, 1984.
S. A. Fulling, Aspects of Quantum Field Theory in Curved Space-time, vol. 17. Cambridge University Press, 1989.
I. L. Buchbinder, S. D. Odintsov, and I. L. Shapiro, Effective Action in Quantum Gravity. Institute of Physics Publishing, 1992.
L. Ford, "Quantum field theory in curved spacetime," lecture notes, 1997.
L. Parker and D. Toms, Quantum Field Theory in Curved Spacetime: Quantized Fields and Gravity. Cambridge Monographs on Mathematical Physics. Cambridge University Press, 2009.
Quantum Field Theory on Curved Spacetimes: Concepts and Mathematical Foundations. Lecture Notes in Physics. Springer-Verlag, 2009.
S. Hollands and R. M. Wald, "Quantum fields in curved spacetime," Phys. Rept. 574 (2015) 1-35, arXiv:1401.2026 [gr-qc].
E. Witten, "Why Does Quantum Field Theory In Curved Spacetime Make Sense? And What Happens To The Algebra of Observables In The Thermodynamic Limit?," arXiv:2112.11614 [hep-th].
E. Witten, "Gravity and the Crossed Product," arXiv:2112.12828 [hep-th].
B. S. Kay, "The Principle of locality and quantum field theory on (nonglobally hyperbolic) curved space-times," Rev. Math. Phys. 4 no. spec01, (1992) 167-195.
R. Brunetti and K. Fredenhagen, "Quantum Field Theory on Curved Backgrounds," Lect. Notes Phys. 786 (2009) 129, arXiv:0901.2063 [gr-qc].
C. J. Fewster and R. Verch, "Algebraic Quantum Field Theory in Curved Spacetimes," in Advances in Algebraic Quantum Field Theory, Mathematical Physics Studies, pp. 125-189. Springer, 2015. arXiv:1504.00586.
K. Fredenhagen, "Locally covariant quantum field theory," in 14th International Congress on Mathematical Physics, pp. 29-37. 2004. arXiv:hep-th/0403007.
R. M. Wald, "The History and Present Status of Quantum Field Theory in Curved Spacetime," Einstein Stud. 12 (2012) 317-331, arXiv:gr-qc/0608018.
R. M. Wald, "The Formulation of Quantum Field Theory in Curved Spacetime," Einstein Stud. 14 (2018) 439-449, arXiv:0907.0416 [gr-qc].
K. Fredenhagen and K. Rejzner, "Quantum field theory on curved spacetimes: Axiomatic framework and examples," J. Math. Phys. 57 no. 3, (2016) 031101, arXiv:1412.5125 [math-ph].
C. J. Fewster, "Locally covariant quantum field theory and the problem of formulating the same physics in all space-times," Phil. Trans. Roy. Soc. Lond. A 373 (2015) 20140238, arXiv:1502.04642 [gr-qc].
C. J. Fewster and R. Verch, "Dynamical locality and covariance: What makes a physical theory the same in all spacetimes?," Annales Henri Poincare 13 (2012) 1613-1674, arXiv:1106.4785 [math-ph].
C. J. Fewster and R. Verch, "Dynamical locality of the free scalar field," Annales Henri Poincare 13 (2012) 1675-1709, arXiv:1109.6732 [math-ph].
C. J. Fewster, "On the notion of 'the same physics in all spacetimes'," in Quantum Field Theory and Gravity: Conceptual and Mathematical Advances in the Search for a Unified Framework, pp. 207-227. 2012. arXiv:1105.6202 [math-ph].
D. Guido, R. Longo, J. E. Roberts, and R. Verch, "Charged sectors, spin and statistics in quantum field theory on curved space-times," Rev. Math. Phys. 13 (2001) 125-198, arXiv:math-ph/9906019.
R. Brunetti and G. Ruzzi, "Superselection sectors and general covariance. I.," Commun. Math. Phys. 270 (2007) 69-108, arXiv:gr-qc/0511118.
C. J. Fewster, "Endomorphisms and automorphisms of locally covariant quantum field theories," Rev. Math. Phys. 25 (2013) 1350008, arXiv:1201.3295 [math-ph].
C. J. Fewster and R. Verch, "The Necessity of the Hadamard Condition," Class. Quant. Grav. 30 (2013) 235027, arXiv:1307.5242 [gr-qc].
C. J. Fewster and B. Lang, "Pure quasifree states of the Dirac field from the fermionic projector," Class. Quant. Grav. 32 (2015) 095001, arXiv:1408.1645.
M. Brum and K. Fredenhagen, "'Vacuum-like' Hadamard states for quantum fields on curved spacetimes," Class. Quant. Grav. 31 (2014) 025024, arXiv:1307.0482 [gr-qc].
C. J. Fewster and A. Schenkel, "Locally covariant quantum field theory with external sources," Annales Henri Poincare 16 no. 10, (2015) 2303-2365, arXiv:1402.2436 [math-ph].
C. Becker, A. Schenkel, and R. J. Szabo, "Differential cohomology and locally covariant quantum field theory," Rev. Math. Phys. 29 no. 01, (2016) 1750003, arXiv:1406.1514 [hep-th].
C. J. Fewster, "The Split Property for Locally Covariant Quantum Field Theories in Curved Spacetime," Lett. Math. Phys. 105 (2015) 1633-1661, arXiv:1501.02682.
M. Ferguson, "Dynamical locality of the nonminimally coupled scalar field and enlarged algebra of Wick polynomials," Annales Henri Poincare 14 (2013) 853-892, arXiv:1203.2151 [math-ph].
K. Sanders, "On the Reeh-Schlieder Property in Curved Spacetime," Commun. Math. Phys. 288 (2009) 271-285, arXiv:0801.4676.
S. Hollands and R. M. Wald, "Axiomatic quantum field theory in curved spacetime," Commun. Math. Phys. 293 (2010) 85-125, arXiv:0803.2003 [gr-qc].
I. Khavkine and V. Moretti, "Analytic Dependence is an Unnecessary Requirement in Renormalization of Locally Covariant QFT," Commun. Math. Phys. 344 no. 2, (2016) 581-620, arXiv:1411.1302 [gr-qc].
S. Hollands, "Renormalized Quantum Yang-Mills Fields in Curved Spacetime," Rev. Math. Phys. 20 (2008) 1033-1172, arXiv:0705.3340 [gr-qc].
J. Dimock, "Quantized electromagnetic field on a manifold," Rev. Math. Phys. 4 (1992) 223-233.
M. J. Pfenning, "Quantization of the Maxwell field in curved spacetimes of arbitrary dimension," Class. Quant. Grav. 26 (2009) 135017, arXiv:0902.4887.
C. Dappiaggi and B. Lang, "Quantization of Maxwell's equations on curved backgrounds and general local covariance," Lett. Math. Phys. 101 (2012) 265-287, arXiv:1104.1374 [gr-qc].
C. Dappiaggi and D. Siemssen, "Hadamard States for the Vector Potential on Asymptotically Flat Spacetimes," Rev. Math. Phys. 25 (2013) 1350002, arXiv:1106.5575 [gr-qc].
Electromagnetism, Local Covariance, the Aharonov-Bohm Effect and Gauss' Law. K Sanders, C Dappiaggi, T.-P Hack, 10.1007/s00220-014-1989-xarXiv:1211.6420Commun. Math. Phys. 328math-phK. Sanders, C. Dappiaggi, and T.-P. Hack, "Electromagnetism, Local Covariance, the Aharonov-Bohm Effect and Gauss' Law," Commun. Math. Phys. 328 (2014) 625-667, arXiv:1211.6420 [math-ph].
Linear bosonic and fermionic quantum gauge theories on curved spacetimes. T.-P Hack, A Schenkel, 10.1007/s10714-013-1508-yarXiv:1205.3484Gen. Rel. Grav. 45math-phT.-P. Hack and A. Schenkel, "Linear bosonic and fermionic quantum gauge theories on curved spacetimes," Gen. Rel. Grav. 45 (2013) 877-910, arXiv:1205.3484 [math-ph].
D Buchholz, K Fredenhagen, 10.1007/s00220-021-04213-9arXiv:1902.06062Correction to: A C*-algebraic Approach to Interacting Quantum Field Theories. 377math-phD. Buchholz and K. Fredenhagen, "Correction to: A C*-algebraic Approach to Interacting Quantum Field Theories [doi: 10.1007/s00220-020-03700-9]," Commun. Math. Phys. 377 no. 2, (2020) 947-969, arXiv:1902.06062 [math-ph].
Classical dynamics, arrow of time, and genesis of the Heisenberg commutation relations. D Buchholz, K Fredenhagen, 10.1016/j.exmath.2020.06.002arXiv:1905.02711Expositiones Mathematicae. 382quant-phD. Buchholz and K. Fredenhagen, "Classical dynamics, arrow of time, and genesis of the Heisenberg commutation relations," Expositiones Mathematicae 38 no. 2, (2020) 150-167, arXiv:1905.02711 [quant-ph].
From path integrals to dynamical algebras: a macroscopic view of quantum physics. D Buchholz, K Fredenhagen, 10.1007/s10701-020-00345-5arXiv:1905.04250Found. Phys. 507quant-phD. Buchholz and K. Fredenhagen, "From path integrals to dynamical algebras: a macroscopic view of quantum physics," Found. Phys. 50 no. 7, (2020) 727-734, arXiv:1905.04250 [quant-ph].
Dynamical C*-algebras and kinetic perturbations. D Buchholz, K Fredenhagen, 10.1007/s00023-020-01002-3arXiv:2008.02034Annales Henri Poincare. 223math-phD. Buchholz and K. Fredenhagen, "Dynamical C*-algebras and kinetic perturbations," Annales Henri Poincare 22 no. 3, (2021) 1001-1033, arXiv:2008.02034 [math-ph].
R. Brunetti, M. Dütsch, K. Fredenhagen, and K. Rejzner, "C*-algebraic approach to interacting quantum field theory: Inclusion of Fermi fields," arXiv:2103.05740 [math-ph].
R. Brunetti, M. Dütsch, K. Fredenhagen, and K. Rejzner, "The unitary Master Ward Identity: Time slice axiom, Noether's Theorem and Anomalies," arXiv:2108.13336 [math-ph].
R. Brunetti, M. Dütsch, K. Fredenhagen, and K. Rejzner, "Unitary, anomalous Master Ward Identity and its connections to the Wess-Zumino condition, BV formalism and L∞-algebras," arXiv:2210.05908 [math-ph].
R. Brunetti, K. Fredenhagen, and K. Rejzner, "Locally covariant approach to effective quantum gravity," arXiv:2212.07800 [gr-qc].
"On the problem of gauge theories in locally covariant QFT," talk at Operator and Geometric Analysis on Quantum Theory, Trento, 2014. https://ncatlab.org/nlab/files/SchenkelTrento2014.pdf.
"Towards homotopical algebraic quantum field theory," talk at Foundational and Structural Aspects of Gauge Theories, Mainz Institute for Theoretical Physics, 2017. http://aschenkel.eu/Mainz17.pdf.
"From Fredenhagen's universal algebra to homotopy theory and operads," talk at Quantum Physics meets Mathematics, Hamburg, 2017. http://aschenkel.eu/Hamburg17.pdf.
C. J. Fewster and B. Lang, "Dynamical Locality of the Free Maxwell Field," Annales Henri Poincaré 17 (2016) 401-436, arXiv:1403.7083.
M. Benini and A. Schenkel, "Higher Structures in Algebraic Quantum Field Theory," Fortsch. Phys. 67 no. 8-9, (2019) 1910015, arXiv:1903.02878 [hep-th].
S. Bruinsma, "Coloring Operads for Algebraic Field Theory," Fortsch. Phys. 67 no. 8-9, (2019) 1910004, arXiv:1903.02863 [hep-th].
D. Yau, "Homotopical Quantum Field Theory," arXiv:1802.08101.
M. Benini, C. Dappiaggi, and A. Schenkel, "Quantized Abelian principal connections on Lorentzian manifolds," Commun. Math. Phys. 330 (2014) 123-152, arXiv:1303.2515 [math-ph].
M. Benini, A. Schenkel, and R. J. Szabo, "Homotopy colimits and global observables in Abelian gauge theory," Lett. Math. Phys. 105 no. 9, (2015) 1193-1222, arXiv:1503.08839 [math-ph].
M. Benini and A. Schenkel, "Quantum field theories on categories fibered in groupoids," Commun. Math. Phys. 356 no. 1, (2017) 19-64, arXiv:1610.06071 [math-ph].
M. Benini, A. Schenkel, and U. Schreiber, "The Stack of Yang-Mills Fields on Lorentzian Manifolds," Commun. Math. Phys. 359 no. 2, (2018) 765-820, arXiv:1704.01378 [math-ph].
M. Benini, A. Schenkel, and L. Woike, "Operads for algebraic quantum field theory," Commun. Contemp. Math. 23 no. 02, (2021) 2050007, arXiv:1709.08657 [math-ph].
M. Benini, A. Schenkel, and L. Woike, "Homotopy theory of algebraic quantum field theories," Lett. Math. Phys. 109 no. 7, (2019) 1487-1532, arXiv:1805.08795 [math-ph].
M. Benini, S. Bruinsma, and A. Schenkel, "Linear Yang-Mills Theory as a Homotopy AQFT," Commun. Math. Phys. 378 no. 1, (2020) 185-218, arXiv:1906.00999 [math-ph].
M. Benini, M. Perin, A. Schenkel, and L. Woike, "Categorification of algebraic quantum field theories," Lett. Math. Phys. 111 (2021) 35, arXiv:2003.13713 [math-ph].
M. Benini, A. Schenkel, and B. Vicedo, "Homotopical Analysis of 4d Chern-Simons Theory and Integrable Field Theories," Commun. Math. Phys. 389 no. 3, (2022) 1417-1443, arXiv:2008.01829 [hep-th].
V. Carmona, "Algebraic Quantum Field Theories: a homotopical view," arXiv:2107.14176 [math-ph].
S. Bruinsma, C. J. Fewster, and A. Schenkel, "Relative Cauchy evolution for linear homotopy AQFTs," arXiv:2108.10592 [math-ph].
A. Anastopoulos and M. Benini, "Homotopy theory of net representations," arXiv:2201.06464 [math-ph].
A. Grant-Stuart, "Spacetimes categories and disjointness for algebraic quantum field theory," arXiv:2201.09166 [math-ph].
"Homotopical algebraic quantum field theory." https://ncatlab.org/nlab/show/homotopical%20algebraic%20quantum%20field%20theory.
S. Doplicher, R. Haag, and J. E. Roberts, "Fields, observables and gauge transformations I," Commun. Math. Phys. 13 (1969) 1-23.
S. Doplicher, R. Haag, and J. E. Roberts, "Fields, observables and gauge transformations II," Commun. Math. Phys. 15 (1969) 173-200.
D. Buchholz and K. Fredenhagen, "Locality and the Structure of Particle States," Commun. Math. Phys. 84 (1982) 1.
J. Frohlich and T. Kerler, Quantum groups, quantum categories and quantum field theory. 1993.
H. Casini, M. Huerta, J. M. Magan, and D. Pontello, "Entropic order parameters for the phases of QFT," JHEP 04 (2021) 277, arXiv:2008.11748 [hep-th].
H. Casini and J. M. Magan, "On completeness and generalized symmetries in quantum field theory," Mod. Phys. Lett. A 36 no. 36, (2021) 2130025, arXiv:2110.11358 [hep-th].
V. Benedetti, H. Casini, and J. M. Magan, "Generalized symmetries of the graviton," JHEP 05 (2022) 045, arXiv:2111.12089 [hep-th].
H. Araki, "A Lattice of Von Neumann Algebras Associated with the Quantum Theory of a Free Bose Field," Journal of Mathematical Physics 4 no. 11, (1963) 1343-1362.
R. Brunetti, D. Guido, and R. Longo, "Modular structure and duality in conformal quantum field theory," Commun. Math. Phys. 156 (1993) 201-220, arXiv:funct-an/9302008.
A. Beilinson and V. G. Drinfeld, Chiral algebras. American Mathematical Society, 2004.
J. Lurie, "On the Classification of Topological Field Theories," Current Developments in Mathematics 2008 (2009) 129-280, arXiv:0905.0465 [math.CT].
K. Costello, Renormalization and effective field theory, vol. 170 of Mathematical Surveys and Monographs. American Mathematical Society, 2011.
K. Costello and O. Gwilliam, Factorization Algebras in Quantum Field Theory, Vol. 1. New Mathematical Monographs: 31. Cambridge University Press, 2017.
K. Costello and O. Gwilliam, Factorization Algebras in Quantum Field Theory, Vol. 2. New Mathematical Monographs: 41. Cambridge University Press, 2021.
O. Gwilliam, Factorization algebras and free field theories. PhD thesis, Northwestern University, 2012.
O. Gwilliam and K. Rejzner, "Relating Nets and Factorization Algebras of Observables: Free Field Theories," Communications in Mathematical Physics 373 (2020) 107-174, arXiv:1711.06674.
O. Gwilliam and K. Rejzner, "The observables of a perturbative algebraic quantum field theory form a factorization algebra," arXiv:2212.08175 [math-ph].
M. Benini, M. Perin, and A. Schenkel, "Model-independent comparison between factorization algebras and algebraic quantum field theory on Lorentzian manifolds," Commun. Math. Phys. 377 no. 2, (2020) 971-997, arXiv:1903.03396 [math-ph].
B. Williams, "The Virasoro vertex algebra and factorization algebras on Riemann surfaces," Letters in Mathematical Physics 107 no. 12, (2017) 2189-2237, arXiv:1603.02349.
V. Gorbounov, O. Gwilliam, and B. R. Williams, "Chiral differential operators via Batalin-Vilkovisky quantization," Asterisque 419 (2020), arXiv:1610.09657.
O. Gwilliam and B. Williams, "The holomorphic bosonic string," arXiv:1711.05823 [math-ph].
C. Elliott, B. Williams, and P. Yoo, "Asymptotic Freedom in the BV Formalism," J. Geom. Phys. 123 (2018) 246-283, arXiv:1702.05973 [math-ph].
B. R. Williams, "Renormalization for Holomorphic Field Theories," Communications in Mathematical Physics 374 (2020) 1693-1742, arXiv:1809.02661.
O. Gwilliam and B. R. Williams, "Higher Kac-Moody algebras and symmetries of holomorphic field theories," Adv. Theor. Math. Phys. 25 no. 1, (2021) 129-239, arXiv:1810.06534 [math.QA].
I. Saberi and B. R. Williams, "Twisted characters and holomorphic symmetries," arXiv:1906.04221 [math-ph].
I. Saberi and B. R. Williams, "Superconformal algebras and holomorphic field theories," arXiv:1910.04120 [math-ph].
O. Gwilliam and B. R. Williams, "A one-loop exact quantization of Chern-Simons theory," arXiv:1910.05230.
O. Gwilliam, E. Rabinovich, and B. R. Williams, "Factorization algebras and abelian CS/WZW-type correspondences," arXiv:2001.07888 [math.QA].
C. Elliott and B. R. Williams, "Holomorphic Poisson Field Theories," arXiv:2008.02302 [math-ph].
I. Saberi and B. R. Williams, "Constraints in the BV formalism: six-dimensional supersymmetry and its twists," arXiv:2009.07116 [math-ph].
E. Rabinovich, "Factorization Algebras for Classical Bulk-Boundary Systems," arXiv:2008.04953 [math.QA].
D. Bruegmann, "Vertex Algebras and Costello-Gwilliam Factorization Algebras," arXiv:2012.12214 [math.QA].
E. Rabinovich, Factorization Algebras for Bulk-Boundary Systems. PhD thesis, University of California, Berkeley, 2021. arXiv:2111.01757.
C. Elliott, O. Gwilliam, and B. R. Williams, "Higher Deformation Quantization for Kapustin-Witten Theories," arXiv:2108.13392 [math-ph].
G. Ginot, T. Tradler, and M. Zeinalian, "Higher Hochschild Homology, Topological Chiral Homology and Factorization Algebras," Communications in Mathematical Physics 326 (2014) 635-686, arXiv:1011.6483.
G. Ginot, "Notes on factorization algebras, factorization homology and applications," in Mathematical Aspects of Quantum Field Theories, Mathematical Physics Studies, pp. 429-552. 2013. arXiv:1307.5213.
J. Francis and D. Gaitsgory, "Chiral Koszul duality," Selecta Mathematica 18 (2011) 27-87, arXiv:1103.5803.
E. Cliff, "Universal factorization spaces and algebras," Mathematical Research Letters 26 no. 4, (2019) 1059-1096, arXiv:1608.08122.
"Factorization algebra." https://ncatlab.org/nlab/show/factorization+algebra.
S. Hollands, "Quantum field theory in terms of consistency conditions. I. General framework, and perturbation theory via Hochschild cohomology," SIGMA 5 (2009) 090, arXiv:0802.2198 [hep-th].
M. Atiyah, "Topological quantum field theories," Publications Mathématiques de l'Institut des Hautes Études Scientifiques 68 no. 1, (1988) 175-186. https://doi.org/10.1007/BF02698547.
G. B. Segal, "The Definition of Conformal Field Theory," in Differential Geometrical Methods in Theoretical Physics, vol. 250, pp. 165-171. 1987.
G. Segal, The definition of conformal field theory, pp. 421-577. London Mathematical Society Lecture Note Series. Cambridge University Press, 2004.
G. Segal, "Three roles of quantum field theory." http://www.mpim-bonn.mpg.de/node/3372/abstracts, May 2011.
P. Teichner and S. Stolz, "What is an elliptic object?," in Topology, Geometry and Quantum Field Theory, pp. 247-343. 2004.
S. Stolz and P. Teichner, "Supersymmetric field theories and generalized cohomology," vol. 83 of Proceedings of Symposia in Pure Mathematics, pp. 279-340. 2011.
"Functorial field theory." https://ncatlab.org/nlab/show/functorial+field+theory.
M. Ludewig and A. Stoffel, "A Framework for Geometric Field Theories and their Classification in Dimension One," Symmetry, Integrability and Geometry: Methods and Applications (2021). http://dx.doi.org/10.3842/SIGMA.2021.072.
D. Grady and D. Pavlov, "Extended field theories are local and have classifying spaces," arXiv:2011.01208.
D. Grady and D. Pavlov, "The geometric cobordism hypothesis," arXiv:2111.01095.
M. Kontsevich and G. Segal, "Wick Rotation and the Positivity of Energy in Quantum Field Theory," Quart. J. Math. Oxford Ser. 72 no. 1-2, (2021) 673-699, arXiv:2105.10161 [hep-th].
D. S. Freed and C. Teleman, "Relative quantum field theory," Commun. Math. Phys. 326 (2014) 459-476, arXiv:1212.1692 [hep-th].
A. S. Cattaneo, P. Mnev, and N. Reshetikhin, "Classical BV theories on manifolds with boundary," Commun. Math. Phys. 332 (2014) 535-603, arXiv:1201.0290 [math-ph].
A. S. Cattaneo, P. Mnev, and N. Reshetikhin, "Perturbative quantum gauge theories on manifolds with boundary," Commun. Math. Phys. 357 no. 2, (2018) 631-730, arXiv:1507.01221 [math-ph].
A. S. Cattaneo, P. Mnev, and N. Reshetikhin, "Perturbative BV theories with Segal-like gluing," arXiv:1602.00741 [math-ph].
A. S. Cattaneo, P. Mnev, and N. Reshetikhin, "A cellular topological field theory," Commun. Math. Phys. 374 no. 2, (2020) 1229-1320, arXiv:1701.05874 [math.AT].
J. Dimock, "Markov Quantum Fields on a Manifold," Reviews in Mathematical Physics 16 no. 02, (2004) 243-255, arXiv:math-ph/0305017.
U. Schreiber, "AQFT from n-Functorial QFT," Communications in Mathematical Physics 291 no. 2, (2009) 357-401, arXiv:0806.1079.
M. Dedushenko, "Gluing. Part I. Integrals and symmetries," JHEP 04 (2020) 175, arXiv:1807.04274 [hep-th].
P. Kravchuk, J. Qiao, and S. Rychkov, "Distributions in CFT. Part I. Cross-ratio space," JHEP 05 (2020) 137, arXiv:2001.08778 [hep-th].
P. Kravchuk, J. Qiao, and S. Rychkov, "Distributions in CFT. Part II. Minkowski space," JHEP 08 (2021) 094, arXiv:2104.02090 [hep-th].
F. Gabbiani and J. Frohlich, "Operator algebras and conformal field theory," Commun. Math. Phys. 155 (1993) 569-640.
R. Longo, "Conformal subnets and intermediate subfactors," Commun. Math. Phys. 237 (2003) 7-30, arXiv:math/0102196.
Y. Kawahigashi and R. Longo, "Classification of local conformal nets: Case c < 1," Annals Math. 160 (2004) 493-522, arXiv:math-ph/0201015.
Y. Kawahigashi and R. Longo, "Classification of two-dimensional local conformal nets with c less than 1 and 2-cohomology vanishing for tensor categories," Commun. Math. Phys. 244 (2004) 63-97, arXiv:math-ph/0304022.
Y. Kawahigashi, "Classification of operator algebraic conformal field theories in dimensions one and two," in 14th International Congress on Mathematical Physics, pp. 476-485. 2003. arXiv:math-ph/0308029.
S. Carpi, R. Hillier, Y. Kawahigashi, R. Longo, and F. Xu, "N=2 Superconformal Nets," Commun. Math. Phys. 336 (2015) 1285-1328, arXiv:1207.2398 [math.OA].
S. Carpi, R. Conti, R. Hillier, and M. Weiner, "Representations of Conformal Nets, Universal C*-Algebras and K-Theory," Commun. Math. Phys. 320 (2013) 275-300, arXiv:1202.2543 [math.OA].
S. Carpi, Y. Kawahigashi, R. Longo, and M. Weiner, "From vertex operator algebras to conformal nets and back," arXiv:1503.01260 [math.OA].
S. Carpi, "Operator algebras and vertex operator algebras," in The Fourteenth Marcel Grossmann Meeting. World Scientific, 2017.
J. E. Tener, "Geometric realization of algebraic conformal field theories," Advances in Mathematics 349 (2019) 488-563, arXiv:1611.01176.
J. E. Tener, "Representation theory in chiral conformal field theory: from fields to observables," arXiv:1810.08168 [math-ph].
A. Bartels, C. L. Douglas, and A. G. Henriques, "Conformal nets and local field theory," arXiv:0912.5307 [math.AT].
A. Bartels, C. L. Douglas, and A. Henriques, "Conformal Nets I: Coordinate-Free Nets," International Mathematics Research Notices 2015 no. 13, (2015) 4975-5052, arXiv:1302.2604.
A. Bartels, C. L. Douglas, and A. Henriques, "Conformal nets II: conformal blocks," Commun. Math. Phys. 354 (2017) 393-458.
A. Bartels, C. L. Douglas, and A. Henriques, "Conformal nets III: fusion of defects," arXiv:1310.8263.
A. Bartels, C. L. Douglas, and A. Henriques, "Conformal nets IV: The 3-category," Algebraic & Geometric Topology 18 (2018) 897-956, arXiv:1605.00662.
A. Bartels, C. L. Douglas, and A. Henriques, "Conformal Nets V: Dualizability," Commun. Math. Phys. 391 (2022) 1-31.
A. Henriques, "Three-tier CFTs from Frobenius algebras."
"Conformal net." http://ncatlab.org/nlab/show/conformal%20net.
Infinite Conformal Symmetry in Two-Dimensional Quantum Field Theory. A A Belavin, A M Polyakov, A B Zamolodchikov, 10.1016/0550-3213(84)90052-XNucl. Phys. B. 241A. A. Belavin, A. M. Polyakov, and A. B. Zamolodchikov, "Infinite Conformal Symmetry in Two-Dimensional Quantum Field Theory," Nucl. Phys. B 241 (1984) 333-380.
Snowmass White Paper: The Analytic Conformal Bootstrap. T Hartman, D Mazac, D Simmons-Duffin, A Zhiboedov, arXiv:2202.11012Snowmass Summer Study. 22022hep-thT. Hartman, D. Mazac, D. Simmons-Duffin, and A. Zhiboedov, "Snowmass White Paper: The Analytic Conformal Bootstrap," in 2022 Snowmass Summer Study. 2, 2022. arXiv:2202.11012 [hep-th].
Snowmass White Paper: Moonshine. S M Harrison, J A Harvey, N M Paquette, arXiv:2201.13321hep-thS. M. Harrison, J. A. Harvey, and N. M. Paquette, "Snowmass White Paper: Moonshine," arXiv:2201.13321 [hep-th].
Sphere Partition Functions and the Zamolodchikov Metric. E Gerchkovitz, J Gomis, Z Komargodski, 10.1007/JHEP11(2014)001arXiv:1405.7271JHEP. 111hep-thE. Gerchkovitz, J. Gomis, and Z. Komargodski, "Sphere Partition Functions and the Zamolodchikov Metric," JHEP 11 (2014) 001, arXiv:1405.7271 [hep-th].
D Radicevic, arXiv:2105.11470The Ultraviolet Structure of Quantum Field Theories. Part 1: Quantum Mechanics. hep-thD. Radicevic, "The Ultraviolet Structure of Quantum Field Theories. Part 1: Quantum Mechanics," arXiv:2105.11470 [hep-th].
D Radicevic, arXiv:2105.12147The Ultraviolet Structure of Quantum Field Theories. Part 2: What is Quantum Field Theory?. hep-thD. Radicevic, "The Ultraviolet Structure of Quantum Field Theories. Part 2: What is Quantum Field Theory?," arXiv:2105.12147 [hep-th].
D Radicevic, arXiv:2105.12751The Ultraviolet Structure of Quantum Field Theories. Part 3: Gauge Theories. hep-thD. Radicevic, "The Ultraviolet Structure of Quantum Field Theories. Part 3: Gauge Theories," arXiv:2105.12751 [hep-th].
J , 10.1142/S0217751X11054656arXiv:1109.5629Lifshitz-type Quantum Field Theories in Particle Physics. 26hep-phJ. Alexandre, "Lifshitz-type Quantum Field Theories in Particle Physics," Int. J. Mod. Phys. A 26 (2011) 4523-4541, arXiv:1109.5629 [hep-ph].
Quantum Gravity at a Lifshitz Point. P Horava, 10.1103/PhysRevD.79.084008arXiv:0901.3775Phys. Rev. D. 7984008hep-thP. Horava, "Quantum Gravity at a Lifshitz Point," Phys. Rev. D 79 (2009) 084008, arXiv:0901.3775 [hep-th].
N Seiberg, What is Quantum Field Theory. N. Seiberg, "What is Quantum Field Theory?." https://www.youtube.com/watch?v=GZvs-ae4YRA.
Field Theories With a Vector Global Symmetry. N Seiberg, 10.21468/SciPostPhys.8.4.050arXiv:1909.10544SciPost Phys. 8450cond-mat.str-elN. Seiberg, "Field Theories With a Vector Global Symmetry," SciPost Phys. 8 no. 4, (2020) 050, arXiv:1909.10544 [cond-mat.str-el].
Exotic U (1) Symmetries, Duality, and Fractons in 3+1-Dimensional Quantum Field Theory. N Seiberg, S.-H Shao, 10.21468/SciPostPhys.9.4.046arXiv:2004.00015SciPost Phys. 9446cond-mat.str-elN. Seiberg and S.-H. Shao, "Exotic U (1) Symmetries, Duality, and Fractons in 3+1-Dimensional Quantum Field Theory," SciPost Phys. 9 no. 4, (2020) 046, arXiv:2004.00015 [cond-mat.str-el].
More Exotic Field Theories in 3+1 Dimensions. P Gorantla, H T Lam, N Seiberg, S.-H Shao, 10.21468/SciPostPhys.9.5.073arXiv:2007.04904SciPost Phys. 973cond-mat.str-elP. Gorantla, H. T. Lam, N. Seiberg, and S.-H. Shao, "More Exotic Field Theories in 3+1 Dimensions," SciPost Phys. 9 (2020) 073, arXiv:2007.04904 [cond-mat.str-el].
Low-energy limit of some exotic lattice theories and UV/IR mixing. P Gorantla, H T Lam, N Seiberg, S.-H Shao, 10.1103/PhysRevB.104.235116arXiv:2108.00020Phys. Rev. B. 10423235116cond-mat.str-elP. Gorantla, H. T. Lam, N. Seiberg, and S.-H. Shao, "Low-energy limit of some exotic lattice theories and UV/IR mixing," Phys. Rev. B 104 no. 23, (2021) 235116, arXiv:2108.00020 [cond-mat.str-el].
Feynman Geometries. S Hu, A Losev, 10.1142/9789814725569_002760 Years of Yang-Mills Gauge Field Theories: C N Yang's Contributions to Physics. S. Hu and A. Losev, "Feynman Geometries," in 60 Years of Yang-Mills Gauge Field Theories: C N Yang's Contributions to Physics, pp. 453-471.
Notes on A∞-Algebras, A∞-Categories and Non-Commutative Geometry. M Kontsevich, Y Soibelman, 10.1007/978-3-540-68030-7_6arXiv:math/0606241Lect. Notes Phys. 757M. Kontsevich and Y. Soibelman, "Notes on A∞-Algebras, A∞-Categories and Non-Commutative Geometry," Lect. Notes Phys. 757 (2009) 153-220, arXiv:math/0606241.
| zyda_arxiv-0248000 |
Restless pions: orbifold boundary conditions and noise suppression in lattice QCD
1 Aug 2007 (Dated: February 1, 2008 -17:37)
Paulo F. Bedaque and André Walker-Loud
Maryland Center for Fundamental Physics, Department of Physics, University of Maryland, College Park, MD 20742
arXiv:0708.0207v1 [hep-lat]
The study of one or more baryons in lattice QCD is severely hindered by the exponential decay in time of the signal-to-noise ratio. The rate at which the signal-to-noise ratio decreases is a function of the pion mass. More precisely, it depends on the minimum allowed pion energy in the box, which, for periodic boundary conditions, is equal to its mass. We propose a set of boundary conditions, given by a "parity orbifold" construction, which eliminates the zero momentum pion modes, raising the minimum pion energy without altering the QCD ground state, and thereby improving the signal-to-noise ratio of (multi-)baryon correlation functions at long Euclidean times. We discuss variations of these "restless pions" boundary conditions and focus on their impact on the study of nuclear forces.
I. INTRODUCTION
Lattice QCD studies of heavy systems are plagued by large statistical noise. The signal-to-noise ratio of a correlation function created by an operator with n quark-anti-quark pairs decreases with (Euclidean) time as e^{-(E - \frac{n}{2} m_\pi)t}, where E is the mass of the state under consideration. This is a particularly nasty problem for the recent studies of nucleon-nucleon [1,2] and hyperon-nucleon [3] forces with lattice QCD. The large statistical error renders the numerical information at large times useless. Compounding this problem, at early times the correlators are contaminated by excited states, so there is only a very narrow range of time slices left containing useful information. In unquenched calculations [1,3] the statistical noise allows for semi-quantitative results only, even after the computation of thousands of fermion propagators. These errors are also much larger than finite-volume [4] and finite lattice spacing [5] effects in these observables.
We propose here a scheme to alleviate this signal-to-noise problem. We start with the simple observation that the statistical noise is dominated by the energy of the lightest pion states. Periodic boundary conditions allow a pion zero mode, and thus the lowest pion energy is equal to its mass. If one were to impose anti-periodic boundary conditions for all three pions, the pion zero modes would be forbidden and the minimum energy would be given by (assuming anti-periodic boundary conditions in all three spatial directions)

E_\pi = \sqrt{3\left(\frac{\pi}{L}\right)^2 + m_\pi^2},
with L the size of each spatial direction. Thus it is clear that the signal can be improved by using these "restless pions" boundary conditions. There are other applications for which these restless pions are useful. In addition to the obvious benefit to spectroscopy studies [6], the advantages of anti-periodic boundary conditions for the pion in the extraction of the K → ππ amplitude using the Lellouch-Lüscher method [7] were pointed out in references [8,9].
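To make the size of this shift concrete, here is a small numerical sketch (ours, not from the paper; the m_π = 140 MeV and L = 2.5 fm inputs are illustrative choices) evaluating the minimum pion energy above:

```python
import math

HBARC = 197.327  # hbar*c in MeV*fm, used to convert 1/fm to MeV


def min_pion_energy(m_pi, L, n_antiperiodic=3):
    """Minimum single-pion energy in MeV in a box of size L (fm) with
    anti-periodic boundary conditions in `n_antiperiodic` spatial directions."""
    k_min_sq = n_antiperiodic * (math.pi * HBARC / L) ** 2  # (MeV)^2
    return math.sqrt(k_min_sq + m_pi ** 2)


m_pi, L = 140.0, 2.5  # illustrative values (MeV, fm)
E_pi = min_pion_energy(m_pi, L)
print(f"m_pi = {m_pi} MeV, E_pi(anti-periodic) = {E_pi:.1f} MeV")
```

At this volume the minimum pion energy jumps from 140 MeV to roughly 450 MeV, which is the whole point of making the pions restless.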
In lattice calculations, one does not have direct control over the hadronic boundary conditions. What can be controlled at will are the boundary conditions of the quark and gluon fields. However, it is not obvious which modifications of the quarks and gluons at the boundary imply an anti-periodic boundary condition for all three of the pions; anti-periodic boundary conditions for the neutral pion have remained elusive. Obvious choices get tantalizingly close to the desired restless pions but, upon close inspection, have undesired consequences. For instance, twisted boundary conditions [10] allow for continuous momentum transfer by providing hadrons a momentum kick at the boundary. But, as we will explain in sec. II, these twisted boundary conditions do not affect the signal-to-noise issue we are interested in. The G-parity boundary condition suggested in references [8,9] breaks both the spatial subset of the hypercubic rotation invariance and chiral symmetry [11]. In ref. [9], an isospin boundary condition was used, q(L) = τ_3 q(0), but this leaves the neutral pion unaffected. This allows for an extraction of the ΔI = 3/2 K → ππ amplitude (in which the pions are in an I = 2 final state) but it does not serve our purpose of reducing the statistical noise for baryon calculations. Various "hybrid boundary conditions" have been employed, first in the numerical study of penta-quark states [12]. In this first implementation, the u and d quarks were given anti-periodic boundary conditions while the s quark was given periodic boundary conditions, allowing for a definitive identification of bound vs. scattering states by forcing the nK-system to have non-zero relative momentum (the scattering state) while leaving a possible Θ+ (uudds̄) resonant state unaffected (the bound state).
In a second variation of the hybrid boundary condition, used in the numerical study of charmonium [13] and a possible tetra-quark state [14], an anti-periodic boundary condition is imposed upon the quarks while the anti-quarks are given periodic boundary conditions. This allows for the same identification of bound and scattering states as the first variant; however, this second hybrid boundary condition violates charge conjugation invariance. Furthermore, neither variant of these hybrid boundary conditions will help with the signal-to-noise issue we want to address.¹ An axial twisted boundary condition, q(L) = γ_5 q(0) (and similar choices), provides for anti-periodic pions but additionally makes σ ∼ \bar q q anti-periodic and alters the QCD pattern of symmetry breaking.
We propose a novel approach to this problem making use of an orbifold boundary condition. Similar constructions have been employed in the context of the "chirality problem"
in extra-dimensional extensions of the Standard Model [15], domain-wall fermions [16,17] and the Schrödinger functional formalism [18]. Instead of relating the field values at the two ends of the box (z = 0 and z = L), we impose periodic boundary conditions on an extended box −L < z < L. However, the fields at negative values of z are not independent, but are determined from those with positive z. By appropriately choosing a relation between the quark and gluon fields in these two halves of the lattice (the orbifold condition), we can enforce a π(−z) = −π(z) condition, eliminating the zero momentum mode for all pions, making them restless.
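The mechanism can be illustrated numerically: split an arbitrary periodic field on [−L, L) into its even and odd parts under z → −z. The sketch below (ours; the test field f is an arbitrary choice) shows that the odd sector, the one an odd identification forces the pions into, carries exactly no zero-momentum component, while the even sector generically does:

```python
import numpy as np

N, L = 64, 1.0
k = np.arange(N)
z = -L + 2 * L * k / N                      # periodic grid on [-L, L)
f = np.exp(np.sin(np.pi * z / L)) + 0.3 * np.cos(2 * np.pi * z / L)  # arbitrary periodic field

f_ref = f[(-k) % N]                         # f(-z) sampled on the same grid
f_even = 0.5 * (f + f_ref)                  # phi(z) = +phi(-z) sector
f_odd = 0.5 * (f - f_ref)                   # phi(z) = -phi(-z) sector

zero_mode_even = abs(np.fft.fft(f_even)[0]) / N   # k = 0 Fourier amplitude
zero_mode_odd = abs(np.fft.fft(f_odd)[0]) / N
print(zero_mode_even, zero_mode_odd)
```

The odd part is built entirely from sin(nπz/L) modes with n ≥ 1, so its lowest allowed momentum is π/L, exactly as in eq. (7) below.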
II. SIGNAL-TO-NOISE RATIO ESTIMATES
Here we review the argument estimating the statistical noise for lattice QCD correlation functions [19]. Consider first a nucleon correlator
C(t) = \langle q(t)q(t)q(t)\, \bar q(0)\bar q(0)\bar q(0) \rangle,
where, for clarity, we have suppressed the Dirac, flavor and color indices. At large times, C(t) is dominated by the intermediate state of lowest energy with the quantum numbers of the nucleon:

C(t) \xrightarrow{t\to\infty} A e^{-Mt},   (1)
where M is the nucleon mass. In a Monte Carlo calculation, C(t) is estimated by an average over N gauge configurations,

C(t) \cong \bar C(t) = \frac{1}{N}\sum_A S_A(t)S_A(t)S_A(t) \equiv \langle S_A^3(t)\rangle,   (2)
where S_A(t) is the quark propagator in each one of the gauge configurations, A. The variance in this estimate is given by

\sigma_C^2(t) = \frac{1}{N}\sum_A \left| S_A(t)S_A(t)S_A(t) - \bar C(t) \right|^2 = \langle S_A^3(t)\, S_A^{\dagger 3}(t)\rangle - |\bar C(t)|^2.   (3)
For large times, C^2(t) \sim e^{-2Mt}, while the large-time behavior of \langle S_A^3(t)\, S_A^{\dagger 3}(t)\rangle can be found by noticing that

\langle S_A^3(t)\, S_A^{\dagger 3}(t)\rangle = \langle q^3(t) Q^3(t)\, \bar q^3(0) \bar Q^3(0)\rangle \xrightarrow{t\to\infty} B e^{-3 m_\pi t},   (4)

where Q is a fictitious quark with identical quantum numbers and properties as the q quarks.²
The long-time behavior of the correlator in eq. (4) is then dominated by the intermediate state of lowest energy with the quantum numbers of three \bar q Q mesons. Since they have the same mass as the \bar q q mesons, this lowest-energy state is given by three times the pion mass. Thus, for sufficiently light pions, \langle S_A^3(t)\, S_A^{\dagger 3}(t)\rangle decays at a rate smaller than C^2(t). The signal-to-noise ratio of the nucleon correlator is then given by

\frac{\bar C(t)}{\sqrt{\frac{1}{N}\sigma_C^2(t)}} \xrightarrow{t\to\infty} \frac{A\sqrt{N}\, e^{-Mt}}{e^{-\frac{3}{2} m_\pi t}} \sim \sqrt{N}\, e^{-(M - \frac{3}{2} m_\pi)t}.   (5)

We show in fig. (1) the signal-to-noise ratio in an actual lattice QCD calculation (details of the simulation can be found in refs. [1,20]) as well as the estimate in eq. (5).
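A toy Monte Carlo reproduces the scaling of eq. (5). In this sketch (ours, with illustrative masses in lattice units; the paper's argument needs no simulation) each configuration contributes the signal e^{-Mt} plus Gaussian noise of size e^{-(3/2)m_π t}, so the ensemble signal-to-noise ratio falls like √N e^{-(M-(3/2)m_π)t}:

```python
import numpy as np

rng = np.random.default_rng(0)
M, m_pi = 1.2, 0.35           # illustrative masses in lattice units
N, T = 4000, 12               # gauge configurations, time extent

t = np.arange(T)
signal = np.exp(-M * t)
noise_scale = np.exp(-1.5 * m_pi * t)        # sqrt of the 3-pion variance channel, eq. (4)
samples = signal + noise_scale * rng.normal(size=(N, T))

mean = samples.mean(axis=0)                   # the estimator of eq. (2)
err = samples.std(axis=0, ddof=1) / np.sqrt(N)  # the error of eq. (3)
sn = mean / err                               # measured signal-to-noise ratio

predicted = np.sqrt(N) * np.exp(-(M - 1.5 * m_pi) * t)  # eq. (5)
```

Doubling N buys only a factor of √2 in signal-to-noise, while every unit of time costs a factor e^{-(M-3m_π/2)}, which is why the exponent, not the statistics, is the bottleneck.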
The estimate in eq. (5) is easily generalized to correlation functions of multi-baryons and baryons with strange quarks. In the case of two-nucleon correlators, for example, the signal-to-noise ratio is proportional to

\sqrt{N}\, e^{-(2M - 3m_\pi)t}.

² This explains why the twisted and hybrid boundary conditions do not help the signal-to-noise problem. The fictitious Q quarks have the same boundary conditions as the q quarks, and thus the q\bar Q and Q\bar q mesons have periodic boundary conditions and are allowed a zero-momentum mode.
Recent lattice studies of nuclear forces (and hyperon-nucleon interactions) were severely hindered by the fast decrease of the signal-to-noise ratio with time [1,3]. The correlators at short times cannot be used for fitting purposes since they are contaminated by excited states,³ while at later times the statistical noise overwhelms the signal, leaving only a very narrow plateau from which the physics is extracted. This is in stark contrast to lattice calculations of ππ interactions [21] and other two-meson systems [22].
III. PARITY ORBIFOLDS
Let us now describe the basic idea of the orbifold construction in the case when only one dimension is orbifolded. Consider a lattice whose z coordinate belongs to the interval [0, L]. Extend it to [−L, L] and identify the points z = −L and z = L, effectively turning the interval [−L, L] into a circle. Let all fields, φ(z), satisfy the periodic condition φ(L) = φ(−L). Then identify the points z and −z by relating φ(z) to φ(−z), effectively transforming the circle into a line segment (including the boundary) as shown in fig. (2). In the simplest case, φ(z) = ±φ(−z). If the plus sign is chosen, φ(z) will be a linear combination of spatially symmetric wavefunctions,
\phi_+(z) = \sum_{n=0}^{\infty} A_+^{(n)} \cos\frac{n\pi z}{L}.   (6)
If, however, the minus sign is chosen, then φ(z) will be a linear combination of antisymmetric wavefunctions,

\phi_-(z) = \sum_{n=1}^{\infty} A_-^{(n)} \sin\frac{n\pi z}{L},   (7)
and consequently there is no zero mode for this field. The lowest momentum allowed is k_min = π/L, with an energy of \sqrt{(\pi/L)^2 + m^2}. This upward shift in the minimum allowed energy is the desired result. In order to eliminate the pions at rest we will require that π(z) = −π(−z). The ways to achieve this by imposing orbifold conditions on the quark and gluon fields, and the generalization to higher dimensions, will be discussed next.

³ In the single baryon sector, a significant improvement in the isolation of the ground state and excited states at early times has been achieved with the use of multiple operators combined with quark and gluon smearing [6]. The equivalent study for operators coupling to multi-nucleon states has not been performed and is anticipated to be significantly more challenging and costly given the larger number of operators and quark contractions.

A. One-dimensional S^1/Z_2 parity orbifold

In the simplest version of our proposal the orbifold trick is used in only one of the spatial directions. Consider QCD fields in the periodic box [0, L] × [0, L] × [−L, L] × [0, β] satisfying the "parity orbifolding" conditions (the issues we discuss here belong to the infrared regime and we use a continuum notation)
A_\mu(t, x, y, z) = A_\mu(t, x, y, -z), \quad \text{for } \mu \neq 3,
A_3(t, x, y, z) = -A_3(t, x, y, -z),
q(t, x, y, z) = P_z\, q(t, x, y, -z),
\bar q(t, x, y, z) = \bar q(t, x, y, -z)\, P_z,   (8)
where P_z = i\gamma_5\gamma_3 is the z-parity operator corresponding to a reversal of the z direction, and we work in Euclidean space.⁴

⁴ We use the conventions \gamma_5^2 = \gamma_\mu^2 = 1, \gamma_\mu^\dagger = \gamma_\mu.

The z-parity operator P_z is obtained from the usual parity operator γ_0, corresponding to a simultaneous reversal of all three spatial axes, combined with a rotation by π around the z-axis. The conditions in eq. (8) relate the QCD fields in one side of the box to their parity conjugates in the opposite side. Notice that, since parity is a symmetry of the theory, the contribution to the action from the z < 0 region is exactly the same as that from the z > 0 region, and the computational cost of using the extended box, [−L, L], is the same as that of the smaller box, [0, L]. The only effect of the orbifold condition is on the link connecting the z < 0 and z > 0 regions. In other words, it acts as a boundary condition at z = 0. In fact, consider the orbifolded action in the case of Wilson quarks
S = \kappa\left[\bar q_{-1}(\gamma_3 - r)q_1 - \bar q_1(\gamma_3 + r)q_{-1}\right] + a_4\left(\bar q_1 q_1 + \bar q_{-1} q_{-1}\right) + \cdots = -2\kappa\, \bar q_1(\gamma_3 + r)P_z\, q_1 + 2 a_4\, \bar q_1 q_1 + \cdots,   (9)
where κ is the hopping parameter, the index on the quark fields denotes the position in z (the remaining coordinates are implicit), and the dots denote the contributions from the two sides of the bulk, z > 0 and z < 0 (which are equal to each other). We see then that the orbifolded [−L, L] lattice is equivalent to a [0, L] lattice with some extra terms residing at the boundary, as is the case with any boundary condition.⁵ Notice that we could have equally used the opposite z-parity operator, −P_z, implementing a reversal of all three spatial axes followed by a rotation by −π about the z-axis. The difference between rotating by π in the positive or in the negative direction amounts to a 2π rotation which, for spin-1/2 fermions, leads to a minus sign difference between P_z and −P_z.
Physical observables, being quark bilinears, generally do not depend on this sign. As can be seen in eq. (9), however, the boundary terms are linear in P_z and can distinguish the choice of sign of P_z. This shows that the orbifold condition breaks the z → −z symmetry.
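The algebra behind P_z = iγ_5γ_3 is easy to verify numerically. The sketch below (our check, using one explicit Euclidean representation with γ_μ² = γ_5² = 1) confirms that P_z² = 1, that conjugation by P_z flips γ_3 while leaving the other γ_μ alone (a z-reversal), that γ_5 is odd under it (so a q̄γ_5τq pion picks up a minus sign while q̄q does not), and that the boundary-term identity used in eq. (9) holds:

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Hermitian Euclidean gamma matrices with {g_mu, g_nu} = 2 delta_{mu nu}
g = [np.kron(s2, s) for s in (s1, s2, s3)]   # gamma_1, gamma_2, gamma_3
g.append(np.kron(s3, I2))                    # gamma_4 (Euclidean time direction)
g5 = g[0] @ g[1] @ g[2] @ g[3]

g3 = g[2]
Pz = 1j * g5 @ g3                            # the z-parity operator

I4 = np.eye(4)
ok_sq = np.allclose(Pz @ Pz, I4)             # Pz^2 = 1, so Pz^{-1} = Pz
ok_flip = np.allclose(Pz @ g3 @ Pz, -g3)     # z direction is reversed
ok_keep = all(np.allclose(Pz @ g[i] @ Pz, g[i]) for i in (0, 1, 3))
ok_pion = np.allclose(Pz @ g5 @ Pz, -g5)     # gamma_5 odd: pion odd, sigma even

# boundary-term identity behind eq. (9): Pz(g3 - r) - (g3 + r)Pz = -2(g3 + r)Pz
r = 1.0
lhs = Pz @ (g3 - r * I4) - (g3 + r * I4) @ Pz
rhs = -2 * (g3 + r * I4) @ Pz
ok_eq9 = np.allclose(lhs, rhs)
```

The eq. (9) check uses the orbifold substitutions q_{-1} = P_z q_1 and q̄_{-1} = q̄_1 P_z, which reduce the hopping terms across z = 0 to the single boundary term quoted in the text.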
The parity orbifold condition on the quark and gluon fields implies orbifold conditions for the hadronic fields. If we identify the pion field with the π ∼ \bar q \gamma_5 \tau q interpolating field, we see that it satisfies the desired

π(t, x, y, z) = -π(t, x, y, -z)   (10)

orbifold condition. In fact, the same condition will follow if any other pion interpolating field is used like, for instance, π ∼ \bar q \tau q\, F_{\mu\nu}\tilde F_{\mu\nu}, since it depends only on the fact that the pion has negative intrinsic parity. All parity-odd operators will satisfy a condition similar to eq. (10), while parity-even operators will satisfy the analogous equation without the minus sign. In particular, the σ field, σ ∼ \bar q q, has a zero mode and the QCD pattern of symmetry breaking is not affected by the orbifolding procedure. The nucleon fields satisfy

N(t, x, y, z) = -P_z\, N(t, x, y, -z),
\bar N(t, x, y, z) = -\bar N(t, x, y, -z)\, P_z,   (11)

⁵ Notice that, contrary to the continuum case, the boundary conditions in lattice field theory are already contained in the action. Different lattice action terms localized at the boundary imply different boundary conditions in the continuum, and the relation between them is, in general, a complicated dynamical question.
as can be seen using the interpolating field N ∼ q\, q^T \tau_2 C \gamma_5 q. In the non-relativistic domain, P_z = i\gamma_5\gamma_3 reduces to \sigma_3, and the allowed modes for the nucleon are

N(x, y, z) = e^{i\frac{n_x\pi}{2L}x + i\frac{n_y\pi}{2L}y} \times \begin{cases} \cos\!\left(\frac{n_z\pi z}{L}\right)\binom{1}{0}, & n_x, n_y, n_z = 0, 1, \cdots \\ \sin\!\left(\frac{n_z\pi z}{L}\right)\binom{0}{1}, & n_x, n_y = 0, 1, \cdots,\; n_z = 1, 2, \cdots. \end{cases}   (12)
Notice that only spin-up nucleons can be at rest. Consequently we can construct a spin-triplet two-nucleon state, like the deuteron, with zero momentum, but a spin-singlet two-nucleon state will necessarily have a minimum momentum equal to π/L. This asymmetry between spin up and down is a consequence of the breaking of the z → −z symmetry discussed above.
Unfortunately, the boundary term shown in eq. (9) is not γ_5-Hermitian and the fermion determinant is not positive definite. This makes simulations with dynamical quarks satisfying the parity orbifold condition impractical. However, this method is perfectly suited to implementation in the valence sector only, i.e. only on the propagators generated in the background of dynamical configurations. In refs. [23,24] it was argued that, up to exponentially suppressed corrections, for many channels of interest including baryon-baryon channels, different boundary conditions can be used in the valence and sea sectors of the theory, known as "partially twisted boundary conditions". Therefore, gauge configurations generated with sea quarks satisfying periodic boundary conditions can be used with valence quarks satisfying "parity orbifold" boundary conditions. Intuitively, the possibility of using different boundary conditions for sea and valence quarks follows from the observation that sea quarks can "notice" their different boundary conditions only if they propagate around the lattice. But, for observables without annihilation diagrams, the propagation of sea quarks around the lattice is suppressed by e^{-mL}, where m is the mass of the lightest hadron made of sea quarks or a mixture of valence and sea quarks. In our case, this is the pion mass. This argument is better appreciated by looking at the graphs in fig. (3), which display examples of processes contributing to baryon-baryon scattering. Only diagrams containing a baryon-baryon intermediate state give rise to power-law volume dependence (below the inelastic threshold). These two intermediate baryons are made of valence quarks and therefore satisfy the orbifold boundary condition. We stress that the rate at which the signal-to-noise ratio decreases is set by the valence nucleon and pion masses.
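As a rough numerical sketch of why the partial (valence-only) orbifolding is safe (our estimate; the m_π = 350 MeV, L = 2.5 fm inputs match the simulation parameters quoted later), the e^{-m_π L} factor suppressing the sea quarks' sensitivity to their boundary conditions is already at the percent level:

```python
import math

HBARC = 197.327  # hbar*c in MeV*fm


def boundary_suppression(m_MeV, L_fm):
    """e^{-mL} factor for a sea-quark loop wrapping around the lattice."""
    return math.exp(-m_MeV * L_fm / HBARC)


factor = boundary_suppression(350.0, 2.5)
print(f"e^(-m_pi L) = {factor:.4f}")
```

The suppression improves exponentially with the box size, so for larger volumes (or heavier sea pions) the mismatch between sea and valence boundary conditions becomes even less important.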
The increase in the pion minimum energy has an additional benefit. With the exception of the relation between two-particle energy levels and the S-matrix, described by the Lüscher formula, finite volume effects are suppressed by factors of e^{-E_\pi L}. An increase in the value of E_π is then clearly beneficial. This is especially important for the exponentially suppressed corrections to the Lüscher formula, where the suppression factor, formally of order e^{-E_\pi L}, can be sizable for realistic lattices and periodic pions with E_π = m_π [4]. These finite volume corrections can be estimated using an extension of chiral perturbation theory adapted to the case where valence and sea quarks obey different boundary conditions, in the molds of [23,24,25,26,27].
B. Three-dimensional parity T^3/Z_2 orbifold
The method of the previous section can be generalized in order to remove the zero-momentum modes of the pions in all three directions, further improving the signal-to-noise ratio. The simplest generalization of eq. (8) is

A_0(t, \mathbf r) = A_0(t, -\mathbf r),
A_i(t, \mathbf r) = -A_i(t, -\mathbf r), \quad i = 1, 2, 3,
q(t, \mathbf r) = P\, q(t, -\mathbf r),
\bar q(t, \mathbf r) = \bar q(t, -\mathbf r)\, P,   (13)

where P = γ_0 is the usual parity operator corresponding to the reversal of all three space directions. While the boundary conditions in eq. (8) can be seen as a mirror placed at z = 0, the conditions in eq. (13) can be visualized as a pin hole located at x = y = z = 0 with a lattice [−L/2, L/2] × [−L/2, L/2] × [−L, L] × [0, β]. Again, all three pions obey the odd orbifold condition π(t, \mathbf r) = -π(t, -\mathbf r), but now their minimum energy is \sqrt{3(\pi/L)^2 + m_\pi^2}. Nucleons obey the same conditions as the quarks, N(t, \mathbf r) = γ_0 N(t, -\mathbf r). Since in the non-relativistic limit γ_0 reduces to 1, non-relativistic nucleons satisfy periodic boundary conditions and contain zero modes. This property is very convenient when extracting low-energy phase shifts on the lattice: with the 3-D parity-orbifolding there is no restriction on the spin-isospin channels one can study in the ground state, and the standard Lüscher formula relating energy levels to phase shifts is unchanged. As will be exemplified below, the increase in the signal-to-noise ratio is dramatic.
IV. IMPACT ON LATTICE CALCULATIONS
A. Nuclear force studies
In order to provide an explicit example, we use the values of the parameters used in [1] to estimate the impact of the method advocated here on the expected rate at which the signal-to-noise ratio decreases with increasing time. We disregard the interaction energy between the hadrons and approximate the energy of the two-nucleon state by ≈ 2M. The energy of the three pions is approximated by ≈ 3m_π when periodic boundary conditions are used, 3\sqrt{(\pi/L)^2 + m_\pi^2} if the S^1/Z_2 orbifold is used, and 3\sqrt{3(\pi/L)^2 + m_\pi^2} if the T^3/Z_2 orbifold is used. The result is plotted in fig. (4). The inclusion of the interaction energy between the two nucleons would change the figure very little. In fact, for pion masses above 350 MeV the energy shifts found in [1] are of order 10 − 20 MeV. It is expected, however, that in a narrow band close to the physical value of m_π the energy shift should be larger [28], corresponding to the diverging scattering lengths, but still much smaller than the rest mass of the nucleons. Even the modest increase in the pion minimum energy found in the one-dimensional orbifolding has a potentially significant impact on noise-limited measurements. In the case of the three-dimensional orbifolding the potential improvement is enormous (notice the log scale in the corresponding graph).
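The decay exponents behind fig. (4) can be tabulated directly. In this sketch (ours; the nucleon mass M ≈ 1200 MeV at m_π = 350 MeV is an illustrative guess, not a value taken from the paper) the exponent 2M − 3E_π controls how fast the two-nucleon signal-to-noise ratio degrades:

```python
import math

HBARC = 197.327                   # hbar*c in MeV*fm
m_pi, L, M = 350.0, 2.5, 1200.0   # MeV, fm, MeV; M is an illustrative guess

k = math.pi * HBARC / L           # minimum momentum unit pi/L in MeV


def decay_exponent(n_dirs):
    """2M - 3 E_pi in MeV, with pions restless in n_dirs spatial directions."""
    E_pi = math.sqrt(n_dirs * k ** 2 + m_pi ** 2)
    return 2 * M - 3 * E_pi


rate_periodic = 2 * M - 3 * m_pi        # periodic boundary conditions: E_pi = m_pi
rate_s1 = decay_exponent(1)             # S^1/Z_2 orbifold
rate_t3 = decay_exponent(3)             # T^3/Z_2 orbifold
print(rate_periodic, rate_s1, rate_t3)  # smaller exponent = slower S/N degradation
```

With these inputs the exponent drops from about 1350 MeV (periodic) to about 1110 MeV (S^1/Z_2) and about 740 MeV (T^3/Z_2), which over a few tenths of a fermi of Euclidean time translates into orders of magnitude in signal-to-noise.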
B. Impact on K → ππ

As pointed out in [8,9], the extraction of the K → ππ amplitude with the Lellouch-Lüscher method [7] can benefit from eliminating pion zero modes. The method to eliminate pions at rest discussed here can only be applied to the I = 2 channel. In the I = 0 channel, the use of different boundary conditions in the valence and sea sectors alters the amplitude by factors that are not exponentially suppressed. Of course, a modified chiral perturbation theory taking into account the differences of the valence and sea sectors can still be used to relate the results of such a lattice calculation to the real-world QCD amplitude.
V. DISCUSSION
We have introduced "restless pions" boundary conditions designed to reduce the rapid degradation of the signal-to-noise ratio which plagues studies of heavy systems with lattice QCD. We have shown how these boundary conditions can be implemented with a parity-orbifold construction in either one or three spatial dimensions. Unfortunately, the action at the boundary is not γ_5-Hermitian, and so this particular construction is not suitable for the sea sector. However, the method is perfectly suited for implementation on the valence fermions. For non-scalar channels, the difference in sea and valence boundary conditions is felt only through exponentially small terms. The numerical cost of implementing these parity-orbifolded valence propagators is the same as for propagators with (anti-)periodic boundary conditions, as the fields in each half of the bulk are not independent, and therefore the implementation is achieved with a special boundary condition on the non-doubled lattice.
FIG. 1: Log of the signal-to-noise ratio of the two-nucleon correlator in the spin singlet channel as a function of Euclidean time (from the calculation described in [20]). The pion mass is about 350 MeV. The signal-to-noise estimate of eq. (5) was normalized to the lattice calculation at t = 11.
FIG. 2: Identification of z and −z points reduces the circle to a line segment.
FIG. 3: Examples of two-nucleon graphs containing sea quarks. The left column shows the graphs at the QCD level (dotted lines represent sea quarks) and the right column represents the same graphs at the low-energy effective theory level. The graphs in the first row are proportional to e^{-\Lambda_{QCD} L}, the second and third are proportional to e^{-m_\pi L}. The last row shows a graph with a power-law dependence on the volume.
FIG. 4: Left: estimate of the signal-to-noise ratio with the S^1/Z_2 orbifold condition, e^{-(2M - 3\sqrt{(\pi/L)^2 + m^2})t} (solid line), and with periodic boundary conditions, e^{-(2M - 3m)t} (dashed line), as a function of t. Right: log plot of the signal-to-noise ratio with the T^3/Z_2 orbifold condition, e^{-(2M - 3\sqrt{3(\pi/L)^2 + m^2})t} (solid line), and with periodic boundary conditions, e^{-(2M - 3m)t} (dashed line), as a function of t. In both figures the pion mass is 350 MeV and the box size is L = 2.5 fm.
¹ See section II for details.
Acknowledgments

We would like to thank T. Cohen and K. Orginos for conversations on this subject and the NPLQCD collaboration for the use of their data in fig. (1). This research was supported in part by the U.S. Dept. of Energy under grant no. DE-FG02-93Er-40762.
[1] S. R. Beane, P. F. Bedaque, K. Orginos and M. J. Savage, Phys. Rev. Lett. 97, 012001 (2006) [arXiv:hep-lat/0602010].
[2] N. Ishii, S. Aoki and T. Hatsuda, arXiv:nucl-th/0611096.
[3] S. R. Beane, P. F. Bedaque, T. C. Luu, K. Orginos, E. Pallante, A. Parreno and M. J. Savage [NPLQCD Collaboration], arXiv:hep-lat/0612026.
[4] I. Sato and P. F. Bedaque, arXiv:hep-lat/0702021.
[5] J. W. Chen, D. O'Connell and A. Walker-Loud, arXiv:hep-lat/07060035.
[6] S. Basak et al., Phys. Rev. D 72, 094506 (2005) [arXiv:hep-lat/0506029]; S. Basak et al. [Lattice Hadron Physics Collaboration (LHPC)], Phys. Rev. D 72, 074501 (2005) [arXiv:hep-lat/0508018]; S. Basak et al., PoS LAT2005, 076 (2006) [arXiv:hep-lat/0509179]; A. C. Lichtl, arXiv:hep-lat/0609019; S. Basak et al., arXiv:hep-lat/0609052; S. Basak et al. [Lattice Hadron Physics Collaboration], PoS LAT2006, 197 (2006) [arXiv:hep-lat/0609072].
[7] L. Lellouch and M. Luscher, Commun. Math. Phys. 219, 31 (2001) [arXiv:hep-lat/0003023].
[8] C. H. Kim and N. H. Christ, Nucl. Phys. Proc. Suppl. 119, 365 (2003) [arXiv:hep-lat/0210003].
[9] C. H. Kim, Nucl. Phys. Proc. Suppl. 129, 197 (2004) [arXiv:hep-lat/0311003].
[10] P. F. Bedaque, Phys. Lett. B 593, 82 (2004) [arXiv:nucl-th/0402051].
[11] U. J. Wiese, Nucl. Phys. B 375, 45 (1992).
[12] N. Ishii, T. Doi, H. Iida, M. Oka, F. Okiharu and H. Suganuma, Phys. Rev. D 71, 034001 (2005) [arXiv:hep-lat/0408030]; N. Ishii, T. Doi, Y. Nemoto, M. Oka and H. Suganuma, Phys. Rev. D 72, 074503 (2005) [arXiv:hep-lat/0506022].
[13] H. Iida, T. Doi, N. Ishii, H. Suganuma and K. Tsumura, Phys. Rev. D 74, 074502 (2006) [arXiv:hep-lat/0602008].
[14] H. Suganuma, K. Tsumura, N. Ishii and F. Okiharu, arXiv:0707.3309 [hep-lat].
[15] K. R. Dienes, E. Dudas and T. Gherghetta, Nucl. Phys. B 537, 47 (1999) [arXiv:hep-ph/9806292].
[16] D. B. Kaplan, Phys. Lett. B 288, 342 (1992) [arXiv:hep-lat/9206013].
[17] M. Luscher, arXiv:hep-th/0102028.
[18] Y. Taniguchi, JHEP 0512, 037 (2005) [arXiv:hep-lat/0412024]; S. Sint, PoS LAT2005, 235 (2006) [arXiv:hep-lat/0511034]; M. Luscher, JHEP 0605, 042 (2006) [arXiv:hep-lat/0603029].
[19] G. P. Lepage, "The Analysis Of Algorithms For Lattice Field Theory", lectures given at TASI'89 Summer School, Boulder, CO, 1989.
[20] S. R. Beane, K. Orginos and M. J. Savage, Nucl. Phys. B 768, 38 (2007) [arXiv:hep-lat/0605014]; S. R. Beane, K. Orginos and M. J. Savage, arXiv:hep-lat/0604013.
[21] S. R. Sharpe et al., Nucl. Phys. B 383, 309 (1992); R. Gupta et al., Phys. Rev. D 48, 388 (1993); Y. Kuramashi et al., Phys. Rev. Lett. 71, 2387 (1993); M. Fukugita et al., Phys. Rev. D 52, 3003 (1995);
. C A Liu, arXiv:hep-lat/0109010C. A. Liu et al. arXiv:hep-lat/0109010;
. S Aoki, JLQCD CollaborationPhys. Rev. D. 6677501S. Aoki et al. [JLQCD Collaboration], Phys. Rev. D 66, 077501 (2002);
. S Aoki, CP-PACS CollaborationPhys. Rev. D. 6714502S. Aoki et al. [CP-PACS Collaboration], Phys. Rev. D 67, 014502 (2003);
. T Yamazaki, CP-PACS CollaborationPhys. Rev. D. 7074513T. Yamazaki et al. [CP-PACS Collaboration], Phys. Rev. D 70, 074513 (2004);
. X Du, Int. J. Mod. Phys. A. 195609X. Du et al. Int. J. Mod. Phys. A 19, 5609 (2004);
. S Aoki, CP-PACS CollaborationPhys. Rev. D. 7194504S. Aoki et al. [CP-PACS Collaboration], Phys. Rev. D 71, 094504 (2005);
. S R Beane, NPLQCD CollaborationPhys. Rev. D. 7354503S. R. Beane et al. [NPLQCD Collaboration], Phys. Rev. D 73, 054503 (2006);
. S R Beane, NPLQCD CollaborationarXiv:0706.3026hep-latS. R. Beane et al. [NPLQCD Collaboration], arXiv:0706.3026 [hep-lat].
. C Miao, Phys. Lett. B. 595400C. Miao et al. Phys. Lett. B 595, 400 (2004);
. S R Beane, NPLQCD CollaborationPhys. Rev. D. 74114503S. R. Beane et al. [NPLQCD Collaboration], Phys. Rev. D 74, 114503 (2006);
. C T Sachrajda, G Villadoro, arXiv:hep-lat/0411033Phys. Lett. B. 60973C. T. Sachrajda and G. Villadoro, Phys. Lett. B 609, 73 (2005) [arXiv:hep-lat/0411033].
. P F Bedaque, J W Chen, arXiv:hep-lat/0412023Phys. Lett. B. 616208P. F. Bedaque and J. W. Chen, Phys. Lett. B 616, 208 (2005) [arXiv:hep-lat/0412023].
. B C Tiburzi, arXiv:hep-lat/0504002Phys. Lett. B. 61740B. C. Tiburzi, Phys. Lett. B 617, 40 (2005) [arXiv:hep-lat/0504002];
. arXiv:hep-lat/0607019Phys. Lett. B. 641Phys. Lett. B 641, 342 (2006) [arXiv:hep-lat/0607019].
. C W Bernard, M F L Golterman, arXiv:hep-lat/9306005Phys. Rev. D. 49486C. W. Bernard and M. F. L. Golterman, Phys. Rev. D 49, 486 (1994) [arXiv:hep-lat/9306005];
. S R Sharpe, Phys. Rev. D. 567052Erratum-ibid. D 62, 099901 (2000)S. R. Sharpe, Phys. Rev. D 56, 7052 (1997) [Erratum-ibid. D 62, 099901 (2000)]
. S R Sharpe, N Shoresh, arXiv:hep-lat/0006017Phys. Rev. D. 6294503S. R. Sharpe and N. Shoresh, Phys. Rev. D 62, 094503 (2000) [arXiv:hep-lat/0006017];
. S R Sharpe, N Shoresh, arXiv:hep-lat/0108003Phys. Rev. D. 64114510S. R. Sharpe and N. Shoresh, Phys. Rev. D 64, 114510 (2001) [arXiv:hep-lat/0108003];
. J N Labrenz, S R Sharpe, arXiv:hep-lat/9605034Phys. Rev. D. 544595J. N. Labrenz and S. R. Sharpe, Phys. Rev. D 54, 4595 (1996) [arXiv:hep-lat/9605034];
. J W Chen, M J Savage, arXiv:hep-lat/0111050Phys. Rev. D. 6594001J. W. Chen and M. J. Savage, Phys. Rev. D 65, 094001 (2002) [arXiv:hep-lat/0111050];
. S R Beane, M J Savage, arXiv:hep-lat/0203003Nucl. Phys. A. 709319S. R. Beane and M. J. Savage, Nucl. Phys. A 709, 319 (2002) [arXiv:hep-lat/0203003];
. S R Beane, P F Bedaque, A Parreno, M J Savage, arXiv:hep-lat/0312004Phys. Lett. B. 585106S. R. Beane, P. F. Bedaque, A. Parreno and M. J. Savage, Phys. Lett. B 585, 106 (2004) [arXiv:hep-lat/0312004].
| zyda_arxiv-0270000 |
Pair production of neutral Higgs bosons from the left-right twin Higgs model at the ILC and LHC
21 May 2009
Wei Ma, Chong-Xing Yue, Yong-Zhi Wang

Department of Physics, Liaoning Normal University, Dalian 116029, P. R. China
PACS numbers: 12.60.Cn, 14.80.Cp, 12.15.Ji

* Electronic address: cxyue@lnnu.edu.cn
In the framework of the left-right twin Higgs model, we study pair production of the neutral Higgs bosons at the International Linear Collider (ILC) and the CERN LHC. We find that the production cross section of the process e⁺e⁻ → φ⁰h is at the level of several tens of fb at the ILC, while the production cross sections of the φ⁰φ⁰ pair and the φ⁰h pair are at the level of several hundreds of fb at the LHC. As long as the neutral Higgs boson φ⁰ is not too heavy, we conclude that its pair production might be used to test the left-right twin Higgs model at the LHC or in future ILC experiments.
I. Introduction
The Higgs mechanism is the heart of the standard model (SM), providing masses to the gauge bosons via electroweak symmetry breaking (EWSB). However, the SM fails to explain the origin of fermion masses and suffers from naturalness problems. Many alternative new physics models with extended Higgs sectors are free from these difficulties. The hunt for Higgs bosons has become one of the most important goals for present and future high energy collider experiments. Apart from the SM, neutral Higgs bosons appear in almost every scenario exploring new phenomena [1]. Pair production of neutral Higgs bosons at the CERN LHC, which will provide a way to test the Higgs boson self-coupling, may be sensitive to new physics [2,3]. Many works have been devoted to studies of neutral Higgs pair production at hadron colliders in a model-independent way [4], in the SM [5-8], and in new physics models beyond the SM, such as little Higgs models [9], Randall-Sundrum-like models [10], top condensation models [11], supersymmetric models (SUSY) [12,13] and models of universal extra dimensions (UED) [14].
The predictions of the SM beyond one-loop level agree with all existing precision experimental data. But in the SM the Higgs boson mass suffers from an instability under radiative corrections, which is called the "hierarchy problem" [15]. Recently, the twin Higgs mechanism has been proposed as a solution to the little hierarchy problem. The Higgs bosons emerge as pseudo-Goldstone bosons once a global symmetry is spontaneously broken; gauge and Yukawa interactions that break the global symmetry give masses to the Higgses. The twin Higgs mechanism can be implemented in left-right models, with the additional discrete symmetry being identified with the left-right symmetry [16,17]. The left-right twin Higgs (LRTH) model has been studied in Refs. [17-21].
In the context of the LRTH model, pair production of the charged Higgs bosons (φ⁺, φ⁻) at the ILC and LHC was studied in Ref. [21], which did not consider production of the neutral Higgs bosons (φ⁰, h). To our knowledge, production of neutral Higgs pairs at the LHC and the ILC in the LRTH model has not been considered so far, and this is the main aim of this paper.
Besides the SM-like Higgs boson h, there are two additional neutral Higgs bosons in the LRTH model, ĥ⁰₂ and φ⁰. The neutral Higgs boson ĥ⁰₂ is a possible dark matter candidate that couples only to the gauge bosons (including the SM gauge bosons γ, Z, W, and the new gauge boson Z_H). The production cross section of ĥ⁰₂ at colliders is very small and it escapes the detector; therefore, in this paper we will not discuss production of ĥ⁰₂ at the ILC or LHC. The neutral Higgs boson φ⁰ is a pseudoscalar that couples to both the SM fermions and gauge bosons. The neutral Higgs boson pair φ⁰h can be produced via the processes e⁺e⁻ → Z(Z_H) → φ⁰h at the ILC, and via the partonic processes qq̄ → φ⁰h (q = u, c, d, s, b) and gg → φ⁰h at the LHC. The neutral Higgs pair φ⁰φ⁰ can only be produced via the partonic process gg → φ⁰φ⁰ and the t-channel partonic process bb̄ → φ⁰φ⁰ at the LHC. We calculate all of these processes.
Our numerical results show that, for m_h = 120 GeV, 120 GeV ≤ m_φ⁰ ≤ 180 GeV and 500 GeV ≤ f ≤ 1500 GeV: (i) the production cross section of φ⁰h at the ILC with the center-of-mass (c.m.) energy √s = 500 GeV is in the range of 0.92 fb ∼ 20 fb; (ii) the production cross section of φ⁰h at the LHC with the c.m. energy √s = 14 TeV is in the range of 34 fb − 306 fb, and the main contribution comes from the light quarks; (iii) the production cross section of φ⁰φ⁰ at the LHC is in the range of 4 fb − 122 fb, and the main contribution comes from the top-quark loop.
This paper is organized as follows. In Sec. II, we briefly review the essential features of the LRTH model; the relevant couplings of the neutral Higgs bosons to other particles and the decay features of the neutral Higgs boson φ⁰ are also discussed in this section. In Secs. III and IV, we give our numerical results for pair production of neutral Higgs bosons predicted by the LRTH model at the ILC and LHC, respectively. Our conclusions are given in Sec. V.
II. The LRTH Model
The LRTH model was first proposed in Ref. [16], and the details of the model as well as the particle spectrum, Feynman rules, and some phenomenological analyses have been studied in Ref. [17]. Here we briefly review the essential features of the model. The fermion sector of the LRTH model is similar to that of the SM, with the right-handed quarks (u_R, d_R) and leptons (l_R, ν_R) forming fundamental representations of SU(2)_R.
In order to give the top quark a mass of the order of the electroweak scale, a pair of vector-like quarks Q_L and Q_R is introduced. The mass eigenstates, which contain the SM top quark t and a heavy top partner T, are mixtures of the gauge eigenstates. Their masses are given by
m_t² = (1/2)(M² + y²f² − N_t),    M_T² = (1/2)(M² + y²f² + N_t),  (1)
where N_t = √[(y²f² + M²)² − y⁴f⁴ sin²2x] with x = ν/(√2 f), in which ν = 246 GeV is the scale of the EWSB. Provided M_T ≤ f and the parameter y is of order one, the top Yukawa coupling will also be of order one. The parameter M is essential to the mixing between the SM top quark and its partner T.
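As a quick numerical check of Eq. (1), the sketch below evaluates m_t and M_T for illustrative parameter values (the function name and the chosen y, f, M are ours, not taken from the paper):

```python
import math

def top_masses(y, f, M, nu=246.0):
    """Eq. (1): masses of the SM top t and its heavy partner T in the LRTH model.

    x = nu/(sqrt(2) f) and N_t = sqrt((y^2 f^2 + M^2)^2 - y^4 f^4 sin^2(2x)).
    All inputs and outputs are in GeV.
    """
    x = nu / (math.sqrt(2.0) * f)
    Nt = math.sqrt((y**2 * f**2 + M**2) ** 2
                   - y**4 * f**4 * math.sin(2.0 * x) ** 2)
    m_t = math.sqrt(0.5 * (M**2 + y**2 * f**2 - Nt))
    M_T = math.sqrt(0.5 * (M**2 + y**2 * f**2 + Nt))
    return m_t, M_T
```

Note the exact sum rule m_t² + M_T² = M² + y²f², which follows directly from Eq. (1) and is a convenient consistency check of any implementation.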
According to the symmetry-breaking pattern discussed above, with certain reparametrizations of the fields, the coupling expressions relevant to our calculation are [17]:

φ⁰ d̄_i d_i (i = 1, 2, 3): i m_d_i γ₅/(√2 f);
φ⁰ ū_i u_i (i = 1, 2): −i m_u_i γ₅/(√2 f);
h t̄t: −e m_t C_L C_R/(2 m_W S_W);
φ⁰ t̄t: −i y S_R S_L γ₅/√2;
h T̄T: −y(S_R S_L − C_L C_R x)/√2;
φ⁰ T̄T: −i y C_L C_R γ₅/√2;
h φ⁰ φ⁰: x(30 p₂·p₃ + 11 p₁·p₁)/(27√2 f);
h φ⁰ Z_µ: i e x p₃µ/(6 C_W S_W);
Z_Hµ ū_i u_i (i = 1, 2): −e γ_µ [2S_W² P_L + (1 − 7cos2θ_W) P_R]/(12 C_W S_W cos2θ_W);
Z_Hµ d̄_i d_i (i = 1, 2, 3): −e γ_µ [S_W² P_L + (3 − 5S_W²) P_R]/(6 C_W S_W cos2θ_W);
h φ⁰ Z_Hµ: i e x [(14 − 17S_W²) p₂µ − (4 − S_W²) p₁µ]/(18 S_W C_W cos2θ_W),  (2)
where p₁, p₂, and p₃ refer to the incoming momenta of the first, second and third particles, respectively, and u_i and d_i represent the up-type and down-type fermions. S_W = sin θ_W, C_W = cos θ_W, and θ_W is the Weinberg angle. At the leading order in 1/f, the sines of the mixing angles α_L and α_R can be written as
S_L = sin α_L ≃ (M/M_T) sin x,    S_R = sin α_R ≃ (M/M_T)(1 + sin²x).  (3)
C_L and C_R are the cosines of the mixing angles α_L and α_R, respectively. P_L(R) = (1 ∓ γ₅)/2 is the left- (right-)handed projection operator.
In the framework of the LRTH model, the mass of the neutral Higgs boson φ⁰ can be anything below f; here we consider the possibility in which the mass is around 150 GeV [17]. Similar to the SM Higgs boson, φ⁰ can decay to γγ through the top-quark loop and the heavy-top-quark loop. But unlike the SM Higgs boson, in the LRTH model the light neutral Higgs boson φ⁰ is a pseudoscalar; due to its pseudoscalar nature, there are no φ⁰WW and φ⁰ZZ couplings at tree level, so the one-loop SM gauge boson contribution to φ⁰γγ is zero. In general, the light neutral Higgs boson φ⁰ decays into bb̄, cc̄, τ⁺τ⁻, gg and γγ. Now we discuss the branching ratios for the possible decay modes of φ⁰. The decay width of φ⁰ → ff̄ is proportional to the square of the corresponding Yukawa coupling, with an additional suppression factor of ν²/(2f²) compared to that of the SM Higgs boson. The concrete expressions of the decay widths for the different decay channels are given as follows:
Γ(φ⁰ → bb̄) = [3 G_F m_φ⁰ ν² m_b² / (8√2 π f²)] (1 − 4m_b²/m_φ⁰²)^(3/2),
Γ(φ⁰ → cc̄) = 3 G_F m_φ⁰ ν² m_c² / (8√2 π f²),
Γ(φ⁰ → τ⁺τ⁻) = G_F m_φ⁰ ν² m_τ² / (8√2 π f²),
Γ(φ⁰ → γγ) = [G_F α² m_φ⁰³ / (128√2 π³)] |Σ_f N_c^f Q_f² A_f^φ⁰(τ_f)|²,
Γ(φ⁰ → gg) = [G_F α_s² m_φ⁰³ / (48√2 π³)] |Σ_q A_q^φ⁰(τ_q)|².  (4)
where m_b, m_c, and m_τ are the masses of the SM fermions b, c and τ, respectively. The index f runs over q and l (q = quark, l = lepton), N_c^f = 1, 3 for f = l, q, respectively, and Q_f is the charge of the fermion f. Following Ref. [22], the function A_f^φ⁰ can be written as

A_f^φ⁰ = 2τ_f [1 + (1 − τ_f) f(τ_f)],  (5)

where τ_f = 4m_f²/m_φ⁰². In Ref. [22], the function f(τ_f) has two parts, corresponding to the two cases τ_f ≥ 1 and τ_f < 1. In our numerical estimation, we have neglected the contributions of the light fermions; therefore, in the LRTH model, τ_t(T) = 4m_t(T)²/m_φ⁰² ≥ 1, and the function f(τ_f) is given by

f(τ) = arcsin²(1/√τ),  (6)
where A_q^φ⁰, τ_q, and f(τ_q) in Eq. (4) are defined in the same way as A_f^φ⁰, τ_f and f(τ_f), but for quarks only.
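Eqs. (5)-(6) are straightforward to evaluate numerically. A small sketch (the function names are ours) of the τ ≥ 1 branch used in the paper:

```python
import math

def f_tau(tau):
    """Eq. (6), valid for tau >= 1: f(tau) = arcsin^2(1/sqrt(tau))."""
    if tau < 1.0:
        raise ValueError("this branch of f(tau) requires tau >= 1")
    return math.asin(1.0 / math.sqrt(tau)) ** 2

def A_phi0(tau):
    """Eq. (5): A_f^{phi0}(tau) = 2 tau [1 + (1 - tau) f(tau)]."""
    return 2.0 * tau * (1.0 + (1.0 - tau) * f_tau(tau))
```

At threshold f(1) = (π/2)², and for a very heavy quark in the loop (τ → ∞) the form factor A_f^φ⁰ approaches the constant 4/3, so a heavy loop fermion contributes a τ-independent amount.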
Using the above partial widths of the neutral Higgs boson φ⁰, its total width Γ can be approximately written as

Γ = Γ_bb̄ + Γ_cc̄ + Γ_τ⁺τ⁻ + Γ_γγ + Γ_gg.  (7)

One can see from Fig. 1 that the decay branching ratios of φ⁰ are sensitive to the parameter f. If we assume that the parameter f is in the range of 500 GeV ∼ 1500 GeV, the value of the branching ratio Br(φ⁰ → bb̄) is in the range of 14% − 55%, and the branching ratio Br(φ⁰ → gg) is in the range of 38% − 85%. The values of Br(φ⁰ → cc̄), Br(φ⁰ → τ⁺τ⁻) and Br(φ⁰ → γγ) are much smaller than those of Br(φ⁰ → bb̄) and Br(φ⁰ → gg); therefore, in order to see the trend clearly, in Fig. 1 we have multiplied them by 10, 20 and 300, respectively. The real numerical results are Br(φ⁰ → cc̄) = 0.9% − 3.8%, Br(φ⁰ → τ⁺τ⁻) = 0.6% − 2.6%, and Br(φ⁰ → γγ) = 0.09% − 0.2%. Our numerical results agree quite well with Ref. [17], in that the branching ratio Br(φ⁰ → γγ) is roughly the same as Br(h → γγ) for m_h = m_φ⁰.
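The f-dependence of the fermionic widths in Eq. (4) can be sketched directly. The snippet below (helper names are ours; it deliberately ignores the loop-induced gg and γγ channels, so it is not the full branching-ratio calculation behind Fig. 1):

```python
import math

G_F = 1.16637e-5  # Fermi constant, GeV^-2
NU = 246.0        # EWSB scale nu, GeV

def gamma_bb(m_phi, f, m_b=4.8):
    """Eq. (4): Gamma(phi0 -> b bbar) in GeV, with the phase-space factor."""
    pref = (3.0 * G_F * m_phi * NU**2 * m_b**2
            / (8.0 * math.sqrt(2.0) * math.pi * f**2))
    return pref * (1.0 - 4.0 * m_b**2 / m_phi**2) ** 1.5

def gamma_cc(m_phi, f, m_c=1.25):
    """Eq. (4): Gamma(phi0 -> c cbar) in GeV."""
    return (3.0 * G_F * m_phi * NU**2 * m_c**2
            / (8.0 * math.sqrt(2.0) * math.pi * f**2))

def gamma_tautau(m_phi, f, m_tau=1.78):
    """Eq. (4): Gamma(phi0 -> tau+ tau-) in GeV (no color factor)."""
    return (G_F * m_phi * NU**2 * m_tau**2
            / (8.0 * math.sqrt(2.0) * math.pi * f**2))
```

All three tree-level widths scale as 1/f², so their ratios are independent of f; the f-dependence of the branching ratios in Fig. 1 arises from the interplay with the loop-induced gg and γγ widths, whose couplings scale differently with f.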
III. Pair production of neutral Higgs bosons at the ILC
In many cases, the ILC can significantly improve on the LHC measurements. If a Higgs boson is discovered, it will be crucial to determine its couplings with high accuracy in order to understand the mechanism of EWSB [24]. A high-resolution determination of the profile of a light Higgs boson (mass, couplings, self-couplings, etc.) can be carried out at the ILC, where clear signals of Higgs events are expected with backgrounds that can be reduced to a manageable level. With guidance from the LHC, the ILC, which is currently being designed, will further improve our knowledge of the Higgs sector if that is how nature decided to create mass [24]. It was demonstrated in Ref. [25] that physics at the LHC and at the ILC will be complementary to each other in many respects. So far, many works have been devoted to studies of neutral Higgs boson pair production at the ILC, in the SM [26-28] and in new physics beyond the SM [29-32].
At the leading order, the production amplitude of the process e⁺e⁻ → φ⁰h can be written as
M₁ = M_Z + M_Z_H,  (8)
with
M_Z = [e²x(−1 + 4S_W²)/(24 C_W² S_W²)] v̄_e(p₂) [p̸₁₂/(p₁₂² − m_Z²)] u_e(p₁)
 + [e²x/(24 C_W² S_W²)] v̄_e(p₂) [p̸₁₂/(p₁₂² − m_Z²)] γ₅ u_e(p₁),

M_Z_H = [−e²x(14 − 17S_W²)/(36 C_W² cos2θ_W)] v̄_e(p₂) [p̸₃/(p₁₂² − m_Z_H²)] P_L u_e(p₁)
 + [−e²x(14 − 17S_W²)(1 − 3C_W²)/(72 C_W² S_W² cos2θ_W)] v̄_e(p₂) [p̸₃/(p₁₂² − m_Z_H²)] P_R u_e(p₁)
 + [e²x(4 − S_W²)/(36 C_W² cos2θ_W)] v̄_e(p₂) [p̸₄/(p₁₂² − m_Z_H²)] P_L u_e(p₁)
 + [e²x(1 − 3C_W²)(4 − S_W²)/(72 C_W² S_W² cos2θ_W)] v̄_e(p₂) [p̸₄/(p₁₂² − m_Z_H²)] P_R u_e(p₁),
where p₁₂ is the momentum of the propagator, which is the sum of the incoming momenta p₁ and p₂. With the above production amplitudes, we can obtain the production cross section directly.
From the above discussions, we can see that, except for the SM input parameters
e⁺e⁻ → φ⁰h → bb̄bb̄.  (9)
The production rate of the bb̄bb̄ final state in the LRTH model can be easily estimated

¹ Thanks to the referees for offering this reference to us.
using the formula σ_s = σ × Br(φ⁰ → bb̄) × Br(h → bb̄). If we assume the integrated luminosity L_int = 500 fb⁻¹ for the ILC with the c.m. energy √s = 500 GeV, then there will be 9 − 3.0 × 10³ bb̄bb̄ events generated at the ILC, which is significantly larger than that for the SM Higgs boson pair production process e⁺e⁻ → hh → bb̄bb̄ [26-28].
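The event-rate estimates used throughout (N = σ × L_int × product of branching ratios) can be captured in one helper; the example numbers are illustrative inputs within the ranges quoted above, not the paper's exact values:

```python
def expected_events(sigma_fb, lumi_fb, *brs):
    """N = sigma [fb] * integrated luminosity [fb^-1] * product of branching ratios."""
    n = sigma_fb * lumi_fb
    for br in brs:
        n *= br
    return n

# e.g. sigma(e+e- -> phi0 h) near its upper value at the ILC, with both
# bosons decaying to b bbar (the Br values here are assumed, for illustration)
n_4b = expected_events(20.0, 500.0, 0.55, 0.60)
```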
Therefore, we hope that by using very efficient µ-vertex detectors to tag the b quark jets,
IV. Pair production of neutral Higgs bosons at the LHC
The LHC has good potential for the discovery of a neutral Higgs boson. We now look at pair production of the neutral Higgs bosons predicted by the LRTH model at the LHC. From the above discussions, we can see that both the φ⁰φ⁰ pair and the φ⁰h pair can be produced at the LHC. In this section, we consider both cases.
A. φ⁰φ⁰ pair production

In this paper, we calculate all production channels for the neutral Higgs boson pair φ⁰φ⁰ at the LHC, as shown in Fig. 4, including triangle diagrams, box diagrams and the tree-level diagram. Each loop diagram is composed of scalar loop functions, which are calculated using LoopTools [34]. The hadronic cross section at the LHC is obtained by convoluting the partonic cross sections with the parton distribution functions (PDFs). In our numerical calculation, we use the CTEQ6L PDFs for the gluon and quark distributions [35].
The renormalization scale µ_R and the factorization scale µ_F are chosen to be µ_R = µ_F = 2m_φ⁰. Because the calculation of the loop diagrams is tedious and the analytical expressions are lengthy, we do not present them here. For M = 100 GeV, m_φ⁰ = 120 GeV, and 500 GeV ≤ f ≤ 1500 GeV, the value of the total production cross section is in the range of 4 fb ∼ 122 fb, and the value of the production cross section coming from the top-quark loop diagrams is in the range of 1.5 fb − 105 fb.
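The folding with the PDFs mentioned above has the generic form σ = ∫dx₁ dx₂ f(x₁) f(x₂) σ̂(x₁x₂s). A toy numerical sketch of that convolution (the flat "PDF" and constant σ̂ in the check are placeholders for CTEQ6L and the real partonic matrix element):

```python
def hadronic_xsec(sigma_hat, pdf, s, x_min=1e-3, n=400):
    """Midpoint-rule estimate of
    sigma = int dx1 dx2 pdf(x1) pdf(x2) sigma_hat(x1*x2*s)
    over x1, x2 in [x_min, 1].  sigma_hat(shat) should return 0
    below the production threshold.
    """
    h = (1.0 - x_min) / n
    total = 0.0
    for i in range(n):
        x1 = x_min + (i + 0.5) * h
        for j in range(n):
            x2 = x_min + (j + 0.5) * h
            total += pdf(x1) * pdf(x2) * sigma_hat(x1 * x2 * s) * h * h
    return total
```

With pdf(x) = 1 and σ̂ constant, the result reduces to (1 − x_min)² times the constant, which is a handy check of the integration.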
This is because the contributions of the box diagrams are generally much smaller than those of the triangle diagrams, and furthermore the coupling ht̄t is much larger than the coupling hT̄T or the coupling φ⁰bb̄. If we assume the integrated luminosity L_int = 100 fb⁻¹ for the LHC with the c.m. energy √s = 14 TeV, then there will be 4 × 10² − 1.22 × 10⁴ events generated at the LHC.

Using the relevant Feynman rules, we can write the invariant amplitude for the partonic process q(p₁)q̄(p₂) → φ⁰(p₃)h(p₄) as M₂(q) = M₂₁(q) for q = u, c and M₂(q) = M₂₂(q) for q = d, s, with

M₂₁(q) = [−e²x/(24 S_W² C_W²) + e²x/(18 C_W²)] v̄(p₂) [p̸₁₂/(p₁₂² − m_Z²)] P_L u(p₁)
 + [e²x/(18 C_W²)] v̄(p₂) [p̸₁₂/(p₁₂² − m_Z²)] P_R u(p₁)
 + [e²x(14 − 17S_W²)/(108 C_W² cos2θ_W)] v̄(p₂) [p̸₃/(p₁₂² − m_Z_H²)] P_L u(p₁)
 + [−e²x(4 − S_W²)/(108 C_W² cos2θ_W)] v̄(p₂) [p̸₃/(p₁₂² − m_Z_H²)] P_R u(p₁)
 + [e²x(1 − 3S_W²)(14 − 17S_W²)/(216 S_W² C_W² cos2θ_W)] v̄(p₂) [p̸₄/(p₁₂² − m_Z_H²)] P_L u(p₁)
 + [−e²x(4 − S_W²)(1 − 3S_W²)/(216 S_W² C_W² cos2θ_W)] v̄(p₂) [p̸₄/(p₁₂² − m_Z_H²)] P_R u(p₁).
For the s-channel partonic processes qq̄ → Z(Z_H) → φ⁰h (q = d, s and b), the invariant amplitude can be written as

M₂₂(q) = [e²x/(24 S_W² C_W²) − e²x/(18 C_W²)] v̄(p₂) [p̸₁₂/(p₁₂² − m_Z²)] P_L u(p₁)
 + [e²x/(36 C_W²)] v̄(p₂) [p̸₁₂/(p₁₂² − m_Z²)] P_R u(p₁)
 + [e²x(14 − 17S_W²)/(108 C_W² cos2θ_W)] v̄(p₂) [p̸₃/(p₁₂² − m_Z_H²)] P_L u(p₁)
 + [e²x(4 − S_W²)(3 − 5S_W²)/(108 S_W² C_W² cos2θ_W)] v̄(p₂) [p̸₃/(p₁₂² − m_Z_H²)] P_R u(p₁)
 + [−e²x(4 − 3S_W²)/(108 C_W² cos2θ_W)] v̄(p₂) [p̸₄/(p₁₂² − m_Z_H²)] P_L u(p₁)
 + [−e²x(4 − S_W²)(3 − 5S_W²)/(108 S_W² C_W² cos2θ_W)] v̄(p₂) [p̸₄/(p₁₂² − m_Z_H²)] P_R u(p₁).
For the t-channel partonic process bb̄ → φ⁰h, shown in Fig. 7b, the invariant amplitude can be written as

M₂₃(q) = [m_b²/(√2 ν f)] v̄(p₂) [(p̸₁₃ + m_b)/(p₁₃² − m_b²)] γ₅ u(p₁),
where p₁₃ denotes the momentum flowing in the t-channel b-quark propagator. We plot the total cross section σ as a function of m_φ⁰ in Fig. 9. One can see from Fig. 9 that the total cross section σ is sensitive to the mass parameter m_φ⁰. For f = 500 GeV and 120 GeV ≤ m_φ⁰ ≤ 180 GeV, its value is in the range of 101 fb − 306 fb.
From the above discussions, we can see that the decay features of φ⁰ are similar to those of the SM-like neutral Higgs boson h, as far as decays into bb̄ and γγ are concerned.
Therefore, when we analyze the signatures of the neutral Higgs boson pairs from the LRTH model at the colliders, we take the φ⁰φ⁰ pair as an example. If one of the two φ⁰ bosons decays to bb̄ and the other decays to γγ, then pair production of the neutral Higgs boson φ⁰ at the LHC can give rise to the bb̄γγ final state, and the production rate of the bb̄γγ final state can be easily estimated using the formula σ_s = σ × Br(φ⁰ → bb̄) × Br(φ⁰ → γγ). If we assume the integrated luminosity L_int = 100 fb⁻¹ for the LHC with the c.m. energy √s = 14 TeV, then there will be several hundreds of bb̄γγ events generated at the LHC. Furthermore, the narrow γγ peak can be reconstructed to distinguish the signal from the backgrounds. A detailed analysis of the signals and the relevant backgrounds for this kind of final state has been given in Ref. [36].
V. Conclusions
The twin Higgs mechanism provides an alternative method to solve the little hierarchy problem, and the LRTH model is a concrete realization of this mechanism. In this paper, we discuss the possible decay modes of the neutral Higgs boson φ⁰ predicted by the LRTH model and consider its pair production at the ILC and LHC via suitable mechanisms.
At the ILC, we study production of the neutral Higgs boson pair φ⁰h via the processes e⁺e⁻ → Z(Z_H) → φ⁰h. Our numerical results show that, for m_φ⁰ = m_h = 120 GeV and 500 GeV ≤ f ≤ 1500 GeV, the total production cross section of the neutral Higgs boson pair φ⁰h at the ILC is in the range of 0.92 fb − 20 fb. If we assume the integrated luminosity L_int = 500 fb⁻¹ for the ILC with the c.m. energy √s = 500 GeV, there will be 10² − 10⁴ φ⁰h events generated at the ILC. If we assume that the neutral Higgs bosons φ⁰ and h both decay to bb̄, then the process e⁺e⁻ → φ⁰h can give rise to the bb̄bb̄ final state, with 9 − 3.0 × 10³ bb̄bb̄ events generated at the ILC. Owing to these bb̄bb̄ events, we might detect the possible signatures of the neutral Higgs boson φ⁰ via the process e⁺e⁻ → Z(Z_H) → φ⁰h in future ILC experiments.
At the LHC, we study production of the neutral Higgs boson pairs φ⁰φ⁰ and φ⁰h. First, we study production of the neutral Higgs boson pair φ⁰φ⁰ via the processes gg → φ⁰φ⁰ and qq̄ → φ⁰φ⁰. Our numerical results show that, for M = 100 GeV, m_φ⁰ = 120 GeV and 500 GeV ≤ f ≤ 1500 GeV, the value of the hadronic cross section σ_φ⁰φ⁰ is in the range of 4 fb − 122 fb, which mainly comes from the contributions of the top-quark loop. Then we study production of the neutral Higgs boson pair φ⁰h via the processes qq̄ → φ⁰h (q = u, c, d, s, b) and gg → φ⁰h. Our numerical results show that, for M = 150 GeV, m_φ⁰ = m_h = 120 GeV and 500 GeV ≤ f ≤ 1500 GeV, the value of σ_φ⁰h is in the range of 34 fb − 306 fb, of which about 91% comes from the light quarks u, d, c, s. If we assume the integrated luminosity L_int = 100 fb⁻¹ for the LHC with the c.m. energy √s = 14 TeV, then there will be 3.4 × 10³ − 3.1 × 10⁴ φ⁰h events generated at the LHC. If we assume that one of the neutral Higgs bosons φ⁰ and h decays to bb̄ and the other decays to γγ, then the processes pp → φ⁰φ⁰ + X and pp → φ⁰h + X can all give rise to the bb̄γγ final state. There will be several hundreds and up to thousands of bb̄γγ events generated at the LHC with the c.m. energy √s = 14 TeV and L_int = 100 fb⁻¹.
The LRTH model contains the U(4)₁ × U(4)₂ global symmetry as well as the gauged symmetry SU(2)_L × SU(2)_R × U(1)_B−L. After the Higgs fields obtain vacuum expectation values (f, f̂), the global symmetry U(4)₁ × U(4)₂ breaks down to U(3)₁ × U(3)₂, and the gauge group SU(2)_R × U(1)_B−L breaks down to the SM U(1)_Y. Thus, the LRTH model predicts the existence of new particles, such as heavy gauge bosons, heavy scalars, and the top partner T, which can generate rich phenomenology at present and future collider experiments.
Here we focus our attention on the neutral Higgs bosons. The LRTH model is based on the global U(4)₁ × U(4)₂ symmetry with a locally gauged subgroup SU(2)_L × SU(2)_R × U(1)_B−L. Two Higgs fields, H = (H_L, H_R) and Ĥ = (Ĥ_L, Ĥ_R), are introduced, transforming as (4, 1) and (1, 4), respectively, under the global symmetry. H_L,R (Ĥ_L,R) are two-component objects charged under SU(2)_L and SU(2)_R, respectively. For the gauge couplings g_2L and g_2R of SU(2)_L and SU(2)_R, the left-right symmetry implies g_2L = g_2R = g₂. The U(4)₁ [U(4)₂] group is spontaneously broken down to its subgroup U(3)₁ [U(3)₂] with the nonzero vacuum expectation value (VEV) <H> = (0, 0, 0, f) [<Ĥ> = (0, 0, 0, f̂)]. The Higgs VEVs also break SU(2)_R × U(1)_B−L down to the SM U(1)_Y. After spontaneous global symmetry breaking by f and f̂, three Goldstone bosons are eaten by the new gauge bosons W_H^± and Z_H. After the SM electroweak symmetry breaking, three additional Goldstone bosons are eaten by the SM gauge bosons W^± and Z.
After certain reparametrizations of the fields, we are left with four Higgs bosons in the LRTH spectrum that couple to both the fermion sector and the gauge boson sector: one neutral Higgs boson φ⁰, a pair of charged Higgs bosons φ^±, and the SM-like physical Higgs h. In addition, there is an SU(2)_L doublet ĥ = (ĥ₁⁺, ĥ₂⁰) that couples to the gauge boson sector only (including the SM gauge bosons γ, Z, W, and the new gauge boson Z_H). The lightest particle in ĥ, typically one of the neutral components, is stable and therefore constitutes a good dark matter candidate. These neutral Higgs bosons can couple to each other, and also to the ordinary fermions, ordinary gauge bosons, the new top quark T, and the new gauge boson Z_H.
Figure 1: The branching ratios of the neutral Higgs boson φ⁰ for different decay modes as functions of the free parameter f for M = 150 GeV, m_φ⁰ = 120 GeV. In order to see the trend clearly, we have multiplied Br(φ⁰ → cc̄), Br(φ⁰ → τ⁺τ⁻), and Br(φ⁰ → γγ) by the factors 10, 20, and 300, respectively.

We summarize our numerical results for the branching ratios of the neutral Higgs boson φ⁰ for the different decay modes in Fig. 1. To obtain the numerical results, the SM parameters involved are taken as m_b = 4.8 GeV, m_c = 1.25 GeV and m_τ = 1.78 GeV [23]. In Fig. 1, we plot Br(φ⁰) as a function of the free parameter f for M = 150 GeV and m_φ⁰ = 120 GeV.
The neutral Higgs boson pair φ⁰φ⁰ cannot be produced exclusively at the ILC because φ⁰φ⁰ cannot couple to the gauge boson Z or Z_H. However, the neutral Higgs boson pair φ⁰h can be produced via the processes e⁺e⁻ → Z(Z_H) → φ⁰h at the ILC. The Feynman diagrams of the process e⁺(p₁)e⁻(p₂) → φ⁰(p₃)h(p₄) are shown in Fig. 2.

Figure 2: Feynman diagrams for the process e⁺e⁻ → φ⁰h.
Except for the SM input parameters α = 1/128.8, S_W = √0.2315, m_Z = 91.1876 GeV, m_h = 120 GeV [23], the cross section σ of pair production of the neutral Higgs bosons φ⁰h at the ILC depends on the model-dependent parameters f and m_φ⁰. In our numerical estimation, we assume that the values of the free parameters f and m_φ⁰ are in the ranges of 500 GeV − 1500 GeV and 100 GeV − 180 GeV, respectively. In Fig. 3, we plot the production cross section σ of the process e⁺e⁻ → φ⁰h as a function of the scale parameter f for the c.m. energy √s = 500 GeV, m_h = 120 GeV and three values of m_φ⁰. We can see that σ is sensitive to the scale parameter f and the mass parameter m_φ⁰. For 500 GeV ≤ f ≤ 1500 GeV and 120 GeV ≤ m_φ⁰ ≤ 180 GeV, its value is in the range of 0.92 fb − 20 fb. According to an update of parameters for the ILC in 2006 [33]¹, an integrated luminosity of 500 fb⁻¹ should be achieved in the first four years of running after one year of commissioning. Therefore, if we assume the integrated luminosity for the ILC is 500 fb⁻¹, there will be 10² − 10⁴ φ⁰h events generated at the ILC.

Figure 3: The production cross section σ of e⁺e⁻ → φ⁰h as a function of the parameter f for three values of m_φ⁰, m_h = 120 GeV, and the c.m. energy √s = 500 GeV.

From the discussions given in Sec. II, we can see that the possible decay modes of the neutral Higgs boson φ⁰ are bb̄, cc̄, τ⁺τ⁻, gg and γγ. The SM-like neutral Higgs boson h has decay features similar to those of φ⁰; therefore, the signatures of the neutral Higgs boson pair φ⁰h are similar to those of the pair φ⁰φ⁰ at high energy colliders. From the numerical results given in Sec. II, one can see that, for masses m_φ⁰ ≤ 180 GeV, the possible signals of φ⁰h can be seen as four b quarks,
we might detect the possible signatures of the neutral Higgs boson φ⁰ via the process e⁺e⁻ → φ⁰h in future ILC experiments. Certainly, detailed confirmation of the observability of the signals generated by the process e⁺e⁻ → Z(Z_H) → φ⁰h would require Monte Carlo simulations of the signals and backgrounds, which is beyond the scope of this paper.
First, we study production of the neutral Higgs boson pair φ⁰φ⁰ at the LHC, where it can be produced through two mechanisms: one is loop-induced production via gluon fusion (gg → φ⁰φ⁰), and the other is t-channel quark-antiquark annihilation (qq̄ → φ⁰φ⁰). The relevant Feynman diagrams are shown in Fig. 4. Considering that the couplings of the neutral Higgs boson φ⁰ to the SM fermions are proportional to the factor m_q/f and that the masses of the quarks q = u, c, d, s are small, we have neglected their contributions to production of the neutral Higgs boson pair φ⁰φ⁰.
Figure 4: One-loop Feynman diagrams for the subprocess gg → φ⁰φ⁰ (a,b) and the tree-level Feynman diagram for the subprocess bb̄ → φ⁰φ⁰ (c) in the LRTH model. The diagrams obtained by exchanging the two gluons or exchanging the two Higgs bosons are not shown here.

It is obvious that the production cross section σ of the neutral Higgs boson pair φ⁰φ⁰ at the LHC depends on the model-dependent parameters f, m_φ⁰, and M. Similar to the calculation at the ILC, we assume that the values of the free parameters f and m_φ⁰ are in the ranges of 500 GeV − 1500 GeV and 100 GeV − 180 GeV, respectively. Besides, we assume the mixing parameter M is in the range of 100 GeV − 200 GeV. Our numerical results are summarized in Figs. 5 and 6. To see contributions of the different partonic processes to the total hadronic cross section, we plot the total and partial hadronic cross sections for different partonic processes as functions of the scale parameter f for the parameters M = 100 GeV and m_φ⁰ = 120 GeV in Fig. 5. We see from Fig. 5 that production of the neutral Higgs boson pair φ⁰φ⁰ is dominated by the partonic process gg → φ⁰φ⁰ induced by the top-quark loop diagrams.
Figure 5: The total and partial hadronic cross sections for different partonic processes as functions of the free parameter f for the parameters M = 100 GeV and m_φ⁰ = 120 GeV.

In order to see the effects of the mass parameter m_φ⁰ on the total cross section σ, we plot σ as a function of m_φ⁰ for f = 500 GeV and three values of the mixing parameter M in Fig. 6. One can see from Fig. 6 that the total cross section σ is sensitive to the mass parameter m_φ⁰, while it is not sensitive to the mixing parameter M. This is because M is introduced to generate the mass mixing term M q̄_L q_R, which is included in the gauge invariant top Yukawa terms allowed by gauge invariance. From the relevant Feynman rules we can see that the mixing parameter M does not influence the production cross section σ of the neutral Higgs boson φ⁰ very much. For f = 500 GeV, M = 200 GeV, and m_φ⁰ = 100 GeV − 180 GeV, the total cross section σ is in the range of 16 fb − 253 fb.

Figure 6: The total production cross section σ as a function of the free parameter m_φ⁰ for three values of the mixing parameter M.

B. φ⁰h pair production

Now we consider production of the neutral Higgs boson pair φ⁰h at the LHC. At the LHC, the neutral Higgs boson pair φ⁰h can be mainly produced through two mechanisms: (i) qq̄ → φ⁰h, where q = u, d, c, s, b; (ii) the loop-induced gluon fusion process gg → φ⁰h. The relevant Feynman diagrams are shown in Fig. 7.
Figure 7: Tree-level Feynman diagrams for the process qq̄ → φ⁰h (q = u, d, c, s, b) (a,b) and one-loop Feynman diagrams for gg → φ⁰h (c,d,e) in the LRTH model.

For the s-channel partonic processes qq̄ → Z(Z_H) → φ⁰h (q = u and c), the invariant amplitude can be written
where p₁₃ = p₁ − p₃. Considering that the couplings of the neutral Higgs boson φ⁰ to the SM fermions are proportional to the factor m_q/f and that the masses of the quarks q = u, c, d, and s are small, we have neglected their contributions to the production cross section of the neutral Higgs boson pair φ⁰h via the t-channel process in our calculations. When we calculate the loop diagrams Figs. 7(c)-7(e), we will use the same method as for Figs. 4(a) and 4(b). To see contributions of the different partonic processes to the total hadronic cross section, we plot the total and partial hadronic cross sections for different partonic processes as functions of the parameter f for m_φ⁰ = m_h = 120 GeV and M = 150 GeV in Fig. 8. We see that the production cross sections of the neutral Higgs bosons φ⁰h mainly come from the contributions of the light quarks (u, d, c, s) through the s-channel Z exchange and Z_H exchange. Our numerical results show that the contributions coming from the partonic processes gg → φ⁰h [including Figs. 7(c)-7(e)] to the total production cross section are of the order of 10⁻⁵ fb − 10⁻¹ fb, which is much smaller than those of the tree-level processes. This is because the Yukawa couplings depend sensitively on the free parameters M and f. The parameter M is much smaller than the scale parameter f. So, although the gluon fusion process gets an enhancement due to large parton distribution functions, its contribution is suppressed by the order of (M/f)⁴ [21]. Thus, in Fig. 8, we did not show the line corresponding to the value of the production cross section contributed by the gg fusion. The value of the production cross section of the neutral Higgs bosons φ⁰h is insensitive to the mixing parameter M. For m_φ⁰ = m_h = 120 GeV and 500 GeV ≤ f ≤ 1500 GeV, its value is in the range of 34 fb − 306 fb; the partial value of the total production cross section coming from light quark contributions is in the range of 31 fb − 281 fb.
If we assume the integrated luminosity L_int = 100 fb⁻¹ for the LHC with the c.m. energy √s = 14 TeV, then there will be 3.4 × 10³ − 3.1 × 10⁴ φ⁰h events generated at the LHC.
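The event counts quoted here follow from N = σ × L_int; a quick arithmetic cross-check using only the cross-section range stated above (a sketch of mine, not code from the paper):

```python
# Expected event counts N = sigma * integrated luminosity.
# phi0 h cross-section range at the LHC quoted above: 34 fb - 306 fb.
lumi_fb = 100.0                 # assumed L_int = 100 fb^-1

n_low = 34.0 * lumi_fb          # lower end of the sigma range
n_high = 306.0 * lumi_fb        # upper end of the sigma range
print(n_low, n_high)            # 3400.0 30600.0, i.e. 3.4e3 - 3.1e4 events
```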
Figure 8: The total and partial hadronic cross sections for different partonic processes as functions of the parameter f for m_φ⁰ = m_h = 120 GeV and M = 150 GeV.

Similar to the discussion for neutral Higgs boson pair φ⁰φ⁰ production, we plot σ as a function of the free parameter f for m_h = 120 GeV, M = 150 GeV, and three values of m_φ⁰ in Fig. 9.
Figure 9: The total production cross section as a function of the free parameter f for m_h = 120 GeV, M = 150 GeV, and three values of m_φ⁰.

In most of the parameter space of the LRTH model, the main decay modes of φ⁰ are gg and bb̄. However, the final states gggg and bb̄bb̄ induced by pair production of the neutral Higgs boson φ⁰ at the LHC have large QCD backgrounds and thus are insignificant for φ⁰ discovery. If we assume that one of the neutral Higgs bosons φ⁰ decays
For example see: N. E. Adam et al., arXiv:0803.1154 [hep-ph].
U. Baur, T. Plehn, and D. L. Rainwater, Phys. Rev. Lett. 89, 151801 (2002).
M. Moretti, S. Moretti, F. Piccinini, R. Pittau, A. D. Polosa, JHEP 0502, 024 (2005); T. Binoth, S. Karg, N. Kauer, R. Ruckl, Phys. Rev. D74, 113008 (2006).
A. Pierce, J. Thaler, Lian-Tao Wang, JHEP 0705, 070 (2007); S. Kanemura, K. Tsumura, arXiv:0810.0433 [hep-ph].
E. W. N. Glover and J. J. van der Bij, Nucl. Phys. B309, 282 (1988).
U. Baur, T. Plehn, David L. Rainwater, Phys. Rev. D67, 033003 (2003).
S. Dawson, S. Dittmaier, M. Spira, Phys. Rev. D58, 115012 (1998).
S. Dawson, C. Kao, Yili Wang, P. Williams, Phys. Rev. D75, 013007 (2007).
J. J. Liu, W. G. Ma, G. Li, R. Y. Zhang and H.-S. Hou, Phys. Rev. D70, 015001 (2004); C. O. Dib, R. Rosenfeld and A. Zerwekh, JHEP 0605, 074 (2006); L. Wang, W. Y. Wang, J. M. Yang, H. J. Zhang, Phys. Rev. D76, 017702 (2007).
P. K. Das and B. Mukhopadhyaya, hep-ph/0303135.
M. Spira and J. D. Wells, Nucl. Phys. B523, 3 (1998).
A. A. Barrientos Bendezu, Bernd A. Kniehl, Phys. Rev. D64, 035006 (2001).
T. Plehn, M. Spira and P. M. Zerwas, Nucl. Phys. B479, 46 (1996); A. Djouadi, W. Kilian, M. Muhlleitner and P. M. Zerwas, Eur. Phys. J. C10, 45 (1999); A. Belyaev, Manuel Drees, Oscar J. P. Eboli, J. K. Mizukoshi, S. F. Novaes, Phys. Rev.; A. Belyaev, M. Drees and J. K. Mizukoshi, Eur. Phys. J. C17, 337 (2000); R. Lafaye, D. J. Miller, M. Muhlleitner and S. Moretti, hep-ph/0002238; M. Moretti, S. Moretti, F. Piccinini, R. Pittau, JHEP 0502, 024 (2005).
H. de Sandes, R. Rosenfeld, Phys. Lett. B659, 323 (2008).
R. Barbieri and A. Strumia, Phys. Lett. B462, 144 (1999); A. Falkowski, S. Pokorski, M. Schmaltz, Phys. Rev. D74, 035003 (2006); Z. Chacko, H.-S. Goh, R. Harnik, Phys. Rev. Lett. 96, 231802 (2006).
Z. Chacko, H.-S. Goh and R. Harnik, JHEP 0601, 108 (2006).
H.-S. Goh and S. Su, Phys. Rev. D75, 075010 (2007).
A. Abada, I. Hidalgo, Phys. Rev. D77, 113013 (2008).
D.-W. Jung and J. Y. Lee, arXiv:0710.2589 [hep-ph].
E. M. Dolle, S. F. Su, Phys. Rev. D77, 075013 (2008).
Y. B. Liu, H. M. Han, X. L. Wang, Eur. Phys. J. C53, 615 (2008).
J. F. Gunion, H. E. Haber, G. L. Kane, and S. Dawson, "The Higgs Hunter's Guide", Addison-Wesley, Reading, MA (1990); L. Reina, hep-ph/0512377.
W.-M. Yao et al. [Particle Data Group], J. Phys. G33, 1 (2006), and partial update for the 2008 edition.
P. W. Higgs, Phys. Rev. Lett. 13, 508 (1964); G. S. Guralnik, C. R. Hagen and T. W. B. Kibble, Phys. Rev. Lett. 13, 585 (1964); F. Englert, R. Brout, Phys. Rev. Lett. 13, 321 (1964).
G. Weiglein et al. [ILC/LC Study Group], Phys. Rept. 426, 47 (2006); A. Arhrib, R. Benbrik, C.-H. Chen, Rui Santos, arXiv:0901.3380 [hep-ph].
J. J. Lopez-Villarejo, J. A. M. Vermaseren, arXiv:0812.3750 [hep-ph].
A. Djouadi, V. Driesen, C. Junger, Phys. Rev. D54, 759 (1996).
A. Gutierrez-Rodriguez, M. A. Hernandez-Ruiz, O. A. Sampayo, Phys. Rev. D67, 074018 (2003).
H. Grosse, Yi Liao, Phys. Rev. D64, 115007 (2001).
J. L. Feng, T. Moroi, Phys. Rev. D56, 5962 (1997).
A. Djouadi, H. E. Haber, P. M. Zerwas, Phys. Lett. B375, 203 (1996).
R. N. Hodgkinson, D. Lopez-Val, Joan Sola, Phys. Lett. B673, 47 (2009); A. Arhrib, R. Benbrik, C. W. Chiang, Phys. Rev. D77, 115013 (2008).
T. Hahn, M. Perez-Victoria, Comput. Phys. Commun. 118, 153 (1999); T. Hahn, Nucl. Phys. Proc. Suppl. 135, 333 (2004).
J. Pumplin et al. (CTEQ Collaboration), JHEP 0602, 032 (2006).
U. Baur, T. Plehn, David L. Rainwater, Phys. Rev. D69, 053004 (2004).
A SPECTRAL THEORY OF POLYNOMIALLY BOUNDED SEQUENCES AND APPLICATIONS TO THE ASYMPTOTIC BEHAVIOR OF DISCRETE SYSTEMS

11 Mar 2020

Nguyen Van Minh, Hideaki Matsunaga, Nguyen Duc Huy, Vu Trong Luong
In this paper, using a transform defined by the translation operator, we introduce the concept of the spectrum of sequences that are bounded by n^ν, where ν is a natural number. We apply this spectral theory to study the asymptotic behavior of solutions of fractional difference equations of the form Δ^α x(n) = Tx(n) + y(n), n ∈ N, where 0 < α ≤ 1. One of the obtained results is an extension of a famous Katznelson-Tzafriri Theorem, saying that if the α-resolvent operator S_α satisfies sup_{n∈N} ‖S_α(n)‖/n^ν < ∞ and the set of z₀ ∈ C such that (z − k̃^α(z)T)⁻¹ exists, and together with k̃^α(z) is holomorphic in a neighborhood of z₀, consists of at most {1}, where k̃^α(z) is the Z-transform of k^α(n) := Γ(α + n)/(Γ(α)Γ(n + 1)), then lim_{n→∞}
Introduction
Let us consider difference equations of the form

x(n + 1) = T x(n) + y(n), n ∈ N,  (1.1)

where T is a bounded operator in a Banach space X, and {x(n)}_{n=1}^{∞} and {y(n)}_{n=1}^{∞} are sequences in X. The asymptotic behavior of solutions of the above mentioned equations is a central topic in Analysis and Dynamical Systems. There are numerous methods for the study of this topic. The reader is referred to [8] and its references for information on classical methods of Dynamical Systems in the finite dimensional case. On the other hand, in the infinite dimensional case, by Harmonic Analysis and Operator Theory, many results on the asymptotic behavior of solutions of Eq. (1.1) have been obtained; see e.g. [1,3,4,5,6,7,9,11,14,15,16]. Among many interesting results in this direction is a famous theorem due to Katznelson-Tzafriri (see [9]) saying that if T is a bounded operator in a Banach space X such that

sup_{n∈N} ‖T^n‖ < ∞,  (1.2)

and σ(T) ∩ Γ ⊂ {1}, then

lim_{n→∞} ‖T^n(T − I)‖ = 0.  (1.3)

There are a lot of extensions and improvements of this result as well as simple proofs of it; see e.g. [1,3,5,7,9,11,14,15,16] and the references therein.
As shown in [15], the above mentioned Katznelson-Tzafriri Theorem is equivalent to its weaker version for individual orbits, namely the following statement: Let T be a bounded operator in a Banach space X such that (1.2) holds and σ(T) ⊂ {1}. Then, for each x ∈ X,

lim_{n→∞} (T − I)T^n x = 0.  (1.4)

In [11] a simple proof of this weaker version is given, based on a transform associated with the translation operator of sequences.
The main concern of this paper is to extend the above mentioned Katznelson-Tzafriri Theorem to fractional difference equations of the form

Δ^α x(n) = T x(n) + y(n), n ∈ N,  (1.5)

where 0 < α ≤ 1, and the operator Δ^α (the fractional difference operator in the sense of Riemann-Liouville) and related operators are defined as follows (see [10] and its references for more details): for each n ∈ N,

(Δ^α f)(n) = (Δ^1 ∘ Δ^{−(1−α)} f)(n),
(Δ^1 f)(n) = f(n + 1) − f(n),
(Δ^{−α} f)(n) = Σ_{k=0}^{n} k^α(n − k) f(k),
k^α(j) = Γ(α + j)/(Γ(α)Γ(j + 1)),
where Γ(·) is the Gamma function defined below. Our method relies on a spectral theory of polynomially bounded sequences that will be presented in the next sections, and that would be of independent interest. The obtained results will be illustrated in simple cases of ordinary difference equations and then stated for fractional difference equations. Our main result is Theorem 4.16. To the best of our knowledge, it is a new extension of the Katznelson-Tzafriri Theorem to fractional difference equations.
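These definitions can be implemented directly; as a numerical sketch (mine, not part of the paper — all function names are ad hoc), the code below computes the kernel k^α and the operators Δ^{−α} and Δ^α, and checks the standard semigroup property k^α ∗ k^β = k^{α+β} of these kernels, together with the fact that Δ^{1/2} annihilates the sequence k^{1/2} (since Δ^{−1/2} k^{1/2} = k^1 ≡ 1 is constant):

```python
from math import gamma

def k(alpha, j):
    # kernel k^alpha(j) = Gamma(alpha + j) / (Gamma(alpha) * Gamma(j + 1))
    return gamma(alpha + j) / (gamma(alpha) * gamma(j + 1))

def frac_sum(alpha, f, n):
    # (Delta^{-alpha} f)(n) = sum_{k=0}^{n} k^alpha(n - k) f(k)
    return sum(k(alpha, n - i) * f(i) for i in range(n + 1))

def frac_diff(alpha, f, n):
    # Riemann-Liouville: (Delta^alpha f)(n) = Delta^1 (Delta^{-(1-alpha)} f)(n),
    # used here for 0 < alpha < 1
    g = lambda m: frac_sum(1 - alpha, f, m)
    return g(n + 1) - g(n)

# semigroup property of the kernels: (k^a * k^b)(n) = k^{a+b}(n)
n = 5
conv = sum(k(0.3, n - i) * k(0.7, i) for i in range(n + 1))
assert abs(conv - k(1.0, n)) < 1e-9

# Delta^{-1/2} k^{1/2} = k^1 = 1 (constant), hence Delta^{1/2} k^{1/2} vanishes
assert abs(frac_diff(0.5, lambda j: k(0.5, j), 4)) < 1e-9
```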
Preliminaries and Notations
2.1. Notations. Throughout this paper we will denote by N, Z, R, C the set of natural numbers, integers, real numbers and the complex plane, respectively. For z ∈ C, ℜz stands for its real part. The gamma function Γ(z) is defined to be
Γ(z) = ∫₀^∞ x^{z−1} e^{−x} dx,  ℜz > 0.
For a Banach space X, L(X) denotes the space of all bounded linear operators from X to itself. We also use the following standard notations: ρ(T) denotes the resolvent set of a given operator T, that is, ρ(T) := {λ ∈ C : (λ − T)⁻¹ exists}, and σ(T) := C\ρ(T). For each λ ∈ ρ(T) we denote R(λ, T) := (λ − T)⁻¹. Moreover, we will denote by Γ the unit circle in the complex plane.
For a given nonnegative integer ν, we denote by l^ν_∞(X) the space of all sequences in a Banach space X such that

sup_{n∈N} ‖x(n)‖/n^ν < ∞.

It is easy to see that l^ν_∞(X) is a Banach space with norm

‖x‖_ν := sup_{n∈N} ‖x(n)‖/n^ν

for each x = {x(n)}_{n∈N}. We will denote by c^ν_0(X) the subspace of l^ν_∞(X) consisting of all sequences {x(n)}_{n∈N} such that lim_{n→∞} ‖x(n)‖/n^ν = 0. We can check that c^ν_0(X) is a complete subspace of l^ν_∞(X), so the quotient space Y := l^ν_∞(X)/c^ν_0(X) is well defined as a Banach space. If f ∈ l^ν_∞(X) we will denote its equivalence class by f̄. In the space l^ν_∞(X) let us consider the translation operator S defined as [Sx](n) = x(n + 1), n ∈ N, x ∈ l^ν_∞(X). This is a bounded operator. Moreover, this operator leaves c^ν_0(X) invariant. Hence, it induces an operator S̄ on Y.
2.2. Vector-valued holomorphic functions. In this paper we say that a function f(z) defined for all z ∈ Ω ⊂ C with values in a complex Banach space X is holomorphic (or analytic) for z ∈ Ω if for each z₀ ∈ Ω

f′(z₀) := lim_{h→0, h≠0} (f(z₀ + h) − f(z₀))/h

exists. A family of continuous functionals W ⊂ X* is said to be separating if x ∈ X and ⟨x, φ⟩ = 0 for all φ ∈ W imply x = 0. We will need the following, whose proof can be found in [2,

We will need an auxiliary result that is a special kind of maximum principle for holomorphic functions (for the proof see e.g. [3, Lemma 4.6.6]):
Lemma 2.2. Let U be an open neighborhood of iη such that U contains the closed disk B̄(iη, 2r) = {z ∈ C : |z − iη| ≤ 2r}. Let h : U → X be holomorphic and c ≥ 0, k ∈ N be such that

‖h(z)‖ ≤ c/|ℜz|^k,  if |z − iη| = 2r, ℜz ≠ 0.

Then

‖h(z)‖ ≤ (4/3)^k · c/r^k,  for all z ∈ B̄(iη, r).
Spectrum of a polynomially bounded sequence
The following lemma is the key for us to set up a spectral theory for polynomially bounded sequences.

Lemma 3.1. The spectrum of the induced operator S̄ on Y satisfies σ(S̄) ⊂ Γ. Moreover, for each λ with |λ| ≠ 1 and |λ| < 2 and f ∈ l^ν_∞(X), the following estimate is valid:

‖R(λ, S̄)f̄‖ ≤ C/||λ| − 1|^{ν+1} · ‖f‖_ν,  (3.1)

where C is a certain positive number, independent of f.
Proof. We will prove that if |λ| ≠ 1, then λ ∈ ρ(S̄). In other words, σ(S̄) ⊂ Γ. And after that, we will give estimates of the resolvent R(λ, S̄)f̄ for a given sequence f ∈ l^ν_∞(X). To study the invertibility of the operator (λ − S̄), we consider the non-homogeneous linear difference equation

x(n + 1) − λx(n) = f(n), n ∈ N.  (3.2)
To prove that λ ∈ ρ(S̄) for each |λ| ≠ 1 we will show that this equation (3.2) has a unique solution x ∈ l^ν_∞(X) modulo c^ν_0(X) for a given f ∈ l^ν_∞(X). We first consider the case |λ| < 1. In this case, we will use the Variation of Constants Formula

x(n) = λ^{n−1} x(1) + Σ_{k=1}^{n−1} λ^{n−1−k} f(k), n ∈ N.

Since the sequence f grows polynomially, the sum Σ_{k=1}^{n−1} λ^{n−1−k} f(k) is well defined for each n. Also, by |λ| < 1 the sequence {λ^{n−1} x(1)}_{n∈N} is in c^ν_0(X). Therefore, Eq. (3.2) has a unique solution

x_f(n) := Σ_{k=1}^{n−1} λ^{n−1−k} f(k), n ∈ N,

modulo c^ν_0(X).
Now suppose that g is any element in the class f̄. We will show that x̄_g = x̄_f. Equivalently, we have to show that whenever h ∈ c^ν_0(X), the sequence

{x_h(n)}_{n∈N} = {Σ_{k=1}^{n−1} λ^{n−1−k} h(k)}_{n∈N} ∈ c^ν_0(X).
In fact, as h ∈ c^ν_0(X), given ε > 0 there exists a natural number M such that for all k ≥ M,

‖h(k)‖/k^ν < ((1 − |λ|)/2) ε.

Therefore, for all n ≥ M + 1,

‖x_h(n)‖/n^ν ≤ Σ_{k=1}^{M−1} (|λ|^{n−1−k}/n^ν) ‖h(k)‖ + Σ_{k=M}^{n−1} (|λ|^{n−1−k}/n^ν) ‖h(k)‖
  ≤ (|λ|^n/n^ν) Σ_{k=1}^{M−1} |λ|^{−1−k} ‖h(k)‖ + Σ_{k=M}^{n−1} |λ|^{n−1−k} (‖h(k)‖/k^ν)
  ≤ (|λ|^n/n^ν) Σ_{k=1}^{M−1} |λ|^{−1−k} ‖h(k)‖ + Σ_{k=M}^{n−1} |λ|^{n−1−k} ((1 − |λ|)/2) ε
  ≤ (|λ|^n/n^ν) Σ_{k=1}^{M−1} |λ|^{−1−k} ‖h(k)‖ + ε/2.
As M is a fixed natural number and |λ| < 1, there exists a natural number K ≥ M + 1 such that for all n ≥ K,

(|λ|^n/n^ν) Σ_{k=1}^{M−1} |λ|^{−1−k} ‖h(k)‖ ≤ ε/2.

Consequently, given any ε > 0 there exists a number K such that for all n ≥ K,

‖x_h(n)‖/n^ν ≤ ε.

This means

lim_{n→∞} ‖x_h(n)‖/n^ν = 0.
By this we have proved that x̄_f = x̄_g whenever f̄ = ḡ. Namely, we have shown that if |λ| < 1, then x̄_f = (λ − S̄)⁻¹ f̄. In other words, λ ∈ ρ(S̄). Moreover, for any representative g of the class f̄,

‖R(λ, S̄)f̄‖_ν = ‖x̄_f‖_ν = inf_{g∈f̄} ‖x_g‖_ν ≤ ‖x_g‖_ν
  = sup_{n∈N} ‖Σ_{k=1}^{n−1} λ^{n−1−k} g(k)‖ / n^ν
  ≤ sup_{n∈N} Σ_{k=1}^{n−1} |λ|^{n−1−k} ‖g(k)‖/k^ν
  ≤ sup_{n∈N} Σ_{k=1}^{n−1} |λ|^{n−1−k} ‖g‖_ν
  ≤ ‖g‖_ν / (1 − |λ|).

Finally, as g is any representative of the class f̄, we have

‖R(λ, S̄)f̄‖ ≤ inf_{g∈f̄} ‖g‖_ν/(1 − |λ|) = ‖f̄‖_ν/(1 − |λ|).
Next, we consider the case |λ| > 1. We can verify that the formula

x(n) = λ^{n−1} x(1) − Σ_{k=n}^{∞} λ^{n−k−1} f(k), n ∈ N,  (3.3)

gives the general solution to Eq. (3.2). In fact, since |λ| > 1 and f grows polynomially, the series Σ_{k=n}^{∞} λ^{n−k−1} f(k) is absolutely convergent for each n ∈ N. Moreover, by (3.3), for each n ∈ N,

x(n + 1) = λ^n x(1) − Σ_{k=n+1}^{∞} λ^{n−k} f(k) = λ^n x(1) − Σ_{k=n}^{∞} λ^{n−k} f(k) + f(n) = λx(n) + f(n).

Given f ∈ l^ν_∞(X), the only solution of Eq. (3.2) in l^ν_∞(X) is

x_f := {− Σ_{k=n}^{∞} λ^{n−k−1} f(k)}_{n∈N}.
Indeed,

‖x_f‖_ν ≤ sup_{n∈N} Σ_{k=n}^{∞} |λ|^{n−k−1} ‖f(k)‖ / n^ν
  = sup_{n∈N} Σ_{k=n}^{∞} |λ|^{n−k−1} (k^ν/n^ν) (‖f(k)‖/k^ν)
  ≤ sup_{n∈N} Σ_{k=n}^{∞} |λ|^{n−k−1} (k^ν/n^ν) ‖f‖_ν
  = sup_{n∈N} Σ_{j=1}^{∞} |λ|^{−j} (1 + (j − 1)/n)^ν ‖f‖_ν
  ≤ Σ_{j=1}^{∞} |λ|^{−j} j^ν ‖f‖_ν.  (3.4)

We are interested in the behavior of Σ_{j=1}^{∞} |λ|^{−j} j^ν as |λ| gets closer and closer to 1 (and ∞, respectively). To this end, we note that for each j ∈ N,

|λ|^{−j−1} j^ν ≤ ∫_j^{j+1} |λ|^{−t} t^ν dt.

Therefore,

Σ_{j=1}^{∞} |λ|^{−j} j^ν ≤ |λ| ∫_0^{∞} |λ|^{−t} t^ν dt = |λ| ∫_0^{∞} e^{−t·ln(|λ|)} t^ν dt = |λ| ν! / (ln(|λ|))^{ν+1}.

Consequently, since ln(|λ|) is equivalent to |λ| − 1 as |λ| is close to 1, there exists a number C independent of f such that for 1 < |λ| < 2,

‖x_f‖_ν ≤ |λ| ν!/(ln(|λ|))^{ν+1} ‖f‖_ν ≤ C/(|λ| − 1)^{ν+1} ‖f‖_ν.  (3.5)
Similarly as in the previous case where |λ| < 1, we will prove that x̄_f = x̄_g whenever f̄ = ḡ. Namely, if h̄ = 0, then x̄_h = 0. In fact, for a given ε > 0, there exists a natural number N such that for all k ≥ N, ‖h(k)‖/k^ν < ε. Therefore, for all n ≥ N,

‖x_h(n)‖/n^ν ≤ Σ_{k=n}^{∞} |λ|^{n−k−1} ‖h(k)‖ / n^ν ≤ Σ_{k=n}^{∞} |λ|^{n−k−1} ε k^ν / n^ν ≤ ε |λ|^{−1} Σ_{j=0}^{∞} |λ|^{−j} (1 + j/n)^ν.

Since |λ| > 1 is fixed, the series Σ_{j=0}^{∞} |λ|^{−j} (1 + j/n)^ν is convergent, so this shows that

lim_{n→∞} ‖x_h(n)‖/n^ν = 0.

That is, x̄_h = 0. This yields that λ ∈ ρ(S̄) and x̄_f = (λ − S̄)⁻¹ f̄. Finally, with (3.5) the proof of the lemma is complete.
Definition 3.2. Let f ∈ l^ν_∞(X) be a given sequence in X. Then its spectrum is defined to be the set of all complex ξ₀ ∈ Γ such that the complex function R(λ, S̄)f̄ has no analytic extension to any neighborhood of ξ₀. The spectrum of a sequence f ∈ l^ν_∞(X) will be denoted by σ_ν(f).

Before we proceed we introduce some notations:

D_{|z|>1} := {z ∈ C : |z| > 1},  B(ξ₀, δ) := {z ∈ C : |z − ξ₀| < δ}.

Lemma 3.3. Let f ∈ l^ν_∞(X). Then ξ₀ ∈ Γ is in σ_ν(f) if and only if the function

g : D_{|z|>1} ∋ λ ↦ R(λ, S̄)f̄ ∈ Y
cannot be extended to an analytic function in any neighborhood of ξ 0 .
Proof. It suffices to show that if g can be extended to an analytic function in a neighborhood of ξ₀, then ξ₀ ∉ σ_ν(f). Suppose that g(λ) = h(λ) for all λ ∈ D_{|z|>1} ∩ B(ξ₀, δ), where h is an analytic function in a small disk B(ξ₀, δ). Then, the function (λ − S̄)h(λ) is analytic in B(ξ₀, δ). We observe that, for λ ∈ D_{|z|>1} ∩ B(ξ₀, δ),

(λ − S̄)h(λ) = (λ − S̄)g(λ) = (λ − S̄)R(λ, S̄)f̄ = f̄.

That is, the function (λ − S̄)h(λ) is constant in an open and connected subset D_{|z|>1} ∩ B(ξ₀, δ) of the disk B(ξ₀, δ). Hence, (λ − S̄)h(λ) = f̄ for all λ in B(ξ₀, δ). In particular, when |λ| < 1 and λ ∈ B(ξ₀, δ), h(λ) = R(λ, S̄)f̄. That means h(λ) is an analytic extension of the function R(λ, S̄)f̄, as a complex function off the unit circle {z ∈ C : |z| = 1}, to a neighborhood of ξ₀.

Proposition 3.4. Let f ∈ l^ν_∞(X) be a given sequence in X. Then the following assertions are valid:
i) σ_ν(f) is a closed subset of Γ;
ii) The sequence f is in c^ν_0(X) if and only if σ_ν(f) = ∅;
iii) If ξ₀ is an isolated element of σ_ν(f), then the point ξ₀ is a pole of the complex function R(λ, S̄)f̄ of order up to ν + 1.

Proof. Part (i) is obvious from the definition of the spectrum. Part (ii): Clearly, if f ∈ c^ν_0(X), then σ_ν(f) = ∅. Conversely, if σ_ν(f) = ∅, then the complex function f̃(λ) := R(λ, S̄)f̄ is an entire function. Moreover, it is bounded. In fact, from (3.4), for large |λ| > 2,

‖f̃(λ)‖_ν ≤ ‖x_f‖ ≤ Σ_{j=1}^{∞} |λ|^{−j} j^ν ‖f‖_ν = |λ|^{−1} Σ_{k=0}^{∞} |λ|^{−k} (k + 1)^ν ‖f‖_ν ≤ |λ|^{−1} Σ_{k=0}^{∞} ((k + 1)^ν/2^k) ‖f‖_ν.

Since the series Σ_{k=0}^{∞} (k + 1)^ν/2^k is convergent, we have

lim_{|λ|→∞} ‖f̃(λ)‖_ν = 0.
By the Liouville Theorem, this complex function f̃(λ) := R(λ, S̄)f̄ is the zero function, so f̄ = 0, since R(λ, S̄) is injective for each large |λ|. That means f ∈ c^ν_0(X).

Part (iii): Without loss of generality we may assume that ξ₀ = 1. Consider λ in a small neighborhood of 1 in the complex plane. We will express λ = e^z with |z| < δ₀. Choose a small δ₀ > 0 such that if |z| < δ₀, then

1/|1 − |λ|| ≤ 2/|ℜz|.

It follows from Lemma 3.1 that for 0 < |ℜz| < δ₀,

‖R(λ, S̄)x̄‖ ≤ C/|1 − |λ||^{ν+1} ‖x̄‖ ≤ C 2^{ν+1}/|ℜz|^{ν+1} ‖x̄‖.
Set f(z) = R(e^z, S̄)x̄ with |z| < δ₀. Since 1 is a singular point of R(λ, S̄)x̄, 0 is a singular point of f(z) in {|z| < δ₀}. For each n ∈ Z and 0 < r < δ₀, we have

‖(1/2πi) ∮_{|z|=r} (1 + z²/r²)^{ν+1} f(z) dz‖ ≤ (1/2π) ∮_{|z|=r} |1 + z²/r²|^{ν+1} ‖f(z)‖ |dz|.

If z = re^{iϕ}, where ϕ is real, one has

|1 + z²/r²|^{ν+1} = |1 + e^{2iϕ}|^{ν+1} = |e^{−iϕ} + e^{iϕ}|^{ν+1} = (2|cos ϕ|)^{ν+1} = 2^{ν+1} r^{−(ν+1)} |ℜz|^{ν+1}.

Therefore,

‖(1/2πi) ∮_{|z|=r} (1 + z²/r²)^{ν+1} f(z)/z^{n+1} dz‖
  ≤ (1/2π) ∮_{|z|=r} 2^{ν+1} r^{−n−ν−2} |ℜz|^{ν+1} · (C 2^{ν+1}/|ℜz|^{ν+1}) ‖x̄‖ |dz|
  = (C 4^{ν+1} r^{−n−ν−2}/2π) ∮_{|z|=r} |dz| · ‖x̄‖
  = C 4^{ν+1} r^{−n−ν−1} ‖x̄‖.  (3.6)
Consider the Laurent series of f(z) at z = 0,

f(z) = Σ_{n=−∞}^{∞} a_n z^n,

where

a_n = (1/2πi) ∮_{|z|=r} f(z)/z^{n+1} dz, n ∈ Z.

It follows that for each n ∈ Z,

(1/2πi) ∮_{|z|=r} (1 + z²/r²)^{ν+1} f(z)/z^{n+1} dz
  = (1/2πi) ∮_{|z|=r} Σ_{k=0}^{ν+1} [(ν + 1)!/(k!(ν + 1 − k)!)] r^{−2k} f(z)/z^{n+1−2k} dz
  = Σ_{k=0}^{ν+1} [(ν + 1)!/(k!(ν + 1 − k)!)] r^{−2k} (1/2πi) ∮_{|z|=r} f(z)/z^{n+1−2k} dz
  = Σ_{k=0}^{ν+1} [(ν + 1)!/(k!(ν + 1 − k)!)] r^{−2k} a_{n−2k}.
This, together with (3.6), shows

‖Σ_{k=0}^{ν+1} [(ν + 1)!/(k!(ν + 1 − k)!)] r^{−2k} a_{n−2k}‖ ≤ C 4^{ν+1} r^{−n−ν−1} ‖x̄‖.

Multiplying both sides by r^{2ν} gives

‖Σ_{k=0}^{ν+1} [(ν + 1)!/(k!(ν + 1 − k)!)] r^{2ν−2k} a_{n−2k}‖ ≤ C 4^{ν+1} r^{ν−n−1} ‖x̄‖.

Observe that the left side is a polynomial in r whose zero-power term is a_{n−2ν}. Therefore, when ν − n − 1 ≥ 1, letting r get closer and closer to zero forces a_{n−2ν} to be zero. That is, for all ν ≥ n + 2, the coefficients a_{n−2ν} = 0. This yields that a_j = 0 for all j ≤ −ν − 2. In other words, z = 0, or λ = 1, is a pole of the complex function f̃(λ) := R(λ, S̄)f̄ of order up to ν + 1.
Before proceeding we introduce a notation: Let 0 ≠ z ∈ C be such that z = re^{iϕ} with reals r, ϕ, and let F(z) be any complex function. Then we define

R(λ, S̄)f̄ = Σ_{j=−ν−1}^{∞} a_j (λ − ξ₀)^{j+1}.

If (3.8) is satisfied, then for any k ≥ 1 the following is also valid:

lim_{λ↓z} (λ − ξ₀)^k R(λ, S̄)f̄ = 0.

If we let k take on the values 1, 2, . . . , ν + 1, then we see that a_j = 0 for all j = −ν − 1, −ν, . . . . That is, the function R(λ, S̄)f̄ is zero (so analytic) in a neighborhood of ξ₀. From the properties of analytic functions this function must be zero in the connected open subset of its domain as well.
Applications
4.1. Asymptotic behavior of polynomially bounded solutions of difference equations. In this subsection we will apply the results obtained in the previous section to study the polynomially bounded solutions of difference equations of the form
x(n + 1) = T x(n) + F (n), n ∈ N, (4.1)
where T is a bounded linear operator in X and F ∈ c^ν_0(X).

Definition 4.1. A bounded operator T from a Banach space X to itself is said to be ν-polynomially power bounded if

sup_{n∈N} ‖T^n‖/n^ν < ∞,

where ν is a nonnegative integer.
Lemma 4.2. Let x ∈ l^ν_∞(X) be a solution of (4.1). Then

σ_ν(x) ⊂ σ(T) ∩ Γ.  (4.2)

Moreover, for λ ∈ ρ(S̄) ∩ ρ(T̄),

R(λ, S̄)x̄ = R(λ, T̄)x̄.  (4.3)
Proof. Consider the operator of multiplication by T in the space l^ν_∞(X). It is easy to see that this operator is bounded and preserves c^ν_0(X), so it induces an operator T̄ in the quotient space l^ν_∞(X)/c^ν_0(X). Moreover, σ(T̄) ⊂ σ(T). Since x is a solution of (4.1), if ξ₀ ∈ Γ and ξ₀ ∉ σ(T), then there exists a neighborhood U of ξ₀ (in C) such that for any λ ∈ U with |λ| ≠ 1,
R(λ, T̄)x̄ = R(λ, S̄)x̄.  (4.4)

As the left-hand side is analytic in the neighborhood U of ξ₀, by (4.4), the complex function R(λ, S̄)x̄ has an analytic extension to the neighborhood U of ξ₀, that is, ξ₀ ∉ σ_ν(x). In other words, σ_ν(x) ⊂ σ(T) ∩ Γ. Moreover, (4.3) is proved.
We will prove the following, which extends the famous Katznelson-Tzafriri Theorem to the case of a ν-polynomially power bounded operator.

Theorem 4.3. Let T be a ν-polynomially power bounded operator in X such that σ(T) ∩ Γ ⊂ {1}. Then

lim_{n→∞} ‖(T − I)^{ν+1} T^n‖/n^ν = 0.  (4.5)

Proof. We consider the sequence x := {x(n) := T^n}_{n=1}^{∞} in L(X). Obviously, {x(n)}_{n=1}^{∞} ∈ l^ν_∞(L(X)). Let us denote by T̃ the operator of multiplication by T in Y := L(X). Then, we have an equation in Y:

x(n + 1) = T̃ x(n), n ∈ N.
Note that σ(T̃) ⊂ σ(T). Therefore, by Lemma 4.2, we have σ_ν(x) ⊂ {1}. For each λ ∈ ρ(S̄), we have the identity

R(λ, S̄)S̄x̄ = λR(λ, S̄)x̄ − x̄.

By a simple induction we can show that for each j ∈ N,

R(λ, S̄)S̄^j x̄ = λ^j R(λ, S̄)x̄ − P(λ, x̄, S̄x̄),

where P(λ, x̄, S̄x̄) is a polynomial in λ, x̄, S̄x̄. Hence,

R(λ, S̄)(S̄ − I)^{ν+1} x̄ = (λ − 1)^{ν+1} R(λ, S̄)x̄ + Q(λ, x̄, S̄x̄),

where Q(λ, x̄, S̄x̄) is a polynomial in λ, x̄, S̄x̄. Note that σ_ν((S − I)^{ν+1} x) ⊂ σ_ν(x) ⊂ {1}. By Proposition 3.4, the point 1 is a pole of R(λ, S̄)x̄ of order up to ν + 1, so (λ − 1)^{ν+1} R(λ, S̄)x̄, and hence R(λ, S̄)(S̄ − I)^{ν+1} x̄, is extendable analytically to a neighborhood of 1. Therefore, for the sequence y := (S − I)^{ν+1} x we have σ_ν(y) = ∅. By Proposition 3.4, y = (S − I)^{ν+1} x ∈ c^ν_0(L(X)), that is, (4.5) is valid. There are many extensions of this theorem (see e.g. [15] and its references). An elementary proof of this theorem is given in [11]. In Theorem 4.3, when ν = 0, we obtain the above mentioned Katznelson-Tzafriri Theorem.
Below is an individual version of the Katznelson-Tzafriri Theorem for a possibly non-ν-polynomially power bounded operator T. Consider the equation

x(n + 1) = T x(n), n ∈ N,  (4.7)

where T ∈ L(X). Each solution {x(n)}_{n=1}^{∞} of this equation (4.7) is of the form x(n) = T^{n−1} x₀, n ∈ N, for some x₀ ∈ X.

Theorem 4.6. Let T ∈ L(X), and let x be a ν-polynomially bounded solution of Eq. (4.1). Assume further that the following conditions are satisfied:
i) σ(T) ∩ Γ is countable;
ii) For each ξ₀ = e^{iφ₀} ∈ σ(T) ∩ Γ,

{z = re^{iφ₀} : r > 1} ⊂ ρ(T);  (4.8)

lim_{λ↓ξ₀} (λ − ξ₀) R(λ, T̄)x̄ = 0.  (4.9)

Then

lim_{n→∞} ‖x(n)‖/n^ν = 0.  (4.10)
Proof. Since σ_ν(x) ⊂ σ(T) ∩ Γ, if σ(T) ∩ Γ is empty, then the claim of the theorem is clear. Next, if it is not empty, then from the countability of σ_ν(x), as a closed subset of Γ it must have an isolated point, say ξ₀. However, by condition (4.9) and Corollary 3.5, the set of non-removable singular points of the complex function R(λ, S̄)x̄ = R(λ, T̄)x̄ cannot have an isolated point. That means σ_ν(x) must be the empty set, so by Proposition 3.4 the sequence x = {x(n)}_{n=1}^{∞} must be in c^ν_0(X), that is, (4.10) holds.
The following result gives a sufficient condition for the stability of polynomially bounded solutions; it is well known as the Arendt-Batty-Ljubich-Vu Theorem (see [3]).

Proof. It is clear that x(n) = T^n x is a solution of Eq. (4.7). By the Spectral Radius Theorem, the spectral radius r_σ(T) of T must satisfy r_σ(T) ≤ 1 because of the polynomial boundedness of T, so (4.8) is satisfied. By Theorem 4.6, we only need to check condition (4.9). We have

0 ≤ lim_{λ↓ξ₀} ‖(λ − ξ₀) R(λ, T̄)x̄‖_ν ≤ lim_{λ↓ξ₀} sup_{n∈N} ‖(λ − ξ₀) R(λ, T) T^n x‖/n^ν
  ≤ lim_{λ↓ξ₀} sup_{n∈N} (‖T^n‖/n^ν) ‖(λ − ξ₀) R(λ, T)x‖
  = sup_{n∈N} (‖T^n‖/n^ν) · lim_{λ↓ξ₀} ‖(λ − ξ₀) R(λ, T)x‖.

Since T is ν-polynomially power bounded, sup_{n∈N} (‖T^n‖/n^ν) is finite, so (4.11) yields that condition (4.9) is satisfied.
Asymptotic behavior of solutions of fractional difference equations.
Consider fractional difference equations of the form ∆ α x(n) = T x(n) + y(n), n ∈ N, (4.13)
where 0 < α ≤ 1, T ∈ L(X) and y ∈ c ν 0 (X). Definition 4.8. ([10, Definition 3.1]) Let T be a bounded operator defined on a Banach space X and α > 0. We call T the generator of an α-resolvent sequence if there exists a sequence of bounded and linear operator {S α (n)} n∈N ⊂ L(X) that satisfies the following properties i) S α (0) = I; ii) S α (n + 1) = k α (n + 1)I + T n j=0 k α (n − j)S α (j), for all n ∈ N. As shown in [10, Theorem 3.4] and a note before it, S α is determined by one of the following formulas:
S_α(n) = Σ_{j=0}^{n} [Γ(n−j+(j+1)α) / (Γ(n−j+1) Γ(jα+α))] T^j, or
S_α(n) = (1/2πi) ∫_C z^n ((z−1)^α z^{1−α} − T)^{−1} dz,
where C is a circle, centered at the origin of the complex plane, that encloses all spectral values of (z−1)^α z^{1−α} − T. Recall that the Z-transform of a sequence x := {x(n)}_{n=0}^∞ is defined as
x̃(z) := Σ_{j=0}^{∞} x(j) z^{−j}. (4.14)
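As a concrete cross-check of Definition 4.8: the kernel k^α is not defined in this excerpt, so the sketch below assumes the standard fractional-sum kernel k^α(n) = Γ(n+α)/(Γ(α)Γ(n+1)) used in [10], and compares, for scalar T, the recursion of Definition 4.8 against the closed Gamma-function formula quoted after it.

```python
import math

def k_alpha(n, a):
    # Assumed kernel: k^alpha(n) = Gamma(n + alpha) / (Gamma(alpha) Gamma(n + 1)),
    # following [10]; it is not defined in this excerpt.
    return math.gamma(n + a) / (math.gamma(a) * math.gamma(n + 1))

def S_recursive(N, a, T):
    # Scalar version of Definition 4.8:
    # S(0) = 1, S(n+1) = k^a(n+1) + T * sum_{j<=n} k^a(n-j) S(j).
    S = [1.0]
    for n in range(N):
        S.append(k_alpha(n + 1, a)
                 + T * sum(k_alpha(n - j, a) * S[j] for j in range(n + 1)))
    return S

def S_closed(n, a, T):
    # Closed Gamma-function formula quoted after Definition 4.8 (scalar T).
    return sum(math.gamma(n - j + (j + 1) * a)
               / (math.gamma(n - j + 1) * math.gamma(j * a + a)) * T ** j
               for j in range(n + 1))
```

For instance, with α = 1/2 and T = 0.3 both expressions give S_α(1) = α + T = 0.8, and they agree for all n.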
Let us denote D_{|z|>1} := {z ∈ C : |z| > 1} and D_{|z|<1} := {z ∈ C : |z| < 1}. For each {x(n)}_{n∈N} ∈ l_∞^ν(X) we set x(0) = 0, so that some properties of the Z-transform of sequences can be stated as follows:
Proposition 4.11. Let {x(n)}_{n∈N} and {y(n)}_{n∈N} be in l_∞^ν(X). Then
i) x̃(z) is a complex function of z ∈ D_{|z|>1};
ii) (Sx)˜(z) = z x̃(z) − z x(0);
iii) (x * y)˜(z) = x̃(z) · ỹ(z).
Proof. For the proof see e.g. [8,Chapter 6].
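Properties ii) and iii) of Proposition 4.11 can be checked numerically on finitely supported sequences, for which the truncated Z-transform is exact; a minimal sketch:

```python
def ztransform(x, z):
    # Truncated Z-transform x~(z) = sum_j x(j) z^{-j}; exact for finitely
    # supported sequences, since the tail is identically zero.
    return sum(xj * z ** (-j) for j, xj in enumerate(x))

def convolve(x, y):
    # Cauchy convolution (x * y)(n) = sum_{j=0}^{n} x(j) y(n - j).
    out = [0.0] * (len(x) + len(y) - 1)
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            out[i + j] += xi * yj
    return out
```

With x(0) = 0 as in the text, the shift rule ii) and the convolution rule iii) hold exactly for such sequences.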
To study fractional difference equations (4.13) we will need the following analog of [12,Lemma 3.3]:
Lemma 4.12. Let {x(n)}_{n∈N} ∈ l_∞^ν(X). If the Z-transform x̃(z) of the sequence x has a holomorphic extension to a neighborhood of z_0 ∈ Γ, then z_0 ∉ σ_ν(x).
Proof. Assume that x̃(z) (with |z| > 1) can be extended to a holomorphic function g_0(z) in B(z_0, δ) with a sufficiently small positive δ. We will show that R(z, S)x (with |z| > 1) has a holomorphic extension to a neighborhood of z_0. Setting x(0) = 0, we define a sequence {g_k(z)}_{k=1}^∞ as follows:
g_k(z) := z^{k−1} x̃(z) − Σ_{j=0}^{k−1} z^{k−1−j} x(j), k ∈ N. (4.15)
We are going to prove that this defines a bounded function g(z) for z in a small disk B(z_0, δ) := {z ∈ C : |z − z_0| < δ}, and then apply a necessary and sufficient condition for a locally bounded function to be holomorphic in order to prove that R(z, S)x is holomorphic. To prove the boundedness of g(z) in B(z_0, δ) we will use a special maximum principle as in [12]. We have
R(z, S)x := (z − S)^{−1} x = z^{−1} (I − z^{−1} S)^{−1} x = z^{−1} Σ_{n=0}^{∞} z^{−n} S^n x = Σ_{n=0}^{∞} z^{−n−1} S^n x.
Therefore, for z ∈ B(z_0, δ) ∩ D_{|z|>1} and for each k ∈ N,
[R(z, S)x](k) = Σ_{n=0}^{∞} z^{−n−1} x(n+k) = z^{−1} ( z^k x̃(z) − Σ_{j=0}^{k−1} z^{k−j} x(j) ) = g_k(z).
By (3.4) and (3.5), for z ∈ B(z_0, δ) ∩ D_{|z|>1} there is a constant C such that
sup_{k∈N} ‖g_k(z)‖/k^ν = ‖g(z)‖_ν = ‖ { Σ_{n=0}^{∞} z^{−n−1} x(n+k) }_{k=1}^{∞} ‖_ν ≤ C |z|^ν / (|z| − 1)^{ν+1}. (4.16)
On the other hand, for z ∈ B(z_0, δ) ∩ D_{|z|<1} we have, for all k ∈ N,
‖g_k(z)‖ ≤ |z|^{k−1} ‖g_0(z)‖ + Σ_{j=0}^{k−1} |z|^{k−1−j} ‖x(j)‖
≤ |z|^{k−1} ‖g_0(z)‖ + Σ_{j=0}^{k−1} |z|^{k−1−j} j^ν ‖x‖_ν
≤ ( sup_{z∈B(z_0,δ)} ‖g_0(z)‖ + ‖x‖_ν ) Σ_{j=0}^{k−1} |z|^{k−1−j} j^ν = M Σ_{j=0}^{k−1} |z|^{k−1−j} j^ν,
where M := sup_{z∈B(z_0,δ)} ‖g_0(z)‖ + ‖x‖_ν. Hence, for all k ∈ N,
‖g_k(z)‖/k^ν ≤ M Σ_{j=0}^{k−1} |z|^{k−1−j} (j/k)^ν ≤ M Σ_{j=0}^{k−1} |z|^{k−1−j} ≤ M / (1 − |z|). (4.17)
By (4.16) and (4.17) we have proved that there is a positive number K such that for z ∈ B(z_0, δ) and each k ∈ N,
‖g_k(z)‖/k^ν ≤ K / ||z| − 1|^{ν+1}. (4.18)
Applying the maximum principle of Lemma 2.2 as in [12] to the functions g_k(z)/k^ν gives the boundedness of g_k(z)/k^ν in B(z_0, δ/2). In fact, it is clear that for each k ∈ N the function g_k(z)/k^ν is holomorphic in B(z_0, δ); therefore g_k(z)/k^ν is bounded in B(z_0, δ/2) by a number independent of k, so g(z) is bounded in B(z_0, δ/2). We are now ready to apply Theorem 2.1, a criterion for a locally bounded function to be holomorphic. Indeed, since the family W := {x* ∘ p_k : x* ∈ X*, p_k : {x_n} ↦ x_k, k ∈ N} is separating and x* ∘ p_k(g(·)) = x*(g_k(·)) is holomorphic, the complex function g(z) is holomorphic for z ∈ B(z_0, δ/2).
At this point we have shown that g(z) is holomorphic for z ∈ B(z_0, δ/2), and g(z) = R(z, S)x for |z| > 1. This yields that R(z, S)x has a holomorphic extension g(z) to a neighborhood of z_0, which completes the proof of the lemma.
Definition 4.13. We denote by σ_{Z,ν}(x) the set of all points ξ_0 ∈ Γ such that the Z-transform of the sequence x := {x(n)}_{n∈N} ∈ l_∞^ν(X) cannot be extended holomorphically to any neighborhood of ξ_0, and call this set the Z-spectrum of the sequence x.
Even in the simplest case ν = 0, σ_ν(x) may differ from σ_{Z,ν}(x). In fact, the numerical sequence x := {x(n)}_{n∈N} ∈ l_∞^0(R) given by
x(n) := 0 for n = 0, and x(n) := 1/n for n ∈ N,
is in c_0(R). Obviously, x̄ = 0 in the quotient space, so σ(x) = ∅. However, 1 ∈ σ_{Z,ν}(x) because x̃(z) = Σ_{j=1}^{∞} z^{−j}/j cannot be extended holomorphically to a neighborhood of 1. In general, we only have the following inclusion.
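For this example the Z-transform has the closed form x̃(z) = Σ_{j≥1} z^{−j}/j = −log(1 − 1/z) for |z| > 1, whose logarithmic branch point at z = 1 is exactly the obstruction to a holomorphic extension; a quick numerical check of the closed form at z = 2:

```python
import math

def ztransform_tail(z, terms=500):
    # x~(z) = sum_{j>=1} z^{-j} / j for the sequence x(n) = 1/n, x(0) = 0.
    # For |z| > 1 the series converges geometrically to -log(1 - 1/z).
    return sum(z ** (-j) / j for j in range(1, terms + 1))
```

At z = 2 the series sums to −log(1 − 1/2) = log 2, while for z ↓ 1 the value diverges logarithmically, reflecting the singularity at 1.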
Corollary 4.14. For each x := {x(n)}_{n∈N} ∈ l_∞^ν(X), σ_ν(x) ⊂ σ_{Z,ν}(x).
Proof. The corollary is an immediate consequence of Lemma 4.12 and the definitions of the spectra mentioned in the statement.
Before we proceed, we introduce the sets Σ_0 and Σ := Γ\Σ_0 defined below. The following lemma (Lemma 4.15) shows that σ_ν(S_α) ⊂ Σ.
Proof. It suffices to show that if z_0 ∈ Σ_0, then z_0 ∉ σ_ν(S_α). Taking the Z-transform in the recursion of Definition 4.8 gives
z S̃_α(z) − z S_α(0) = (S S_α)˜(z) = z k̃^α(z) I − z k^α(0) I + k̃^α(z) · T S̃_α(z).
Therefore, for z ∈ D_{|z|>1},
(z − k̃^α(z) T) S̃_α(z) = z S_α(0) + z k̃^α(z) I − z k^α(0) I.
Let z_0 ∈ Σ_0. Then (z − k̃^α(z) T)^{−1} exists. Hence,
S̃_α(z) = (z − k̃^α(z) T)^{−1} ( z S_α(0) + z k̃^α(z) I − z k^α(0) I ).
It is clear that S̃_α(z) has a holomorphic extension to a neighborhood of z_0, because both k̃^α(z) and (z − k̃^α(z) T)^{−1} are holomorphic in a neighborhood of z_0; thus z_0 ∉ σ_{Z,ν}(S_α). By Corollary 4.14 this yields z_0 ∉ σ_ν(S_α). This completes the proof of the lemma.
Lemma 3.1. Assume that S̃ is the operator induced by the translation S in the quotient space l_∞^ν(X)/c_0^ν(X). Then σ(S̃) ⊂ Γ.
Corollary 3.5. Let f ∈ l_∞^ν(X), and let ξ_0 ∈ Γ be an isolated point of σ_ν(f). Then the singular point ξ_0 of R(λ, S̃)f̃ is removable, and the complex function R(λ, S̃)f̃ is zero on the connected open subset of its domain that contains ξ_0.

Proof. By Proposition 3.4, ξ_0 is a pole of order at most ν + 1. Considering the Laurent series of R(λ, S̃)f̃ in a neighborhood of ξ_0, we have
For each |λ| = 1 we have
R(λ, S̃)S̃x̃ = R(λ, S̃)T̃x̃ + R(λ, S̃)F̃ = T̃ R(λ, S̃)x̃.
This, together with the identity λ R(λ, S̃)x̃ − x̃ = R(λ, S̃)S̃x̃, shows that
λ R(λ, S̃)x̃ − x̃ = T̃ R(λ, S̃)x̃.
Therefore, x̃ = λ R(λ, S̃)x̃ − T̃ R(λ, S̃)x̃ = (λ − T̃) R(λ, S̃)x̃.
Theorem 4.3. Let T ∈ L(X) be ν-polynomially bounded such that σ(T) ∩ Γ ⊂ {1}, where ν is a nonnegative integer. Then
lim_{n→∞} (1/n^ν) ‖(T − I)^{ν+1} T^n‖ = 0. (4.5)
By Proposition 3.4, 1 is a pole of order at most ν + 1 of the complex function g(λ) := R(λ, S̃)x̃, so the complex function λ ↦ R(λ, S̃)(S̃ − I)^{ν+1} x̃ has a holomorphic extension to a neighborhood of 1.
Remark 4.4. The famous Katznelson-Tzafriri Theorem (see [9]) is stated as follows: Let T ∈ L(X) satisfy sup_{n∈N} ‖T^n‖ < ∞ and σ(T) ∩ Γ ⊂ {1}. Then lim_{n→∞} ‖T^{n+1} − T^n‖ = 0.
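A minimal finite-dimensional illustration of Theorem 4.3 (with ν = 1; this example is not part of the original text): the Jordan block T = [[1,1],[0,1]] has σ(T) = {1}, T^n = [[1,n],[0,1]] so ‖T^n‖ grows linearly (T is 1-polynomially bounded but not power bounded), and (T − I)^2 = 0, so the limit (4.5) holds trivially.

```python
def matmul(A, B):
    # 2x2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(A, n):
    # n-th power by repeated multiplication.
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = matmul(R, A)
    return R

T = [[1, 1], [0, 1]]                       # Jordan block with spectrum {1}
N = [[T[i][j] - (1 if i == j else 0)       # N = T - I is nilpotent: N^2 = 0
      for j in range(2)] for i in range(2)]
```

Here (T − I)^2 T^n vanishes identically, so (1/n)‖(T − I)^2 T^n‖ → 0, while ‖T^n‖/n stays bounded.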
Theorem 4.5. Let T ∈ L(X) satisfy σ(T) ∩ Γ ⊂ {1}. Then, for each x_0 ∈ X,
lim_{n→∞} (1/n^ν) (T − I)^{ν+1} T^n x_0 = 0.
Proof. The proof is similar to that of Theorem 4.3. In analogy to the sequence x in the proof of Theorem 4.3 we can use the sequence {T^n x_0}_{n=1}^∞.

Let us consider homogeneous linear difference equations of the form
x(n + 1) = T x(n), n ∈ N. (4.7)
Corollary 4.7. Let T ∈ L(X) be ν-polynomially power bounded. Assume further that
i) σ(T) ∩ Γ is countable,
ii) for each ξ_0 ∈ σ(T) ∩ Γ and each x ∈ X, lim_{λ↓ξ_0} (λ − ξ_0) R(λ, T)x = 0.
Then, for each x ∈ X, lim_{n→∞} T^n x / n^ν = 0.
Theorem 4.9. Let α > 0 and let T be a bounded operator defined on a Banach space X. The following properties are equivalent:
i) T is the generator of an α-resolvent sequence {S_α(n)}_{n∈N};
ii) S_α(n) = Σ_{j=0}^{n} [Γ(n−j+(j+1)α) / (Γ(n−j+1) Γ(jα+α))] T^j for all n ∈ N;
iii) S_α(n) = (1/2πi) ∫_C z^n ((z−1)^α z^{1−α} − T)^{−1} dz for all n ∈ N, where C is a circle, centered at the origin, that encloses all spectral values of (z−1)^α z^{1−α} − T.
Theorem 4.10. ([10, Theorem 3.7]) Let 0 < α < 1 and let {y(n)}_{n∈N} be given. The unique solution of Eq. (4.13) with initial condition u(0) = x can be represented by
u(n) = S_α(n) u(0) + (S_α * y)(n − 1), for all n ∈ N.
Σ_0 := {z_0 ∈ Γ ⊂ C : (z − k̃^α(z) T)^{−1} exists, and (z − k̃^α(z) T)^{−1} and k̃^α(z) are holomorphic in a neighborhood of z_0}, and Σ := Γ\Σ_0.
Lemma 4.15. Let α > 0 and let S_α := {S_α(n)}_{n∈N} ⊂ L(X) be the resolvent of Eq. (4.13) satisfying sup_{n∈N} ‖S_α(n)‖/n^ν < ∞. Then σ_ν(S_α) ⊂ Σ.
Theorem 4.16. Let 0 < α ≤ 1 and Σ ⊂ {1}. Assume further that the α-resolvent S_α of Eq. (4.13) satisfies sup_{n∈N} ‖S_α(n)‖/n^ν < ∞. Then
lim_{n→∞} (1/n^ν) Σ_{k=0}^{ν+1} (−1)^{ν+1+k} \binom{ν+1}{k} S_α(n+k) = 0. (4.19)

Proof. As in the proof of Theorem 4.3, we can show that λ ↦ R(λ, S̃)(S̃ − I)^{ν+1} S̃_α has a holomorphic extension to a neighborhood of 1 in the complex plane. Moreover, since σ_ν(S_α) ⊂ Σ ⊂ {1}, this function has a holomorphic extension to a neighborhood of every point of Γ. Namely, σ_ν((S − I)^{ν+1} S_α) = ∅. Therefore, (S − I)^{ν+1} S_α ∈ c_0^ν(X). In other words,
lim_{n→∞} (1/n^ν) (S − I)^{ν+1} S_α(n) = 0. (4.20)
On the other hand,
(S − I)^{ν+1} = (−1)^{ν+1} (I − S)^{ν+1} = Σ_{k=0}^{ν+1} (−1)^{ν+1+k} \binom{ν+1}{k} S^k.
This, together with (4.20), yields (4.19). The theorem is proved.
Remark 4.17. When α = 1, Eq. (4.13) becomes x(n + 1) = (I + T) x(n) + y(n), n ∈ N. As shown in [10], S_α(n) = (I + T)^n, n ∈ N. With this formula, (4.19) becomes
lim_{n→∞} (1/n^ν) T^{ν+1} (I + T)^n = 0.
Hence, Theorem 4.16 coincides with Theorem 4.3 when α = 1. In other words, Theorem 4.16 is an extension of the Katznelson-Tzafriri Theorem to fractional difference equations (4.13).
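Under the same assumed kernel k^α(n) = Γ(n+α)/(Γ(α)Γ(n+1)) as before (not defined in this excerpt), one has k^1(n) = 1 for all n, and a scalar check confirms that S(n) = (1 + T)^n solves the recursion of Definition 4.8 for α = 1, in line with Remark 4.17:

```python
def S1(n, T):
    # Candidate alpha = 1 resolvent from Remark 4.17 (scalar T).
    return (1 + T) ** n

def recursion_rhs(n, T):
    # Definition 4.8 ii) with k^1 = 1 identically:
    # S(n + 1) = 1 + T * sum_{j=0}^{n} S(j).
    return 1 + T * sum(S1(j, T) for j in range(n + 1))
```

For scalar T the geometric sum gives 1 + T·((1+T)^{n+1} − 1)/T = (1+T)^{n+1}, so both sides agree for every n.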
The following result is taken from [2, Theorem 3.1] (see also [3, Theorem A.7]):

Theorem 2.1. Let Ω ⊂ C be open and connected, and let f : Ω → X be bounded on every compact subset of Ω. Assume further that W ⊂ X* is a separating subset such that x* ∘ f is holomorphic for all x* ∈ W. Then f is holomorphic.
References

[1] L. Abadias, A Katznelson-Tzafriri type theorem for Cesàro bounded operators, Studia Math. 234 (2016), no. 1, 59-82.
[2] W. Arendt, N. Nikolski, Vector-valued holomorphic functions revisited, Math. Z. 234 (2000), 777-805.
[3] W. Arendt, C.J.K. Batty, M. Hieber, F. Neubrander, Vector-valued Laplace Transforms and Cauchy Problems, Second edition, Monographs in Mathematics 96, Birkhäuser/Springer Basel AG, Basel, 2011.
[4] B. Basit, A.J. Pryde, Ergodicity and stability of orbits of unbounded semigroup representations, J. Aust. Math. Soc. 77 (2004), no. 2, 209-232.
[5] A.G. Baskakov, Harmonic and spectral analysis of power bounded operators and bounded semigroups of operators on a Banach space (Russian), Mat. Zametki 97 (2015), no. 2, 174-190; translation in Math. Notes 97 (2015), no. 1-2, 164-178.
[6] C.J.K. Batty, S.B. Yeates, Weighted and local stability of semigroups of operators, Math. Proc. Cambridge Philos. Soc. 129 (2000), no. 1, 85-98.
[7] G. Debruyne, J. Vindas, Complex Tauberian theorems for Laplace transforms with local pseudofunction boundary behavior, J. Anal. Math. 138 (2019), no. 2, 799-833.
[8] S. Elaydi, An Introduction to Difference Equations, Third edition, Springer-Verlag, Berlin-New York, 2005.
[9] Y. Katznelson, L. Tzafriri, On power bounded operators, J. Funct. Anal. 68 (1986), 313-328.
[10] C. Lizama, lp-maximal regularity for fractional difference equations on UMD spaces, Math. Nachr. 288 (2015), 2079-2092.
[11] Nguyen Van Minh, Asymptotic behavior of individual orbits of discrete systems, Proc. Amer. Math. Soc. 137 (2009), no. 9, 3025-3035.
[12] Nguyen Van Minh, On the asymptotic behaviour of Volterra difference equations, J. Difference Equ. Appl. 19 (2013), 1317-1330.
[13] T. Naito, Nguyen Van Minh, R. Miyazaki, Y. Hamaya, Boundedness and almost periodicity in dynamical systems, J. Difference Equ. Appl. 7 (2001), no. 4, 507-527.
[14] D. Seifert, Some improvements of the Katznelson-Tzafriri theorem on Hilbert space, Proc. Amer. Math. Soc. 143 (2015), no. 9, 3827-3838.
[15] Quoc Phong Vu, A short proof of Y. Katznelson's and L. Tzafriri's theorem, Proc. Amer. Math. Soc. 115 (1992), no. 4, 1023-1024.
[16] Quoc Phong Vu, Theorems of Katznelson-Tzafriri type for semigroups of operators, J. Funct. Anal. 103 (1992), 74-84.
LHC Benchmark Scenarios for the Real Higgs Singlet Extension of the Standard Model
25 May 2016
Tania Robens
TU Dresden
Institut für Kern-und Teilchenphysik
Zellescher Weg 19D-01069DresdenGermany
Tim Stefaniak
Department of Physics
Santa Cruz Institute for Particle Physics
University of California
95064Santa CruzCAUSA
(Dated: May 26, 2016)

These benchmark scenarios have also been presented in the framework of the LHC Higgs Cross Section Working Group.
* Electronic address: [email protected]
† Electronic address: [email protected]
We present benchmark scenarios for searches for an additional Higgs state in the real Higgs singlet extension of the Standard Model in Run 2 of the LHC. The scenarios are selected such that they fulfill all relevant current theoretical and experimental constraints, but can potentially be discovered at the current LHC run. We take into account the results presented in earlier work and update the experimental constraints from relevant LHC Higgs searches and signal rate measurements. The benchmark scenarios are given separately for the low mass and high mass region, i.e. the mass range where the additional Higgs state is lighter or heavier than the discovered Higgs state at around 125 GeV.
I. INTRODUCTION
The first run of the LHC at center-of-mass (CM) energies of 7 and 8 TeV was completed in 2015. Its remarkable success is highlighted by the breakthrough discovery of a scalar boson in July 2012 and the measurements of its coupling properties, which thus far are well compatible with the interpretation in terms of the Higgs boson of the Standard Model (SM) Higgs mechanism [1-5]. The combination of the Higgs mass measurements performed by ATLAS and CMS yields [6]
m_H = 125.09 ± 0.21 (stat.) ± 0.11 (syst.) GeV. (1)
If the discovered particle is indeed the Higgs boson of the SM, its mass measurement determines the last unknown ingredient of this model, as all other properties of the electroweak sector then follow directly from theory. In the coming years a thorough investigation of the Higgs boson's properties is needed in order to identify whether the SM Higgs sector is indeed complete, or instead, the structure of a more involved Higgs sector is realized. This includes detailed and accurate measurements of its coupling strengths and CP structure at the LHC and ultimately at future experimental facilities for Higgs boson precision studies.
Complementary to this, collider searches for additional Higgs bosons need to be continued over the full accessible mass range. The discovery of another Higgs boson would inevitably prove the existence of a non-minimal Higgs sector.
In this work we consider the simplest extension of the SM Higgs sector, in which an additional real scalar field is added that is neutral under all quantum numbers of the SM gauge groups [7,8] and acquires a vacuum expectation value (VEV). This model has been widely studied in the literature, also in the context of electroweak higher-order corrections [53,54] and of off-shell and interference effects [33,34,55-59]. Here, we present an update of the exploration of the model parameter space presented in Ref. [38], taking the latest experimental constraints into account. As before, we consider masses of the second (non-standard) Higgs boson in the whole mass range up to 1 TeV. This minimal setup can be interpreted as a limiting case of more generic BSM scenarios, e.g. models with additional gauge sectors [60] or additional matter content [61,62]. Experimental searches for the model have been presented in [63-70]. As in Ref. [38] we take the following theoretical and experimental constraints into account: bounds from perturbative unitarity and electroweak (EW) precision measurements, in particular focusing on higher-order corrections to the W boson mass [32]; perturbativity, vacuum stability and correct minimization of the model up to a high energy scale using renormalization-group (RG) evolved couplings; exclusion limits from Higgs searches at the LEP, Tevatron and LHC experiments via the public tool HiggsBounds [71-75]; and compatibility of the model with the signal strength measurements of the discovered Higgs state using HiggsSignals [76] (cf. also Ref. [77]).
We separate the discussion of the parameter space into two different mass regions: (i) the high mass region, m H ∈ [130, 1000] GeV, where the lighter Higgs boson h is interpreted as the discovered Higgs state; (ii) the low mass region, m h ∈ [1, 120] GeV, where the heavier Higgs boson H is interpreted as the discovered Higgs state.
We find that for a second Higgs mass m_H ≲ 250 GeV the most severe constraints are mostly given by limits from collider searches for a SM Higgs boson as well as by the LHC Higgs boson signal strength measurements. For m_H ≳ 250 GeV, limits from higher-order contributions to the W boson mass prevail, followed by the requirement of perturbativity of the couplings.
For the remaining viable parameter space we present predictions for signal cross sections of the yet undiscovered second Higgs boson for the LHC at a CM energy of 14 TeV, discussing both the SM Higgs decay signatures and the novel Higgs-to-Higgs decay mode H → hh. For both the high mass and low mass region we present a variety of benchmark scenarios. These are designed to render a maximal direct production rate for the collider signature of interest. Whenever kinematically accessible we give two different benchmark points for each mass, for which the Higgs-to-Higgs decay H → hh is maximal or minimal, respectively.
The paper is organized as follows: In Section II we briefly review the model and the chosen parametrization. In Section III we review the constraints that are taken into account and in particular discuss the impact of the new constraints on the parameter space. In Section IV we provide benchmark points and planes discussed above. We summarize and conclude in Section V.
II. THE MODEL
In the following we briefly review the main features of the real Higgs singlet extension of the SM that are important for the benchmark choices. More details about the model can e.g. be found in Refs. [29,32,38,54] and references therein.
A. Potential and couplings
The real Higgs singlet extension of the SM [7,8,78] contains a complex SU(2)_L doublet, in the following denoted by Φ, and in addition a real scalar S which is a singlet under the SM gauge group. The most general renormalizable Lagrangian compatible with an additional Z_2 symmetry is then given by
L_s = (D^µ Φ)† D_µ Φ + ∂^µ S ∂_µ S − V(Φ, S), (2)
with the scalar potential
V(Φ, S) = −m^2 Φ†Φ − µ^2 S^2 + ( Φ†Φ, S^2 ) ( λ_1, λ_3/2 ; λ_3/2, λ_2 ) ( Φ†Φ, S^2 )^T
= −m^2 Φ†Φ − µ^2 S^2 + λ_1 (Φ†Φ)^2 + λ_2 S^4 + λ_3 Φ†Φ S^2. (3)
The implicitly imposed Z 2 symmetry forbids all linear or cubic terms of the singlet field S in the potential. We assume that both Higgs fields Φ and S have a non-zero vacuum expectation value (VEV), denoted by v and x, respectively. In the unitary gauge, the Higgs fields are given by
Φ ≡ ( 0, (h̃ + v)/√2 )^T, S ≡ (h′ + x)/√2. (4)
After diagonalization of the mass matrix we obtain the mass eigenstates h and H with mass eigenvalues given by
m_h^2 = λ_1 v^2 + λ_2 x^2 − √( (λ_1 v^2 − λ_2 x^2)^2 + (λ_3 x v)^2 ), (5)
m_H^2 = λ_1 v^2 + λ_2 x^2 + √( (λ_1 v^2 − λ_2 x^2)^2 + (λ_3 x v)^2 ), (6)
and m_h^2 ≤ m_H^2 by convention. The gauge eigenstates h̃, h′ of Eq. (4) and the mass eigenstates are related via the mixing matrix
( h, H )^T = ( cos α, −sin α ; sin α, cos α ) ( h̃, h′ )^T, (7)
where the mixing angle −π/2 ≤ α ≤ π/2 is given by
sin 2α = λ_3 x v / √( (λ_1 v^2 − λ_2 x^2)^2 + (λ_3 x v)^2 ), (8)
cos 2α = (λ_2 x^2 − λ_1 v^2) / √( (λ_1 v^2 − λ_2 x^2)^2 + (λ_3 x v)^2 ). (9)
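Eqs. (5)-(9) can be inverted to express the potential parameters through the physical inputs (m_h, m_H, α, v, tan β ≡ v/x), e.g. λ_1 = (m_h^2 cos^2 α + m_H^2 sin^2 α)/(2v^2) and analogously for λ_2 and λ_3. The sketch below (with illustrative, hypothetical input values) performs this inversion and verifies the round trip through Eqs. (5)-(6):

```python
import math

def couplings(mh, mH, alpha, v, tanb):
    # Invert Eqs. (5)-(9); x = v / tan(beta).
    x = v / tanb
    A = (mh ** 2 + mH ** 2) / 2.0          # half trace of the mass matrix
    B = (mH ** 2 - mh ** 2) / 2.0          # half mass-squared splitting
    lam1 = (A - B * math.cos(2 * alpha)) / (2 * v ** 2)
    lam2 = (A + B * math.cos(2 * alpha)) / (2 * x ** 2)
    lam3 = B * math.sin(2 * alpha) / (x * v)
    return lam1, lam2, lam3, x

def masses(lam1, lam2, lam3, v, x):
    # Mass eigenvalues of Eqs. (5)-(6).
    root = math.sqrt((lam1 * v ** 2 - lam2 * x ** 2) ** 2 + (lam3 * x * v) ** 2)
    return (math.sqrt(lam1 * v ** 2 + lam2 * x ** 2 - root),
            math.sqrt(lam1 * v ** 2 + lam2 * x ** 2 + root))
```

Feeding the recovered λ_i back into Eqs. (5)-(6) reproduces the input masses exactly, which also fixes the sign conventions of Eqs. (8)-(9).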
It follows from Eq. (7) that the light (heavy) Higgs boson couplings to SM particles are suppressed by cos α (sin α). If kinematically allowed, the additional decay channel H → hh is present. Its partial decay width at leading order (LO) is given by [7,78]
Γ_{H→hh} = ( |µ′|^2 / (8π m_H) ) √( 1 − 4 m_h^2 / m_H^2 ), (10)
where the coupling strength µ′ of the H → hh decay reads
µ′ = −( sin 2α / (2 v x) ) ( v sin α + x cos α ) ( m_h^2 + m_H^2 ) / 2. (11)
Next-to-leading order (NLO) corrections to the H → hh decay width in this model have been calculated recently in Ref. [54]. The branching ratios of the heavy Higgs mass eigenstate H are then given by
BR_{H→hh} = Γ_{H→hh} / Γ_tot, (12)
BR_{H→SM} = sin^2 α × Γ_{SM,H→SM} / Γ_tot, (13)
where Γ SM, H→SM is the partial decay width of the SM Higgs boson and H → SM represents any SM Higgs decay mode. The total width is then
Γ_tot = sin^2 α × Γ_{SM,tot} + Γ_{H→hh}, (14)
where Γ_{SM,tot} denotes the total width of a SM Higgs boson with mass m_H. The suppression by sin^2 α directly follows from the suppression of all SM-like couplings, cf. Eq. (7). For µ′ = 0, the decay H → hh vanishes and we recover the SM Higgs boson branching ratios.
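Eqs. (10)-(14) can be assembled into a short routine; the numerical inputs below (in particular the value used for Γ_SM,tot and the coupling µ′) are illustrative placeholders, not values taken from this paper:

```python
import math

def heavy_higgs_widths(mH, mh, mu_prime, sin_a, gamma_sm_tot):
    # Eq. (10): LO partial width of H -> hh (only above threshold mH > 2 mh).
    gamma_hh = 0.0
    if mH > 2 * mh:
        gamma_hh = (abs(mu_prime) ** 2 / (8 * math.pi * mH)
                    * math.sqrt(1 - 4 * mh ** 2 / mH ** 2))
    # Eq. (14): total width; Eqs. (12)-(13): branching ratios.
    gamma_tot = sin_a ** 2 * gamma_sm_tot + gamma_hh
    br_hh = gamma_hh / gamma_tot
    br_sm = sin_a ** 2 * gamma_sm_tot / gamma_tot
    return gamma_tot, br_hh, br_sm
```

By construction BR_{H→hh} + BR_{H→SM} = 1, and setting µ′ = 0 recovers the SM branching pattern of Eq. (13), with the total width rescaled by sin^2 α.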
For the collider phenomenology of the model two features are important:
• the suppression of the production cross section of the two Higgs states induced by the mixing, which is given by sin 2 α (cos 2 α) for the heavy (light) Higgs, respectively;
• the suppression of the Higgs decay modes to SM particles, which is realized if the competing decay mode H → hh is kinematically accessible.
For the high mass (low mass) scenario, i.e. the case where the light (heavy) Higgs boson is identified with the discovered Higgs state at ∼ 125 GeV, | sin α| = 0 (1) corresponds to the complete decoupling of the second Higgs boson and therefore the SM-like scenario.
B. Model parameters
At the Lagrangian level, the model has five free parameters,
λ 1 , λ 2 , λ 3 , v, x,(15)
while the values of the additional parameters µ^2, m^2 are fixed by the minimization conditions. A more intuitive basis, where the free model parameters are represented by physical (i.e. observable) quantities, is given by
m_h, m_H, sin α, v, tan β ≡ v/x. (16)
The vacuum expectation value of the Higgs doublet Φ is given by the SM value v ∼ 246 GeV, and one of the Higgs masses is fixed to m h/H = 125.09 GeV, eliminating two of the five parameters. We are thus left with only three independent parameters,
m ≡ m_{H/h}, sin α, tan β, (17)
where the latter enters the collider phenomenology only through the heavy Higgs decay mode into the lighter Higgs, H → hh. Note that from a collider perspective, for cases where the decay mode H → hh is kinematically allowed, the input parameter tan β could be replaced by either the total width of the heavier state, Γ(H), the branching ratio BR (H → hh), or the partial decay width of this channel, Γ(H → hh), respectively, rendering the following viable parameter choices besides Eq. (17):
m ≡ m_{H/h}, sin α, Γ(H), (18)
m ≡ m_{H/h}, sin α, BR(H → hh), (19)
m ≡ m_{H/h}, sin α, Γ(H → hh). (20)
If the model implementation starts at the Lagrangian level (via e.g. FeynRules [79], SARAH [80,81] or similar tools), the Lagrangian parameters as such can also be used as input values, but care must then be taken to correctly translate these into the phenomenologically viable parameter regions.
III. CONSTRAINTS
In this section we list all theoretical and experimental constraints that we take into account, and give an overview over the impact of these constraints on the parameter space. We refer the reader to Ref. [38] for details on the implementation of these constraints. With respect to Ref. [38] we update the experimental limits from LHC Higgs searches, leading to a change in the allowed parameter space especially in the lower mass range, m H ∈ [130, 250] GeV. We also include constraints from the combined ATLAS and CMS Higgs signal strength [82], rendering a significantly stronger limit on the mixing angle. However, this limit is still not as strong as the constraint from the W boson mass measurement in most of the parameter space.
A. Theoretical Constraints
We consider the following theoretical constraints in the selection of the benchmark scenarios:
• vacuum stability and minimization of model up to a scale µ run = 4 × 10 10 GeV,
• perturbative unitarity of the 2 → 2 S-matrix for (W + W − , ZZ, hh, hH, HH) initial and final states,
• perturbativity of the couplings in the potential, |λ i | ≤ 4 π, up to a high energy scale, µ run = 4 × 10 10 GeV, employing one-loop renormalization group equations (RGEs) [83].
B. Experimental Constraints
The following experimental constraints are taken into account at the 95% C.L.:
• agreement with electroweak precision observables, employing the oblique parameters S, T, U [84][85][86][87] and using the results from the global fit from the GFitter Group [88],
• agreement with the observed W boson mass [89][90][91], M W = 80.385 ± 0.015 GeV, employing the NLO calculation presented in Ref. [32],
• agreement with limits from direct Higgs searches at LEP, Tevatron, and the LHC using HiggsBounds (version 4.3.1) [71-75]. With respect to the results presented in Ref. [38], limits from the following searches have been included here:
- ATLAS search for H → WW [92],
- ATLAS search for H → ZZ [70],
- combination of ATLAS searches for H → hh → bbττ, γγWW*, γγbb, bbbb [67],
- CMS search for H → VV (V = W±, Z) [66],
- CMS search for H → hh → 4τ,
• agreement with the observed signal strengths of the 125 GeV Higgs boson, using HiggsSignals (version 1.4.0) [76] and the results from the ATLAS and CMS combination of the LHC Run 1 data, µ = 1.09 ± 0.11 [82], leading to
|sin α| ≤ 0.36 (21)
for the heavy Higgs mass range m_H ≳ 150 GeV (high mass range, m_h ∼ 125 GeV), and
|sin α| ≥ 0.87 (22)
for the light Higgs mass range m_h ≲ 100 GeV (low mass range, m_H ∼ 125 GeV). In these mass regions potential signal overlap with the SM-like Higgs at 125 GeV can be neglected. For Higgs masses in the range [100, 150] GeV we employ HiggsSignals using observables from the individual Higgs channels, which enables us to approximately take into account a potential signal overlap [76]; see also Ref. [38] for details.
C. Allowed Parameter Regions and Sensitivity of the Constraints
High mass region
The importance of the different constraints on the mixing angle sin α in the high mass region, where m_h ∼ 125 GeV, is summarized in Figure 1. Recall that this angle determines the global suppression of the production cross section with respect to the SM prediction at the same Higgs mass. We see that in the lower mass region, m_H ≲ 250 GeV, the most important constraints stem from direct Higgs searches [66,70,94-96] and the combined Higgs signal strength [82], whereas for higher masses, m_H ∈ [250 GeV, 800 GeV], the W boson mass becomes the strongest constraint [32]. Requiring perturbativity of the couplings yields the upper limit on |sin α| for very heavy Higgs bosons, m_H ≥ 800 GeV.
The updated combined signal strength reduces the maximally allowed mixing angle from previously |sin α| ≲ 0.50 [38] to |sin α| ≤ 0.36, cf. Eq. (21) and Fig. 2. We see that the updated constraints yield stronger limits in particular for m_H ≤ 250 GeV as well as for m_H ≳ 400 GeV. We supplement this comparison with a detailed list in Tab. I of the LHC Higgs search channels that have been applied by HiggsBounds in the various mass regions. The relatively strong constraints on the mixing angle lead to a significant suppression of the direct production rates of the heavy Higgs boson at LHC Run 2. Fig. 3 shows the predicted production cross section at 14 TeV after all constraints have been taken into account. The production cross sections rapidly decrease with higher masses m_H due to both the stronger constraints on the mixing angle (cf. Fig. 1) and the reduction of the available phase space at higher masses. The cross section for direct production in gluon fusion and successive decay into SM final states ranges from about 10 pb at lower masses to about 10 fb for masses around 800 GeV. Note that in order to obtain the predictions for a particular SM decay mode, H → XX, these numbers need to be multiplied by the branching ratio of the corresponding decay of a SM Higgs boson of the same mass. These plots were obtained using a simple rescaling of the production cross section of a SM Higgs boson of the same mass as given in Ref. [23], i.e. contributions due to interference with the additional scalar are not included. Tools which can handle these effects have been presented e.g. in Refs. [55,56,58,59]. These studies, however, focus on effects on the line-shape of the heavy scalar boson after a possible discovery. Moreover, thus far, their calculations neglect additional higher-order corrections, whereas these have been calculated to great precision for the SM Higgs boson and are included in Fig. 3 [23].
For the future, it would be desirable to perform a dedicated study of interference effects including higher order corrections for the benchmark points presented in this work in order to estimate their effects (and the systematic uncertainty introduced here by neglecting them).
Low mass region
In the low mass region, where the heavier Higgs state takes the role of the discovered Higgs boson, m H ∼ 125 GeV, the parameter space is extremely constrained by the Higgs signal strength and exclusion limits from LEP Higgs searches [89]. The updated experimental results do not change the limits presented in Ref. [38]. We review these limits in Tab. II. Note that in the low mass region the couplings of the heavy Higgs boson at 125 GeV become SM-like for | sin α| = 1.
Tab. III gives the direct production cross section in gluon fusion of the yet undiscovered light Higgs state at an 8 and 14 TeV LHC, respectively. Again, the production cross section stems from a simple rescaling of the corresponding cross section for a SM Higgs boson of that mass [23,98]. In the second column we give the lower limit on |sin α| stemming from exclusion limits from LEP or LHC Higgs searches (evaluated with HiggsBounds). If the lower limit on |sin α| obtained from the Higgs signal rates (evaluated with HiggsSignals) is stricter, it is displayed in the third column. The fourth column displays the upper limit on tan β that stems from perturbative unitarity in the complete decoupling case (|sin α| = 1). In the fifth column we give the tan β value for which Γ_{H→hh} = 0 is obtained for the maximal mixing angle allowed by the Higgs exclusion limits (second column). At this tan β value, the |sin α| limit obtained from the Higgs signal rates (third column) is abrogated. The table is taken from Ref. [38].
Intermediate mass region
The intermediate mass region, where both Higgs bosons have masses between 120 GeV and 130 GeV, was originally discussed in Ref. [38]. In this mass region the observed Higgs signal at 125 GeV may be due to a signal overlap of both Higgs bosons, depending on the mass separation and the mass resolution of the experimental analysis. We show the allowed parameter space in the (m h , m H ) and (m h , sin α) plane from the updated fit in Fig. 4. The updated signal strength observables in HiggsSignals-1.4.0 yield only marginal improvements in the constrained parameter space, while the updated limits from direct Higgs searches are irrelevant in this mass region.
IV. BENCHMARK SCENARIOS FOR LHC RUN 2
The benchmark scenarios that are presented in this section are chosen such that they feature the maximally allowed production cross section at the LHC. We first present the benchmark scenarios for the high mass region, where the light Higgs plays the role of the discovered SM-like Higgs at 125 GeV, and then turn to the low mass range, where the heavy Higgs state is the SM-like Higgs boson. 3
A. High mass region
We distinguish between two different search channels:
• Higgs decays into SM particles: Maximizing the production cross section corresponds to maximizing the parameter [29]

κ ≡ (σ / σ_SM) × BR(H → SM) = sin⁴α × Γ_SM,tot / Γ_tot.
• Higgs-to-Higgs decays, H → hh: In general, following Eq. (13), the quantity

κ ≡ (σ / σ_SM) × BR(H → hh) = sin²α × Γ_H→hh / Γ_tot

is maximized to obtain the largest possible signal yield. Figure 5 shows the allowed range of these two quantities, after all constraints have been taken into account. For the Higgs decay channel into SM particles, we see that searches from CMS pose important constraints for m_H ≲ 400 GeV. For the Higgs-to-Higgs decay channel H → hh, on the other hand, both ATLAS [67] and CMS [100,101] searches are not yet sensitive enough to exclude points that are not already in conflict with other constraints.
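The interplay of the mixing angle and the partial widths in these two signal-strength parameters can be made concrete in a few lines of code. This is an illustrative sketch only: the function names are ours, the width inputs are placeholders, and the total width is decomposed as Γ_tot = sin²α Γ_SM,tot + Γ_H→hh, following the rescaling logic of Eq. (13).

```python
# Signal-strength parameters for the heavy Higgs H in the singlet model.
# All numerical inputs are illustrative, not taken from the scan.

def total_width(sin_alpha, gamma_sm_tot, gamma_hh):
    """Total width of H: SM partial widths rescale by sin^2(alpha)."""
    return sin_alpha**2 * gamma_sm_tot + gamma_hh

def kappa_sm(sin_alpha, gamma_sm_tot, gamma_hh):
    """kappa = (sigma/sigma_SM) * BR(H -> SM) = sin^4(a) * Gamma_SM,tot / Gamma_tot."""
    return sin_alpha**4 * gamma_sm_tot / total_width(sin_alpha, gamma_sm_tot, gamma_hh)

def kappa_hh(sin_alpha, gamma_sm_tot, gamma_hh):
    """kappa for the H -> hh channel = sin^2(a) * Gamma_H->hh / Gamma_tot."""
    return sin_alpha**2 * gamma_hh / total_width(sin_alpha, gamma_sm_tot, gamma_hh)
```

As a sanity check, for Γ_H→hh = 0 the SM-channel parameter reduces to sin²α, i.e. the pure production-rate rescaling, and for sin α = 1 the two parameters add up to the total branching fraction of unity.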
We quantify the benchmark scenarios for both signal channels in this regime by considering the maximally allowed mixing angle together with the maximal and minimal branching ratio for the decay H → hh, respectively. While these maximal and minimal points define benchmark points, all BR(H → hh) values in between are in principle allowed. Therefore, an interpolation between the minimal and maximal values defines a higher-dimensional benchmark scenario (benchmark slope or plane), where the additional third parameter (cf. Eqs. (17)-(20)) is floating.
We furthermore distinguish scenarios for which the H → hh on-shell decay mode is kinematically allowed or forbidden. As we neglect all other triple and quartic Higgs self-couplings apart from µ, and work in the on-shell approximation, tan β only influences the collider phenomenology for regions in parameter space where the decay H → hh is kinematically allowed, i.e. for heavy Higgs masses m_H ≥ 2m_h ≈ 250 GeV. For lower masses tan β is irrelevant for the phenomenology considered here. However, to be consistent, we recommend to still keep the values within the respective parameter regions allowed by perturbativity and perturbative unitarity. Benchmark scenarios for both cases are given in Tabs. IV and V, respectively. Parameter ranges which are not explicitly listed can to a first approximation be linearly interpolated.
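Since parameter ranges not explicitly listed can be linearly interpolated, a minimal helper suffices to evaluate intermediate benchmark points. The function name and the sample values (the |sin α|_max entries at m_H = 130 and 135 GeV from Tab. IV) are purely illustrative:

```python
def lin_interp(x, x0, y0, x1, y1):
    """First-order interpolation of a tabulated benchmark quantity y(x)
    between two listed points (x0, y0) and (x1, y1)."""
    return y0 + (x - x0) * (y1 - y0) / (x1 - x0)

# |sin alpha|_max between m_H = 130 GeV (0.42) and m_H = 135 GeV (0.38):
sin_alpha_max_at_132p5 = lin_interp(132.5, 130.0, 0.42, 135.0, 0.38)  # ~0.40
```

The same helper applies to the tan β limits or branching-ratio bounds in the other tables, always to first approximation only.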
In addition, we also list exemplary benchmark points for this mass region in Tables VI and VII, where we additionally give the predictions for other relevant decay modes. Whenever kinematically accessible, we provide two benchmark points for every heavy Higgs mass, representing the maximal and minimal branching ratio for the H → hh decay, respectively. The mixing angle is always chosen such that the production rate of the additional scalar is maximized. Reference production cross sections have been taken from the upcoming CERN Yellow Report 4 by the LHC Higgs Cross Section Working Group [104].
B. Low mass region
For the case that the heavier Higgs boson is taken to be the discovered SM-like Higgs boson with m H ∼ 125 GeV, | sin α| = 1 corresponds to the SM limit, and deviations from this value parametrize the new physics contributions. As in the high mass region, the following channels are interesting:
• Direct production of the lighter Higgs state h and successive decay into SM particles,
• Decay of the SM-like Higgs boson H into the lighter Higgs states, H → hh.
For the direct production of the light Higgs state, smaller | sin α| values are of interest, as the cross section scales with cos²α. We provide the minimally allowed values for | sin α| in Tab. II. Tab. III lists the respective direct production cross sections at 8 and 14 TeV. These values can directly be used as benchmark scenarios for collider searches for direct light Higgs production.
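The rescaling behind the numbers in Tab. III is simply σ(gg → h) = cos²α × σ_gg,SM(m_h). A hedged sketch (the σ_SM input is a placeholder; real values come from the standard SM cross-section references):

```python
# Direct production of the light state h: the SM gluon-fusion cross
# section at m_h, rescaled by cos^2(alpha) = 1 - sin^2(alpha).

def light_higgs_xs(sigma_sm_pb, sin_alpha):
    """sigma(gg -> h) = cos^2(alpha) * sigma_gg,SM(m_h), in pb."""
    return (1.0 - sin_alpha**2) * sigma_sm_pb

# Near the SM limit, |sin alpha| = 0.998, the direct rate is suppressed
# by cos^2(alpha) ~ 4e-3, i.e. more than two orders of magnitude.
suppression = 1.0 - 0.998**2
```

This makes explicit why the low mass benchmark points with | sin α| close to unity (Tab. II) have such small direct production rates for h.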
For the second channel, the decay of the SM-like Higgs into two lighter Higgs states, we list maximal branching ratios for the decay H → hh in Tab. VIII. As long as the decay H → hh is kinematically accessible, the maximal value of its branching ratio, BR(H → hh) ≈ 0.259, does not depend on the light Higgs mass. The lighter Higgs bosons then decay further according to the branching ratios of a SM Higgs of the respective mass. A first experimental search of this signature with the light Higgs boson decaying into τ lepton pairs in the mass range m_h ∈ [5, 15] GeV has already been performed by the CMS experiment [93].
We present benchmark points for fixed masses in Tab. IX. Here, | sin α| values closer to unity are needed in order to obtain maximal branching ratios for this channel, which in turn leads to a reduction of direct production for the lighter state by almost an order of magnitude with respect to the values presented in Tab. III. Again, we recommend to scan over tan β between the values of scenarios a and b (thus defining a higher-dimensional benchmark scenario) in order to obtain a range of possible branching ratios. In scenario b we have tan β = − cot α. The | sin α| values have been optimized for scenario a, which in turn leads to a suppression of direct production for the lighter state. For direct production of the lighter scalar, the parameters in Tabs. II and III should be used. For BHM50 - BHM10, the production cross section for the SM-like Higgs is σ(gg → H) = 49.66 pb.
V. CONCLUSIONS

In this paper we have revisited and updated the constraints on the parameter space of the real scalar singlet extension of the SM. In comparison with the previous results presented in Ref. [38], the most important improvements have been made in the constraints from new results in LHC searches for a heavy Higgs boson decaying into vector boson final states, as well as from the ATLAS and CMS combination of the signal strength of the discovered Higgs state. We found that these modify our previous findings in the mass range 130 GeV ≤ m_H ≤ 250 GeV, where now the direct Higgs searches as well as the ATLAS and CMS signal strength combination render the strongest constraints on the parameter space.
Based on these updated results, we have provided benchmark scenarios for both the high mass and low mass region for upcoming LHC searches. Hereby, we pursued the philosophy of selecting those points which feature a maximal discovery potential in a dedicated collider search of the corresponding signature. We provided predictions of production cross sections for the LHC at 14 TeV, and supplemented these with information about the branching fractions of the relevant decay modes. We encourage the experimental collaborations to make use of these benchmark scenarios in the current and upcoming LHC runs.
BR(H → XX)/BR(H → SM), where BR(H → SM) is the sum over all branching ratios of Higgs decays into SM particles according to Eq.(13). Taking into account the current design strategy for the LHC run (cf. e.g. Ref.[97]) and expecting an integrated luminosity of about 100 fb −1 and 300 fb −1 before the shutdowns in 2019 and 2023, respectively, this translates into the fact that at least O (10 3 ) heavy Higgs bosons could be produced in that mass range in optimistic scenarios. For the hh final state, on the other hand, cross sections are about an order of magnitude lower. A comparison of current exclusion limits from LHC H → hh searches with the predictions in the viable parameter space will be given in Section IV.
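The O(10³) event estimate quoted above follows from N = σ × L, with the unit conversion 1 pb = 10³ fb. A small sketch (the function name is ours; the 0.022 pb input corresponds to the σ(gg → H) value quoted for the BHM800 benchmark):

```python
# Rough yield of produced heavy Higgs bosons for a given integrated
# luminosity; inputs are illustrative values from the benchmark tables.

def expected_events(sigma_pb, lumi_fb):
    """N = sigma [pb] * 1000 [fb/pb] * L [fb^-1]."""
    return sigma_pb * 1000.0 * lumi_fb

# Even at m_H = 800 GeV, 100 fb^-1 yields a few thousand heavy Higgs
# bosons before branching ratios and efficiencies are applied.
n_bhm800 = expected_events(0.022, 100.0)
```

Multiplying by BR(H → hh) and the relevant final-state branching ratios then gives the signal yield in a specific search channel.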
FIG. 3: LHC signal rates of the heavy Higgs boson H decaying into SM particles (a) or into two light Higgs bosons, H → hh, (b), in dependence of the heavy Higgs mass, m_H, for a center-of-mass (CM) energy of 14 TeV. Shown are regions which are still allowed after all constraints are taken into account: red and yellow regions correspond to agreement with the Higgs signal strength measurements at the 1σ and 2σ level, respectively; blue points comply with direct experimental searches but do not agree with the Higgs signal strength within 2σ. Light gray points denote scan points that are excluded by either perturbative unitarity, perturbativity of the couplings, RGE running or the W boson mass, while dark gray points denote regions in parameter space that obey these constraints but are excluded by direct searches.
FIG. 4: Parameter space for the intermediate mass region after taking all constraints into account, shown in the (m_h, m_H) plane (a) and the (m_h, sin α) plane (b). The color coding follows Fig. 3.
FIG. 5: Collider signal rates of the heavy Higgs boson H decaying into SM particles (a) or into two light Higgs bosons, H → hh, (b), in dependence of the heavy Higgs mass, m_H. In (a) we display the observed and expected 95% C.L. limits from the CMS combination of SM Higgs searches [95] as well as from the H → VV (V = W, Z) search [66]. In (b) we display the current expected and observed 95% C.L. limits from the ATLAS H → hh search (combination of various final states) [67] and the CMS H → hh searches with γγbb [100] and bbbb [101] final states. The color coding is the same as in Fig. 3. The rates are normalized to the inclusive SM Higgs production cross section at the corresponding mass value [23,102,103].
Fixed parameters: M_h = 125.1 GeV or M_H = 125.1 GeV. Irrelevant parameters: tan β whenever the channel H → hh is kinematically not accessible. Additional comments: predictions at LO, factorized production and decay; a, b signify maximal and minimal BR(H → hh); for b, sin α < 0; any value of tan β between scenarios a and b is allowed.

Production cross sections at 14 TeV [pb] and branching fractions:

BHM300 a,b: M_H = 300 GeV, | sin α| = 0.31, tan β(a) = 0.79, tan β(b) = 0.79
  σ(gg → h) = 44.91, σ(gg → H) = 1.09
  BR(H → hh) = 0.41 (a), 0.17 (b); BR(H → WW) = 0.41 (a), 0.57 (b); BR(H → ZZ) = 0.18 (a), 0.25 (b)

BHM400 a,b: M_H = 400 GeV, | sin α| = 0.26, tan β(a) = 0.58, tan β(b) = 0.59
  σ(gg → h) = 46.32, σ(gg → H) = 0.76
  BR(H → hh) = 0.32 (a), 0.20 (b); BR(H → WW) = 0.40 (a), 0.47 (b); BR(H → ZZ) = 0.18 (a), 0.22 (b); BR(H → tt) = 0.10 (a), 0.12 (b)

BHM500 a,b: M_H = 500 GeV, | sin α| = 0.24, tan β(a) = 0.44, tan β(b) = 0.46
  σ(gg → h) = 46.82, σ(gg → H) = 0.31
  BR(H → hh) = 0.26 (a), 0.19 (b); BR(H → WW) = 0.41 (a), 0.44 (b); BR(H → ZZ) = 0.19 (a), 0.21 (b); BR(H → tt) = 0.14 (a), 0.16 (b)

TABLE VI: Benchmark scenarios for the high mass region for fixed masses and | sin α|, floating tan β (between scenarios a and b).
Reference production cross sections have been taken from the upcoming CERN Yellow Report 4 by the LHC Higgs Cross Section Working Group [104].

Production cross sections at 14 TeV [pb] and branching fractions (continued):

BHM600 a,b: M_H = 600 GeV, | sin α| = 0.22, tan β(a) = 0.37, tan β(b) = 0.38
  σ(gg → h) = 47.28, σ(gg → H) = 0.12
  BR(H → hh) = 0.25 (a), 0.19 (b); BR(H → WW) = 0.41 (a), 0.45 (b); BR(H → ZZ) = 0.21 (a), 0.22 (b); BR(H → tt) = 0.13 (a), 0.14 (b)

BHM700 a,b: M_H = 700 GeV, | sin α| = 0.21, tan β(a) = 0.31, tan β(b) = 0.32
  σ(gg → h) = 47.49, σ(gg → H) = 0.050
  BR(H → hh) = 0.24 (a), 0.19 (b); BR(H → WW) = 0.44 (a), 0.47 (b); BR(H → ZZ) = 0.22 (a), 0.23 (b); BR(H → tt) = 0.10 (a), 0.11 (b)

BHM800 a,b: M_H = 800 GeV, | sin α| = 0.2, tan β(a) = 0.25, tan β(b) = 0.27
  σ(gg → h) = 47.69, σ(gg → H) = 0.022
  BR(H → hh) = 0.23 (a), 0.19 (b); BR(H → WW) = 0.46 (a), 0.48 (b); BR(H → ZZ) = 0.23 (a), 0.24 (b); BR(H → tt) = 0.08 (a), 0.09 (b)

BHM200: M_H = 200 GeV, | sin α| = 0.29, tan β = 1.19
  σ(gg → h) = 45.50, σ(gg → H) = 1.74
  BR(H → SM) as for a SM Higgs boson with mass of 200 GeV
BHM60 a,b: M_h = 60 GeV, | sin α| = 0.9997, tan β(a) = 3.48, tan β(b) = 0.025
  σ(gg → h) = 0.10, σ(gg → H) = 49.65
  BR(H → hh) = 0.26 (a), 0 (b); BR(H → SM) rescaled by 0.74 (a), as in SM (b)

BHM50 a,b: M_h = 50 GeV, | sin α| = 0.9998, tan β(a) = 3.25, tan β(b) = 0.020
  σ(gg → h) = 0.098
  BR(H → hh) = 0.26 (a), 0 (b); BR(H → SM) rescaled by 0.74 (a), as in SM (b)

BHM40 a,b: M_h = 40 GeV, | sin α| = 0.9998, tan β(a) = 3.13, tan β(b) = 0.020
  σ(gg → h) = 0.16
  BR(H → hh) = 0.26 (a), 0 (b); BR(H → SM) rescaled by 0.74 (a), as in SM (b)

BHM30 a,b: M_h = 30 GeV, | sin α| = 0.9998, tan β(a) = 3.16, tan β(b) = 0.020
  σ(gg → h) = 0.31
  BR(H → hh) = 0.26 (a), 0 (b); BR(H → SM) rescaled by 0.74 (a), as in SM (b)

BHM20 a,b: M_h = 20 GeV, | sin α| = 0.9998, tan β(a) = 3.23, tan β(b) = 0.020
  σ(gg → h) = 0.90
  BR(H → hh) = 0.26 (a), 0 (b); BR(H → SM) rescaled by 0.74 (a), as in SM (b)

BHM10 a,b: M_h = 10 GeV, | sin α| = 0.9998, tan β(a) = 3.29, tan β(b) = 0.020
  σ(gg → h) = 2.98
  BR(H → hh) = 0.26 (a), 0 (b); BR(H → SM) rescaled by 0.74 (a), as in SM (b)
to | sin α| ≲ 0.36. The updated limits from LHC Higgs searches in channels with vector boson final states also generally lead to stronger constraints, except in the region m_H ∈ [260, 300] GeV, where a statistical upward fluctuation in the CMS H → ZZ → 4ℓ channel [66] leads to a slightly weaker limit than previously observed. A comparison of previously presented limits from LHC Higgs searches with the current status is displayed in Fig. 2.

FIG. 1: Maximal allowed values for | sin α| in the high mass region, m_H ∈ [130, 1000] GeV, from NLO calculations of the W boson mass (red, solid) [32], electroweak precision observables (EWPOs) tested via the oblique parameters S, T and U (orange, dashed), perturbativity of the RG-evolved coupling λ1 (blue, dotted), evaluated for an exemplary choice tan β = 0.1, perturbative unitarity (grey, dash-dotted), direct LHC Higgs searches (green, dashed), and the Higgs signal strength (magenta, dash-dotted).

FIG. 2: Comparison of the | sin α| limit obtained from the LHC Higgs searches with SM final states as presented in Ref. [38] (red) with the updated analysis (green).
TABLE I: List of LHC Higgs search channels that are applied by HiggsBounds in the high mass region, yielding the upper limit on | sin α| shown in Figs. 1 and 2.
m_h [GeV] | |sin α|_min,HB | |sin α|_min,HS | (tan β)_max | (tan β)_no H→hh
   120    |     0.410      |     0.918      |     8.4     |        -
   110    |     0.819      |     0.932      |     9.3     |        -
   100    |     0.852      |     0.891      |    10.1     |        -
    90    |     0.901      |       -        |    11.2     |        -
    80    |     0.974      |       -        |    12.6     |        -
    70    |     0.985      |       -        |    14.4     |        -
    60    |     0.978      |     0.996      |    16.8     |      0.21
    50    |     0.981      |     0.998      |    20.2     |      0.20
    40    |     0.984      |     0.998      |    25.2     |      0.18
    30    |     0.988      |     0.998      |    33.6     |      0.16
    20    |     0.993      |     0.998      |    50.4     |      0.12
    10    |     0.997      |     0.998      |   100.8     |      0.08

TABLE II: Limits on sin α and tan β in the low mass scenario for various light Higgs masses m_h and tan β = 1.
TABLE III: Maximally allowed cross section for light Higgs production in gluon fusion, σ_gg = cos²α_max × σ_gg,SM, at the LHC at CM energies of 8 and 14 TeV after all current constraints have been taken into account, corresponding to the mixing angles from Tab. II. This is an updated version of Tab. V in Ref. [38].
m_H [GeV] | |sin α|_max | tan β_max      m_H [GeV] | |sin α|_max | tan β_max
   130    |    0.42     |   1.79            195    |    0.28     |   1.22
   135    |    0.38     |   1.73            200    |    0.29     |   1.19
   140    |    0.36     |   1.69            210    |    0.28     |   1.14
   145    |    0.35     |   1.62            215    |    0.33     |   1.12
   150    |    0.34     |   1.57            220    |    0.34     |   1.10
   160    |    0.36     |   1.49            230    |    0.35     |   1.05
   180    |    0.30     |   1.32            235    |    0.34     |   1.03
   185    |    0.27     |   1.28            240    |    0.31     |   1.00
   190    |    0.29     |   1.26            245    |    0.28     |   0.98
TABLE IV: Benchmark points for mass ranges where the on-shell decay H → hh is kinematically forbidden. Maximal values of tan β were calculated at the maximal mixing angle, and should be applied for consistency reasons.

m_H [GeV] | |sin α|_max | BR(H→hh)_min | BR(H→hh)_max      m_H [GeV] | |sin α|_max | BR(H→hh)_min | BR(H→hh)_max
   255    |    0.31     |     0.09     |     0.27             430    |    0.25     |     0.19     |     0.30
   260    |    0.34     |     0.11     |     0.33             470    |    0.24     |     0.19     |     0.28
   265    |    0.33     |     0.13     |     0.36             520    |    0.23     |     0.19     |     0.26
   280    |    0.32     |     0.17     |     0.40             590    |    0.22     |     0.19     |     0.25
   290    |    0.31     |     0.18     |     0.40             665    |    0.21     |     0.19     |     0.24
   305    |    0.30     |     0.20     |     0.40             770    |    0.20     |     0.19     |     0.23
   325    |    0.29     |     0.21     |     0.40             875    |    0.19     |     0.19     |     0.22
   345    |    0.28     |     0.22     |     0.39             920    |    0.18     |     0.19     |     0.22
   365    |    0.27     |     0.21     |     0.36             975    |    0.17     |     0.19     |     0.21
   395    |    0.26     |     0.20     |     0.32            1000    |    0.17     |     0.19     |     0.21
TABLE V: Maximal and minimal allowed branching ratios of the decay H → hh, taken at the maximally allowed value of | sin α|. Note that minimal values for the BR(H → hh) stem from sin α ≤ 0.

Benchmark Scenarios for the Real Singlet. Main features: real singlet extension, with two vevs and no hidden sector interaction, with heavy Higgs H and light Higgs h.
TABLE VII: Benchmark scenarios for the high mass region for fixed masses and | sin α|, floating tan β (between scenarios a and b).
TABLE VIII: Maximal branching ratios for H → hh. This BR can always be zero for the choice tan β = − cot α.
TABLE IX: Low mass benchmark scenarios for the Higgs-to-Higgs decay signature for fixed masses and | sin α|, floating tan β (between scenarios a and b).
Note that even if the Z 2 symmetry is not imposed, the parameters of the model relevant for the collider phenomenology considered here can always be chosen in terms of the masses, a mixing angle, and an additional parameter determining the H → hh decay channel.
HiggsBounds selects the most sensitive channel by comparing the expected exclusion limits first. In a second step, the predicted signal strength is confronted with the observed exclusion limit only of this selected channel. This well defined statistical procedure allows to systematically test the model against a plethora of Higgs search limits without diluting the 95% C.L. of the individual limits.
See also Ref.[99] for recent benchmark point suggestions within the complex singlet model.
Electroweak corrections to the decay H → hh have been presented for some of these benchmark points in Ref.[54].
Acknowledgements

We thank S. Dawson
P. W. Higgs, Phys. Lett. 12, 132 (1964).
P. W. Higgs, Phys. Rev. Lett. 13, 508 (1964).
F. Englert and R. Brout, Phys. Rev. Lett. 13, 321 (1964).
G. Guralnik, C. Hagen, and T. Kibble, Phys. Rev. Lett. 13, 585 (1964).
T. Kibble, Phys. Rev. 155, 1554 (1967).
G. Aad et al. (ATLAS, CMS), Phys. Rev. Lett. 114, 191803 (2015), 1503.07589.
R. Schabinger and J. D. Wells, Phys. Rev. D72, 093007 (2005), hep-ph/0509209.
B. Patt and F. Wilczek (2006), hep-ph/0605188.
V. Barger, P. Langacker, M. McCaskey, M. J. Ramsey-Musolf, and G. Shaughnessy, Phys. Rev. D77, 035005 (2008), 0706.4311.
G. Bhattacharyya, G. C. Branco, and S. Nandi, Phys. Rev. D77, 117701 (2008), 0712.2693.
S. Dawson and W. Yan, Phys. Rev. D79, 095002 (2009), 0904.2005.
S. Bock, R. Lafaye, T. Plehn, M. Rauch, D. Zerwas, et al., Phys. Lett. B694, 44 (2010), 1007.2645.
P. J. Fox, D. Tucker-Smith, and N. Weiner, JHEP 1106, 127 (2011), 1104.5450.
C. Englert, T. Plehn, D. Zerwas, and P. M. Zerwas, Phys. Lett. B703, 298 (2011), 1106.3097.
C. Englert, J. Jaeckel, E. Re, and M. Spannowsky, Phys. Rev. D85, 035008 (2012), 1111.1719.
B. Batell, S. Gori, and L.-T. Wang, JHEP 1206, 172 (2012), 1112.5180.
C. Englert, T. Plehn, M. Rauch, D. Zerwas, and P. M. Zerwas, Phys. Lett. B707, 512 (2012), 1112.3007.
R. S. Gupta and J. D. Wells, Phys. Lett. B710, 154 (2012), 1110.0824.
M. J. Dolan, C. Englert, and M. Spannowsky, Phys. Rev. D87, 055002 (2013), 1210.8166.
D. Bertolini and M. McCullough, JHEP 1212, 118 (2012), 1207.4209.
B. Batell, D. McKeen, and M. Pospelov, JHEP 1210, 104 (2012), 1207.6252.
D. Lopez-Val, T. Plehn, and M. Rauch, JHEP 1310, 134 (2013), 1308.1979.
S. Heinemeyer et al. (The LHC Higgs Cross Section Working Group) (2013), 1307.1347.
R. S. Chivukula, A. Farzinnia, J. Ren, and E. H. Simmons, Phys. Rev. D88, 075020 (2013), 1307.1064.
C. Englert and M. McCullough, JHEP 1307, 168 (2013), 1303.1526.
B. Cooper, N. Konstantinidis, L. Lambourne, and D. Wardrope, Phys. Rev. D88, 114005 (2013), 1307.0407.
C. Caillol, B. Clerbaux, J.-M. Frere, and S. Mollet, Eur. Phys. J. Plus 129, 93 (2014), 1304.0386.
R. Coimbra, M. O. Sampaio, and R. Santos, Eur. Phys. J. C73, 2428 (2013), 1301.2599.
G. M. Pruna and T. Robens, Phys. Rev. D88, 115012 (2013), 1303.1150.
S. Dawson, A. Gritsan, H. Logan, J. Qian, C. Tully, et al. (2013), 1310.8361.
L. Basso, O. Fischer, and J. J. van der Bij, Phys. Lett. B730, 326 (2014), 1309.6086.
D. Lopez-Val and T. Robens, Phys. Rev. D90, 114018 (2014), 1406.1043.
C. Englert and M. Spannowsky, Phys. Rev. D90, 053003 (2014), 1405.0285.
C. Englert, Y. Soreq, and M. Spannowsky, JHEP 05, 145 (2015), 1410.5440.
C.-Y. Chen, S. Dawson, and I. M. Lewis, Phys. Rev. D91, 035015 (2015), 1410.5488.
D. Karabacak, S. Nandi, and S. K. Rai, Phys. Lett. B737, 341 (2014), 1405.0476.
S. Profumo, M. J. Ramsey-Musolf, C. L. Wainwright, and P. Winslow, Phys. Rev. D91, 035018 (2015), 1407.5342.
T. Robens and T. Stefaniak, Eur. Phys. J. C75, 104 (2015), 1501.02234.
V. Martín Lozano, J. M. Moreno, and C. B. Park, JHEP 08, 004 (2015), 1501.03799.
A. Falkowski, C. Gross, and O. Lebedev, JHEP 05, 057 (2015), 1502.01361.
G. Ballesteros and C. Tamarit, JHEP 09, 210 (2015), 1505.07476.
D. Buttazzo, F. Sala, and A. Tesi, JHEP 11, 158 (2015), 1505.05488.
S. Banerjee, M. Mitra, and M. Spannowsky, Phys. Rev. D92, 055013 (2015), 1506.06415.
T. Corbett, O. J. P. Eboli, and M. C. Gonzalez-Garcia, Phys. Rev. D93, 015005 (2016), 1509.01585.
A. Tofighi, O. N. Ghodsi, and M. Saeedhoseini, Phys. Lett. B748, 208 (2015), 1510.00791.
C.-Y. Chen, Q.-S. Yan, X. Zhao, Y.-M. Zhong, and Z. Zhao, Phys. Rev. D93, 013007 (2016), 1510.04013.
S. I. Godunov, A. N. Rozanov, M. I. Vysotsky, and E. V. Zhemchugov, Eur. Phys. J. C76, 1 (2016), 1503.01618.
M. Duch, B. Grzadkowski, and M. McGarrie, JHEP 09, 162 (2015), 1506.08805.
Z.-W. Wang, T. G. Steele, T. Hanif, and R. B. Mann (2015), 1510.04321.
N. Bernal and X. Chu, JCAP 1601, 006 (2016), 1510.08527.
S. Ghosh, A. Kundu, and S. Ray (2015), 1512.05786.
M. J. Dolan, J. L. Hewett, M. Krämer, and T. G. Rizzo (2016), 1601.07208.
S. Kanemura, M. Kikuchi, and K. Yagyu, Nucl. Phys. B907, 286 (2016), 1511.06211.
F. Bojarski, G. Chalons, D. Lopez-Val, and T. Robens, JHEP 02, 147 (2016), 1511.08120.
E. Maina, JHEP 06, 004 (2015), 1501.02139.
N. Kauer and C. O'Brien, Eur. Phys. J. C75, 374 (2015), 1502.04113.
C. Englert, I. Low, and M. Spannowsky, Phys. Rev. D91, 074029 (2015), 1502.04678.
A. Ballestrero and E. Maina, JHEP 01, 045 (2016), 1506.02257.
S. Dawson and I. M. Lewis, Phys. Rev. D92, 094023 (2015), 1508.05397.
L. Basso, S. Moretti, and G. M. Pruna, Phys. Rev. D82, 055018 (2010), 1004.3039.
M. J. Strassler and K. M. Zurek, Phys. Lett. B651, 374 (2007), hep-ph/0604261.
M. J. Strassler and K. M. Zurek, Phys. Lett. B661, 263 (2008), hep-ph/0605193.
G. Aad et al. (ATLAS Collaboration), Phys. Rev. Lett. 113, 171801 (2014), 1407.6583.
V. Khachatryan et al. (CMS), JHEP 10, 144 (2015), 1504.00936.
G. Aad et al. (ATLAS), Phys. Rev. D92, 092004 (2015), 1509.04670.
G. Aad et al. (ATLAS), JHEP 11, 206 (2015), 1509.00672.
G. Aad et al. (ATLAS), Eur. Phys. J. C76, 45 (2016), 1507.05930.
P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein, and K. E. Williams, Comput. Phys. Commun. 181, 138 (2010), 0811.4169.
P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein, and K. E. Williams, Comput. Phys. Commun. 182, 2605 (2011), 1102.1898.
P. Bechtle, O. Brein, S. Heinemeyer, O. Stål, T. Stefaniak, et al., PoS CHARGED2012, 024 (2012), 1301.2345.
P. Bechtle, O. Brein, S. Heinemeyer, O. Stål, T. Stefaniak, et al., Eur. Phys. J. C74, 2693 (2013), 1311.0055.
P. Bechtle, S. Heinemeyer, O. Stål, T. Stefaniak, and G. Weiglein, Eur. Phys. J. C75, 421 (2015), 1507.06706.
P. Bechtle, S. Heinemeyer, O. Stål, T. Stefaniak, and G. Weiglein, Eur. Phys. J. C74, 2711 (2014), 1305.1933.
P. Bechtle, S. Heinemeyer, O. Stål, T. Stefaniak, and G. Weiglein, JHEP 1411, 039 (2014), 1403.1582.
M. Bowen, Y. Cui, and J. D. Wells, JHEP 0703, 036 (2007), hep-ph/0701035.
N. D. Christensen and C. Duhr, Comput. Phys. Commun. 180, 1614 (2009), 0806.4194.
F. Staub (2008), 0806.0538.
F. Staub, Comput. Phys. Commun. 185, 1773 (2014), 1309.7223.
R. N. Lerner and J. McDonald, Phys. Rev. D80, 123507 (2009), 0909.0520.
G. Altarelli and R. Barbieri, Phys. Lett. B253, 161 (1991).
M. E. Peskin and T. Takeuchi, Phys. Rev. Lett. 65, 964 (1990).
M. E. Peskin and T. Takeuchi, Phys. Rev. D46, 381 (1992).
I. Maksymyk, C. Burgess, and D. London, Phys. Rev. D50, 529 (1994), hep-ph/9306267.
M. Baak et al. (Gfitter Group), Eur. Phys. J. C74, 3046 (2014), 1407.3792.
J. Alcaraz et al. (ALEPH, DELPHI, L3, OPAL Collaborations, LEP Electroweak Working Group) (2006), hep-ex/0612034.
T. Aaltonen et al. (CDF Collaboration), Phys. Rev. Lett. 108, 151803 (2012), 1203.0275.
V. M. Abazov et al. (D0 Collaboration), Phys. Rev. D89, 012005 (2014), 1310.8628.
G. Aad et al. (ATLAS), JHEP 01, 032 (2016), 1509.00389.
S. Chatrchyan et al. (CMS Collaboration), Phys. Rev. D89, 092007 (2014), 1312.5353.
P. Lebrun, "Accelerators at the high-energy frontier: CERN plans, projects and future studies", talk given at the XLIII International Meeting on Fundamental Physics, Centro de Ciencias de Benasque Pedro Pascual, 12-21 March 2015.
M. Grazzini, private communication.
R. Costa, M. Muehlleitner, M. O. P. Sampaio, and R. Santos (2015), 1512.05355.
V. Khachatryan et al. (CMS) (2016), 1603.06896.
V. Khachatryan et al. (CMS), Phys. Lett. B749, 560 (2015), 1503.04114.
S. Dittmaier et al. (LHC Higgs Cross Section Working Group) (2011), 1101.0593.
S. Dittmaier, C. Mariotti, G. Passarino, R. Tanaka, et al. (2012), 1201.3084.
The LHC Higgs Cross Section Working Group (2016), to appear.
Bad Habits: Policy Confounding and Out-of-Trajectory Generalization in RL
Miguel Suau
Delft University of Technology
Matthijs T J Spaan
Delft University of Technology
Frans A Oliehoek [email protected]
Delft University of Technology
Bad Habits: Policy Confounding and Out-of-Trajectory Generalization in RL
Reinforcement learning agents may sometimes develop habits that are effective only when specific policies are followed. After an initial exploration phase in which agents try out different actions, they eventually converge toward a particular policy. When this occurs, the distribution of state-action trajectories becomes narrower, and agents start experiencing the same transitions again and again. At this point, spurious correlations may arise. Agents may then pick up on these correlations and learn state representations that do not generalize beyond the agent's trajectory distribution. In this paper, we provide a mathematical characterization of this phenomenon, which we refer to as policy confounding, and show, through a series of examples, when and how it occurs in practice.

Preprint. Under review.
Introduction
This morning, I went to the kitchen for a coffee. When I arrived, I forgot why I was there, so I got myself a coffee-How often do you do something without paying close attention to your actions? Have you ever caught yourself thinking about something else while washing the dishes, making coffee, or cycling? Acting out of habit is a vital human skill as it allows us to concentrate on more important matters while carrying out routine tasks. You can commute to work while thinking about how to persuade your boss to give you a salary raise or prepare dinner while imagining your next holidays in the Alps. However, unlike in the above example, habits can also lead to undesired outcomes when we fail to recognize that the context has changed. You may hop in your car and start driving towards work even though it is a Sunday and you actually want to go to the grocery store, or you may flip the light switch when leaving a room even though the lights are already off.
Here we show how reinforcement learning (RL) agents may also suffer from this phenomenon. Agents can exploit spurious correlations (Pearl et al., 2016) between observed variables and rewards to build simple habits that require little effort to carry out. Such correlations are induced by the agent's policy and hence can be relied upon so long as said policy is followed consistently. However, as we shall see, even minor trajectory deviations can result in catastrophic outcomes. Ideally, the agent should only pick up on correlations that are stable across policies. That is, independently of the trajectories being followed. We refer to this objective as out-of-trajectory (OOT) generalization.
Contributions This paper characterizes policy confounding, a term we use to name the above-described phenomenon. To do so, we introduce a mathematical framework that helps us investigate different types of state representations. Moreover, we provide a series of clarifying examples that illustrate how, as a result of policy confounding, the agent may learn representations based on spurious correlations that do not guarantee OOT generalization. Unfortunately, we do not have a complete answer for how to prevent policy confounding. However, we suggest a few off-the-shelf solutions that may help mitigate its effects. We hope this paper will create awareness among the RL community about the risks of policy confounding and inspire further research on this topic.
Example: Frozen T-Maze
We now provide an example to illustrate the phenomenon of policy confounding and motivate the need for careful analysis. The environment shown in Figure 1 is a variant of the popular T-Maze environment (Bakker, 2001). The agent receives a binary signal, green or purple, at the start location. Then, it needs to move to the right and reach the correct goal at the end of the maze (ignore the blue cells and the black vertical arrow in the middle of the maze for now). The agent obtains a reward of +1 for moving to the green (purple) goal when having received the green (purple) signal and a reward of −1 otherwise. At first sight, one may think that the only way the agent can solve the task is if, at every cell along its trajectory, it can recall the initial signal. However, once the agent figures out the shortest path to each of the two goals (depicted by the green and purple arrows), the agent may safely forget the initial signal. The agent knows that whenever it is at any of the cells along the green (purple) path, it must have received the green (purple) signal. Hence, it can simply move toward the right goal on the basis of its own location. Sticking to this habit is optimal so long as the agent commits to always taking these two paths. It is also essential that the environment's dynamics remain the same since even the slightest change in the agent's trajectories may erase the spurious correlation induced by the agent's policy between the agent's location and the correct goal.
To show that this actually occurs in practice, we train agents in the original environment (train env) and evaluate them on a variant of the same (eval env), where some ice (blue) has appeared in the middle of the maze. The ice makes the agent slip from the upper cell to the bottom cell and vice versa. The plot on the right of Figure 1 shows the return averaged over 10 trials. The performance drop in the evaluation environment (blue curve) suggests that the agents' policies do not generalize. The ice confuses the agents, who, after being pushed away from their preferred trajectories, can no longer select the right goal. More details about this experiment are provided in Section 7.
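To make the setup concrete, the following is a minimal, self-contained sketch of the Frozen T-Maze dynamics described above. The per-step penalty of −0.1 and terminal rewards of ±1 follow the text; the class and function names, corridor length, and grid encoding are our own assumptions, not the authors' implementation.

```python
import random

class FrozenTMaze:
    """Minimal sketch of the Frozen T-Maze. The goal signal is observable
    only in the very first observation; the icy variant flips the agent's
    row (upper <-> lower path) midway through the corridor."""
    LENGTH = 7  # number of 'right' moves from start to the goal column (assumed)

    def __init__(self, icy=False, seed=None):
        self.icy = icy  # evaluation variant with ice in the middle
        self.rng = random.Random(seed)

    def reset(self):
        self.goal = self.rng.choice(["green", "purple"])
        self.col, self.row = 0, 0  # row 0 = upper path, row 1 = lower path
        return (self.col, self.row, self.goal)  # signal shown only here

    def step(self, action):  # actions: "up", "down", "right"
        if action == "up":
            self.row = 0
        elif action == "down":
            self.row = 1
        elif action == "right":
            self.col += 1
            if self.icy and self.col == self.LENGTH // 2:
                self.row = 1 - self.row  # slip on the ice
        done = self.col == self.LENGTH
        if not done:
            return (self.col, self.row, None), -0.1, False
        reached_green = self.row == 0
        reward = 1.0 if reached_green == (self.goal == "green") else -1.0
        return (self.col, self.row, None), reward, True

def run_habit_policy(env):
    """The 'habit' described above: pick a row from the initial signal,
    then always move right; returns the terminal reward."""
    _, _, signal = env.reset()
    env.step("up" if signal == "green" else "down")
    while True:
        _, reward, done = env.step("right")
        if done:
            return reward
```

In the plain environment this habit always reaches the correct goal; in the icy variant a single slip is enough to send it to the wrong one, mirroring the evaluation failure discussed next.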
Related Work
The presence of spurious correlations in the training data is a well-studied problem in machine learning. These correlations often provide convenient shortcuts that a model can exploit to make predictions (Beery et al., 2018). However, the performance of a model that relies on them may significantly deteriorate under different data distributions (Quionero-Candela et al., 2009; Arjovsky, 2021). Langosco et al. (2022) show that RL agents may use certain environment features as proxies for choosing their actions. These features, which show only in the training environments, happen to be spuriously correlated with the agent's objectives. In contrast, we demonstrate that, as a result of policy confounding, agents may directly take part in the formation of spurious correlations. A few prior works have already reported empirical evidence of particular forms of policy confounding, showing that in deterministic environments, agents can rely on information that correlates with the agent's progress in an episode to determine the optimal actions. This strategy is effective because under fixed policies, features such as timers (Song et al., 2020), agent's postures (Lan et al., 2023), or previous action sequences (Machado et al., 2018) can be directly mapped to the agent's state. These works provide various hypotheses to justify their experimental observations. Here, we contribute an overarching theory that explains the underlying causes and mechanisms behind these results, along with a series of examples illustrating other types of policy confounding. Please refer to Appendix C for more details on related work.
Preliminaries
Although, as we shall see in the experiments, policy confounding can occur even when states are fully observable, in order to understand the idea, it is useful to formulate the setting as partially observable (Kaelbling et al., 1996). Moreover, since we model values and policies using (parametric) functions rather than tables, we use state variables or state factors to represent the different states of the environment (Boutilier et al., 1999).

Definition 1 (FPOMDP). A factored partially observable Markov decision process (FPOMDP) is a tuple $\langle S, F, A, T, R, O, X, Y \rangle$, where $S$ is the set of states; $F = \{f^1, \dots, f^l\}$ is the set of state variables (or state factors), so that every state $s_t \in S = \times_{i=1}^{l} f^i$ is represented as a vector $s = \langle f^1, \dots, f^l \rangle$; $A$ is the set of actions $a_t$; $T(s_{t+1} \mid s_t, a_t)$ is the transition probability function; $R(s_t, a_t)$ is the reward function, which determines the immediate reward $r_t$; and $O(s_t)$ is the observation (or emission) function, which selects a subset of observed variables $X_t \subseteq F$ (which may be different depending on the state $s_t$) and discards the hidden variables $Y_t = F \setminus X_t$, such that the agent's observations $o_t \in \times_{i=1}^{m_t} X^i_t$ are represented as vectors $o_t = \langle x^1_t, \dots, x^{m_t}_t \rangle$ with $m_t \leq l$.
In this setting, the agent must keep track of past actions and observations to make the right action choices (Singh et al., 1994). The optimal policy is a mapping from the past action-observation history, $h_t = \langle o_1, a_1, \dots, a_{t-1}, o_t \rangle$, to a probability distribution $\Delta(A)$ over actions $A$, $\pi : H \to \Delta(A)$, where $H$ is the set of all possible histories of any length. We use the random variable $\tau = \langle o_1, a_1, \dots, a_{T-1}, o_T \rangle$ to denote the agent's trajectory in an episode, with $T$ being the episode's horizon. Knowing that the full history constitutes a Markov representation, we can reformulate the FPOMDP into a factored history MDP (FHMDP).

Definition 2 (FHMDP). A factored history Markov decision process (FHMDP) is a tuple $\langle H, \Theta, A, T^h, R^h \rangle$, where $H$ is the set of all possible histories of any length; $\Theta$ denotes the set of variables in the history, with $\Theta_t$ denoting the set of actions $A$ and observation variables $X$ in a history of length $t$, $\Theta_t = \{x^1_1, \dots, x^{m_1}_1, a_1, \dots, x^1_t, \dots, x^{m_t}_t, a_t\}$, such that we write their Cartesian product, $H_t = \{x^1_1 \times \dots \times x^{m_1}_1 \times a_1 \times \dots \times x^1_t \times \dots \times x^{m_t}_t \times a_t\}$, simply as $H_t = \times\Theta_t$;

$$T^h(h_{t+1} = \langle h_t, a_t, o_{t+1} \rangle \mid h_t, a_t) \triangleq \sum_{s_{t+1}, s_t \in S} O(s_{t+1})\, T(s_{t+1} \mid s_t, a_t) \Pr(s_t \mid h_t)$$

is the history transition function; and

$$R^h(h_t, a_t) \triangleq \sum_{s_t \in S} R(s_t, a_t) \Pr(s_t \mid h_t)$$

is the history reward function.
This formulation is convenient because it allows solving the POMDP using MDP methods. Yet, due to combinatorial explosion, learning a policy that conditions on the full history is generally infeasible. Fortunately, in many problems, not all the information is strictly relevant; the agent can usually find compact representations of the history, that are sufficient for solving the task (McCallum, 1995).
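The history functions $T^h$ and $R^h$ above can be computed by filtering a belief $\Pr(s_t \mid h_t)$ over the hidden state. The sketch below illustrates this on a toy two-state example of our own (the numbers and names are illustrative, not from the paper):

```python
from collections import defaultdict

def belief_update(belief, a, o, T, O):
    """Pr(s' | h, a, o) is proportional to O(o | s') * sum_s T(s' | s, a) * Pr(s | h)."""
    new = defaultdict(float)
    for s, p in belief.items():
        for s2, pt in T[(s, a)].items():
            new[s2] += p * pt * O[s2].get(o, 0.0)
    z = sum(new.values())  # normalize
    return {s: p / z for s, p in new.items()}

def history_reward(belief, a, R):
    """R^h(h, a) = sum_s R(s, a) * Pr(s | h)."""
    return sum(p * R[(s, a)] for s, p in belief.items())

# Toy example: two hidden states the (single) observation cannot tell apart.
T = {("s0", "a"): {"s0": 1.0}, ("s1", "a"): {"s1": 1.0}}  # states persist
O = {"s0": {"o": 1.0}, "s1": {"o": 1.0}}                  # uninformative obs
R = {("s0", "a"): 1.0, ("s1", "a"): 0.0}

belief = {"s0": 0.5, "s1": 0.5}  # uniform prior over the hidden state
belief = belief_update(belief, "a", "o", T, O)
```

With an uninformative observation the belief stays uniform, so the history reward is the average of the two state rewards, 0.5.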
History representations
Factored representations are useful because they readily define relationships between (states) histories. Histories can be compared to one another by looking at the individual values the different variables take. Removing some of the variables in Θ t has the effect of grouping together those histories that share the same values for the remaining ones. Thus, in contrast with most of the theoretical work in RL, which treats histories (states) as independent entities, we can define history (state) abstractions at the variable level instead of doing so at the history (state) level (Li et al., 2006).
Definition 3 (History representation). A history representation is a function $\Phi : H_t \to \bar{H}_t$, with $H_t = \times\Theta_t$, $\bar{H}_t = \times\bar{\Theta}_t$, and $\bar{\Theta}_t \subseteq \Theta_t$.

Intuitively, a history representation $\Phi(h_t)$ is a context-specific projection of a history $h_t \in H_t = \times\Theta_t$ onto a lower-dimensional space $\bar{H}_t = \times\bar{\Theta}_t$ defined by a subset of its variables, $\bar{\Theta}_t \subseteq \Theta_t$. We use $\{h_t\}^{\Phi} = \{h'_t \in H_t : \Phi(h'_t) = \Phi(h_t)\}$ to denote $h_t$'s equivalence class under $\Phi$.
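Since histories are just assignments to named variables, a representation $\Phi$ is literally a projection onto a subset of them. A small sketch of this idea (our own encoding, with histories as dictionaries and hypothetical variable names):

```python
def make_representation(kept_vars):
    """Build Phi: project a history (a dict of named variables) onto kept_vars."""
    def phi(history):
        return tuple(history[v] for v in kept_vars)
    return phi

def equivalence_class(h, histories, phi):
    """{h}^Phi: every history that phi maps to the same value as h."""
    return [h2 for h2 in histories if phi(h2) == phi(h)]

# Toy histories with two observed variables: the initial signal and a location.
histories = [
    {"signal": "green",  "loc": "top"},
    {"signal": "purple", "loc": "top"},
    {"signal": "green",  "loc": "bottom"},
]
phi_loc = make_representation(["loc"])  # keep only the location
```

Dropping `signal` merges the first two histories into a single equivalence class, while the full representation keeps them apart.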
Markov history representations
As noted in Section 4, the agent should strive for history representations with few variables. Yet, not all history representations will be sufficient to learn the optimal policy; some may exclude variables that contain useful information for the task at hand.
Definition 4 (Markov history representation). A history representation $\Phi(h_t)$ is said to be Markov if, for all $h_t, h_{t+1} \in H$ and $a_t \in A$,

$$R^h(h_t, a_t) = R^h(\Phi(h_t), a_t) \quad \text{and} \quad \sum_{h'_{t+1} \in \{h_{t+1}\}^{\Phi}} T^h(h'_{t+1} \mid h_t, a_t) = \Pr(\Phi(h_{t+1}) \mid \Phi(h_t), a_t),$$

where $R^h(\Phi(h_t), a_t) = \{R^h(h'_t, a_t)\}_{h'_t \in \{h_t\}^{\Phi}}$ is the reward at any $h'_t \in \{h_t\}^{\Phi}$.

The above definition is equivalent to the notion of bisimulation (Dean and Givan, 1997; Givan et al., 2003) or model-irrelevance state abstraction (Li et al., 2006). Representations satisfying these conditions are guaranteed to be equivalent to the original representation. That is, for any given policy and initial history, the expected return (i.e., cumulative reward; Sutton and Barto, 2018) is the same when conditioning on the full history or on the Markov history representation. Note that a history representation $\Phi$ such that $\Phi(h_t) = h_t$, for all $h_t \in H$, is, in itself, Markov.
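Definition 4 can be checked mechanically on small tabular problems: within each equivalence class under $\Phi$, every member must yield the same reward and the same class-to-class transition probabilities for every action. The brute-force checker below is our own sketch of such a test, not an algorithm from the paper:

```python
from collections import defaultdict

def is_markov(phi, histories, actions, T_h, R_h, tol=1e-9):
    """Check Definition 4 on a finite history set: members of each
    equivalence class under phi must agree on rewards and on class-level
    transition probabilities for every action."""
    classes = defaultdict(list)
    for h in histories:
        classes[phi(h)].append(h)
    for members in classes.values():
        for a in actions:
            if len({R_h[(h, a)] for h in members}) > 1:
                return False  # rewards disagree within the class
            dists = []
            for h in members:
                d = defaultdict(float)
                for h2, p in T_h[(h, a)].items():
                    d[phi(h2)] += p  # aggregate next histories by class
                dists.append(d)
            for d in dists[1:]:
                keys = set(dists[0]) | set(d)
                if any(abs(dists[0][k] - d[k]) > tol for k in keys):
                    return False  # transition probabilities disagree
    return True

# Toy check: two histories that differ only in a variable phi throws away.
h1, h2 = ("green", "top"), ("purple", "top")
T_h = {(h1, "a"): {h1: 1.0}, (h2, "a"): {h2: 1.0}}
R_h = {(h1, "a"): 1.0, (h2, "a"): -1.0}
```

Here the identity representation is Markov, but dropping the signal variable is not: the merged class mixes rewards +1 and −1.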
Definition 5 (Minimal history representation). A history representation $\Phi^* : H_t \to \bar{H}^*_t$ with $\bar{H}^*_t = \times\bar{\Theta}^*_t$ is said to be minimal if all other history representations $\Phi : H_t \to \bar{H}_t$ with $\bar{H}_t = \times\bar{\Theta}_t$ and $\bar{\Theta}_t \subset \bar{\Theta}^*_t$, for at least one $h_t \in H$, are not Markov.
In other words, $\Phi^*(h_t)$ is minimal when none of the remaining variables can be removed while the representation remains Markov. Hence, we say that a minimal history representation $\Phi^*(h_t)$ is a sufficient statistic of the full history.

Definition 6 (Superfluous variable). Let $\{\bar{\Theta}^*_t\}_{\cup\Phi^*}$ be the union of variables in all possible minimal history representations. A variable $\Theta^i_t \in \Theta_t$ is said to be superfluous if $\Theta^i_t \notin \{\bar{\Theta}^*_t\}_{\cup\Phi^*}$.
π-Markov history representations
Considering that the agent's policy will rarely visit all possible histories, the notion of Markov history representation seems excessively strict. We now define a relaxed version that guarantees the representation to be Markov when a specific policy $\pi$ is followed.

Definition 7 (π-Markov history representation). A history representation $\Phi^{\pi}(h_t)$ is said to be π-Markov if, for all $h_t, h_{t+1} \in H^{\pi}$ and $a_t \in \operatorname{supp}(\pi(\cdot \mid h_t))$,

$$R^h(h_t, a_t) = R^h(\Phi^{\pi}(h_t), a_t) \quad \text{and} \quad \sum_{h'_{t+1} \in \{h_{t+1}\}^{\Phi^{\pi}}} T^h(h'_{t+1} \mid h_t, a_t) = \Pr{}^{\pi}(\Phi^{\pi}(h_{t+1}) \mid \Phi^{\pi}(h_t), a_t),$$

where $H^{\pi} \subseteq H$ is the set of histories visited under $\pi$. That is, the conditions of Definition 4, restricted to the histories and actions that $\pi$ can produce.
Policy Confounding
We are now ready to describe how and when policy confounding occurs, as well as why we should care, and how we should go about preventing it. The proofs for all theoretical results are deferred to Appendix A.
Policy confounding arises naturally as the agent improves its policy. Normally, at the beginning of training, the agent takes exploratory actions to determine which ones yield high rewards. It is only after the agent has committed to a particular policy that we start seeing how some of the variables in its history become irrelevant for predicting future states and rewards. The agent may then choose to ignore these variables and exclude them from its representation if keeping them takes extra 'effort'.
The next result demonstrates that a π-Markov history representation $\Phi^{\pi}$ requires at most the same variables as a minimal history representation $\Phi^*$, and in some cases fewer, while still satisfying the Markov conditions for those histories visited under $\pi$, $h_t \in H^{\pi}$.

Proposition 1. Let $\boldsymbol{\Phi}^*$ be the set of all possible minimal history representations, where every $\Phi^* \in \boldsymbol{\Phi}^*$ is defined as $\Phi^* : H_t \to \bar{H}^*_t$ with $\bar{H}^*_t = \times\bar{\Theta}^*_t$. For all $\pi$ and all $\Phi^* \in \boldsymbol{\Phi}^*$, there exists a π-Markov history representation $\Phi^{\pi} : H^{\pi}_t \to \bar{H}^{\pi}_t$ with $\bar{H}^{\pi}_t = \times\bar{\Theta}^{\pi}_t$ such that, for all $h_t \in H^{\pi}$, $\bar{\Theta}^{\pi}_t \subseteq \bar{\Theta}^*_t$. Moreover, there exist cases for which $\bar{\Theta}^{\pi}_t$ is a proper subset, $\bar{\Theta}^{\pi}_t \neq \bar{\Theta}^*_t$.
Although the result above seems intuitive, its truth may appear incidental. While it is clear that $\Phi^{\pi}$ will never require more variables than the corresponding minimal history representation $\Phi^*$, whether or not $\Phi^{\pi}$ will require fewer seems just an arbitrary consequence of the policy being followed. Moreover, since the variables in $\bar{\Theta}^*_t$ are all strictly relevant for predicting transitions and rewards, one may think that a policy $\pi$ inducing representations such that $\bar{\Theta}^{\pi}_t \subset \bar{\Theta}^*_t$ can never be optimal. However, as shown by the following example, it turns out that the histories visited by a particular policy, especially if it is the optimal policy, tend to contain a lot of redundant information. This is particularly true in environments where future observations are heavily influenced by past actions and observations. In such cases, the current observation often reveals a lot about the agent's trajectory.

Example 1 (Frozen T-Maze). Let us consider the Frozen T-Maze again (Section 2). Figure 3 shows a dynamic Bayesian network (DBN; Murphy, 2002) describing the dynamics of the environment. Observation variables are denoted by $x$, while hidden variables are denoted by $y$. The nodes labeled $x^2$ represent the agent's location from $t = 0$ to $t = 8$. All intermediate nodes between $t = 0$ and $t = 7$ are omitted for simplicity. The nodes labeled $y$ indicate whether the goal is to go to the green or the purple cell (see Figure 1). Note that $y$ always takes the same value at all timesteps within an episode (either green or purple). The information in $y$ is hidden and only passed to the agent at the start location through the node $x^1_0$. On the one hand, if actions are not specified by any particular policy, but simply sampled at random (left diagram), to determine the reward $r_8$ at $t = 8$, one needs to know the signal $x^1_0$ received at $t = 0$ and the agent's current location $x^2_8$. These are highlighted by the green circles in the left DBN. This is because the actions $\langle a_0, \dots, a_7 \rangle$ appear as exogenous variables and can take any possible value. Hence, the reward could be either $-0.1$ (per-timestep penalty), $-1$ (wrong goal), or $+1$ (correct goal) depending on the actual values of $x^1_0$ and $x^2_8$. On the other hand, when actions are sampled from the optimal policy $\pi^*$ (right DBN), knowing $x^2_8$ (green circle) is sufficient to determine $r_8$. In this second case, $\pi^*$ makes the action $a_0$, and thus all future agent locations, dependent on the initial signal $x^1_0$. This occurs because, under the optimal policy (green and purple paths in Figure 1), the agent always takes the action 'move up' when receiving the green signal or 'move down' when receiving the purple signal, and then follows the shortest path towards each of the goals. As such, we have that, from $t = 1$ onward, $\Phi^{\pi^*}(h_t) = x^2_t$ is a π-Markov history representation, since it constitutes a sufficient statistic of the history $h_t$ under $\pi^*$. Finally, note that, for the same reason, from $t = 1$, actions may also condition only on $x^2_t$.

The phenomenon highlighted by the previous example is the result of a spurious correlation induced by the optimal policy between the agent's locations $\langle x^2_0, \dots, x^2_8 \rangle$ and the reward $r_8$. Generally speaking, this occurs because policies act as confounders, opening backdoor paths between future histories/rewards and the variables in the current history $h_t$ (Pearl, 2000). This is shown by the DBN depicted in Figure 9, where we see that the policy influences both the current history and future histories/rewards, hence potentially affecting the conditional relationships between some of their variables. For instance, in the above example, $R^{\pi^*}(x^2_8 = \text{'agent at green goal'}) = +1$ when following $\pi^*$, while for an arbitrary $\pi$, $R(x^2_8 = \text{'agent at green goal'}) = \pm 1$.
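The effect in Example 1 can be reproduced numerically. The simulation below is our own stripped-down toy rollout (only the signal and the final row matter for the terminal reward): under the optimal policy the final location fully determines the reward, but under a random policy it does not.

```python
import random

def rollout(policy, rng):
    """One episode reduced to its essentials: a signal is drawn, the policy
    picks a row (0 = upper/green path, 1 = lower/purple path), and the
    terminal reward checks the row against the signal."""
    signal = rng.choice(["green", "purple"])
    row = policy(signal, rng)
    reward = 1.0 if (row == 0) == (signal == "green") else -1.0
    return row, reward

def optimal(signal, rng):
    return 0 if signal == "green" else 1  # row encodes the signal

def randomly(signal, rng):
    return rng.choice([0, 1])  # row carries no information

rng = random.Random(0)
under_pi_star = [rollout(optimal, rng) for _ in range(1000)]
under_random = [rollout(randomly, rng) for _ in range(1000)]
```

Under $\pi^*$, conditioning on the row alone predicts the reward perfectly (the signal has become redundant); under the random policy the same row co-occurs with both rewards, so the row-only representation is not Markov.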
Definition 9 (Policy Confounding). A history representation $\Phi : H_t \to \bar{H}_t$ is said to be confounded by a policy $\pi$ if, for some $h_t, h_{t+1} \in H$ and $a_t \in A$,

$$R^{\pi}(\Phi(h_t), a_t) \neq R^{\pi}(\operatorname{do}(\Phi(h_t)), a_t) \quad \text{or} \quad \Pr{}^{\pi}(\Phi(h_{t+1}) \mid \Phi(h_t), a_t) \neq \Pr{}^{\pi}(\Phi(h_{t+1}) \mid \operatorname{do}(\Phi(h_t)), a_t).$$

The operator $\operatorname{do}(\cdot)$ is known as the do-operator, and it is used to represent physical interventions in a system (Pearl, 2000). These interventions are meant to distinguish cause-effect relations from mere statistical associations. In our case, $\operatorname{do}(\Phi(h_t))$ means setting the variables forming the history representation $\Phi(h_t)$ to a particular value and considering all possible histories in the equivalence class, $h'_t \in \{h_t\}^{\Phi}$; that is, independently of what policy is being followed. It is easy to show that the underlying reason why a π-Markov history representation may require fewer variables than the minimal history representation (as in Example 1) is indeed policy confounding.
Theorem 1. Let $\Phi^* : H_t \to \bar{H}^*_t$ with $\bar{H}^*_t = \times\bar{\Theta}^*_t$ be a minimal history representation. If, for some policy $\pi$, there exists a π-Markov history representation $\Phi^{\pi} : H^{\pi}_t \to \bar{H}^{\pi}_t$ with $\bar{\Theta}^{\pi}_t \subset \bar{\Theta}^*_t$ for some $h_t \in H^{\pi}$, then $\Phi^{\pi}$ is confounded by $\pi$.
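The gap that Definition 9 describes can be computed directly on a toy table (the numbers below are our own illustration): the conditional reward estimate under $\pi^*$ averages only over histories the policy visits, whereas the do-estimate averages uniformly over the whole equivalence class.

```python
# Histories as (signal, location); the reward depends on both variables.
R = {("green", "top"): 1.0, ("purple", "top"): -1.0,
     ("green", "bottom"): -1.0, ("purple", "bottom"): 1.0}

# Visitation under the optimal policy: the location encodes the signal.
pi_star = {("green", "top"): 0.5, ("purple", "bottom"): 0.5}

def conditional_reward(loc):
    """E_pi*[r | Phi(h) = loc]: average over histories pi* actually visits."""
    num = sum(p * R[h] for h, p in pi_star.items() if h[1] == loc)
    den = sum(p for h, p in pi_star.items() if h[1] == loc)
    return num / den

def do_reward(loc):
    """E[r | do(Phi(h) = loc)]: uniform average over the equivalence class."""
    cls = [h for h in R if h[1] == loc]
    return sum(R[h] for h in cls) / len(cls)
```

Here `conditional_reward("top")` is +1 while `do_reward("top")` is 0, so the location-only representation is confounded by $\pi^*$ in the sense of Definition 9.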
Leveraging spurious correlations to develop simple habits can be advantageous when resources such as memory, computing power, or data are limited. Agents can disregard and exclude from their representation those variables that are redundant under their policies. However, the challenge is that some of these variables may be crucial to ensure that the agent behaves correctly when the context changes. In the Frozen T-Maze example from Section 2, we observed how the agent could no longer find the correct goal when the ice pushed it away from the optimal trajectory. This is a specific case of a well-researched issue known as out-of-distribution (OOD) generalization (Quionero-Candela et al., 2009; Arjovsky, 2021). We refer to it as out-of-trajectory (OOT) generalization to highlight that the problem arises due to repeatedly sampling from the same policy and thus following the same trajectories. In contrast to previous works (Kirk et al., 2023) that address generalization to environments that differ from the training environment, our objective here is to generalize to trajectories the agent never (or only rarely) takes. Ideally, the agent should aim to learn representations that enable it to predict future rewards and transitions even when experiencing slight variations in its trajectory. Based on Definition 4, we know that, in general, only a Markov history representation satisfies these requirements. However, computing such representations is typically intractable (Ferns et al., 2006), and thus most standard RL methods usually learn representations by maximizing an objective function that depends on the distribution of trajectories $P_b(\tau)$ visited under a behavior policy $b$ (e.g., expected return, $\mathbb{E}_{\tau \sim P_b(\tau)}[G(\tau)]$; Sutton and Barto, 2018). The problem is that $b$ may favor certain trajectories over others, which may lead to the exploitation of spurious correlations in the learned representation.
When should we worry about OOT generalization in practice?
The previous section highlighted the generalization failures of representations that depend on spurious correlations. Now, let us delve into the circumstances in which policy confounding is most prone to cause problems.
Function approximation Function approximation has enabled traditional RL methods to scale to high-dimensional problems with long-term memory dependencies, where storing values in lookup tables is infeasible. Using parametric functions (e.g., neural networks) to model policies and value functions, agents can learn abstractions by grouping together histories if these yield the same transitions and rewards. As mentioned before, abstractions occur naturally when histories are represented by a set of variables since the functions simply need to ignore some of these variables. However, this also implies that value functions and policies are exposed to spurious correlations. If a particular variable becomes irrelevant due to policy confounding, the function may learn to ignore it and remove it from its representation (Example 1). This is in contrast to tabular representations, where every history takes a separate entry, and even though there exist algorithms that perform history (state) abstractions in tabular settings (Andre and Russell, 2002; Givan et al., 2003), these abstractions are normally formed offline before learning (computing) the policy, hence avoiding the risk of policy confounding.
Narrow trajectory distributions In practice, agents are less prone to policy confounding when the trajectory distribution $P_b(\tau)$ is broad (i.e., when $b$ encompasses a wide set of trajectories) than when it is narrow. This is because the spurious correlations present in certain trajectories are less likely to have an effect on the learned representations. On-policy methods (e.g., SARSA, Actor-Critic; Sutton and Barto, 2018) are particularly troublesome for this reason since the same policy being updated must also be used to collect the samples. Yet, even when the trajectory distribution is narrow, there is no reason why the agent should pick up on spurious correlations while its policy is still being updated. Only when the agent commits to a particular policy should we start worrying about policy confounding. At this point, lots of the same trajectories are being used for training, and the agent may 'forget' (French, 1999) that, even though certain variables may no longer be needed to represent the current policy, they were important under previous policies. This generally occurs at the end of training when the agent has converged to a particular policy. However, if policy confounding occurs earlier during training, it may prevent the agent from further improving its policy (Nikishin et al., 2022; please refer to Appendix C for more details).
What can we do to improve OOT generalization?
As mentioned in the introduction, we do not have a complete answer to the problem of policy confounding. Yet, here we offer a few off-the-shelf solutions that, while perhaps limited in scope, can help mitigate the problem in some situations. These solutions revolve around the idea of broadening the distribution of trajectories so as to dilute the spurious correlations introduced by certain policies.
Off-policy methods We already explained in Section 6.2 that on-policy methods are particularly prone to policy confounding since they are restricted to using samples coming from the same policy. A rather obvious solution is to instead use off-policy methods, which allow using data generated from previous policies. Because the samples come from a mixture of policies, it is less likely that the model will pick up the spurious correlations present in specific trajectories. However, as we shall see in the experiments, this alternative works only when replay buffers are large enough. This is because standard replay buffers are implemented as queues, and hence the first experiences coming in are the first being removed. This implies that a replay buffer that is too small will contain samples coming from few and very similar policies. Since there is a limit on how large replay buffers are allowed to be, future research could explore other, more sophisticated, ways of deciding what samples to store and which ones to remove (Schaul et al., 2016).
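The contrast between a queue-style buffer and a broader sample can be sketched as follows. The FIFO buffer mirrors the standard implementation described above; the reservoir-sampling variant is our own suggestion of a "more sophisticated" retention rule, not something the paper evaluates:

```python
import random
from collections import deque

class FIFOBuffer:
    """Standard replay buffer: a queue, so a small capacity ends up holding
    samples from only the few most recent (and very similar) policies."""
    def __init__(self, capacity):
        self.data = deque(maxlen=capacity)

    def add(self, transition):
        self.data.append(transition)

class ReservoirBuffer:
    """Reservoir sampling keeps a uniform sample over *all* transitions seen
    so far, so old policies stay represented even with a small capacity."""
    def __init__(self, capacity, seed=None):
        self.capacity, self.n = capacity, 0
        self.data = []
        self.rng = random.Random(seed)

    def add(self, transition):
        self.n += 1
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            j = self.rng.randrange(self.n)  # classic Algorithm R step
            if j < self.capacity:
                self.data[j] = transition
```

After streaming 1000 transitions through a capacity-100 buffer, the FIFO variant retains only the last 100, whereas the reservoir retains a spread over the whole stream.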
Exploration and domain randomization When allowed, exploration may mitigate the effects of policy confounding and prevent agents from overfitting their preferred trajectories. Exploration strategies have already been used for the purpose of generalization; to guarantee robustness to perturbations in the environment dynamics (Eysenbach and Levine, 2022), or to boost generalization to unseen environments (Jiang et al., 2022). The goal for us is to remove, to the extent possible, the spurious correlations introduced by the current policy. Unfortunately, though, exploration is not always without cost. Safety-critical applications require the agent to stay within certain boundaries (Altman, 1999; García and Fernández, 2015). When training on a simulator, an alternative to exploration is domain randomization (Tobin et al., 2017; Peng et al., 2018; Machado et al., 2018). The empirical results reported in the next section suggest that agents become less susceptible to policy confounding when adding enough stochasticity to the environment or to the policy. Yet, there is a limit on how much noise can be added to the environment or the policy without altering the optimal policy (Sutton and Barto, 2018, Example 6.6: Cliff Walking).
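Both mitigation strategies just discussed can be sketched in a few lines. The epsilon-greedy floor matches the experiments' fixed final exploration rate; the action-override wrapper is a hypothetical interface of our own (the paper's experiments apply the 20% random-action noise inside the environment):

```python
import random

def epsilon_greedy(q_values, epsilon, rng):
    """Action selection with a floor on exploration: keeping the final
    epsilon > 0 stops the trajectory distribution from fully collapsing."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

class RandomOverride:
    """Domain-randomization-style wrapper (hypothetical interface): with
    probability p, the agent's action is replaced by a random one before
    being passed to the wrapped environment's step function."""
    def __init__(self, step_fn, n_actions, p, seed=None):
        self.step_fn, self.n_actions, self.p = step_fn, n_actions, p
        self.rng = random.Random(seed)

    def step(self, action):
        if self.rng.random() < self.p:
            action = self.rng.randrange(self.n_actions)
        return self.step_fn(action)
```

With `p = 0.2`, roughly one in five actions is randomized, which is the level of environment noise used for PPO in Section 7.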
Experiments
The goal of the experiments is to: (1) demonstrate that the phenomenon of policy confounding described by the theory does occur in practice, (2) uncover the circumstances under which agents are most likely to suffer the effects of policy confounding and fail to generalize, and (3) evaluate how effective the strategies proposed in the previous section are in mitigating these effects.
Experimental setup
Agents are trained with an off-policy method, DQN (Mnih et al., 2015) and an on-policy method, PPO (Schulman et al., 2017). To be able to analyze the learned representations more easily, we represent policies and value functions as feedforward neural networks and use a stack of past observations as input in the environments that require memory. We report the mean return as a function of the number of training steps. Training is interleaved with periodic evaluations on the original environments and variants thereof used for validation. The results are averaged over 10 random seeds. Please refer to Appendix F for more details about the experimental setup.
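The observation-stacking setup described above can be sketched as a small wrapper (the interface and names are assumed; the point is only that a feedforward network receives a fixed-size window of the k most recent observations):

```python
from collections import deque

class ObsStack:
    """Stack the k most recent observations so a feedforward policy or
    value network gets a fixed-size window of memory."""
    def __init__(self, k, pad):
        self.k, self.pad = k, pad
        self.frames = deque(maxlen=k)

    def reset(self, obs):
        self.frames.clear()
        for _ in range(self.k - 1):
            self.frames.append(self.pad)  # pad until k real frames exist
        self.frames.append(obs)
        return tuple(self.frames)

    def push(self, obs):
        self.frames.append(obs)  # oldest frame is dropped automatically
        return tuple(self.frames)
```

The stacked tuple is what would be fed to the network at every step in place of a recurrent hidden state.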
Environments
We ran our experiments on three grid-world environments: the Frozen T-Maze from Section 2, and the below described Key2Door, and Diversion environments. We use these as pedagogical examples to clarify the ideas introduced by the theory. Nonetheless, in Appendix C, we refer to previous works showing evidence of particular forms of policy confounding in high dimensional domains.
Example 2. Key2Door. Here, the agent needs to collect a key placed at the beginning of the corridor in Figure 4 (left) and then open the door at the end. The observations do not show whether the key has already been collected. Thus, to solve the task in the minimum number of steps, the agent must remember that it already got the key when going to the door. Yet, since during training the agent always starts the episode at the first cell from the left, when moving towards the door, the agent can forget about the key once it has reached the third cell. As in the Frozen T-Maze example, the agent can build the habit of using its own location to tell whether it has or has not got the key yet. This can only occur when the agent consistently follows the optimal policy, depicted by the purple arrow. Otherwise, if the agent moves randomly through the corridor, it is impossible to tell whether the key has or has not been collected. In contrast, in the evaluation environment, the agent always starts at the second-to-last cell. This confuses the agent, which is used to already having the key by the time it reaches said cell. A DBN describing the dynamics of the environment is provided in Appendix D.
Example 3. Diversion. Here, the agent must move from the start state to the goal state in Figure 4 (right). The observations are length-8 binary vectors. The first 7 elements indicate the column where the agent is located. The last element indicates the row. This environment aims to show that policy confounding can occur not only when the environment is partially observable, as was the case in the previous examples, but also in fully observable scenarios. After the agent learns the optimal trajectory depicted by the green arrow, it can disregard the last element in the observation vector. This is because, if the agent does not deviate, the bottom row is never visited. Rather than forgetting past information, the agent ignores the last element in the current observation vector for being irrelevant when following the optimal trajectory. We train the agent in the original environment and evaluate it in a version with a yellow diversion sign in the middle of the maze that forces the agent to move to the bottom row. A DBN describing the dynamics of the environment is provided in Appendix D.
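The Diversion observation encoding just described can be written down directly (the exact bit layout is our assumption from the text: a one-hot over the 7 columns plus a final bit for the row):

```python
def diversion_obs(col, row, n_cols=7):
    """Length-8 binary observation for the Diversion grid: a one-hot over
    the n_cols columns, plus one trailing bit indicating the row."""
    vec = [0] * (n_cols + 1)
    vec[col] = 1      # column one-hot
    vec[n_cols] = row  # row bit: 0 = top row, 1 = bottom row
    return vec
```

Along the optimal (top-row) trajectory, the trailing element is constantly 0, which is exactly why a converged agent can learn to ignore it, and why the diversion sign, which forces a visit to the bottom row, breaks the learned policy.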
Results
On-policy vs. off-policy The results in Figure 5 reveal the same pattern in all three environments. PPO fails to generalize outside the agent's preferred trajectories. After an initial phase where the average returns on the training and evaluation environments increase ('PPO train' and 'PPO eval'), the return on the evaluation environments ('PPO eval') starts decreasing when the agent commits to a particular trajectory, as a result of policy confounding. In contrast, since the training samples come from a mixture of policies, DQN performs optimally in both variants of the environments ('DQN train' and 'DQN eval') long after converging to the optimal policy. A visualization of the history representations learned with PPO, showing that the policy does ignore variables that are necessary for generalization, is provided in Appendix E.1.
Large vs. small replay buffers We mentioned in Section 6.3 that the effectiveness of off-policy methods against policy confounding depends on the size of the replay buffer. The results in Figure 6 (left) confirm this claim. The plot shows the performance of DQN in the Frozen T-Maze environment when the replay buffer contains 100K experiences and when it contains only the last 10K experiences. We see that in the second case, the agent's performance in the evaluation environment decreases (red curve, left plot). This is because, after the initial exploration phase, the distribution of trajectories becomes too narrow, and the spurious correlations induced by the latest policies dominate the replay buffer. Similar results for the other two environments are provided in Appendix E.2.
Exploration and domain randomization The last experiment shows that if sufficient exploration is allowed, DQN may still generalize to different trajectories, even when using small replay buffers (blue curve, right plot of Figure 6). In the original configuration, the exploration rate ϵ for DQN starts at ϵ = 1 and decays linearly to ϵ = 0.0 over 20K steps. For this experiment, we set the final exploration rate to ϵ = 0.1. In contrast, since exploration in PPO is normally controlled by the entropy bonus, which makes it hard to ensure fixed exploration rates, we add noise to the environment instead. The red curve in Figure 6 (right) shows that when we train in an environment where the agent's actions are overridden by a random action with 20% probability, the performance of PPO in the evaluation environment does not degrade after the agent has converged to the optimal policy. This suggests that the added noise prevents the samples containing spurious correlations from dominating the training batches. However, it may also happen that random noise is not sufficient to remove the spurious correlations. As shown in Figure 13 (Appendix E.2), in the Key2Door environment, neither forcing the agent to take random actions 20% of the time nor setting ϵ = 0.1 solves the OOT generalization problem. Similar results for Diversion are provided in Appendix E.2.
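The two settings described above can be sketched in a few lines (a minimal sketch; the function names and the wrapper logic are ours, not Stable Baselines3 internals):

```python
import random

def epsilon(step, total_decay_steps=20_000, eps_start=1.0, eps_final=0.0):
    """Linearly decayed exploration rate, as described in the text.

    Setting eps_final=0.1 reproduces the 'sufficient exploration' variant.
    """
    frac = min(step / total_decay_steps, 1.0)
    return eps_start + frac * (eps_final - eps_start)

def maybe_override(action, n_actions, noise=0.2, rng=random):
    """Domain randomization used for PPO: with probability `noise`, the
    agent's action is replaced by a uniformly random one."""
    return rng.randrange(n_actions) if rng.random() < noise else action
```

With `eps_final=0.1` the agent keeps taking off-trajectory actions forever, so exploratory samples never vanish from the replay buffer; `maybe_override` achieves a similar effect for PPO by perturbing trajectories at the environment level.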
Conclusion
This paper described the phenomenon of policy confounding. We showed both theoretically and empirically how, as a result of following certain trajectories, agents may pick up on spurious correlations and build habits that are not robust to trajectory deviations. We also uncovered the circumstances under which policy confounding is most likely to occur in practice and suggested a few ad hoc solutions that may mitigate its effects. We conceive this paper as a stepping stone to explore more sophisticated solutions. An interesting avenue for future research is the integration of tools from the field of causal inference (Pearl et al., 2016; Peters et al., 2017) to aid the agent in forming history representations that are grounded on causal relationships rather than mere statistical associations (Lu et al., 2018; Zhang et al., 2020; Sontakke et al., 2021; Saengkyongam et al., 2023).
which is precisely the first condition in Definition 4,

$$R_h(\Phi^\pi(h_t), a_t) = R_h(h_t, a_t), \tag{4}$$

for all $h_t \in H$ and $a_t \in A$.

Analogously, we have that

$$\Pr{}^\pi(\Phi^\pi(h_{t+1}) \mid \Phi^\pi(h_t), a_t) = \Pr{}^\pi(\Phi^\pi(h_{t+1}) \mid \mathrm{do}(\Phi^\pi(h_t)), a_t) = \Pr(\Phi^\pi(h_{t+1}) \mid \Phi^\pi(h_t), a_t), \tag{5}$$

where the second equality reflects that the above must hold independently of $\pi$. Hence, we have that, for all $h_t, h_{t+1} \in H$ and $h'_t \in \{h_t\}_\Phi$,

$$\Pr(\Phi^\pi(h_{t+1}) \mid \Phi^\pi(h_t), a_t) = \Pr(\Phi^\pi(h_{t+1}) \mid \Phi^\pi(h'_t), a_t), \tag{6}$$

which means that, for all $h_t, h_{t+1} \in H$ and $a_t \in A$,

$$\Pr(\Phi^\pi(h_{t+1}) \mid \Phi^\pi(h_t), a_t) = \Pr(\Phi^\pi(h_{t+1}) \mid h_t, a_t) = \sum_{h'_{t+1} \in \{h_{t+1}\}_{\Phi^\pi}} T_h(h'_{t+1} \mid h_t, a_t), \tag{7}$$

which is the second condition in Definition 4.
Equations (4) and (7) reveal that if the assumption is true (i.e., $\Phi^\pi$ is not confounded by the policy), then $\Phi^\pi$ is not just π-Markov but actually strictly Markov (Definition 4). However, we know that $\Phi^*(h_t)$ is the minimal history representation, which contradicts the above statement, since, according to Definition 5, there is no proper subset of $\bar\Theta^*_t$, for all $h_t \in H$, such that the representation remains Markov. Hence, $\bar\Theta^\pi_t \subset \bar\Theta^*_t$ implies policy confounding.

Proposition 2. Let $\{\Theta^*_t\}_{\cup\Phi^*}$ be the union of variables in all possible minimal history representations. There exist cases where, for some π, there is a π-minimal history representation $\Phi^{\pi*}: H^\pi_t \to \bar H^{\pi*}_t$ with $\bar H^{\pi*}_t = \times\bar\Theta^{\pi*}_t$ such that $\bar\Theta^{\pi*}_t \setminus \{\Theta^*_t\}_{\cup\Phi^*} \neq \emptyset$.
Proof (sketch). Consider a deterministic MDP with a deterministic policy. Imagine there exists a variable $X^1$ that is perfectly correlated with the episode's timestep $t$, but that is generally irrelevant to the task. The variable $X^1$ would constitute in itself a valid π-Markov history representation, since it can be used to determine transitions and rewards so long as a deterministic policy is followed. At the same time, $X^1$ would not enter the minimal Markov history representation because it is useless under stochastic policies. Example 4 below illustrates this situation.

Example 4 (Watch the Time). This example is inspired by the empirical results of Song et al. (2020). Figure 7 shows a grid world environment. The agent must go from the start cell to the goal cell, avoiding the yellow cells; stepping on those yields a −0.1 penalty. There is a +1 reward for reaching the goal. The agent can observe its own location within the maze $x$ and the current timestep $t$. The two diagrams in Figure 8 are DBNs describing the environment dynamics. When actions are considered exogenous random variables (left diagram), the only way to estimate the reward at $t = 10$ is by looking at the agent's location. In contrast, when actions are determined by the policy (right diagram), the time variable becomes a proxy for the agent's location. This is because the start location and the sequence of actions are fixed. This implies that $t$ is a perfectly valid π-Markov history representation under $\pi^*$. Moreover, as shown by the DBN on the right, the optimal policy may simply rely on $t$ to determine the optimal action.
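The core of the example can be reproduced in a few lines. The sketch below assumes a toy deterministic chain (the environment and names are ours, not the paper's exact grid): under the fixed deterministic policy the timestep coincides with the location, while any other policy breaks the correlation.

```python
def rollout(policy, length=10):
    """Toy deterministic chain inspired by the watch-the-time example:
    the location x equals the number of 'move right' actions taken so far."""
    x, trace = 0, []
    for t in range(length):
        x += policy(x, t)      # action in {0, 1}: stay or move right
        trace.append((t + 1, x))
    return trace

# Under the fixed optimal policy (always move right), the timestep t is a
# perfect proxy for the location x: a representation keeping only t is
# pi-Markov, even though t is irrelevant to the task itself.
always_right = lambda x, t: 1
assert all(t == x for t, x in rollout(always_right))

# Under any other policy the correlation breaks:
every_other = lambda x, t: 1 if t % 2 == 0 else 0
assert not all(t == x for t, x in rollout(every_other))
```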
C Further Related Work
Early evidence of policy confounding Although to the best of our knowledge, we are the first to bring forward and describe mathematically the idea of policy confounding, a few prior works have reported evidence of particular forms of policy confounding. In their review of the Arcade Learning Environment (ALE; Bellemare et al., 2013), Machado et al. (2018) explain that because the games are fully deterministic (i.e., initial states are fixed and transitions are deterministic), open-loop policies that memorize good action sequences can achieve high scores in ALE. Clearly, this can only occur if the policies themselves are also deterministic. In such cases, policies, acting as confounders, induce a spurious correlation between the past action sequences and the environment states. Similarly, Song et al. (2020) showed, by means of saliency maps, how agents may learn to use irrelevant features of the environment that happen to be correlated with the agent's progress, such as background clouds or the game timer, as clues for outputting optimal actions. In this case, the policy is again a confounder for all of these a priori irrelevant features. Zhang et al. (2018b) provide empirical results showing how large neural networks may overfit their training environments and, even when trained on a collection of procedurally generated environments, memorize the optimal action for each observation. Zhang et al. (2018a) show how, when trained on a small subset of trajectories, agents fail to generalize to a set of test trajectories generated by the same simulator. Lan et al. (2023) report evidence of well-trained agents failing to perform well on Mujoco environments when starting from trajectories (states) that are out of the distribution induced by the agent's policy. We conceive this as a simple form of policy confounding.
Since the Mujoco environments are also deterministic, agents following a fixed policy can memorize the best actions to take for each state instantiation, potentially relying on superfluous features. Hence, they can overfit to unnatural postures that would not occur under different policies. Finally, Nikishin et al. (2022) describe a phenomenon named 'primacy bias', which prevents agents trained on poor trajectories from further improving their policies. The authors show that this issue is particularly relevant when training relies heavily on early data coming from a fixed random policy. We hypothesize that one of the causes for this is also policy confounding. The random policy may induce spurious correlations that lead to the formation of rigid history (state) representations that are hard to recover from.
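A minimal sketch of why determinism enables this kind of memorization (the chain environment and names are illustrative, not ALE itself): an open-loop action sequence that never reads the state reaches the goal from the fixed initial state, but fails as soon as the initial state is perturbed.

```python
def replay(step, s, actions):
    """Run a memorized (open-loop) action sequence; never reads the state."""
    for a in actions:
        s = step(s, a)
    return s

# Deterministic chain: the state is a position, the goal is cell 5.
step = lambda s, a: s + a
memorized = [1, 1, 1, 1, 1]  # a fixed action sequence learned by rote

# With a fixed initial state and deterministic transitions, the open-loop
# habit reaches the goal every time:
assert replay(step, 0, memorized) == 5

# But the habit is brittle: perturb the initial state and it misses the goal.
assert replay(step, 1, memorized) == 6
```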
Generalization Generalization is a hot topic in machine learning. The promise of a model performing well in contexts other than those encountered during training is undoubtedly appealing. In the realm of reinforcement learning, the majority of research focuses on generalization to environments that, despite sharing a similar structure, differ somewhat from the training environment (Kirk et al., 2023). These differences range from small variations in the transition dynamics (e.g., sim-to-real transfer; Higgins et al., 2017; Tobin et al., 2017; Peng et al., 2018; Zhao et al., 2020), changes in the observations (i.e., modifying irrelevant information, such as noise: Mandlekar et al., 2017; Ornia et al., 2022, or background variables: Zhang et al., 2020; Stone et al., 2021), to alterations in the reward function, resulting in different goals or tasks (Taylor and Stone, 2009; Lazaric, 2012; Muller-Brockhausen et al., 2021). Instead, we focus on the problem of OOT generalization. Keeping the environment unchanged, we aim to ensure that agents perform effectively when confronted with situations that differ from those encountered along their preferred trajectories.
State abstraction State abstraction is concerned with removing from the representation all the state information that is irrelevant to the task. In contrast, we are worried about learning representations containing too little information, which can lead to state aliasing. Nonetheless, as argued by McCallum (1995), state abstraction and state aliasing are two sides of the same coin. That is why we borrowed the mathematical frameworks of state abstraction to describe the phenomenon of policy confounding. Li et al. (2006) provide a taxonomy of the types of state abstraction and how they relate to one another. Givan et al. (2003) introduce the concept of bisimulation, which is equivalent to our definition of Markov history representation (Definition 4) but for states instead of histories. Ferns et al. (2006) propose a method for measuring the similarity between two states. Castro (2020) notes that this metric is prohibitively expensive and suggests using a relaxed version that computes state similarity relative to a given policy. This is similar to our notion of π-Markov history representation (Definition 7). While the end goal of this metric is to group together states that are similar under a given policy, here we argue that this may lead to poor OOT generalization.

Figure 9: Two DBNs representing the dynamics of the Key2Door environment, when actions are sampled at random (left), and when they are determined by the optimal policy (right). The nodes labeled as $x$ represent the agent's location, while the nodes labeled as $y$ represent whether or not the key has been collected. The agent can only see $x$. Hence, when actions are sampled at random (left), the agent must remember its past locations to determine the reward $r_7$. Note that only $x_1$ and $x_7$ are highlighted in the left DBN. However, other variables in $\langle x_2, \ldots, x_6 \rangle$ might be needed, depending on when the key is collected. In contrast, when following the optimal policy, only $x_7$ is needed.
In this second case, knowing the current location is sufficient to determine whether the key has been collected. Figure 10 shows the analogous pair of DBNs for the Diversion environment, when actions are sampled at random (left) and when they are determined by the optimal policy (right). The nodes labeled as $x^1$ indicate the row where the agent is located; the nodes labeled as $x^2$ indicate the column. We see that when actions are sampled at random, both $x^1_6$ and $x^2_6$ are necessary to determine $r_6$. However, when actions are determined by the optimal policy, $x^2_6$ is sufficient, as the agent always stays at the top row.
E Experimental Results
E.1 Learned history representations
The results reported in Section 7 show that the OOT generalization problem exists. However, some may still wonder if the underlying reason is truly policy confounding. To confirm this, we compare the outputs of the policy at every state in the Frozen T-Maze when being fed the same histories (observation stacks) but two different signals. That is, we permute the variable containing the signal ($x^1$ in the diagram of Figure 2) and leave the rest of the variables in the observation stack unchanged. We then feed the two versions to the policy network and measure the KL divergence between the two output probabilities. This metric is a proxy for how much the agent attends to the signal in every state.
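The probe itself is simple and can be sketched as follows (the action distributions below are illustrative placeholders for the actual network outputs):

```python
import math

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete action distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Probe: feed the policy the same observation stack twice, once with the
# original signal and once with the signal permuted. If the policy ignores
# the signal, the two action distributions coincide and the KL is ~0.
p_green  = [0.7, 0.2, 0.1]   # pi(a | h with green signal)   (illustrative)
p_purple = [0.7, 0.2, 0.1]   # pi(a | same h, signal flipped to purple)
assert kl(p_green, p_purple) < 1e-9
```

A KL divergence near zero at a state thus indicates that the signal has no influence on the action taken there, which is exactly what the heatmaps below visualize.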
The heatmaps in Figure 11 show the KL divergences at various points during training (0, 10K, 30K, and 100K timesteps) when the true signal is 'green' and we replace it with 'purple'. We omit the two goal states since no actions are taken there. We see that initially (top left heatmap) the signal has very little influence on the policy (note that the scale of the colormap is $10^{-6}$). After 10K steps, the agent learns that the signal is very important when at the top right state (top right heatmap). After this, we start seeing how the influence of the signal at the top right state becomes less strong (bottom left heatmap) until it eventually disappears (bottom right heatmap). In contrast, the influence of the signal at the initial state becomes more and more important, indicating that after taking the first action, the agent ignores the signal and only attends to its own location. The results for the alternative case, the purple signal being replaced by the green signal, are shown in Figure 12.

Figure 11: A visualization of the learned history representations. The heatmaps show the KL divergence between the action probabilities when feeding the policy network a stack of the past 10 observations and when feeding the same stack but with the value of the signal switched from green to purple, after 0 (top left), 10K (top right), 30K (bottom left), and 100K (bottom right) timesteps of training.

Figure 12: A visualization of the learned history representations. The heatmaps show the KL divergence between the action probabilities when feeding the policy network a stack of the past 10 observations and when feeding the same stack but with the value of the signal switched from purple to green, after 0 (top left), 10K (top right), 30K (bottom left), and 100K (bottom right) timesteps of training.

Figures 13 and 14 report the results of the experiments described in Section 7 (paragraphs 2 and 3) for Key2Door and Diversion.
E.2 Buffer size and exploration/domain randomization

We see how the buffer size also affects the performance of DQN in the two environments (left plots). We also see that exploration/domain randomization does improve OOT generalization in Diversion but not in Key2Door.
F Further Experimental Details
We ran our experiments on an Intel i7-8650U CPU with 8 cores. Agents were trained with Stable Baselines3 (Raffin et al., 2021). Most hyperparameters were set to their default values except for the ones reported in Tables 1 (PPO) and 2 (DQN), which seemed to work better than the default values.
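To make the setup concrete, the non-default values in Tables 1 and 2 roughly correspond to the following Stable Baselines3 constructor arguments. This is an illustrative sketch, not our exact training script: `make_env` is a hypothetical placeholder for our custom grid environments, and the argument names are taken from the SB3 API.

```python
from stable_baselines3 import PPO, DQN

env = make_env()  # hypothetical: Frozen T-Maze, Key2Door, or Diversion

ppo = PPO(
    "MlpPolicy", env,
    n_steps=128,              # rollout steps
    batch_size=32,
    learning_rate=2.5e-4,
    n_epochs=3,
    ent_coef=1.0e-2,          # entropy coefficient
    clip_range=0.1,
    vf_coef=1.0,              # value coefficient
    policy_kwargs=dict(net_arch=[128, 128]),
)

dqn = DQN(
    "MlpPolicy", env,
    buffer_size=100_000,
    learning_starts=1_000,
    learning_rate=2.5e-4,
    batch_size=256,
    exploration_initial_eps=1.0,
    exploration_final_eps=0.0,
    exploration_fraction=0.2,
    train_freq=5,
    policy_kwargs=dict(net_arch=[128, 128]),
)
```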
Figure 1: Left: An illustration of the Frozen T-Maze environment. Right: Learning curves when evaluated in the Frozen T-Maze environment with (blue curve) and without (red curve) ice.

Figure 2: Two DBNs representing the dynamics of the Frozen T-Maze environment, when actions are sampled at random (left), and when they are determined by the optimal policy (right).

Figure 3: A DBN illustrating the phenomenon of policy confounding. The policy opens a backdoor path that can affect conditional relations between the variables in $h_t$ and $h_{t+1}$.

Figure 4: Illustrations of the Key2Door (left) and Diversion (right) environments.

Figure 5: DQN vs. PPO in the train and evaluation variants of Frozen T-Maze (left), Key2Door (middle), and Diversion (right).

Figure 6: Frozen T-Maze. Left: DQN small vs. large buffer sizes. Right: PPO and DQN when adding stochasticity.

Figure 7: An illustration of the watch-the-time environment.

Figure 8: Two DBNs representing the dynamics of the watch-the-time environment, when actions are sampled at random (left), and when they are determined by the optimal policy (right).

Figure 10: Two DBNs representing the dynamics of the Diversion environment, when actions are sampled at random (left), and when they are determined by the optimal policy (right).

Figure 13: Key2Door. Left: DQN small vs. large buffer sizes. Right: PPO and DQN when adding stochasticity.

Figure 14: Diversion. Left: DQN small vs. large buffer sizes. Right: PPO and DQN when adding stochasticity.
Table 1: PPO hyperparameters.

    Rollout steps                  128
    Batch size                     32
    Learning rate                  2.5e-4
    Number of epochs               3
    Entropy coefficient            1.0e-2
    Clip range                     0.1
    Value coefficient              1
    Number of neurons, 1st layer   128
    Number of neurons, 2nd layer   128

Table 2: DQN hyperparameters.

    Buffer size                    1.0e5
    Learning starts                1.0e3
    Learning rate                  2.5e-4
    Batch size                     256
    Initial exploration bonus      1.0
    Final exploration bonus        0.0
    Exploration fraction           0.2
    Train frequency                5
    Number of neurons, 1st layer   128
    Number of neurons, 2nd layer   128
Note that the two paths highlighted in Figure 1 are not the only optimal paths. However, for the agent to be able to ignore the initial signal, it is important that the paths do not overlap.
Note that we sum over $s_{t+1}$ because multiple states may emit the same observation $o_{t+1}$.
$R_h(h_t, a_t) = R^\pi_h(\Phi^\pi(h_t), a_t)$ and $\sum_{h'_{t+1} \in \{h_{t+1}\}_{\Phi^\pi}} T_h(h'_{t+1} \mid h_t, a_t) = \Pr{}^\pi(\Phi^\pi(h_{t+1}) \mid \Phi^\pi(h_t), a_t)$, where $H^\pi \subseteq H$ denotes the histories visited under $\pi$, $R^\pi_h(\Phi^\pi(h_t), a_t) = \{R_h(h'_t, a_t)\}_{h'_t \in \{h_t\}_{\Phi^\pi}}$, $\{h_t\}_{\Phi^\pi} = \{h'_t \in H^\pi_t : \Phi^\pi(h'_t) = \Phi^\pi(h_t)\}$, and $\Pr{}^\pi$ is probability under $\pi$.

Definition 8 (π-minimal history representation). A history representation $\Phi^{\pi*}: H^\pi_t \to \bar H^{\pi*}_t$ with $\bar H^{\pi*}_t = \times\bar\Theta^{\pi*}_t$ is said to be π-minimal if all other history representations $\Phi: H^\pi_t \to \bar H^\pi_t$ with $\bar H^\pi_t = \times\bar\Theta_t$ and $\bar\Theta_t \subset \bar\Theta^{\pi*}_t$, for at least one $h_t \in H^\pi$, are not π-Markov.
Theorem 1. Let $\Phi^*: H_t \to \bar H^*_t$ with $\bar H^*_t = \times\bar\Theta^*_t$ be a minimal history representation. If, for some π, there is a π-Markov history representation $\Phi^\pi: H^\pi_t \to \bar H^\pi_t$ with $\bar H^\pi_t = \times\bar\Theta^\pi_t$, such that $\bar\Theta^\pi_t \subset \bar\Theta^*_t$ for some $h_t \in H$, then $\Phi^\pi$ is confounded by policy π.

Finally, to conclude this section, we demonstrate that even though, in Example 1, the variables included in the π-minimal history representation are a subset of the variables in the minimal history representation, $\bar\Theta^{\pi*}_t \subset \bar\Theta^*_t$, this is not always the case, as $\bar\Theta^{\pi*}_t$ may contain superfluous variables (Definition 6). An example illustrating this situation is provided in Appendix B (Example 4).

Proposition 2. Let $\{\Theta^*_t\}_{\cup\Phi^*}$ be the union of variables in all possible minimal history representations. There exist cases where, for some π, there is a π-minimal history representation $\Phi^{\pi*}: H^\pi_t \to \bar H^{\pi*}_t$ with $\bar H^{\pi*}_t = \times\bar\Theta^{\pi*}_t$ such that $\bar\Theta^{\pi*}_t \setminus \{\Theta^*_t\}_{\cup\Phi^*} \neq \emptyset$.

6.1 Why should we care about policy confounding?
Note that in the Frozen T-Maze environment, the ice does change the environment dynamics. However, its purpose is to compel the agent to take trajectories different from the optimal ones. The way we implemented it, the effect of the ice would be equivalent to forcing the agent to move down twice when in the top cell or move up twice when in the bottom cell. These trajectories are feasible in the original environment.
The small gap between 'DQN train' and 'DQN eval' is due to the −0.1 penalty per timestep. In all three environments, the shortest path is longer in the evaluation environment than in the training environment.
Peters, J., Janzing, D., and Schölkopf, B. (2017). Elements of causal inference: foundations and learning algorithms. The MIT Press.

Quionero-Candela, J., Sugiyama, M., Schwaighofer, A., and Lawrence, N. D. (2009). Dataset shift in machine learning. The MIT Press.

Raffin, A., Hill, A., Gleave, A., Kanervisto, A., Ernestus, M., and Dormann, N. (2021). Stable-baselines3: Reliable reinforcement learning implementations. Journal of Machine Learning Research, 22(268):1-8.
Acknowledgements

This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program.

A Proofs

Lemma 1. Let $\Phi^{\pi_1}_*$ be the set of all possible π-minimal history representations under $\pi_1$, where every $\Phi^{\pi_1 *} \in \Phi^{\pi_1}_*$ is defined as $\Phi^{\pi_1 *}: H^{\pi_1}_t \to \bar H^{\pi_1 *}_t$ with $\bar H^{\pi_1 *}_t = \times\bar\Theta^{\pi_1 *}_t$, and let $\pi_2$ be a second policy such that for all $h_t \in H^{\pi_1}_t \cap H^{\pi_2}_t$,

$$\operatorname{supp}(\pi_2(\cdot \mid h_t)) \subseteq \operatorname{supp}(\pi_1(\cdot \mid h_t)).$$

For all $\Phi^{\pi_1 *} \in \Phi^{\pi_1}_*$, there exists a π-Markov history representation under policy $\pi_2$, $\Phi^{\pi_2}: H^{\pi_2}_t \to \bar H^{\pi_2}_t$.

Proof. First, it is easy to show that ... In particular, ... In such cases, we know that there is at least one action $a'$ for which ... from $H^{\pi_2}$ but possibly also subsequent histories that can only be reached from $h'_{t+1}$. Further, since $H^{\pi_2} \subset H^{\pi_1}$, we know that, for every $\Phi^{\pi_1 *} \in \Phi^{\pi_1}_*$, there must be a $\Phi^{\pi_2 *}$ that requires, at most, the same number of variables, $\bar\Theta^{\pi_2}_t \subseteq \bar\Theta^{\pi_1 *}_t$, and, in some cases, fewer, $\bar\Theta^{\pi_1 *}_t \neq \bar\Theta^{\pi_2 *}_t$ (e.g., the Frozen T-Maze example).

Proposition 1. Let $\Phi_*$ be the set of all possible minimal history representations, where every $\Phi^* \in \Phi_*$ is defined as $\Phi^*: H_t \to \bar H^*_t$ with $\bar H^*_t = \times\bar\Theta^*_t$. For all π and all $\Phi^* \in \Phi_*$, there exists a π-Markov history representation $\Phi^\pi: H^\pi_t \to \bar H^\pi_t$ with $\bar H^\pi_t = \times\bar\Theta^\pi_t$ such that, for all $h_t \in H^\pi$, $\bar\Theta^\pi_t \subseteq \bar\Theta^*_t$. Moreover, there exist cases for which $\bar\Theta^\pi_t$ is a proper subset, $\bar\Theta^\pi_t \neq \bar\Theta^*_t$.

Proof. The proof follows from Lemma 1. We know that, in general, $H^\pi \subseteq H$, and if $\pi(a'_t \mid h'_t) = 0$ for at least one pair $a'_t \in A$, $h'_t \in H$, then $H^\pi \subset H$. Hence, for every $\Phi^*$ there is a $\Phi^\pi$ such that $\bar\Theta^\pi_t \subseteq \bar\Theta^*_t$, and in some cases, when $H^\pi \subset H$, we may have $\bar\Theta^\pi_t \neq \bar\Theta^*_t$ (e.g., the Frozen T-Maze example).

Theorem 1. Let $\Phi^*: H_t \to \bar H^*_t$ with $\bar H^*_t = \times\bar\Theta^*_t$ be a minimal history representation. If, for some π, there is a π-Markov history representation $\Phi^\pi: H^\pi_t \to \bar H^\pi_t$ with $\bar H^\pi_t = \times\bar\Theta^\pi_t$, such that $\bar\Theta^\pi_t \subset \bar\Theta^*_t$ for some $h_t \in H$, then $\Phi^\pi$ is confounded by policy π.
Proof (by contradiction). Let us assume that $\bar\Theta^\pi_t \subset \bar\Theta^*_t$, and yet there is no policy confounding, i.e., for all $h_t, h_{t+1} \in H$ and $a_t \in A$, $R^\pi_h(\Phi^\pi(h_t), a_t) = R^\pi_h(\mathrm{do}(\Phi^\pi(h_t)), a_t)$ and $\Pr{}^\pi(\Phi^\pi(h_{t+1}) \mid \Phi^\pi(h_t), a_t) = \Pr{}^\pi(\Phi^\pi(h_{t+1}) \mid \mathrm{do}(\Phi^\pi(h_t)), a_t)$. First, note that the do-operator implies that the equality must hold for all $h'_t$ in $h_t$'s equivalence class under $\Phi^\pi$, $h'_t \in \{h_t\}_\Phi = \{h'_t \in H_t : \Phi(h'_t) = \Phi(h_t)\}$, i.e., not just those $h'_t$ that are visited under π:

$$R^\pi_h(\Phi^\pi(h_t), a_t) = R^\pi_h(\mathrm{do}(\Phi^\pi(h_t)), a_t) = \{R(h'_t, a_t)\}_{h'_t \in \{h_t\}_{\Phi^\pi}},$$
Altman, E. (1999). Constrained Markov decision processes, volume 7. CRC Press.

Andre, D. and Russell, S. J. (2002). State abstraction for programmable reinforcement learning agents. In Proceedings of the Eighteenth National Conference on Artificial Intelligence, pages 119-125.

Arjovsky, M. (2021). Out of distribution generalization in machine learning. arXiv preprint arXiv:2103.02667.

Bakker, B. (2001). Reinforcement learning with long short-term memory. Advances in Neural Information Processing Systems, 14.

Beery, S., Van Horn, G., and Perona, P. (2018). Recognition in terra incognita. In Proceedings of the European Conference on Computer Vision (ECCV), pages 456-473.

Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. (2013). The Arcade Learning Environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253-279.

Boutilier, C., Dean, T., and Hanks, S. (1999). Decision-theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research, 11:1-94.

Castro, P. S. (2020). Scalable methods for computing state similarity in deterministic Markov decision processes. In Proceedings of the AAAI Conference on Artificial Intelligence.

Dean, T. and Givan, R. (1997). Model minimization in Markov decision processes. In Proceedings of the National Conference on Artificial Intelligence, pages 106-111.

Eysenbach, B. and Levine, S. (2022). Maximum entropy RL (provably) solves some robust RL problems. In International Conference on Learning Representations.

Ferns, N., Castro, P. S., Precup, D., and Panangaden, P. (2006). Methods for computing state similarity in Markov decision processes. In Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence, UAI'06, pages 174-181.

French, R. M. (1999). Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 3(4):128-135.

García, J. and Fernández, F. (2015). A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1):1437-1480.

Givan, R., Dean, T., and Greig, M. (2003). Equivalence notions and model minimization in Markov decision processes. Artificial Intelligence, 147(1-2):163-223.

Higgins, I., Pal, A., Rusu, A., Matthey, L., Burgess, C., Pritzel, A., Botvinick, M., Blundell, C., and Lerchner, A. (2017). DARLA: Improving zero-shot transfer in reinforcement learning. In International Conference on Machine Learning, pages 1480-1490. PMLR.

Jiang, Y., Kolter, J. Z., and Raileanu, R. (2022). Uncertainty-driven exploration for generalization in reinforcement learning. In Deep Reinforcement Learning Workshop, NeurIPS 2022.

Kaelbling, L. P., Littman, M., and Moore, A. (1996). Reinforcement learning: A survey. Journal of AI Research, 4:237-285.

Kirk, R., Zhang, A., Grefenstette, E., and Rocktäschel, T. (2023). A survey of zero-shot generalisation in deep reinforcement learning. Journal of Artificial Intelligence Research, 76:201-264.

Lan, L.-C., Zhang, H., and Hsieh, C.-J. (2023). Can agents run relay race with strangers? Generalization of RL to out-of-distribution trajectories. In The Eleventh International Conference on Learning Representations.

Langosco, L., Koch, J., Sharkey, L. D., Pfau, J., and Krueger, D. (2022). Goal misgeneralization in deep reinforcement learning. In International Conference on Machine Learning, pages 12004-12019. PMLR.

Lazaric, A. (2012). Transfer in reinforcement learning: a framework and a survey. In Reinforcement Learning: State-of-the-Art, pages 143-173.

Li, L., Walsh, T. J., and Littman, M. L. (2006). Towards a unified theory of state abstraction for MDPs. In International Symposium on Artificial Intelligence and Mathematics (ISAIM 2006).

Lu, C., Schölkopf, B., and Hernández-Lobato, J. M. (2018). Deconfounding reinforcement learning in observational settings. arXiv preprint arXiv:1812.10576.

Machado, M. C., Bellemare, M. G., Talvitie, E., Veness, J., Hausknecht, M., and Bowling, M. (2018). Revisiting the Arcade Learning Environment: Evaluation protocols and open problems for general agents. Journal of Artificial Intelligence Research, 61:523-562.

Mandlekar, A., Zhu, Y., Garg, A., Fei-Fei, L., and Savarese, S. (2017). Adversarially robust policy learning: Active construction of physically-plausible perturbations. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3932-3939. IEEE.

McCallum, A. K. (1995). Reinforcement Learning with Selective Perception and Hidden State. PhD thesis, University of Rochester.

Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540):529.

Muller-Brockhausen, M., Preuss, M., and Plaat, A. (2021). Procedural content generation: Better benchmarks for transfer reinforcement learning. In 2021 IEEE Conference on Games (CoG), pages 1-8. IEEE.

Murphy, K. P. (2002). Dynamic Bayesian Networks: Representation, Inference and Learning. PhD thesis, UC Berkeley, Computer Science Division.

Nikishin, E., Schwarzer, M., D'Oro, P., Bacon, P.-L., and Courville, A. (2022). The primacy bias in deep reinforcement learning. In International Conference on Machine Learning, pages 16828-16847. PMLR.

Ornia, D. J., Romao, L., Hammond, L., Mazo Jr, M., and Abate, A. (2022). Observational robustness and invariances in reinforcement learning via lexicographic objectives. arXiv preprint arXiv:2209.15320.

Pearl, J. (2000). Causality: Models, Reasoning, and Inference. Cambridge University Press.

Pearl, J., Glymour, M., and Jewell, N. P. (2016). Causal inference in statistics: A primer. Wiley.

Peng, X. B., Andrychowicz, M., Zaremba, W., and Abbeel, P. (2018). Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 3803-3810. IEEE.

Saengkyongam, S., Thams, N., Peters, J., and Pfister, N. (2023). Invariant policy learning: A causal perspective. IEEE Transactions on Pattern Analysis and Machine Intelligence.

Schaul, T., Quan, J., Antonoglou, I., and Silver, D. (2016). Prioritized experience replay. In International Conference on Learning Representations.
J Schulman, F Wolski, P Dhariwal, A Radford, O Klimov, arXiv:1707.06347Proximal policy optimization algorithms. arXiv preprintSchulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
Learning without state-estimation in partially observable Markovian decision processes. S P Singh, T Jaakkola, Jordan , M I , Proc. of the International Conference on Machine Learning. of the International Conference on Machine LearningSingh, S. P., Jaakkola, T., and Jordan, M. I. (1994). Learning without state-estimation in partially observable Markovian decision processes. In Proc. of the International Conference on Machine Learning, pages 284-292.
Observational overfitting in reinforcement learning. X Song, Y Jiang, S Tu, Y Du, B Neyshabur, International Conference on Learning Representations. Song, X., Jiang, Y., Tu, S., Du, Y., and Neyshabur, B. (2020). Observational overfitting in reinforce- ment learning. In International Conference on Learning Representations.
Causal curiosity: Rl agents discovering self-supervised experiments for causal representation learning. S A Sontakke, A Mehrjou, L Itti, B Schölkopf, PMLRInternational conference on machine learning. Sontakke, S. A., Mehrjou, A., Itti, L., and Schölkopf, B. (2021). Causal curiosity: Rl agents discovering self-supervised experiments for causal representation learning. In International conference on machine learning, pages 9848-9858. PMLR.
A Stone, O Ramirez, K Konolige, Jonschkowski , R , arXiv:2101.02722The distracting control suite-a challenging benchmark for reinforcement learning from pixels. arXiv preprintStone, A., Ramirez, O., Konolige, K., and Jonschkowski, R. (2021). The distracting control suite-a challenging benchmark for reinforcement learning from pixels. arXiv preprint arXiv:2101.02722.
Reinforcement learning: An introduction. R S Sutton, A G Barto, MIT pressSutton, R. S. and Barto, A. G. (2018). Reinforcement learning: An introduction. MIT press.
Transfer learning for reinforcement learning domains: A survey. M E Taylor, P Stone, Journal of Machine Learning Research. 710Taylor, M. E. and Stone, P. (2009). Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(7).
Domain randomization for transferring deep neural networks from simulation to the real world. J Tobin, R Fong, A Ray, J Schneider, W Zaremba, Abbeel , P , 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEETobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., and Abbeel, P. (2017). Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS), pages 23-30. IEEE.
A dissection of overfitting and generalization in continuous reinforcement learning. A Zhang, N Ballas, J Pineau, arXiv:1806.07937arXiv preprintZhang, A., Ballas, N., and Pineau, J. (2018a). A dissection of overfitting and generalization in continuous reinforcement learning. arXiv preprint arXiv:1806.07937.
Invariant causal prediction for block mdps. A Zhang, C Lyle, S Sodhani, A Filos, M Kwiatkowska, J Pineau, Y Gal, D Precup, PMLRInternational Conference on Machine Learning. Zhang, A., Lyle, C., Sodhani, S., Filos, A., Kwiatkowska, M., Pineau, J., Gal, Y., and Precup, D. (2020). Invariant causal prediction for block mdps. In International Conference on Machine Learning, pages 11214-11224. PMLR.
C Zhang, O Vinyals, R Munos, S Bengio, arXiv:1804.06893A study on overfitting in deep reinforcement learning. arXiv preprintZhang, C., Vinyals, O., Munos, R., and Bengio, S. (2018b). A study on overfitting in deep reinforce- ment learning. arXiv preprint arXiv:1804.06893.
Sim-to-real transfer in deep reinforcement learning for robotics: a survey. W Zhao, J P Queralta, T Westerlund, 2020 IEEE symposium series on computational intelligence (SSCI). IEEEZhao, W., Queralta, J. P., and Westerlund, T. (2020). Sim-to-real transfer in deep reinforcement learning for robotics: a survey. In 2020 IEEE symposium series on computational intelligence (SSCI), pages 737-744. IEEE.
| zyda_arxiv-0366000 |
TempT: Temporal consistency for Test-time adaptation
Onur Cezmi Mutlu
Stanford University
Mohammadmahdi Honarmand
Stanford University
Saimourya Surabhi
Stanford University
Dennis P Wall [email protected]
Stanford University
We introduce Temporal consistency for Test-time adaptation (TempT), a novel method for test-time adaptation on videos that uses the temporal coherence of predictions across sequential frames as a self-supervision signal. TempT has broad potential applications in computer vision tasks, including facial expression recognition (FER) in videos. We evaluate TempT's performance on the AffWild2 dataset. Our approach focuses solely on the unimodal visual aspect of the data and utilizes a popular 2D CNN backbone, in contrast to the larger sequential or attention-based models used in other approaches. Our preliminary experimental results demonstrate that TempT is competitive with the performances reported in previous years' challenges, and its efficacy provides a compelling proof of concept for its use in various real-world applications.
Introduction
Affective computing aims to develop technologies capable of recognizing, interpreting, and simulating human affects. Since expressions are one of the primary means of conveying emotion, facial expression recognition (FER) often constitutes an important part of human affective behavior analysis. There is an increasing number of use cases, from driver safety applications to the diagnosis and therapy of developmental problems in children [11]. With the continuous improvement of the computer vision field through the extensive adoption of deep learning approaches, real-world use of such algorithms is becoming easier and more universal. However, the robustness and reliability of these algorithms tend to suffer from the domain shift phenomenon, which is still a prominent problem for computer vision models with limited generalization capability.
The domain shift problem becomes even more pronounced in "real world" scenarios due to uncontrollable environmental conditions. In the computer vision setting, examples of such conditions include lighting, camera quality, motion, and resolution. Invariance and robustness against these variations is the main focus of domain adaptation and domain generalization research, with many successful algorithms already developed. In our work, we explore a specific subdomain of this field called Test-Time Adaptation (TTA), also referred to as Unsupervised Source-Free Domain Adaptation. In this setting, we assume no access to the target domain during training time and no access to target domain labels at test time. We treat each video as a new domain, and our method adapts the trained model to a given video during test time to improve its performance.
We investigate the performance of our approach on the facial expression recognition (FER) task, where the goal is to classify each frame in a video into one of the Ekman emotions. Frame-level video assessment is a natural setting for machine learning models with spatiotemporal inductive biases, since the ability to model inter-frame relations can be useful. Examples of such models are 3D convolutional neural networks (CNNs) [10], attention-based models [1], and hybrid approaches combining 2D CNNs with recurrent neural networks (RNNs) [37]. The first two of these approaches usually have greater computational requirements than 2D CNNs, whereas the last exhibits unstable training-time behavior on inputs of longer duration. There are numerous solutions to these problems, including more efficient architectures as well as well-studied training paradigms, but in our work we focus on an adaptive approach in which a simple 2D CNN model, which lacks useful biases for this setting, uses temporal predictive consistency as a self-supervision signal to adapt at test time. For benchmarking purposes, we use AffWild2 [14-22, 40], an invaluable FER dataset that contains over 500 videos and covers a wide variety of the aforementioned variations. These qualities make it a suitable candidate for testing our algorithm.
Related Work
Facial Expression Recognition (FER) is a challenging task, especially in real-world scenarios. The difficulty arises from the significant amount of variation within each expression category, which makes it hard to distinguish between different expressions. Additionally, there can be similarities between different expression categories, which further complicates the task. This challenge is even more pronounced in real-world settings, where lighting conditions, poses, and the identities of the individuals can vary significantly. In such scenarios, even individuals with the same identity, pose, and lighting conditions can exhibit different expressions, while individuals with different identities, ages, genders, and poses can express the same emotion.
Thus, FER requires robust algorithms that can effectively handle these intra-class variances and inter-class similarities. In the past few years, many convolutional neural network (CNN)-based [4,23,27] and transformer-based [39] architectures have been proposed and have significantly improved FER performance.
As far as we are aware, there has been no prior research on test-time adaptation (TTA) for facial expression recognition (FER). Our work is an attempt to explore the use of TTA for FER tasks. It represents a novel approach to FER that has the potential to improve accuracy and opens up new avenues for TTA research.

Test-time Adaptation. Early attempts at unsupervised domain adaptation were mainly based on updating the running statistics of the batch normalization layers [26,33] with the new information from test data. [34] was one of the early works to propose using an auxiliary self-supervised task at test time to adapt the backbone parameters. [35] proposed using entropy minimization as the main adaptation goal and limiting the set of parameters to be updated to the weights of the batch normalization layers (as opposed to updating their statistics, as before), which are shown to be highly expressive in [5]. Motivated by the close ties between domain adaptation and few-shot learning, [42] introduces a meta-learning-based solution in which the loss to be used for adaptation is itself meta-learned. Finally, [41] and [28] report impressive adaptation results by combining image augmentation and entropy minimization to overcome the shortcomings of the latter in scenarios with large domain shifts.
All of these works operate on static data that does not necessarily carry temporal correlations. Among them, only [35] explores continual adaptation to online data streams. [36] is a novel work that proposes a continual adaptation algorithm based on augmentation consistency. Yet their algorithm assumes i.i.d. samples during test time, which may not always hold. [6] addresses this issue and introduces a new normalization layer that handles selective adaptation under non-i.i.d. data streams. To our knowledge, none of the works in the field exploits the temporal correlations in a given stream, and in our work we aim to explore a possible direction for doing so.
Our Approach
Datasets and Preprocessing
Focusing on training a computer vision model that operates on images (rather than videos), we have numerous data sources that are popular in the FER literature. We combine AffWild2 with AffectNet [30] and the Real-world Affective Faces Database (RAF-DB) [24,25] to create a larger and more diverse training dataset. In our task, the target classes are the 7 basic emotions (also known as Ekman emotions [3]) plus an "other" class for expressions that do not fit into any category.
AffWild2 is significantly larger than the others and has a label imbalance, as can be seen in Fig. 2 and Fig. 3. To overcome this, we randomly subsample it, limiting the number of frames to 300 per video on a per-expression-class basis. The detailed label distribution of the resulting dataset is given in Tab. 1.
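The per-class frame cap described above can be sketched as follows; the function and variable names are our own illustration, not the authors' code:

```python
import random
from collections import defaultdict

def cap_frames_per_class(frame_labels, max_per_class=300, seed=0):
    """Randomly keep at most `max_per_class` frames per expression class
    for one video. `frame_labels` maps frame index -> class label."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in frame_labels.items():
        by_class[label].append(idx)
    kept = []
    for label, indices in by_class.items():
        if len(indices) > max_per_class:
            indices = rng.sample(indices, max_per_class)
        kept.extend(indices)
    return sorted(kept)
```

Applied per video and concatenated across videos, this yields the subsampled training set whose label distribution Tab. 1 reports.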
We use the provided cropped and aligned images for AffWild2; the other datasets are only available in cropped versions, so no additional spatial preprocessing is required for any of them. We then resize images to 112px×112px with antialiasing. For training, we use common image augmentation methods such as color jitter, brightness and contrast shifts, histogram equalization, channel dropout, blur, and random horizontal flips.
Modeling
Our approach is based on individual predictions on video frames, which allows us to use popular image-processing architectures from the literature. Due to their proven performance and training stability, we use models from the ResNet [7] family, with variations such as aggregated residual transformations [38] and squeeze-and-excitation blocks [9]. The generated embeddings are processed by two fully-connected layers, where the second (output) layer is subject to weight and input normalization [32] to prevent overconfidence and improve smoothness and generalization.
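The weight- and input-normalized output layer can be sketched as scaled cosine similarities between L2-normalized embeddings and L2-normalized class weight vectors; the `scale` value and all names here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def cosine_logits(embeddings, weights, scale=16.0):
    """Output layer with both weight and input (L2) normalization:
    logits become scaled cosine similarities, which bounds their
    magnitude and discourages overconfident predictions."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return scale * e @ w.T
```

Because every logit lies in [-scale, scale], no single class can dominate the softmax arbitrarily, which is the overconfidence-limiting effect the text describes.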
Significant class imbalance is a problem in this setting that must be addressed for successful supervised training. Label weighting, class up-sampling, and class down-sampling are classic methods to alleviate this issue, yet there are numerous scenarios in which they fail. We therefore adopt another approach, the Label-Distribution-Aware Margin loss (LDAM) introduced in [2]. LDAM is similar to sample weighting in that it modifies the loss depending on the class frequency, but instead of multiplicative scaling it intervenes through the class margins. The exact formulation is given in Eq. (1), where $z$ is the unnormalized prediction vector, $y$ is the ground-truth class label, $n_j$ is the number of samples in class $j$, and $C$ is a temperature-like hyperparameter that tunes the effect of the margins. LDAM enforces larger margins on minority classes, which in return increases model robustness and prevents overfitting. For more details, we refer the reader to the original paper.
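A minimal numpy sketch of the LDAM loss in Eq. (1); the margin $\Delta_j = C / n_j^{1/4}$ follows the original LDAM paper [2], and all names here are our own:

```python
import numpy as np

def ldam_loss(z, y, class_counts, C=0.5):
    """LDAM: cross entropy where only the true-class logit is reduced by a
    per-class margin delta_j = C / n_j^{1/4} (larger margin for rarer
    classes). z: (batch, k) unnormalized scores, y: (batch,) labels."""
    deltas = C / np.power(np.asarray(class_counts, dtype=float), 0.25)
    z_adj = np.array(z, dtype=float)
    rows = np.arange(len(y))
    z_adj[rows, y] -= deltas[y]          # shift only the true-class logit
    # numerically stable log-softmax on the adjusted logits
    z_adj -= z_adj.max(axis=1, keepdims=True)
    log_probs = z_adj - np.log(np.exp(z_adj).sum(axis=1, keepdims=True))
    return float(-log_probs[rows, y].mean())
```

With $C = 0$ the margin vanishes and LDAM reduces to ordinary cross entropy; a positive margin strictly increases the loss on the true class, which is what pushes minority classes away from the decision boundary.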
Supervised training of the model is then performed with the back-propagation algorithm using the LDAM loss defined above to account for the skewed label distribution. The Adam [13] optimizer with weight decay [29] is used for optimization, with learning rates subject to a step-decay schedule. Modeling and training were performed using the PyTorch [31] framework on NVIDIA V100 GPUs.
TempT: Temporal consistency for Test-time adaptation
Being trained on static images rather than videos, 2D CNN models do not carry an implicit bias toward smoothness or consistency in their predictions across frames. We found empirically that such models contain stronger high-frequency components at the output, and that the results look more desirable when they are subjected to a low-pass filter. We propose using this fact to generate a supervision signal to tune the network and improve classification performance. In particular, we temporally smooth the model predictions using a low-pass filter and set the result as the desired signal. The purpose of setting this filtered signal as the target is to push the model toward temporally consistent predictions. We then calculate the mean-squared error between the original and target signals and use back-propagation to update a subset of model parameters.
More formally, let $x^{(t)} \in \mathbb{R}^{112 \times 112 \times 3}$ be the $t$-th frame of a video and $f(\cdot): \mathbb{R}^{112 \times 112 \times 3} \to \mathbb{R}^8$ be the trained neural network of interest. We hypothesize that predictive coherence between consecutive samples can be used as an implicit Jacobian regularizer. In [8], it has been shown that regularizing the Frobenius norm of the input-output Jacobian of a neural network can help the network attain flatter minima with higher robustness against input variations. Now consider the case when the frame rate of a video is high enough. We can then approximate the Jacobian as in Eq. (2).
$$J_{i,j}(x^{(t)}) = \frac{\partial f_i(x^{(t)})}{\partial x_j^{(t)}} \approx \frac{f_i(x^{(t)}) - f_i(x^{(t-1)})}{x_j^{(t)} - x_j^{(t-1)}} \tag{2}$$
Minimizing the Frobenius norm of the Jacobian then becomes equivalent to minimizing inter-frame prediction differences, as in Eq. (3):

$$\min \left\| J(x^{(t)}) \right\|_F \equiv \min \sum_{i,j} J_{i,j}^2(x^{(t)}) \equiv \min \left\| f(x^{(t)}) - f(x^{(t-1)}) \right\| \tag{3}$$
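As a sanity check on the reasoning behind Eqs. (2)-(3): for a linear model the Jacobian is exactly the weight matrix, so the inter-frame output difference equals the Jacobian applied to the inter-frame input difference. This toy snippet (ours, not from the paper) illustrates why penalizing prediction differences penalizes the Jacobian along the direction the input actually moved:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))        # Jacobian of the toy linear "network"
f = lambda x: W @ x                     # 8 outputs, like the 8 expression scores

x_prev = rng.standard_normal(16)                      # frame t-1
x_curr = x_prev + 1e-3 * rng.standard_normal(16)      # nearby frame t

# Exact for a linear map; approximately true for a CNN at high frame rates.
pred_diff = f(x_curr) - f(x_prev)       # inter-frame prediction difference
jac_diff = W @ (x_curr - x_prev)        # Jacobian times input difference
```

For a nonlinear network the equality holds only to first order, which is precisely the high-frame-rate assumption stated before Eq. (2).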
We empirically found that the initial distribution of prediction differences is heavy-tailed, with the tail caused by momentary jumps in predictions due to problems in input cropping and/or sharp changes in activations due to model imperfections. When we used the target in Eq. (3), these outliers made the training process unstable for a significant portion of the experiments. We therefore chose an equivalent formulation to minimize the target. We first pass all frames through the pipeline to obtain an initial set of unnormalized scores $y^{(t)} \in \mathbb{R}^8$. We then use the error signal in Eq. (4) as a self-supervision loss function to fine-tune the model. $LPF(\cdot)$ can be any low-pass filter; in our experiments we use a median filter, due to its robustness to outliers.
$$\mathcal{L}(y) = \sum_t \left\| y^{(t)} - LPF(y)^{(t)} \right\| \tag{4}$$
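A minimal numpy sketch of Eq. (4) with a median filter as the $LPF(\cdot)$; the function names are our own, and a squared error is used as in the mean-squared-error description above:

```python
import numpy as np

def median_filter_time(scores, k=5):
    """Median-filter each class channel along the time axis (edge-padded),
    playing the role of LPF(.) in Eq. (4)."""
    pad = k // 2
    padded = np.pad(scores, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([np.median(padded[t:t + k], axis=0)
                     for t in range(len(scores))])

def tempt_loss(scores, k=5):
    """Error between raw per-frame scores (shape T x classes) and their
    temporally smoothed version, used as a self-supervision signal."""
    target = median_filter_time(scores, k)   # treated as a fixed target
    return float(np.mean((scores - target) ** 2))
```

In the actual method the smoothed signal is a detached target and the loss is backpropagated into a small subset of network parameters; here the filter and loss are shown in numpy only for clarity.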
Using the entire video for adaptation may not be computationally feasible when the video is long. To alleviate this, we count the number of changes in model predictions using a sliding window and select the regions with the most changes as the training regions that compose the training batch. The updated version of the loss signal is given in Eq. (5), where $R$ is the set of selected regions and $r$ indicates the range of frames to be considered.
$$\mathcal{L}(y) = \sum_{r \in R} \sum_{t \in r} \left\| y^{(t)} - LPF(y)^{(t)} \right\| \tag{5}$$
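The sliding-window region selection can be sketched as follows, assuming arg-max predictions per frame; window size, tie-breaking, and names are our own illustrative choices:

```python
import numpy as np

def select_regions(preds, window=30, n_regions=4):
    """Rank sliding windows by how often the arg-max prediction flips
    inside them, and return the start frames of the top-scoring windows."""
    preds = np.asarray(preds)
    changes = (preds[1:] != preds[:-1]).astype(int)
    # number of flips inside each window of `window` consecutive frames
    scores = np.convolve(changes, np.ones(window, dtype=int), mode="valid")
    order = np.argsort(scores)[::-1]       # most unstable windows first
    return [int(s) for s in order[:n_regions]]
```

Frames from the returned regions are then batched and used to evaluate Eq. (5), so the adaptation budget is spent where the output flickers the most.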
Being differentiable, this loss allows the use of back-propagation to update model parameters. The choice of parameters has an important effect on the performance of the adapted model, since this selection defines the expressivity of the model and therefore the power of the adaptive interventions. Following the analysis in [12], we select this subset to be the weight and bias terms of the batch normalization layers while freezing their running statistics. This has been shown to yield enough expressivity while preventing overfitting. We then use the AdamW optimizer with the learning rate set to 0.0001 for the adaptation process and take 10 gradient steps, a number that proved empirically optimal in our hyperparameter searches.
Experiments
We test TempT on the AffWild2 dataset and compare the results against the baseline model as well as another test-time domain adaptation method, TENT [35]. We performed an extensive hyperparameter search over the adaptation parameters of TempT, such as the number of steps, learning rate, and optimizer, and report the performance of the best configuration in Tab. 2. The static models' performances are deterministic, whereas for the adaptation cases we report an average F1 score over 20 experiments to account for the stochasticity arising from random sampling of adaptation frames. We clearly see the positive effect of TempT on classification performance. One important observation is the ability of adaptation to help a less complex model reach the performance of a much larger one: in this experiment, SE-ResNext-101 has 8 times as many parameters as ResNet-18. Another observation is the performance disruption that TENT introduces. It consistently hurt the performance of the baseline model, and we argue that this is due to the highly correlated inputs during test time, as predicted in [6].
To further observe the changes that TempT induces, we also investigate the time series generated by the model before and after adaptation. In Fig. 4 we provide such an example, taken from a 100-frame portion of a validation set video. The top figure shows unnormalized model outputs before and after adaptation, whereas the bottom figure shows argmax predictions. To create a cleaner top plot, we omitted the classes that never become the dominant prediction during this interval. From this visualization we can see, in a qualitative manner, that adaptation reduces the flickering behavior at the output and provides more coherent predictions over time, while increasing the F1 score for this particular video from 0.39 to 0.47. To obtain a quantitative understanding of this effect, we computed the average number of decision changes before and after adaptation on the entire AffWild2 validation set. TempT reduces the normalized number of changes (i.e. the number of changes per frame) for a given video from 0.15 to 0.043.
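The flicker metric quoted above (number of prediction changes per frame) can be computed directly from the sequence of arg-max predictions; the function name is our own:

```python
import numpy as np

def normalized_change_count(preds):
    """Number of frame-to-frame changes in the arg-max prediction,
    divided by the number of frames in the video."""
    preds = np.asarray(preds)
    if len(preds) == 0:
        return 0.0
    return float(np.sum(preds[1:] != preds[:-1]) / len(preds))
```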
Conclusion and Future Work
In our work, we explored a novel model-agnostic algorithm that can have real-life applications for similar tasks, and showed that this adaptive method can enhance model performance without any additional means of supervision. On the other hand, the performance variance due to stochasticity in the frame sampling process is a problem that needs to be addressed to obtain a more deterministic understanding of the limits and behavior of the algorithm. With such increased stability and the performance boost it brings, TempT can potentially enable more reliable use of models on edge devices while protecting user privacy.
Figure 1. TempT
Figure 2. Label distribution of the AffWild2 train set
Figure 3. Label distribution of the AffWild2 validation set
$$\mathcal{L}(z, y) = -\log \frac{e^{z_y - \Delta_y}}{e^{z_y - \Delta_y} + \sum_{j \neq y} e^{z_j}}, \quad \Delta_j = \frac{C}{n_j^{1/4}}, \quad j \in \{1, \dots, k\} \tag{1}$$
Figure 4. (top) Model outputs before (dashed) and after (solid) adaptation. (bottom) Model predictions before and after adaptation.
Table 2. Average F1 score performances on the validation set

Model           | Supervised | TENT [35] | TempT
ResNet-18       | 0.307      | 0.277     | 0.323
SE-ResNext-101  | 0.325      | 0.269     | 0.345
Acknowledgements
We would like to thank all members of the Wall Lab for providing valuable feedback. The work was supported in part by funds to DPW from the National Institutes of Health (1R01LM013364-01, 1R01LM013083), the National Science Foundation (Award 2014232), the Lucile Packard Foundation (Auxiliaries Endowment), the ISDB Transform Fund, and program grants from Stanford's Human-Centered Artificial Intelligence Program and from the Wu Tsai Neurosciences Institute's Neuroscience:Translate Program.
[1] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, and Cordelia Schmid. ViViT: A video vision transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6836-6846, 2021.
[2] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. Advances in Neural Information Processing Systems, 32, 2019.
[3] Paul Ekman et al. Basic emotions. Handbook of Cognition and Emotion, 98(45-60):16, 1999.
[4] Yingruo Fan, Victor O.K. Li, and Jacqueline C.K. Lam. Facial expression recognition with deeply-supervised attention network. IEEE Transactions on Affective Computing, 13(2):1057-1071, 2022.
[5] Jonathan Frankle, David J Schwab, and Ari S Morcos. Training batchnorm and only batchnorm: On the expressive power of random features in CNNs. arXiv preprint arXiv:2003.00152, 2020.
[6] Taesik Gong, Jongheon Jeong, Taewon Kim, Yewon Kim, Jinwoo Shin, and Sung-Ju Lee. NOTE: Robust continual test-time adaptation against temporal correlation. Advances in Neural Information Processing Systems, 35:27253-27266, 2022.
[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[8] Judy Hoffman, Daniel A Roberts, and Sho Yaida. Robust learning with Jacobian regularization. arXiv preprint arXiv:1908.02729, 2019.
[9] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7132-7141, 2018.
[10] Shuiwang Ji, Wei Xu, Ming Yang, and Kai Yu. 3D convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):221-231, 2012.
[11] Haik Kalantarian, Khaled Jedoui, Peter Washington, Qandeel Tariq, Kaiti Dunlap, Jessey Schwartz, and Dennis P Wall. Labeling images with facial emotion and the potential for pediatric healthcare. Artificial Intelligence in Medicine, 98:77-86, 2019.
[12] Fahdi Kanavati and Masayuki Tsuneki. Partial transfusion: On the expressive influence of trainable batch norm parameters for transfer learning. In Medical Imaging with Deep Learning, pages 338-353. PMLR, 2021.
[13] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[14] Dimitrios Kollias. ABAW: Learning from synthetic data & multi-task learning challenges. arXiv preprint arXiv:2207.01138, 2022.
[15] D Kollias, A Schulc, E Hajiyev, and S Zafeiriou. Analysing affective behavior in the first ABAW 2020 competition. In 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), pages 794-800, 2021.
[16] Dimitrios Kollias, Viktoriia Sharmanska, and Stefanos Zafeiriou. Face behavior a la carte: Expressions, affect and action units in a single network. arXiv preprint arXiv:1910.11111, 2019.
[17] Dimitrios Kollias, Viktoriia Sharmanska, and Stefanos Zafeiriou. Distribution matching for heterogeneous multi-task learning: a large-scale face study. arXiv preprint arXiv:2105.03790, 2021.
[18] Dimitrios Kollias, Panagiotis Tzirakis, Alice Baird, Alan Cowen, and Stefanos Zafeiriou. ABAW: Valence-arousal estimation, expression recognition, action unit detection & emotional reaction intensity estimation challenges. arXiv preprint arXiv:2303.01498, 2023.
[19] Dimitrios Kollias, Panagiotis Tzirakis, Mihalis A Nicolaou, Athanasios Papaioannou, Guoying Zhao, Björn Schuller, Irene Kotsia, and Stefanos Zafeiriou. Deep affect prediction in-the-wild: Aff-Wild database and challenge, deep architectures, and beyond. International Journal of Computer Vision, pages 1-23, 2019.
[20] Dimitrios Kollias and Stefanos Zafeiriou. Expression, affect, action unit recognition: Aff-Wild2, multi-task learning and ArcFace. arXiv preprint arXiv:1910.04855, 2019.
[21] Dimitrios Kollias and Stefanos Zafeiriou. Affect analysis in-the-wild: Valence-arousal, expressions, action units and a unified framework. arXiv preprint arXiv:2103.15792, 2021.
[22] Dimitrios Kollias and Stefanos Zafeiriou. Analysing affective behavior in the second ABAW2 competition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3652-3660, 2021.
[23] Shan Li and Weihong Deng. Reliable crowdsourcing and deep locality-preserving learning for unconstrained facial expression recognition. IEEE Transactions on Image Processing, 28(1):356-370, 2019.
[24] Shan Li and Weihong Deng. Reliable crowdsourcing and deep locality-preserving learning for unconstrained facial expression recognition. IEEE Transactions on Image Processing, 28(1):356-370, 2019.
[25] Shan Li, Weihong Deng, and JunPing Du. Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2584-2593. IEEE, 2017.
[26] Yanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, and Xiaodi Hou. Revisiting batch normalization for practical domain adaptation. arXiv preprint arXiv:1603.04779, 2016.
Occlusion aware facial expression recognition using cnn with attention mechanism. Yong Li, Jiabei Zeng, Shiguang Shan, Xilin Chen, IEEE Transactions on Image Processing. 285Yong Li, Jiabei Zeng, Shiguang Shan, and Xilin Chen. Oc- clusion aware facial expression recognition using cnn with attention mechanism. IEEE Transactions on Image Process- ing, 28(5):2439-2450, 2019. 2
Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. Jian Liang, Dapeng Hu, Jiashi Feng, PMLR, 2020. 2International Conference on Machine Learning. Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? source hypothesis transfer for un- supervised domain adaptation. In International Conference on Machine Learning, pages 6028-6039. PMLR, 2020. 2
Fixing weight decay regularization in adam. Ilya Loshchilov, Frank Hutter, Ilya Loshchilov and Frank Hutter. Fixing weight decay reg- ularization in adam. 2017. 3
Affectnet: A database for facial expression, valence, and arousal computing in the wild. Ali Mollahosseini, Behzad Hasani, Mohammad H Mahoor, IEEE Transactions on Affective Computing. 101Ali Mollahosseini, Behzad Hasani, and Mohammad H Ma- hoor. Affectnet: A database for facial expression, valence, and arousal computing in the wild. IEEE Transactions on Affective Computing, 10(1):18-31, 2017. 2
Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, 32Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An im- perative style, high-performance deep learning library. Ad- vances in neural information processing systems, 32, 2019. 3
Weight normalization: A simple reparameterization to accelerate training of deep neural networks. Tim Salimans, P Durk, Kingma, Advances in neural information processing systems. 29Tim Salimans and Durk P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. Advances in neural information processing systems, 29, 2016. 3
Improving robustness against common corruptions by covariate shift adaptation. Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, Matthias Bethge, Advances in Neural Information Processing Systems. 33Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bring- mann, Wieland Brendel, and Matthias Bethge. Improving robustness against common corruptions by covariate shift adaptation. Advances in Neural Information Processing Sys- tems, 33:11539-11551, 2020. 2
Test-time training with selfsupervision for generalization under distribution shifts. Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei Efros, Moritz Hardt, PMLR, 2020. 2International conference on machine learning. Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei Efros, and Moritz Hardt. Test-time training with self- supervision for generalization under distribution shifts. In International conference on machine learning, pages 9229- 9248. PMLR, 2020. 2
Tent: Fully test-time adaptation by entropy minimization. Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, Trevor Darrell, arXiv:2006.1072625arXiv preprintDequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Ol- shausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization. arXiv preprint arXiv:2006.10726, 2020. 2, 5
Continual test-time domain adaptation. Qin Wang, Olga Fink, Luc Van Gool, Dengxin Dai, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern Recognition2022Qin Wang, Olga Fink, Luc Van Gool, and Dengxin Dai. Continual test-time domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7201-7211, 2022. 2
Modeling spatial-temporal clues in a hybrid deep learning framework for video classification. Zuxuan Wu, Xi Wang, Yu-Gang Jiang, Hao Ye, Xiangyang Xue, Proceedings of the 23rd ACM international conference on Multimedia. the 23rd ACM international conference on MultimediaZuxuan Wu, Xi Wang, Yu-Gang Jiang, Hao Ye, and Xi- angyang Xue. Modeling spatial-temporal clues in a hybrid deep learning framework for video classification. In Pro- ceedings of the 23rd ACM international conference on Mul- timedia, pages 461-470, 2015. 1
Aggregated residual transformations for deep neural networks. Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionSaining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1492-1500, 2017. 2
Transfer: Learning relation-aware facial expression representations with transformers. Fanglei Xue, Qiangchang Wang, Guodong Guo, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionFanglei Xue, Qiangchang Wang, and Guodong Guo. Trans- fer: Learning relation-aware facial expression representa- tions with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3601- 3610, 2021. 2
Aff-wild: Valence and arousal 'in-the-wild'challenge. Stefanos Zafeiriou, Dimitrios Kollias, A Mihalis, Athanasios Nicolaou, Guoying Papaioannou, Irene Zhao, Kotsia, Computer Vision and Pattern Recognition Workshops. IEEE2017 IEEE Conference onStefanos Zafeiriou, Dimitrios Kollias, Mihalis A Nicolaou, Athanasios Papaioannou, Guoying Zhao, and Irene Kot- sia. Aff-wild: Valence and arousal 'in-the-wild'challenge. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on, pages 1980-1987. IEEE, 2017. 1
Memo: Test time robustness via adaptation and augmentation. Marvin Zhang, Sergey Levine, Chelsea Finn, Advances in Neural Information Processing Systems. 35Marvin Zhang, Sergey Levine, and Chelsea Finn. Memo: Test time robustness via adaptation and augmentation. Advances in Neural Information Processing Systems, 35:38629-38642, 2022. 2
Adaptive risk minimization: Learning to adapt to domain shift. Marvin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey Levine, Chelsea Finn, Advances in Neural Information Processing Systems. 342Marvin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey Levine, and Chelsea Finn. Adaptive risk min- imization: Learning to adapt to domain shift. Advances in Neural Information Processing Systems, 34:23664-23678, 2021. 2
| zyda_arxiv-0387000 |
DIFFUSION-LIMITED AGGREGATION AS BRANCHED GROWTH
Thomas C Halsey
Department of Physics
The James Franck Institute
The University of Chicago
5640 South Ellis Avenue Chicago60637Illinois
arXiv:cond-mat/9401077v1, 31 Jan 1994
I present a first-principles theory of diffusion-limited aggregation in two dimensions. A renormalized mean-field approximation gives the form of the unstable manifold for branch competition, following the method of Halsey and Leibig [Phys. Rev. A 46, 7793 (1992)]. This leads to a result for the cluster dimensionality, D ≈ 1.66, which is close to numerically obtained values. In addition, the multifractal exponent τ (3) = D in this theory, in agreement with a proposed "electrostatic" scaling law.
Diffusion-limited aggregation (DLA) is a model of pattern formation in which clusters grow by the accretion of successive random walkers.[1] Each random walker arrives from infinity, and sticks to the growing cluster at whichever surface point it first contacts. Only after the accretion of a walker does the next walker commence its approach to the cluster. The clusters thereby obtained are fractal in all dimensionalities d > 1, and are qualitatively and/or quantitatively similar to patterns observed in such diverse phenomena as colloidal aggregation, electrodeposition, viscous fingering, and dielectric breakdown.[2]

At the heart of the problem of diffusion-limited aggregation is the following question: what is the relationship between the scale-invariance of the diffusive growth process and the hierarchical structure of the clusters generated by this process?[3] A preliminary, and incomplete, answer to this question was provided by this author in collaboration with M. Leibig.[4] In this work, it was hypothesized that the quantitative process by which one branch screens, i.e., takes growth probability from, a neighboring branch has a specific form, independent of the length scale on which this process takes place. This assumption allows the development of a qualitatively correct theory, which yields multifractal scaling of the growth probability, as well as agreement with a phenomenological scaling law, the "Turkevich-Scher" law, relating the scaling of the maximum growth probability over all sites on the cluster to the dimension of the cluster as a whole.[5]

In this letter, I shall present a more complete and a priori theory of diffusion-limited aggregation in two dimensions, based upon a specific mean-field calculation of the dynamics of branch competition. Because the mean-field approximation is implemented on all length scales, it is perhaps better to regard this theory as an ansatz solution in the case where certain types of fluctuations on all length scales are neglected, while others are included.
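The growth rule described above can be sketched as a toy simulation. Everything below (the square lattice, the launch and kill radii, the particle count) is an illustrative choice for a minimal walker-by-walker DLA, not the renormalized theory developed in this letter.

```python
import math
import random

def grow_dla(n_particles, seed=0):
    """Grow a toy on-lattice DLA cluster: each walker is released from a
    circle just outside the cluster and sticks at the first lattice site
    adjacent to the existing aggregate."""
    rng = random.Random(seed)
    cluster = {(0, 0)}                       # seed particle at the origin
    r_max = 0.0
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while len(cluster) < n_particles:
        r_launch = r_max + 5.0               # release just beyond the cluster
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x = int(round(r_launch * math.cos(theta)))
        y = int(round(r_launch * math.sin(theta)))
        while True:
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
            if x * x + y * y > (r_launch + 20.0) ** 2:
                # walker wandered too far: relaunch it from the circle
                theta = rng.uniform(0.0, 2.0 * math.pi)
                x = int(round(r_launch * math.cos(theta)))
                y = int(round(r_launch * math.sin(theta)))
                continue
            if any((x + sx, y + sy) in cluster for sx, sy in steps):
                cluster.add((x, y))          # first contact: stick here
                r_max = max(r_max, math.hypot(x, y))
                break
    return cluster

cluster = grow_dla(40)
```

Because a walker sticks the moment it becomes adjacent to the aggregate, every accreted site has a unique parent, which is exactly the branched (loop-free) structure exploited below.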
This specific model allows verification of all qualitative aspects of branch competition that were advanced as (reasonable) hypotheses in Ref. 4. The result obtained for the dimensionality of the cluster, D = 1.66, is within 3% of the oft-quoted value D = 1.71 obtained from the scaling of the cluster radius-of-gyration in numerical studies. An additional scaling law (the "electrostatic scaling law"), relating the multifractal exponent τ(3) of the growth measure to the dimensionality D by D = τ(3), is seen to be exact within this theory.[6]

In the growth process, each particle attaches itself to a unique "parent" particle in the pre-existing cluster. Furthermore, the cluster is observed to be a branched structure, with no loops and with each particle having asymptotically zero, one or two "children", i.e. particles to whom it stands as a parent.[7] Very rarely particles have more than two children; primarily for reasons of convenience I neglect this possibility.
Consider a particle with two children. Each of the two children separately, with all particles descended from each, I term a "branch". Thus these two-child particles are parents of two branches, which occupy neighboring regions of space.
The total number of particles in one branch I term n_1, and the total in the other n_2. The total number of descendants of the parent particle is thus n_b ≡ n_1 + n_2. Now consider the next particle to accrete to the cluster. I say that this particle has a total probability p_1 to stick anywhere on the first branch, and a total probability p_2 to stick anywhere on the second branch, yielding a total probability p_b ≡ p_1 + p_2.

Let us now consider the normalized quantities x = p_1/p_b and y = n_1/n_b. Clearly dn_1/dn = p_1, where n is the total number of particles in the cluster, and we are neglecting fluctuations of O(√n_b). Thus y obeys the following equation of motion:

    dy/d(ln n_b) = x − y.    (1)
The right-hand side of this equation is a function only of x and y. Now x will obey an equation of the form
    dx/d(ln n_b) = G(x, y; n; {φ_i}),    (2)
where {φ_i} is some parameterization of all of the variables describing the structure of the cluster. In ref. 4 we assumed that by averaging the right-hand side of this equation over these parameters {φ_i}, one obtains dx/d(ln n_b) = g(x, y), where the right-hand side is now only a function of x and y. Given this function g(x, y), one has a closed system of equations describing the evolution of x and y as functions of ln n_b. By symmetry, g(x, y) = −g(1 − x, 1 − y), so (x, y) = (1/2, 1/2) must be a fixed point of this process of competition between the two branches. In ref. 4, we explored the consequences of assuming that this fixed point is hyperbolic, with the unstable manifold emerging from the fixed point terminating in two stable fixed points at (x, y) = (0, 0) or (x, y) = (1, 1), these latter representing the situation in which one branch has been completely screened by the other. This assumption will be explicitly verified in the calculation below. If the central fixed point at (x, y) = (1/2, 1/2) is hyperbolic, then branch pairs which commence their existence (with n_b ∼ 1) near the unstable fixed point will be quickly drawn onto the unstable manifold. Linearizing the system of equations for d(x, y)/d(ln n_b) about the central fixed point, the hyperbolic assumption implies that there will be a stable and an unstable direction; the eigenvalue corresponding to the latter direction we define to be ν.
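The advertised phase portrait can be illustrated by integrating the closed system with an assumed model for g. The form g(x, y) = x(1 − x)(2x − 1 + (x − y)) below is purely illustrative: it merely respects the antisymmetry g(x, y) = −g(1 − x, 1 − y) and has a saddle at (1/2, 1/2); it is not the g computed later in this letter.

```python
def g_model(x, y):
    """Illustrative stand-in for g(x, y): antisymmetric under
    (x, y) -> (1 - x, 1 - y), with a saddle at (1/2, 1/2)."""
    return x * (1.0 - x) * (2.0 * x - 1.0 + (x - y))

def evolve(x, y, h=0.01, steps=4000):
    """Forward-Euler integration of dx/dlnN = g(x, y), dy/dlnN = x - y."""
    for _ in range(steps):
        x, y = x + h * g_model(x, y), y + h * (x - y)
    return x, y

strong = evolve(0.55, 0.50)   # slightly stronger first branch wins
weak = evolve(0.45, 0.50)     # slightly weaker first branch is screened
```

Trajectories started just off the central fixed point are drawn along the unstable direction toward (1, 1) or (0, 0), mimicking complete screening of one branch by the other.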
When a pair of branches is first created by a tip-splitting event, its initial growth up to the stage at which n_b ≫ 1 is determined by complicated microscopic dynamics, which do not recognize the existence of the unstable fixed point. Thus we expect the probability that a newly created branch pair will be a distance ǫ^ν from the unstable fixed point to be ρ(ǫ ≪ 1) dǫ ∝ ǫ^{ν−1} dǫ; we are assuming a constant probability density of branch creation near the unstable fixed point. This assumption has been specifically verified by numerical study in ref. 4. The choice of ǫ^ν for this initial distance ensures that position along the unstable manifold in the x−y plane can be parameterized by the variable ǫn_b.
It is possible to relate the eigenvalue ν to the cluster dimensionality D by the following argument.[4] Consider the strongest branch in the cluster, that obtained by always following the stronger child (with the larger values of x, y) at each branching. The total number of side-branches (or branch points) from such a branch is ∼ r, where r is the cluster radius. In order that the cluster have a dimension D > 1, a number ∼ 1 of these side branches must have a total number of particles ∼ n, the total number in the cluster. A side branch obeying this criterion must have ǫn ∼ 1, so that at that branching, both descendant branches are roughly equal in size. The probability of this happening at any particular branching is ∫_0^{1/n} dǫ ρ(ǫ) ∝ n^{−ν}, and there are ∼ r different side-branchings at which this might occur. Thus r n^{−ν} ∼ 1, or

    D = 1/ν.
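The counting step can be checked by direct sampling. The normalization below is a toy choice: ρ is taken as the power-law density ν ǫ^{ν−1} on (0, 1], sampled by the inverse transform ǫ = u^{1/ν}; the probability of an "equal-size" branching, ǫ < 1/n, then scales as n^{−ν}, giving r ∼ n^ν and hence D = 1/ν. The numerical value of ν is the one quoted later in this letter.

```python
import random

nu = 0.6020
rng = random.Random(1)
# inverse-transform sampling of rho(eps) = nu * eps^(nu - 1) on (0, 1]
samples = [rng.random() ** (1.0 / nu) for _ in range(200_000)]

n = 50.0
# empirical probability that a side branch survives on equal footing
frac = sum(1 for e in samples if e < 1.0 / n) / len(samples)
predicted = n ** (-nu)     # exact P(eps < 1/n) for this density
D = 1.0 / nu               # dimensionality from the scaling argument
```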
In order to determine g(x, y), we turn to an explicit description of the growth process.[6,8] Suppose that we parameterize the accessible surface of the cluster by arc-length s. If a particle attaches at the surface point s′, it thereby reduces the growth probability at all points s for which |s − s′| > a, where a is the particle size. This is because a certain number of the random walks that would have reached s previously are now obstructed by the new particle at s′. If the probability that a particle lands at s′ is p(s′), and the probability that a random walker goes from s′ to s without contacting the surface is H(s, s′), this implies that

    dp(s)/dn = ∫ ds′ [H(s, s′) − h(s) δ(s − s′)] p²(s′),    (3)

where we have modelled effects on the scale |s − s′| < a by the δ-function, the coefficient of which, h(s), is set by the conservation of the total growth probability, ∫ ds p(s) = 1. Note that in Eq. (3), two factors of p(s′) appear: one corresponds to the original probability that a particle lands at s′, the other to the potential trajectories arriving at s that are blocked by such a particle.
For a ≪ |s − s′| ≪ an, conformal transformation shows that the function H(s, s′) is given in two dimensions by the simple form[9]

    H(s, s′) = p(s) p(s′) / [ ∫_s^{s′} ds″ p(s″) ]²,    (4)
where the integral in the denominator is the total growth probability between the points s and s′. It is convenient to parameterize the interface by this quantity, the "growth probability" distance between points z(s), defined by z(s′) − z(s) = ∫_s^{s′} ds″ p(s″). Then our fundamental equation becomes

    dp(z)/dn = p(z) ∫ dz′ [ 1/(z − z′)² − h̃(z) δ(z − z′) ] p²(z′),    (5)

where a serves as an ultra-violet cutoff to prevent divergence of the integral, and h̃(z) is related to h(s) and to the function z(s); its precise form is of no interest to us.
I wish to use this equation to determine the function dx/d(ln n_b) = g(x, y). Repeated application of the chain rule yields

    dx/d(ln n_b) = (n_b/p_b²) [ (1 − x) dp_1/dn − x dp_2/dn ].    (6)
Consider a branch with probability p′ and a number of particles n′. We suppose that this branch extends from z = 0 to z = p′. Eqs. (5) and (6) imply that if we can write p²(z) on this branch (and by extension, all other branches) as

    p²(z) = [(p′)²/n′] f(z/p′),    (7)

where f(z) is a universal function that depends neither upon p′ nor upon n′, then we will be able to write dx/d(ln n_b) = g(x, y), with the right-hand side a function of x and y alone. Equation (7) is motivated by the fact that p²(z) must be proportional to (p′)²; the dependence on n′ is specifically chosen to lead to an n′-independent g(x, y). Only if we can find a method of computing an n′-independent f(z) will this ansatz be justified.
Thus the crux of the problem is this "branch envelope" function f (z), which represents, with the appropriate normalization, the distribution of growth probability in different regions of a branch. Now in our picture, each branch can be divided into two distinct sub-branches, which compete according to the dynamics established by g(x, y). Our central mean-field assumption is that we can compute f (z) by averaging the envelope functions f (z) of these sub-branches over the stochastic parameter ǫ appropriate to the competition of these two sub-branches. In this way we obtain the following equation:
    f(z) = ∫_{−∞}^{∞} dǫ ρ(ǫ) { [x²(ǫn_b)/y(ǫn_b)] f(z/x(ǫn_b)) + [(1 − x(ǫn_b))²/(1 − y(ǫn_b))] f((1 − z)/(1 − x(ǫn_b))) },    (8)
where x(ǫn_b) and y(ǫn_b) give the values of x and y along the unstable manifold as functions of n_b and the stochastic parameter ǫ. For convenience, we are defining ρ(ǫ) for negative values of ǫ as ρ(−ǫ) = ρ(ǫ), with x(−η) = 1 − x(η), y(−η) = 1 − y(η). This leads to the relatively compact expression of Eq. (8). For large n_b, this equation has a solution independent of n_b, which is determined by
    ∫_{−∞}^{∞} dη |η|^{ν−1} { [x²(η)/y(η)] f(z/x(η)) + [(1 − x(η))²/(1 − y(η))] f((1 − z)/(1 − x(η))) } − f(z) = 0.    (9)
Since the integrand goes to zero as η → ∞, we are justified in taking the small ǫ form for ρ(ǫ).
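A small numerical aside: the moment reductions of Eq. (9) used later rest on the substitution ∫_0^1 f(z/x) dz = x ∫_0^1 f(u) du for f supported on [0, 1], so that each appearance of f(z/x(η)) contributes an extra factor of x(η) upon integration over z. The profile f below is an arbitrary toy choice, not the computed envelope.

```python
def f_toy(z):
    """Toy envelope profile supported on [0, 1]."""
    return z * (1.0 - z) if 0.0 <= z <= 1.0 else 0.0

def midpoint(func, a, b, n=20000):
    """Simple midpoint-rule quadrature."""
    h = (b - a) / n
    return sum(func(a + (k + 0.5) * h) for k in range(n)) * h

x = 0.37
lhs = midpoint(lambda z: f_toy(z / x), 0.0, 1.0)   # int_0^1 f(z/x) dz
rhs = x * midpoint(f_toy, 0.0, 1.0)                # x * int_0^1 f(u) du = x/6
```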
Of course, in order to perform this integral, we must have the form of the unstable manifold, and thus we must already know g(x, y). We can determine g(x, y) from f(z) by simply integrating Eq. (5) over the appropriate intervals. We do not integrate over regions exterior to the two competing branches, but only investigate the influence of the two branches on one another. Skipping some tedious algebra, we may express the result as follows. Defining a function ψ(u) by
    ψ(u) = ∫_0^1 dz [ 1/z − 1/(z + u) ] f(z),    (10)
we can write

    g(x, y) = x(1 − x) ( 2[ (x/y) ψ(∞) − ((1 − x)²/((1 − y)x)) ψ(x/(1 − x)) ] − 2[ ((1 − x)/(1 − y)) ψ(∞) − (x²/(y(1 − x))) ψ((1 − x)/x) ] ).    (11)
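A quick consistency check of Eqs. (10) and (11) as written: for any envelope f (here the toy choice f(z) = z(1 − z), picked so that f(z)/z is integrable at z = 0), the resulting g must satisfy the antisymmetry g(x, y) = −g(1 − x, 1 − y) required for (1/2, 1/2) to be a fixed point. The quadrature below is a sketch, not the numerical solution reported in this letter.

```python
def f_toy(z):
    return z * (1.0 - z)

def midpoint(func, a, b, n=4000):
    h = (b - a) / n
    return sum(func(a + (k + 0.5) * h) for k in range(n)) * h

def psi(u=None):
    """Eq. (10); u=None gives psi(infinity), where 1/(z+u) drops out."""
    if u is None:
        return midpoint(lambda z: f_toy(z) / z, 0.0, 1.0)
    return midpoint(lambda z: (1.0 / z - 1.0 / (z + u)) * f_toy(z), 0.0, 1.0)

def g(x, y):
    """Eq. (11) evaluated with the toy envelope."""
    a = (x / y) * psi() - ((1 - x) ** 2 / ((1 - y) * x)) * psi(x / (1 - x))
    b = ((1 - x) / (1 - y)) * psi() - (x ** 2 / (y * (1 - x))) * psi((1 - x) / x)
    return x * (1 - x) * (2.0 * a - 2.0 * b)

residual = g(0.3, 0.4) + g(0.7, 0.6)   # vanishes by antisymmetry
```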
The reader should note that we have a circular procedure, because g(x, y) is determined as a function of f(z) by Eqs. (10) and (11), while f(z) is determined as a function of g(x, y), and in particular by the unstable manifold in the x−y plane as determined by g(x, y), by Eq. (9). Thus in practice we are looking for a solution of Eq. (9) where the functions x(η) and y(η) are implicitly determined by f(z). I have numerically obtained the unique solution to Eq. (9) under these conditions, which is displayed in the inset to Figure 1.[10] This validates our assumption regarding the scaling with n′ in Eq. (7). The function g(x, y) determined from this solution has all of the necessary qualitative features; in particular, the fixed point at (x, y) = (1/2, 1/2) is unstable and hyperbolic, and the unstable manifold leads from this point to stable fixed points at (x, y) = (0, 0) and (1, 1), as illustrated in Figure 1. Figure 1 also shows numerical results for branch competition. The value of the unstable eigenvalue ν is ν ≈ 0.6020, implying that D = 1/ν ≈ 1.661, which is within 3% of the standard numerical result D ≈ 1.71.

In addition, this theory automatically agrees with the electrostatic scaling law, which states that

    ∫ ds p(s)³ ∝ n^{−1},    (12)

where the integral is over the entire cluster surface. This is equivalent to the more usual statement that τ(3) = D. In ref. 4, we demonstrated that the multifractal exponents σ(q), defined by ∫ ds p(s)^q ∝ n^{−σ(q)}, can be obtained from the integral condition[11]

    ∫_0^∞ dη η^{ν−1} [ x(η)^q y(η)^{−σ(q)} + (1 − x(η))^q (1 − y(η))^{−σ(q)} − 1 ] = 0.    (13)

By integrating Eq. (9) from z = 0 to z = 1, one obtains precisely this criterion, with q = 3 and σ(q) = 1, in agreement with the electrostatic scaling law. Though the electrostatic scaling law thus appears in a natural way in this theory, one should not say that it is predicted by this theory unless the solution obtained to Eq. (9) is stable. It may be that it is necessary to impose the electrostatic scaling law as a constraint to ensure this stability.[10]

From Figure 1, it is clear that although in some sense the unstable manifold that we have calculated is an acceptable average trajectory, the numerically obtained trajectories do exhibit some dispersion about this average. This has significant consequences. The Makarov scaling law predicts that dσ(q)/dq|_{q=1} = 1/D_0,[12] where D_0 is the surface fractal dimension (which according to some studies is significantly less than the radius-of-gyration dimension D).[13] My result, from Eq. (13), is dσ(q)/dq|_{q=1} ≈ 0.71, which is significantly different from the Makarov result. In practice, this quantity is quite sensitive to the way in which the unstable manifold approaches the stable fixed points at (x, y) = (0, 0) and (1, 1); since the numerical trajectories are quite dispersed in this region, I do not expect a good result for the Makarov scaling from a one-trajectory theory. However, the theory outlined in this letter can be easily generalized to account for the possibility of trajectory dispersion, which may lead to better agreement with the Makarov result.

Figure Caption

1. Trajectories of branch competition in the x−y plane. The light solid trajectories are numerical results from ref. 4 for specific branch pairs in growing DLA clusters. The heavy solid line represents the unstable manifold predicted by this letter, which is quite close to the "average" numerical trajectory. The inset shows the computed branch envelope function f(z).
Acknowledgements

I would like to acknowledge a stimulating discussion with L.P. Kadanoff, as well as conversations with R. Blumenfeld on a closely related topic. I am very grateful to A. Libchaber for encouragement at an early stage in this project. This work was supported by the National Science Foundation through a Presidential Young Investigator award, Grant DMR-9057156.
[1] T.A. Witten, Jr. and L.M. Sander, Phys. Rev. Lett. 47, 1400 (1981); P. Meakin, Phys. Rev. A 27, 1495 (1983).
[2] R. Brady and R.C. Ball, Nature (London) 309, 225 (1984); L. Niemeyer, L. Pietronero, and H.J. Wiesmann, Phys. Rev. Lett. 52, 1033 (1984); J. Nittmann, G. Daccord, and H.E. Stanley, Nature (London) 314, 141 (1985).
[3] This question is also the focus of real-space studies such as L. Pietronero, A. Erzan, and C. Evertsz, Phys. Rev. Lett. 61, 861 (1988); Physica A 151, 207 (1988), and X.R. Wang, Y. Shapir, and M. Rubenstein, Phys. Rev. A 39, 5974 (1989); J. Phys. A 22, L507 (1989).
[4] T.C. Halsey and M. Leibig, Phys. Rev. A 46, 7793 (1992); T.C. Halsey and K. Honda, unpublished.
[5] L. Turkevich and H. Scher, Phys. Rev. Lett. 55, 1026 (1985); Phys. Rev. A 33, 786 (1986); see also R. Ball, R. Brady, G. Rossi, and B.R. Thompson, Phys. Rev. Lett. 55, 1406 (1985), and T.C. Halsey, P. Meakin, and I. Procaccia, Phys. Rev. Lett. 56, 854 (1986).
[6] T.C. Halsey, Phys. Rev. Lett. 59, 2067 (1987); Phys. Rev. A 38, 4749 (1988).
[7] The fact that there are no loops follows from the fact that every particle has a unique parent, which is true in off-lattice versions of DLA.
[8] B. Shraiman and D. Bensimon, Phys. Rev. A 30, 2840 (1984); R.C. Ball and M. Blunt, Phys. Rev. A 39, 3591 (1989).
[9] T.C. Halsey, Phys. Rev. A 35, 3512 (1987).
[10] The stability of this solution is a more difficult question. There appears numerically to be a single instability of the solution, which can be eliminated if one applies the electrostatic scaling law as a constraint.
[11] For a general discussion of multifractality, see T. Vicsek, Fractal Growth Phenomena, 2nd ed. (World Scientific, Singapore, 1992).
[12] N.G. Makarov, Proc. London Math. Soc. 51, 369 (1985).
[13] In particular, D_0 ≈ 1.60 is claimed by F. Argoul, A. Arneodo, G. Grasseau, and H. Swinney, Phys. Rev. Lett. 61, 2558 (1988).
Topological Characteristics of Harmonic Quasiconformal Unit Disk Automorphisms in the Uniform Topology
Florian Biersack [email protected]
University of Würzburg
Chair for Complex Analysis
Emil-Fischer-Strasse 40, 97074 Würzburg, Bavaria, Germany
We study the class HQ(D), the set of harmonic quasiconformal automorphisms of the unit disk D in the complex plane, endowed with the topology of uniform convergence. Several important topological properties of this space of mappings are investigated, such as separability, compactness, path-connectedness and completeness.
Introduction
The idea for investigating the harmonic quasiconformal automorphisms of D := {z ∈ C : |z| < 1} was on the one hand inspired by a topic that has drawn much attention in recent years: the harmonic quasiconformal mappings. Initiated by Martio in 1968 (see [10, p. 238] and [14, p. 366]), this particular class of homeomorphisms attracted much interest in the recent past, see [3, Introduction], [10], [11], [14], [15, Section 10.3] and the references therein, to name only a few. In particular, Kalaj and Pavlović worked intensively in this area and achieved numerous results, among others several characterization statements for harmonic quasiconformal automorphisms of the unit disk (see [10, Theorem A, p. 239] and Proposition 3.8 below). On the other hand, in [2], the authors studied the quasiconformal automorphism groups of simply connected domains in the complex plane. For this class of domains, the unit disk in C can be regarded as the reference element, not least due to the classical Riemann Mapping Theorem (RMT) and its quasiconformal counterpart, the Measurable RMT (see e.g. [13, Mapping Theorem, p. 194]). In view of these circumstances and by the Theorem of Radó-Kneser-Choquet (see Proposition 2.2), a similarly striking result on harmonic mappings, the following discussion will focus on the special case of harmonic quasiconformal automorphisms of the unit disk in C. The topology used in this paper is the uniform topology, induced by the supremum metric d(f, g) := sup_{z∈D} |f(z) − g(z)|.
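The supremum metric can be sketched numerically. The sketch below estimates it on a finite sample of points of D (the true metric takes the supremum over all of D); the helper names and the sampling grid are illustrative choices. Möbius transformations φ_a(z) = (z − a)/(1 − ā z) with |a| < 1 are conformal automorphisms of D, hence natural test elements.

```python
import cmath

def moebius(a, z):
    """Moebius automorphism of the unit disk: (z - a) / (1 - conj(a) z)."""
    return (z - a) / (1.0 - a.conjugate() * z)

def d_sup(f, g, n_angles=200, n_radii=10):
    """Estimate sup_{z in D} |f(z) - g(z)| on a polar grid (a sketch)."""
    pts = [(j / n_radii) * 0.95 * cmath.exp(2j * cmath.pi * k / n_angles)
           for k in range(n_angles) for j in range(1, n_radii + 1)]
    return max(abs(f(z) - g(z)) for z in pts)

a = 0.3 + 0.2j
dist = d_sup(lambda z: moebius(a, z), lambda z: z)   # distance to identity
same = d_sup(lambda z: moebius(a, z), lambda z: moebius(a, z))
```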
Definition and basic properties
In [2], the authors studied the following space of mappings:

    HQ(D) := { f ∈ Q(D) : f is harmonic },

where Q(D) denotes the set of quasiconformal automorphisms of D. That is, the mappings in HQ(D) are the harmonic quasiconformal automorphisms of D. Here and henceforth, a complex-valued mapping f = u + iv defined on a domain is called harmonic if both its real and imaginary parts are real-valued harmonic mappings, which in turn are defined via the Laplace equation
    ∆u = ∂²u/∂x² + ∂²u/∂y² = 0,
the differential polynomial ∆ := ∂²/∂x² + ∂²/∂y² being the Laplace operator. Harmonic mappings possess numerous important properties, such as the mean-value property and the maximum principle ([6, p. 12]), which are in turn deeply connected with holomorphic functions by well-known results from Complex Analysis. An immediate conclusion to be drawn is Σ(D) ⊆ HQ(D), where Σ(D) := {f ∈ Q(D) : f is conformal} denotes the subset of conformal automorphisms of D. In particular, id_D ∈ HQ(D) and therefore HQ(D) ≠ ∅, where id_D is the identity on D. An important fact about harmonic mappings in C and their representation is given by the following result due to Radó, Kneser and Choquet:
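The inclusion Σ(D) ⊆ HQ(D) can be illustrated numerically: the real part of a Möbius disk automorphism satisfies Laplace's equation, so a discrete Laplacian at an interior point should be numerically negligible. The five-point stencil, step size and test point below are arbitrary illustrative choices.

```python
def moebius_re(a, x, y):
    """Real part of the Moebius disk automorphism phi_a at x + iy."""
    z = complex(x, y)
    return ((z - a) / (1.0 - a.conjugate() * z)).real

def discrete_laplacian(u, x, y, h=1e-3):
    """Five-point finite-difference approximation of Delta u at (x, y)."""
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4.0 * u(x, y)) / (h * h)

a = 0.3 + 0.2j
lap = discrete_laplacian(lambda x, y: moebius_re(a, x, y), 0.1, 0.25)
```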
Proposition 2.2 (Radó-Kneser-Choquet).
Let G ⊊ C be a convex Jordan domain and γ : ∂D −→ ∂G be a weak homeomorphism, i.e. a continuous mapping of ∂D onto ∂G such that the preimage γ⁻¹(ξ) of each ξ ∈ ∂G is either a point or a closed subarc of ∂D. Then the harmonic extension
P[γ](z) := (1/(2π)) ∫₀^{2π} (1 − r²)/(1 − 2r cos(t − φ) + r²) γ(e^{it}) dt,  z = re^{iφ} ∈ D,  (2)
defines an injective harmonic mapping of D onto G; moreover, P[γ] is unique. Conversely, if G ⊊ C is a strictly convex Jordan domain and f : D −→ G is an injective harmonic mapping, then f has a continuous extension to the closure of D which defines a weak homeomorphism of ∂D onto ∂G. Moreover, if f is continuous on the closure of D and harmonic in D, then f|_D can be written in the form (2).
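As a numerical illustration (an illustrative sketch, not part of the original text), the Poisson integral (2) can be approximated by a Riemann sum. The example below checks the sanity case γ(e^{it}) = e^{it}, whose harmonic extension is the identity z ↦ z; the function name and the quadrature are our own choices.

```python
import cmath
import math

def poisson_extension(gamma, z, n=4096):
    """Approximate the Poisson integral (2) at z = r*e^{i*phi} by a Riemann sum."""
    r, phi = abs(z), cmath.phase(z)
    total = 0j
    for k in range(n):
        t = 2 * math.pi * k / n
        kernel = (1 - r**2) / (1 - 2 * r * math.cos(t - phi) + r**2)
        total += kernel * gamma(cmath.exp(1j * t))
    # dt = 2*pi/n, so dividing by n absorbs the 1/(2*pi) prefactor
    return total / n

# The boundary values gamma(e^{it}) = e^{it} extend harmonically to z itself:
z0 = 0.3 + 0.2j
approx = poisson_extension(lambda w: w, z0)
```

Since the integrand is smooth and periodic, the Riemann sum converges extremely fast, so the approximation agrees with z0 essentially to machine precision.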
For a (Jordan) domain G ⊆ C, let H * (∂D, ∂G) denote the set of all weak homeomorphisms of ∂D onto ∂G and in the special case G = D define H * (∂D) := H * (∂D, ∂D). Consequently, let H + (∂D, ∂G) and H + (∂D) denote the corresponding subsets of all orientation-preserving homeomorphisms, respectively.
Remark 2.3.
The harmonic extension P[γ] defined by (2) is also called the Poisson transformation of γ ∈ H*(∂D, ∂G), and the corresponding integral kernel (1 − r²)/(1 − 2r cos t + r²) is called the Poisson kernel, see [6, p. 12] and [15, pp. 5-6].
From the Radó-Kneser-Choquet Theorem 2.2, one obtains the following (see also [11, pp. 337-338])
Corollary 2.4. HQ(D) = Q(D) ∩ {P[γ] : γ ∈ H⁺(∂D)}
Corollary 2.4 also makes sense when recalling that every quasiconformal automorphism of a Jordan domain admits a homeomorphic boundary extension (see [12, p. 13]). In particular, the induced boundary mapping is injective, hence an element of H + (∂D). A concrete harmonic automorphism of the unit disk is visualized in
Example 2.5.
For x ∈ [0, 1], consider the piecewise-defined function
φ(x) =  2x for x ∈ [0, 1/3];  2/3 for x ∈ [1/3, 3/4];  (4/3)x − 1/3 for x ∈ [3/4, 1]
for e^{it} ∈ ∂D defines a weak homeomorphism of ∂D onto itself, i.e. γ ∈ H*(∂D). The corresponding harmonic extension provided by Proposition 2.2 therefore yields a harmonic homeomorphism P[γ] of D onto itself. Figure 1 shows the (approximated) mapping behaviour of this harmonic extension, visualized by concentric circles around the origin, radial rays and a Euclidean grid. However, the mapping P[γ] is not quasiconformal, due to the fact that its boundary function (which equals γ by construction) is not injective, whereas injectivity would necessarily follow if P[γ] were quasiconformal.
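For concreteness, the piecewise function from Example 2.5 can be written down in code. This is only an illustrative sketch checking continuity at the break points and the failure of injectivity on the plateau interval; the function name is our own.

```python
def phi(x):
    """Piecewise boundary parametrization from Example 2.5, defined on [0, 1]."""
    if x <= 1 / 3:
        return 2 * x
    elif x <= 3 / 4:
        return 2 / 3          # constant plateau: phi is not injective here
    else:
        return (4 / 3) * x - 1 / 3
```

The two one-sided limits agree at x = 1/3 and x = 3/4 (both give 2/3), so phi is continuous, while every point of [1/3, 3/4] is mapped to the same value 2/3.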
Remark 2.6.
In particular, the harmonic extension P[γ] discussed in Example 2.5, with γ given by (3), provides a concrete example of a sense-preserving homeomorphism of the unit disk that is not quasiconformal, i.e.
P[γ] ∈ H + (D)\Q(D).
Another example of such a mapping will be presented in Proposition 4.3.
A basic fact in the theory of harmonic mappings is that the composition of two such mappings is not necessarily harmonic again (see [6, p. 2]). In the same manner, the inverse mapping of an injective harmonic mapping is also not harmonic in general, except for special situations, as stated in (see [6, Theorem, pp. 145-148])
Proposition 2.7 (Choquet-Deny).
Suppose f is an orientation-preserving injective harmonic mapping defined on a simply connected domain G ⊆ C, and suppose that f is neither analytic nor affine. Then the inverse mapping f −1 is harmonic if and only if f has the form
f(z) = α(βz + 2i Arg(γ − e^{−βz})) + δ,  (4)
where α, β, γ, δ ∈ C are constants with αβγ ≠ 0 and |e^{−βz}| < |γ| for all z ∈ G.
This result and the previously stated facts immediately imply (see also [
3 Topological properties of HQ(D)
This section is intended to study some central topological properties of the space HQ(D). Since this situation is settled in the context of metric spaces, many of these topological notions can be expressed in terms of convergent sequences in the space HQ(D). Thus, certain convergence results for uniformly convergent sequences of harmonic mappings will prove valuable, as stated in Proposition 3.1. An elementary persistence property in the interplay between harmonic and holomorphic mappings is that the post-composition of a holomorphic function with a harmonic one remains harmonic (see [6, p. 2]). This fact is utilized in the proofs below. Another property to be studied is the path-connectedness of HQ(D). In this context, the following integral operator will be of crucial importance (see [14, p. 367] and [15, p. 305]):
Definition 3.5 (Hilbert transformation). For periodic ϕ ∈ L¹([0, 2π]) and x ∈ R, the expression

H(ϕ)(x) := −(1/π) lim_{ε→0⁺} ∫_ε^π (ϕ(x + t) − ϕ(x − t))/(2 tan(t/2)) dt  (5)
is called the (periodic) Hilbert transformation of ϕ.
Remark 3.6.
(i) In Fourier theory and trigonometric series, the Hilbert transformation plays a prominent role. However, the definition of the operator H is not completely consistent in the vast literature about this topic. For example, a different formulation is given by
H(ϕ)(x) = −(1/π) lim_{ε→0⁺} ∫_ε^π (ϕ(x + t) − ϕ(x − t))/t dt,
which is, at least for existence questions, equivalent to (5) due to 2 tan(t/2) − t → 0 as t → 0 (see [15, p. 306] and [17, Vol. I, p. 52]). (ii) The notion of Hilbert transformation is also present in further mathematical areas, for example in the classical theory of quasiconformal mappings in C (see [13, pp. 156-160]) and Teichmüller spaces (see [9, pp. 319-320]). However, the circumstance that the definitions are in parts considerably different from each other is also present in these contexts.
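As a hedged numerical sketch (not from the paper), the principal-value integral (5) can be approximated with a midpoint rule, whose nodes never hit the singularity at t = 0. For ϕ = cos, a short computation with (5) gives H(cos)(x) = sin(x), which the code reproduces; the function name and quadrature choice are our own.

```python
import math

def hilbert_transform(phi, x, n=20000):
    """Midpoint-rule approximation of the periodic Hilbert transformation (5).

    Sampling at interval midpoints keeps the quadrature nodes away from the
    singularity of 1/(2*tan(t/2)) at t = 0; for differentiable phi the
    integrand extends continuously there anyway.
    """
    h = math.pi / n
    s = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        s += (phi(x + t) - phi(x - t)) / (2 * math.tan(t / 2))
    return -s * h / math.pi

# For phi = cos, formula (5) evaluates to H(cos)(x) = sin(x):
val = hilbert_transform(math.cos, 0.7)
```

The same computation gives H(sin)(x) = −cos(x), which can be used as a second consistency check.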
Due to the presence of the tangent function in the integrand's denominator in (5), the question of the existence of H arises; it is partially answered in (see [14, p. 367])
(b) ϕ is strictly increasing and bi-Lipschitz;
(c) the Hilbert transformation of ϕ̃′ is an element of L^∞(R).
A mapping g : X −→ Y between metric spaces (X, d X ) and (Y, d Y ) is called bi-Lipschitz if there exists a constant L ∈ [1, +∞) such that
(1/L) d_X(x1, x2) ≤ d_Y(g(x1), g(x2)) ≤ L d_X(x1, x2)
for all x 1 , x 2 ∈ X, thus sharpening the classical notion of a Lipschitz-continuous mapping. In view of Corollary 2.4 and Proposition 3.8, the following characterization for the elements of the space HQ(D) is valid: A harmonic (orientation-preserving) homeomorphism P[e iϕ ] of D onto itself is quasiconformal if and only if the corresponding mapping ϕ is an element of
H⁺_qc := {ϕ ∈ C([0, 2π]) : ϕ is strictly increasing and bi-Lipschitz, ϕ(2π) − ϕ(0) = 2π, H(ϕ̃′) ∈ L^∞(R)}.
Here, ϕ̃ denotes the canonical extension of ϕ ∈ H⁺_qc to all of R via ϕ̃(t + 2kπ) := ϕ(t) + 2kπ for all t ∈ [0, 2π] and every k ∈ Z. By the requirement of strict increasing monotonicity, every mapping ϕ ∈ H⁺_qc is differentiable almost everywhere in (the interior of) [0, 2π]. Consequently, each extended mapping ϕ̃ ∈ C(R) is differentiable almost everywhere in R, with ϕ̃′ being 2π-periodic by construction. Furthermore, the assumption that ϕ is bi-Lipschitz yields ϕ′ ∈ L¹([0, 2π]) (see [16, Theorem 10, p. 124]). Therefore, the condition H(ϕ̃′) ∈ L^∞(R) is reasonable. In view of the path-connectedness of HQ(D), the first important observation to be made here is
+ (1 − λ)ϕ2.
(i) Monotonicity: For t, t′ ∈ [0, 2π] with t < t′, it is λϕ1(t) + (1 − λ)ϕ2(t) < λϕ1(t′) + (1 − λ)ϕ2(t′) due to λ, (1 − λ) ≥ 0, hence λϕ1 + (1 − λ)ϕ2 is strictly increasing.
(ii) Bi-Lipschitz property: Let t, t′ ∈ [0, 2π] and L := max{L1, L2}, with Lj denoting the bi-Lipschitz constant of ϕj, j = 1, 2. Then, on the one hand, by means of the triangle inequality, it is
|λϕ1(t) + (1 − λ)ϕ2(t) − λϕ1(t′) − (1 − λ)ϕ2(t′)| ≤ λ|ϕ1(t) − ϕ1(t′)| + (1 − λ)|ϕ2(t) − ϕ2(t′)| ≤ λL|t − t′| + (1 − λ)L|t − t′| = L|t − t′|.
Hence λϕ1 + (1 − λ)ϕ2 is Lipschitz-continuous with Lipschitz constant L. Without loss of generality, assume t > t′; then, on the other hand, it is (recall that λϕ1 + (1 − λ)ϕ2 is strictly increasing by (i))
λϕ1(t) + (1 − λ)ϕ2(t) − λϕ1(t′) − (1 − λ)ϕ2(t′) = λ(ϕ1(t) − ϕ1(t′)) + (1 − λ)(ϕ2(t) − ϕ2(t′)) ≥ λ(1/L)(t − t′) + (1 − λ)(1/L)(t − t′) = (1/L)(t − t′).
Finally, switching the roles of t and t′ shows that λϕ1 + (1 − λ)ϕ2 is bi-Lipschitz continuous on [0, 2π] with bi-Lipschitz constant L.
(iii) Image interval has length 2π: It is
λϕ1(2π) + (1 − λ)ϕ2(2π) − (λϕ1(0) + (1 − λ)ϕ2(0)) = λ(ϕ1(2π) − ϕ1(0)) + (1 − λ)(ϕ2(2π) − ϕ2(0)) = λ·2π + (1 − λ)·2π = 2π.
(iv) Hilbert transformation: First of all, λϕ̃1 + (1 − λ)ϕ̃2 is differentiable almost everywhere in R by (i), with (λϕ̃1 + (1 − λ)ϕ̃2)′ = λϕ̃1′ + (1 − λ)ϕ̃2′.
The function λϕ1′ + (1 − λ)ϕ2′ is contained in L¹([0, 2π]) as a linear combination of such elements. Following Definition 3.5, the Hilbert transformation of λϕ̃1′ + (1 − λ)ϕ̃2′ is given by
H(λϕ̃1′ + (1 − λ)ϕ̃2′)(x) = −(1/π) ∫_{0⁺}^π (λϕ̃1′(x + t) + (1 − λ)ϕ̃2′(x + t) − λϕ̃1′(x − t) − (1 − λ)ϕ̃2′(x − t))/(2 tan(t/2)) dt = −(1/π) ∫_{0⁺}^π (λ(ϕ̃1′(x + t) − ϕ̃1′(x − t)) + (1 − λ)(ϕ̃2′(x + t) − ϕ̃2′(x − t)))/(2 tan(t/2)) dt.
Since ϕ1, ϕ2 ∈ H⁺_qc, it is H(ϕ̃1′), H(ϕ̃2′) ∈ L^∞(R) by definition of the set H⁺_qc; thus, using the linearity of (improper) integrals, the previous equation can be rewritten as
H(λϕ̃1′ + (1 − λ)ϕ̃2′)(x) = −(λ/π) ∫_{0⁺}^π (ϕ̃1′(x + t) − ϕ̃1′(x − t))/(2 tan(t/2)) dt − ((1 − λ)/π) ∫_{0⁺}^π (ϕ̃2′(x + t) − ϕ̃2′(x − t))/(2 tan(t/2)) dt = λH(ϕ̃1′)(x) + (1 − λ)H(ϕ̃2′)(x).
Since L^∞(R) is an R-vector space, the previous equation yields H(λϕ̃1′ + (1 − λ)ϕ̃2′) ∈ L^∞(R). All in all, the mapping λϕ1 + (1 − λ)ϕ2 is contained in H⁺_qc for every λ, hence H⁺_qc is convex. Thus, as a subset of the normed vector space C([0, 2π]), H⁺_qc is also path-connected.
Continuing the investigation, the set H⁺_qc now gives rise to the following mapping, which turns out to be continuous and surjective (Theorem 3.10):
Λ : H⁺_qc −→ HQ(D),  ϕ ↦ (D ∋ z ↦ Λ(ϕ)(z) := P[e^{iϕ}](z)).  (6)
Proof. The fact that Λ is surjective was already mentioned above. Hence, let (ϕ n ) n∈N converge in H + qc to ϕ ∈ H + qc . The characterization of elements in HQ(D) stated in Proposition 3.8 implies that (Λ(ϕ n )) n∈N is a sequence in HQ(D) and Λ(ϕ) ∈ HQ(D). In particular, Λ(ϕ n ) and Λ(ϕ) are harmonic quasiconformal automorphisms of D, continuous on D and coincide with e iϕn and e iϕ on ∂D, respectively (see also [6, p. 12]). Therefore, since Λ(ϕ n ) − Λ(ϕ) is harmonic as well, the Maximum Principle for harmonic mappings applies, concluding in
sup_{z∈D} |Λ(ϕn)(z) − Λ(ϕ)(z)| = sup_{z∈∂D} |Λ(ϕn)(z) − Λ(ϕ)(z)| = sup_{t∈[0,2π]} |e^{iϕn(t)} − e^{iϕ(t)}| ≤ sup_{t∈[0,2π]} |ϕn(t) − ϕ(t)| = d_sup(ϕn, ϕ).
In the estimate, the elementary inequality |e ix − e iy | ≤ |x − y| for x, y ∈ R was used. The last expression tends to zero for n → ∞, proving the continuity of Λ.
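The elementary inequality used in this estimate can be spot-checked numerically (an illustrative sketch only); it follows from the identity |e^{ix} − e^{iy}| = 2|sin((x − y)/2)| together with |sin s| ≤ |s|. The helper name below is our own.

```python
import cmath
import math

def chord_bound_holds(x, y):
    """Check |e^{ix} - e^{iy}| <= |x - y| (chord length vs. arc length)."""
    lhs = abs(cmath.exp(1j * x) - cmath.exp(1j * y))
    return lhs <= abs(x - y) + 1e-12   # tiny slack for rounding

# Verify the bound on a grid of sample points:
checks = all(
    chord_bound_holds(0.01 * i, 0.07 * j)
    for i in range(-50, 51)
    for j in range(-50, 51)
)
```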
Finally, combining the statements of Lemma 3.9 and Theorem 3.10 yields the path-connectedness of HQ(D).
In order to prove the incompleteness statement of Theorem 4.1, some helpful results are collected in the following. The principal idea of the proof of Theorem 4.1 is to construct a sequence of homeomorphic mappings of the interval [0, 1] onto itself converging uniformly to the Cantor function C : [0, 1] −→ [0, 1]; for basic information on this function, see [5] and [16, Section 2.7]. A result of Božin and Mateljević shows that, via the Poisson transformation, an appropriately modified variant of the mapping C induces a harmonic homeomorphism of the unit disk D onto itself which is not quasiconformal (see Proposition 4.3). However, this harmonic homeomorphism will be seen to arise as the uniform limit of harmonic quasiconformal automorphisms of D, thus implying that HQ(D) cannot be complete.
First of all, an approximation procedure for the Cantor function C in terms of a certain recursively defined sequence, which will be of central importance, is stated (see [5,Proposition 4.2,p. 9]):
C(x) =  (1/2)C(3x) for 0 ≤ x ≤ 1/3;  1/2 for 1/3 < x < 2/3;  1/2 + (1/2)C(3x − 2) for 2/3 ≤ x ≤ 1.
Moreover, for arbitrary ψ 0 ∈ B([0, 1]), the sequence (ψ n ) n∈N0 defined by
ψ_{n+1}(x) :=  (1/2)ψn(3x) for 0 ≤ x ≤ 1/3;  1/2 for 1/3 < x < 2/3;  1/2 + (1/2)ψn(3x − 2) for 2/3 ≤ x ≤ 1  (7)
for n ∈ N 0 converges uniformly on [0, 1] to C.
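The recursion (7) is straightforward to implement. The sketch below (an illustration, not part of the original proof; the function names are our own) iterates the contraction and reproduces some exact values of the Cantor function, e.g. C(1/2) = 1/2 and C(1) = 1.

```python
def cantor_step(psi):
    """One application of the recursion (7): build psi_{n+1} from psi_n."""
    def nxt(x):
        if x <= 1 / 3:
            return 0.5 * psi(3 * x)
        elif x < 2 / 3:
            return 0.5
        else:
            return 0.5 + 0.5 * psi(3 * x - 2)
    return nxt

def approx_cantor(psi0, n):
    """n-fold iteration of (7); converges uniformly to the Cantor function C."""
    psi = psi0
    for _ in range(n):
        psi = cantor_step(psi)
    return psi

# Start from the identity and iterate 25 times:
C = approx_cantor(lambda x: x, 25)
```

Since (7) halves the uniform distance between two iterates in every step, any two bounded starting functions lead to iterates that are within 2^{-25} of each other after 25 steps, illustrating the contraction argument behind Lemma 4.2.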
An approximation of the Cantor function using the recursively defined sequence given by (7) is shown in Figure 2. Basically, the principal idea of the approximation procedure and the mappings ψn is that the initial mapping ψ0 is "copied" and gets "duplicated in a scaled fashion", being added to the graph of ψn more and more times as the index increases. This is visualized by the right-hand picture in Figure 2: In the first step (in blue), the scaled initial mapping ψ0 can be seen two times, namely on the intervals [0, 1/3] and [2/3, 1]. After the second iteration (in orange), the mapping ψ0 appears four times in a scaled manner. Finally, in the third step (in yellow), the appropriately scaled version of ψ0 is present eight times. In particular, it becomes obvious that all continuity and differentiability questions regarding ψn depend solely on the behaviour of the initial mapping ψ0 (and eventually existing derivatives) at the boundary points x = 0 and x = 1 of the starting interval. Furthermore, in Lemma 4.2, the stated approximation part and the related uniqueness of C are based on Banach's Contraction Principle (see [16, p. 216]). The following Proposition contains the mentioned result of Božin/Mateljević concerning a harmonic homeomorphism of D which fails to be quasiconformal (see [3,
Figure 2: Left: An approximation of the Cantor function C using the recursively defined sequence (ψn)n described in Lemma 4.2. The initial function is given by ψ0(x) = 6x⁵ − 15x⁴ + 10x³ for x ∈ [0, 1], and for the approximation, the index value n = 15 was chosen. Right: The first three functions in the approximation sequence (ψn)n: ψ1 in blue, ψ2 in orange and ψ3 in yellow.

Furthermore, ψ0 is strictly increasing on (0, 1) and leaves the boundary points fixed; in other words, ψ0 maps [0, 1] homeomorphically onto itself. Lemma 4.2 implies that the corresponding sequence (ψn)_{n∈N0} defined via (7) converges uniformly on [0, 1] to the Cantor function C, and by construction, it is ψn ∈ C²([0, 1]) for every n ∈ N0 due to (8). Transferring the ψn to the interval [0, 2π] via

ϕn(t) := π(ψn(t/(2π)) + t/(2π)),  t ∈ [0, 2π],  (9)
yields a sequence (ϕn)n of C²-homeomorphisms of [0, 2π] onto itself. Accordingly, this sequence (ϕn)n clearly converges uniformly on [0, 2π] to the mapping ϕ_C defined in Proposition 4.3. As a next step, the mappings ϕn and ϕ_C are extended to all of R by setting ϕn(t + 2kπ) := ϕn(t) + 2kπ (10) for k ∈ Z and t ∈ [0, 2π], yielding a sequence (ϕn)_{n∈N0} ⊆ C²(R); likewise, the mappings ψn and C are extended in the same manner (the extended mappings are denoted by the same letters). In particular, the ϕn are differentiable with ϕn′(t + 2kπ) = ϕn′(t) for all t ∈ R by construction, i.e. the ϕn′ (and thus the ψn′ as well) are continuous 2π-periodic mappings. Lifting these mappings to the unit circle by γn(e^{it}) := e^{iϕn(t)}
for t ∈ [0, 2π] and each n ∈ N yields orientation-preserving homeomorphisms of ∂D onto itself, hence the harmonic extensions P[γ n ] by means of the Radó-Kneser-Choquet Theorem 2.2 are (orientation-preserving) harmonic homeomorphisms of D onto itself. In order to visualize this procedure and to illustrate a concrete mapping of the described type, the mapping behaviour of the harmonic unit disk automorphism P[γ 4 ] is visualized in Figure 3. Moreover, by Pavlović's characterization result stated in Proposition 3.8, the mappings P[γ n ] in fact define quasiconformal automorphisms of D, which can be seen as follows:
It is ϕn ∈ C²(R) strictly increasing with ϕn(t + 2π) = ϕn(t) + 2π for all t ∈ R by construction, see (9) and (10). Furthermore, as C²-homeomorphisms, the mappings ϕn are Lipschitz-continuous, and the corresponding inverse mappings ϕn⁻¹ are also C² by construction due to (8), thus also Lipschitz-continuous. In consequence, the mappings ϕn are bi-Lipschitz. Hence, in view of Proposition 3.8(ii), the Hilbert transformation condition (c) needs to be verified. Therefore, let x ∈ R; since ϕn ∈ C²(R), the derivative ϕn′ is Lipschitz-continuous on R with some Lipschitz constant Ln ∈ R⁺, hence |ϕn′(x + t) − ϕn′(x − t)| ≤ Ln·|x + t − (x − t)| = 2Ln|t|. This yields

∫_{0⁺}^π |ϕn′(x + t) − ϕn′(x − t)|/t dt ≤ ∫_{0⁺}^π (2Ln·t)/t dt = 2πLn < +∞,

and now Remark 3.6(i) implies that H(ϕn′) is (essentially) bounded for every n ∈ N0 (note that the conclusion could also have been drawn from Lemma 3.7 since ϕn′ and ϕn″ are periodic and continuous on R). Thus Proposition 3.8 shows that the mappings P[γn] are quasiconformal automorphisms of D.
Finally, it will be shown that the mappings P[γn] converge uniformly on D to the non-quasiconformal mapping h_C in question (from Proposition 4.3); by the Maximum Principle, sup_{z∈D} |P[γn](z) − h_C(z)| is bounded by max_{t∈[0,2π]} |ϕn(t) − ϕ_C(t)|.
Since ϕn converges uniformly on [0, 2π] to ϕ_C, the claim follows: The sequence (P[γn])_{n∈N0} in HQ(D) converges uniformly to h_C ∉ HQ(D), showing that the space HQ(D) is incomplete.
d_sup(f, g) := sup_{z∈D} |f(z) − g(z)| for (bounded) mappings f, g : D −→ C.
Definition 2.1. Let G ⊊ C be a bounded, simply connected domain; then Q(G) := {f : G −→ G : f is a quasiconformal mapping of G onto G}. Several central topological properties of Q(G) in the topology of uniform convergence induced by d_sup were studied in [2]. In the unit disk D, a particularly specialized subclass of such mappings arises by demanding the additional property of harmonicity, i.e. by considering HQ(D) := {f ∈ Q(D) : f is harmonic}.
Figure 1: Preimage (left) and image (right) of concentric circles and radial rays (top) as well as of a Euclidean grid (bottom) in D under the harmonic extension of γ defined by (3).

which is easily seen to map the interval [0, 1] continuously, but not injectively, onto itself while keeping the endpoints x = 0 and x = 1 fixed. Transferring φ to the interval [0, 2π] by conjugating it via the mapping x ↦ t = 2πx yields a function ψ ∈ C([0, 2π]) with the same properties. Consequently, the mapping γ(e^{it}) = e^{iψ(t)} (3)
HQ(D) is not a semigroup with respect to composition of mappings. In particular, HQ(D) is not a subgroup of Q(D).
Proposition 3.1. Let (fn)_{n∈N} be a sequence of harmonic mappings on a domain G ⊆ C.
(i) If (fn)n converges locally uniformly on G to some function f, then f is harmonic (Weierstraß-type Theorem, see [1, Theorem 1.23, p. 16]).
(ii) If, additionally, all fn are injective and the sequence (fn)n converges locally uniformly on G to f, then f is either injective, a constant mapping, or f(G) lies on a straight line (Hurwitz-type Theorem, see [4, Theorem 1.5]).
The first result concerning certain topological aspects of HQ(D) is given in
Theorem 3.2. The space HQ(D) is separable and non-compact. As a subspace, it is closed in Q(D).
Proof. If (fn)_{n∈N} ⊆ HQ(D) converges uniformly on D to f ∈ Q(D), then f is harmonic by Proposition 3.1(i), hence f ∈ HQ(D). Therefore, HQ(D) is closed in Q(D). As for the separability of HQ(D), it suffices to observe that the ambient metric space Q(D) is separable by [2, Theorem 6, p. 5]. The claimed separability of HQ(D) is then implied by the fact that subspaces of separable metric spaces are also separable. In order to see that HQ(D) is a non-compact space, suppose the contrary, i.e. HQ(D) is compact in the uniform topology. Due to the completeness of Σ(D) (see [8, Satz 1, p. 229]), the space Σ(D) is closed in the ambient space HQ(D). However, this yields that Σ(D) would also be compact as a closed subspace of the compact space HQ(D), contradicting the non-compactness of Σ(D) (see [8, Satz 1, p. 229]). Hence HQ(D) is not compact.
The space HQ(D) is dense-in-itself, i.e. it does not contain any isolated points. Proof. The space Q(D) is a topological group (see [2, Theorem 3, p. 3]) and not discrete ([2, Theorem 12, p. 8]); in particular, Σ(D) is not discrete (as already noticed in [8, p. 230]). Hence, let h ∈ HQ(D) be arbitrary and choose a sequence (fn)_{n∈N} in Σ(D)\{id_D} converging to id_D. Then, for each n ∈ N, the mapping gn := h∘fn is harmonic and quasiconformal, thus (gn)n is a sequence in HQ(D). The continuity of left multiplication in the topological group Q(D) yields d_sup(gn, h) = d_sup(h ∘ fn, h) → 0 as n → ∞, due to d_sup(fn, id_D) → 0. The space HQ(D) is perfect, i.e. it is closed in Q(D) and contains no isolated points.
] and [17, Vol. I, p. 52])
Lemma 3.7. For periodic ϕ ∈ L¹([0, 2π]), the Hilbert transformation H(ϕ)(x) exists for almost every x ∈ R. Furthermore, H(ϕ)(x) exists if ϕ′(x) exists and is finite at x ∈ R.
Now the connection between the Hilbert transformation H and HQ(D) will be clarified. By the Radó-Kneser-Choquet Theorem 2.2, every mapping γ = e^{iϕ} ∈ H⁺(∂D) defines a harmonic automorphism of D by means of the Poisson transformation P[e^{iϕ}] (this statement remains true even for γ ∈ H*(∂D), see also [11, (1.3), p. 338]). The question of whether this harmonic extension is quasiconformal has been answered in a characterizing manner by Pavlović in [14], and is stated in (see [15, Theorem 10.18, p. 305])
Proposition 3.8. Let f : D −→ D be an orientation-preserving harmonic homeomorphism of the unit disk onto itself. Then the following conditions are equivalent:
(i) f is quasiconformal, i.e. f ∈ HQ(D);
(ii) f = P[e^{iϕ}], where the function ϕ has the following properties:
(a) ϕ(t + 2π) − ϕ(t) = 2π for all t ∈ R;
H⁺_qc ⊆ C([0, 2π]) is convex. In particular, H⁺_qc is path-connected in the Banach space C([0, 2π]). Proof. Let ϕ1, ϕ2 ∈ H⁺_qc, λ ∈ [0, 1] and consider the mapping λϕ1
The space HQ(D) is path-connected.
4 Incompleteness of HQ(D): Statement, auxiliary results and proof
This section is concerned with the proof of the following statement:
The space HQ(D) is incomplete.
Let B([0, 1]) denote the Banach space of bounded real-valued functions on [0, 1]. The Cantor function C is the unique element of B([0, 1]) for which
For t ∈ [0, 2π], define ϕ_C(t) := π(C(t/(2π)) + t/(2π)) and γ_C(t) := e^{iϕ_C(t)}. Then the function h_C := P[γ_C] is a harmonic homeomorphism of D onto itself that is not quasiconformal. Now all preparations are made in order to prove the claim of Theorem 4.1: Proof of Theorem 4.1. Consider the polynomial function ψ0 : [0, 1] −→ R, x ↦ ψ0(x) := 6x⁵ − 15x⁴ + 10x³, whose first and second derivatives satisfy ψ0′(0) = ψ0′(1) = 0 = ψ0″(0) = ψ0″(1). (8)
Figure 3: Left-hand side: Preimage of concentric circles and radial rays (top) and of a Euclidean grid (bottom) in the unit disk. Right-hand side: Image of the concentric circles, radial rays and the Euclidean grid in D under the mapping P[γ4].
Proposition 4.3), which is essentially based on the same idea as the proof of Theorem 3.10: Applying the Maximum Principle for harmonic functions to P[γn] − h_C yields sup_{z∈D} |P[γn](z) − h_C(z)| = max_{z∈∂D} |P[γn](z) − P[γ_C](z)| = max_{t∈[0,2π]} |e^{iϕn(t)} − e^{iϕ_C(t)}| ≤ max_{t∈[0,2π]}
References
[1] AXLER, S., BOURDON, P., RAMEY, W.: Harmonic Function Theory, 2. edition. Graduate Texts in Mathematics, vol. 137. Springer-Verlag New York, 2001.
[2] BIERSACK, F., LAUF, W.: Topological Properties of Quasiconformal Automorphism Groups. The Journal of Analysis, to appear.
[3] BOŽIN, V., MATELJEVIĆ, M.: Some counterexamples related to the theory of HQC mappings. Filomat, 24(4), pp. 25-34 (2010).
[4] BSHOUTY, D., HENGARTNER, W.: Univalent Harmonic Mappings in the Plane. Annales Universitatis Mariae Curie-Sklodowska, Sectio A - Mathematica, 48(3), pp. 12-42 (1994).
[5] DOVGOSHEY, O., MARTIO, O., RYAZANOV, V., VUORINEN, M.: The Cantor function. Expositiones Mathematicae, 24(1), pp. 1-37 (2006).
[6] DUREN, P.: Harmonic Mappings in the Plane. Cambridge Tracts in Mathematics, no. 156. Cambridge University Press, 2004.
[7] DUREN, P., SCHOBER, G.: A Variational Method for Harmonic Mappings onto Convex Regions. Complex Variables, 9(2-3), pp. 153-168 (1987).
[8] GAIER, D.: Über Räume konformer Selbstabbildungen ebener Gebiete. Mathematische Zeitschrift, 187(2), pp. 227-257 (1984).
[9] GARDINER, F.P., LAKIC, N.: Quasiconformal Teichmüller Theory. Mathematical Surveys and Monographs, vol. 76. American Mathematical Society, 2000.
[10] KALAJ, D.: Quasiconformal and harmonic mappings between Jordan domains. Mathematische Zeitschrift, 260(2), pp. 237-252 (2008).
[11] KRZYZ, J.G., NOWAK, M.: Harmonic automorphisms of the unit disk. Journal of Computational and Applied Mathematics, 105(1-2), pp. 337-346 (1999).
[12] LEHTO, O.: Univalent Functions and Teichmüller Spaces. Graduate Texts in Mathematics, vol. 109. Springer-Verlag New York, 1987.
[13] LEHTO, O., VIRTANEN, K.I.: Quasiconformal Mappings in the Plane, 2. edition. Die Grundlehren der mathematischen Wissenschaften, Band 126. Springer-Verlag Berlin Heidelberg New York, 1973.
[14] PAVLOVIĆ, M.: Boundary Correspondence under Harmonic Quasiconformal Homeomorphisms of the Unit Disk. Annales Academiae Scientiarum Fennicae. Mathematica, vol. 27, pp. 365-372 (2002).
[15] PAVLOVIĆ, M.: Function Classes on the Unit Disk: An Introduction. De Gruyter Studies in Mathematics, vol. 52. Walter de Gruyter, Berlin/Boston, 2014.
[16] ROYDEN, H.L., FITZPATRICK, P.M.: Real Analysis, 4. edition. Prentice Hall, 2010.
[17] ZYGMUND, A.: Trigonometric Series, 3. edition, Volumes I & II combined, with a foreword by R. Fefferman. Cambridge Mathematical Library Series. Cambridge University Press, 2002.
Feature Fusion Vision Transformer for Fine-Grained Visual Categorization
Jun Wang [email protected]
University of Warwick, UK

Xiaohan Yu [email protected]
Griffith University, Australia

Yongsheng Gao [email protected]
Griffith University, Australia
The core of tackling fine-grained visual categorization (FGVC) is to learn subtle yet discriminative features. Most previous works achieve this by explicitly selecting the discriminative parts or integrating the attention mechanism via CNN-based approaches. However, these methods increase the computational complexity and make the model dominated by the regions containing most of the objects. Recently, the vision transformer (ViT) has achieved SOTA performance on general image recognition tasks. Its self-attention mechanism aggregates and weights the information from all patches to the classification token, making it well suited for FGVC. Nonetheless, the classification token in the deep layers pays more attention to the global information and lacks the local and low-level features that are essential for FGVC. In this work, we propose a novel pure transformer-based framework, Feature Fusion Vision Transformer (FFVT), where we aggregate the important tokens from each transformer layer to compensate for the missing local, low-level and middle-level information. We design a novel token selection module called mutual attention weight selection (MAWS) to guide the network effectively and efficiently towards selecting discriminative tokens without introducing extra parameters. We verify the effectiveness of FFVT on four benchmarks, where FFVT achieves the state-of-the-art performance. Code is available at this link.
Introduction
Fine-grained visual categorization (FGVC) aims to solve the problem of differentiating subordinate categories under the same basic-level category, e.g., birds, cars and plants. FGVC has wide real-world applications, such as autonomous driving and intelligent agriculture. Some FGVC tasks are exceedingly hard for human beings due to the small inter-class variance and large intra-class variance, e.g., recognizing 200 subordinate plant leaves and 200 subordinate birds. Therefore, FGVC is an important and highly challenging task.
Owing to decently designed networks and large-scale annotated datasets, FGVC has gained steady improvements in recent years. Current methods on FGVC can be roughly divided into localization-based methods and attention-based methods. The core of solving FGVC is to learn the discriminative features in images. Early localization-based methods [1,16,41] achieve this by directly annotating the discriminative parts in images. However, it is costly and time-consuming to build bounding box annotations, hindering the applicability of these methods in real-world applications. To alleviate this problem, recent localization-based methods normally integrate a region proposal network (RPN) to obtain potentially discriminative bounding boxes. These selected proposals are then fed into the backbone network to obtain the local features. After that, most methods adopt a rank loss [2] on the classification outputs for all local features. However, [14] argues that RPN-based methods ignore the relationships among selected regions. Another problem is that this mechanism drives the RPN to propose large bounding boxes, as they are more likely to contain the foreground objects. Confusion occurs when these bounding boxes are inaccurate and cover the background rather than the objects. Besides, some discriminative regions, e.g., leaf veins in plant leaves, cannot simply be annotated by a rectangular bounding box [35].
Attention-based [40,50,54] methods automatically detect the discriminative regions in images via the self-attention mechanism. These methods remove the reliance on manual annotation of discriminative regions and have obtained encouraging results. Recently, the vision transformer has demonstrated strong performance on general image classification [6], image retrieval [9] and semantic segmentation [54]. This success shows that the innate attention mechanism of a pure transformer architecture can automatically search for the parts of images that contribute to image recognition. However, few studies investigate the performance of the vision transformer on FGVC. As the first work to study the vision transformer on FGVC, [14] proposed to replace the inputs of the final transformer layer with some important tokens and obtained improved results. Nonetheless, the final class token may concern itself more with global information and pay less attention to local and low-level features, limiting the performance of the vision transformer on FGVC, since local information plays an important role in FGVC. Besides, previous works focus on FGVC benchmarks containing more than ten thousand annotated images, and no study explores the capability of the vision transformer in small-scale and ultra-fine-grained visual categorization (ultra-FGVC) settings.
In this paper, we propose a novel feature fusion vision transformer (FFVT) for FGVC. FFVT aggregates the local information from low-level, middle-level and high-level tokens to facilitate the classification. We present a novel important token selection approach called Mutual Attention Weight Selection (MAWS) to select the representative tokens on each layer that are added as the inputs of the last transformer layer. In addition, we explore the performance of our method on four FGVC datasets to comprehensively verify the capability of our proposed FFVT on FGVC. In conclusion, our work has four main contributions.
1. To the best of our knowledge, we are the first to explore the performance of the vision transformer in both small-scale and ultra-FGVC settings. The two small-scale datasets in this paper are highly challenging due to their ultra-fine-grained inter-category variances and the few training images available. Some examples are visualized in Figure 1.
2. We propose FFVT, a novel vision transformer framework for fine-grained visual categorization tasks that can automatically detect the discriminative regions and take advantage of different levels of global and local information in images.
3. We present a novel important token selection approach called Mutual Attention Weight Selection (MAWS). MAWS can effectively select informative tokens that have high similarity to the class token, both in the context of the class token and in that of the token itself, without introducing extra parameters.

4. We verify the effectiveness of our method on four fine-grained benchmarks. Experimental results demonstrate that FFVT achieves state-of-the-art performance on them, offering an alternative to current CNN-based approaches. Ablation studies show that our proposed method boosts the performance of the backbone model by 6.67%, 4.84% and 0.80% on the CottonCultivar80, SoyCultivarLocal and CUB datasets, respectively.
Fine-Grained Visual Categorization
Methods for FGVC can be coarsely divided into two groups: localization-based and attention-based methods. Similar to the object detection task, localization-based methods often detect the foreground objects and perform classification based on them. Early works [1,16,41] achieve this by taking advantage of part annotations to supervise the learning of the detection branch. However, bounding box annotation requires substantial manual labor, hampering real-world applications.
To alleviate the above problem, recent localization-based methods introduce weakly supervised object detection (WSOD) techniques to predict potentially discriminative regions using only image-level labels. Ge et al. [13] used WSOD and instance segmentation techniques to obtain rough object instances, and then selected the important instances to perform classification. He et al. [15] presented two spatial constraints to select the discriminative parts obtained by the detection branch. Wang et al. [38] utilized correlations between regions to select distinguished parts. However, these methods require a well-designed WSOD branch to propose potentially discriminative regions. Moreover, the selected parts sent to the classification head often cover the whole object instead of the truly discriminative parts.
Alternatively, attention-based methods automatically localize the discriminative regions via a self-attention mechanism without extra annotations. Zhao et al. [50] proposed a diversified visual attention network which uses the diversity of attention to collect discriminative information. Xiao et al. [40] presented a two-level attention mechanism to steadily filter out the trivial parts. Similar to [40], Zheng et al. [54] proposed a progressive attention mechanism to detect discriminative parts at multiple scales. However, these methods often suffer from huge computational cost.
Transformer
Transformers have achieved huge success in natural language processing [4,30,31]. Motivated by this, researchers have tried to exploit transformers in computer vision. The recent ViT [6] achieves state-of-the-art performance on image classification by employing a pure transformer architecture on a sequence of fixed-size image patches. Later, researchers explored the performance of the pure transformer on other computer vision tasks. Zheng et al. [54] developed a pure transformer, SETR, for the semantic segmentation task. Alaaeldin et al. [9] exploited a transformer to generate image descriptors for the image retrieval task. Nonetheless, few studies explore the vision transformer for FGVC.
The work most similar to ours is TransFG [14], the first study to extend ViT to FGVC, but there are two notable differences between FFVT and TransFG. First, TransFG selects the discriminative tokens and directly sends them to the last transformer layer (no feature fusion), while FFVT aggregates local and different-level information from each layer to enrich the feature representation via feature fusion. Second, our token selection strategy is totally different from that of TransFG, which requires the attention information from all transformer layers to generate the selected token indexes via matrix multiplication. In contrast, our proposed MAWS utilizes attention information from only one transformer layer to produce the corresponding indexes. Hence, MAWS is simple and efficient. Our work is also in accordance with the spirit of recent research [11,33,34,35,36,43,44,45,46,47,48,49,51,52], which focuses on localizing subtle yet vital regions.
Methods
To better comprehend our method, we first briefly review the knowledge of vision transformer ViT in Section 3.1. Our proposed methods are then elaborately described in the following subsections.
ViT For Image Recognition
ViT follows an architecture similar to that of the transformer in natural language processing, with minor modifications. The transformer in natural language processing takes a sequence of tokens as input. Similarly, given an image with resolution H × W, the vision transformer first processes the image into N = (H/P) × (W/P) fixed-size patches x_p, where P is the side length of each patch. The patches x_p are then linearly projected into a D-dimensional latent embedding space. To introduce positional information, a learnable vector called the position embedding, with the same size as the patch embeddings, is directly added to the patch embeddings. Similar to the class token in BERT [4], an extra class token is added, which interacts with all patch embeddings and undertakes the classification task. The procedure is shown in Eq (1):
z_0 = [x_class; x_p^1 E; x_p^2 E; ...; x_p^N E] + E_pos    (1)
where E ∈ R^{(P^2·C)×D} is the patch embedding projection, C is the number of image channels, and E_pos denotes the position embedding.
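As an illustration, the tokenization and embedding of Eq (1) can be sketched with numpy on toy sizes (a minimal sketch; the function name `patch_embed` and the random toy weights are ours, not part of the actual implementation):

```python
import numpy as np

def patch_embed(img, P, E, E_pos, x_class):
    """Split an image into P x P patches, flatten them, project with E,
    prepend the class token, and add position embeddings (Eq. (1))."""
    H, W, C = img.shape
    assert H % P == 0 and W % P == 0
    N = (H // P) * (W // P)                       # number of patches
    # Rearrange (H, W, C) into N rows, each a flattened P x P x C patch.
    patches = (img.reshape(H // P, P, W // P, P, C)
                  .transpose(0, 2, 1, 3, 4)
                  .reshape(N, P * P * C))
    tokens = patches @ E                          # project to D dimensions
    return np.vstack([x_class, tokens]) + E_pos   # prepend class token

# Toy sizes: an 8x8 RGB image, 4x4 patches (so N = 4), embedding dim D = 16.
rng = np.random.default_rng(0)
H = W = 8; P = 4; C = 3; D = 16
N = (H // P) * (W // P)
img = rng.standard_normal((H, W, C))
E = rng.standard_normal((P * P * C, D))
E_pos = rng.standard_normal((N + 1, D))
x_class = rng.standard_normal((1, D))
z0 = patch_embed(img, P, E, E_pos, x_class)
print(z0.shape)   # (N + 1, D) = (5, 16)
```

The resulting sequence of N + 1 embeddings is exactly what the encoder layers of Eqs (2)-(3) consume.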
After that, these patch embeddings are fed into the transformer encoder containing several multi-head self-attention (MSA) and multi-layer perceptron (MLP) blocks. Note that all layers retain the same latent vector size D. The outputs of the l-th layer are calculated by Eqs (2) to (3):
z'_l = MSA(LN(z_{l-1})) + z_{l-1},    (2)
z_l = MLP(LN(z'_l)) + z'_l.    (3)
where LN(·) is the layer normalization operation and z_l denotes the encoded image representation. Eventually, a classification head implemented by an MLP block is applied to the class token z_L^0 of the final layer to obtain the predicted category.

FFVT Architecture

[14] suggests that ViT cannot capture enough of the local information required for FGVC. To cope with this problem, we propose to fuse low-level and middle-level features to enrich the local information. We present a novel token selection approach called mutual attention weight selection (MAWS) to determine the tokens to be aggregated in the deep layer. This section introduces the details of our proposed FFVT. The overall architecture of FFVT is illustrated in Fig 2.

Feature Fusion Module
The key challenge of FGVC is to detect the discriminative regions that are decisive for figuring out the subtle differences among subordinate categories. Previous works often achieve this by manually annotating the discriminative regions or integrating an RPN module. However, these methods suffer from the problems discussed in Sections 1 and 2, limiting their performance in real-world applications. The MSA mechanism in the vision transformer can perfectly meet the above requirement, whereas MSA in deep layers is likely to pay more attention to global information. Therefore, we propose a feature fusion module to compensate for the missing local information. As shown in Figure 2, given the important tokens (hidden features) selected from each layer by the MAWS module, we replace the inputs (except for the class token) of the last transformer layer with the selected tokens. In this way, the class token in the last transformer layer fully interacts with the low-level, middle-level and high-level features from the previous layers, enriching the local information and the feature representation capability.
Specifically, we denote the tokens selected by the MAWS module in the l-th layer as:
z_l^local = [z_l^1, z_l^2, ..., z_l^K]    (4)
where K is the number of selected tokens. The fused features, together with the classification token, that are fed into the last transformer layer L are:
z_ff = [z_{L-1}^0; z_1^local; z_2^local; ...; z_{L-1}^local]    (5)
Eventually, following ViT, the classification token of the final transformer layer is sent to the classification head to perform categorization. The problem then becomes how to select the important and discriminative tokens. To that end, we propose an effective and efficient token selection approach, described in the next section.
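The fusion step of Eq (5) amounts to stacking the class token with the selected tokens of every previous layer. A minimal numpy sketch, assuming the per-layer selections are already available as arrays (the name `fuse_features` and the toy sizes are ours):

```python
import numpy as np

def fuse_features(class_token, local_tokens):
    """Build the input of the last transformer layer (Eq. (5)):
    the class token z^0_{L-1} followed by the K tokens selected by
    MAWS from each of the previous layers."""
    return np.vstack([class_token] + local_tokens)

# Toy sizes: embedding dim D = 16, K = 3 tokens kept from each of 4 layers.
D, K, num_layers = 16, 3, 4
rng = np.random.default_rng(1)
class_token = rng.standard_normal((1, D))                         # z^0_{L-1}
local_tokens = [rng.standard_normal((K, D)) for _ in range(num_layers)]
z_ff = fuse_features(class_token, local_tokens)
print(z_ff.shape)   # (1 + K * num_layers, D) = (13, 16)
```

The last transformer layer then attends over this fused sequence, so the class token mixes information from every depth rather than only from the penultimate layer.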
Mutual Attention Weight Selection Module
Since an image is split into many patches, token selection becomes an important problem. Noise is added when background patches are frequently selected, while discriminative patches can boost the model performance. Hence, we propose a token selection approach which directly utilizes the attention scores generated by the multi-head self-attention module.
To be specific, the attention score matrix of one attention head, A ∈ R^{(N+1)×(N+1)}, is denoted as:
A = [a_0; a_1; a_2; ...; a_i; ...; a_N],
a_i = [a_{i,0}, a_{i,1}, a_{i,2}, ..., a_{i,j}, ..., a_{i,N}],
where a_{i,j} is the attention score between tokens i and j in the context of token i, i.e., the dot product between the query of token i and the key of token j. One simple strategy is to pick the tokens having the highest attention scores with the classification token, since the classification token contains rich information for categorization. We can do this by sorting a_0 and picking the K tokens with the largest values. We denote this strategy as single attention weight selection (SAWS). However, SAWS may introduce noisy information, since the selected tokens could aggregate much information from noisy patches. Consider a three-patch attention score matrix γ in which token three has the largest value in the classification token's attention score vector a_0, while the largest entry of a_3 points to token one. Token three is then selected by SAWS, yet it aggregates much information from token one, and may therefore introduce noise if token one is a noisy token. To cope with this problem, we develop a mutual attention weight selection module which requires the selected tokens to be similar to the classification token both in the context of the classification token and in the contexts of the tokens themselves.
In particular, we denote the first column of the attention score matrix as b_0. Note that b_0 contains the attention scores between the classification token and the other tokens in the contexts of those tokens, whereas a_0 contains them in the context of the classification token. The mutual attention weight ma_i between the classification token and token i is then calculated by Eqs (8) to (9):
ma_i = a_{0,i} · b_{i,0},    (8)
a_{0,i} = e^{a_{0,i}} / Σ_{j=0}^{N} e^{a_{0,j}},  b_{i,0} = e^{b_{i,0}} / Σ_{j=0}^{N} e^{b_{j,0}}.    (9)
For multi-head self-attention, we first average the attention scores over all heads. After obtaining the mutual attention weights, the indexes of the important tokens are collected according to their values. Our approach does not introduce extra learnable parameters; it is simple and efficient compared with the matrix multiplication in [14].
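The selection of Eqs (8)-(9) can be sketched in a few lines of numpy, given a head-averaged attention score matrix (a minimal sketch; the function name `maws_select` and the random toy matrix are ours):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def maws_select(A, K):
    """Mutual attention weight selection (Eqs. (8)-(9)).
    A is the head-averaged (N+1) x (N+1) attention score matrix,
    with row/column 0 corresponding to the classification token.
    Returns the indices (1..N) of the K patch tokens with the
    largest mutual attention weights."""
    a0 = softmax(A[0, :])        # Eq. (9): class token attending to tokens
    b0 = softmax(A[:, 0])        # Eq. (9): tokens attending to class token
    ma = a0 * b0                 # Eq. (8): mutual attention weights
    ma[0] = -np.inf              # never select the class token itself
    return np.argsort(ma)[::-1][:K]

# Toy example: 7 patch tokens plus the class token.
rng = np.random.default_rng(2)
A = rng.standard_normal((8, 8))
idx = maws_select(A, K=3)
print(idx)   # three distinct indices in 1..7
```

Because the weight multiplies the two softmax-normalized scores, a token scores high only when the class token attends to it and it attends back to the class token, which is exactly the "mutual" criterion described above.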
Experiments
Datasets
We explore the effectiveness of FFVT on two widely used FGVC datasets and two small-scale ultra-fine-grained datasets, i.e., CUB-200-2011 [32], Stanford Dogs [18], SoyCultivarLocal [46] and CottonCultivar80 [46]. SoyCultivarLocal and CottonCultivar80 are two highly challenging datasets, as they further reduce the granularity of categorization, e.g., from species to cultivar, and provide few training images. The statistics of the four datasets are shown in Table 1.
Implementation Details
As in most current transformer-based approaches, the backbone network (ViT) of FFVT is pretrained on the ImageNet21K dataset. Following the data augmentation used in most existing works, input images are first resized to 500 × 500 for the Soy.Loc and Cotton datasets, and to 600 × 600 for CUB and Stanford Dogs. We then crop the images to 384 × 384 for Soy.Loc and Cotton, and to 448 × 448 for CUB and Stanford Dogs (random cropping in training and center cropping in testing). Random horizontal flipping is adopted, and an extra color augmentation is applied for CUB. K in Eq (4) is set to 12 for CUB, Soy.Loc and Cotton, and to 24 for Stanford Dogs.
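The random-versus-center cropping scheme above can be sketched with plain numpy index arithmetic (the actual pipeline presumably uses standard torchvision transforms; the function names here are ours):

```python
import numpy as np

def center_crop(img, size):
    """Deterministic center crop, used at test time."""
    H, W = img.shape[:2]
    top, left = (H - size) // 2, (W - size) // 2
    return img[top:top + size, left:left + size]

def random_crop(img, size, rng):
    """Random crop, used during training."""
    H, W = img.shape[:2]
    top = rng.integers(0, H - size + 1)
    left = rng.integers(0, W - size + 1)
    return img[top:top + size, left:left + size]

rng = np.random.default_rng(0)
img = rng.standard_normal((500, 500, 3))   # a resized Soy.Loc/Cotton image
train_view = random_crop(img, 384, rng)
test_view = center_crop(img, 384)
print(train_view.shape, test_view.shape)   # both (384, 384, 3)
```

Random cropping varies the visible region across epochs as a form of augmentation, while the deterministic center crop keeps evaluation reproducible.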
We use the SGD optimizer with a momentum of 0.9. The initial learning rate is 0.02 with a cosine annealing scheduler for FFVT on the CUB, Soy.Loc and Cotton datasets, and 0.003 on the Stanford Dogs dataset. The batch size is set to 8 for all datasets except Stanford Dogs, where it is 4. For fair comparison, we re-implement the experiments of ViT and TransFG on the Stanford Dogs benchmark with their default settings and the same batch size as FFVT. Experiments are conducted on four Nvidia 2080Ti GPUs using the PyTorch deep learning framework.
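Cosine annealing decays the learning rate along half a cosine wave from the initial value toward zero. A minimal sketch of the schedule (assuming a decay to zero with no restarts; PyTorch's built-in scheduler implements the same curve):

```python
import math

def cosine_lr(step, total_steps, base_lr):
    """Cosine annealing: decays base_lr to 0 over total_steps."""
    return 0.5 * base_lr * (1 + math.cos(math.pi * step / total_steps))

base_lr, total = 0.02, 100
print(cosine_lr(0, total, base_lr))    # 0.02 at the start
print(cosine_lr(50, total, base_lr))   # 0.01 at the midpoint
print(cosine_lr(100, total, base_lr))  # 0.0 at the end
```

The schedule keeps the rate near `base_lr` early on and flattens out near zero at the end, which tends to stabilize fine-tuning of pretrained backbones.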
Comparison with the State-Of-The-Art
Here, we present the experimental results on the four datasets and compare our method with a number of state-of-the-art works. As shown in Table 2, FFVT is the second best-performing method on CUB with an accuracy of 91.6%, beating the other methods by a large margin and trailing only the most recent state-of-the-art fine-grained method TransFG (-0.1%). Note that FFVT achieves a comparable accuracy to TransFG with much less computational cost and GPU memory consumption, since the overlapping patch strategy of TransFG significantly increases the number of input patches from 784 to 1296. Besides, limited by our computational resources, the batch size of TransFG in the CUB experiment is twice that of FFVT, which may also account for the relative performance difference. FFVT outperforms all the listed approaches on Stanford Dogs with an accuracy of 91.5%, strongly exceeding the second best-performing TransFG by 0.9%.

Table 2: Comparison of different methods on the CUB-200-2011 dataset. The best accuracy is highlighted in bold and the second best accuracy is underlined.
Method            Backbone     Accuracy
ResNet50 [15]     ResNet50     84.5
GP-256 [39]       VGG16        85.8
MaxEnt [8]        DenseNet161  86.6
DFL-CNN [37]      ResNet50     87.4
NTS-Net [42]      ResNet50     87.5
Cross-X [23]      ResNet50     87.7
DCL [3]           ResNet50     87.8
CIN [12]          ResNet101    88.1
DBTNet [53]       ResNet101    88.1
ACNet [17]        ResNet50     88.1
S3N [5]           ResNet50     88.5
FDL [22]          DenseNet161  89.1
PMG [7]           ResNet50     89.6
API-Net [55]      DenseNet161  90.0
StackedLSTM [13]  GoogleNet    90.4
ViT [6]           ViT-B_16     90.8
TransFG [14]      ViT-B_16     91.7
FFVT              ViT-B_16     91.6
SoyCultivarLocal and CottonCultivar80 are two extremely challenging ultra-fine-grained datasets. The difficulty is two-fold: super-subtle inter-class differences and few training images (three per category). Some examples are visualized in Figure 1. Therefore, locating the discriminative regions plays an essential role in accurate classification.
The results of the experiments on SoyCultivarLocal and CottonCultivar80 are shown in Table 4. FFVT obtains the highest accuracy of 57.92% on CottonCultivar80, outperforming the second best-performing method by a large margin (+4.17%). Similarly, our proposed FFVT beats all methods with an accuracy of 44.17% on SoyCultivarLocal.
Ablation Studies
We perform ablation studies on CottonCultivar80, SoyCultivarLocal and CUB to further validate the effectiveness of our proposed methods. SAWS is the single attention weight selection strategy described in Section 3.2.2. As shown in Table 5, even the simple SAWS strategy remarkably boosts the performance by 4.58%, 3.50% and 0.64% on CottonCultivar80, SoyCultivarLocal and CUB, respectively. The results confirm the necessity of aggregating local and different-level information for the vision transformer on FGVC. A bigger improvement is seen when applying the MAWS strategy (+6.67%, 4.84% and 0.80% on CottonCultivar80, SoyCultivarLocal and CUB, respectively), showing that MAWS better exploits the attention information. MAWS explicitly selects the most useful tokens and thus forces the model to learn from these informative parts.

We then investigate the influence of the hyper-parameter K. Table 6 summarizes the results of FFVT on the SoyCultivarLocal dataset with the value of K ranging from 10 to 14. FFVT achieves the best performance when 12 tokens are selected in each layer. One possible reason is that this value (12) matches the number of attention heads, so the tokens focused on by each attention head are selected by the proposed MAWS module and contribute positively to the classification. As K increases from 10 to 12, the accuracy steadily improves from 43.17% to 44.17%. A different pattern is seen when K continues to increase to 14, where the accuracy slightly drops to 42.5%. The performance drop may be because a large K introduces noisy tokens, while a small K leads to insufficient discriminative information for classification. Note that the results for all K settings show a significant improvement over the backbone ViT (39.33%), indicating that FFVT is not very sensitive to the value of K.
Conclusion
This paper proposes FFVT, a novel fine-grained visual categorization architecture, which achieves state-of-the-art performance on four benchmarks. To facilitate the performance of the vision transformer in FGVC, we propose a feature fusion approach to enrich the local, low-level and middle-level information available to the classification token. To select the discriminative tokens to be aggregated, we develop a novel token selection module, MAWS, which explicitly takes advantage of the attention scores produced by the self-attention mechanism. Experimental results show that FFVT significantly improves the classification accuracy of the standard ViT in different fine-grained settings, i.e., normal-scale, small-scale and ultra-fine-grained settings. We observe that FFVT is very effective on the challenging datasets, confirming its capability of capturing subtle differences and discriminative information.
Based on our encouraging results, we believe that the pure-transformer model has huge potential in different FGVC settings, even on small-scale datasets, despite lacking the inductive biases of convolutional neural networks.
Figure 1: Examples of images in the SoyCultivarLocal and Cotton datasets. Images in the first row come from four species of Soy.Loc, while examples in the second row are selected from four categories of Cotton.
Figure 2: The overall architecture of the proposed FFVT. Images are split into a sequence of fixed-size patches which are then linearly projected into the embedding space. Combined with the position embedding, the patch embeddings are fed into the Transformer Encoder to learn the patch features. Feature fusion is exploited before the last transformer layer to aggregate the important local, low-level and middle-level information from the previous layers. This is implemented by replacing the inputs (excluding the classification token) of the last transformer layer with the tokens selected by the MAWS module.
Table 3: Comparison of different methods on the Stanford Dogs (Dogs) dataset. The best accuracy is highlighted in bold and the second best accuracy is underlined. Values in parentheses are results reported in the original papers.

Method        Backbone     Dogs
MaxEnt [8]    DenseNet161  83.6
FDL [22]      DenseNet161  84.9
RA-CNN [10]   VGG19        87.3
DB [27]       ResNet50     87.7
SEF [24]      ResNet50     88.8
Cross-X [23]  ResNet50     88.9
API-Net [55]  DenseNet161  90.3
ViT [6]       ViT-B_16     90.2
TransFG [14]  ViT-B_16     90.6 (92.3)
FFVT          ViT-B_16     91.5
Table 4: Comparison of different methods on the SoyCultivarLocal (Soy.Loc) and CottonCultivar80 (Cotton) datasets. The best accuracy is highlighted in bold and the second best accuracy is underlined.

Method               Backbone   Cotton  Soy.Loc
AlexNet [19]         AlexNet    22.92   19.50
VGG16 [26]           VGG16      50.83   39.33
ResNet50 [15]        ResNet50   52.50   38.83
InceptionV3 [28]     GoogleNet  37.50   23.00
MobileNetV2 [25]     MobileNet  49.58   34.67
Improved B-CNN [21]  VGG16      45.00   33.33
NTS-Net [42]         ResNet50   51.67   42.67
fast-MPN-COV [20]    ResNet50   50.00   38.17
ViT [6]              ViT-B_16   51.25   39.33
DeiT-B [29]          ViT-B_16   53.75   38.67
TransFG [14]         ViT-B_16   45.84   38.67
FFVT                 ViT-B_16   57.92   44.17
Table 5: Ablation studies on the CottonCultivar80 (Cotton), SoyCultivarLocal (Soy.Loc) and CUB datasets. The best accuracy is highlighted in bold.

Method                          Cotton  Soy.Loc  CUB
ViT [6]                         51.25   39.33    90.85
ViT+Feature Fusion+SAWS         55.83   42.83    91.49
FFVT (ViT+Feature Fusion+MAWS)  57.92   44.17    91.65
Table 6: Ablation studies of the hyper-parameter K on the SoyCultivarLocal benchmark. The best accuracy is highlighted in bold.

K            10     11     12     13     14
Accuracy(%)  43.17  43.83  44.17  43.00  42.50
References
Thomas Berg and Peter N Belhumeur. Poof: Part-based one-vs.-one features for fine-grained categorization, face verification, and attribute estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 955-962, 2013.

Wei Chen, Tie-Yan Liu, Yanyan Lan, Zhi-Ming Ma, and Hang Li. Ranking measures and loss functions in learning to rank. Advances in Neural Information Processing Systems, 22:315-323, 2009.

Yue Chen, Yalong Bai, Wei Zhang, and Tao Mei. Destruction and construction learning for fine-grained image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5157-5166, 2019.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

Yao Ding, Yanzhao Zhou, Yi Zhu, Qixiang Ye, and Jianbin Jiao. Selective sparse sampling for fine-grained image recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6599-6608, 2019.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.

Ruoyi Du, Dongliang Chang, Ayan Kumar Bhunia, Jiyang Xie, Zhanyu Ma, Yi-Zhe Song, and Jun Guo. Fine-grained visual classification via progressive multi-granularity training of jigsaw patches. In European Conference on Computer Vision, pages 153-168. Springer, 2020.

Abhimanyu Dubey, Otkrist Gupta, Ramesh Raskar, and Nikhil Naik. Maximum-entropy fine-grained classification. arXiv preprint arXiv:1809.05934, 2018.

Alaaeldin El-Nouby, Natalia Neverova, Ivan Laptev, and Hervé Jégou. Training vision transformers for image retrieval. arXiv preprint arXiv:2102.05644, 2021.

Jianlong Fu, Heliang Zheng, and Tao Mei. Look closer to see better: Recurrent attention convolutional neural network for fine-grained image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4438-4446, 2017.

Yongsheng Gao and Maylor KH Leung. Face recognition using line edge map. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(6):764-779, 2002.

Yu Gao, Xintong Han, Xun Wang, Weilin Huang, and Matthew Scott. Channel interaction networks for fine-grained image categorization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 10818-10825, 2020.

Weifeng Ge, Xiangru Lin, and Yizhou Yu. Weakly supervised complementary parts models for fine-grained image classification from the bottom up. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3034-3043, 2019.

Ju He, Jie-Neng Chen, Shuai Liu, Adam Kortylewski, Cheng Yang, Yutong Bai, Changhu Wang, and Alan Yuille. Transfg: A transformer architecture for fine-grained recognition. arXiv preprint arXiv:2103.07976, 2021.

Xiangteng He and Yuxin Peng. Weakly supervised learning of part selection model with spatial constraints for fine-grained image classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017.

Shaoli Huang, Zhe Xu, Dacheng Tao, and Ya Zhang. Part-stacked cnn for fine-grained visual categorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1173-1182, 2016.

Ruyi Ji, Longyin Wen, Libo Zhang, Dawei Du, Yanjun Wu, Chen Zhao, Xianglong Liu, and Feiyue Huang. Attention convolutional binary neural tree for fine-grained visual categorization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10468-10477, 2020.

Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Fei-Fei Li. Novel dataset for fine-grained image categorization: Stanford dogs. In Proc. CVPR Workshop on Fine-Grained Visual Categorization (FGVC), volume 2. Citeseer, 2011.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25:1097-1105, 2012.

Peihua Li, Jiangtao Xie, Qilong Wang, and Zilin Gao. Towards faster training of global covariance pooling networks by iterative matrix square root normalization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 947-955, 2018.

Tsung-Yu Lin and Subhransu Maji. Improved bilinear pooling with cnns. arXiv preprint arXiv:1707.06772, 2017.

Chuanbin Liu, Hongtao Xie, Zheng-Jun Zha, Lingfeng Ma, Lingyun Yu, and Yongdong Zhang. Filtration and distillation: Enhancing region attention for fine-grained visual categorization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 11555-11562, 2020.

Wei Luo, Xitong Yang, Xianjie Mo, Yuheng Lu, Larry S Davis, Jun Li, Jian Yang, and Ser-Nam Lim. Cross-x learning for fine-grained visual categorization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8242-8251, 2019.

Wei Luo, Hengmin Zhang, Jun Li, and Xiu-Shen Wei. Learning semantically enhanced feature for fine-grained image classification. IEEE Signal Processing Letters, 27:1545-1549, 2020.

Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510-4520, 2018.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Guolei Sun, Hisham Cholakkal, Salman Khan, Fahad Khan, and Ling Shao. Fine-grained recognition: Accounting for subtle differences between similar classes. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 12047-12054, 2020.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818-2826, 2016.
Training data-efficient image transformers & distillation through attention. Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, arXiv:2012.12877Alexandre Sablayrolles, and Hervé Jégou. arXiv preprintHugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablay- rolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. arXiv preprint arXiv:2012.12877, 2020.
Multimodal transformer for unaligned multimodal language sequences. Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, Zico Kolter, Louis-Philippe Morency, Ruslan Salakhutdinov, Proceedings of the conference. Association for Computational Linguistics. Meeting. the conference. Association for Computational Linguistics. MeetingNIH Public Access20196558Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. Multimodal transformer for unaligned multi- modal language sequences. In Proceedings of the conference. Association for Compu- tational Linguistics. Meeting, volume 2019, page 6558. NIH Public Access, 2019.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, Illia Polosukhin, arXiv:1706.03762Attention is all you need. arXiv preprintAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.
The caltech-ucsd birds. Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, Serge Belongie, Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. 2011.
Boosted efficientnet: detection of lymph node metastases in breast cancer using convolutional neural networks. Jun Wang, Qianying Liu, Haotian Xie, Zhaogang Yang, Hefeng Zhou, Cancers. 134661Jun Wang, Qianying Liu, Haotian Xie, Zhaogang Yang, and Hefeng Zhou. Boosted efficientnet: detection of lymph node metastases in breast cancer using convolutional neural networks. Cancers, 13(4):661, 2021.
EAR-NET: Error Attention Refining Network For Retinal Vessel Segmentation. Jun Wang, Zhao Yang, Linglong Qian, Xiaohan Yu, Yongsheng Gao, 2021 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEEJun Wang, Zhao Yang, Linglong Qian, Xiaohan Yu, and Yongsheng Gao. EAR-NET: Error Attention Refining Network For Retinal Vessel Segmentation. In 2021 Interna- tional Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2021.
Mask guided attention for fine-grained patchy image classification. Jun Wang, Xiaohan Yu, Yongsheng Gao, 2021 IEEE International Conference on Image Processing (ICIP). Jun Wang, Xiaohan Yu, and Yongsheng Gao. Mask guided attention for fine-grained patchy image classification. In 2021 IEEE International Conference on Image Process- ing (ICIP), pages 1044-1048, 2021.
PGTRNet: Two-phase Weakly Su-pervisedObject Detection with Pseudo Ground Truth Refining. Jun Wang, Hefeng Zhou, Xiaohan Yu, arXiv:2104.00231arXiv preprintJun Wang, Hefeng Zhou, and Xiaohan Yu. PGTRNet: Two-phase Weakly Su- pervisedObject Detection with Pseudo Ground Truth Refining. arXiv preprint arXiv:2104.00231, 2021.
Learning a discriminative filter bank within a cnn for fine-grained recognition. Yaming Wang, I Vlad, Larry S Morariu, Davis, Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. the IEEE conference on Computer Vision and Pattern RecognitionYaming Wang, Vlad I Morariu, and Larry S Davis. Learning a discriminative filter bank within a cnn for fine-grained recognition. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 4148-4157, 2018.
Weakly supervised fine-grained image classification via correlation-guided discriminative learning. Zhihui Wang, Shijie Wang, Pengbo Zhang, Haojie Li, Wei Zhong, Jianjun Li, Proceedings of the 27th ACM International Conference on Multimedia. the 27th ACM International Conference on MultimediaZhihui Wang, Shijie Wang, Pengbo Zhang, Haojie Li, Wei Zhong, and Jianjun Li. Weakly supervised fine-grained image classification via correlation-guided discrimina- tive learning. In Proceedings of the 27th ACM International Conference on Multimedia, pages 1851-1860, 2019.
Grassmann pooling as compact homogeneous bilinear pooling for fine-grained visual classification. Xing Wei, Yue Zhang, Yihong Gong, Jiawei Zhang, Nanning Zheng, Proceedings of the European Conference on Computer Vision (ECCV). the European Conference on Computer Vision (ECCV)Xing Wei, Yue Zhang, Yihong Gong, Jiawei Zhang, and Nanning Zheng. Grassmann pooling as compact homogeneous bilinear pooling for fine-grained visual classification. In Proceedings of the European Conference on Computer Vision (ECCV), pages 355- 370, 2018.
The application of two-level attention models in deep convolutional neural network for fine-grained image classification. Tianjun Xiao, Yichong Xu, Kuiyuan Yang, Jiaxing Zhang, Yuxin Peng, Zheng Zhang, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionTianjun Xiao, Yichong Xu, Kuiyuan Yang, Jiaxing Zhang, Yuxin Peng, and Zheng Zhang. The application of two-level attention models in deep convolutional neural network for fine-grained image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 842-850, 2015.
Hierarchical part matching for fine-grained visual categorization. Lingxi Xie, Qi Tian, Richang Hong, Shuicheng Yan, Bo Zhang, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer VisionLingxi Xie, Qi Tian, Richang Hong, Shuicheng Yan, and Bo Zhang. Hierarchical part matching for fine-grained visual categorization. In Proceedings of the IEEE Interna- tional Conference on Computer Vision, pages 1641-1648, 2013.
Learning to navigate for fine-grained classification. Ze Yang, Tiange Luo, Dong Wang, Zhiqiang Hu, Jun Gao, Liwei Wang, Proceedings of the European Conference on Computer Vision (ECCV). the European Conference on Computer Vision (ECCV)Ze Yang, Tiange Luo, Dong Wang, Zhiqiang Hu, Jun Gao, and Liwei Wang. Learning to navigate for fine-grained classification. In Proceedings of the European Conference on Computer Vision (ECCV), pages 420-435, 2018.
Leaf image retrieval using combined feature of vein and contour. Xiaohan Yu, Shengwu Xiong, Yongsheng Gao, 2015 International Conference on Image and Vision Computing New Zealand (IVCNZ). IEEEXiaohan Yu, Shengwu Xiong, and Yongsheng Gao. Leaf image retrieval using com- bined feature of vein and contour. In 2015 International Conference on Image and Vision Computing New Zealand (IVCNZ), pages 1-6. IEEE, 2015.
Multiscale crossing representation using combined feature of contour and venation for leaf image identification. Xiaohan Yu, Shengwu Xiong, Yongsheng Gao, Yang Zhao, Xiaohui Yuan, 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEEXiaohan Yu, Shengwu Xiong, Yongsheng Gao, Yang Zhao, and Xiaohui Yuan. Multi- scale crossing representation using combined feature of contour and venation for leaf image identification. In 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), pages 1-6. IEEE, 2016.
Multiscale contour steered region integral and its application for cultivar classification. Xiaohan Yu, Yongsheng Gao, Shengwu Xiong, Xiaohui Yuan, IEEE Access. 7Xiaohan Yu, Yongsheng Gao, Shengwu Xiong, and Xiaohui Yuan. Multiscale contour steered region integral and its application for cultivar classification. IEEE Access, 7: 69087-69100, 2019.
Patchy image structure classification using multi-orientation region transform. Xiaohan Yu, Yang Zhao, Yongsheng Gao, Shengwu Xiong, Xiaohui Yuan, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence34Xiaohan Yu, Yang Zhao, Yongsheng Gao, Shengwu Xiong, and Xiaohui Yuan. Patchy image structure classification using multi-orientation region transform. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 12741-12748, 2020.
Maskcov: A random mask covariance network for ultra-fine-grained visual categorization. Pattern Recognition. Xiaohan Yu, Yang Zhao, Yongsheng Gao, Shengwu Xiong, 108067Xiaohan Yu, Yang Zhao, Yongsheng Gao, and Shengwu Xiong. Maskcov: A random mask covariance network for ultra-fine-grained visual categorization. Pattern Recogni- tion, page 108067, 2021.
Benchmark platform for ultra-fine-grained visual categorization beyond human performance. Xiaohan Yu, Yang Zhao, Yongsheng Gao, Shengwu Xiong, Xiaohui Yuan, International Conference on Computer Vision (ICCV). 2021Xiaohan Yu, Yang Zhao, Yongsheng Gao, Shengwu Xiong, and Xiaohui Yuan. Bench- mark platform for ultra-fine-grained visual categorization beyond human performance. In International Conference on Computer Vision (ICCV), 2021.
Local derivative pattern versus local binary pattern: face recognition with high-order local pattern descriptor. Baochang Zhang, Yongsheng Gao, Sanqiang Zhao, Jianzhuang Liu, IEEE transactions on image processing. 192Baochang Zhang, Yongsheng Gao, Sanqiang Zhao, and Jianzhuang Liu. Local deriva- tive pattern versus local binary pattern: face recognition with high-order local pattern descriptor. IEEE transactions on image processing, 19(2):533-544, 2009.
Diversified visual attention networks for fine-grained object classification. Bo Zhao, Xiao Wu, Jiashi Feng, Qiang Peng, Shuicheng Yan, IEEE Transactions on Multimedia. 196Bo Zhao, Xiao Wu, Jiashi Feng, Qiang Peng, and Shuicheng Yan. Diversified visual attention networks for fine-grained object classification. IEEE Transactions on Multi- media, 19(6):1245-1256, 2017.
Learning deep part-aware embedding for person retrieval. Yang Zhao, Chunhua Shen, Xiaohan Yu, Hao Chen, Yongsheng Gao, Shengwu Xiong, Pattern Recognition. 116107938Yang Zhao, Chunhua Shen, Xiaohan Yu, Hao Chen, Yongsheng Gao, and Shengwu Xiong. Learning deep part-aware embedding for person retrieval. Pattern Recognition, 116:107938, 2021.
Learning discriminative region representation for person retrieval. Yang Zhao, Xiaohan Yu, Yongsheng Gao, Chunhua Shen, Pattern Recognition. 121108229Yang Zhao, Xiaohan Yu, Yongsheng Gao, and Chunhua Shen. Learning discriminative region representation for person retrieval. Pattern Recognition, 121:108229, 2022.
Learning deep bilinear transformation for fine-grained image representation. Heliang Zheng, Jianlong Fu, Zheng-Jun Zha, Jiebo Luo, arXiv:1911.03621arXiv preprintHeliang Zheng, Jianlong Fu, Zheng-Jun Zha, and Jiebo Luo. Learning deep bilinear transformation for fine-grained image representation. arXiv preprint arXiv:1911.03621, 2019.
Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, H S Philip, Torr, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionSixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip HS Torr, et al. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In Proceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6881-6890, 2021.
Learning attentive pairwise interaction for fine-grained classification. Peiqin Zhuang, Yali Wang, Yu Qiao, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence34Peiqin Zhuang, Yali Wang, and Yu Qiao. Learning attentive pairwise interaction for fine-grained classification. In Proceedings of the AAAI Conference on Artificial Intelli- gence, volume 34, pages 13130-13137, 2020.
∆F = 1 constraints on composite Higgs models with LR parity
7 Nov 2012
Natascia Vignaroli
Department of Physics and Astronomy, Iowa State University, Ames, IA 50011, USA
Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA
We analyze the bounds on the spectrum of composite Higgs models (CHM) that come from flavor observables, by means of simple two-site effective Lagrangians, which incorporate a custodial symmetry and a Left-Right parity and which could also be adopted in further phenomenological studies on CHM. We derive, in particular, an important constraint on the masses of the (t_L, b_L) partners, which does not depend on the flavor structure of the sector beyond the SM. This bound is obtained from the "infrared" contribution to b → sγ induced by the flavor-conserving effective vertex W t_R b_R. We find that the presence of a custodial symmetry can play a role in protecting this effective coupling and, as a consequence, in attenuating the constraint, which, however, remains of the order of 1 TeV. In addition to this bound, we calculate the constraints from the "ultraviolet" contribution to b → sγ, induced by loops of heavy fermions, and to ε′/ε_K.
Introduction
A possible solution to the hierarchy problem is based on an analogy with the pion mass stabilization in QCD: the Higgs, similarly to the pion, might be a composite state, generated by a new strong dynamics; as such, its mass is not sensitive to radiative corrections above the compositeness scale, assumed to be of the order of the TeV scale. A further protection, which allows the Higgs to be naturally lighter than the other resonances, exists if the composite Higgs is also the pseudo-Goldstone boson of a spontaneously broken global symmetry [1]. A pseudo-Goldstone boson Higgs is expected to be light and, as such, in agreement with the indication from the LEP electroweak precision data (EWPD). In this project we will reconsider the bounds on the spectrum of Composite Higgs Models (CHM) that come from flavor observables, with a special focus on b → sγ. Instead of considering a full theory, we will work in an effective description valid at low energy. In particular, we will refer to a "two-site" (TS) description [2,3], where two sectors, the weakly-coupled sector of the elementary fields and the composite sector, which comprises the Higgs, are linearly coupled to each other through mass mixing terms [4]. After diagonalization the elementary/composite basis rotates to the mass eigenstate one, made up of SM and heavy states that are admixtures of elementary and composite modes. Heavier particles have larger degrees of compositeness: heavy SM particles, like the top, are more composite, while the light ones are almost elementary. In order for composite Higgs models to be compatible with the LEP precision data, the presence of a custodial symmetry in the composite sector is strongly suggested, to avoid large corrections to the ρ parameter. The absence of large Flavor-Changing Neutral Currents is achieved, instead, by a sort of GIM mechanism, which naturally emerges when the connection between the elementary and the strong sector proceeds via linear couplings [8].
In the absence of a symmetry protection, the LEP data also point toward a small degree of compositeness of the left-handed bottom quark (small corrections to Z b_L b_L), and, by gauge invariance, of the left-handed top as well. This implies that, in order to obtain a heavy enough top quark, it is necessary to have an almost fully composite right-handed top quark. It has been shown, however, that the corrections to Z b_L b_L can be suppressed if the custodial symmetry of the strong sector includes a Left-Right parity [7]. This can allow for a smaller right-handed top compositeness. In order to study the phenomenology at energies lower than the compositeness scale, we derive two different models which incorporate a custodial symmetry and a Left-Right parity. We label such models as TS5 and TS10. They describe the low-energy regime of the Minimal Composite Higgs Models (MCHM) defined in Refs. [5,6], in the limit in which only the leading terms in an expansion in powers of the Higgs field are retained 1 . In MCHM the Higgs arises as the pseudo-Goldstone boson associated to the SO(5) → O(4) breaking in the composite sector, where O(4) includes SO(4) ∼ SU(2)_L × SU(2)_R as well as a parity P_LR which exchanges SU(2)_L with SU(2)_R. Composite fermions can be embedded in a 5 = (2, 2) + (1, 1) representation of SO(5) in the TS5 model and in a 10 = (2, 2) + (1, 3) + (3, 1) in the TS10. TS5 and TS10 extend the two-site description of [2,3] to consider 5 and 10 SO(5) representations for composite fermions. In particular, the TS5 model extends the 'two site' model of Ref. [3] to include the composite fermions needed to give mass to the bottom quark.
We find two important bounds on the masses of the heavy fermions which do not depend on the flavor structure of the sector beyond the SM (BSM). The first comes from the measurement of the Z b_L b_L coupling, which we already mentioned and which can be suppressed by assuming a P_LR symmetry. The second is obtained from the infrared (IR) contribution to b → sγ induced by the flavor-conserving effective vertex W t_R b_R. In composite Higgs models there are two classes of effects that lead to a shift of the b → sγ decay rate compared to the SM prediction: loops of heavy fermion resonances from the strong sector give an ultraviolet (UV) local contribution; they generate, at the compositeness scale, the flavor-violating dipole operators O_7 and O′_7, which define the effective Hamiltonian for the b → sγ decay. The virtual exchange of heavy resonances also generates the effective V+A interaction of the W boson with the SM quarks, W t_R b_R, which in turn leads to a shift of b → sγ via a loop of SM particles. This latter IR contribution is enhanced by a chiral factor m_t/m_b and, since in this case the flavor violation comes entirely from the SM V−A current t̄_L γ^µ s_L, it gives a Minimal Flavor Violating (MFV) lower bound on the heavy fermion masses. We also discuss the role of a parity P_C, which is a subgroup of the custodial SU(2)_V, in protecting the effective coupling W b_R t_R. In general, stronger bounds can be obtained from the UV CHM contribution to b → sγ and from ε′/ε_K [18]; however, these latter bounds are model dependent and in principle could be loosened by acting on the NP flavor structure (see, for example, [27]). The bound from the IR contribution to b → sγ, on the other hand, is robust, since it is an MFV effect.
The paper is organized as follows: in sec. 2 we introduce our two-site models; in sec. 3 we discuss the bound from b → sγ: we first calculate, by NDA, the MFV bounds from the infrared contribution in generic CHM and in the specific TS5 and TS10 models, and we then calculate the non-MFV constraints from b → sγ and from ε′/ε_K; in sec. 4 we draw our conclusions.
Effective theories for composite Higgs models
The idea behind Composite Higgs Models is that the Electroweak Symmetry Breaking (EWSB) may be triggered by a new strong dynamics, in analogy with the chiral symmetry breaking in QCD. In these theories a new strong sector couples to a weakly coupled sector, which coincides with that of the Standard Model without the Higgs. The Higgs, like the pion in QCD, is a composite state produced by this strong dynamics. Its composite nature allows for a solution to the hierarchy problem: indeed, its mass is not sensitive to radiative corrections above the compositeness scale, assumed to be of the order of the TeV. The EWSB is transmitted to the SM fermions by means of linear couplings [4] (generated by some UV physics at the UV scale Λ_UV) between elementary fermions ψ and composite fermionic operators O:
∆L = λ ψ̄ O + h.c.        (1)
This way of communicating the EWSB can give a natural explanation of the hierarchies in the quark masses (through the RG evolution of the composite-elementary couplings λ_i) and avoids the tension which occurs when one tries to generate large enough quark masses while, at the same time, suppressing FCNC processes 2 . As a consequence of the linear couplings, a scenario of Partial Compositeness of the SM particles emerges. At energies below the compositeness scale, a composite operator O can excite from the vacuum a tower of composite fermions of increasing mass. The linear couplings (1) thus turn into mass mixing terms between elementary fermions and towers of composite fermions χ_n:
⟨0|O|χ_n⟩ = ∆_n ,        L_mix = Σ_n ∆_n ψ̄ χ_n + h.c. ,        (2)

L = L_el + L_com + L_mix .        (3)
Because of the mass mixing terms, the physical eigenstates, made up of SM and (new) heavy states, are admixtures of elementary and composite modes. The low-energy phenomenology of such theories can be exhaustively studied, and calculations can be made easier, by truncating each tower of composite fermions to its first resonance, while the other heavy states are neglected [2]. For example, the effective Lagrangian describing one elementary chiral field ψ_L and its composite partner χ is:
L = ψ̄_L i∂̸ ψ_L + χ̄ (i∂̸ − m_*) χ − ∆_L ψ̄_L χ_R + h.c. .        (4)
We can rotate the fermions from the elementary/composite basis to the mass eigenstate one, the light(SM)/heavy basis, according to:
tan ϕ L = ∆ L m * |light = cos ϕ L |ψ L − sin ϕ L |χ L |heavy = sin ϕ L |ψ L + cos ϕ L |χ L(5)
Our eigenstate fields are thus a heavy fermion of mass m = √(m_*^2 + ∆_L^2) and a light fermion, to be identified with the SM field, which will acquire a mass after the EWSB. These fields, as we see, are superpositions of elementary and composite states. The angle φ_L parametrizes the degree of compositeness of the physical fields. In particular, the SM fermion has a degree of compositeness sin φ_L ≡ ∆_L/√(m_*^2 + ∆_L^2) (and a degree of elementarity cos φ_L ≡ m_*/√(m_*^2 + ∆_L^2)); the mass mixing parameter ∆_L can be naturally much smaller than the mass m_* of the composite fermion 3 , so SM fermions are in general mostly elementary, with a small degree of compositeness, while heavy fermions are mostly composite, with a small degree of elementarity. We have a similar rotation, with angle φ_R, in the case of right-handed fermions. SM fermions acquire a mass after the EWSB; since the origin of this breaking resides, by assumption, in the composite sector (the Higgs is a fully composite state), the SM fermion mass arises from the composite part of the left-handed and right-handed SM fields:
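The rotation of Eq. (5) can be checked numerically: treating the mass terms of Eq. (4) as a mixing vector between the left-handed pair (ψ_L, χ_L) and χ_R, a singular value decomposition reproduces the heavy mass √(m_*^2 + ∆_L^2) and the compositeness angle φ_L. A minimal sketch, with purely illustrative parameter values:

```python
import numpy as np

delta_L, m_star = 0.6, 1.5  # illustrative mixing and composite mass (TeV)

# Mass terms of Eq. (4): (psi_L, chi_L) couple to chi_R through
# the column vector (Delta_L, m_*)^T in the elementary/composite basis.
M = np.array([[delta_L], [m_star]])

U, s, _ = np.linalg.svd(M)

# one heavy eigenstate of mass sqrt(m_*^2 + Delta_L^2) ...
assert np.isclose(s[0], np.hypot(delta_L, m_star))

# ... mixed with angle phi_L, tan(phi_L) = Delta_L / m_* (Eq. (5)):
# |heavy> = sin(phi_L)|psi_L> + cos(phi_L)|chi_L>
phi_L = np.arctan2(delta_L, m_star)
assert np.allclose(np.abs(U[:, 0]), [np.sin(phi_L), np.cos(phi_L)])

# the orthogonal combination (massless before the EWSB) is the SM field,
# with degree of compositeness sin(phi_L)
```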
m_ψ = (Y_* v/√2) sin φ_L sin φ_R ,        (6)
where Y_* is a Yukawa coupling among composites, from which the SM Yukawa coupling y = Y_* sin φ_L sin φ_R originates. In the following we will assume that the strong sector is flavor anarchic, so that there is no large hierarchy between the elements within each matrix Y_* and the hierarchy in the masses and mixings of the SM quarks comes entirely from the hierarchy in the elementary/composite mixing angles (such an 'anarchic scenario' has been extensively studied in the framework of 5D warped models, see Refs. [8, 12-15]).

Experimental data give hints on the type of the new strong dynamics responsible for the EWSB. The LEP precision data suggest the presence of a custodial symmetry in the composite sector to avoid large corrections to the ρ parameter. In order to protect ρ (or, equivalently, the Peskin-Takeuchi T parameter) the composite sector must respect, minimally, a global symmetry:
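The anarchic mechanism can be illustrated numerically: with O(1) composite Yukawas and hierarchical degrees of compositeness per family, the effective SM Yukawa matrix y = s_L Y_* s_R inherits hierarchical singular values (proportional to the quark masses). The mixing values below are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
Y_star = rng.uniform(0.5, 2.0, size=(3, 3))   # anarchic O(1) composite Yukawa

# hierarchical degrees of compositeness sin(phi) per family (illustrative)
s_L = np.diag([0.002, 0.02, 0.4])
s_R = np.diag([0.005, 0.1, 0.8])

y_SM = s_L @ Y_star @ s_R                     # effective SM Yukawa matrix
sv = np.linalg.svd(y_SM, compute_uv=False)    # descending; ~ quark Yukawas

# the hierarchy of the mixing angles, not of Y_*, drives the mass hierarchy
assert sv[0] / sv[-1] > 40
```

The assertion holds for any anarchic draw in the chosen O(1) range: the largest singular value is bounded below by the third-family entry, while the smallest is bounded above by the tiny first-family row.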
SU(2)_L × SU(2)_R × U(1)_X ,

where SU(2)_L × SU(2)_R is broken to the diagonal SU(2)_V after the EWSB; the unbroken SU(2)_V invariance acts as a custodial symmetry, so that ρ = 1 at tree level.

The SM electroweak group SU(2)_L × U(1)_Y can be embedded into SU(2)_L × SU(2)_R × U(1)_X, so that the hypercharge is realized as Y = T^3_R + X.
The Composite Higgs transforms as a bidoublet (2, 2) under SU(2)_L × SU(2)_R, H ≡ (H, H^c), where H is the Composite Higgs doublet and H^c = iσ_2 H* is its conjugate. The H VEV breaks the SU(2)_L × SU(2)_R × U(1)_X group down to SU(2)_V × U(1)_X and leads to the EWSB. Therefore, we have the following relation among charges:

Q = T^3_L + T^3_R + X = T^3_L + Y .        (7)
This scheme can also result from models where the Higgs arises as the pseudo-Goldstone boson associated to a SO(5) → SO(4) ∼ SU(2)_L × SU(2)_R breaking in the composite sector, or to a SO(5) → O(4) breaking, where O(4) includes SO(4) ∼ SU(2)_L × SU(2)_R as well as a parity P_LR which exchanges SU(2)_L with SU(2)_R. This enhanced custodial symmetry can suppress the corrections to the coupling Z b_L b_L, which are strongly constrained by LEP data [7].
P_LR and P_C symmetries
In MCHM [5] the Higgs arises as the pseudo-Goldstone boson associated to the SO(5) → O(4) breaking in the composite sector, where the enhanced custodial symmetry O(4) includes SO(4) ∼ SU(2)_L × SU(2)_R as well as a parity P_LR which exchanges SU(2)_L with SU(2)_R. As shown in [7], this P_LR parity, as well as the P_C symmetry, subgroup of the custodial O(4), can protect the coupling Z b_L b_L against large corrections from the composite sector. Each composite operator has definite left and right isospin quantum numbers, T_{L,R}, and third components, T^3_{L,R}. We can also univocally assign to each SM field definite quantum numbers, T_{L,R}, T^3_{L,R}, corresponding to those of the composite operator to which it couples. P_LR and P_C are symmetries of the composite sector: P_LR exchanges SU(2)_L with SU(2)_R, and P_C is the subgroup of SU(2)_V that transforms |T_L, T_R; T^3_L, T^3_R⟩ → |T_L, T_R; −T^3_L, −T^3_R⟩ (SO(3) vectors transform with P_C = diag(1, −1, −1)). For P_LR (P_C) to be a symmetry also of the interaction terms between SM fields and composite operators, ∆L = λ ψ̄ O + h.c., the SM fields ψ have to be eigenstates of P_LR (P_C). This implies:

T_L = T_R  (T^3_L = T^3_R)        (P_LR invariance)        (8)

T^3_L = T^3_R = 0        (P_C invariance) .        (9)
If the above conditions hold, we can see that the coupling Z ψ̄ψ,

g_ψ = (g/cos θ_W) (Q^3_L − Q sin^2 θ_W) ,        (10)

is protected against large corrections. Indeed, the electric charge Q is conserved, and the charge of the SU(2)_L third component, Q^3_L, is conserved by custodial invariance plus P_LR symmetry, as well as by P_C symmetry. By custodial U(1)_V invariance, δQ^3_V = δQ^3_R + δQ^3_L = 0; if there is also a P_LR invariance, δQ^3_R = δQ^3_L, therefore δQ^3_L = 0.
The same conservation, δQ^3_L = 0, is obtained by P_C invariance: the SM W^3_L is odd under P_C, W^3_L → −W^3_L; if ψ is a P_C eigenstate it must have T^3_L = T^3_R = 0; then the current ψ̄γ^µψ is even under P_C and cannot couple to W^3_L, which is odd. We will show (sec. 3.2.1) that the P_C symmetry can also protect in a similar way the effective coupling W t_R b_R and, as a consequence, it can be responsible for an attenuation of the bound on heavy fermion masses coming from the process b → sγ.
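The protection mechanism can be made concrete with Eq. (10): the Z coupling shifts only through δQ^3_L, which vanishes under P_LR or P_C invariance. A small numeric sketch, with illustrative values for the gauge coupling and weak mixing angle:

```python
import math

g = 0.65      # SU(2)_L gauge coupling (illustrative)
sw2 = 0.231   # sin^2(theta_W) (illustrative)
cw = math.sqrt(1.0 - sw2)

def z_coupling(Q3L, Q):
    """Tree-level Z coupling of Eq. (10): (g/cos theta_W)(Q3L - Q sin^2 theta_W)."""
    return g / cw * (Q3L - Q * sw2)

g_bL = z_coupling(-0.5, -1.0 / 3.0)   # SM Z b_L b_L coupling

# a composite-sector shift dQ3L would move the coupling by (g/cw) * dQ3L;
# P_LR (or P_C) invariance enforces dQ3L = 0, so the coupling is protected
dQ3L = 0.0
assert z_coupling(-0.5 + dQ3L, -1.0 / 3.0) == g_bL
assert abs(g_bL + 0.3135) < 1e-3
```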
In what follows we present the two-site models, TS5 and TS10, which incorporate a custodial symmetry and a P_LR parity. 4
TS5
In the TS5 model, we consider composite fermions filling the following SO(4) × U(1)_X ∼ SU(2)_L × SU(2)_R × U(1)_X representations:

Q = [ T          T_{5/3}
      B          T_{2/3} ] = (2, 2)_{2/3} ,        T̃ = (1, 1)_{2/3}

Q′ = [ B_{−1/3}   T′
       B_{−4/3}   B′ ] = (2, 2)_{−1/3} ,        B̃ = (1, 1)_{−1/3}        (11)

and the composite Higgs in:

H = [ φ_0^†    φ^+
      −φ^−     φ_0 ] = (2, 2)_0        (12)
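The electric charges of the components in Eq. (11) follow from Eq. (7), Q = T^3_L + T^3_R + X; a quick check with exact rationals, assuming the (T^3_L, T^3_R) slot assignments implied by the bidoublet layout:

```python
from fractions import Fraction as F

def charge(T3L, T3R, X):
    return T3L + T3R + X      # Eq. (7)

h = F(1, 2)
# (T3L, T3R) of the four bidoublet slots, read row-wise as in Eq. (11)
slots = [(h, -h), (h, h), (-h, -h), (-h, h)]

q23 = [charge(t3l, t3r, F(2, 3)) for t3l, t3r in slots]
assert q23 == [F(2, 3), F(5, 3), F(-1, 3), F(2, 3)]    # T, T_5/3, B, T_2/3

q13 = [charge(t3l, t3r, F(-1, 3)) for t3l, t3r in slots]
assert q13 == [F(-1, 3), F(2, 3), F(-4, 3), F(-1, 3)]  # B_-1/3, T', B_-4/3, B'
```

Shifting X from 2/3 to −1/3 lowers every charge by one unit, which is why the two bidoublets share the (T^3_L, T^3_R) pattern but carry different exotic states (T_{5/3} versus B_{−4/3}).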
The SO(4) multiplets of composite fermions can be embedded into fundamentals 5_{2/3 (−1/3)} of SO(5) × U(1)_X, which decompose as 5_{2/3 (−1/3)} = (2, 2)_{2/3 (−1/3)} ⊕ (1, 1)_{2/3 (−1/3)} under SU(2)_L × SU(2)_R × U(1)_X (see Ref. [28] for a study of the same representations in a two-site description of SO(5)). We are thus introducing two classes of composite fermions: those filling a 5_{2/3} representation, with X charge X = 2/3, and those in a 5_{−1/3}, with X = −1/3.
We want to consider, indeed, the possibility that the SM quark doublet (t_L, b_L) couples to two different BSM operators, Q_{2/3} and Q_{−1/3}, the first responsible for generating the top mass, the second for generating the bottom mass. (t_L, b_L) is linearly coupled to (T, B) through a mass mixing term we call ∆_{L1}, and to (T′, B′) through a mass mixing term ∆_{L2}. t_R and b_R couple, respectively, to T̃, through a mass mixing term ∆_{R1}, and to B̃, through a mass mixing term ∆_{R2}. The fermionic Lagrangian reads, in the elementary/composite basis:
L = \bar q^i_L i\slashed\partial q^i_L + \bar u^i_R i\slashed\partial u^i_R + \bar d^i_R i\slashed\partial d^i_R
    + \mathrm{Tr}\{\bar Q (i\slashed\partial − M_{Q*}) Q\} + \bar{\tilde T} (i\slashed\partial − M_{\tilde T*}) \tilde T + Y_{*U} \mathrm{Tr}\{\bar Q H\} \tilde T
    + \mathrm{Tr}\{\bar Q' (i\slashed\partial − M_{Q'*}) Q'\} + \bar{\tilde B} (i\slashed\partial − M_{\tilde B*}) \tilde B + Y_{*D} \mathrm{Tr}\{\bar Q' H\} \tilde B
    − \Delta_{L1} \bar q^3_L (T, B) − \Delta_{R1} \bar t_R \tilde T − \Delta_{L2} \bar q^3_L (T', B') − \Delta_{R2} \bar b_R \tilde B + \mathrm{h.c.} ,        (13)
where the superscript i runs over the three SM families (i = 1, 2, 3), with q^3_L ≡ (t_L, b_L), u^3 ≡ t_R, d^3 ≡ b_R.
By construction, the elementary fields couple to the composite ones only through the mass mixing terms, shown in the last row of (13). This implies that the SM Yukawa couplings arise only through the coupling of the Higgs to the composite fermions and their mixings with the elementary fermions. We further assume that the strong sector is flavor anarchic, so that the hierarchy in the masses and mixings of the SM quarks comes from the hierarchy in the mixing parameters ∆^i_{L,R}. In this case the mixing parameters of the light elementary quarks can be safely neglected and one can focus on just the third generation of composite fermions. 5 As a consequence of the elementary/composite mass mixings, the top and the bottom masses arise, after the EWSB, from the Yukawa terms in the Lagrangian (13), Y_{*U} Tr{Q̄H}T̃ and Y_{*D} Tr{Q̄′H}B̃. The top mass will be proportional to ∆_{L1}∆_{R1} and the bottom mass to ∆_{L2}∆_{R2}. The small ratio between the bottom and the top quark masses can thus be obtained both for ∆_{L2} ≪ ∆_{L1} (∆_{R2} ∼ ∆_{R1}) and for ∆_{R2} ≪ ∆_{R1} (∆_{L2} ∼ ∆_{L1}). For t_R, b_R and their excited states, the rotation from the elementary/composite basis to the mass eigenstate (SM/heavy) basis is given by:
tan φ_R = ∆_R1/M_T̃* ,  s_R ≡ sin φ_R ,  c_R ≡ cos φ_R ;  tan φ_bR = ∆_R2/M_B̃* ,  s_bR ≡ sin φ_bR ,  c_bR ≡ cos φ_bR ;
t_R = c_R t_R^el − s_R T̃_R^com ,  T̃_R = s_R t_R^el + c_R T̃_R^com ;  b_R = c_bR b_R^el − s_bR B̃_R^com ,  B̃_R = s_bR b_R^el + c_bR B̃_R^com . (14)

s_R (s_bR) defines the degree of compositeness, ξ_tR (ξ_bR), of t_R (b_R); c_R (c_bR) defines that of T̃ (B̃), ξ_D̃.
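As an illustrative cross-check of the rotation (14) (not from the paper; the numbers below are arbitrary, in TeV), the elementary/composite mass mixing can be diagonalized numerically with an SVD: the heavy eigenvalue is √(M² + ∆²) and the mixing angle satisfies tan φ_R = ∆_R1/M_T̃*.

```python
import numpy as np

# Illustrative diagonalization of the t_R sector mixing of eq. (14):
# the mass terms pair T~_L with the row (Delta_R1, M_T~*) acting on
# the right-handed fields (t_R^el, T~_R^com). Values are arbitrary (TeV).
Delta_R1, M_Tt = 0.8, 1.5

Mmat = np.array([[Delta_R1, M_Tt]])           # 1x2 mass matrix
sv = np.linalg.svd(Mmat, compute_uv=False)    # nonzero singular value = heavy mass

M_heavy = np.hypot(Delta_R1, M_Tt)            # sqrt(M^2 + Delta^2), cf. eq. (17)
phi_R = np.arctan2(Delta_R1, M_Tt)            # tan(phi_R) = Delta_R1 / M_T~*
s_R = np.sin(phi_R)                           # degree of compositeness of t_R

print(sv[0], M_heavy, s_R)
```

The massless combination (zero singular value of the full square mass matrix) is the SM t_R before EWSB; the orthogonal one is the heavy T̃.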
We will diagonalize analytically the mixing among q^3_L and the corresponding excited states under the simplifying assumption ∆_L2 ≪ ∆_L1, which can naturally follow, for example, from the RG flow in the full theory [6]. The first two generations of elementary quarks do not need a field rotation from the elementary/composite basis to the mass eigenstate basis, since they do not mix with the composite fermions and can thus be directly identified with the corresponding SM states. We can see that in this model t_R and b_R are both P_C and P_LR eigenstates, since they couple to SU(2)_L × SU(2)_R singlets (T_L(T̃, B̃) = T_R(T̃, B̃), T^3_L(T̃, B̃) = T^3_R(T̃, B̃) = 0). Instead, t_L is a P_LR eigenstate only in the limit ∆_L1 = 0, in which it decouples from T (T^3_L(T) ≠ T^3_R(T)). Similarly, b_L is a P_LR eigenstate only for ∆_L2 = 0, in which case it decouples from B′ (T^3_L(B′) ≠ T^3_R(B′)). So far we have made field rotations to the mass eigenstate basis before the EWSB. After the EWSB, the SM top and bottom quarks acquire a mass, and the heavy masses get corrections of order (Y_* v/(√2 m_*))^2. In the following, we assume x ≡ Y_* v/(√2 m_*) ≪ 1 and compute all quantities at leading order in x.
2.2.1 ∆_L2 ≪ ∆_L1
In this case, since ∆_L2 ≪ ∆_L1, b_L is approximately a P_LR eigenstate, so we have an approximate custodial symmetry protection of Zb_Lb̄_L. The small ratio between the bottom and the top quark masses is obtained for ∆_L2 ≪ ∆_L1 (∆_R2 ∼ ∆_R1); we have:

m_t = (v/√2) Y_*U s_1 s_R , (15)
m_b = (v/√2) Y_*D s_2 s_bR , (16)
where s_1 = sin φ_L1 = ∆_L1/√(M_Q*^2 + ∆_L1^2) defines the (t_L, b_L) degree of compositeness, ξ_qL, and s_2 is a rotation angle proportional to ∆_L2, s_2 = (∆_L2/M_Q′*) cos φ_L1. The physical masses of the heavy fermions read:
M_T̃ = √(M_T̃*^2 + ∆_R1^2) ,  M_B̃ = √(M_B̃*^2 + ∆_R2^2) ,  M_T = M_B = √(M_Q*^2 + ∆_L1^2) ,  M_T5/3 = M_T2/3 = M_Q* = M_T c_1 ,  M_T′ = M_B′ = √(M_Q′*^2 + ∆_L2^2) ≈ M_Q′* ,  M_B−1/3 = M_B−4/3 = M_Q′* , (17)
where c 1 ≡ cos ϕ L1 is the degree of compositeness, ξ D , of the SU (2) L doublet D = (T, B). Details can be found in App. A.1.
In order for the strong sector to respect the custodial invariance, as we have shown, the composite fermions have to fill multiplets of SU(2)_L × SU(2)_R × U(1)_X. As a consequence, the heavy partner of the SM doublet q^3_L = (t_L, b_L), namely D = (T, B) (= 2_{1/6} under the SM electroweak group), is embedded in a larger multiplet, the bidoublet Q_{2/3} = (2, 2)_{2/3}, which includes another doublet of heavy fermions, (T_{5/3}, T_{2/3}) (= 2_{7/6}). The heavy fermions T_{5/3} and T_{2/3} in this latter doublet are called custodians. They sit in the same multiplet as the heavy partners of q^3_L, but they do not mix directly with the SM fermions. This implies that their masses tend to zero in the limit in which t_L becomes fully composite (see for example the discussion in [25]). This can be seen from eq. (17): M_T5/3(2/3) vanishes for c_1 = 0, i.e. for a fully composite t_L (s_1 = 1).
2.3 TS10

In the TS10 we consider composite fermions embedded in a 10_{2/3} representation of SO(5) × U(1)_X, which decomposes as 10_{2/3} = (2,2)_{2/3} ⊕ (1,3)_{2/3} ⊕ (3,1)_{2/3} under SU(2)_L × SU(2)_R × U(1)_X. We therefore refer to the following field content in the composite sector:

Q_{2/3} = ( T  T_{5/3} ; B  T_{2/3} ) = (2,2)_{2/3} ,  Q̃_{2/3} = ( T̃_{5/3}, T̃, B̃ ) = (1,3)_{2/3} ,  Q̂_{2/3} = ( T̂_{5/3}, T̂, B̂ ) = (3,1)_{2/3} ,  H = ( φ_0^†  φ^+ ; −φ^−  φ_0 ) = (2,2)_0 , (18)
and to the following fermionic Lagrangian in the elementary/composite basis:
L = q̄^3_L i∂ q^3_L + t̄_R i∂ t_R + b̄_R i∂ b_R + Tr{Q̄ (i∂ − M_Q*) Q} + Tr{Q̃ (i∂ − M_Q̃*) Q̃} + Tr{Q̂ (i∂ − M_Q̂*) Q̂} + Y_* Tr{Q̄ H Q̃} + Y_* Tr{Q̄ H Q̂} − ∆_L1 q̄^3_L (T, B) − ∆_R1 t̄_R T̃ − ∆_R2 b̄_R B̃ + h.c. . (19)
We have the following expressions for the top and bottom masses:
m_t = (v/2) Y_* s_1 s_R ,  m_b = (v/√2) Y_* s_1 s_bR , (20)
and for the heavy fermion physical masses:
M_T̃ = √(M_Q̃*^2 + ∆_R1^2) ,  M_B̃ = √(M_Q̃*^2 + ∆_R2^2) = M_T̃ c_R/c_bR ≈ M_T̃ c_R ,  M_T̃5/3 = M_T̂5/3 = M_T̂ = M_B̂ = M_T̃ c_R ,  M_T = M_B = √(M_Q*^2 + ∆_L1^2) ,  M_T2/3 = M_T5/3 = M_T c_1 . (21)
More details can be found in App. A.2. Besides the custodians T_{5/3} and T_{2/3}, which are light in the case of a composite q^3_L, the custodian T̃_{5/3} and the fermions in the Q̂_{2/3} triplet become light for a t_R with a large degree of compositeness (B̃ also becomes light in this case). In this model t_R and b_R are both not P_LR eigenstates, and only t_R is a P_C eigenstate, as a consequence of the couplings to Q̂ (T_L(T̂, B̂) ≠ T_R(T̂, B̂)); in particular, b_R is not a P_C eigenstate, since T^3_R(B̃) ≠ 0. b_L is exactly a P_LR eigenstate.
Zb_Lb̄_L in the TS Models
Shifts in the Z coupling to b_L, g_Lb, have been extensively studied in the literature; see, for example, the studies [29] in the context of Randall–Sundrum models and [30] in two-site descriptions. The shifts arise after the EWSB because of electroweak mixing among b_L and the heavy fermions. There is also a contribution from the mixing among neutral gauge bosons; however, this mixing is of order (v/M_*)^2 ≪ 1, where M_* stands for the heavy neutral boson mass, and we will neglect it in what follows. In two-site models without P_LR symmetry there is no custodial symmetry protection of Zb_Lb̄_L, and the shift in g_Lb is large. Naive Dimensional Analysis (NDA) [10] gives (see, for example, [16, 26]):

δg_Lb/g_Lb ∼ m_t^2/(M_Q*^2 s_R^2) ∼ Y_*^2 v^2 s_1^2/M_Q*^2 . (22)
This formula has been obtained by approximating q^2 = M_Z^2 ≈ 0. At q^2 = M_Z^2 the shift receives O(M_Z^2/M_Q*^2) corrections:

δg_Lb/g_Lb ∼ M_Z^2 s_1^2/M_Q*^2 ∼ (v^2 Y_*^2 s_1^2/M_Q*^2) (g^2/Y_*^2) . (23)

When compared to (22), this correction carries a suppression (g/Y_*)^2 (see for example [11]), so we will neglect it in the following.
LEP and SLD experiments fix an upper bound of 0.25% on the (positive) shift of g_Lb from its SM value. Therefore, from eq. (22), we derive the following bound on the heavy fermion mass in models without custodial symmetry protection of Zb_Lb̄_L:
M_Q* ≳ (3.2) (1/s_R) TeV . (24)
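As a rough numerical sketch (our own, not from the paper): inverting the NDA relation (22) against δg_Lb/g_Lb ≲ 0.25%, with a running top mass of about 165 GeV as input and all O(1) factors dropped, reproduces the scaling and the ballpark of the bound (24).

```python
import math

# Invert the NDA estimate (22): delta g/g ~ m_t^2/(M_Q*^2 s_R^2) <= 0.0025.
# m_t ~ 0.165 TeV (running mass) is our assumption; O(1) factors are dropped.
mt_TeV = 0.165

def M_Qstar_min(s_R):
    """Minimum heavy mass (TeV) compatible with a 0.25% shift in g_Lb."""
    return mt_TeV / (s_R * math.sqrt(0.0025))

print(M_Qstar_min(1.0))   # a few TeV, same ballpark as eq. (24)
```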
In order to respect this limit without requiring too large heavy fermion masses, which would contrast with naturalness arguments, it is necessary to have a quite composite right-handed top (i.e., a not too small s_R). On the contrary, in models with custodial symmetry protection of Zb_Lb̄_L there is no such restriction on the t_R degree of compositeness, and the bounds are weaker than the one in (24). Indeed, in the TS5 with ∆_L2 ≪ ∆_L1, where we have an approximate custodial symmetry protection of Zb_Lb̄_L (the breaking is proportional to ∆_L2 and is thus small), we obtain:

δg_Lb/g_Lb = (Y_* v/√2)^2 (s_2 c_bR/(√2 M_B′))^2 [T^3_L(B′) − T^3_L(b_L)] = (1/2) (m_b^2/M_Q′*^2) (c_bR^4/s_bR^2) ≃ (1/2) (m_t^2/M_Q′*^2) (s_2^2/s_R^2) . (25)

As expected, the shift is proportional to s_2^2 (i.e., it is proportional to ∆_L2^2, the size of the custodial symmetry breaking) and it is small (notice that it is also smaller than the effect at non-zero momentum). In the TS10 we obtain, again, a small shift:

[Figure 1: The SM one-loop W contribution to b → sγ (diagram labels: b_R, b_L, t_L, s_L, γ, W, V_ts).]

δg_Lb/g_Lb = (Y_* v/√2)^2 (s_1^2/M_Q*^2) c_bR^4 [(T^3_L(B) − T^3_L(b_L)) + (T^3_L(B̂) − T^3_L(b_L))] = −(m_b^2/M_Q*^2) (2 − s_bR^2)/2 ≈ −m_b^2/M_Q*^2 . (26)
Although b_L is an exact P_LR eigenstate in the TS10, there is still a small modification, which comes from the coupling of b_R, which explicitly breaks P_LR. Notice that δg_Lb = 0 if s_bR = 0.
3 Bounds from flavor observables

3.1 Constraint from the process b → sγ
We define, following [19], the effective Hamiltonian for b → sγ:

H_eff = −(G_F/√2) V_ts^* V_tb [ C_7(μ_b) O_7 + C̃_7(μ_b) Õ_7 ] , (27)

where O_7 = (e/8π^2) m_b b̄ σ^μν F_μν (1 − γ_5) s and Õ_7 = (e/8π^2) m_b b̄ σ^μν F_μν (1 + γ_5) s.
In the SM the W boson has a purely V−A interaction with the fermions, so the contribution to the b → sγ process has to proceed through mass insertions on the external legs (see Fig. 1). The Wilson coefficient C̃_7 is thus negligible, because it is suppressed by a factor m_s/m_b with respect to the Wilson coefficient C_7, which, evaluated at the weak scale μ_w, is [19]
C_7^SM(μ_w) = −(1/2) [ −(8x_t^3 + 5x_t^2 − 7x_t)/(12(1 − x_t)^3) + x_t^2 (2 − 3x_t)/(2(1 − x_t)^4) · ln(x_t) ] , with x_t = m_t^2/M_W^2 . (28)
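Plugging in reference pole masses m_t ≈ 173 GeV and M_W ≈ 80.4 GeV (our inputs, not quoted in the text), eq. (28) can be evaluated directly; the result is the familiar O(−0.2) SM value.

```python
import math

# Numerical evaluation of eq. (28); m_t and M_W values are our reference inputs.
mt, MW = 173.0, 80.4
x = (mt / MW) ** 2
C7SM = -0.5 * (-(8 * x**3 + 5 * x**2 - 7 * x) / (12 * (1 - x) ** 3)
               + x**2 * (2 - 3 * x) / (2 * (1 - x) ** 4) * math.log(x))
print(C7SM)   # close to -0.19
```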
In composite Higgs models there are two classes of effects that lead to a shift of the b → sγ decay rate compared to the Standard Model prediction. The first comes from loops of heavy fermion resonances of the strong sector, which generate the flavor-violating dipole operators O_7, Õ_7 at the compositeness scale; we will refer to this as the UV contribution. The second contribution comes from the tree-level exchange of heavy resonances, which generates an effective V+A interaction of the W boson with the SM quarks; this in turn shifts b → sγ via a loop of SM particles. This latter IR contribution is enhanced by a chiral factor m_t/m_b. Since in this case the flavor violation can come entirely from the SM V−A current, it gives a quite model-independent lower bound on the heavy fermion masses. Taking into account the experimental average value of the b → sγ branching ratio [20] and the theoretical calculation [21], we get, if the new physics contributions to C_7 (C_7^CH) and to C̃_7 (C̃_7^CH) are considered separately, the bounds (see Appendix B):
−0.098 ≲ C_7^CH(m_*) ≲ 0.028 , (29)
|C̃_7^CH(m_*)| ≲ 0.37 , (30)
where m * denotes the mass of the heavy fermions in the loop (we take m * = 1 TeV).
The infrared contribution to b → sγ from the composite Higgs model is generated at the weak scale μ_w rather than at m_* (we take μ_w = M_W); therefore, we have to account for a scaling factor

C_7^CH(μ_w) = [α_s(m_*)/α_s(m_t)]^{16/21} [α_s(m_t)/α_s(μ_w)]^{16/23} C_7^CH(m_*) ≈ 0.79 C_7^CH(m_*) . (31)

We get:

−0.077 ≲ C_7^CH(μ_w) ≲ 0.023 , (32)
|C̃_7^CH(μ_w)| ≲ 0.29 . (33)
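The 0.79 rescaling in (31), and the resulting windows (32)-(33), can be checked with a simple one-loop α_s running sketch; the reference value α_s(M_Z) = 0.118 and the top threshold treatment are our assumptions, not taken from the text.

```python
import math

def alpha_s(mu, a_mz=0.118, MZ=91.19, mt=173.0):
    """One-loop running of alpha_s: nf = 5 below m_t, nf = 6 above."""
    b5, b6 = 23.0 / 3.0, 7.0
    if mu <= mt:
        return a_mz / (1 + a_mz * b5 / (2 * math.pi) * math.log(mu / MZ))
    a_mt = a_mz / (1 + a_mz * b5 / (2 * math.pi) * math.log(mt / MZ))
    return a_mt / (1 + a_mt * b6 / (2 * math.pi) * math.log(mu / mt))

m_star, mt, muw = 1000.0, 173.0, 80.4
factor = (alpha_s(m_star) / alpha_s(mt)) ** (16 / 21) \
       * (alpha_s(mt) / alpha_s(muw)) ** (16 / 23)
print(factor)   # close to 0.79-0.80

# rescaling the m_* bounds of eqs. (29)-(30) reproduces eqs. (32)-(33)
print(0.79 * 0.098, 0.79 * 0.028, 0.79 * 0.37)
```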
While the infrared contribution to C_7 involves a flavor-conserving operator and leads to an MFV bound, the infrared contribution to C̃_7, as well as the ultraviolet contributions to C_7 and C̃_7, involve flavor-violating operators. As a consequence, they require some assumptions on the flavor structure of the NP sector.
We will now evaluate the bounds on heavy masses that come from the infrared contribution to C_7. We first present estimates of such bounds in generic composite Higgs models, obtained by NDA; we then calculate the bounds in the specific two-site models TS5 and TS10, introduced in secs. 2.2 and 2.3.
MFV bound from the infrared contribution to C 7
The infrared contribution to the process b → sγ is a one loop contribution from the W boson accompanied by top quarks, where a mass insertion in the intermediate top quark states is allowed by the presence of a (V +A) interaction of the W boson with the top and the bottom quarks (Fig. 2). This interaction originates from a term:
L ⊃ C R O R ,(34)
where O R is the dimension-6 operator: Figure 2: 1 loop Infrared contribution to C 7 . The red blob denotes the effective coupling W t R b R , generated from the composite sector.
O R ≡ H c † iD µ Ht R γ µ b R + h.c. . (35) b R t R t L s L γ W V tsW − b R t R B T φ † 0 φ † 0 Figure 3: The CHM contribution to the effective coupling W t R b R (At order Y * v √ 2m * 2 ).
At low energy, after the EWSB, the interaction in (34) gives:
L ⊃ C_R (v^2/2) (g_2/√2) b̄_R γ^μ t_R W^−_μ . (36)
This interaction gives a contribution to the Wilson coefficient C 7 in the eq. (27). We find:
C_7^{CH−IR}(μ_w) = C_R (v^2/2) (m_t/m_b) f_RH(x_t) , (37)

where x_t = m_t^2/M_W^2 and f_RH(x_t) is the loop function [22]:
f_RH(x_t) = −(1/2) [ (1/(1 − x_t)^3) (2/3 − x_t^3/2 − (3/2)x_t + 2 + 3x_t ln(x_t)) + (1/(1 − x_t)^3) (−x_t^3/2 + 6x_t^2 − (15/2)x_t + 2 − 3x_t^2 ln(x_t)) ] . (38)

We denote by v_R ≡ C_R v^2/2 the strength of the effective W t_R b_R coupling.
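A numerical sketch (ours, not from the paper): implementing the loop function as we read eq. (38), and inverting eq. (37) against the window (32), with running masses m_t ≈ 165 GeV and m_b ≈ 2.9 GeV at μ_w as our inputs, gives f_RH(x_t) ≈ −0.9 and a v_R window of the same size as the one quoted in eq. (39).

```python
import math

def f_RH(x):
    # Loop function of eq. (38), with our reading of the parenthesization.
    L = math.log(x)
    pre = 1.0 / (1.0 - x) ** 3
    A = pre * (2.0 / 3.0 - x**3 / 2.0 - 1.5 * x + 2.0 + 3.0 * x * L)
    B = pre * (-x**3 / 2.0 + 6.0 * x**2 - 7.5 * x + 2.0 - 3.0 * x**2 * L)
    return -0.5 * (A + B)

mt, mb, MW = 165.0, 2.9, 80.4     # running masses at mu_w (GeV), our inputs
xt = (mt / MW) ** 2
f = f_RH(xt)                       # roughly -0.9

# invert eq. (37): v_R = C_7 * (m_b/m_t) / f_RH, over the C_7 window of (32)
vR_hi = -0.077 * (mb / mt) / f     # positive edge (f < 0)
vR_lo = 0.023 * (mb / mt) / f      # negative edge
print(f, vR_lo, vR_hi)
```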
By considering the bound in (32) and the relation in (37), we obtain:
− 0.0004 < v R < 0.0013 .(39)
This bound from b → sγ can be compared with that from the measurement of the W t b anomalous couplings at colliders. Ref. [23] reports an expected bound of −0.012 < v_R < 0.024, which could be imposed by 14 TeV LHC measurements with 30 fb^−1; it can be obtained from studies of cross sections and top decay observables (angular distributions and asymmetries) in single top production at the LHC. Present searches for anomalous W couplings at the 7 TeV LHC [24] fix still mild bounds on v_R, −0.34 < v_R < 0.39, with 0.70 fb^−1. The bound obtained from b → sγ is thus much stronger than that from the v_R measurement at colliders.
The CHM contribution to the effective coupling W t_R b_R is given by the exchange of heavy fermions that mix electroweakly with t_R and b_R (Fig. 3). At order x^2, only the SU(2)_L heavy doublets which are partners of (t_L, b_L) contribute to C_R. The latter can be easily estimated by NDA [10]:

C_R ∼ Y_*^2 ξ_bR ξ_tR ξ_D^2/M_D^2 ∼ (y_b y_t/M_D^2) (ξ_D^2/ξ_qL^2) . (40)
(40) implies:

C_7^{CH−IR}(μ_w) ∼ (m_t^2/M_D^2) f_RH(x_t) (ξ_D^2/ξ_qL^2) . (41)

Applying the condition in (32) to this infrared contribution, we get the estimated bound:

M_D ≳ 1.0 (0.54) (1/ξ_qL) TeV , (42)
where the first number and the number in parenthesis refer, respectively, to the case of a positive and of a negative C_7^{CH−IR} contribution. Notice that in the case of a positive C_7^{CH−IR} contribution we obtain a stronger bound on M_D, since the constraint in (32) is asymmetric. We find that a subgroup of the custodial symmetry SU(2)_V, the P_C parity, can suppress the W t_R b_R coupling and, as a consequence, the CHM infrared contribution to b → sγ. The estimates we have just reported refer to generic composite Higgs models without such a P_C protection.
Protection by P C parity
The P_C protection against the generation of the W t_R b_R vertex acts similarly to the P_LR and P_C protection against large corrections to the Zb_Lb̄_L coupling, discussed in sec. 2.1. P_C is a symmetry of the BSM sector, which is respected also by the interactions of t_R and b_R if these are P_C eigenstates. Since P_C acts as diag(1, −1, −1) on SO(3) vectors, the W is not a P_C eigenstate (the composite partners of W^1 and W^2 do not have the same P_C eigenvalue). When t_R and b_R are both P_C eigenstates, their interactions must respect the P_C parity; the W t_R b_R vertex, which is P_C violating since the W is not a P_C eigenstate, can then arise only at the price of an additional suppression factor. In models where t_R and b_R are not both P_C eigenstates, so that their interactions need not respect P_C, the W t_R b_R vertex can instead be generated without suppression. The TS5 falls into the class of models with P_C protection, since in the TS5 both t_R and b_R are P_C eigenstates. In the TS5 we can evaluate the suppression factor of W t_R b_R due to the P_C protection. It is easily found by promoting ∆_L1 and ∆_L2 to spurions, which enforce an SU(2)_L × SU(2)_R invariance:
−∆_L1 q̄^3_L (T, B) → −q̄^3_L Q_{2/3} ∆̃_L1 ,  −∆_L2 q̄^3_L (T′, B′) → −q̄^3_L Q_{−1/3} ∆̃_L2 ,

where ∆̃_L1 = (∆_L1, 0) ≡ (1, 2)_{1/6} and ∆̃_L2 = (0, ∆_L2) ≡ (1, 2)_{1/6}. We can thus write the O_R operator (35) in an SU(2)_L × SU(2)_R invariant way:

O_R = (1/f^2) q̄^3_R ∆̃_L1 V_μ ∆̃^†_L2 q^3_R γ^μ + h.c. , (43)

where f has the dimension of a mass, q^3_R = (t_R, b_R) ≡ (1, 2)_{1/6} and V_μ ≡ H_c^† i D_μ H. Since P_C is a subgroup of the custodial SU(2)_V, the SU(2)_L × SU(2)_R invariant operator in (43) is also a P_C invariant. We can notice that the P_C invariance has brought an additional factor
∆_L1 ∆_L2/f^2 compared to (35). Without the P_C protection, the D = (T, B) contribution to the W t_R b_R effective vertex in the TS5 reads

s_R s_bR c_1^2 (Y_* v/(√2 M_D))^2 = (m_b m_t/M_D^2) (c_1^2/s_1^2) ;

the request of P_C invariance brings the additional factor ∆_L1 ∆_L2/f^2. For f^2 = M_Q* M_Q′*, we obtain

(Y_* v/(√2 M_D))^2 s_R s_bR (c_1 ∆_L1/M_Q*) (c_1 ∆_L2/M_Q′*) = (Y_* v/(√2 M_D))^2 s_R s_bR s_1 s_2 = m_b m_t/M_D^2 ,

that is, a suppression by a factor s_1^2/c_1^2 ≡ ξ_qL^2/ξ_D^2.
We can thus return to the estimated bounds on M_D from C_7^{CH−IR} in eq. (42) and consider the case in which the t_R and b_R interactions enjoy the P_C protection. In this case the C_R contribution becomes:
C_R ∼ y_b y_t/M_D^2 (with P_C) , (44)

which implies

C_7^{CH−IR}(μ_w) ∼ (m_t^2/M_D^2) f_RH(x_t) (with P_C) (45)

and an estimated bound:

M_D ≳ 1.0 (0.54) TeV (with P_C) . (46)
We will now calculate the bounds on M_D from C_7^{CH−IR} in the specific TS5 and TS10 models. As already discussed, the TS5 belongs to the class of models with P_C protection. The TS10, instead, falls in the class of models without P_C protection, because in the TS10 b_R is not a P_C eigenstate. We thus expect the bound in the TS10 to carry an enhancement factor c_1/s_1 compared to that in the TS5.
In the TS5 we have a contribution to the O_R operator in (35) both from the doublet D = (T, B) in the X = 2/3 representation and from the doublet D′ = (T′, B′) in the X = −1/3 one. We find:

C_R^{TS5} = −(y_b y_t/M_D^2) (1 + M_D^2/M_D′^2) . (47)

This implies:

C_7^{CH−IR−TS5}(μ_w) = −(m_t^2/M_D^2) f_RH(x_t) (1 + M_D^2/M_D′^2) . (48)
Notice that the C_7^{CH−IR−TS5} contribution is positive; the condition in (32) then implies the bound M_D^{TS5} ≳ 1.4 TeV (49). In the TS10 it is again the heavy doublet D that gives a contribution to C_R. We obtain
C_R^{TS10} = (y_b y_t/M_D^2) (c_1^2/s_1^2) , (50)

which implies:

C_7^{CH−IR−TS10}(μ_w) = (m_t^2/M_D^2) f_RH(x_t) (c_1^2/s_1^2) . (51)

From the condition in (32) we finally get the bound:

M_D^{TS10} ≳ (0.54) (c_1/s_1) TeV . (52)

Notice that, differently from the case of the TS5 contribution, C_7^{CH−IR−TS10}(μ_w) is negative; as such, it is constrained less strongly by the condition in (32). As expected, we have found a c_1/s_1 enhancement of this bound, compared to (49).
We now proceed to evaluate the bounds from the C̃_7 contribution and then those from the UV contributions. As already pointed out, these contributions involve flavor-violating operators and require assumptions on the flavor structure of the NP sector. In what follows we consider the case of flavor anarchy of the composite Yukawa matrices. This scenario assumes that there is no large hierarchy among the elements of each matrix Y_* and that the quark mass hierarchy is completely explained by the elementary/composite mixing angles. We also set, for simplicity, Y_*U = Y_*D = Y_*.
Non-MFV constraints
Generational mixing
After the EWSB, the mass eigenstate basis is obtained, as in the SM, by means of unitary transformations: (D_L, D_R) and (U_L, U_R) for down- and up-type quarks, respectively. We will assume that the left rotation matrix has entries of the same order as those of the Cabibbo–Kobayashi–Maskawa matrix:
(D_L)_ij ∼ (V_CKM)_ij . (53)

[Table 1: Bounds on heavy fermion masses from b → sγ.
— C_7^{CH−IR}(μ_w) ∼ (y_t v)^2/M_D^2 · ξ_D^2 (w/ P_C): ESTIMATED M_D ≳ 1.0 (0.54) TeV; TS5 M_D ≳ 1.4 TeV (MFV bounds).
— C_7^{CH−IR}(μ_w) ∼ (y_t v)^2/M_D^2 · (ξ_D/ξ_qL)^2 (w/o P_C): ESTIMATED M_D ≳ 1.0 (0.54)/ξ_qL TeV; TS10 M_D ≳ 0.54/s_1 TeV (MFV bounds).
— C̃_7^{CH−IR}(μ_w) ∼ (y_t v)^2/M_D^2 · ξ_D^2 · m_s/(m_b V_ts^2) (w/ P_C): ESTIMATED M_D ≳ 0.80 TeV; TS5 M_D ≳ 1.1 TeV.
— C̃_7^{CH−IR}(μ_w) ∼ (y_t v)^2/M_D^2 · (ξ_D/ξ_qL)^2 · m_s/(m_b V_ts^2) (w/o P_C): ESTIMATED M_D ≳ 0.80/ξ_qL TeV; TS10 M_D ≳ 0.80/s_1 TeV.
— C_7^{CH−UV}(m_*) ∼ (Y_* v)^2/(M_D M_D̃) · ξ_D ξ_D̃: ESTIMATED √(M_D M_D̃) ≳ 1.5 (0.79) Y_* TeV; TS5 √(M_B M_B̃) ≳ 0.52 (0.28) Y_* TeV; TS10 √(M_B M_B̃) ≳ 0.75 (0.40) Y_* TeV.
— C̃_7^{CH−UV}(m_*) ∼ (Y_* v)^2/(M_D M_D̃) · ξ_D ξ_D̃ · m_s/(m_b V_ts^2): ESTIMATED √(M_D M_D̃) ≳ (1.1) Y_* TeV; TS5 √(M_B M_B̃) ≳ (0.40) Y_* TeV; TS10 √(M_B M_B̃) ≳ (0.58) Y_* TeV.]
The assumption of an anarchic Y_* fixes the form of the rotation matrix D_R to be:

(D_R)_ij ∼ (m_i/m_j) (1/(D_L)_ij) for i < j . (54)
Considering the estimates (53) and (54), we can evaluate the generational mixing factors in the composite Higgs model contributions to C_7 (UV) and C̃_7. The ultraviolet contribution to C̃_7 requires a mass insertion that generates the operator b̄_L σ^μν F_μν s_R; this mass insertion brings a factor m_b (D_R)_23 ∼ m_s/(D_L)_23 ∼ m_s/V_ts, where we have first used the estimate in (54) and then that in (53). The ultraviolet contribution to C_7 involves the operator b̄_R σ^μν F_μν s_L, and from the mass insertion we obtain a generational mixing factor m_b (D_L)_23 ∼ m_b V_ts, where the last relation follows from the assumption in (53). The generational mixing factor for the vertex W t_R s_R in C̃_7^{CH−IR} is evaluated similarly, while in C_7^{CH−IR} the flavor violation comes entirely from the SM vertex W t_L s_L and is accounted for by a factor V_ts. Therefore, we find that the composite Higgs model contribution to the Wilson coefficient C̃_7 is enhanced by a factor

m_s/(m_b V_ts^2) ∼ 8 (55)

compared to the contribution to C_7, both in the ultraviolet and in the infrared case.
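The ∼8 in (55) can be reproduced with reference values (MS-bar masses at a common scale, m_s(m_b) ≈ 55 MeV and m_b(m_b) ≈ 4.18 GeV, and |V_ts| ≈ 0.040 — our inputs, not quoted in the text):

```python
# Rough check of the enhancement factor (55); inputs are reference values.
ms, mb, Vts = 0.055, 4.18, 0.040   # GeV, GeV, dimensionless
factor = ms / (mb * Vts**2)
print(factor)   # O(10), consistent with the ~8 quoted in eq. (55)
```

The result is sensitive to the scale at which the quark masses are evaluated, which is why only the order of magnitude is meaningful.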
Infrared contribution to C̃_7

Taking into account the generational mixing factor in (55), the composite Higgs model contribution to the Wilson coefficient C̃_7 (Fig. 4) is given by:

C̃_7^{CH−IR}(μ_w) = C_R (v^2/2) (m_s/(m_b V_ts^2)) (m_t/m_b) f_RH(x_t) . (56)
Considering the estimates for C_R in (40) and the specific results (47) and (50), one obtains the corresponding bounds in the TS5 and in the TS10 (see Tab. 1). We can discuss how the bound on heavy masses changes in the case of a fully composite top: in the TS5 the bound (49) on the heavy fermion doublet does not depend on the top degree of compositeness (this remains almost true in the full numerical calculation), and we obtain quite strong MFV bounds both for a composite t_L and for a composite t_R. In the TS10, because of the P_C protection, we obtain strong bounds in the case of a fully composite t_R (eq. (52)). Ref. [25] finds that corrections to the S and T parameters give only weak constraints on a composite t_R (both in the TS5 and in the TS10); the IR contribution to b → sγ, on the contrary, puts a quite strong constraint on this limiting case, especially in the TS10. One can finally discuss the validity of our results, which have been obtained analytically, i.e. by considering an expansion in x ≡ Y_* v/(√2 m_*) and retaining only the O(x) terms. We find that the results of the numerical calculation of the bounds, obtained by diagonalizing numerically the fermionic mass matrices, do not differ by more than O(1) factors from those shown here, which were derived at order x under the assumption x ≪ 1. This can also be understood by considering that the exchange of relatively light custodians, which can give a contribution Y_* v/(√2 m_*^CUST) > 1 to the effective W t_R b_R vertex, has to be accompanied by the exchange of heavier composite fermions, which reduces the overall contribution. By definition, indeed, the custodians do not couple directly to SM fermions; therefore their contribution to W t_R b_R is always accompanied by the exchange of heavier composite particles.
Ultraviolet contribution
In this case the P C parity does not influence the bounds and we get contributions of the same size in the different models. The leading contribution comes from diagrams with heavy fermions and would-be Goldstone bosons in the loop 6 (Fig. 5).
C_7^{CH−UV} , C̃_7^{CH−UV} ∝ s_Li Y_*ik Y_*kl Y_*lj s_Rj . (61)
The contribution (61) is not aligned with the mass matrix m_d,ij ∼ s_Li Y_*ij s_Rj; therefore, after the EWSB it remains non-diagonal in flavor space. Before going to the specific TS5 and TS10 models, we can obtain estimated bounds from the UV contributions in generic composite Higgs models by means of NDA. We obtain:
C_7^{CH−UV} ∼ (Y_* v)^2/(M_D M_D̃) · ξ_D ξ_D̃ , (62)

where D̃ denotes a heavy fermion which is an SU(2)_L singlet, and

C̃_7^{CH−UV} ∼ (m_s/(m_b V_ts^2)) (Y_* v)^2/(M_D M_D̃) · ξ_D ξ_D̃ , (63)
where we have taken into account the generational mixing factor in (55). Comparing these results with those from the IR contributions in (42), (46), we see that the UV contribution gives approximately a bound Y_*/y_t ((Y_*/y_t) ξ_qL, in the case of models without P_C protection) times stronger than the one from the IR contribution to C_7. Such UV bounds, however, are not as robust as the IR one, since they require, as we already pointed out, assumptions on the flavor structure of the BSM sector; in particular, we have estimated them in the scenario of flavor anarchy in the strong sector. Notice that in this anarchic scenario much stronger bounds on the resonance masses, of the order of 20 TeV [13], come from ε_K. In Ref. [16] the ultraviolet contribution to b → sγ in a two-site model without a P_LR protection of the t_R and b_R interactions is evaluated. In the following we describe in detail the contribution in the TS5, and we report the results for the TS10. We can calculate the C_7^{CH−UV} and C̃_7^{CH−UV} ultraviolet contributions by considering the model-independent analysis of Ref. [16] and the generational mixing factor in (55). We get the following effective Hamiltonian for b → sγ with loops of heavy fermions and neutral would-be Goldstone bosons:
H_eff^{neutral Higgs} = i (e/8π^2) (2 · p/M_w^2) k_neutral V_ts [ b̄ (1 − γ_5) s + (m_s/m_b) b̄ (1 + γ_5) s ] , (64)

where

k_neutral ≈ Σ_{i=1..4} (|α_1^{(i)}|^2 + |α_2^{(i)}|^2) m_b (1/36) M_w^2/m_*^{(i)2} + Σ_{i=1..4} α_1^{(i)*} α_2^{(i)} m_*^{(i)} (1/6) M_w^2/m_*^{(i)2} , (65)

the index i runs over the down-type heavy fermions of the model, d^{(i)}, and the α_1^{(i)}, α_2^{(i)} coefficients are defined by the interactions:

L ⊃ d̄^{(i)} [ α_1^{(i)} (1 + γ_5) + α_2^{(i)} (1 − γ_5) ] b H + h.c. . (66)
After the EWSB, we find the following coefficients at O(x):
α (B) 1 = Y 2 * v 2 s bR 1 M B + M B + c bR MB M 2 B − M 2 B α (B) 2 = − Y * 2 √ 2 s 2 c bR α (B ) 1 = α (B −1/3 ) 1 = − Y * 2 √ 2 s bR α (B ) 2 = α (B −1/3 ) 2 = − Y 2 * v 4 s 2 M 2 B MB − s 2 bR M 3 B − c bR M 3 B + 2c bR M B M 2 B M B MB(M 2 B − M 2 B )(67)
the heavy fermion B gives a contribution of O(s 2 2 ) to k neutral and we neglect it. Considering the eq. (65) and the coefficients in (67), neglecting again O(x 2 ) terms, we obtain:
k_neutral ≈ −m_b M_W^2 Y_*^2 [ (1/8) c_bR/(M_B M_B̃) − (7/18) s_bR^2/M_B̃^2 ] . (68)
From this expression of k neutral we obtain the following TS5 ultraviolet contributions to the Wilson coefficient of the effective Hamiltonian in (27):
C_7^{CH−UV}(m_*) = (1/(16√2 G_F)) Y_*^2 [ c_bR/(M_B M_B̃) − (7/18) s_bR^2/M_B̃^2 ] ;
C̃_7^{CH−UV}(m_*) = (1/(16√2 G_F)) Y_*^2 [ c_bR/(M_B M_B̃) − (7/18) s_bR^2/M_B̃^2 ] · m_s/(m_b V_ts^2) . (69)

For small s_bR, the above formulas become:

C_7^{CH−UV}(m_*) = (1/(16√2 G_F)) Y_*^2/(M_B M_B̃) ;
C̃_7^{CH−UV}(m_*) = (1/(16√2 G_F)) Y_*^2/(M_B M_B̃) · m_s/(m_b V_ts^2) . (70)
Finally, the condition on C̃_7^{CH−UV} in eq. (30) gives the bound:

√(M_B M_B̃) ≳ (0.40) Y_* TeV , (71)

where, for simplicity, we have set s_bR = 0. The condition (29) on C_7^{CH−UV} gives a stronger bound,

√(M_B M_B̃) ≳ (0.52) Y_* TeV , (72)
if C_7^{CH−UV}(m_*) is a negative contribution. There is also a contribution to b → sγ from diagrams with heavy fermions and charged Higgses in the loop. Following a similar procedure (C) we find, neglecting O(x^2) terms:

k_charged ≈ m_b M_W^2 Y_*^2 [ (5/48)/(M_B M_B̃) + O(s_1^2) + O(s_bR^2) ] . (73)
If the O(s_1^2) and O(s_bR^2) terms can be neglected, k_charged gives a weaker bound than the one from k_neutral. The full expression of k_charged can be found in App. D; here we have reported, for simplicity, the result for small s_1 and s_bR angles. In Fig. 6 we show the bounds in the TS5 as functions of the t_L degree of compositeness.
Ultraviolet contribution in the TS10
For the TS10 model, applying the same procedure as for the case of TS5, we get:
k_neutral = −m_b M_W^2 Y_*^2 (1/16) [ 1/(M_B M_B̃) + 1/(M_B M_B̂) ] + O(s_1^2) + O(s_bR) , (74)
k_charged = m_b M_W^2 Y_*^2 [ (5/48)/(M_B M_B̃) + (5/48)/(M_B M_B̂) + (5/96) s_R^2/M_B̃^2 ] + O(s_1^2) + O(s_bR^2) . (75)
If the left-handed bottom quark has a small degree of compositeness, we can neglect the O(s_1^2) terms (while s_bR is naturally very small in the TS10, in order to account for the ratio m_b/m_t ≪ 1). The charged contribution, in this case, gives a stronger bound than the one from k_neutral:

√(M_B M_B̃) ≳ (0.58) Y_* TeV , (76)
from the condition (30) on C̃_7^{CH−UV}. A stronger bound,

√(M_B M_B̃) ≳ (0.75) Y_* TeV , (77)

comes from the condition (29) on C_7^{CH−UV}, if this last contribution has a negative sign. In Fig. 7 we show the bounds in the TS10; they become significantly stronger in the limit of a fully composite t_R. This is an effect caused by the exchange of the custodians T̃, B̂ and of the B̃, which are light in the limit of a composite t_R. In particular, when t_R is fully composite (s_R = 1), M_B̃ (≈ c_R M_T̃) and M_B̂ = M_T̂ (= c_R M_T̃) vanish. This causes the divergence of the bounds for s_R → 1. Such divergences can be seen in the curves of Figure 7 as they approach the (grey) exclusion regions for s_1 (indeed, the minimum value of s_1 allowed by the condition s_R = 2m_t/(Y_* v s_1) ≤ 1 is obviously obtained in the case s_R = 1).
Tab. 1 summarizes our results. It shows the bounds on heavy fermion masses that can be obtained from the process b → sγ. We report the estimated bounds in generic Composite Higgs Models (with or without P_C protection), obtained by means of NDA, and the bounds in the specific two-site models TS5 and TS10. ξ_ψ/χ denotes the degree of compositeness of a SM/heavy fermion. In the specific TS5 and TS10 models: ξ_qL ≡ s_1, ξ_D ≡ c_1. D = (T, B); D̃ denotes an SU(2)_L singlet heavy fermion. For the estimated bounds from C̃_7^CH and for the bounds from C_7^{CH−UV}, we indicate both the values obtained in the case of a positive (the first number) and of a negative (the second number, in parenthesis) contribution.
Constraint from ε′/ε_K
The bound on the mass of the heavy fermions that comes from the direct CP violating observable of the K^0 → 2π system, Re(ε′/ε), can be even stronger, under the assumption of anarchic Y_*, than those obtained from b → sγ, as already found in [18]. As we pointed out, however, it is a bound that strongly depends on the assumptions made on the flavor structure of the new physics sector. As for the UV contribution to b → sγ, the custodial symmetry does not influence the bound, and we obtain contributions of the same size in the different models. In what follows we describe the bound in the TS5 and in the TS10. The New Physics contribution can be parametrized at low energy by the chromo-magnetic operators:
O_G = s̄ σ^μν T^a G^a_μν (1 − γ_5) d ,  Õ_G = s̄ σ^μν T^a G^a_μν (1 + γ_5) d . (78)
As for the UV contribution to b → sγ, the leading contribution to ε′/ε_K comes from diagrams with heavy fermions and Higgses in the loop, which generate the O_G and Õ_G operators (the one-loop diagrams are the same as for the UV contribution to b → sγ, Fig. 5, with the replacements γ → g, b → s and s → d).
The related coefficients C_G and C̃_G, in analogy with C_7 and C̃_7 of the UV contribution to b → sγ, differ by a generational mixing factor that, under the assumption of anarchic Y_*, we estimate to be ∼ m_d/(m_s V_us^2). We consider only the generational mixing (1−3) × (2−3), via the third generation. In analogy with (64), we define:
A_eff−chromo^{neutral Higgs} = i (g_s/8π^2) (2 · p/M_w^2) k_G^neutral V_us [ s̄ (1 − γ_5) d + (m_d/m_s) s̄ (1 + γ_5) d ] , (79)

where

k_G^neutral ≈ Σ_{i=1..4} (|α_1^{(i)}|^2 + |α_2^{(i)}|^2) m_s (−1/12) M_w^2/m_*^{(i)2} + Σ_{i=1..4} α_1^{(i)*} α_2^{(i)} m_*^{(i)} (−1/2) M_w^2/m_*^{(i)2} , (80)
the index i runs over the four down-type heavy fermions of the model, d^{(i)}, and the α_1^{(i)}, α_2^{(i)} coefficients are defined by the interactions:

L ⊃ d̄^{(i)} [ α_1^{(i)} (1 + γ_5) + α_2^{(i)} (1 − γ_5) ] b H + h.c. . (81)
After the EWSB, neglecting O(x^2) terms, we find in the TS5:

k_G^neutral = (3/8) m_s M_w^2 Y_*^2/(M_B M_B̃) + O(s_sR^2) , (82)
where s_sR defines the degree of compositeness of the right-handed strange quark, which is naturally small. In the limit s_sR = 0 we obtain the same result also in the TS10. We can thus calculate the C_G and C̃_G contributions:
C_G = −(1/16π^2) k_G^neutral/(M_w^2 m_s V_us) ,  C̃_G = (m_d/(m_s V_us^2)) C_G . (83)

Defining

δ_ε′ = [Re(ε′/ε)_CH − Re(ε′/ε)_SM]/Re(ε′/ε)_exp , (84)

we obtain

|δ_ε′| ≈ (58 TeV)^2 B_G |C_G − C̃_G| < 1 , (85)
where Re(ε′/ε)_SM has been estimated as in Ref. [18]; B_G denotes the hadronic bag-parameter of ⟨2π, I=0| y_s O_G |K^0⟩. We take B_G = 1,^7 and we take into account separately the contributions from C_G and C̃_G. In the limit s_sR = 0 we obtain from (85):
√(M_B M_B̃) ≳ (1.3) Y_* TeV , (86)
which is in agreement with the result in [18]. The contribution from the charged Higgs interactions gives weaker bounds than those from the neutral Higgs contribution.
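For orientation (our own arithmetic, not from the paper), the (58 TeV)² prefactor in (85) translates, for B_G = 1, into a ceiling on the chromo-magnetic coefficients of about 3 × 10⁻¹⁰ GeV⁻²:

```python
# Ceiling on |C_G - C~_G| implied by eq. (85): (58 TeV)^2 * B_G * |C| < 1.
scale_GeV = 58e3      # 58 TeV expressed in GeV
B_G = 1.0             # bag parameter, as taken in the text
ceiling = 1.0 / (scale_GeV**2 * B_G)   # in GeV^-2
print(ceiling)        # about 3.0e-10
```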
Conclusions
Composite Higgs Models are among the compelling scenarios for physics beyond the Standard Model: they can explain the origin of the EWSB and they are going to be tested at the LHC.
In this project we have built simple "two-site" models, the TS5 and the TS10, which can represent the low-energy regime of Minimal Composite Higgs Models with a custodial symmetry and a P_LR parity. Working in these effective descriptions, we have reconsidered the bounds on the CHM spectrum implied by flavor observables. We have found in particular that the IR contribution to b → sγ induced by the flavor-conserving effective vertex W t_R b_R implies a robust Minimal Flavor Violating bound on the mass (m_*) of the new heavy fermions (to be more specific, on the heavy doublets, partners of q_L = (t_L, b_L)). The relevance of shifts to W t_R b_R has already been pointed out in the literature (see, for example, [31,32]), even though its importance in setting a bound on heavy fermion masses had not been estimated in previous studies. We have also shown how this bound can be stronger in the absence of a symmetry (P_C) protection of the effective W t_R b_R vertex. In particular, we have found an estimated bound m_* ≳ 1.0 TeV in models with P_C protection of the W t_R b_R vertex (where both t_R and b_R are P_C eigenstates), and a bound m_* ≳ 1.0/ξ_qL TeV, where ξ_qL denotes the degree of compositeness of (t_L, b_L), in models without P_C protection. Since ξ_qL is naturally a small number, the bound can be very strong in these types of models. In the specific "two-site" models, the bounds we have found are m_*^{TS5} TeV in the TS5, and m_*^{TS10} ≳ 0.54 ξ_qL TeV in the TS10. Table 1 summarizes the results obtained for the bounds from b → sγ.
In addition to these bounds, we have calculated the constraints from the UV composite Higgs model contribution to b → sγ. Figs. 6 and 7 show the bounds in the TS5 and the TS10 as functions of the t_L degree of compositeness. Our results show that these bounds can be stronger than those from the IR contribution, but they are model dependent; in particular, they strongly depend on the assumptions made on the flavor structure of the composite sector. We have obtained an estimated limit m_* ≳ 0.52 Y_* TeV in a specific NP flavor scenario (Y_* anarchic in flavor space). Even stronger bounds,

m_* ≳ 1.3 Y_* TeV ,

can be obtained from ε′/ε but, again, they are model dependent and in principle could be loosened by acting on the NP flavor structure (as done, for example, in Ref. [27]). The lower IR bounds on m_* we have found from b → sγ, on the contrary, are robust MFV bounds that cannot be evaded by assuming particular conditions on the structure of the strong sector.
tan ϕ_L1 = ∆_L1/M_Q* ≡ s_1/c_1 , s_1 ≡ sin ϕ_L1 , c_1 ≡ cos ϕ_L1
s_2 = (∆_L2/M_Q*) cos ϕ_L1
s_3 = ∆_L2 M_Q* [ ∆²_L1 + M²_Q* − M²_Q* ] sin ϕ_L1
t_L = c_1 t_L^el − s_1 T_L^com − s_2 T_L^com
T_L = s_1 t_L^el + c_1 T_L^com + s_3 T_L^com
T_L = (s_2 c_1 − s_1 s_3) t_L^el − (s_1 s_2 + c_1 s_3) T_L^com + T_L^com
b_L = c_1 b_L^el − s_1 B_L^com − s_2 B_L^com
B_L = s_1 b_L^el + c_1 B_L^com + s_3 B_L^com
B_L = (s_2 c_1 − s_1 s_3) b_L^el − (c_1 s_3 + s_1 s_2) B_L^com + B_L^com (87)

s_4 = ∆_L2 ∆_L1 [ ∆²_L1 + M²_Q* − M²_Q* ]
T_R = T_R^com + s_4 T_R^com , T_R = T_R^com − s_4 T_R^com
B_R = B_R^com + s_4 B_R^com , B_R = B_R^com − s_4 B_R^com (88)

tan ϕ_R = ∆_R1/M_T̃* , s_R ≡ sin ϕ_R , c_R ≡ cos ϕ_R
tan ϕ_bR = ∆_R2/M_B̃* , s_bR ≡ sin ϕ_bR , c_bR ≡ cos ϕ_bR
t_R = c_R t_R^el − s_R T̃_R^com , T̃_R = s_R t_R^el + c_R T̃_R^com
b_R = c_bR b_R^el − s_bR B̃_R^com , B̃_R = s_bR b_R^el + c_bR B̃_R^com (89)
Physical heavy fermion masses are related to the bare ones according to:
M_T̃ = √(M²_T̃* + ∆²_R1) = M_T̃*/c_R , M_B̃ = √(M²_B̃* + ∆²_R2) = M_B̃*/c_bR , M_T = M_B = √(M²_Q* + ∆²_L1) = M_Q*/c_1
In the elementary/composite basis the Yukawa Lagrangian reads:
L_YUK = Y*_U Tr[ Q̄ H̃ T̃ ] + Y*_D Tr[ Q̄ H B̃ ] + h.c.
      = Y*_U ( T φ†_0 T̃ + T_2/3 φ_0 T̃ + T_5/3 φ_+ T̃ − B φ_− T̃ )
      + Y*_D ( B_−1/3 φ†_0 B̃ + B φ_0 B̃ + T φ_+ B̃ − B_−4/3 φ_− B̃ ) + h.c. (91)

L_YUK = Y*_U c_1 c_R ( T_L φ†_0 T̃_R − B_L φ_− T̃_R ) + Y*_U c_R ( T_2/3L φ_0 T̃_R + T_5/3L φ_+ T̃_R )
 − Y*_U (s_1 s_2 + c_1 s_3) c_R ( T_L φ†_0 T̃_R − B_L φ_− T̃_R ) − Y*_U s_1 c_R ( t_L φ†_0 T̃_R − b_L φ_− T̃_R )
 − Y*_U s_R ( T_2/3L φ_0 t_R + T_5/3L φ_+ t_R ) + Y*_U (s_1 s_2 + c_1 s_3) s_R ( T_L φ†_0 t_R − B_L φ_− t_R )
 − Y*_U c_1 s_R ( T_L φ†_0 t_R − B_L φ_− t_R ) + Y*_U s_1 s_R ( t_L φ†_0 t_R − b_L φ_− t_R )
 + Y*_U ( T_R φ†_0 T̃_L − B_R φ_− T̃_L ) + Y*_U ( T_2/3R φ_0 T̃_L + T_5/3R φ_+ T̃_L )
 − Y*_U s_4 ( T_R φ†_0 T̃_L − B_R φ_− T̃_L )
 + Y*_D c_bR ( B_−1/3L φ†_0 B̃_R − B_−4/3L φ_− B̃_R ) + Y*_D c_bR ( B_L φ_0 B̃_R + T_L φ_+ B̃_R )
 − Y*_D s_bR ( B_−1/3L φ†_0 b_R − B_−4/3L φ_− b_R ) − Y*_D s_bR ( B_L φ_0 b_R + T_L φ_+ b_R )
 − Y*_D s_2 c_bR ( b_L φ_0 B̃_R + t_L φ_+ B̃_R ) + Y*_D s_2 s_bR ( b_L φ_0 b_R + t_L φ_+ b_R )
 − Y*_D s_3 s_bR ( B_L φ_0 b_R + T_L φ_+ b_R ) + Y*_D s_3 c_bR ( B_L φ_0 B̃_R + T_L φ_+ B̃_R )
 + Y*_D ( B_R φ_0 B̃_L + T_R φ_+ B̃_L ) + Y*_U ( B_−1/3R φ†_0 B̃_L − B_−4/3R φ_− B̃_L )
 + Y*_D s_4 ( B_R φ_0 B̃_L + T_R φ_+ B̃_L ) + h.c. (96)
A.2 TS10
Fermions rotate from the elementary/composite basis to the 'physical' light(SM)/heavy basis as:
tan ϕ_L1 = ∆_L1/M_Q* ≡ s_1/c_1
t_L = c_1 t_L^el − s_1 T_L^com , T_L = s_1 t_L^el + c_1 T_L^com
b_L = c_1 b_L^el − s_1 B_L^com , B_L = s_1 b_L^el + c_1 B_L^com (97)

tan ϕ_R = ∆_R1/M_Q̃* , s_R ≡ sin ϕ_R , c_R ≡ cos ϕ_R
tan ϕ_bR = ∆_R2/M_Q̃* , s_bR ≡ sin ϕ_bR , c_bR ≡ cos ϕ_bR
t_R = c_R t_R^el − s_R T̃_R^com , T̃_R = s_R t_R^el + c_R T̃_R^com
b_R = c_bR b_R^el − s_bR B̃_R^com , B̃_R = s_bR b_R^el + c_bR B̃_R^com (98)
Physical heavy fermion masses are related to the bare ones as:
M_T̃ = √(M²_Q̃* + ∆²_R1) = M_Q̃*/c_R , M_B̃ = √(M²_Q̃* + ∆²_R2) = M_Q̃*/c_bR
In the elementary/composite basis the Yukawa Lagrangian reads:
L Y U K = +Y * T r HQQ + Y * T r Q HQ(100)
After field rotation to the mass eigenstate basis, before EWSB, L Y U K reads as in eq. (105).
After EWSB top and bottom masses arise as:
L_YUK = Y_* c_1 c_R (1/√2) ( T_L φ†_0 T̃_R − B_L φ_− T̃_R ) − Y_* c_R (1/√2) ( T_2/3L φ_0 T̃_R + T_5/3L φ_+ T̃_R )
 − Y_* s_1 c_R (1/√2) ( t_L φ†_0 T̃_R − b_L φ_− T̃_R ) + Y_* s_1 s_R (1/√2) ( t_L φ†_0 t_R − b_L φ_− t_R )
 + Y_* s_R (1/√2) ( T_2/3L φ_0 t_R + T_5/3L φ_+ t_R ) − Y_* c_1 s_R (1/√2) ( T_L φ†_0 t_R − B_L φ_− t_R )
 + Y_* (1/√2) ( T_R φ†_0 T̃_L − B_R φ_− T̃_L ) − Y_* (1/√2) ( T_2/3R φ_0 T̃_L + T_5/3R φ_+ T̃_L )
 + Y_* ( T_5/3L φ†_0 T̃_5/3R − T_2/3L φ_− T̃_5/3R ) + Y_* ( T_5/3R φ†_0 T̃_5/3L − T_2/3R φ_− T̃_5/3L )
 − Y_* s_1 c_bR ( b_L φ_0 B̃_R + t_L φ_+ B̃_R ) + Y_* s_1 s_bR ( b_L φ_0 b_R + t_L φ_+ b_R )
 − Y_* c_1 s_bR ( B_L φ_0 b_R + T_L φ_+ b_R ) + Y_* c_1 c_bR ( B_L φ_0 B̃_R + T_L φ_+ B̃_R )
 + Y_* ( B_R φ_0 B̃_L + T_R φ_+ B̃_L ) + Y_* B̃_R φ†_0 B_L + Y_* T̃_2/3R φ_+ B_L
 + Y_* (1/√2) ( T̃_R φ†_0 T_L + B̃_R φ_− T_L ) − Y_* (1/√2) ( T̃_2/3R φ_0 T_L − T̃_5/3R φ_+ T_L )
 + Y_* c_1 (1/√2) ( T̃_L φ†_0 T_R + B̃_L φ_− T_R ) − Y_* (1/√2) ( T̃_2/3L φ†_0 T_R − T̃_5/3L φ_+ T_R )
 − Y_* s_1 (1/√2) ( t_L φ†_0 T_R + b_L φ_− T_R )
 + Y_* ( T̃_5/3R φ_0 T_5/3L − T̃_R φ_− T_5/3L ) + Y_* c_1 ( B_L φ†_0 B̃_R − T_L φ_− T̃_5/3R ) − Y_* s_1 ( b_L φ†_0 B̃_R − t_L φ_− T̃_5/3R )
 + Y_* T̃_2/3L φ_+ B_R + Y_* T̃_5/3L φ_0 T_5/3R + h.c. (105)
B BOUND derivation
The SM prediction and the experimental measurement [20] of the b → sγ branching ratio are respectively:

BR_th = (315 ± 23) × 10⁻⁶ (106)
BR_exp = (355 ± 24 ± 9) × 10⁻⁶ (107)
The b → sγ decay rate is:
Γ_tot ∝ |C_7(μ_b)|² + |C′_7(μ_b)|² ≈ |C^SM_7(μ_b) + C^NP_7(μ_b)|² + |C′^NP_7(μ_b)|² (108)
If we consider only the C 7 contribution, we obtain:
Γ_tot/Γ_SM = 1 + 2 Re( C^SM_7(μ_b)* C^NP_7(μ_b) ) / |C^SM_7(μ_b)|² + O(∆C²_7) (109)
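The approximation in (109) is just the expansion of the squared modulus in (108); writing out the quadratic piece that is dropped:

```latex
\frac{\Gamma_{tot}}{\Gamma_{SM}}
  = \frac{\left|C_7^{SM}(\mu_b)+C_7^{NP}(\mu_b)\right|^2+\left|C_7'^{\,NP}(\mu_b)\right|^2}{\left|C_7^{SM}(\mu_b)\right|^2}
  = 1+\frac{2\,\mathrm{Re}\!\left[C_7^{SM}(\mu_b)^{*}\,C_7^{NP}(\mu_b)\right]}{\left|C_7^{SM}(\mu_b)\right|^2}
   +\frac{\left|C_7^{NP}(\mu_b)\right|^2+\left|C_7'^{\,NP}(\mu_b)\right|^2}{\left|C_7^{SM}(\mu_b)\right|^2},
```

where the last term is the O(∆C²_7) contribution neglected in (109).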
For µ b = 5 GeV, µ W = M W , α S = 0.118, the SM contribution to C 7 at the scale µ b reads [19]:
C^SM_7(μ_b) = 0.695 C^SM_7(μ_W) + 0.086 C^SM_8(μ_W) − 0.158 C^SM_2(μ_W) = −0.300 . (110)
The index i runs over the four up-type heavy fermions of the model, u^{(i)}; m_*^{(i)} denotes the physical mass of the u^{(i)} heavy fermion, and the α_1^{(i)}, α_2^{(i)} coefficients derive from the interactions:

L ⊃ ū^{(i)} [ α_1^{(i)} (1 + γ_5) + α_2^{(i)} (1 − γ_5) ] b H⁺ + h.c. (117)

[Figure 1: 1-loop infrared contribution to C_7 in the SM.]

f_RH = −0.777, for m_t = 174 GeV and M_W = 80.4 GeV. We point out that the bound on the CHM contributions to b → sγ, C^CH_7 in eq. (32), can be directly translated into a bound on the effective vertex W t_R b_R.

The corresponding C_7 contribution is negative. This implies a positive contribution C^{CH−IR−TS5}_7 (f_RH is negative). The condition in (32) is asymmetric and is stronger in the case of a positive C^{CH−IR}_7. Applying this condition to the infrared contribution in (48), we get, for r = M_D/M_D̃ = 1, the corresponding bound on the D = (T, B) doublet mass, which changes for r = 0.8 (1.2). In the TS10, there is only one doublet, D = (T, B).

[Figure: 1-loop infrared contribution to C_7.]

[Figure 4: TS5 and TS10 at small elementary/composite mixing angles s_1 and s_bR. ξ_ψ/χ denotes the degree of compositeness of a SM/heavy fermion. In the specific TS5 and TS10 models: ξ_qL ≡ s_1, ξ_D ≡ c_1, D = (T, B); D̃ denotes an SU(2)_L singlet heavy fermion. We highlight (in bold) the MFV bounds from C^CH_7. For the estimated bounds from C^CH_7 and for the bounds from C^{CH−UV}_7, we indicate both the values obtained for a positive (the first number) and for a negative (the second number, in parenthesis) contribution.]

(D_L)_23 ∼ m_s/(m_b V_ts), making use, again, of the estimates (54) and (53). The flavor violation in C^{CH−IR}_7 and (44), together with the condition on C^{CH−IR}_7(μ_w), eq. (33), gives the estimated bounds M_D ≳ 0.80 TeV (57) in models with P_C and without P_C symmetry. Considering the specific TS5 and TS10 models, C

[Figure 5: 1-loop CHM UV contribution to C_7.]

The index i runs over the four down-type heavy fermions of the model, d^{(i)} = B,

We show the bound on the doublet mass M_T as a function of s_1 from the condition on C^{CH−UV}_7, for different values of the ratio k = M_T/M_T̃ between doublet and singlet masses, fixing Y_* = 3 (left plot), and for different values of Y_*, fixing k = 1 (right plot). We set M_B̃ = M_T̃ and M_T′ = M_T. These values are obtained by taking into account the strongest of the neutral Higgs and charged Higgs contributions. We set s_bR = s_1.

[Figure 6: Bounds from C^{CH−UV}_7 in the TS5. Left plot: bounds for different values of k = M_T/M_T̃ and Y_* = 3; right plot: bounds for different values of Y_* and k = 1. We set M_B̃ = M_T̃ and M_T′ = M_T. Also shown is the exclusion region for s_1, obtained from the condition s_R = 2m_t/(Y_* v s_1) ≤ 1.]

[Figure 7: Bounds from C^{CH−UV}_7 in the TS10. Left plot: bounds for different values of k = M_T/M_T̃ (the custodian singlet masses satisfy M_B̃ ≈ c_R M_T̃ and M_B̃′ = M_T̃′ = c_R M_T̃), fixing Y_* = 3; right plot: bounds for different values of Y_*, fixing k = 1. We also show the exclusion region for s_1, obtained from the condition s_R = 2m_t/(Y_* v s_1) ≤ 1.]

We can see that in the TS10 model, the UV bounds are particularly strong in the case of

M_T5/3 = M_T2/3 = M_Q* , M_T′ = M_B′ = √(M²_Q* + ∆²_L2) , M_Q* = M_B−1/3 = M_B−4/3
M_T̃5/3 = M_T̃ = M_B̃ = M_Q̃* , M_T = M_B = M_T2/3 = M_T5/3 = M_Q*

From (6) we can see that heavier SM particles have larger degrees of compositeness: heavy SM particles, like the top, have to be quite composite while the light ones are almost elementary.

[Table 1: Estimated bounds from b → sγ in a generic composite Higgs model and in the specific TS5 and TS10 models.]

1/(M_B M_B̃) + (5/96) s²_R/M²_B̃ + O(s²_1) + O(s²_bR) (120)
Footnotes:
1. See Ref. [9] for two- and three-site effective theories where the full Higgs non-linearities are included.
2. A tension that instead affects Technicolor and Extended Technicolor models.
3. As a result of RG evolution above the compositeness scale. The smallness of the ∆ parameters also allows for a sort of GIM mechanism that suppresses large Flavor-Changing Neutral Currents [8].
4. The TS5 model has already been briefly described in [17], where it was adopted to study the phenomenology of heavy colored vectors at the LHC.
5. In fact, once produced, heavy fermions of the first two generations will also decay mostly to tops and bottoms, since flavor-changing transitions are not suppressed in the strong sector, while the couplings to the light SM quarks are extremely small; see the discussion in Ref. [2].
6. The contribution from heavy gluon and heavy fermion exchange is suppressed. Indeed this contribution is approximately diagonal in the flavor space.
7. That corresponds to the estimate of the hadronic matrix element ⟨2π(I = 0)|y_s O_G|K⁰⟩ in the chiral quark model and to first order in the chiral expansion.
Acknowledgments

I would like to thank Roberto Contino for having followed this work from the beginning and for comments on the manuscript.

After field rotation to the mass eigenstate basis, before EWSB, L_YUK reads as in eq. (96). After the EWSB, top and bottom masses arise, and we also have electroweak mixings among the fermions. The fermionic mass matrices for up and down states read, in the basis spanned by (t, T, T_2/3, T̃, T′) for the up sector and by (b, B, B′, B_−1/3, B̃) for the down-type fermions:

The scaling factor of the NP contribution to C_7 from the scale μ_W to the scale μ_b is:

By considering all the previous equations, we obtain at 95% C.L.:

The scaling factor of the NP contribution to C_7 from the scale m_* = 1 TeV to the scale μ_W is:

and we obtain at 95% C.L.:

If we consider only the C′_7 contribution, we obtain:

We have:

After the EWSB, we diagonalize the up-type quark mass matrix of (94) and the down-type one of (95) perturbatively in x ≡ Y_* v/(√2 m_*), neglecting O(x²). We find the following coefficients: the heavy fermion T_2/3 gives a contribution of O(x²) to k_charged and we can neglect it. Considering eq. (116) and the coefficients in (118), and neglecting again O(x²) terms, we obtain:

if we can neglect O(s²_1).

D Ultraviolet contribution

Summing up, we find in the TS5:
References

D. B. Kaplan and H. Georgi, Phys. Lett. B 136, 183 (1984).
R. Contino, T. Kramer, M. Son and R. Sundrum, JHEP 0705 (2007) 074, arXiv:hep-ph/0612180.
R. Contino and G. Servant, JHEP 0806 (2008) 026, arXiv:0801.1679 [hep-ph].
D. B. Kaplan, Nucl. Phys. B 365, 259 (1991).
K. Agashe, R. Contino and A. Pomarol, Nucl. Phys. B 719 (2005) 165-187, arXiv:hep-ph/0412089.
R. Contino, L. Da Rold and A. Pomarol, Phys. Rev. D 75, 055014 (2007), arXiv:hep-ph/0612048.
K. Agashe, R. Contino, L. Da Rold and A. Pomarol, Phys. Lett. B 641 (2006) 62, arXiv:hep-ph/0605341.
K. Agashe, G. Perez and A. Soni, Phys. Rev. Lett. 93 (2004) 201804, arXiv:hep-ph/0406101; Phys. Rev. D 71, 016002 (2005), arXiv:hep-ph/0408134.
G. Panico and A. Wulzer, JHEP 1109 (2011) 135, arXiv:1106.2719 [hep-ph].
A. Manohar and H. Georgi, Nucl. Phys. B 234, 189 (1984); H. Georgi and L. Randall, Nucl. Phys. B 276, 241 (1986).
K. Agashe and R. Contino, Phys. Rev. D 80, 075016 (2009), arXiv:0906.1542 [hep-ph].
S. J. Huber, Nucl. Phys. B 666, 269 (2003), arXiv:hep-ph/0303183.
C. Csaki, A. Falkowski and A. Weiler, JHEP 0809, 008 (2008), arXiv:0804.1954 [hep-ph].
S. Casagrande, F. Goertz, U. Haisch, M. Neubert and T. Pfoh, JHEP 0810, 094 (2008), arXiv:0807.4937 [hep-ph].
M. E. Albrecht, M. Blanke, A. J. Buras, B. Duling and K. Gemmler, JHEP 0909 (2009) 064, arXiv:0903.2415 [hep-ph].
K. Agashe, A. Azatov and L. Zhu, Phys. Rev. D 79, 056006 (2009), arXiv:0810.1016 [hep-ph].
C. Bini, R. Contino and N. Vignaroli, JHEP 1201 (2012) 157, arXiv:1110.6058 [hep-ph].
O. Gedalia, G. Isidori and G. Perez, Phys. Lett. B 682, 200-206 (2009), arXiv:0905.3264 [hep-ph].
A. J. Buras, in "Probing the Standard Model of Particle Interactions" (Les Houches lectures, 1997), arXiv:hep-ph/9806471.
T. Huber, J. Phys. Conf. Ser. 110, 052024 (2008), arXiv:0712.3158 [hep-ph].
K. Fujikawa and A. Yamada, Tokyo preprint UT-656, September 1993.
J. A. Aguilar Saavedra, Nucl. Phys. B 804 (2008) 160-192, arXiv:0803.3810 [hep-ph].
A. Pomarol and J. Serra, Phys. Rev. D 78, 074026 (2008), arXiv:0806.3247 [hep-ph].
G. F. Giudice, C. Grojean, A. Pomarol and R. Rattazzi, JHEP 0706, 045 (2007), arXiv:hep-ph/0703164.
M. Redi and A. Weiler, JHEP 1111 (2011) 108, arXiv:1106.6357 [hep-ph].
S. De Curtis, M. Redi and A. Tesi, JHEP 1204 (2012) 042, arXiv:1110.1613 [hep-ph].
M. S. Carena, E. Ponton, J. Santiago and C. E. M. Wagner, Phys. Rev. D 76 (2007) 035006, arXiv:hep-ph/0701055; A. Djouadi, G. Moreau and F. Richard, Nucl. Phys. B 773 (2007) 43-64, arXiv:hep-ph/0610173; C. Bouchart and G. Moreau, Nucl. Phys. B 810 (2009) 66-96, arXiv:0807.4461 [hep-ph].
L. Da Rold, JHEP 1102 (2011) 034, arXiv:1009.2392 [hep-ph]; E. Alvarez, L. Da Rold and A. Szynkman, JHEP 1105 (2011) 070, arXiv:1011.6557 [hep-ph].
B. Grzadkowski and M. Misiak, Phys. Rev. D 78 (2008) 077501, arXiv:0802.1413 [hep-ph].
J. Drobnak, S. Fajfer and J. F. Kamenik, Nucl. Phys. B 855 (2012) 82-99, arXiv:1109.2357 [hep-ph].
Scenario-based Optimization Models for Power Grid Resilience to Extreme Flooding Events
Ashutosh Shukla
Erhan Kutanoglu
John J Hasenbein
Graduate Program in Operations Research and Industrial Engineering, The University of Texas at Austin, Austin, United States

Keywords: hurricanes, storm surge, stochastic programming, robust optimization
We propose two scenario-based optimization models for power grid resilience decision making that integrate output from a hydrology model with a power flow model. The models are used to identify an optimal substation hardening strategy against potential flooding from storms for a given investment budget, which, if implemented, enhances the resilience of the power grid by minimizing the power demand that is shed. The same models can alternatively be used to determine the optimal budget that should be allocated for substation hardening when long-term forecasts of storm frequency and impact (specifically restoration times) are available. The two optimization models differ in how they capture risk attitude: one minimizes the average load shed for given scenario probabilities and the other minimizes the worst-case load shed without needing scenario probabilities. To demonstrate the efficacy of the proposed models, we further develop a case study for the Texas Gulf Coast using storm surge maps developed by the National Oceanic and Atmospheric Administration and a synthetic power grid for the state of Texas developed as part of an ARPA-E project. For a reasonable choice of parameters, we show that a scenario-based representation of uncertainty can offer a significant improvement in minimizing load shed as compared to using point estimates or average flood values. We further show that when the available investment budget is relatively high, solutions that minimize the worst-case load shed can offer several advantages compared to solutions obtained by minimizing the average load shed. Lastly, we show that even for relatively low values of lost load and short post-hurricane power restoration times, it is optimal to make significant investments in substation hardening to deal with the storm surge considered in the NOAA flood scenarios.
Introduction
In the past few years, hurricanes and tropical storms have caused significant damage to critical infrastructures such as transportation systems, healthcare services, and the power grid. Hurricanes Maria, Irma, and Harvey together cost nearly $265B which was more than 85% of total weatherrelated disaster costs in the U.S. in 2017 [1]. Harvey became not only the longest-lasting hurricane with a record level of rainfall but also the costliest at $130B, part of which was due to power outages.
Harvey damaged 90+ substations, downed 800+ transmission assets, 6000+ distribution poles, and 800+ miles of power lines, with a peak power generation loss of 11GW, affecting over 2 million people. It took 2 weeks and 12,000 crew members to restore power [2].
The power grid is impacted by hurricanes and tropical storms primarily due to strong winds and flooding. To address this, a vast body of research that examines the effect of wind fields on transmission lines and towers has been developed. However, to the best of our knowledge, the literature is quite scant on the models that assess the impact of flooding. At the same time, the cost of such disasters has increased in states like Texas which is exposed to the Atlantic basin through the Gulf of Mexico. Moreover, recent studies suggest that we are likely to see more frequent and intense hurricanes in the near future [3]. In response to this, some utilities have employed on-site meteorologists which they have reported to be beneficial [2]. These meteorologists localize the predictions to obtain flood estimates for the region of interest. The estimates are then used to determine the resources needed for forecasted damage and post-storm recovery. To further improve this decision-making process, we present an end-to-end scenario-based optimization approach that integrates the output from a predictive geoscience-based flood model with a power flow model to recommend a plan for substation hardening to relieve the flood impacts of the potential storms.
While doing so, the scenario-based approach accounts for the uncertainty associated with storms and their flood forecasts.
Specifically, we propose two scenario-based optimization models (stochastic and robust) for grid resilience decision making under uncertainty. The choice of the model to be used for decisionmaking depends on the available information about the uncertain parameters which in our case are the flood levels at substations in a flood scenario. We show how the proposed models can be used to identify the substations that should be protected and to what extent. We further explain how the same models can be used for deciding the optimal budget that should be allocated for substation hardening to minimize the expected total disaster management cost. The aforementioned features of the models can help power utilities and grid operators address their concerns like the unpredictable nature of the load loss, the potential for substation flooding, and the potential reduction in generator output due to loss of load as outlined in [2].
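To make the budget-allocation use concrete, here is a stylized, single-substation sketch (all costs, flood depths, and probabilities are hypothetical, and this is not the paper's actual formulation): the optimal hardening level balances the hardening investment against the expected cost of flood-induced outages.

```python
# Stylized budget sizing for one substation (hypothetical numbers, not the
# paper's model). Each scenario: (probability, flood depth in meters).
scenarios = [(0.70, 0.0), (0.20, 1.0), (0.08, 2.5), (0.02, 4.0)]

COST_PER_METER = 1.5e6   # $ per meter of permanent flood wall (assumed)
OUTAGE_COST = 20.0e6     # $ incurred if the substation floods in a scenario (assumed)

def expected_total_cost(wall_height_m):
    """Hardening investment plus expected outage cost; the substation is
    lost in a scenario whenever flood depth exceeds the wall height."""
    invest = COST_PER_METER * wall_height_m
    exp_outage = sum(p * OUTAGE_COST for p, depth in scenarios if depth > wall_height_m)
    return invest + exp_outage

# First-stage decision: brute-force search over candidate wall heights.
candidates = [0.0, 1.0, 2.5, 4.0]
best = min(candidates, key=expected_total_cost)
print(best, expected_total_cost(best))   # -> 1.0 3500000.0
```

With these numbers, a 1 m wall is optimal: taller walls cost more than the residual flood risk they remove, and no wall leaves too much expected outage cost on the table. The same trade-off, summed over many substations and scenarios, is what drives the budget question in the proposed models.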
The rest of the article is organized as follows. Section 2 presents a review of the literature on power grid resilience, particularly from a modeling and decision making viewpoint. Section 3 presents the overview of the proposed models followed by the notation, assumptions, mathematical formulation, and a brief discussion on the characteristics of the models. Section 4 is dedicated to the development of a case study for the Texas Gulf Coast and Section 5 is to the discussion of the results. We conclude with directions for future research in Section 6.
Literature Review
Power grid resilience to extreme events like cyber-attacks and natural disasters has been a topic of intense research in the past few years [4]. This includes studies focused on developing resilience metrics, methodological frameworks to enhance power grid resilience, and approaches to risk analysis [5,6,7]. In addition, several mathematical models have been developed to aid decision-making in different stages of the power grid resilience management cycle. These models can be categorized based on the planning phase they are developed for: mitigation, preparedness, response, and recovery. Taking flooding as the extreme event that the resilience models are designed to address, mitigation decisions concern the permanent hardening of grid components, re-design of the grid through the introduction of new substations and transmission lines, installation of backup generation, and similar measures. These decisions are made well before the start of the hurricane season, when limited information about upcoming hurricanes is available. Similarly, before an imminent hurricane, during the preparedness phase, one has to decide where to install temporary flood barriers like Tiger Dams™, where to deploy mobile substations to quickly recover damaged substations, and what part of the grid to disconnect to avoid fatal accidents due to the collapse of power lines.
In both the mitigation and preparation phases, decision-makers face significant uncertainty about storm characteristics like path, intensity, forward speed, and precipitation. The decision-making in the response and recovery phase, on the other hand, is not plagued by weather uncertainty. Since this paper focuses on decision-making under weather uncertainty, we limit the review's focus to models that aid decision-making during the mitigation and preparedness phases.
Models for both the mitigation and preparedness phases can be broadly categorized into two groups: (1) machine learning-based and (2) optimization-based. In the case of machine learning-based models, the focus is on prediction, not decision making. For example, a machine learning-based model may predict metrics of interest such as the number of outages, outage duration, etc., for an upcoming hurricane [8,9,10]. However, decision making based on these predictions, like which substations to protect and how to reconfigure the power grid network to minimize load shed, is not typically considered within the model. Optimization-based models leverage predictions for decision making. To do so, the predictions with associated uncertainty are represented using scenarios. The decision-making model is then coupled with these scenarios. The models that we propose in this study belong to the class of optimization-based models. In the subsequent paragraphs, we survey the key characteristics of some of these models and highlight their differences from what we propose.
The optimization models generally consist of two components: uncertainty quantification and decision modeling. To quantify uncertainty about the weather, we first generate a set of scenarios using various kinds of models, such as machine learning-based models, physics-based models, and expert opinions. Then, irrespective of how the scenarios are generated, the decision-making model considers the impacts on the grid under each scenario to recommend decisions that minimize a certain risk measure. The models we review in this subsection are based on different ways the aforementioned components of the optimization model can be developed. In particular, we review various methods of generating representative scenarios and incorporating them into alternative decision making models.
Scenario generation
Scenario generation is one of the most common uncertainty quantification methods for extreme weather. We divide scenario generation techniques into four categories. The first is based on fragility curves. The curve represents the failure probability of a component as a function of some loading parameter. For example, fragility curves for transmission towers have been developed with respect to wind speeds. Such curves have been used in various power grid resilience decision making studies [11,12]. The second is based on statistical methods. For example, in [13], the authors use historical hurricane and tropical storm data for developing a baseline scenario. The alternative scenarios are then developed by altering parameters from the historical data to simulate plausible climate-induced changes to storm behavior. In [14], the path and the wind field of typhoons are simulated using Monte Carlo sampling to quantify the spatio-temporal impacts of wind speed on the transmission line status. In addition to wind and flooding, winter storms in Texas, such as Uri of 2021, have propelled research in power grid resilience to extreme cold events. For example, in [15], the authors have developed a statistical model where they incorporate historical outage data to generate scenarios of generator outages due to extreme cold events. The third set of methods is based on physics-based hydrological models. Two such models, called WRF-Hydro and SLOSH, are used in [16,17] to generate flooding scenarios. In [18], the authors use physics-based climate models to evaluate the resilience of levee-protected electric power networks with the primary focus on performance degradation. The fourth category is based on combinatorial criteria like N − k. In this case, each scenario represents a way in which k out of N components can fail. A model based on this criterion is used in [19].
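As a minimal illustration of the fragility-curve and Monte Carlo sampling ideas above, the following sketch draws tower-failure scenarios from an assumed logistic fragility curve. The curve shape, its parameters, and the independence of failures across towers are all illustrative assumptions; independent sampling is precisely what ignores the spatial correlation that physics-based flood and weather models capture.

```python
import math
import random

def fragility(wind_speed_ms, mu=45.0, beta=8.0):
    """Illustrative S-shaped fragility curve (logistic form; parameters
    assumed): failure probability of a tower vs. local wind speed."""
    return 1.0 / (1.0 + math.exp(-(wind_speed_ms - mu) / beta))

def sample_failure_scenario(wind_at_towers, rng):
    """One scenario: a 0/1 failure state per tower, drawn independently from
    each tower's fragility probability (a simplification; real wind and
    flood fields are spatially correlated)."""
    return [1 if rng.random() < fragility(w) else 0 for w in wind_at_towers]

rng = random.Random(42)
wind_field = [30.0, 50.0, 65.0]   # local wind speeds (m/s) at three towers
scenarios = [sample_failure_scenario(wind_field, rng) for _ in range(1000)]

# Empirical failure frequency per tower tracks the fragility curve.
freq = [sum(s[i] for s in scenarios) / len(scenarios) for i in range(3)]
print(freq)
```

Each sampled 0/1 vector is one scenario that a downstream decision model can consume; the N − k criterion mentioned above would instead enumerate such vectors combinatorially rather than sample them.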
Decision modeling
Several optimization models have been developed for power grid resilience decision-making against extreme weather events. These include models that can aid in decision-making about the upgrade of the power grid network through a combination of hardening existing components, adding redundant lines, switches, generators, and transformers [14,19,20]. However, hardening large parts of the power grid can be financially infeasible. In such a scenario, stockpiling power grid components in strategic locations enhances resilience by expediting network restoration after the disaster. To decide how stockpiling of components should be done, Coffrin et al. [21] developed a two-stage stochastic mixed-integer program where the first-stage discrete decisions are about stockpiling power grid components and the second-stage decisions are about how to operate the power grid to minimize load-shed. Additionally, network reconfiguration before an imminent hurricane can also enhance resilience. The models proposed in [12] make such decisions using grid islanding techniques. However, none of these models have explicitly focused on assessing the impact of flooding on the power grid. On the other hand, there are several studies that assess the impact of flooding on other critical infrastructures. For example, Kim et al. [16] present a framework and a case study using hurricane Harvey to generate physics-based (hydrological) flood scenarios. These scenarios are then used for resilience decision-making for healthcare infrastructure in [22]. Scenarios generated from physicsbased models have also been used in [23] that developed a model to estimate the overall disaster cost due to physical damage loss, income losses, and inventory losses. In comparison, our proposed models are explicitly geared towards resilience decision-making for the power grid and have a power flow model nested within a larger substation hardening model.
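The two-stage structure described above (first-stage investment decisions, second-stage load shed evaluated per scenario) can be sketched on a toy instance with hypothetical data; a real model would solve a mixed-integer program with an embedded power flow, but brute-force enumeration makes the structure explicit:

```python
from itertools import product

# Toy two-stage instance (hypothetical data). First stage: choose substations
# to harden within a budget. Second stage: in each flood scenario, demand at
# an unhardened, flooded substation is shed.
demand = {"A": 100.0, "B": 60.0, "C": 40.0}    # MW served via each substation
harden_cost = {"A": 3.0, "B": 2.0, "C": 1.0}   # $M to harden each substation
budget = 4.0
scenarios = [(0.5, set()), (0.3, {"B"}), (0.2, {"A", "C"})]  # (prob, flooded set)

def expected_shed(hardened):
    return sum(p * sum(demand[s] for s in flooded if s not in hardened)
               for p, flooded in scenarios)

# Deterministic equivalent solved by enumerating first-stage decisions.
best = None
for choice in product([0, 1], repeat=3):
    hardened = {s for s, x in zip(("A", "B", "C"), choice) if x}
    if sum(harden_cost[s] for s in hardened) <= budget:
        val = expected_shed(hardened)
        if best is None or val < best[1]:
            best = (hardened, val)
print(best)   # best plan hardens A and C; expected shed 18.0 MW
```

Note that the optimal plan skips the cheapest substation (C alone) in favor of the combination that removes the most probability-weighted load shed within the budget, which is exactly the coupling between first- and second-stage decisions that a two-stage formulation captures.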
The models closest to ours are those of [17] and [24]. The study in [17] uses a set of scenarios, all based on Hurricane Harvey, generated by a hydrology model and focused on inland flooding. We instead consider a wider range of storms and storm characteristics, with scenarios based on NOAA's storm surge simulations. Mohadese et al. [24] propose a stochastic optimization model for identifying and protecting substations a day before an anticipated flooding event, i.e., with a focus on preparedness. Our proposed models differ in several ways. First, we generate scenarios using outputs from physics-based hydrological models to create flood maps for the region of interest. Our choice is based on the rationale that physics-based models represent flood levels across the region of interest in a way that is correlated in space and time; Mohadese et al. [24] do not consider the impact of correlated flooding. They also assume that a substation will transmit power if it is not flooded. In reality, this may not be true due to network effects, and we embed a power flow model within our larger resilience optimization model to address such effects. Finally, our models focus on long-term decision-making, highlighting mitigation-phase budgeting and decision-making.
Modeling
In this section, we first present an overview of the stochastic and robust optimization models developed to assist in grid resilience decision-making against extreme flooding events. Next, we state the key assumptions, introduce the notation, and provide detailed mathematical formulations. Finally, we highlight some key characteristics of the proposed models and explain how they can be used to address a wide variety of questions in grid resilience decision-making. We note that the models proposed in subsections 3.4 and 3.5 minimize a risk measure over the load shed for a single flood event; in subsection 3.6, we explain how the same models can be leveraged for multi-year planning. We represent the uncertainty in the meteorological forecasts using a set of hurricane parameters (Hurricane 1, . . . , n). In the next step, using these parameters as input, we run a hydrological model to obtain the corresponding flood maps. The flood maps are then used as input to the two-stage decision-making models, whose final output is a plan for substation hardening. Decision-making in both models occurs in two stages. The first-stage decisions determine which substations to harden and to what extent. We assume that the substation hardening measures are taken during the mitigation phase of the power grid resilience management cycle. Consequently, the decisions made using the model are not a response to any particular imminent hurricane. Instead, they are intended to harden the grid against multiple hurricanes potentially occurring over multiple years and to minimize the long-term disaster costs incurred due to flooding. One such mitigation measure is to build permanent protective structures, such as walls, around the substation periphery, as shown in Figure 2.
Overview of the proposed models
After we make the first-stage decisions, we assess their performance in dealing with the flood levels in the second stage. The second-stage assessment involves minimization of the load shed in multiple flood scenarios that may impact the power grid during the multi-year planning horizon.
Assumptions
In the proposed models, we make the following assumptions. The first is that a DC-based power flow approximation is acceptable. For a detailed explanation and derivation of the DC power flow equations from the AC equations, we refer to [25]. This approximation has been widely used and is embedded within larger strategic decision-making problems such as long-term capacity planning and operation of wholesale electricity grids. For the kind of models proposed in this paper, a detailed discussion of the difference in the quality of solutions obtained from different power flow approximation models is given in [26]. The second assumption is that we can model the substation hardening cost with fixed and variable components. The fixed cost is incurred when a substation is chosen for hardening. It can represent the cost of building the foundation on which the protective structure is built, and can also include the costs associated with transporting construction resources to the substation site. The variable cost, on the other hand, is a function of the height of flooding to which the substation is made resilient. In our case, we assume that the variable cost depends linearly on this height, which is reasonable when we build wall-like structures to protect substations, as shown in Figure 2. Third, we assume that each substation's hardening and flood levels are discrete and finite; in the proposed formulations, they are nonnegative integers. Fourth, we assume that all the flooded substations within the network experience the same downtime and are recovered simultaneously. Lastly, we assume that the value of load loss can be quantified in dollars per hour.
Notation
The proposed models use the following notation. Note that all the cost parameters used in the models are in dollars and all power grid parameters are in the per-unit system. Notation not defined in this section and appearing later in the text is defined as introduced.
Stochastic Optimization Model
The two-stage stochastic optimization model (SO) is expressed as:
$$L^*_{SO} = \min_{x \in X} L_{SO}(x), \tag{1a}$$
where
$$L_{SO}(x) = \sum_{k \in K} p_k L(x, k). \tag{1b}$$
The objective function in (1a) minimizes the expected unsatisfied power demand (load shed) over the scenarios in set K. Here, X represents the set of feasible first-stage decisions. The following constraints define the set:
$$\sum_{i \in I_f} \left( f_i y_i + v_i x_i \right) \le I, \tag{2a}$$
$$x_i \le H_i y_i, \quad \forall i \in I_f. \tag{2b}$$
Note that the variables $x_i$ and $y_i$ are defined only for substations that are flooded in at least one scenario. Constraint (2a) enforces that the sum of the fixed and variable costs incurred due to substation hardening does not exceed the investment budget $I$. Constraints (2b) place an upper bound on the extent of flooding to which a substation can be made resilient while linking the variables $x_i$ and $y_i$ for each substation $i$. Such constraints represent engineering and practical challenges that may arise while building protective structures that are too tall.
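For concreteness, membership in the feasible set defined by (2a)-(2b) can be checked with a few lines of code. The sketch below is a minimal pure-Python illustration; the substation costs, height limits, budget, and candidate plans are all hypothetical values, not data from the case study.

```python
# Sketch: checking whether a hardening plan (x, y) lies in the feasible set X
# defined by constraints (2a)-(2b). All names and numbers are illustrative.

def in_feasible_set(x, y, f, v, H, budget):
    """x[i]: hardening height, y[i]: 1 if substation i is hardened,
    f[i]/v[i]: fixed/variable hardening costs, H[i]: max hardening height."""
    cost = sum(f[i] * y[i] + v[i] * x[i] for i in x)  # left side of (2a)
    if cost > budget:                                 # budget constraint (2a)
        return False
    return all(x[i] <= H[i] * y[i] for i in x)        # linking constraints (2b)

# Three flooded substations with hypothetical costs (in $M) and height limits.
f = {1: 2.0, 2: 3.0, 3: 1.5}
v = {1: 0.5, 2: 0.4, 3: 0.6}
H = {1: 4, 2: 5, 3: 3}

plan = {"x": {1: 3, 2: 0, 3: 2}, "y": {1: 1, 2: 0, 3: 1}}
print(in_feasible_set(plan["x"], plan["y"], f, v, H, budget=10.0))
# Hardening without paying the fixed cost (x > 0 while y = 0) violates (2b).
print(in_feasible_set({1: 2, 2: 0, 3: 0}, {1: 0, 2: 0, 3: 0}, f, v, H, budget=10.0))
```

Note how (2b) forces the fixed-cost indicator $y_i$ to 1 whenever any hardening height $x_i > 0$ is chosen.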
In (1b), $L_{SO}(x)$ represents the expected load shed when the first-stage decision is $x$. Here, $p_k$ represents the probability of scenario $k$, and $L(x, k)$ is the recourse function representing the minimum load shed when the first-stage decision is $x$ and scenario $k$ is realized. The recourse function is defined as follows:
$$L(x, k) = \min \; \sum_{j \in J} \left( D_j - s_j \right), \tag{3a}$$
subject to
$$(1 - z_j) M \ge \Delta_{\theta(j)k} - x_{\theta(j)}, \quad \forall j : \theta(j) \in I_f, \tag{3b}$$
$$2 z_j M \ge 1 - 2 \left( \Delta_{\theta(j)k} - x_{\theta(j)} \right), \quad \forall j : \theta(j) \in I_f, \tag{3c}$$
$$z_j = 1, \quad \forall j : \theta(j) \in I \setminus I_f, \tag{3d}$$
$$u_j \le z_j, \quad \forall j \in J, \tag{3e}$$
$$s_j \le z_j D_j, \quad \forall j \in J, \tag{3f}$$
$$u_j \underline{G}_j \le g_j \le u_j \overline{G}_j, \quad \forall j \in J, \tag{3g}$$
$$-z_{\lambda(r)} F_r \le e_r \le z_{\lambda(r)} F_r, \quad \forall r \in R, \tag{3h}$$
$$-z_{\mu(r)} F_r \le e_r \le z_{\mu(r)} F_r, \quad \forall r \in R, \tag{3i}$$
$$B_r \left( \alpha_{\lambda(r)} - \alpha_{\mu(r)} \right) \ge M \left( z_{\lambda(r)} + z_{\mu(r)} \right) - 2M + e_r, \quad \forall r \in R, \tag{3j}$$
$$B_r \left( \alpha_{\lambda(r)} - \alpha_{\mu(r)} \right) \le -M \left( z_{\lambda(r)} + z_{\mu(r)} \right) + 2M + e_r, \quad \forall r \in R, \tag{3k}$$
$$\sum_{r \in N^{\text{out}}_j} e_r - \sum_{r \in N^{\text{in}}_j} e_r = g_j - s_j, \quad \forall j \in J, \tag{3l}$$
$$-\pi \le \alpha_j \le \pi, \quad \forall j \in J, \tag{3m}$$
$$\alpha_\beta = 0. \tag{3n}$$
The objective function in (3a) minimizes the unsatisfied power demand when the first-stage decision is $x$ and the flood scenario realized is $k$. Constraints (3b) and (3c) link the first-stage substation hardening decisions to the second-stage, scenario-dependent power flow decisions. For a given hardening decision at a substation, the provided protection level is compared against the flood height at that substation in the given scenario. Depending on whether the hardening level can withstand the flooding, the status of the corresponding bus is set as operational or not; this is indicated by the variable $z_j$. For the substations that are not flooded in any of the scenarios, the status of the corresponding buses is set to operational in constraints (3d).
Constraints (3e) capture generator dispatch decisions for operational generators. When $z_j = 0$, we cannot inject power into the network through bus $j$ and therefore set $u_j = 0$. If $z_j = 1$, we let the recourse problem decide whether the power generated at bus $j$ should be used or not. Constraints (3f) place an upper bound on the amount of power that can be supplied at bus $j$ (which is the demand at that bus). If bus $j$ is flooded, then $z_j = 0$ and no power can be supplied to the loads connected to the bus. Constraints (3g) place upper and lower bounds on the amount of power that can be generated at bus $j$. If bus $j$ is flooded, then $u_j = 0$ and thus $g_j = 0$. If, on the other hand, bus $j$ is not flooded, the recourse problem, a binary linear program, determines the amount of power that should be generated at bus $j$. Constraints (3h) and (3i) prevent power from flowing through a branch whose terminal buses are not operational, while constraints (3j) and (3k) enforce the DC power flow relation on branches whose terminal buses are both operational. Constraints (3l) impose power balance at each bus, constraints (3m) bound the voltage angles, and constraint (3n) fixes the angle of the reference bus.
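The linking logic of constraints (3b)-(3c) can be checked directly for integer data. The pure-Python sketch below, with an illustrative value of $M$, enumerates both candidate statuses for a bus and confirms that the big-M pair admits exactly one: operational precisely when the hardening level covers the flood height.

```python
def bus_status(delta, x, M):
    """Return the unique z in {0, 1} satisfying the big-M constraints
    (3b)-(3c) for integer flood height delta and hardening level x."""
    feasible = [z for z in (0, 1)
                if (1 - z) * M >= delta - x            # constraint (3b)
                and 2 * z * M >= 1 - 2 * (delta - x)]  # constraint (3c)
    assert len(feasible) == 1  # the pair pins down z uniquely
    return feasible[0]

# With an adequate M, the constraints reproduce: operational iff protected.
M = 10
for delta in range(6):
    for x in range(6):
        assert bus_status(delta, x, M) == (1 if delta <= x else 0)
print("z = 1 exactly when the hardening level covers the flood height")
```

The enumeration shows why two constraints are needed: (3b) alone would allow $z = 0$ for a protected bus, and (3c) alone would allow $z = 1$ for a flooded one.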
Robust optimization model
In the two-stage robust optimization model (RO), we minimize the maximum unsatisfied power demand value across all scenarios. Mathematically, the problem can be stated as follows:
$$L^*_{RO} = \min_{x \in X} L_{RO}(x), \tag{4a}$$
where
$$L_{RO}(x) = \max_{k \in K} L(x, k). \tag{4b}$$
The expression in (4b) gives the maximum scenario-based load shed. The RO problem in (4a) can be reformulated as
$$\min_{x \in X} \left\{ \tau : \tau \ge L(x, k), \ \forall k \in K \right\}, \tag{5}$$
where τ is an epigraphical variable.
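For a fixed first-stage decision, SO and RO differ only in how the scenario load sheds are aggregated: expectation in (1b) versus worst case in (4b). The sketch below, using made-up recourse values rather than model output, shows how the two criteria can rank candidate plans differently.

```python
# Sketch: for fixed first-stage decisions, the SO and RO objectives differ only
# in how scenario load sheds are aggregated (expectation vs. worst case).
# The recourse values L(x, k) below are illustrative numbers, not model output.

p = [0.25, 0.25, 0.25, 0.25]          # scenario probabilities
L_xa = [120.0, 80.0, 40.0, 0.0]       # load shed of plan A per scenario (MW)
L_xb = [70.0, 70.0, 60.0, 60.0]       # load shed of plan B per scenario (MW)

so = lambda L: sum(pk * Lk for pk, Lk in zip(p, L))  # objective (1b)
ro = max                                             # objective (4b)

print(so(L_xa), ro(L_xa))   # plan A: lower expectation, higher worst case
print(so(L_xb), ro(L_xb))   # plan B: higher expectation, lower worst case
```

Here SO would prefer plan A (expected shed 60 vs. 65 MW) while RO would prefer plan B (worst case 70 vs. 120 MW), which is exactly the trade-off discussed later in Section 5.3.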
Model Discussion
In this section, we highlight some of the key characteristics of the models proposed above. First, both SO and RO have relatively complete recourse: for any feasible first-stage decision and any scenario, setting all dispatch, generation, supply, flow, and angle variables to zero satisfies the second-stage constraints. This leads us to a feasible solution with no power generation and maximum load shed.
Second, the proposed models can be further tightened based on some simple observations. To do so, we first compute the maximum flood level across all the scenarios for each flooded substation. Let us represent this value by the parameter $W_i$, $\forall i \in I_f$. Next, we observe that the model need not harden any substation to a flood height higher than $W_i$. Therefore, in constraints (2b), we can replace $H_i$ with $\min(H_i, W_i)$, $\forall i \in I_f$. Constraints (3b) and (3c) use the big-M method. Here, we need to determine the smallest value of $M$ for each constraint. The smallest big-M values for constraints (3b) and (3c) are $W_{\theta(j)}$ and $\min(H_{\theta(j)}, W_{\theta(j)}) + 0.5$, respectively. To verify this, recall the assumption that both the flood height and the hardening level can only take non-negative integer values. Also, observe that
$$-\min(H_{\theta(j)}, W_{\theta(j)}) \le \Delta_{\theta(j)k} - x_{\theta(j)} \le W_{\theta(j)}.$$
Now, in constraints (3b), we need
$$\frac{\Delta_{\theta(j)k} - x_{\theta(j)}}{M} \le 1.$$
The smallest value of $M$ that ensures this is $W_{\theta(j)}$. Similarly, in constraints (3c), we need
$$\frac{1 - 2\left(\Delta_{\theta(j)k} - x_{\theta(j)}\right)}{2M} \le 1.$$
The smallest value of $M$ to achieve this is $\min(H_{\theta(j)}, W_{\theta(j)}) + 0.5$. Finally, we can also tighten constraints (3j) and (3k). For both sets of constraints, the smallest value of $M$ is $F_r + 2\pi B_r$.
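The claimed smallest big-M values for (3b) and (3c) can be verified by brute force over the integer lattice of flood heights and hardening levels. The sketch below uses illustrative bounds $H = 6$ and $W = 4$; it checks both that the correct bus status remains feasible and that the wrong status is cut off.

```python
# Sketch: brute-force check of the smallest big-M values claimed for (3b) and
# (3c), using illustrative bounds H = 6 and W = 4 and integer levels.

H, W = 6, 4
cap = min(H, W)  # effective upper bound on the hardening level x

def valid(M3b, M3c):
    """True if, for every (delta, x), the correct status z (1 iff delta <= x)
    satisfies (3b)-(3c) while the wrong status violates at least one of them."""
    for delta in range(W + 1):
        for x in range(cap + 1):
            z = 1 if delta <= x else 0
            ok = lambda z_: ((1 - z_) * M3b >= delta - x            # (3b)
                             and 2 * z_ * M3c >= 1 - 2 * (delta - x))  # (3c)
            if not ok(z) or ok(1 - z):
                return False
    return True

assert valid(W, cap + 0.5)            # the claimed smallest values work
assert not valid(W - 0.5, cap + 0.5)  # any smaller M in (3b) fails
assert not valid(W, cap)              # any smaller M in (3c) fails
print("smallest big-M values: M_(3b) =", W, " M_(3c) =", cap + 0.5)
```

The half-integer offset in $M_{(3c)}$ comes from the constant 1 on the right-hand side of (3c), which is why integrality of $\Delta$ and $x$ matters for this bound.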
Third, both SO and RO can be used to make hardening decisions that provide flood mitigation over a planning horizon. To see how, notice that the objective function in SO computes the expected load shed for a single flood event. However, substation hardening in practice is done over a planning horizon that lasts multiple hurricane seasons and provides permanent protection against multiple flood events. Therefore, to help make hardening decisions that affect performance over multiple events, we first need to compute the disaster management costs due to load shedding over the multi-year planning horizon. To do so, let us assume that the expected number of hurricanes that the study region experiences during the planning horizon is $\gamma$. We further assume that, during the planning horizon, the total recovery, economic, and social costs are represented by a value of load loss of $\delta$ dollars per megawatt-hour. Finally, we assume that it takes $h$ hours to repair all the substations (and restore power to normal operation) starting immediately after a flood event. We believe this assumption is reasonable at the mitigation phase of the decision-making process and avoids explicit and detailed modeling of the recovery process. Then, for a given investment budget, the substation hardening decisions that achieve the minimum expected total disaster management cost due to load shedding during the planning horizon are found by computing
$$DM_{SO} = \gamma h \delta \, L^*_{SO}. \tag{6}$$
In (6), the optimal substation hardening plan to minimize $DM_{SO}$ is the same as the plan obtained by solving SO. This is because $DM_{SO}$ always equals the objective function of SO multiplied by a positive constant. Therefore, irrespective of how the frequency of hurricanes, the restoration time, and the value of load loss change over time, the optimal substation hardening plan remains the same as the one obtained from SO. Here, we assume that the probability distribution over the flood scenarios, and thus over the hurricanes causing them, does not change significantly over time. In practice, this is reasonable for the planning horizons over which substation hardening is considered (5-10 years). Similarly, the optimal substation hardening decision that minimizes the maximum total disaster management cost due to load shedding during the planning horizon is found by computing
$$DM_{RO} = \gamma h \delta \, L^*_{RO}. \tag{7}$$
A key observation is that $DM_{RO}$ provides an upper bound on $DM_{SO}$. To understand why, note that the set of feasible first-stage solutions is the same for both SO and RO. Further, observe that both models are bounded below by a minimum objective value of zero and have relatively complete recourse. Now, let $x_R$ be a feasible solution to RO. Then,
$$\begin{aligned}
L^*_{SO} &= \min_{x \in X} L_{SO}(x) && (8a) \\
&\le L_{SO}(x_R) && (8b) \\
&= \sum_{k \in K} p_k L(x_R, k) && (8c) \\
&\le \sum_{k \in K} p_k \max_{k \in K} L(x_R, k) && (8d) \\
&= \max_{k \in K} L(x_R, k) \sum_{k \in K} p_k && (8e) \\
&= \max_{k \in K} L(x_R, k) && (8f) \\
&= L_{RO}(x_R), && (8g)
\end{aligned}$$
with Equation (8b) holding at equality if and only if $x_R$ is optimal for SO, and Equation (8d) holding at equality when the load shed of $x_R$ is the same in every scenario with positive probability. Finally, in the models proposed so far, we assume that we have a predetermined budget for substation hardening. One may, however, be interested in determining the optimal budget allocation for minimizing a risk measure over the disaster cost incurred due to both load shedding and substation hardening. The proposed models can easily be modified to find the optimal budget for substation hardening and the corresponding hardening decisions. In the case of SO, this can be done by solving
$$TDM_{SO} = \min \left\{ \omega \sum_{k \in K} p_k L(x, k) + \sum_{i \in I_f} \left( f_i y_i + v_i x_i \right) : x_i \le H_i y_i, \ \forall i \in I_f \right\}, \tag{9}$$
where $\omega = \gamma h \delta$. The value of $\sum_{i \in I_f} \left( f_i y_i + v_i x_i \right)$ in the optimal solution represents the optimal investment budget. Similarly, for RO, we compute the optimal investment budget by solving
$$TDM_{RO} = \min \left\{ \omega \tau + \sum_{i \in I_f} \left( f_i y_i + v_i x_i \right) : \tau \ge L(x, k), \ \forall k \in K \right\}. \tag{10}$$
Case Study
In this section, using a case study for the Texas coastal region, we show how the proposed models can be used for power grid resilience decision making. The two main inputs to the proposed models are a set of scenarios that represent flood profiles for different hurricane types and the network parameters for the DC power flow model. To represent flood profiles, we use storm-surge maps developed by the National Oceanic and Atmospheric Administration (NOAA) [27]. For the electric grid, we use the ACTIVSg2000 dataset developed as part of an ARPA-E project [28]. The details of each component are described in the following subsections. We further highlight that although we use the proposed models for storm surge-induced damages, they can be adopted for flooding of any kind as long as the corresponding flood scenarios are available. This could include scenarios for inland flooding as developed in [16] and used for infrastructure resilience problems in [17] and [22].
Lastly, to solve the various parameterizations of the proposed models discussed in this case study, we use the Gurobi solver with the barrier algorithm [29]. Within the solver, we set the MIP-gap threshold to 0.5 percent and limit the solve time to 6 hours. The model is solved on an Apple M1 pro machine with 16 GB of unified memory.
Flood Scenarios
We use the storm-surge maps developed by NOAA using the Sea, Lake, and Overland Surges from Hurricanes (SLOSH) model as flood scenarios. To generate these flood maps, SLOSH uses a simplified parametric wind field model that takes as input the following parameters: storm track, the radius of the maximum wind speeds, and the pressure differential between the storm's central pressure and the ambient pressure. The simulated wind fields are then used to compute surface stresses on the water beneath the hurricane. Finally, the induced stress on the surface of the water is used to determine the storm surge. For a detailed discussion on SLOSH, we refer to [30].
Simulation studies developed using SLOSH have been extensively used to assist emergency management agencies. In addition to real-time storm-surge guidance for imminent hurricanes, NOAA has developed two composite products, Maximum Envelopes of Water (MEOW) and Maximum of the MEOWs (MOM), to provide manageable datasets for medium- to long-term hurricane evacuation planning.
To develop these datasets, hurricane simulations with different combinations of intensity, forward speed, direction, and tide levels are run in parallel using SLOSH for the region of interest. Each run may yield a different storm surge value for the same grid cell. The maximum over all such values is taken to represent the MEOW value of that grid cell. The same process is repeated for each grid cell within the study region to construct a MEOW map. The resolution of the grid is varied to balance accuracy with computation cost: it is finer in regions close to the coast and gets coarser farther out into the ocean. MEOW maps are used to incorporate the uncertainties associated with a given forecast and to eliminate the possibility of missing a critical storm track that generates extreme storm surge values. These maps are generated from several thousand SLOSH runs. In this study, we use the MEOW maps to represent flooding due to storm surge. An example MEOW map is shown in Figure 3.
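The cell-wise maximum that defines a MEOW map is simple to state in code. The sketch below builds a toy MEOW map from three illustrative 3x3 surge fields standing in for parallel SLOSH runs; the numbers are invented for the example.

```python
# Sketch: constructing a MEOW map as the cell-wise maximum of storm-surge
# fields from parallel SLOSH runs. The 3x3 grids are illustrative heights (ft).

runs = [
    [[0.0, 1.2, 2.5],
     [0.5, 2.0, 3.1],
     [0.0, 0.8, 1.9]],
    [[1.1, 0.9, 1.0],
     [2.2, 1.5, 2.8],
     [0.3, 1.6, 0.7]],
    [[0.2, 2.4, 0.6],
     [1.0, 1.1, 3.5],
     [0.9, 0.5, 2.2]],
]

n_rows, n_cols = 3, 3
meow = [[max(run[r][c] for run in runs) for c in range(n_cols)]
        for r in range(n_rows)]

for row in meow:
    print(row)
# Each MEOW cell keeps the worst surge seen in any run, so hardening to the
# MEOW height protects against every simulated track in the family.
```

This is also why a substation hardened against a MEOW map is protected against each of the individual runs that contributed to it.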
Within a MEOW map, it is possible that the water levels for adjacent cells come from different SLOSH runs of specific simulated storms. Nevertheless, since these are the maximum water levels over multiple tracks, we can be assured that if we harden a substation for a particular MEOW map, it will provide resilience towards flooding for any of the parallel runs that constitute the MEOW map. Arguably, MEOW maps are still better at representing flood uncertainty than other scenario generation methods in which flooding at different nodes within a network is considered independent. Moreover, these MEOW maps have been used as scenarios in different stochastic optimization models for patient evacuation [16] and grid hardening [31]. As mentioned before, we assume that in the mitigation phase the decision-maker does not have information on any specific storm. Therefore, to model the uncertainty in the mitigation phase, we assume that all the remaining MEOW maps are representative of the flooding scenarios. In our case, they are considered equally likely for the stochastic model. This is based on the premise that the larger set of simulations that produced the MEOW maps was sampled according to some underlying distribution, and therefore already reflects the characteristics of the distribution implicitly used by NOAA in developing the MEOW product. Furthermore, these MEOW maps provide us with the storm-surge flood height above ground at each of the substations (and thus for the buses within) in the power grid network. In the proposed models, this is represented by $\Delta_{ik}$, the level of flooding at substation $i$ in scenario $k$.
Power Grid
To model the power grid for the state of Texas, we use the synthetic grid ACTIVSg2000, which contains 2000 buses (within 1250 substations) and 3206 branches. The grid, though synthetic, is designed to maintain statistical similarity with the actual Texas grid. We make two further modifications to the grid instance to make it computationally tractable while preserving the coastal part of the grid, which is affected by storm surge and is thus the focus of the study.
First, we perform a network reduction on the original grid instance using the electrical equivalent (EEQV) feature in PSS®E to focus on the grid components subject to storm surge flooding. The reduction is such that the buses in the inland region that are not exposed to storm-surge-induced flooding are aggregated into a much smaller set of nodes. The part of the grid that is in close proximity to the Texas Gulf Coast, and is therefore prone to flooding due to storm surge, is retained almost as is. The effect of the network reduction is detailed in Table 1 and the topological changes are visualized in Figure 4. Second, we alter the locations of some of the substations.
Results and Discussion
In this section, we first determine the expected value of perfect information from a model that can produce perfect forecasts. In the same subsection, we compute the value of the stochastic solution for the different budget levels. Next, in subsection 5.2 we show how the proposed model can be used to determine the optimal investment budget for substation hardening.
Due to dynamically changing climate conditions and ocean temperature, the probabilities of different types of hurricanes can change over time. In subsection 5.3, we show when a decisionmaker can take advantage of using solutions from RO to hedge against this uncertainty by paying a relatively insignificant premium. Finally, the last subsection is dedicated to the analysis of the distribution of load shed across scenarios for the solutions obtained from SO and RO.
The Values of Stochastic Solution and Perfect Information
In the proposed two-stage decision-making models, both the number of variables and the number of constraints grow linearly with the number of scenarios, making large instances computationally challenging. In that case, instead of solving SO for large grid instances with many scenarios, one may be interested in solving simpler versions of the problem. One approach could be to reduce the size of the problem by constructing a single-scenario problem where the flood height at each substation is the average of the flood heights across all the scenarios. Another could be to solve the problem for each scenario individually. The first-stage solutions thus obtained can then be analyzed and potentially combined using some heuristic rule. In this section, we analyze the quality of the solutions obtained using such approaches. To do so, we use two widely known concepts from the stochastic programming literature: the value of the stochastic solution and the expected value of perfect information.
In Figure 5, we plot the value of $L^*_{SO}$ as a function of the investment budget. To determine the budget levels at which the parametric study should be performed, we first compute the minimum budget such that $L^*_{SO} = 0$. This is computed by solving a slightly modified version of SO. Specifically, we remove constraint (2a) from the formulation and replace the objective function with the minimization of the substation hardening expenditure (i.e., the left-hand side of (2a)). We also force full satisfaction of demand by replacing constraint (3f) with $s_j = D_j, \ \forall j \in J$. The optimal value of this modified version of SO is in turn the minimum hardening budget required for zero load shed. For the parameters assumed in this case study, the corresponding minimum budget turns out to be $71.35M. Any additional budget beyond this will not improve the objective function value (load shed) and therefore the corresponding optimal solution. Using $71.35M as the reference, in Figure 5, we increase the budget from $0M, in increments of $10M, until the value exceeds $71.35M.
(We note that the same budget values can be used for the RO parametric study. This is because the minimum budget required to achieve the expected load shed of zero is the same as what is required to achieve zero load shed in all the scenarios.)
In addition to computing L * SO for different budget levels, we also compute lower and upper bounds on this value as shown in Figure 5. To compute the upper bound, let us first consider a single-scenario problem called the expected value problem which is defined as
$$L^*_{EV} = \min_{x \in X} L(x, \bar{k}). \tag{11}$$
Here, $\bar{k}$ represents a scenario where the flood level at each substation is the mean of the flood heights at that particular substation across all the scenarios. The optimal solution to this problem is called the expected value solution, which we denote by $\bar{x}$. Next, to evaluate the quality of the first-stage substation hardening decisions obtained by solving (11), we compute the expected load shed across all the scenarios by fixing the first-stage decisions to $\bar{x}$, denoted by $L_{SO}(\bar{x})$. This value serves as an upper bound on $L^*_{SO}$. For a detailed explanation, we refer to [33]. We also observe in Figure 5 that the difference between $L_{SO}(\bar{x})$ and $L^*_{SO}$ increases with an increase in the budget. We refer to this difference as the value of the stochastic solution (VSS), as it represents the value of using a scenario-based representation of the uncertainty as opposed to the average flood values, all calculated within the SO framework. We further highlight that when $\bar{x}$ is implemented, the load shed does not strictly decrease with an increase in the budget beyond $50M. This is expected because, for the mean scenario, the model achieves zero load shed with $50.15M. As discussed before, we know that it takes a minimum of $71.35M to achieve zero load shed across all the scenarios. However, once the model achieves zero load shed in the mean scenario, there is no incentive to use additional resources. This also shows that the value of using SO over the expected value problem increases with the investment budget.
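The construction of the mean scenario used in the expected value problem (11) amounts to a probability-weighted average of the flood heights per substation. The sketch below illustrates this with made-up flood heights and equiprobable scenarios.

```python
# Sketch of how the mean scenario for the expected value problem (11) is
# built: the flood height at each substation is the probability-weighted
# average across scenarios. The flood heights Delta[i][k] are illustrative.

Delta = {              # Delta[i][k]: flood height at substation i in scenario k
    "sub1": [0, 4, 2],
    "sub2": [3, 3, 3],
    "sub3": [1, 0, 2],
}
p = [1 / 3, 1 / 3, 1 / 3]  # scenario probabilities (equally likely MEOW maps)

k_bar = {i: sum(pk * dk for pk, dk in zip(p, heights))
         for i, heights in Delta.items()}
print(k_bar)  # mean flood height per substation
```

Note that the averaged heights need not be integers, which is one reason the expected value problem is only a heuristic stand-in for the full scenario set.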
To compute a lower bound on $L^*_{SO}$ for a given budget level, we assume that the decision-maker has access to perfect information about the flood levels, and therefore can better prepare for each scenario (i.e., in a way, fine-tuning the mitigation plan according to each scenario). That is, perfect information allows the decision-maker to make possibly different substation hardening decisions in each scenario to minimize the load shed in that particular scenario. These solutions are referred to as wait-and-see solutions. In this case, we compute
$$L^*_{WS} = \sum_{k \in K} p_k \left[ \min_{x \in X} L(x, k) \right], \tag{12}$$
where the first-stage decisions are scenario-dependent and the value $L^*_{WS}$ is referred to as the wait-and-see bound. Due to the scenario-specific mitigation decisions, $L^*_{WS}$ provides a lower bound on $L^*_{SO}$. The difference between $L^*_{SO}$ and $L^*_{WS}$ is referred to as the expected value of perfect information. It represents the maximum amount a decision-maker would be willing to pay in exchange for complete and accurate information about the uncertainty.
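Given a table of recourse values, the stochastic objective, the wait-and-see bound (12), and the expected value of perfect information can be computed in a few lines. The sketch below uses two hypothetical plans and invented load shed values to make the relationship $L^*_{WS} \le L^*_{SO}$ concrete.

```python
# Sketch: computing the wait-and-see bound (12) and the expected value of
# perfect information (EVPI) from per-scenario recourse values.
# All plans and load shed numbers are illustrative.

p = [0.5, 0.3, 0.2]  # scenario probabilities

# L_table[x][k]: load shed of candidate first-stage plan x in scenario k (MW).
L_table = {
    "plan_a": [100.0, 20.0, 10.0],
    "plan_b": [60.0, 50.0, 40.0],
}

# Stochastic solution: one plan must be chosen before uncertainty resolves.
L_SO = min(sum(pk * Lk for pk, Lk in zip(p, L)) for L in L_table.values())

# Wait-and-see: the best plan may be chosen separately in each scenario.
L_WS = sum(pk * min(L[k] for L in L_table.values()) for k, pk in enumerate(p))

EVPI = L_SO - L_WS
print(L_SO, L_WS, EVPI)
```

Here the wait-and-see decision-maker picks plan B in the severe first scenario and plan A elsewhere, which is exactly why perfect information can only lower the expected load shed.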
The key point that we want to emphasize is that unless the flood model can make perfect predictions, which is usually not the case with weather models, not accounting for uncertainty and using just point estimates or mean values of the flood forecasts can lead to significantly inferior decisions. This is evident from the VSS. In fact, as it turns out in this case, the first-stage decisions obtained from SO, even with imperfect forecasts, lead to a load shed performance that is close to what we would get from a flood model offering perfect prediction. Put another way, accounting for flood uncertainty, even with a small number of scenarios, can relieve the decision-maker of the burden of obtaining perfect information without a significant compromise in performance.
We also note that all bounds converge to the same expected value at zero budget. This is because, no matter how well we represent the uncertainty or how good the predictions are, if we do not have any resources to use towards mitigation in the first stage, we cannot prevent load shed in the second stage with no protection against flooding. The expected value of the load shed is then only a function of the second-stage decisions (i.e., the best power flow the grid can deliver with flooded substations). Moreover, at a sufficiently high budget value, both $L^*_{SO}$ and $L^*_{WS}$ converge to zero. This is expected because, even with poor predictions or poor uncertainty representation, we can still prevent any load shed in all the scenarios if we have enough resources to harden all substations to any desirable extent.

Performing analysis with the budget level as a parameter, as described in this subsection, requires repeatedly solving SO with different parameters and can be time-consuming. To address this, the property that SO has relatively complete recourse is exploited to warm start the optimization solver and improve the solution time. As was stated in subsection 3.6, we can heuristically generate an initial feasible solution with one hundred percent load shedding for an investment budget of zero. Once we obtain an optimal solution corresponding to budget level zero, we use it to warm start the solver for the next higher budget level. The process is repeated to generate high-quality feasible solutions for each subsequent budget level. We further note that this approach to warm-starting the solver is also applicable in the case of RO.
Determining optimal budget for substation hardening
Subsection 5.1 focuses on demonstrating how SO is used for resilience decision-making under a given investment budget for substation hardening. The models can alternatively be used to decide the optimal value of the investment budget that minimizes the expected total disaster management cost over a multi-year planning horizon. To demonstrate this, we solve $TDM_{SO}$ as described in Section 3.6, assuming that, on average, 10 hurricanes hit the Texas Gulf Coast during the planning horizon. The values of the expected total disaster management cost for different combinations of restoration times and values of load loss are plotted in Figure 7. As we see in the figure, when the value of load loss is low and the restoration time is short ($250 and 6 hours, respectively), the total disaster management cost is relatively low. Furthermore, the model recommends investing only a quarter of the budget required to achieve zero load shed for substation hardening. This is because the cost associated with losing power is quite low and the outage is restored relatively quickly. On the other hand, when both the load loss value and restoration time are high ($5000 and 48 hours, respectively), the model recommends investing $71.35M (equal to the investment required to achieve zero load shed) for substation hardening, avoiding any costs due to load loss. This is because, for the chosen values, the costs associated with power loss are quite high, and it is better to make the full hardening investment. We note that the value of load loss for Texas has been estimated at around $6000 per MWh [34]. If we take that as given, the results in Figure 7 suggest that we must make investments to achieve close to zero load shed even if the restoration time is as short as 6 hours.
We further observe in Figure 7 that the solution corresponding to a value of load loss of $1000 per MWh and a restoration time of 6 hours is the same as the solution with a value of load loss of $250 per MWh and a restoration time of 24 hours. This is expected because, in $TDM_{SO}$, both sets of parameters lead to the same optimization problem ($\omega$ is the same). It is further apparent that the optimal investment budget for substation hardening increases monotonically from 0 to $71.35M with the increase in the value of $\omega$. Therefore, for any investment budget value between 0 and $71.35M, there exists a unique $\omega$ for which the corresponding budget is optimal. We can use this insight to quickly approximate the optimal investment budget for any combination of the value of load loss, restoration time, and the average number of hurricanes that may hit the study region during the planning horizon. To do so, we use only the values of $L^*_{SO}$ for $I \in \{0, 10, 20, \ldots, 80\}$ as computed in Section 5.1. For any given values of $\gamma$, $h$, and $\delta$ for which the optimal investment budget needs to be approximated, we compute the value of $DM_{SO} + I$ for each $I \in \{0, 10, 20, \ldots, 80\}$. The value of $I$ for which $DM_{SO} + I$ is smallest is the best approximation of the optimal investment budget for the chosen value of $\omega$ (calculated from $\gamma$, $h$, and $\delta$). In Figure 8, we show the value of $DM_{SO} + I$ for $I \in \{0, 10, 20, \ldots, 80\}$ for different values of $\omega$. The depicted values of $\omega$ are reasonable in the sense that they can be derived from the $\gamma$, $h$, and $\delta$ used in Figure 7. For example, we notice in Figure 7 that when $\gamma = 10$, $h = 6$, and $\delta = 250$ ($\omega = 15000$), the optimal investment budget is $17.05M. An approximation of this value can be quickly inferred by looking at Figure 8 for $\omega = 15000$.
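The budget-approximation rule described above is a one-line argmin once the table of $L^*_{SO}$ values is available. The sketch below demonstrates it with an illustrative table of expected load shed values; these numbers are invented for the example and are not the case-study results.

```python
# Sketch: approximating the optimal hardening budget from a precomputed table
# of L*_SO values, as described above. The L_SO_star values are illustrative
# expected load sheds (MW), not the case-study results.

gamma, h, delta = 10, 6, 250  # hurricanes, repair hours, $/MWh
omega = gamma * h * delta     # = 15000 ($ per MW over the horizon)

budgets = [0, 10, 20, 30, 40, 50, 60, 70, 80]                 # $M
L_SO_star = [9000, 4500, 2200, 1300, 800, 500, 350, 250, 200]  # MW

# Total expected cost = disaster-management cost DM_SO + hardening budget ($M).
total = [omega * L / 1e6 + I for I, L in zip(budgets, L_SO_star)]
best = budgets[total.index(min(total))]
print(best)  # approximate optimal investment budget, in $M
```

With these illustrative numbers the minimum total cost is attained at a $30M budget: below it, each extra $M of hardening still saves more than $1M in expected disaster cost; above it, the marginal savings no longer cover the investment.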
Optimization in the face of uncertain probabilities
While framing the power-grid resilience decision-making problem as a two-stage stochastic program, we assume that the probability distribution over the scenarios is known. However, the probabilities that we assign to each scenario need not be constant with time. Changing climate oscillations and ocean temperatures routinely affect these probabilities. This is reflected in NOAA's annual hurricane season prediction categories: normal, above normal, and below normal. In this case, if the probabilities over scenarios change as compared to what we planned for, the expected performance may deteriorate. To hedge against this, we recommend solving RO and comparing the expected performance of the corresponding decisions. If the difference in the expected performance of the decisions recommended by RO and SO is not significant, we advise adopting the decisions recommended by RO. In this way, irrespective of how the probabilities evolve with time, we know that the expected total cost due to load loss will be less than or equal to the bound obtained in Section 3.6 if the optimal first-stage decisions recommended by RO are adopted. This can be confirmed for the parameters assumed in the case study through Figure 6. The robust solution refers to the value of the expected load shed when the first-stage hardening decisions recommended by RO are adopted. Since these decisions are not necessarily optimal for the assumed probability distribution, they lead to a higher expected load shed as compared to the stochastic solution. However, for the RO first-stage decisions, the maximum load shed in any scenario is capped by τ as represented by the red curve. In this case, we observe that the difference in the expected value of performance is almost trivial for investment budget values of $40M and above.
Therefore, in those cases, it makes sense to adopt decisions recommended by RO as opposed to what we get from SO to hedge against the change in probabilities due to factors like ocean temperature and climate oscillations. In cases when the difference between the expected performance of SO and RO is significant, the decisions depend on the risk preference of the decision-maker.
SO vs RO: Analysis of the load shed distribution
We conclude the discussion with an analysis of the distribution of the load shed across scenarios for both SO and RO. The load shed in each scenario for both models is represented by the value of the recourse function corresponding to the optimal solution. These values are used to construct the corresponding histogram for both SO and RO at different budget levels as shown in Figure 9.
As expected, the histograms shift to the left with the increase in the budget for both SO and RO.
We observe that the histograms for both SO and RO coincide when the investment budget is $0M. This is expected because there is no hardening done in either model. Consequently, the load shed in each scenario is identical. Moreover, using Figure 6 and Figure 9, we observe that the robust solutions provide an inferior performance in expectation but the RO load shed remains relatively stable across scenarios as compared to SO. We also conclude that for investment budget values of more than $40M, the robust solutions offer much better performance against extreme scenarios while also offering good expected value performance. Therefore, in this case, it is reasonable to implement RO decisions for budget values beyond $40M. In this way, Figures 6 and 9 can be used together to understand the behavior of both SO and RO in expectation and across all the individual scenarios.

Figure 9: The histograms of load shed across scenarios for the SO and RO solutions at different budget levels. Notice that the x-axis for each sub-plot has a different scale for better depiction.
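The expectation-versus-worst-case trade-off described above can be illustrated with a toy calculation. All numbers below are invented; they are not taken from the case study.

```python
# Hypothetical per-scenario load-shed values (MW) for some fixed budget,
# invented to illustrate the SO-vs-RO trade-off discussed in the text.
p = [0.25, 0.25, 0.25, 0.25]          # scenario probabilities
shed_so = [0.0, 5.0, 10.0, 90.0]      # SO solution: best in expectation, heavy tail
shed_ro = [30.0, 30.0, 35.0, 40.0]    # RO solution: stable across scenarios

exp_so = sum(pk * s for pk, s in zip(p, shed_so))
exp_ro = sum(pk * s for pk, s in zip(p, shed_ro))
print(exp_so, max(shed_so))    # SO: lower expected shed, but a worse extreme scenario
print(exp_ro, max(shed_ro))    # RO: higher expected shed, but a capped extreme
```

Here the SO solution wins in expectation (26.25 vs. 33.75) while the RO solution caps the worst scenario (40 vs. 90), which is exactly the pattern visible in the histograms of Figure 9.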
Conclusions
In this study, we propose an integrated framework supported by two scenario-based optimization models for power grid resilience decision making against extreme flooding events. The models recommend an optimal substation hardening plan by integrating the predictions generated from a state-of-the-art hydrological model with a DC optimal power flow model. While doing so, we account for uncertainty in hurricane predictions using a scenario-based representation. Furthermore, using a case study for the state of Texas, specifically the coastal region prone to storm surge flooding, we demonstrate how the proposed models can together be used to address a wide variety of insight-seeking questions related to power grid resilience decision making. Specifically, we show that using a scenario-based representation of flood uncertainty can offer significant value over mean flood forecasts. We also explain the advantages of using flood maps generated from physics-based models as opposed to other scenario generation methods popular in the literature. Furthermore, we show how we can estimate the expected value of perfect information from near-perfect flood forecasts.
For the case study developed in the paper, we observe that by using a scenario-based representation of uncertainty, decision-makers can reduce their reliance on access to perfect forecasts.
In addition to quantifying the value of using flood scenarios, we further show how we can use the proposed two-stage framework to determine the optimal investment budget for substation hardening.
Lastly, we explain how we can use the two-stage robust optimization model for power-grid resilience decision making when information about the probability distribution over the flood scenarios is unavailable.
For future research, we suggest four directions. First is the development of scenarios that can consider precipitation-induced inland flooding in addition to storm-surge. Second is the development of methods to account for equity while making substation-hardening decisions. Third is developing models that take into account preparedness measures while making longer-term mitigation decisions, leading to three-stage optimization models. Fourth is the development of decomposition techniques to solve such models in a reasonable time. These challenges form the basis of our ongoing research.
A two-stage stochastic optimization model is developed to address situations where the uncertainty about hurricane-induced flooding is modeled using a probability distribution. In this case, the model minimizes the expected unsatisfied power demand (also referred to as the expected load shed) due to component failures (i.e., flooded substations) over a set of scenarios. The two-stage robust model, on the other hand, requires no information about the probability distribution. The model instead minimizes the maximum load shed in any scenario within the uncertainty set. A general framework representative of the proposed models is shown in Figure 1. As shown in the figure,
A scenario in the proposed models represents the water levels at different substations obtained from the flood map of a specific hurricane type. Hurricanes with different characteristics such as direction, intensity, forward speed, etc. lead to different levels of flooding, generating different scenarios. These scenarios are representative of the flooding that the region of study can experience over a specific time period (typically multiple years). A distinguishing feature of the proposed models is the way we generate the scenarios. Instead of using popular techniques like fragility curves, we use flood maps for scenario representation because they capture the effects of correlated flooding. This is important because the failure of a substation within a power grid can have network effects on the other parts of the grid. To account for such details, we need to know not only which substations fail more frequently but also the combinations of substations that fail together. Our proposed model accounts for these details and uses them to evaluate the effects of such damages (in the form of load shed) during decision-making by solving a power flow model.

The power grid network considered in the proposed models is represented by a graph where the buses are represented by the nodes and the branches interconnecting the buses are represented by the edges of the graph. The branches of the network are held up by transmission towers. We assume that these towers are well above the ground and therefore immune to flooding. It is the substations and the components within them that are susceptible to flooding. In this study, we assume that the substations are outdoor open-air facilities. Therefore, when a substation is flooded, we assume that all the components within the substation and the branches connected to all the buses within the substation are out of order.

Figure 2: An example permanent hardening structure at a substation
For each of these scenarios, we overlay the power grid network on the flood map to identify parts of the network that are flooded. Given the flood height and the level of hardening at a substation, the model infers whether a substation is flooded in a particular scenario. If a substation is flooded, all the buses within the substation and the branches connected to those buses are considered to be out of order. Once the damaged state of the power grid network is determined, we solve the second-stage assessment (the so-called recourse problem), which is a DC power flow model, to estimate the load shed given the state of the grid. The second-stage decisions in the recourse problem determine the routing of power to minimize unsatisfied power demand. It should be noted that although both the stochastic and robust models involve the same sets of decision variables, the specific solutions suggested by them can be vastly different. The robust model gives us the flexibility to make decisions in the absence of any information about the probability distribution. These decisions can, however, be far more conservative than the decisions recommended by the stochastic model.
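The flooded-substation inference can be illustrated with a toy check. We assume here, consistent with the Δ_ik and x_i notation used in the formulation, that a substation survives scenario k exactly when its hardening height is at least the flood height; all names and numbers are hypothetical.

```python
# Toy illustration (hypothetical data): substation i is flooded in scenario k
# when the flood height Delta_ik exceeds its hardening height x_i.
delta = {("sub_A", "h1"): 3, ("sub_A", "h2"): 0,
         ("sub_B", "h1"): 6, ("sub_B", "h2"): 2}   # flood heights (ft)
hardening = {"sub_A": 4, "sub_B": 4}                # first-stage decisions x_i (ft)

def is_flooded(sub, scen):
    return delta[(sub, scen)] > hardening[sub]

print([s for s in ("sub_A", "sub_B") if is_flooded(s, "h1")])   # ['sub_B']
```

The set of flooded substations per scenario is exactly what determines which buses and branches are switched off before the recourse problem is solved.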
Sets
I: Set of substations indexed by i
I_f: Set of substations that are flooded in at least one scenario
J: Set of buses indexed by j
K: Set of scenarios indexed by k
R: Set of branches indexed by r
B_i: Set of buses at substation i
N^in_j: Set of branches incident on bus j with power flowing into bus j
N^out_j: Set of branches incident on bus j with power flowing out of bus j

Parameters
M: An arbitrarily large constant
f_i: Fixed cost of hardening at substation i
v_i: Variable cost of hardening at substation i
H_i: Maximum flood height to which substation i can be hardened
θ(j): Substation that contains bus j
Δ_ik: Flood height at substation i in scenario k (a non-negative integer value)
B_r: Susceptance of branch r
F_r: Maximum power that can flow in branch r
λ(r), μ(r): Head bus and tail bus of branch r
D_j: Load at bus j
G_j, G̅_j: Minimum and maximum generation at bus j
β: Index of the reference bus
p_k: Probability of scenario k
I: Total investment budget for substation hardening

Variables
y_i: Binary variable indicating whether substation i is chosen for permanent hardening
x_i: Non-negative integer variable indicating the discrete height of hardening at substation i
z_j: Binary variable indicating if bus j is operational
s_j: Non-negative real variable indicating load satisfied at bus j
g_j: Non-negative real variable indicating power generated at bus j
u_j: Binary variable indicating if the generator at bus j is used
α_j: Real variable indicating the voltage phase angle of bus j
e_r: Real variable indicating power flowing in branch r
Constraints (3h) and (3i) place restrictions on the amount of power that can flow through branch r. If the bus at either end of a branch is flooded, then no power can flow through it. On the other hand, if the buses at both ends of the branch are operational, then a maximum of F_r power can flow through it in either direction. Constraints (3j) and (3k) enforce an approximation to Ohm's law. If both ends of a branch are operational, then the amount of power flowing in the branch is governed by the equations

B_r (α_λ(r) − α_μ(r)) = e_r, ∀r ∈ R.

If the bus at either end of the branch is flooded, then the above equation need not hold. This is achieved by introducing big-M values. The formulation can be further tightened by appropriately determining the values of big-M; a discussion on this is presented in Section 3.6. Constraints (3l) represent the flow balance, which states that the net power injected into the network at bus j is the difference between the power generated and consumed at the same bus. Constraints (3m) impose limits on the phase angle values at buses. Finally, constraints (3n) set the phase angle of the reference/slack bus to 0.
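A minimal, self-contained sketch of this recourse step on a hypothetical 3-bus line network (bus 0 a generator, buses 1 and 2 loads). For brevity we bake the operational flags directly into the variable bounds and simply drop the Ohm's-law rows for dead branches, instead of the big-M relaxation used in the actual formulation; all numbers are invented.

```python
import numpy as np
from scipy.optimize import linprog

def recourse_load_shed(flooded, D=(0.0, 50.0, 30.0), Gmax=100.0,
                       B=(100.0, 100.0), F=(100.0, 100.0)):
    """Toy DC power-flow recourse: maximize served load on a 3-bus line.
    Buses: 0 (generator), 1 and 2 (loads); branches: 0-1 and 1-2.
    `flooded` is a set of flooded bus indices. Returns the load shed."""
    up = [j not in flooded for j in range(3)]        # operational flags z_j
    branches = [(0, 1), (1, 2)]
    live = [up[a] and up[b] for a, b in branches]    # branch usable?
    # variables: s0, s1, s2, g0, a0, a1, a2, e0, e1  (9 total)
    c = np.zeros(9); c[0:3] = -1.0                   # maximize total served load
    A_eq, b_eq = [], []
    # flow balance: served load - generation + net outflow = 0 at each bus
    A_eq.append([1, 0, 0, -1, 0, 0, 0,  1,  0]); b_eq.append(0)   # bus 0
    A_eq.append([0, 1, 0,  0, 0, 0, 0, -1,  1]); b_eq.append(0)   # bus 1
    A_eq.append([0, 0, 1,  0, 0, 0, 0,  0, -1]); b_eq.append(0)   # bus 2
    # Ohm's law B_r(a_head - a_tail) = e_r, imposed only on live branches
    if live[0]: A_eq.append([0, 0, 0, 0, B[0], -B[0], 0, -1, 0]); b_eq.append(0)
    if live[1]: A_eq.append([0, 0, 0, 0, 0, B[1], -B[1], 0, -1]); b_eq.append(0)
    A_eq.append([0, 0, 0, 0, 1, 0, 0, 0, 0]); b_eq.append(0)      # reference angle
    bounds = ([(0, D[j] if up[j] else 0) for j in range(3)]       # s_j
              + [(0, Gmax if up[0] else 0)]                       # g_0
              + [(-np.pi, np.pi)] * 3                             # phase angles
              + [(-F[r], F[r]) if live[r] else (0, 0) for r in range(2)])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    served = res.x[0] + res.x[1] + res.x[2]
    return sum(D) - served

print(recourse_load_shed(set()))    # no flooding: all 80 MW of load served
print(recourse_load_shed({1}))      # bus 1 flooded: bus 2 islanded, all load shed
```

Flooding the middle bus kills both branches, so even the unflooded bus 2 loses service — the correlated, network-propagated losses that motivate using flood maps rather than independent failure probabilities.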
First, both SO and RO have relatively complete recourse. That is, no matter what first-stage decisions we make, the second-stage problem always has a feasible solution. To verify this, consider the case where, irrespective of the values of z_j, we set u_j = 0 for all j in the recourse problem.
holding at equality if and only if load shed values are equal across all the scenarios. The above inequalities establish that the objective function value corresponding to any feasible solution to RO provides an upper bound on the optimal objective function value of SO. Since any optimal solution to RO is also feasible, it acts as a valid upper bound. In fact, it is the tightest upper bound that can be obtained in this manner.
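The argument above can be written compactly. Writing $L_k(x)$ for the load shed in scenario $k$ under first-stage decision $x$, and $x_{\mathrm{RO}}$ for an optimal robust solution with objective value $z_{\mathrm{RO}}$ (this notation is ours), we have

$$
\min_{x} \sum_{k} p_k\, L_k(x) \;\le\; \sum_{k} p_k\, L_k(x_{\mathrm{RO}}) \;\le\; \max_{k}\, L_k(x_{\mathrm{RO}}) \;=\; z_{\mathrm{RO}},
$$

where the first inequality holds because $x_{\mathrm{RO}}$ is feasible for SO, and the second holds at equality if and only if the load shed is the same in every scenario.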
Figure 3: A sample MEOW generated using category 5 storms approaching the Texas Gulf Coast in the north-west direction with a forward speed of 5 mph
In the original dataset, some of the substations are placed in the middle of a water body and are thus flooded by default. To address this, we remap the coordinates of the 1250 substations in the dataset to the coordinates of substations obtained from the Homeland Infrastructure Foundation-Level Data (HIFLD) Electric Substations dataset [32]. The HIFLD dataset contains information about real-world substations across the U.S. The remapping is done by solving an optimization problem that minimizes the total displacement due to remapping. Note that this process does not change the power grid's electrical structure and makes it more realistic by using the real-world substation locations (closer to the actual Texas grid) and capturing their real-world flood risks via MEOW-based flood scenarios. Lastly, the fixed cost and the variable cost for substation hardening are assumed to be $25,000 and $100,000 per foot, respectively. These values are derived from various utility reports.

Figure 4: The figure shows (a) the ACTIVSg2000 Synthetic Grid for Texas, and (b) the reduced grid obtained after performing the network reduction. The red elements represent the new nodes and branches that were introduced as artifacts of the reduction procedure to maintain equivalence in the grid characteristics.
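The paper only states that an optimization problem minimizing total displacement is solved; one natural way to pose such a remapping is as an assignment problem, sketched below with random coordinates. The assignment formulation and all data here are our illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
synthetic = rng.uniform(0, 100, size=(5, 2))   # synthetic substation coordinates
hifld = rng.uniform(0, 100, size=(5, 2))       # real-world (HIFLD-like) coordinates

# cost[i, j] = displacement if synthetic substation i is mapped to location j
cost = np.linalg.norm(synthetic[:, None, :] - hifld[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)       # one-to-one map, min total displacement
print(cost[rows, cols].sum())                  # total displacement of the remapping
```

Because the mapping is one-to-one, each synthetic substation inherits exactly one real-world location, leaving the electrical structure of the grid untouched while its geography (and hence its flood exposure) becomes realistic.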
Figure 5: The graph of the expected load shed values for the expected value solution (green), the stochastic solution (blue), and the wait-and-see solution (orange), as a function of the budget for substation hardening

Figure 6: The objective function value for RO (i.e., the optimal maximum load shed) and the expected load shed values for the robust and the stochastic solutions as a function of budget for substation hardening

... the expected value solution or the mean value solution, henceforth represented by x̄. We note that the problem in (11) is a single-scenario problem and therefore much smaller in size. However, the reduction of scenarios leads to the loss of information about the substations that flood together.
Furthermore, for sensitivity analysis, we consider three different values of restoration time: 6, 24, and 48 hours. Similarly, for the value of load loss, we consider 5 different values: $250, $500, $1000, $3000, and $5000 per MWh.
Figure 7: Total disaster management cost as a function of the value of load loss for different restoration times

Figure 8: Total disaster management cost as a function of budget for different values of ω

For that value of ω, the top-left curve achieves its minimum at $20M, which is closest to $17.05M in the set {0, 10, 20, ..., 80}.
Table 1: Grid characteristics before and after the electrically equivalent reduction was performed.

Grid Characteristic         Before   After
Substations (#)               1250     362
Buses (#)                     2000     663
Transformers (#)               860     358
Transmission Lines (#)        2346    1151
Generators (#)                 544     254
Generation Capacity (GW)     96.29   50.98
Load (GW)                    67.11   39.69
The original MEOW map dataset for the Texas coastal region comprises 192 flood maps, which are constructed using eight different storm directions (west-south-west, west, west-north-west, north-west, north-north-west, north, north-north-east, and north-east), six different intensity categories (0-5), and four different forward speeds (5, 10, 15, and 25 mph). To demonstrate the usefulness of the proposed approach with a computationally tractable use case, we reduce the size of the problem by eliminating a subset of less severe scenarios. We first drop all the MEOW maps corresponding to four directions (west-south-west, north, north-north-east, and north-east) as hurricanes belonging to these categories do not cause significant flooding in the Texas Gulf Coast. We also drop the MEOW maps corresponding to category 0-4 hurricanes. The storms belonging to category 5 are more intense versions of these storms. Hence, the model ends up recommending decisions to prepare for worst-case situations (and thus implicitly prepares for category 0-4 hurricanes as well). Moreover, as discussed
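The scenario-reduction step above amounts to a simple filter over the 192 direction/category/speed combinations; a sketch (the abbreviated direction labels are ours):

```python
from itertools import product

directions = ["WSW", "W", "WNW", "NW", "NNW", "N", "NNE", "NE"]
categories = range(6)              # intensity categories 0-5
speeds = [5, 10, 15, 25]           # forward speeds (mph)

all_maps = list(product(directions, categories, speeds))
print(len(all_maps))               # 8 * 6 * 4 = 192 MEOW maps

# keep only category-5 storms from the four directions that cause significant flooding
kept = [(d, c, s) for d, c, s in all_maps
        if d not in {"WSW", "N", "NNE", "NE"} and c == 5]
print(len(kept))                   # 4 directions * 1 category * 4 speeds = 16 scenarios
```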
References

[1] A. B. Smith. U.S. Billion-dollar Weather and Climate Disasters, 1980-present (NCEI accession 0209268). Available at https://www.ncei.noaa.gov/archive/accession/0209268 (last accessed on February 13, 2023), 2020.
[2] Hurricane Harvey Event Analysis Report. Technical report, North American Electric Reliability Corporation, Atlanta, GA, 2018.
[3] P. J. Webster, G. J. Holland, J. A. Curry, and H.-R. Chang. Changes in Tropical Cyclone Number, Duration, and Intensity in a Warming Environment. Science, 309(5742):1844-1846, 2005.
[4] N. Bhusal, M. Abdelmalak, M. Kamruzzaman, and M. Benidris. Power system resilience: Current practices, challenges, and future directions. IEEE Access, 8:18064-18086, 2020.
[5] M. Panteli and P. Mancarella. Influence of extreme weather and climate change on the resilience of power systems: Impacts and possible mitigation strategies. Electric Power Systems Research, 127:259-270, 2015.
[6] M. Panteli, P. Mancarella, D. N. Trakas, E. Kyriakides, and N. D. Hatziargyriou. Metrics and quantification of operational and infrastructure resilience in power systems. IEEE Transactions on Power Systems, 32(6):4732-4742, 2017.
[7] F. H. Jufri, V. Widiputra, and J. Jung. State-of-the-art review on power grid resilience to extreme weather events: Definitions, frameworks, quantitative assessment methodologies, and enhancement strategies. Applied Energy, 239:1049-1065, 2019.
[8] E. Kabir, S. D. Guikema, and S. M. Quiring. Predicting thunderstorm-induced power outages to support utility restoration. IEEE Transactions on Power Systems, 34(6):4370-4381, 2019.
[9] D. McRoberts, S. M. Quiring, and S. D. Guikema. Improving hurricane power outage prediction models through the inclusion of local environmental factors. Risk Analysis, 38, 2016.
[10] A. Reilly and S. D. Guikema. Bayesian multiscale modeling of spatial infrastructure performance predictions with an application to electric power outage forecasting. Journal of Infrastructure Systems, 21(2), 2015.
[11] C. Zhai, T. Y. Chen, A. G. White, and S. D. Guikema. Power outage prediction for natural hazards using synthetic power distribution systems. Reliability Engineering & System Safety, 208:107348, 2021.
[12] M. Panteli, D. N. Trakas, P. Mancarella, and N. D. Hatziargyriou. Boosting the power grid resilience to extreme weather events using defensive islanding. IEEE Transactions on Smart Grid, 7(6):2913-2922, 2016.
[13] A. Staid, S. D. Guikema, R. Nateghi, S. M. Quiring, and M. Z. Gao. Simulation of tropical cyclone impacts to the U.S. power system under climate change scenarios. Climatic Change, 127(3):535-546, 2014.
[14] W. Zhang, C. Shao, B. Hu, K. Xie, P. Siano, M. Li, and M. Cao. Transmission defense hardening against typhoon disasters under decision-dependent uncertainty. IEEE Transactions on Power Systems, pages 1-11, 2022.
[15] B. Austgen, M. Garcia, B. Pierre, J. Hasenbein, and E. Kutanoglu. Winter Storm Scenario Generation for Power Grids Based on Historical Generator Outages. In 2022 IEEE/PES Transmission and Distribution Conference and Exposition (T&D), 2022.
[16] K. Y. Kim, W. Y. Wu, E. Kutanoglu, J. J. Hasenbein, and Z. L. Yang. Hurricane scenario generation for uncertainty modeling of coastal and inland flooding. Frontiers in Climate, 3:16, 2021.
[17] L. Souto, J. Yip, W. Y. Wu, B. Austgen, E. Kutanoglu, J. J. Hasenbein, Z. L. Yang, C. W. King, and S. Santoso. Power system resilience to floods: Modeling, impact assessment, and mid-term mitigation strategies. International Journal of Electrical Power & Energy Systems, 135:107545, 2022.
[18] S. Miraee-Ashtiani, F. Vahedifard, M. Karimi-Ghartemani, J. Zhao, I. Mallakpour, and A. AghaKouchak. Performance degradation of levee-protected electric power network due to flooding in a changing climate. IEEE Transactions on Power Systems, 37(6):4651-4660, 2022.
[19] K. Garifi, E. S. Johnson, B. Arguello, and B. J. Pierre. Transmission grid resiliency investment optimization model with SOCP recovery planning. IEEE Transactions on Power Systems, 37(1):26-37, 2022.
[20] H. Nagarajan, E. Yamangil, R. Bent, P. V. Hentenryck, and S. Backhaus. Optimal resilient transmission grid design. In 2016 Power Systems Computation Conference (PSCC), pages 1-7, 2016.
[21] C. Carleton, P. V. Hentenryck, and R. Bent. Strategic stockpiling of power system supplies for disaster recovery. In 2011 IEEE Power and Energy Society General Meeting, pages 1-8, 2011.
[22] G. T. Tutay, J. J. Hasenbein, and E. Kutanoglu. Scenario-based Optimization Model for Long-term Healthcare Infrastructure Resilience against Flooding. In IIE Annual Conference Proceedings, pages 1-6, 2022.
[23] Y. Miura, P. C. Dinenis, K. T. Mandli, G. Deodatis, and D. Bienstock. Optimization of Coastal Protections in the Presence of Climate Change. Frontiers in Climate, 3:613293, 2021.
[24] M. Movahednia, A. Kargarian, C. E. Ozdemir, and S. C. Hagen. Power grid resilience enhancement via protecting electrical substations against flood hazards: A stochastic framework. IEEE Transactions on Industrial Informatics, 18(3):2132-2143, 2022.
[25] D. K. Molzahn and I. A. Hiskens. A Survey of Relaxations and Approximations of the Power Flow Equations. Foundations and Trends in Electric Energy Systems, 4(1-2):1-221, 2019.
[26] B. Austgen, J. J. Hasenbein, and E. Kutanoglu. Impacts of Approximate Power Flow Models on Optimal Flood Mitigation in a Stochastic Program. In IIE Annual Conference Proceedings, pages 518-523, 2021.
[27] B. C. Zachry, W. J. Booth, J. R. Rhome, and T. M. Sharon. A national view of storm surge risk and inundation. Weather, Climate, and Society, 7(2):109-117, 2015.
[28] A. B. Birchfield, T. Xu, K. M. Gegner, K. S. Shetye, and T. J. Overbye. Grid structural characteristics as validation criteria for synthetic networks. IEEE Transactions on Power Systems, 32(4):3258-3265, 2017.
[29] Gurobi Optimization, LLC. Gurobi Optimizer Reference Manual, 2021.
[30] B. Glahn, A. Taylor, N. Kurkowski, and W. A. Shaffer. The role of the SLOSH model in National Weather Service storm surge forecasting. National Weather Digest, pages 1-12.
[31] A. Shukla, J. J. Hasenbein, and E. Kutanoglu. A Scenario-based Optimization Approach for Electric Grid Substation Hardening Against Storm Surge Flooding. In IIE Annual Conference Proceedings, pages 1004-1009, 2021.
[32] Homeland Infrastructure Foundation-Level Data. Electric Substations. Available at https://hifld-geoplatform.opendata.arcgis.com/datasets/geoplatform::substations/about (last accessed on February 13, 2023).
[33] J. R. Birge and F. Louveaux. Introduction to Stochastic Programming. Springer-Verlag, New York, NY, USA, 1997.
[34] ERCOT Resource Adequacy Report. Estimating the Value of Lost Load. Available at https://www.ercot.com/gridinfo/resource/2013/ (last accessed on February 13, 2023), 2013.
A new helioseismic constraint on a cosmic-time variation of G
Alfio Bonanno
INAF
Osservatorio Astrofisico di Catania
via S. Sofia, 7895123CataniaItaly
Hans-Erich Fröhlich
Leibniz Institute for Astrophysics Potsdam (AIP)
An der Sternwarte 1614482PotsdamGermany
10.1140/epjh/e2016-70034-0
Helioseismology can provide strong constraints on the evolution of Newton's constant over cosmic time. We make use of the best possible estimate of 8640 days of low-ℓ BiSON data, corrected for the solar cycle variation, to obtain a new constraint on an evolving gravitational constant. In particular, by means of a Bayesian analysis we conclude that Ġ/G_today = (1.25 ± 0.30) × 10^−13 yr^−1. Our result, a 4-σ effect, is more than one order of magnitude stronger than previous constraints obtained with helioseismology. We also take into account possible systematic effects by considering the theoretical uncertainties on the efficiency of the proton-proton (pp) fusion cross-section. We show that models with variable G significantly outclass models with no secular variation of G, viz. by a Bayes factor exceeding 30.
INTRODUCTION
The idea that the Sun can be considered a laboratory for fundamental physics traces back to the early developments in nuclear physics, by contributing to the understanding of the basic nuclear processes involved in stellar nucleosynthesis. In recent times, accurate measurements of the acoustic p-mode spectrum combined with inversion techniques have further stressed this role [1]. Important examples are the investigation of the equation of state [2], the discovery of neutrino flavour oscillations [3,4], the properties of Dark Matter [5-8], the constraints on axion emission [9,10], the properties of the screening of nuclear reaction rates [11,12], and constraints on physical constants [13].
A fundamental problem that can be tackled by means of helioseismology is that of limiting secular variations of G, a possibility argued long ago by Dirac [14] and Milne [15]. This initial intuition has been further elaborated in [16,17] and is nowadays an important ingredient of various scalar-tensor theories [18], quantum-gravity inspired models of modified gravity [19,20], and string theory low-energy models [21].
In this context a widely used approach to promote the gravitational constant to a dynamical variable is to extend the general relativistic framework, in which gravity is mediated by a massless spin-2 graviton, to include a spin-0 scalar field which couples universally to matter fields. Although the universality of free fall is maintained, theories that predict that the locally measured gravitational constant varies with time often violate the equivalence principle in its strong form. For this reason empirical constraints on Ġ/G_today, where the dot indicates a derivative with respect to the cosmic time t, have been obtained in several contexts [22,23]. Current limits on Ġ/G_today span from Ġ/G_today = (4 ± 9) × 10^{-13} yr^{-1} obtained from the Lunar Laser Ranging (LLR) experiment [24], to -3 × 10^{-13} < Ġ/G_today < 4 × 10^{-13} yr^{-1} from BBN [25], or Ġ/G_today ∼ 10^{-12} yr^{-1} from white dwarfs [26].
Helioseismology is able to provide independent constraints on a possible time evolution of the gravitational constant G over cosmic time because the stellar luminosity L varies as L ∼ G^7 [27]. For example, a monotonically increasing Newton's constant must be compensated for by a systematic decrease of core temperature and a corresponding change in the hydrogen abundance in order to match L_⊙, the solar radius R_⊙ and the metal to hydrogen abundance ratio (Z/X)_⊙. In [28] a direct comparison of low-degree p-modes to GONG data has allowed us to obtain Ġ/G_today ≤ 1.6 × 10^{-12} yr^{-1}, assuming a power-law of the type G(t) ∝ t^{-α}. In this paper we shall present a new limit on Ġ/G_today based on a Bayesian approach which makes use of the definitive "best possible estimate" of 8640 days of low-frequency BiSON data, corrected for the solar cycle modulation [29].
SOLAR MODELS AND MODEL UNCERTAINTIES
In this context it is important to reduce as much as possible any source of systematic uncertainties in the input physics of the calibrated solar models in order to obtain a significant constraint on Ġ/G_today. From this point of view the main problem is clearly our ignorance of the efficiency of the proton-proton (pp) fusion cross-section, for which only theoretical estimates are available. In particular, an uncertainty of ±3% on the value of S_pp(0), the astrophysical S-factor at zero energy, is quoted in [30]. Therefore both Ġ/G_today and S_pp(0) have been estimated from the data in a Bayesian manner.
Our solar models are built using the Catania version of the GARSTEC code [31,32], a fully-implicit 1D code including heavy-elements diffusion and updated input physics. We prescribed the time evolution of the gravitational constant as a power-law [28,33]
$$G(t) = G_0 \left(\frac{t_0}{t}\right)^{\alpha} \qquad (1)$$
where G_0 is the cosmologically recent value of Newton's constant according to 2010 CODATA, so that G_0 = 6.67384 × 10^{-8} cm^3 g^{-1} s^{-2}, and t_0 = 13.7 Gyr is a reference age of the Universe according to most ΛCDM estimates. As G_0 M_⊙ ≡ 1.32712440 × 10^{26} cm^3 s^{-2} [34] is fixed, M_⊙ = 1.98855 × 10^{33} g is assumed. Irwin's equation of state [35] with OPAL opacities for high temperatures [36] and Ferguson's opacities for low temperatures [37] are employed, and the nuclear reaction rates are taken from the compilation in [30]. Our starting models are chemically homogeneous PMS models with log L/L_⊙ = 0.21 and log T_e = 3.638, thus close to the birth line of a 1 M_⊙ object. Initial helium fraction, (Z/X)_⊙ and mixing-length parameter are adjusted to match the solar radius R_⊙ = 6.95613 × 10^{10} cm (based on an average of the two values and quoted error bar in Table 3 of [38]), the solar luminosity L_⊙ = 3.846 × 10^{33} erg s^{-1} [34] and the chemical composition of [39] with (Z/X)_⊙ = 0.0245 at the surface. We also employed the new accurate meteoritic estimate of the solar age of [40], t_⊙ = 4.567 Gyr, a value consistent with the helioseismic solar age [41]. We further noticed that models with the so-called "new abundances", for which (Z/X)_⊙ = 0.0178 [42], would lead to much smaller Bayes factors, and we decided not to discuss these models in this work.
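For the power law (1), the present-day logarithmic derivative is Ġ/G_today = -α/t_0. The following quick consistency check, using the best-fit values quoted in the Results section, reproduces the headline number (the variable names are our own):

```python
# Present-day logarithmic derivative of G for the power law G(t) = G0*(t0/t)**alpha:
# d(ln G)/dt = -alpha/t, evaluated at t = t0.
alpha = -0.0017          # mean of alpha's marginal posterior distribution
t0_yr = 13.7e9           # reference age of the Universe, in years

gdot_over_g = -alpha / t0_yr
print(f"Gdot/G today = {gdot_over_g:.3e} per year")
```

This yields about 1.24 × 10^{-13} yr^{-1}, consistent with the quoted (1.25 ± 0.30) × 10^{-13} yr^{-1}.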
In order to define a proper seismic diagnostic we adopted a widely used approach: if ν_{n,ℓ} is the frequency of the mode of radial order n and angular degree ℓ, the frequency separation ratios

$$r_{\ell,\ell+2}(n) = \frac{\nu_{n,\ell} - \nu_{n-1,\ell+2}}{\nu_{n,\ell+1} - \nu_{n-1,\ell+1}} \qquad (2)$$

can be shown to be localized near the core and weakly dependent on the complex physics of the outer layers [43,44]. In particular, in the limit n ≫ 1,

$$r_{\ell,\ell+2}(n) \approx -(4\ell+6)\,\frac{1}{4\pi^2 \nu_{n,\ell}} \int_0^{R_\odot} \frac{dc_s}{dR}\,\frac{dR}{R}, \qquad (3)$$

so that a change in temperature (T) and mean molecular weight (μ) directly impacts the r_{ℓ,ℓ+2}(n) terms, since δc_s/c_s ≈ (1/2) δT/T - (1/2) δμ/μ.
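The ratios in Eq. (2) are straightforward to evaluate from a frequency table. The sketch below uses hypothetical frequencies following a simple asymptotic form ν_{n,ℓ} ≈ Δν(n + ℓ/2) - δ ℓ(ℓ+1), not real BiSON values; the constants are illustrative only:

```python
# Frequency separation ratios r_{l,l+2}(n), Eq. (2), on a toy frequency table.
DNU, DELTA = 135.0, 1.5   # large separation and small-separation coefficient (illustrative)

def nu(n, l):
    """Hypothetical p-mode frequency (muHz) from a toy asymptotic relation."""
    return DNU * (n + l / 2.0) - DELTA * l * (l + 1)

def r(l, n):
    """r_{l,l+2}(n) = (nu_{n,l} - nu_{n-1,l+2}) / (nu_{n,l+1} - nu_{n-1,l+1})."""
    return (nu(n, l) - nu(n - 1, l + 2)) / (nu(n, l + 1) - nu(n - 1, l + 1))

print(r(0, 20))
```

For this toy relation the ratio is independent of n, illustrating why r_{0,2}(n) is a clean core diagnostic.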
BAYESIAN APPROACH
We consider the following two-dimensional parameter space: -0.1 ≤ α ≤ 0.1 and 0.97 ≤ S/S_pp(0) ≤ 1.03. The proposed α range generously covers all previous Ġ/G_today limits obtained by independent methods [33]. Moreover, the S interval 0.97-1.03 allows for an up to ±3% deviation from the recommended value S_pp(0) = (4.01 ± 0.04) × 10^{-22} keV b in [30].
Central to the Bayesian hypothesis testing is the likelihood. In the following, a Gaussian has been assumed,
$$\mathcal{L}(\alpha, S) = \prod_{i=1}^{N=17} \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\left[-\frac{(d_i - m_i(\alpha, S))^2}{2\sigma_i^2}\right], \qquad (4)$$
where d_i = r_{02}(n) are the observed data (n = i + 8, i = 1, . . . , N, N = 17), m_i the theoretical model values, and σ_i the errors (see also [41] for an application of this likelihood to the helioseismic determination of the solar age). All 17 contributions enter the likelihood with the same weight.
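In log form, Eq. (4) is a sum of independent Gaussian terms, which is how one would implement it numerically. The data and model values below are synthetic placeholders, not the BiSON ratios:

```python
import math

def log_likelihood(d, m, sigma):
    """Log of Eq. (4): sum of independent Gaussian log-density terms."""
    return sum(-0.5 * ((di - mi) / si) ** 2 - math.log(math.sqrt(2 * math.pi) * si)
               for di, mi, si in zip(d, m, sigma))

# Synthetic example with N = 17 points, as in the paper.
N = 17
d = [0.07 + 0.001 * i for i in range(N)]        # "observed" ratios (made up)
sigma = [0.002] * N

best = log_likelihood(d, d, sigma)               # model == data: maximal likelihood
off = log_likelihood(d, [x + 0.001 for x in d], sigma)
print(best > off)   # any residual lowers the likelihood
```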
The posterior probability distribution is the likelihood (4) weighted with a prior distribution. Obviously, this prior distribution should be flat with respect to α. Concerning S we decided to take a conservative point of view, i.e. to assume that nothing is known about S_pp(0). In that case we are on the safe side, and the only eligible prior distribution is one that is flat over the logarithm, log(S).
In the end two hypotheses have to be compared: H_1 = H_1(-0.1 ≤ α ≤ 0.1, 0.97 ≤ S/S_pp(0) ≤ 1.03) vs. our zero hypothesis H_0(α = 0, 0.97 ≤ S/S_pp(0) ≤ 1.03).
RESULTS
The posterior probability distribution is indistinguishable from a two-dimensional Gaussian (Fig. 1). The reason is that the theoretical models m_i(α, log(S)) are linearly dependent on both α and log(S) (cf. [45]), as we checked in all our models. From α's marginal distribution one reads off its mean value and standard deviation: α = -0.0017 ± 0.0004. Formally, this is a 4-σ effect. With t_0 = 13.7 Gyr this translates to Ġ/G_today = (1.25 ± 0.30) × 10^{-13} yr^{-1}. As a by-product one gets log(S/S_pp(0)) = 0.011 ± 0.008 and a correlation coefficient of -0.62. An enhanced S goes with a reduced α. However, the indicated slight enhancement of the Adelberger et al. [30] pp cross-section by 1% proves insignificant. Our result is one order of magnitude stronger than the limit obtained in [28] and comparable in precision to those obtained with LLR [24] or BBN [25].
Integrating the posterior over the whole parameter space or subsections of it, respectively, one gets the required evidences. The evidence in favour of a hypothesis is the prior-weighted mean of the likelihood over parameter space. The ratio of the evidences, E(H_1)/E(H_0), the so-called Bayes factor, amounts to 34.0. (If one trusts the relative error in the recommended S_pp(0) and applies the appropriate Gaussian prior, this Bayes factor would increase to 51.1.) Despite having one parameter more, the α ≠ 0 hypothesis significantly outclasses the zero hypothesis, i.e. no secular variation of G, provided the S factor is the sole and decisive unknown.
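The evidence ratio can be reproduced on a toy problem by direct numerical integration of the likelihood over the prior range, mirroring the procedure just described. Everything below (the linear model, the synthetic data, the prior range) is an illustrative stand-in for the real solar-model computation:

```python
import math

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
data = [0.05 * x for x in xs]        # generated with a small nonzero slope
sigma = 0.1

def likelihood(a):
    """Gaussian likelihood of the toy model m_i = a * x_i, cf. Eq. (4)."""
    return math.exp(sum(-(d - a * x) ** 2 / (2 * sigma ** 2)
                        - math.log(math.sqrt(2 * math.pi) * sigma)
                        for x, d in zip(xs, data)))

# Evidence under H1: likelihood averaged over a flat prior a in [-1, 1] (midpoint rule).
n = 4000
da = 2.0 / n
E1 = sum(likelihood(-1.0 + (i + 0.5) * da) for i in range(n)) * da / 2.0
E0 = likelihood(0.0)                 # evidence under H0: a = 0 (no free parameter)
print(f"Bayes factor E(H1)/E(H0) = {E1 / E0:.1f}")
```

Because the data were generated with a nonzero slope, the extended hypothesis wins despite its Occam penalty from averaging over the prior.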
FIG. 1. Two-dimensional posterior probability distribution. The closed contour with a probability density of 10.3 per cent of the peak density comprises 90 per cent of the total probability. The hatched strip marks the 68.3-per-cent (±1-σ) interval of α's marginal distribution.
Acknowledgements.-We acknowledge L. Santagati for careful reading of the manuscript.
[1] S. Basu, Living Reviews in Solar Physics 13, 2 (2016), arXiv:1606.07071 [astro-ph.SR].
[2] S. Basu, W. Däppen, and A. Nayfonov, ApJ 518, 985 (1999), astro-ph/9810132.
[3] Y. Fukuda et al. (Super-Kamiokande), Phys. Rev. Lett. 81, 1562 (1998), arXiv:hep-ex/9807003 [hep-ex].
[4] Q. R. Ahmad et al. (SNO), Phys. Rev. Lett. 89, 011301 (2002), arXiv:nucl-ex/0204008 [nucl-ex].
[5] I. P. Lopes, J. Silk, and S. H. Hansen, MNRAS 331, 361 (2002), astro-ph/0111530.
[6] I. Lopes, P. Panci, and J. Silk, ApJ 795, 162 (2014), arXiv:1402.0682 [astro-ph.SR].
[7] I. Lopes and J. Silk, ApJ 757, 130 (2012), arXiv:1209.3631 [astro-ph.SR].
[8] A. C. Vincent, P. Scott, and A. Serenelli, Physical Review Letters 114, 081302 (2015), arXiv:1411.6626 [hep-ph].
[9] H. Schlattl, A. Weiss, and G. Raffelt, Astroparticle Physics 10, 353 (1999), hep-ph/9807476.
[10] N. Vinyoles, A. Serenelli, F. L. Villante, S. Basu, J. Redondo, and J. Isern, JCAP 10, 015 (2015), arXiv:1501.01639 [astro-ph.SR].
[11] G. Fiorentini, B. Ricci, and F. L. Villante, Physics Letters B 503, 121 (2001), astro-ph/0011130.
[12] A. Weiss, M. Flaskamp, and V. N. Tsytovich, A&A 371, 1123 (2001), astro-ph/0102353.
[13] J. Christensen-Dalsgaard, M. P. Di Mauro, H. Schlattl, and A. Weiss, MNRAS 356, 587 (2005).
[14] P. A. M. Dirac, Proceedings of the Royal Society of London Series A 165, 199 (1938).
[15] E. A. Milne, Nature 139, 409 (1937).
[16] C. Brans and R. H. Dicke, Physical Review 124, 925 (1961).
[17] P. G. Bergmann, International Journal of Theoretical Physics 1, 25 (1968).
[18] Y. Fujii and K. Maeda, The Scalar-Tensor Theory of Gravitation, Cambridge Monographs on Mathematical Physics (Cambridge University Press, 2003).
[19] A. Bonanno, G. Esposito, and C. Rubano, Classical and Quantum Gravity 21, 5005 (2004), gr-qc/0403115.
[20] L. Smolin, Class. Quant. Grav. 33, 025011 (2016), arXiv:1507.01229 [hep-th].
[21] M. Gasperini, in String Theory and Fundamental Interactions, Lecture Notes in Physics Vol. 737, edited by M. Gasperini and J. Maharana (Springer, Berlin, 2008), p. 787, hep-th/0702166.
[22] J.-P. Uzan, Living Reviews in Relativity 14, 2 (2011), arXiv:1009.5514.
[23] P. J. Edwin Peebles, European Physical Journal H (2016), 10.1140/epjh/e2016-70034-0, arXiv:1603.06474.
[24] J. G. Williams, S. G. Turyshev, and D. H. Boggs, Physical Review Letters 93, 261101 (2004), gr-qc/0411113.
[25] C. J. Copi, A. N. Davis, and L. M. Krauss, Physical Review Letters 92, 171301 (2004), astro-ph/0311334.
[26] E. García-Berro, P. Lorén-Aguilar, S. Torres, L. G. Althaus, and J. Isern, JCAP 5, 021 (2011), arXiv:1105.1992 [gr-qc].
[27] S. degl'Innocenti, G. Fiorentini, G. G. Raffelt, B. Ricci, and A. Weiss, A&A 312, 345 (1996), astro-ph/9509090.
[28] D. B. Guenther, L. M. Krauss, and P. Demarque, ApJ 498, 871 (1998).
[29] A.-M. Broomhall, W. J. Chaplin, G. R. Davies, Y. Elsworth, S. T. Fletcher, S. J. Hale, B. Miller, and R. New, MNRAS 396, L100 (2009), arXiv:0903.5219 [astro-ph.SR].
[30] E. G. Adelberger et al., Rev. Mod. Phys. 83, 195 (2011), arXiv:1004.2318 [nucl-ex].
[31] A. Bonanno, H. Schlattl, and L. Paternò, A&A 390, 1115 (2002), astro-ph/0204331.
[32] A. Weiss and H. Schlattl, Ap&SS 316, 99 (2008).
[33] J.-P. Uzan, Reviews of Modern Physics 75, 403 (2003), hep-ph/0205340.
[34] A. N. Cox, Allen's Astrophysical Quantities (Springer, 2000).
[35] S. Cassisi, M. Salaris, and A. W. Irwin, ApJ 588, 862 (2003), astro-ph/0301378.
[36] C. A. Iglesias and F. J. Rogers, ApJ 464, 943 (1996).
[37] J. W. Ferguson, D. R. Alexander, F. Allard, T. Barman, J. G. Bodnarik, P. H. Hauschildt, A. Heffner-Wong, and A. Tamanai, ApJ 623, 585 (2005), astro-ph/0502045.
[38] M. Haberreiter, W. Schmutz, and A. G. Kosovichev, ApJ 675, L53 (2008).
[39] N. Grevesse and A. Noels, in Origin and Evolution of the Elements, Conference Series Vol. 245, edited by N. Prantzos, E. Vangioni-Flam, and M. Casse (1993), p. 14.
[40] J. N. Connelly, M. Bizzarro, A. N. Krot, Å. Nordlund, D. Wielandt, and M. A. Ivanova, Science 338, 651 (2012).
[41] A. Bonanno and H.-E. Fröhlich, A&A 580, A130 (2015), arXiv:1507.05847 [astro-ph.SR].
[42] M. Asplund, N. Grevesse, A. J. Sauval, and P. Scott, ARA&A 47, 481 (2009), arXiv:0909.0948 [astro-ph.SR].
[43] I. W. Roxburgh and S. V. Vorontsov, A&A 411, 215 (2003).
[44] H. Otí Floranes, J. Christensen-Dalsgaard, and M. J. Thompson, MNRAS 356, 671 (2005).
[45] A. O'Hagan and J. J. Forster, Kendall's Advanced Theory of Statistics, Volume 2B: Bayesian Inference, second edition (2004).
Intersection Bodies of Polytopes: Translations and Convexity
Marie-Charlotte Brandenburg
Chiara Meroni
We continue the study of intersection bodies of polytopes, focusing on the behavior of IP under translations of P. We introduce an affine hyperplane arrangement and show that the polynomials describing the boundary of I(P + t) can be extended to polynomials in variables t ∈ R^d within each region of the arrangement. Establishing the convexity space as the set of translations such that I(P + t) is convex, we fully characterize it for two-dimensional polytopes and partially characterize it for higher dimensions, revealing unexpected finite behavior in the two-dimensional case and for the d-dimensional cube.
Introduction
In the field of convex geometry, intersection bodies have been widely studied from an analytical viewpoint, and mainly in the context of volume inequalities. Originally introduced by Lutwak [Lut88], they have played a significant role in solving the Busemann-Petty problem, which asks to compare the volume of two convex bodies based on the volumes of their linear sections [Gar94a; Gar94b; Kol98; GKS99; Zha99]. Unlike its more famous counterpart, the projection body, the intersection body IK of a star body K is not invariant under affine translation. Furthermore, an intersection body can be both convex and non-convex. Convexity is certified by Busemann's theorem [Bus49], which states that IK is convex if K is a convex body centered at the origin (i.e., K is centrally symmetric, where the center of symmetry is the origin), and this statement has been generalized to L_p-intersection bodies [Ber09]. On the other hand, given a convex body K ⊆ R^d, there always exists some t ∈ R^d such that I(K + t) is not convex [Gar06, Thm. 8.1.8].
The occurrence of non-convex intersection bodies has motivated considerations of various measures for capturing the magnitude of their non-convexity, leading to the study of p-convexity of intersection bodies both over the complex numbers and over the reals [KYZ11; HHW12]. Another direction of research concerns an adaptation of the construction of intersection bodies in order to guarantee convexity, which results in the notion of convex intersection bodies [MR11; Ste16]. A different relative of intersection bodies is the cross-section body [Mar92; Mar94]; however, this starshaped set turned out to be non-convex as well in the general case [Bre99]. Summarizing, many of the positive results towards convexity in all these works concern intersection bodies of centrally symmetric star bodies. In contrast, we focus on affine translates, and consider objects which are not necessarily centrally symmetric.
The goal of this article is to investigate the behavior of intersection bodies of polytopes under translations, and to determine under which translations the intersection body is convex. In our previous work [BBMS22] we exhibit rich semialgebraic structures of intersection bodies of polytopes. However, in general, the intersection body IP of a polytope P is not a basic semialgebraic set, and there exists a central hyperplane arrangement which describes the regions in which the topological boundary of IP is defined by a fixed polynomial. Taking advantage of these combinatorial and semialgebraic structures opens up new possibilities to study the question of convexity in the present work. In particular, exploiting this semialgebraicity, we are able to characterize convexity by using elementary geometric arguments.
In this article we introduce an affine hyperplane arrangement associated to a fixed polytope P . We prove that for translation vectors t ∈ R d within a region of this arrangement the polynomials defining the boundary of I(P + t) can be extended to polynomials in t 1 , . . . , t d (Theorem 3.5). Establishing the convexity space CS(P ), namely all those translation vectors t such that I(P +t) is convex, we give a full characterization of the convexity space in dimension 2, and a partial characterization for general dimensions. Surprisingly, it turns out that the convexity space of two-dimensional polytopes is always a finite set, and we exhibit the same behavior in higher dimensions for the d-dimensional cube. In particular, this implies that the convexity space is itself non-convex. For higher dimensions d > 2, the convexity space of a polytope may be infinite an even full-dimensional. We summarize our results as follows.
Results. Let CS(P ) = {t ∈ R d | I(P + t) is convex} be the convexity space of P .
(i ) If d = 2 then CS(P ) is finite and consists of at most 5 points.
(ii ) If P = [−1, 1] d is a cube, then CS(P ) is finite and consists of precisely 2d + 1 points.
(iii ) If IP is strictly convex then CS(P ) contains a full-dimensional open ball.
A full classification of the 2-dimensional case is given in Corollary 4.8, and the remaining statements can be found in Proposition 5.4 and Remark 5.5. An example of a strictly convex intersection body is given in Example 5.6.
Overview. The article is structured as follows. In Section 2 we review the main concepts and notation from [BBMS22]. In Section 3 we introduce an affine hyperplane arrangement and describe how it governs the behavior of IP under translation of P . We then turn to the characterization of the convexity space CS(P ), where Section 4 concerns the 2-dimensional case, and Section 5 the case of general dimensions.
Preliminaries

Let P ⊆ R^d be a convex polytope. The intersection body IP of P is the starshaped set

$$IP = \{x \in \mathbb{R}^d \mid \rho_{IP}(x) \geq 1\},$$

where the radial function ρ_IP : R^d → R of IP is

$$\rho_{IP}(x) = \frac{1}{\|x\|}\,\mathrm{vol}_{d-1}(P \cap x^\perp).$$

Here, vol_{d-1} denotes the (d - 1)-dimensional Euclidean volume, and x^⊥ ⊆ R^d denotes the linear hyperplane which is orthogonal to x ∈ R^d, namely the set

$$x^\perp = \{y \in \mathbb{R}^d \mid \langle x, y\rangle = 0\}.$$
To obtain meaningful results, we may thus assume that P ⊆ R d is a d-dimensional polytope throughout this article. The topological boundary of the intersection body IP is defined by the equation
$$\partial IP = \{x \in \mathbb{R}^d \mid \rho_{IP}(x) = 1\}.$$

Since the radial function satisfies ρ_IP(λx) = (1/λ) ρ_IP(x) for every λ > 0, it is completely determined by its restriction to the unit sphere.
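For d = 2 the radial function is easy to compute directly: vol_1(P ∩ x^⊥) is the length of the chord that the line through the origin orthogonal to x cuts out of P. The sketch below evaluates ρ_IP for the square P = [-1, 1]^2 (our choice of example polytope, not one fixed by the text):

```python
import math

def rho_IP_square(x, half=1.0):
    """Radial function of IP for P = [-half, half]^2:
    rho(x) = vol_1(P ∩ x^perp) / ||x||."""
    nx = math.hypot(x[0], x[1])
    u = (-x[1] / nx, x[0] / nx)          # unit direction spanning x^perp
    lo, hi = -math.inf, math.inf
    for ui in u:                          # clip the line {s*u} against the box
        if abs(ui) > 1e-15:
            a, b = sorted((-half / ui, half / ui))
            lo, hi = max(lo, a), min(hi, b)
    chord = max(0.0, hi - lo)
    return chord / nx

print(rho_IP_square((1.0, 0.0)))   # chord is the segment {0} x [-1, 1]
```

One can also check the homogeneity ρ_IP(λx) = (1/λ)ρ_IP(x) numerically, e.g. ρ((2,0)) = ρ((1,0))/2.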
The intersection body IP of a polytope is governed by the central hyperplane arrangement

$$\mathcal{H}(P) = \bigcup_{\substack{v \neq 0 \text{ is a vertex of } P}} v^\perp.$$
We denote the set of vertices of P by vert(P ), and the origin is denoted by 0 ∈ R d . An open chamber C of H(P ) is a connected component of R d \ H(P ). Given such a chamber C, all hyperplanes x ⊥ for x ∈ C intersect P in the interiors of a fixed set of edges. The radial function restricted to such a chamber is a quotient of polynomials
$$\rho_{IP}|_C = \frac{p_C}{\|x\|^2\, q_C}, \qquad (1)$$

where p_C is divisible by ‖x‖^2. Therefore, the topological boundary ∂IP ∩ C is the zero-set of the (irreducible) polynomial p_C/‖x‖^2 - q_C. We repeat a key argument in the proof of (1). Let x ∈ C and Q = P ∩ x^⊥. The value ρ_IP(x) is, up to the factor 1/‖x‖, the volume of Q. This computation is done by considering a triangulation T of the boundary of Q. We extend this to a covering of conv(Q, 0) by considering the set conv(Δ, 0) for every simplex Δ ∈ T with 0 ∉ Δ. Note that if 0 ∈ P, then this induces a central triangulation of Q. Denoting by v_1, . . . , v_{d-1} the vertices of a simplex Δ ∈ T, the volume of conv(Δ, 0) = conv(v_1, . . . , v_{d-1}, 0) is, up to a constant scaling factor, given by the determinant of the matrix

$$M_\Delta(x) = \begin{pmatrix} \dfrac{\langle b_{i_1}, x\rangle\, a_{i_1} - \langle a_{i_1}, x\rangle\, b_{i_1}}{\langle b_{i_1} - a_{i_1}, x\rangle} \\ \vdots \\ \dfrac{\langle b_{i_{d-1}}, x\rangle\, a_{i_{d-1}} - \langle a_{i_{d-1}}, x\rangle\, b_{i_{d-1}}}{\langle b_{i_{d-1}} - a_{i_{d-1}}, x\rangle} \\ x \end{pmatrix},$$
where the vertices v i arise as intersection of x ⊥ with edges of P , i.e., v i = conv(a i , b i ) ∩ x ⊥ for a i , b i ∈ vert(P ). Assigning sgn(∆) ∈ {−1, 1} to each simplex, this gives
$$\rho_{IP}(x) = \frac{1}{\|x\|}\,\mathrm{vol}_{d-1}(Q) = \frac{1}{\|x\|^2\,(d-1)!} \sum_{\Delta \in T} \mathrm{sgn}(\Delta)\, \det(M_\Delta(x)).$$
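The determinant formula can be checked numerically in the plane. There T consists of the two endpoints of the segment Q, each matrix M_Δ is 2 × 2, and below we take absolute values of the determinants in place of tracking the orientation signs sgn(Δ); the square P = [-1, 1]^2 is again our illustrative choice:

```python
import math

# Edges of P = [-1,1]^2 as vertex pairs (a, b).
edges = [((-1, -1), (1, -1)), ((1, -1), (1, 1)), ((1, 1), (-1, 1)), ((-1, 1), (-1, -1))]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def vol_via_determinants(x):
    """vol_1(P ∩ x^perp) = (1/(||x|| (d-1)!)) * sum |det M_Delta|, with d = 2."""
    total = 0.0
    for a, b in edges:
        da, db = dot(a, x), dot(b, x)
        if da * db < 0:                                   # edge crosses x^perp
            denom = dot((b[0] - a[0], b[1] - a[1]), x)
            row = ((db * a[0] - da * b[0]) / denom,       # the point conv(a,b) ∩ x^perp
                   (db * a[1] - da * b[1]) / denom)
            total += abs(row[0] * x[1] - row[1] * x[0])   # |det| of the matrix (row; x)
    return total / math.hypot(x[0], x[1])

print(vol_via_determinants((2.0, 1.0)))
```

For x = (2, 1) the section is the segment from (-0.5, 1) to (0.5, -1), of length √5, which the determinant sum reproduces.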
Translations and Affine Hyperplane Arrangements
Let P ⊆ R d be a polytope. In this section we consider how the intersection body of P + t transforms under variation of t ∈ R d . Recall from Section 2 that the combinatorial structure of the boundary of I(P + t) is described by the central hyperplane arrangement H(P + t). We thus begin by studying the behavior of this hyperplane arrangement under translation of P . For this, we introduce a new affine hyperplane arrangement L (P ), which captures the essence of H(P + t) under variation of t. We show that within a region R of L (P ) the polynomials describing the boundary of I(P + t), for t ∈ R, can be extended to polynomials in the variables t 1 , . . . , t d .
Let P ⊆ R^d be a polytope and let vert(P) be the set of its vertices. Denote by H_v = v^⊥ ⊆ R^d the hyperplane through the origin that is orthogonal to a vertex v ∈ vert(P). As described in the previous section, the collection of all such hyperplanes forms a central hyperplane arrangement H(P) in R^d. For each such hyperplane we define its positive and negative side as

$$H_v^+ = \{x \in \mathbb{R}^d \mid \langle x, v\rangle > 0\} \quad \text{and} \quad H_v^- = \{x \in \mathbb{R}^d \mid \langle x, v\rangle < 0\}.$$
We now choose a translation vector t ∈ R^d and consider the vertices {v + t | v ∈ vert(P)} of the translated polytope P + t. The hyperplane arrangement H(P + t) is given by the hyperplanes (v + t)^⊥, where v ranges over the vertices of P. The hyperplane H_{v+t} can be obtained from H_v by a rotation r_{v,t} : R^d → R^d such that r_{v,t}(v/‖v‖) = (v + t)/‖v + t‖, and thus r_{v,t}(H_v) = H_{v+t}, r_{v,t}(H_v^+) = H_{v+t}^+ and r_{v,t}(H_v^-) = H_{v+t}^-.
We label each maximal chamber C of H(P + t) with a sign vector s(C) ∈ {+, -}^{vert(P+t)} indexed by the vertices w = v + t of P + t, where

$$s(C)_w = + \ \text{ if } C \subseteq H_w^+, \qquad s(C)_w = - \ \text{ if } C \subseteq H_w^-.$$
The set {s(C) | C maximal chamber of H(P + t)} describes the chirotope or signed cocircuits of the underlying oriented matroid of the hyperplane arrangement [GOT18, Chapter 6.2.3].
Example 3.1. Let P = conv(v_1, v_2, v_3) be the triangle with vertices v_1 = (0, 1), v_2 = (-1, -1), v_3 = (1, -1). Figure 1 shows the hyperplane arrangements H(P + t) for t_0 = 0, t_1 = (0, 2), and t_2 = (0, -2). Note that the underlying oriented matroids of H(P + t) for t = t_1 and t = t_2 are the same. We continue with this in Example 3.3.

Figure 1: The hyperplane arrangements of P + t from Example 3.1. [figure: the three line arrangements H(P), H(P + t_1), H(P + t_2) with lines H_1, H_2, H_3 and chambers labeled by their sign vectors]
We begin by showing that the signed cocircuit s(C) of a chamber C fully determines the set of edges of P which are intersected by x ⊥ for any x ∈ C.
Lemma 3.2. Let P ⊆ R^d be a polytope and let t ∈ R^d. Let C be a maximal open chamber of H(P), and C_t be a maximal open chamber of H(P + t) such that s(C) = s(C_t), i.e., their signed cocircuits agree. Given x ∈ C, x_t ∈ C_t, consider

E = {e ⊆ P | e is an edge of P, x^⊥ ∩ e ≠ ∅},
E_t = {e_t ⊆ P + t | e_t is an edge of P + t, x_t^⊥ ∩ e_t ≠ ∅}.

Then E_t = {e + t | e ∈ E}.

Proof. Let e = conv(v_1, v_2) ∈ E be an edge of P. Since x^⊥ ∩ e ≠ ∅, we have that v_1, v_2 lie on different sides of x^⊥. Equivalently, we have s(C)_{v_1} = -s(C)_{v_2}, and without loss of generality s(C)_{v_1} = +. Thus, x ∈ H_{v_1}^+ ∩ H_{v_2}^-. Since H(P + t) is obtained from H(P) by rotating the hyperplanes individually, and s(C) = s(C_t), it follows that x_t ∈ H_{v_1+t}^+ ∩ H_{v_2+t}^-.
Since e + t is an edge of P + t if and only if e is an edge of P , the claim follows.
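Lemma 3.2 can be probed computationally: the sign vector of the chamber containing x already determines which edges x^⊥ meets. The check below uses the triangle of Example 3.1 and a sample direction x = (2, 1) (our choice, assumed to avoid all hyperplanes v^⊥):

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

verts = [(0, 1), (-1, -1), (1, -1)]     # v1, v2, v3 of Example 3.1
edges = [(0, 1), (1, 2), (0, 2)]        # index pairs of the triangle's edges
x = (2, 1)

# Signed cocircuit of the chamber containing x.
s = ['+' if dot(x, v) > 0 else '-' for v in verts]

# Edges predicted to meet x^perp: endpoints with opposite signs.
predicted = {e for e in edges if s[e[0]] != s[e[1]]}

# Direct geometric check: solve <x, (1-lam)a + lam*b> = 0 and test lam in (0, 1).
direct = set()
for e in edges:
    a, b = verts[e[0]], verts[e[1]]
    da, db = dot(x, a), dot(x, b)
    if da != db:
        lam = da / (da - db)
        if 0 < lam < 1:
            direct.add(e)

print(s, sorted(predicted))
```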
We consider the affine hyperplane arrangement

$$\mathscr{L}(P) = \{\mathrm{aff}(-v_1, \ldots, -v_d) \mid v_1, \ldots, v_d \text{ are affinely independent vertices of } P\},$$

where aff(-v_1, . . . , -v_d) denotes the unique affine hyperplane containing the points -v_1, . . . , -v_d. An open region R of L(P) is a connected component of R^d \ L(P). We emphasize that there are two hyperplane arrangements in R^d which we consider simultaneously. We have the central hyperplane arrangement H(P + t), which depends on the choice of t, and subdivides R^d into open d-dimensional cones, which we call chambers of H(P + t). On the other hand, we have the affine hyperplane arrangement L(P), which subdivides R^d into open d-dimensional components, which we call regions of L(P). Note that L(P + t) = L(P) - t by construction.
Example 3.3. Let P be the triangle from Example 3.1. The affine line arrangement L (P ) is shown in Figure 2. Note that the translation vectors t = t 0 , t 1 , t 2 all lie in different regions of the arrangement, despite the fact that the signed cocircuits of H(P + t 1 ) and H(P + t 2 ) agree, as displayed in Figure 1.
In the following we show that L (P ) captures the characteristics of H(P + t) under variation of t. More precisely, we show that within a region R of L (P ) the polynomials describing the boundary of I(P + t), for t ∈ R, can be extended to polynomials in t 1 , . . . , t d .
Figure 2: The affine line arrangement L(P) of the triangle P from Example 3.1. [figure: the lines through the points -v_1, -v_2, -v_3 and the translation vectors t_0, t_1, t_2 in distinct regions]

Proposition 3.4. Let P ⊆ R^d be a polytope and R be an open region of L(P). Then the set of signed cocircuits of H(P + t) is fixed for all t ∈ R.

Proof. Let v_1, . . . , v_d be affinely independent vertices of P. By construction of L(P), R does not intersect A = aff(-v_1, . . . , -v_d), i.e., R is strictly contained in one side of this hyperplane. Without loss of generality, we assume R ⊆ A^+. The points w_k = v_k + t, for k = 1, . . . , d, are linearly independent vertices of P + t for all t ∈ R^d \ A. Hence, the subarrangement of H(P + t) consisting of the hyperplanes w_1^⊥, . . . , w_d^⊥ is a simplicial arrangement which dissects R^d into 2^d open chambers, where each chamber is the image of an orthant of R^d under the linear map f defined by e_i → w_i for all i = 1, . . . , d. Note that the signed cocircuits are fixed for every t ∈ A^+. We now consider H(P + t) as the common refinement of all subarrangements formed by d hyperplanes with linearly independent normals. The signed cocircuit of a chamber of H(P + t) is uniquely determined by the signed cocircuits of all subarrangements, and the cocircuits of the subarrangements are fixed for all t ∈ R. Thus, the cocircuits of H(P + t) are fixed for all t ∈ R.
Theorem 3.5. Let R be an open region of L (P ), t ∈ R, and let C t be an open chamber of H(P + t). Then the radial function ρ I(P +t) | Ct of I(P + t) restricted to the chamber C t and for t ∈ R is a polynomial in the variables t 1 , . . . , t d of degree at most d − 1.
Proof. By Proposition 3.4, for a fixed region R the set of signed cocircuits of H(P +t) is fixed. Lemma 3.2 then implies that given a region R, t ∈ R, and a chamber C t of H(P + t), for any vector x ∈ C t the set of edges of P + t which intersect x ⊥ is fixed. Let Q = P ∩ x ⊥ , for a certain x ∈ C t , and let T be a triangulation of ∂Q, as explained in Section 2. Let ∆ ∈ T be a maximal simplex with vertices v 1 , . . . v d−1 such that 0 ∈ ∆ and, for each i = 1, . . . , d − 1,
let a i , b i ∈ vert(P ) such that v i = conv(a i + t, b i + t) ∩ x ⊥ . The volume of the d-dimensional simplex conv(∆, 0) is, up to a multiplicative factor of ±1 x (d−1)! , the determinant of the matrix M ∆ (x, t) = b 1 +t,x (a 1 +t)− a 1 +t,x (b 1 +t) b 1 −a 1 ,x . . . b d−1 +t,x (a d−1 +t)− a d−1 +t,x (b d−1 +t) b d−1 −a d−1 ,x x = M ∆ (x, 0) + t,x (a 1 −b 1 ) b 1 −a 1 ,x + t . . . t,x (a d−1 −b d−1 ) b d−1 −a d−1 ,x + t 0 .
The determinant of this matrix is a polynomial in the variables t 1 , . . . , t d of degree at most d − 1. Since the volume of Q can be computed as
\[
\operatorname{vol}(Q) = \frac{1}{\|x\|\,(d-1)!} \sum_{\Delta\in T} \operatorname{sgn}(\Delta)\,\det\bigl(M_\Delta(x,t)\bigr),
\]
the claim follows.
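The degree bound of Theorem 3.5 can be checked numerically in the simplest case d = 2, where the radial function of I(P + t) in a fixed direction is the length of the chord (P + t) ∩ x⊥. The following Python/NumPy sketch (not part of the paper; the square, the direction, and the step sizes are arbitrary choices) samples the chord length along a short segment of translations inside one region and confirms that it is affine in t, i.e. of degree at most d − 1 = 1.

```python
import numpy as np

def chord_length(t, x):
    """Length of (P + t) ∩ x⊥ for the square P = [-1, 1]^2.

    The line x⊥ is parametrized as s * v with v = (-x2, x1)/|x|.
    Each coordinate constraint s*v_i ∈ [t_i - 1, t_i + 1] is an interval
    in s; the chord is the intersection of these intervals.
    """
    v = np.array([-x[1], x[0]]) / np.hypot(x[0], x[1])
    lo, hi = -np.inf, np.inf
    for vi, ti in zip(v, t):
        if abs(vi) < 1e-12:          # constraint does not bound s
            if abs(ti) > 1:          # ... or the line misses the slab
                return 0.0
            continue
        a, b = sorted(((ti - 1) / vi, (ti + 1) / vi))
        lo, hi = max(lo, a), min(hi, b)
    return max(hi - lo, 0.0)

# Fix a direction x and move t along a short segment inside one region of
# the arrangement L(P); the chord length should be affine in t there.
x = np.array([1.0, 1.0])
t0, dt = np.array([0.3, 0.1]), np.array([0.02, 0.01])
vals = [chord_length(t0 + u * dt, x) for u in (0.0, 1.0, 2.0)]
second_difference = vals[0] - 2 * vals[1] + vals[2]
print(abs(second_difference) < 1e-9)   # affine (degree <= 1) in t
```

For this direction the two chord endpoints lie on non-parallel edges, so the chord length genuinely varies with t, yet its second difference along the segment vanishes.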
Example 3.6. Figure 3 shows the continuous deformation of the intersection body I(P + t) of the unit square P = [−1, 1] 2 under translation by t ∈ R 2 within each bounded region of the affine line arrangement L (P ).
Convexity in Dimension 2
For each fixed region R of the affine line arrangement L (P ), Theorem 3.5 implies that, as we move t ∈ R continuously, the intersection body I(P + t) deforms continuously as well. We now characterize under which circumstances the intersection body of a polygon is convex. Note that IP cannot be convex if the origin lies outside of P or is a vertex of P (the argument for general dimensions will be given in Remark 5.1). We thus consider the distinct cases of when the origin lies in the interior of P , and when the origin lies in the interior of an edge. Figure 3 indicates that in the case of the square, the intersection body of P + t is convex for precisely 5 translation vectors: the center of symmetry, as well as the midpoints of the four edges. In Theorem 4.6 we show that the number of such translation vectors is always finite, and that parallelograms maximize this number.
Although in this section we focus on 2-dimensional polytopes, we make the following definitions for polytopes of general dimensions.
Definition 4.1. Let P ⊆ R d be a polytope. The convexity space of P is the set
CS(P ) = {t ∈ R d | I(P + t) is convex}.
The goal of this section is to give a characterization of the convexity space of a polygon. In the following Propositions 4.2 and 4.3 we consider polygons with the origin in the interior, and characterize the geometry of the boundary of IP . More precisely, we will see that the chambers in which IP is convex correspond to pairs of parallel edges of P , and that the polynomials defining the boundary of IP are linear in this case.
Proposition 4.2. Let P ⊆ R² be a polygon. Let C be a chamber of H(P), and let x ∈ C. We denote by v_1(x), v_2(x) the points of intersection x⊥ ∩ ∂P = {v_1(x), v_2(x)}. Let conv(a_1, b_1), conv(a_2, b_2) be edges of P such that v_1(x) ∈ conv(a_1, b_1) and v_2(x) ∈ conv(a_2, b_2). Then the polynomial defining ∂IP in the chamber C is linear if and only if the segments conv(a_1, b_1) and conv(a_2, b_2) are parallel.
Proof. We want to prove that {x ∈ C | ρ IP | C (x) = 1} is a line segment if and only if the two edges conv (a i , b i ) are parallel. Assume that v 1 (x) = λa 1 +(1−λ)b 1 and v 2 (x) = µa 2 +(1−µ)b 2 for some λ, µ ∈ (0, 1).
Since v_1(x), v_2(x) ∈ x⊥, we have
\[
\lambda = \frac{\langle b_1, x\rangle}{\langle b_1 - a_1, x\rangle}, \qquad \mu = \frac{\langle b_2, x\rangle}{\langle b_2 - a_2, x\rangle}.
\]
We compute the length of conv(v_1(x), v_2(x)), or equivalently of conv(0, v_1(x) − v_2(x)). We do this via the area of the triangle with vertices 0, v_1(x) − v_2(x) and x/‖x‖². Hence, the radial function can be computed by the determinantal expression
\[
\rho_{IP}|_C(x) = \frac{1}{\|x\|^2}\, \det \begin{pmatrix} v_1(x) - v_2(x) \\ x \end{pmatrix}.
\]
We compute the radial function explicitly. First,
\[
v_1(x) - v_2(x) = \frac{\langle b_2-a_2, x\rangle\bigl(\langle b_1, x\rangle\, a_1 - \langle a_1, x\rangle\, b_1\bigr) - \langle b_1-a_1, x\rangle\bigl(\langle b_2, x\rangle\, a_2 - \langle a_2, x\rangle\, b_2\bigr)}{\langle b_1-a_1, x\rangle\,\langle b_2-a_2, x\rangle}.
\]
The boundary ∂IP ∩ C is given by the set of points x ∈ C such that ρ_IP|_C(x) = 1, i.e., the points which satisfy
\[
\frac{1}{\|x\|^2}\, \det \begin{pmatrix} \langle b_2-a_2, x\rangle(\langle b_1, x\rangle a_1 - \langle a_1, x\rangle b_1) - \langle b_1-a_1, x\rangle(\langle b_2, x\rangle a_2 - \langle a_2, x\rangle b_2) \\ x \end{pmatrix} = \langle b_1-a_1, x\rangle\,\langle b_2-a_2, x\rangle, \tag{2}
\]
assuming that the determinant on the left hand side is positive on C (otherwise it gets multiplied by −1). This determinant is a cubic polynomial in x, which by [BBMS22, Prop. 5.5] is divisible by ‖x‖². Hence, the left hand side of (2) is a homogeneous linear polynomial in x. It divides the right hand side if and only if (b_2 − a_2) = κ(b_1 − a_1) for some κ ∈ R, i.e., if the two edges conv(a_i, b_i) are parallel. In this case (2) is a linear equation, and hence the curve defined by (2) is a line; otherwise it is a conic passing through the origin.
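Proposition 4.2 can be illustrated numerically: for the square, whose chords always join parallel edges, boundary points of IP taken in one chamber are collinear, while for a triangle they are not. The sketch below (not from the paper; the polygons and the sampled directions are arbitrary choices, with the directions picked inside a single chamber) computes boundary points p = ρ_IP(u) u from chord lengths.

```python
import numpy as np

def chord(x, normals, offsets):
    """Length of P ∩ x⊥ for a convex P = {y : <n,y> <= c} containing 0."""
    v = np.array([-x[1], x[0]]) / np.hypot(x[0], x[1])  # direction of x⊥
    lo, hi = -np.inf, np.inf
    for n, c in zip(normals, offsets):
        d = float(np.dot(n, v))
        if abs(d) < 1e-12:
            continue
        if d > 0:
            hi = min(hi, c / d)
        else:
            lo = max(lo, c / d)
    return max(hi - lo, 0.0)

def bpoint(theta, normals, offsets):
    """Boundary point rho_IP(u) * u of IP, u = (cos theta, sin theta)."""
    u = np.array([np.cos(theta), np.sin(theta)])
    return chord(u, normals, offsets) * u

def cross2(a, b):
    return a[0] * b[1] - a[1] * b[0]

deg = np.pi / 180
# Square [-1,1]^2: every chord joins a pair of parallel edges.
sq_N, sq_c = [(1, 0), (-1, 0), (0, 1), (0, -1)], [1, 1, 1, 1]
ps = [bpoint(t * deg, sq_N, sq_c) for t in (60, 75, 90)]     # one chamber
collinear = cross2(ps[1] - ps[0], ps[2] - ps[0])             # ~ 0: a line

# Triangle (1,0), (0,1), (-1,-1): these chords join non-parallel edges.
tr_N, tr_c = [(1, 1), (-2, 1), (1, -2)], [1, 1, 1]
pt = [bpoint(t * deg, tr_N, tr_c) for t in (140, 150, 160)]  # one chamber
curved = cross2(pt[1] - pt[0], pt[2] - pt[0])                # != 0: a conic
print(abs(collinear) < 1e-9, abs(curved) > 1e-3)
```

For the square the three points all lie on the line y = 2, matching IP = r_{π/2}(2P); for the triangle the cross product is bounded away from zero.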
Proposition 4.3. Let P ⊆ R² be a polygon with the origin in its interior. If there exists a line through the origin which intersects ∂P in two non-parallel edges, then IP is not convex.
Proof. Let C be a chamber of H(P) such that x⊥ intersects two non-parallel edges ℓ_1, ℓ_2 of P. Consider u_a, u_b ∈ C ∩ S¹. As shown in Figure 4, we denote
\[
u_a^\perp \cap \ell_1 = a = (a_1, a_2), \quad u_b^\perp \cap \ell_1 = b = (b_1, b_2), \quad u_a^\perp \cap \ell_2 = -\alpha a, \quad u_b^\perp \cap \ell_2 = -\beta b,
\]
for some positive real numbers α, β > 0. Since ℓ_1 and ℓ_2 are not parallel, we have α ≠ β. We can choose a, b such that u_a = (a_2, −a_1)/‖a‖ and u_b = (b_2, −b_1)/‖b‖. The lengths of the line segments u_a⊥ ∩ P = conv(a, −αa) and u_b⊥ ∩ P = conv(b, −βb) are
\[
\|u_a^\perp \cap P\| = \|a - (-\alpha a)\| = (1+\alpha)\|a\|, \qquad \|u_b^\perp \cap P\| = \|b - (-\beta b)\| = (1+\beta)\|b\|.
\]
Thus, the boundary points of IP in the directions u_a, u_b are
\[
p_a := \rho_{IP}(u_a)\, u_a = (1+\alpha)\|a\|\, u_a = (1+\alpha)\begin{pmatrix} a_2 \\ -a_1 \end{pmatrix}, \qquad p_b := \rho_{IP}(u_b)\, u_b = (1+\beta)\|b\|\, u_b = (1+\beta)\begin{pmatrix} b_2 \\ -b_1 \end{pmatrix},
\]
respectively. Consider the midpoint (a+b)/2 ∈ ℓ_1 and let u_{a+b} be the unit vector in C orthogonal to a + b (and thus also to (a+b)/2). Then u_{a+b} = (a_2+b_2, −a_1−b_1)/‖a+b‖, and u_{a+b}⊥ ∩ ℓ_2 = −(αβ/(α+β))(a + b), so the boundary point of IP in direction u_{a+b} is
\[
p_{a+b} = \rho_{IP}(u_{a+b})\, u_{a+b} = \left(\frac{1}{2} + \frac{\alpha\beta}{\alpha+\beta}\right)\|a+b\|\, u_{a+b} = \left(\frac{1}{2} + \frac{\alpha\beta}{\alpha+\beta}\right)\begin{pmatrix} a_2+b_2 \\ -a_1-b_1 \end{pmatrix}.
\]
Let q = conv(p_a, p_b) ∩ cone(u_{a+b}), as in Figure 4. We want to prove that IP ∩ C is not convex, by showing that ‖q‖ > ‖p_{a+b}‖. Indeed, we can compute that
\[
q = \frac{(1+\alpha)(1+\beta)}{2+\alpha+\beta}\begin{pmatrix} a_2+b_2 \\ -a_1-b_1 \end{pmatrix},
\]
and therefore
\[
\|q\| - \|p_{a+b}\| = \left(\frac{(1+\alpha)(1+\beta)}{2+\alpha+\beta} - \frac{1}{2} - \frac{\alpha\beta}{\alpha+\beta}\right)\|a+b\| = \frac{(\alpha-\beta)^2}{2(2+\alpha+\beta)(\alpha+\beta)}\,\|a+b\|.
\]
Since α ≠ β, this expression is strictly positive, and so q ∉ IP. This proves that p_a, p_b ∈ IP, but the segment conv(p_a, p_b) is not contained in IP. Hence, IP is not convex.
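The final identity in the proof is elementary arithmetic in α and β and can be verified directly. A small sketch (not from the paper) checking it for random parameters, with both norms expressed as multiples of ‖a + b‖:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    alpha, beta = rng.uniform(0.1, 5.0, size=2)
    # ||q|| and ||p_{a+b}||, both divided by ||a + b||
    q_norm = (1 + alpha) * (1 + beta) / (2 + alpha + beta)
    p_norm = 0.5 + alpha * beta / (alpha + beta)
    gap = (alpha - beta) ** 2 / (2 * (2 + alpha + beta) * (alpha + beta))
    assert abs((q_norm - p_norm) - gap) < 1e-12
    assert q_norm >= p_norm       # equality exactly when alpha == beta
print("identity verified")
```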
We are now ready to move towards a full classification of convexity of intersection bodies of polygons for any translation. Note that if P is centrally symmetric and centered at the origin, then the convexity of IP and its description follow from the following classical statement.

Theorem 4.4 ([Gar06, Theorem 8.1.4]). Let K ⊆ R² be a centrally symmetric convex body centered at the origin. Then IK = r_{π/2}(2K), where r_{π/2} is a counter-clockwise rotation by π/2.

Our goal is to classify also the cases in which P is not centrally symmetric and centered at the origin. A key argument in the proof of Theorem 4.6 is done via the chordal symmetral of P. The chordal symmetral ∆K of a star body K ⊆ R^d is the union of the segments conv(−c_u u, c_u u), where u ∈ S^{d−1} and c_u = ½ vol_{d−1}(K ∩ u⊥) [Gar06, Definition 5.1.3]. The chordal symmetral is a star-shaped set with respect to the origin. We will make use of the statements in Proposition 4.5.
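Theorem 4.4 can be tested numerically on a non-square example. The sketch below (not from the paper; the rectangle K = [−2, 2] × [−1, 1] is an arbitrary choice) compares the chord length vol₁(K ∩ u⊥), i.e. ρ_IK(u), with the radial function of r_{π/2}(2K) at u, which equals the radial function of 2K at r_{−π/2} u.

```python
import numpy as np

def chord(u):
    """Length of K ∩ u⊥ for the rectangle K = [-2, 2] x [-1, 1]."""
    v = np.array([-u[1], u[0]]) / np.hypot(u[0], u[1])
    s = np.inf
    if abs(v[0]) > 1e-12:
        s = min(s, 2 / abs(v[0]))
    if abs(v[1]) > 1e-12:
        s = min(s, 1 / abs(v[1]))
    return 2 * s          # the chord is symmetric about the origin

rng = np.random.default_rng(1)
for _ in range(200):
    th = rng.uniform(0, 2 * np.pi)
    u = np.array([np.cos(th), np.sin(th)])
    # radial function of r_{pi/2}(2K) at u = radial of 2K at r_{-pi/2} u;
    # for the rectangle, rho_{2K}(w) = min(4/|w_1|, 2/|w_2|) on the circle
    w = np.array([u[1], -u[0]])
    rho = min(4 / max(abs(w[0]), 1e-300), 2 / max(abs(w[1]), 1e-300))
    assert abs(chord(u) - rho) < 1e-9
print("IK = r_{pi/2}(2K) verified for the rectangle")
```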
Proof. As noted in Remark 5.1, IP is not convex if the origin lies in R 2 \ P , or if the origin is a vertex of P . We are left to analyze the cases in which the origin lies in the interior of P or in the interior of an edge of P .
We first consider the case in which the origin lies in the interior of P and show that IP is convex if and only if P = −P . If P = −P , then Theorem 4.4 implies that IP is convex. Assume now that IP is convex, and the origin lies in the interior of P . Then C ∩ IP is convex for every chamber C of H(P ). In particular, by Proposition 4.3, every line u ⊥ , u ∈ S 1 , which does not intersect a vertex of P intersects ∂P in the interior of two parallel edges. Hence, the edges of P come in pairs of parallel edges. We rotate u ∈ S 1 continuously. Whenever u ⊥ crosses a vertex of one edge, it must also cross a vertex in the parallel edge, since otherwise this results in a pair of non-parallel edges. This implies that for every vertex v of P , there exists a vertex w of P such that w = −λv for some λ > 0. Since all edges are pairwise parallel, this positive scalar λ is the same for all vertices. Therefore, we also get that v = −λw, which implies that λ = 1. Hence, P = −P .
Consider now the case in which the origin lies in the interior of an edge of P . Since the origin lies on the boundary of P , we have that IP = 1 2 I(P ∪ −P ). Using Proposition 4.5 we deduce the following chain of equalities:
\[
IP = \tfrac{1}{2}\, I(P \cup -P) \overset{(ii)}{=} \tfrac{1}{2}\cdot 2\, \Delta(P \cup -P) \overset{(i)}{=} P \cup -P.
\]
Therefore, IP is convex if and only if P ∪ −P is convex. In order for this to happen, the origin must be the midpoint of the edge, and additionally P ∪ −P must be convex.
Example 4.7. By Corollary 4.8, for each polygon P there are only finitely many positions of the origin such that the intersection body of P is convex. Figure 5 shows a collection of examples of polygons, together with the admissible positions of the origin: a parallelogram (k = 5), an acute triangle (k = 3), a diamond shape (k = 2), a panettone shape, and a centrally symmetric polygon which is not a parallelogram (k = 1). The case k = 4 is not realizable.
Corollary 4.8. The convexity space CS(P ) of a polygon P ⊆ R 2 is finite. More precisely, let k = |CS(P )| = |{t ∈ R 2 | I(P + t) is convex}|.
Then k ≤ 5 and the equality is realized exactly when P is a parallelogram.
Proof. By Theorem 4.6, I(P + t) is convex if and only if −t is the center of symmetry of P (if it exists), or a midpoint of an edge such that (P + t) ∪ −(P + t) is convex. Thus, the number of such t ∈ R 2 is finite.
If −t is the midpoint of an edge, then (P + t) ∪ −(P + t) is convex if and only if the sum of the interior angles adjacent to this edge is at most π. Let v_1, . . . , v_n be the vertices of P, ordered cyclically, and let α_i be the interior angle of P at v_i (and α_{n+1} = α_1). Assume that there are m pairs of consecutive interior angles (α_i, α_{i+1}), i = 1, . . . , m, such that α_i + α_{i+1} ≤ π. Recall that for any polygon with n vertices, the sum of all interior angles is (n − 2)π. Furthermore, α_i ≤ π holds for every angle of a polygon. We thus obtain that
\[
2(n-2)\pi = 2\sum_{i=1}^n \alpha_i = \sum_{i=1}^n (\alpha_i + \alpha_{i+1}) = \sum_{i=1}^m (\alpha_i + \alpha_{i+1}) + \sum_{i=m+1}^n (\alpha_i + \alpha_{i+1}) \leq \sum_{i=1}^m \pi + \sum_{i=m+1}^n 2\pi = m\pi + 2(n-m)\pi.
\]
This implies m ≤ 4, hence k ≤ 5. A similar computation, with the exterior angles of P , implies that if k = 5 then n = m = 4 and all pairs of consecutive angles sum up to π. Hence, the unique maximizers of k are parallelograms.
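The angle-counting bound m ≤ 4 holds for every convex polygon and can be checked numerically. In the sketch below (not from the paper; the random model and the sample parallelogram are arbitrary choices), convex polygons are generated as points on a circle, and a parallelogram attains m = 4 since every pair of consecutive interior angles sums to π.

```python
import numpy as np

def interior_angles(V):
    """Interior angles of a convex polygon with vertices V in cyclic order."""
    n = len(V)
    ang = []
    for i in range(n):
        a, b, c = V[i - 1], V[i], V[(i + 1) % n]
        u, w = a - b, c - b
        cosang = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))
        ang.append(np.arccos(np.clip(cosang, -1, 1)))
    return np.array(ang)

rng = np.random.default_rng(2)
for _ in range(200):
    n = int(rng.integers(3, 10))
    theta = np.sort(rng.uniform(0, 2 * np.pi, size=n))
    V = np.c_[np.cos(theta), np.sin(theta)]   # points on a circle: convex
    a = interior_angles(V)
    cnt = sum(a[i] + a[(i + 1) % n] <= np.pi + 1e-9 for i in range(n))
    assert cnt <= 4

# A parallelogram attains m = 4.
P = np.array([[0, 0], [2, 0], [3, 1], [1, 1]], float)
a = interior_angles(P)
m = sum(a[i] + a[(i + 1) % 4] <= np.pi + 1e-9 for i in range(4))
print(m)  # 4
```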
We close this section by pointing out that many arguments made in this section do not generalize to higher dimensions: in contrast to Propositions 4.2 and 4.3, in higher dimensions there exist convex pieces IP ∩ C which are not linear. Furthermore, the identification with the chordal symmetral body, as in Theorem 4.6, does not hold in general. However, these insights into the 2-dimensional case will turn out to be essential for the arguments on the general case in the following section.
Convexity in Higher Dimensions
We devote this section to discuss the convexity space of polytopes of dimension d > 2. We make use of the results obtained in Section 4 to show that, similar to the 2-dimensional case, the convexity space of the d-dimensional cube is finite. In contrast, we give a sufficient condition under which the convexity space is infinite and contains a full-dimensional ball.
Remark 5.1. To obtain an intersection body IP which is convex, the origin must either lie in the interior of P , or in the interior of a facet of P . Otherwise, there exists a hyperplane x ⊥ intersecting P at most in a lower-dimensional face, and thus the radial function of IP in direction x has value 0. The set of such x is a cone V = C ∪ −C, where C ⊂ R d is a convex pointed cone. Then, given x ∈ C, there exist x 1 , x 2 ∈ R d \ V such that x is a convex combination of x 1 and x 2 . Since ρ IP (x) = 0, the segment with extrema ρ IP (x 1 ) x 1 and ρ IP (x 2 ) x 2 is not entirely contained in the intersection body IP , but its extrema are.
The next result connects the intersection body of a convex body to the intersection body of a prism over the given convex body.

Proof (of Proposition 5.2). Let u = (ū, 0) ∈ H and consider its orthogonal complement u⊥ ⊆ R^d, which in this case can be interpreted as ū⊥ × R ⊆ R^{d−1} × R. Then
\[
K \cap u^\perp = (L \times [a,b]) \cap (\bar u^\perp \times \mathbb{R}) = (L \cap \bar u^\perp) \times [a,b].
\]
We can therefore compute the radial function of IK as
\[
\rho_{IK}(u) = \operatorname{vol}_{d-1}(K \cap u^\perp) = \operatorname{vol}_{d-1}\bigl((L \cap \bar u^\perp) \times [a,b]\bigr) = (b-a)\cdot \rho_{IL}(\bar u)
\]
for u ∈ H. Equivalently, IK ∩ H = (b − a) IL.
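Proposition 5.2 can be checked numerically for a prism over a translated square. The sketch below (not from the paper; the translation and the direction u are arbitrary choices) computes the planar section of the box independently, by intersecting the plane u⊥ with the edges of the box, and compares its area with (b − a) times the chord length of the base.

```python
import numpy as np

def box_section_area(lo, hi, u):
    """Area of the box [lo, hi] ∩ u⊥ (plane through the origin, d = 3).

    Intersect the plane with each of the 12 edges of the box, then take
    the shoelace area of the resulting convex polygon.
    """
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    corners = np.array([[x, y, z] for x in (lo[0], hi[0])
                                  for y in (lo[1], hi[1])
                                  for z in (lo[2], hi[2])])
    pts = []
    for i in range(8):
        for j in range(i + 1, 8):
            if np.sum(corners[i] != corners[j]) != 1:
                continue                     # not an edge of the box
            a, b = corners[i], corners[j]
            da, db = u @ a, u @ b
            if da * db > 0 or da == db:
                continue                     # edge does not cross the plane
            pts.append(a + (0 - da) / (db - da) * (b - a))
    pts = np.unique(np.round(pts, 12), axis=0)
    e1 = np.cross(u, [0.0, 0.0, 1.0]); e1 /= np.linalg.norm(e1)
    e2 = np.cross(u, e1)                     # orthonormal basis of u⊥
    xy = np.c_[pts @ e1, pts @ e2]
    order = np.argsort(np.arctan2(xy[:, 1] - xy[:, 1].mean(),
                                  xy[:, 0] - xy[:, 0].mean()))
    x, y = xy[order, 0], xy[order, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(np.roll(x, -1), y))

# K = L x [a, b] with L = [-1, 1]^2 + (0.3, 0.1) and [a, b] = [-1, 1].
t = np.array([0.3, 0.1])
lo, hi = np.array([t[0] - 1, t[1] - 1, -1]), np.array([t[0] + 1, t[1] + 1, 1])
ubar = np.array([1.0, 2.0]) / np.sqrt(5)
u = np.array([ubar[0], ubar[1], 0.0])        # horizontal direction in H

v = np.array([-ubar[1], ubar[0]])            # direction of ubar⊥ in R^2
s_lo = max(min((lo[k] / v[k], hi[k] / v[k])) for k in range(2) if v[k] != 0)
s_hi = min(max((lo[k] / v[k], hi[k] / v[k])) for k in range(2) if v[k] != 0)
chord = max(s_hi - s_lo, 0.0)                # length of L ∩ ubar⊥

print(abs(box_section_area(lo, hi, u) - 2 * chord) < 1e-9)
```

Here b − a = 2, and the directly computed section area agrees with 2 · ρ_IL(ū).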
It follows that if IL is non-convex, then so is IK. This behavior can be observed in the following example.
Example 5.3. Consider the unit cube P = [−1, 1] 3 , which is a prism over a square. With the translation t = (1, 1, 1) we obtain the cube P + t = [0, 2] 3 , and I(P + t) is displayed in Figure 6, from two different points of view. Proposition 5.2 implies that I(P + t) ∩ (0, 0, 1) ⊥ is the second dilation of the intersection body of the square [0, 2] 2 , which is also displayed at the bottom left of Figure 3 in red. We can now use Proposition 5.2 to describe the convexity space of a cube in any dimension.
Proposition 5.4. Let P = [−1, 1] d be the centrally symmetric d-dimensional cube. The convexity space CS(P ) is finite, and |CS(P )| = |{t ∈ R d | I(P + t) is convex}| = 2d + 1.
These positions correspond to placing the origin in the center of symmetry of the cube or in the center of symmetry of one of its 2d facets.
Proof. We prove this statement by induction on d. The base case d = 2 follows from Theorem 4.6. Let now P = [−1, 1]^d and consider P + t for some t such that at least one of its coordinates is not in {−1, 0, 1}. Without loss of generality let that coordinate be t_1. We first prove that in this case I(P + t) is not convex.
Let Q = (P + t) ∩ H, where H = {x ∈ R^d | x_d = 0}. Then Q is a translation of a (d − 1)-dimensional cube. By the assumption on t_1, the origin is not the center of symmetry of Q, and not the center of any of its facets. Therefore, by induction, IQ is not convex. Notice that P + t = Q × [a, b] for some a, b ∈ R with b − a = 2. Proposition 5.2 implies that I(P + t) ∩ H = 2 IQ. Therefore, I(P + t) ∩ H is not convex, hence I(P + t) itself is not convex.
Consider now the case in which all the coordinates of t are in {−1, 0, 1}. Then the origin is either the center of symmetry of P + t, or it is the center of one of its faces. Recall from Remark 5.1 that if the origin lies on a face of dimension at most d − 2, then the intersection body is not convex.
We are left with the cases in which the origin is the center of the cube or of one of its facets. For t = (0, . . . , 0), P + t = P is centrally symmetric, hence I(P + t) is convex. If instead the origin is the center of a facet, then (P + t) ∪ −(P + t) is a centrally symmetric parallelepiped and by construction I(P + t) = ½ I((P + t) ∪ −(P + t)), which is convex. This concludes the proof.
Remark 5.5. We note that whenever the intersection body is strictly convex, then the convexity space contains an open ball around the origin. Indeed, this holds in more generality for the intersection body IK of any star body K ⊆ R^d with 0 in its interior, and follows directly from the continuity of the volume function, and therefore of the radial function, with respect to t. Let x, y ∈ R^d and p_w = ρ_{I(K+t)}(w) · w for w ∈ {x, y, x + y}, so that p_w ∈ ∂I(K + t). Denote by q_{x+y} the point of the segment conv(p_x, p_y) which is a multiple of x + y, namely
\[
q_{x+y} = \frac{\rho_{I(K+t)}(x)\,\rho_{I(K+t)}(y)}{\rho_{I(K+t)}(x)+\rho_{I(K+t)}(y)}\,(x+y).
\]
Then I(K + t) is strictly convex if and only if
\[
\frac{\rho_{I(K+t)}(x)\,\rho_{I(K+t)}(y)}{\rho_{I(K+t)}(x)+\rho_{I(K+t)}(y)} = \frac{\|q_{x+y}\|}{\|x+y\|} < \frac{\|p_{x+y}\|}{\|x+y\|} = \rho_{I(K+t)}(x+y). \tag{3}
\]
This gives a quadratic condition in ρ I(K+t) , which is continuous in t. Therefore, if (3) holds for IK, it holds also for I(K + t) with t ∈ B ε (0), for some ε > 0.
The next example shows that strictly convex intersection bodies of polytopes as in Remark 5.5 do indeed exist.
Example 5.6. The intersection body of the 3-dimensional centrally symmetric icosahedron P is strictly convex. Indeed, using HomotopyContinuation.jl [BT18] one can check that the algebraic varieties that define the boundary of IP do not contain lines (this is expected, since the generic quintic and sextic surface in 3-dimensional space do not contain lines). Moreover, because of the central symmetry, the intersection body is convex. Hence, it is strictly convex. This intersection body is displayed in [BBMS22, Figure 1], and our computations can be verified using the code on MathRepo [BBMS21].
To summarize, the convexity space can be finite or infinite. Indeed, for d = 2 we have shown that it is always finite, while in higher dimensions it is sometimes infinite, but other times finite, as for a cube. We note that proving non-convexity is a much easier task than proving convexity, as the first can be achieved by exhibiting the non-convexity of a small curve on the boundary, while convexity is a global condition. A possible approach to tackle this problem in the case of polytopes might be studying the curvature of the algebraic hypersurfaces defining the boundary of the intersection body, as in [BRW22].
Another interesting direction of research concerns the topology of the convexity space. We collect here some open questions.
Questions:
1. If the convexity space of P is finite, what are the possible values of |CS(P )|?
2. If the convexity space of P is infinite, how many connected components does it have? What is the dimension of these connected components?
Figure 2: The arrangement L(P) of the triangle from Examples 3.1 and 3.3.
Figure 3: The arrangement L(P) of affine lines for P = [−1, 1]², in black, together with I(P + t) for different choices of t, in red, as in Example 3.6.
Figure 4: The proof of Proposition 4.3 in a picture. Left: the lines orthogonal to u_a, u_b, u_{a+b} and their intersections with the edges ℓ_1, ℓ_2 of P. Right: the points p_a, p_b, p_{a+b} ∈ ∂IP, and the point q ∈ conv(p_a, p_b) with q ∉ IP.
Proposition 4.5 ([Gar06, Chapter 5.1]). Let K ⊆ R^d be a star body. Then
(i) K is centrally symmetric and centered at the origin if and only if K = ∆K;
(ii) if K ⊆ R² then IK = 2∆K.

We now prove the main result of this section.
Theorem 4.6. Let P ⊆ R² be a polygon. Then IP is a convex body if and only if
(i) P = −P, or
(ii) the origin is the midpoint of an edge of P, and P ∪ −P is convex.
Figure 5: Examples in which IP is convex; the bullets represent admissible positions of the origin. From left to right: a parallelogram (k = 5), an acute triangle (k = 3), a diamond shape (k = 2), a panettone shape, and a centrally symmetric polygon which is not a parallelogram (k = 1).
Proposition 5.2. Let L ⊆ R^{d−1} be a convex body and K = L × [a, b] ⊆ R^{d−1} × R ≅ R^d be a prism over L. Then the intersection of IK with the hyperplane H = {x ∈ R^d | x_d = 0} is the (b − a)-th dilate of IL, i.e., IK ∩ H = (b − a) IL.
Figure 6: The intersection body of the 3-dimensional cube P = [0, 2]³ (blue) and the intersection body of the square Q = [0, 2]² (red).
Acknowledgements. We are thankful to Christoph Hunkenschröder for posing a question during a seminar discussion which inspired this work. We thank Andreas Bernig and Jesús De Loera for inspiring conversations about intersection bodies and convexity. We are thankful to the organizers of the conference "Geometry meets Combinatorics in Bielefeld", where most of our ideas fell into place.
References

[BBMS21] Katalin Berlow, Marie-Charlotte Brandenburg, Chiara Meroni, and Isabelle Shankar. MathRepo. Mathematical Data and Software. Intersection Bodies of Polytopes. Online; accessed 21 February 2023.

[BBMS22] Katalin Berlow, Marie-Charlotte Brandenburg, Chiara Meroni, and Isabelle Shankar. "Intersection Bodies of Polytopes". In: Beiträge zur Algebra und Geometrie / Contributions to Algebra and Geometry 63 (2022), pp. 419–439. doi: 10.1007/s13366-022-00621-7.

[Ber09] Gautier Berck. "Convexity of Lp-Intersection Bodies". In: Advances in Mathematics 222.3 (2009), pp. 920–936. doi: 10.1016/j.aim.2009.05.009.

[Bre99] Ulrich Brehm. "Convex bodies with non-convex cross-section bodies". In: Mathematika 46.1 (1999), pp. 127–129. doi: 10.1112/S0025579300007610.

[BRW22] Paul Breiding, Kristian Ranestad, and Madeleine Weinstein. Enumerative Geometry of Curvature of Algebraic Hypersurfaces. 2022. arXiv: 2206.09130.

[BT18] Paul Breiding and Sascha Timme. "HomotopyContinuation.jl: A Package for Homotopy Continuation in Julia". In: International Congress on Mathematical Software. Springer, 2018, pp. 458–465.

[Bus49] Herbert Busemann. "A Theorem on Convex Bodies of the Brunn-Minkowski Type". In: Proceedings of the National Academy of Sciences of the United States of America 35.1 (1949), pp. 27–31. doi: 10.1073/pnas.35.1.27.

[Gar06] Richard J. Gardner. Geometric Tomography. Vol. 58. Encyclopedia of Mathematics and its Applications. Cambridge University Press, New York, 2006. doi: 10.1017/CBO9781107341029.

[Gar94a] Richard J. Gardner. "A positive answer to the Busemann-Petty problem in three dimensions". In: Annals of Mathematics. Second Series 140.2 (1994), pp. 435–447. doi: 10.2307/2118606.

[Gar94b] Richard J. Gardner. "Intersection bodies and the Busemann-Petty problem". In: Transactions of the American Mathematical Society 342.1 (1994), pp. 435–445. doi: 10.2307/2154703.

[GKS99] Richard J. Gardner, Alexander Koldobsky, and Thomas Schlumprecht. "An analytic solution to the Busemann-Petty problem on sections of convex bodies". In: Annals of Mathematics. Second Series 149.2 (1999), pp. 691–703. doi: 10.2307/120978.

[GOT18] Jacob E. Goodman, Joseph O'Rourke, and Csaba D. Tóth, eds. Handbook of Discrete and Computational Geometry. 3rd edition. Discrete Mathematics and its Applications. CRC Press, Boca Raton, FL, 2018.

[HHW12] Qingzhong Huang, Binwu He, and Guangting Wang. "The Busemann theorem for complex p-convex bodies". In: Archiv der Mathematik 99.3 (2012), pp. 289–299. doi: 10.1007/s00013-012-0422-y.

[Kol98] Alexander Koldobsky. "Intersection bodies, positive definite distributions, and the Busemann-Petty problem". In: American Journal of Mathematics 120.4 (1998), pp. 827–840. doi: 10.1353/ajm.1998.0030.

[KYZ11] Jaegil Kim, Vladyslav Yaskin, and Artem Zvavitch. "The geometry of p-convex intersection bodies". In: Advances in Mathematics 226.6 (2011), pp. 5320–5337. doi: 10.1016/j.aim.2011.01.011.

[Lut88] Erwin Lutwak. "Intersection bodies and dual mixed volumes". In: Advances in Mathematics 71.2 (1988), pp. 232–261. doi: 10.1016/0001-8708(88)90077-1.

[Mar92] Horst Martini. "Extremal equalities for cross-sectional measures of convex bodies". In: Proceedings of the 3rd Congress of Geometry (Thessaloniki, 1991). Aristotle University of Thessaloniki, 1992, pp. 285–296.

[Mar94] Horst Martini. "Cross-sectional measures". In: Intuitive Geometry (Szeged, 1991). Vol. 63. Colloq. Math. Soc. János Bolyai. North-Holland, Amsterdam, 1994, pp. 269–310.

[MR11] Mathieu Meyer and Shlomo Reisner. "The convex intersection body of a convex body". In: Glasgow Mathematical Journal 53.3 (2011), pp. 523–534. doi: 10.1017/S0017089511000103.

[Ste16] Matthew Stephen. "On convex intersection bodies and unique determination problems for convex bodies". In: Journal of Mathematical Analysis and Applications 443.1 (2016), pp. 295–312. doi: 10.1016/j.jmaa.2016.05.023.

[Zha99] Gaoyong Zhang. "A positive solution to the Busemann-Petty problem in R⁴". In: Annals of Mathematics. Second Series 149.2 (1999), pp. 535–543. doi: 10.2307/120974.
New examples of extremal domains for the first eigenvalue of the Laplace-Beltrami operator in a Riemannian manifold with boundary
20 Jun 2014 June 23, 2014
Jimmy Lamboley
Pieralberto Sicbaldi
New examples of extremal domains for the first eigenvalue of the Laplace-Beltrami operator in a Riemannian manifold with boundary
20 Jun 2014 June 23, 2014arXiv:1406.5167v2 [math.DG]
We build new examples of extremal domains with small prescribed volume for the first eigenvalue of the Laplace-Beltrami operator in some Riemannian manifold with boundary. These domains are close to half balls of small radius centered at a nondegenerate critical point of the mean curvature function of the boundary of the manifold, and their boundary intersects the boundary of the manifold orthogonally.
Introduction
New examples of domains with small prescribed volume that are critical points for the first eigenvalue of the Dirichlet Laplace-Beltrami operator are built in [21], under the hypothesis that the Riemannian manifold has at least one nondegenerate critical point of the scalar curvature function. In that case, such domains are given by small perturbations of geodesic balls of small radius centered at a nondegenerate critical point of the scalar curvature. This result has been generalized in [12] to all compact Riemannian manifolds by eliminating the hypothesis of the existence of a nondegenerate critical point of the scalar curvature.
Such examples of critical points for the Laplace-Beltrami operator are parallels to similar shape examples of critical points for the area functional, under the same assumptions, which lead to the construction of constant mean curvature small topological spheres, see [22,28].
The aim of this paper is to give some new examples of domains Ω that are critical points for the first eigenvalue of the Laplace-Beltrami operator (i.e. extremal domains) in some Riemannian manifolds M with boundary. Such examples are new because the boundary of the domain is partially included in the boundary of the manifold. The domains we obtain are close to half-balls centered at a point of ∂M where the mean curvature of ∂M is critical and the criticality is not degenerate. In particular, in the simplest situation, M can be a domain of the Euclidean space, see Fig. 1. Again, we can make a parallel with the case of the area, for which a similar result has been proven in the Euclidian case and dimension 3 in [15], though it is expected to be valid in the general case.
Assume that we are given (M, g), an (n + 1)-dimensional Riemannian manifold, n ≥ 1, with boundary ∂M ≠ ∅. The boundary ∂M is a smooth n-dimensional Riemannian manifold with the metric g̃ induced by g. For a domain Ω contained in the interior of M, Ω ⊂ M̊, the first eigenvalue of the Laplace-Beltrami operator with 0 Dirichlet boundary condition is then given by
\[
\lambda_\Omega = \min_{u\in H^1_0(\Omega)} \frac{\int_\Omega |\nabla u|^2}{\int_\Omega u^2}.
\]
If Ω is a boundary domain (i.e. a domain such that ∂Ω ∩ ∂M ≠ ∅), we consider the first eigenvalue of the Laplace-Beltrami operator with mixed boundary conditions,
\[
\lambda_\Omega = \min_{u\in H^1_0(\Omega)} \frac{\int_\Omega |\nabla u|^2}{\int_\Omega u^2}, \tag{1}
\]
where H 1 0 (Ω) denotes the closure of the space {ϕ ∈ C ∞ (Ω), Supp(ϕ) ⊂ Ω ∪ ∂M } for the H 1 -norm. It is very classical that the optimization problem (1) admits a nonnegative solution if Ω has finite volume, and if Ω is connected such a solution is unique among nonnegative functions whose L 2 -norm is 1. This function is then called the first eigenfunction of Ω.
Under a smoothness assumption (for example if Ω is a piecewise C^{1,α}-domain, see Section 2.1 for more detailed definitions), the space H¹₀(Ω) is equal to the space of functions in H¹(Ω) with 0 Dirichlet condition on ∂Ω ∩ M̊, and the function u solving (1) satisfies
\[
\begin{cases}
\Delta_g u + \lambda_\Omega\, u = 0 & \text{in } \Omega, \\
u = 0 & \text{on } \partial\Omega \cap \mathring{M}, \\
g(\nabla u, \nu) = 0 & \text{on } \partial\Omega \cap \partial M,
\end{cases} \tag{2}
\]
where ν denotes the outward normal vector to ∂M, which is well-defined as soon as Ω is included in a small enough ball, which will be the case in the whole paper. This will be referred to as a mixed eigenvalue problem over Ω. Moreover, it is also well-known that if there exists a nontrivial solution (u, λ) of (2) for a connected domain Ω such that u is nonnegative, then λ = λ_Ω is the first eigenvalue of Ω, and u is the first eigenfunction of Ω, up to a multiplicative constant.
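In the model case where M is the half-plane {x ≥ 0} ⊂ R² and Ω = (0, 1)², the mixed problem (2) separates: u = cos(πx/2) sin(πy) and λ_Ω = (π/2)² + π² = 5π²/4. The finite-difference sketch below (not from the paper; the grid size is an arbitrary choice, and the Neumann condition on {x = 0} is imposed by a mirror ghost node) reproduces this value.

```python
import numpy as np

# Mixed eigenvalue problem on Ω = (0,1)^2: Neumann on {x = 0} (= ∂Ω ∩ ∂M),
# homogeneous Dirichlet on the three remaining sides (= ∂Ω ∩ interior of M).
N = 30
h = 1.0 / N
nx, ny = N, N - 1      # unknowns: x-nodes 0..N-1 (x=0 included), y-nodes 1..N-1

def idx(i, j):
    return i * ny + (j - 1)

A = np.zeros((nx * ny, nx * ny))
for i in range(nx):
    for j in range(1, N):
        r = idx(i, j)
        A[r, r] = 4.0
        if i == 0:
            # mirror ghost u(-h, y) = u(h, y): homogeneous Neumann at x = 0
            A[r, idx(1, j)] += -2.0
        else:
            A[r, idx(i - 1, j)] += -1.0
            if i + 1 < nx:                 # i+1 == N is the Dirichlet side x = 1
                A[r, idx(i + 1, j)] += -1.0
        if j - 1 >= 1:                     # j = 0 is the Dirichlet side y = 0
            A[r, idx(i, j - 1)] += -1.0
        if j + 1 <= N - 1:                 # j = N is the Dirichlet side y = 1
            A[r, idx(i, j + 1)] += -1.0
A /= h ** 2

lam_h = min(np.linalg.eigvals(A).real)
lam_exact = 5 * np.pi ** 2 / 4
print(lam_h, lam_exact)    # agree to roughly 0.1% at this resolution
```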
Let us consider a boundary domain Ω_0 ⊂ M. Ω_0 is said to be extremal if Ω ↦ λ_Ω is critical at Ω_0 with respect to variations of the domain Ω_0 which preserve its volume. In order to make this notion precise, we first introduce the definition of a deformation of Ω_0.

Definition 1.1. We say that (Ω_t)_{t∈(−t_0,t_0)} is a deformation of Ω_0 if there exists a vector field V on M, of class C², such that its flow ξ, defined for t ∈ (−t_0, t_0) by
\[
\frac{d\xi}{dt}(t, p) = V(\xi(t, p)), \qquad \xi(0, p) = p,
\]
preserves the boundary of the manifold, i.e. ξ(t, p) ∈ ∂M for all (t, p) ∈ (−t_0, t_0) × ∂M, and for which Ω_t = ξ(t, Ω_0).
The deformation is said to be volume preserving if the volume of Ω t does not depend on t.
If (Ω_t)_{t∈(−t_0,t_0)} is a deformation of Ω_0, we denote by λ_t the first eigenvalue of the Laplace-Beltrami operator −∆_g on Ω_t. We prove in Section 2 that t ↦ λ_t is smooth in a neighborhood of t = 0. If Ω_0 ⊂ M̊, this fact is standard and follows from the implicit function theorem together with the fact that the first eigenvalue of the Laplace-Beltrami operator is simple, see for example [18]. When the boundary ∂M is invariant under the flow of the deformation, as required in Definition 1.1, a similar strategy still works when ∂Ω ∩ ∂M ≠ ∅, but this is less classical since one needs to manage the singularities of the boundary domains under consideration, see Proposition 2.5. The derivative at 0 of t ↦ λ_t is then called the shape derivative of Ω ↦ λ_Ω at Ω_0 in the direction V.
This remark allows us to give the definition of an extremal domain.

Definition 1.2. A domain Ω_0 is an extremal domain for the first eigenvalue of −∆_g if for any volume preserving deformation {Ω_t}_t of Ω_0, we have

dλ_t/dt |_{t=0} = 0,    (3)

where λ_t = λ_{Ω_t} as defined in (1).
All along the paper, we will use a special system of coordinates, which we recall here: let p ∈ ∂M, and let N be the unit normal vector field on ∂M near p that points into M. We fix local geodesic normal coordinates x = (x_1, ..., x_n) in a neighborhood of 0 ∈ R^n to parametrize a neighborhood U_p of p in ∂M by a map Φ. We consider the mapping
Ψ(x 0 , x) = Exp Φ(x) (x 0 N (Φ(x)))(4)
which is a local diffeomorphism from a neighborhood of 0 ∈ R^{n+1}_+ (where R^{n+1}_+ = {(x_0, x) ∈ R^{n+1} : x_0 > 0}) into a neighborhood V_p of p in M. For all ε > 0 small enough, we denote by B^+_ε ⊂ R^{n+1}_+ the half-ball given by the Euclidean ball of radius ε centered at the origin and restricted to x_0 > 0, and we denote B^+_{g,ε}(p) = Ψ(B^+_ε) ⊂ M.

[Figure: the parametrization Φ of ∂M and the inward unit normal field N at points p_1, p_2.]

Now we can state the main result of the paper:

Theorem 1.3. Assume that p_0 ∈ ∂M is a nondegenerate critical point of H, the mean curvature function of (∂M, g).
Then, for all ε > 0 small enough, say ε ∈ (0, ε_0), there exists a boundary domain Ω_ε ⊂ M such that:

(i) the volume of Ω_ε is equal to the Euclidean volume of B^+_ε;
(ii) the domain Ω_ε is extremal in the sense of Definition 1.2;
(iii) the boundary ∂Ω_ε ∩ M intersects ∂M orthogonally;
(iv) the boundary ∂Ω_ε ∩ M is analytic if M is analytic.

Moreover, there exists c > 0 and, for all ε ∈ (0, ε_0), there exists p_ε ∈ ∂M such that ∂Ω_ε ∩ M is a normal graph over ∂B^+_{g,ε}(p_ε) ∩ M for some function w_ε with

‖w_ε‖_{C^{2,α}(∂B^+_{g,ε}(p_ε) ∩ M)} ≤ c ε^3 and dist(p_ε, p_0) ≤ c ε.
This result will be proven in Section 4.5. The strategy of the proof is inspired by [21]. In order to give the outline of the paper, we recall the strategy here and insist on the main differences with [21]. The first step is to characterize the extremality of a domain Ω_0 by the Euler-Lagrange equation, which leads to:
g(∇u, ν) = constant on ∂Ω 0 ∩M .(5)
The difficulty here is to prove this characterization for domains that are only piecewise smooth (see Section 2.1, where we introduce the notion of boundary edge domain and analyze the regularity theory of the mixed boundary value problem in such domains). In particular, we prove in Section 2.2 that in order to be extremal it is enough for a domain to satisfy (3) for deformations that preserve the contact angle on ∂M; this important fact will be used in the rest of the paper for the construction of extremal domains. This is an interesting difference with the case of critical points of the area functional, as we explain in Section 2.3: condition (5) already contains the information that the contact angle between ∂Ω_0 ∩ M and ∂Ω_0 ∩ ∂M is constant and equal to π/2, see Corollary 2.6; this is due to the non-locality of the Euler-Lagrange equation for this problem. It also implies the analytic regularity of ∂Ω_0 ∩ M.
Then, thanks to a dilation of the metric and a control of the volume constraint, we reformulate in Section 3 the problem into solving for any small ε the equation
F (p, ε,v) = 0 (6)
where p ∈ ∂M and v̄ ∈ C^{2,α}(S^n_+) is a function that parametrizes a perturbation of the half-geodesic ball B^+_{g,ε}(p), and F(p, ε, v̄) represents the difference between g(∇u, ν) and its mean value on the boundary of this perturbed half-geodesic ball. We then want to solve this equation for ε > 0 by using the implicit function theorem, and therefore we study the operator ∂_v̄ F(p, 0, 0), which is essentially the second order shape derivative of λ_1 at the Euclidean half-ball. This is the purpose of Sections 4.1 and 4.2, where we use a symmetrization argument to reduce to the study of the same operator in the Euclidean ball, which has been done in [21]. As expected, that operator has a nontrivial kernel (because of the invariance of λ_1 by translations along ∂R^{n+1}_+ in the Euclidean setting) and we are only able to solve

F(p, ε, v̄(p, ε)) = k(p, ε)
where k(p, ε) is a linear function induced by an element of ∂R^{n+1}_+, see Proposition 4.4. Here comes the final step of the proof of Theorem 1.3, which takes into account the geometry of ∂M: by studying the expansions of F and v̄ with respect to ε, we prove at the end of Section 4 that, close to a point p_0 which is a nondegenerate critical point of the mean curvature of ∂M, one can choose p_ε such that k(p_ε, ε) = 0 and conclude the proof. We insist on the fact that this step is more involved here than in [21]: indeed, the expansions in ε contain lower order terms than in the case without boundary (see Lemma 4.3 and Propositions 4.4, 4.5). Nevertheless, thanks to the choice of our coordinates, the strategy still applies because these lower order terms are orthogonal to linear functions induced by elements of ∂R^{n+1}_+.
Characterization of boundary extremal domains
In this section, we focus on an analytic characterization of extremal domains. The main difficulty here is to handle the shape derivative of Ω ↦ λ_Ω in a nonsmooth setting. Indeed, because of the presence of a boundary in M, we are naturally led to deal with domains that are only piecewise smooth. First, we treat the regularity for the mixed problem (2) in domains called boundary edge domains. We then compute the shape derivative of Ω ↦ λ_Ω in this setting. Since we have to deal with possibly nonsmooth eigenfunctions, one needs to carefully prove the differentiability of Ω ↦ λ_Ω and compute the shape derivative. We will also insist on some important aspects of the non-locality of the extremality condition for λ_1, and compare it with the case of critical points of the area functional.
Boundary edge domains and regularity of the eigenfunction
Definition 2.1.
Let Ω be a boundary domain of the manifold M, that is to say ∂Ω ∩ ∂M ≠ ∅. We say that Ω is a boundary edge domain if it satisfies the following condition:
1. ∂Ω ∩M and ∂Ω ∩ ∂M are smooth n-dimensional submanifolds with boundary, 2. Γ := ∂Ω ∩M ∩ ∂M is a (n − 1)-dimensional smooth submanifold without boundary.
In that case, given p ∈ Γ we can define ω(p), the angle between the normal vector to Γ tangent to ∂M and the normal vector to Γ tangent to ∂Ω ∩ M. The function ω : Γ → [0, π] will be referred to as the contact angle of the domain Ω, see Fig. 3.

Proposition 2.2. Let Ω be a connected boundary edge domain of finite volume such that the contact angle ω is strictly between 0 and π. Then there exists ε > 0 such that for any f ∈ H^{−1/2+ε}(Ω), the solution u of

−∆_g u = f in Ω,    u = 0 on ∂Ω ∩ M,    g(∇u, ν) = 0 on ∂Ω ∩ ∂M    (7)

is in the space H^{3/2+ε}(Ω).

Remark 2.3. It is important for our purpose to work here with Sobolev regularity: if we work instead with Hölder regularity, we can only conclude that u ∈ C^{0,1/2+ε}(Ω), which does not suffice to justify the expression of the shape derivative, which uses the trace of the gradient on ∂Ω (see Section 2.2); whereas from the fact that u ∈ H^{3/2+ε}(Ω) we can deduce that ∇u has a trace in L²(∂Ω) (we use here a trace theorem, valid since under our assumptions Ω has a Lipschitz boundary).
Proof. Let f ∈ H^s(Ω) where s ∈ (−1, 0). It is well known from the variational formulation of the problem that there exists a unique weak solution u ∈ H^1(Ω) of (7). We ask for which s we can state that u ∈ H^{s+2}(Ω). To that end, we work locally around a point p ∈ Γ: there exist special cylindrical coordinates (r, θ, y) such that Γ corresponds to r = 0, y ∈ Γ parametrizes the edge (p corresponding to y = 0), and Ω corresponds to 0 < θ < ω(y); since Ω is a boundary edge domain, these coordinates are well defined and C^∞. From the literature on edge asymptotics, we know that u can be written around p as the sum of a singular function u_sing and a remainder term u_reg which is more regular than u_sing; more precisely, it is known (see for example [7,8,9,13,14]) that:

if ω(y) ∈ (0, π/2), then u_sing(r, θ, y) = 0 and u_reg ∈ H^{s+2}(Ω);

if ω(y) ∈ (π/2, π), then u_sing(r, θ, y) = c(y) r^{π/(2ω(y))} φ(θ, y);

if ω(y) = π/2 in a neighborhood of y = 0, then u_sing(r, θ, y) = r Σ_{q≥1} c_q(y) ln^q(r) φ_q(y, θ),
where c, (c_q)_{q∈N} (containing only a finite number of non-zero terms) and φ, (φ_q)_{q∈N} are smooth functions (we note that when n = 1 the set Γ is made of two points; in that case the regularity on Γ is an empty condition). Let us conclude in the last two cases. In the second one, we know that

if α > s′ − (n+1)/2, then r ↦ r^α ∈ H^{s′}(R^{n+1}),

and therefore the regularity increases with small angles; the worst regularity is obtained when the angle is close to π, but it is always strictly better than H^{3/2}, which is the limit case when ω = π and n + 1 = 2. In the last case, it is clear that r ln^q(r) = o(r^{1−δ}) for any small δ, so we obtain that the regularity is also better than H^{3/2}; therefore there exists s strictly above −1/2 such that u ∈ H^{s+2}(Ω). It remains to understand the case where ω(0) = π/2 but ω is not constant in a neighborhood of y = 0. In that case the asymptotic development is more involved (a phenomenon of crossing singularities), but it is explained in [8,9] that, up to an arbitrarily small loss of regularity, we obtain the same range of regularity as in the case ω = π/2, and therefore again u_sing is in H^{3/2+ε}(Ω).
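The exponent bookkeeping above can be checked numerically; the following sketch (an illustration, not from the paper) evaluates the Sobolev criterion α > s′ − (n+1)/2 for the singular exponent α = π/(2ω) in the worst case n + 1 = 2:

```python
import math

# Dimension: the domain lives in R^{n+1}; take n = 1, the worst case
# mentioned in the text (the limit H^{3/2} is attained when omega = pi).
n = 1

def best_sobolev(omega):
    """Supremum of s' with r^alpha in H^{s'}(R^{n+1}) for alpha = pi/(2*omega):
    the criterion alpha > s' - (n+1)/2 gives s' < alpha + (n+1)/2."""
    return math.pi / (2.0 * omega) + (n + 1) / 2.0

# For every contact angle omega in (pi/2, pi) the regularity stays strictly
# above H^{3/2}, and it degenerates to exactly 3/2 in the limit omega -> pi.
angles = [math.pi / 2 + k * (math.pi / 2) / 200 for k in range(1, 200)]
assert all(best_sobolev(w) > 1.5 for w in angles)
assert abs(best_sobolev(math.pi) - 1.5) < 1e-12
```

This mirrors the statement in the proof: the regularity is monotone in the angle and only reaches the limiting space H^{3/2} at ω = π.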
In the previous proof, we have seen that the regularity is more or less monotone with respect to the contact angle: the smaller the angle, the higher the regularity, and for angles close to π the regularity decreases to the space H^{3/2}. However, it is also known that there exist some exceptional angles for which the regularity is higher than expected (see for example [1] for a description of this phenomenon for the angle π/4 in dimension 2). We prove here that the angle π/2 is such an exceptional angle in our situation. More precisely, we prove that when the angle is π/2 everywhere on the interface, the regularity is actually C^{2,α}, whereas from the proof of the previous statement it was only expected to be C^{0,α} for every α. This will be very useful in the proof of Theorem 1.3. This result is related to the fact that one can use a symmetrization argument to conclude that the first expected term in the asymptotic development of u vanishes.
Proposition 2.4.
Let Ω be a boundary edge domain, such that the angle ω defined on Γ is constant and equal to π/2. Then for every α < 1 and any f ∈ C 0,α (Ω), the solution u of (7) is in C 2,α (Ω).
Proof. We use the same setting as in the proof of Proposition 2.2, but now in the class of Hölder spaces, so we consider f ∈ C^{0,α}(Ω). Around p ∈ Γ, from [8,9,13,14], we know that the exponents in the asymptotic development for the mixed boundary problem are (π/(2ω) + kπ/ω)_{k∈N}, so for the angle π/2 the first exponents are 1 and 3, and since r ↦ r^3 ln^q(r) belongs to the space C^{2,α}(Ω) for every α and any integer q, we conclude that

u(r, θ, y) = r Σ_{q≥1} c_q(y) ln^q(r) φ_q(y, θ) + u_reg(r, θ, y),    (8)
for y close to 0, r small, θ ∈ (0, π/2) and where functions (c q , ϕ q ) are smooth and u reg is in C 2,α locally around p.
The result will be proven if we show that c_q = 0 for q ≥ 1. To that end, we use a symmetrization procedure through ∂M, using around p ∈ Γ the coordinates (x_0, x) described in (4). We define

U = Ψ^{−1}(Ω ∩ B^+_{g,r_0}(p)) ⊂ B^+_{r_0}, so that ∂U ∩ ({0} × R^n) = Ψ^{−1}(∂Ω ∩ ∂M ∩ B^+_{g,r_0}(p)).

With this choice of coordinates, U is again a boundary edge domain whose contact angle is constant and equal to π/2 on γ = Ψ^{−1}(Γ). We now define W = {(x_0, x) : (|x_0|, x) ∈ U} and, for all (x_0, x) ∈ W,

ů(x_0, x) = u(x_0, x) if x_0 > 0,    ů(x_0, x) = u(−x_0, x) if x_0 < 0,

and similarly we define g̊ and f̊.
Since the contact angle is π/2, the symmetrized domain W is smooth around 0; using that u satisfies a Neumann boundary condition on ∂Ω ∩ ∂M, we deduce that ů satisfies

−∆_g̊ ů = f̊ in W,    ů = 0 on ∂W ∩ B_{r_0}.

The symmetrized metric g̊ is no longer C^∞ but has Lipschitz coefficients, and f̊ is again in C^{0,α}(W).
Since the Laplace operator can be written in divergence form,

∆_g̊ ů = (1/√|g̊|) ∂_i (√|g̊| g̊^{ij} ∂_j ů),
we can apply the regularity theory for elliptic PDEs in divergence form in a smooth set with Lipschitz coefficients: precisely, from [19, Theorem 8.34] we know that ů ∈ C^{1,α}(W), and therefore the (c_q)_{q≥1} must be zero; finally u ∈ C^{2,α}(Ω).
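The divergence-form identity used above can be checked symbolically on a concrete metric; here is a minimal sketch with sympy, using the flat metric in polar coordinates on R² (an illustration, not the paper's setting):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
u = sp.Function('u')(r, th)

# Flat metric in polar coordinates: g = diag(1, r^2),
# so sqrt(det g) = r and g^{ij} = diag(1, 1/r^2).
sqrt_g = r
ginv = [1, 1 / r**2]
coords = [r, th]

# Divergence form (diagonal metric):
#   Delta_g u = (1/sqrt|g|) * sum_i d_i( sqrt|g| * g^{ii} * d_i u )
lap_div = sum(sp.diff(sqrt_g * ginv[i] * sp.diff(u, coords[i]), coords[i])
              for i in range(2)) / sqrt_g

# The classical polar Laplacian, for comparison
lap_polar = sp.diff(u, r, 2) + sp.diff(u, r) / r + sp.diff(u, th, 2) / r**2

assert sp.simplify(lap_div - lap_polar) == 0
```

The same formula is what lets the proof treat the merely Lipschitz symmetrized metric g̊: only first derivatives of the coefficients √|g̊| g̊^{ij} would appear if one expanded the divergence.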
Shape derivative in nonsmooth domains
Proposition 2.5.
Let Ω_0 be a connected boundary domain of finite volume. Assume that (Ω_t)_t is a deformation of Ω_0 induced by the vector field V, as defined in Definition 1.1. Then t ↦ λ_t is C^∞ around t = 0. If moreover Ω_0 is a boundary edge domain such that the contact angle is strictly between 0 and π, then g(∇u_0, ν_0) ∈ L²(∂Ω_0) and

dλ_t/dt |_{t=0} = − ∫_{∂Ω_0 ∩ M} (g(∇u_0, ν_0))² g(V, ν_0) dvol_g,    (9)

where dvol_g is the volume element on ∂Ω_0 ∩ M for the metric induced by g and ν_0 is the normal vector field on ∂Ω_0 ∩ M.
Before proving this result, we give some remarks and consequences. The differentiability of a similar shape functional for the mixed boundary value problem is studied in [25, Section 3.9] in the case of a smooth domain, which corresponds to the case of an angle constant and equal to π. In that case formula (9) is not valid, since the eigenfunction u is not smooth enough. Also, in [3] the case of angles different from π is considered, but for a different shape functional and restricted to the two-dimensional case. Proposition 2.5 allows us to characterize extremal domains for the first eigenvalue of the Laplace-Beltrami operator under 0 mixed boundary conditions, and to recast the problem of finding extremal domains as the solvability of an over-determined elliptic problem. As a consequence of the previous result, we indeed obtain:
Corollary 2.6.
Let Ω_0 be a boundary edge domain. Then Ω_0 is extremal if and only if the first eigenfunction u_0 of Ω_0 satisfies

g(∇u_0, ν_0) = constant on ∂Ω_0 ∩ M    (10)

where ν_0 is the outward normal vector field on ∂Ω_0 ∩ M. In that case, ∂Ω_0 ∩ M necessarily meets ∂M orthogonally, that is to say the contact angle function ω is equal to π/2 on Γ.
Proof of Corollary 2.6: Let Ω 0 be a boundary extremal domain for the first eigenvalue of the Laplace-Beltrami operator, with 0 Dirichlet boundary condition on ∂Ω 0 ∩M and 0 Neumann boundary condition on ∂Ω 0 ∩ ∂M . Using Proposition 2.5, we obtain
∂Ω0∩M (g(∇u 0 , ν 0 )) 2 g(V, ν 0 ) dvol g = 0
for all field V preserving the volume of the domain, i.e. such that
∂Ω0∩M g(V, ν 0 ) dvol g = 0.(11)
This means that g(∇u_0, ν_0) is constant. Conversely, if g(∇u_0, ν_0) is constant, then by the previous proposition Ω_0 is extremal, since every volume preserving deformation field V satisfies (11).
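Spelled out, the converse implication is a one-line computation: if g(∇u_0, ν_0) = c on ∂Ω_0 ∩ M, then for every volume preserving deformation field V, (9) and (11) give

```latex
\frac{d\lambda_t}{dt}\Big|_{t=0}
  = -\int_{\partial\Omega_0\cap M} c^2\, g(V,\nu_0)\, d\mathrm{vol}_g
  = -c^2 \int_{\partial\Omega_0\cap M} g(V,\nu_0)\, d\mathrm{vol}_g
  = 0 .
```

For the direct implication, the vanishing of (9) against every field whose normal component has zero mean forces (g(∇u_0, ν_0))² to be orthogonal to all mean-zero functions on ∂Ω_0 ∩ M, hence constant.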
It remains to investigate the angle between ∂Ω_0 ∩ M and ∂Ω_0 ∩ ∂M when (10) is satisfied. Assume that y ↦ ω(y) is not constantly equal to π/2; then there exists a neighborhood Y ⊂ Γ = ∂Ω ∩ M ∩ ∂M where ω is different from π/2. We work locally around a point y_0 ∈ Y. We now need a more explicit version of the asymptotic development written in the proof of Proposition 2.2. To that end, we use the results of [10,11,9], which assert that, since the principal part of our operator is the Euclidean Laplacian, we have, up to a smooth change of coordinates, that u_0(r, θ, y) can be written u_reg(r, θ, y) + u_sing(r, θ, y) with:

if ω(y) ∈ (0, π/2) in Y, then u_sing = 0 and u_reg ∈ H^{s+2}(Ω) is flat at order 2, which means u_reg = O(r²) and ∇u_reg = O(r);

if ω(y) ∈ (π/2, π) in Y, then u_sing(r, θ, y) = c(y) r^{π/(2ω(y))} cos(π θ / (2ω(y))), and u_reg is flatter than u_sing, meaning u_reg = o(r) and ∇u_reg = o(1)

(note that here, with the terminology of [8,9], there are no crossing singularities, since ω(y) ≠ π/2 on Y and we are only interested in the first term of the asymptotic). Therefore in the first case g(∇u_0, ν_0) = O(r), and in the second case g(∇u_0, ν_0) behaves like −(π/(2ω(y))) c(y) r^{π/(2ω(y))−1} sin(π θ / (2ω(y))); in both cases it cannot be a nonzero constant on ∂Ω ∩ M = {θ = ω(y)}. This is a contradiction (recall that by the maximum principle the constant g(∇u_0, ν_0) cannot be zero), and one concludes that ω(y) = π/2 everywhere on Γ.
Proof of Proposition 2.5: Let Ω_0 be a boundary domain, connected and of finite volume. We denote by ξ_t = ξ(t, ·) the flow associated to V and by ν_t the outward unit normal vector field to ∂Ω_t. We first recall that, since Ω_t is connected, for t small enough the first eigenvalue λ_t of Ω_t with mixed boundary condition is simple, so one can define t ↦ u_t ∈ H^1_0(Ω_t), the one-parameter family of first eigenfunctions of the Laplace-Beltrami operator, normalized to be positive and to have L²(Ω_t)-norm equal to 1. As usual in the computation of a shape derivative, we consider ũ_t = u_t ∘ ξ(t, ·) ∈ H^1_0(Ω_0).

Step 1: there exists t_0 > 0 such that t ∈ (−t_0, t_0) ↦ (ũ_t, λ_t) ∈ H^1_0(Ω_0) × R is C^∞.
The variational formulation of the equation satisfied by u t is:
∫_{Ω_t} g(∇u_t, ∇ϕ) = λ_t ∫_{Ω_t} u_t ϕ,    ∀ϕ ∈ H^1_0(Ω_t).
We are going to transport that formulation on the fixed domain Ω 0 , in order to obtain the variational formulation satisfied by u t ∈ H 1 0 (Ω). To that aim, we use the following equality, which relies on the fact that
ξ t (∂Ω 0 ∩ ∂M ) = ∂Ω t ∩ ∂M
and is a consequence of the hypothesis ξ t (∂M ) ⊂ ∂M :
H 1 0 (Ω 0 ) = {ϕ • ξ t , ϕ ∈ H 1 0 (Ω t )}.
With this equality and a change of variables (see for example [18] for details), we obtain:

∫_{Ω_0} g(A(t) ∇ũ_t, ∇ϕ) = λ_t ∫_{Ω_0} ũ_t ϕ J_t,    ∀ϕ ∈ H^1_0(Ω_0),

where J_t = det(Dξ_t) and A(t) := J_t Dξ_t^{−1} (Dξ_t^{−1})^T. We then define

G : (−t_0, t_0) × H^1_0(Ω_0) × R → H^1_0(Ω_0)′ × R,
(t, v, µ) ↦ ( −div_g(A(t)∇v) − µ v J_t , ∫_{Ω_0} v² J_t − 1 ),

where H^1_0(Ω_0)′ is the dual space of H^1_0(Ω_0), and −div_g(A(t)∇v) has to be understood in the weak sense:

⟨−div_g(A(t)∇v), ϕ⟩_{H^1_0(Ω_0)′ × H^1_0(Ω_0)} = ∫_{Ω_0} g(A(t)∇v, ∇ϕ).
It is easy to check that G is C^∞; see again [18] for more details. In order to apply the implicit function theorem to the equation G(t, ũ_t, λ_t) = 0, we focus on the differential of G at (0, u_0, λ_0) with respect to the couple (v, µ):

∂_{(v,µ)} G(0, u_0, λ_0)(w, ν) = ( −∆_g w − ν u_0 − λ_0 w , 2 ∫_{Ω_0} u_0 w ),    ∀(w, ν) ∈ H^1_0(Ω_0) × R.

By the Banach isomorphism theorem, in order to prove that this differential is an isomorphism, it is enough to prove that given (f, Λ) ∈ H^1_0(Ω_0)′ × R, the equation

( −∆_g w − ν u_0 − λ_0 w , 2 ∫_{Ω_0} u_0 w ) = (f, Λ)

admits a unique solution (w, ν) ∈ H^1_0(Ω_0) × R. The operator −∆_g − λ_0 Id has a one-dimensional kernel, spanned by u_0. Therefore f + ν u_0 is in the range of −∆_g − λ_0 Id if and only if it is orthogonal to u_0 (in the sense of the duality H^1_0(Ω_0)′ × H^1_0(Ω_0)). This leads to the unique value ν = −⟨f, u_0⟩. Moreover, one knows that the solutions w of (−∆_g − λ_0 Id) w = f + ν u_0 form a one-dimensional affine space v_0 + Span(u_0), so w = v_0 + α u_0 for some α ∈ R. The equation 2 ∫_{Ω_0} u_0 w = Λ uniquely determines α and hence w. We conclude that ∂_{(v,µ)} G(0, u_0, λ_0) is an isomorphism, and therefore t ↦ (ũ_t, λ_t) is C^∞.
Now and for the rest of the proof, Ω 0 is assumed to be a boundary edge domain whose contact angle is always strictly between 0 and π.
Step 2: Generalized Green formula. We prove in this step that, given ε ∈ (0, 1/2) and Ω a Lipschitz domain, denoting

H^s(∆_g, Ω) := { ϕ ∈ H^s(Ω) : ∆_g ϕ ∈ L²(Ω) } for s ∈ (1/2, 3/2),

we have, for all u ∈ H^{3/2−ε}(∆_g, Ω) and all v ∈ H^{1/2+ε}(∆_g, Ω),

∫_Ω (v ∆_g u − u ∆_g v) = ⟨g(∇u, ν_0), v⟩_{H^{−ε}(∂Ω) × H^{ε}(∂Ω)} − ⟨u, g(∇v, ν_0)⟩_{H^{1−ε}(∂Ω) × H^{−1+ε}(∂Ω)}.    (12)

When u, v are smooth, this equality is just the classical Green formula. The above generalization is easily obtained by a density argument, using the following results from [6, Lemmas 2 and 3]:

H^{3/2−ε}(∆_g, Ω) = { ϕ ∈ H^1(Ω) : ∆_g ϕ ∈ L²(Ω) and ϕ|_{∂Ω} ∈ H^{1−ε}(∂Ω) },

H^{1/2+ε}(∆_g, Ω) = { ϕ ∈ H^ε(Ω) : ∆_g ϕ ∈ L²(Ω) and g(∇ϕ, ν_0)|_{∂Ω} ∈ H^{−1+ε}(∂Ω) },    (13)

and the fact that C^∞(Ω̄) is dense in H^{3/2−ε}(∆_g, Ω).
Step 3: Computation of (d/dt) ũ_t. From u_t = ũ_t ∘ ξ_t^{−1}, we obtain that u′ = (d/dt)|_{t=0} u_t is well defined in Ω_0 and that

u′ = ũ′ − g(∇u, V),    (14)

where ũ′ = (d/dt)|_{t=0} ũ_t ∈ H^1_0(Ω_0) is well defined from Step 1. Using that u ∈ H^{3/2+ε}(Ω_0) and that ũ′ ∈ H^1(Ω_0), we know from (14) that u′ ∈ H^{1/2+ε}(Ω_0). We also know that, the domain Ω_0 being piecewise C^∞, the functions u and u′ are locally C^∞ on Ω̄_0 \ Γ. With these regularities, we can compute the equation and the boundary conditions satisfied by u′: first, we differentiate with respect to t the identity
∆ g u t + λ t u t = 0.(15)
and evaluate the result at t = 0 to obtain
∆ g u ′ 0 + λ 0 u ′ 0 = −λ ′ 0 u 0 , in Ω 0 .(16)
Moreover, using again (14), we obtain that
u ′ = −g(∇u, V ) on ∂Ω ∩M .
and since u 0 = 0 on ∂Ω 0 ∩M , only the normal component of V plays a rôle in the previous formula. Therefore, we have, again since ξ(t, ∂Ω 0 ∩ ∂M ) = ∂Ω t ∩ ∂M :
u ′ = − g(∇u 0 , ν 0 ) g(V, ν 0 ), on ∂Ω 0 ∩M(17)
About the Neumann part of the boundary, we have:
for all p ∈ ∂Ω 0 ∩ ∂M, g(∇u t (ξ(t, p)), ν t ) = 0.
Since V is tangential on ∂M , using the normal geodesic coordinates we have ν t = −∂ x 0 on ∂Ω t ∩ ∂M , and in particular it does not depend on t and
g(∇u t (ξ(t, p)), ν t ) = −∂ x 0 u t (ξ(t, p)) = 0.(18)
So, differentiating (18) with respect to t and evaluating the result at t = 0 we obtain
0 = −∂ x 0 ∂ t u 0 − g(∇∂ x 0 u 0 , V ) = −∂ x 0 ∂ t u 0 = g(∇∂ t u 0 , ν 0 )(19)
on ∂Ω 0 ∩ ∂M , where we used the facts that ∂ x 0 u 0 = 0 on ∂Ω 0 ∩ ∂M and that g(V, ν 0 ) = 0 in ∂Ω 0 ∩ ∂M .
Step 4: Computation of (d/dt)|_{t=0} λ_t. From (16), multiplying by u and integrating over Ω, we obtain, using the generalized Green formula together with the regularity we have proven on u and u′:

λ′_0 = ∫_Ω (−∆_g u′ − λ_0 u′) u = ∫_Ω (−∆_g u − λ_0 u) u′ + ⟨u′, g(∇u, ν_0)⟩_{H^{−ε}(∂Ω)×H^{ε}(∂Ω)} − ⟨u, g(∇u′, ν_0)⟩_{H^{1−ε}(∂Ω)×H^{−1+ε}(∂Ω)}.

Since u = 0 on ∂Ω ∩ M and g(∇u′, ν_0) = 0 on ∂Ω ∩ ∂M, we have ⟨u, g(∇u′, ν_0)⟩_{H^{1−ε}(∂Ω)×H^{−1+ε}(∂Ω)} = 0. Finally, since u and u′ are smooth enough so that g(∇u, ν_0)|_{∂Ω}, u′|_{∂Ω} ∈ L²(∂Ω), we can write

⟨u′, g(∇u, ν_0)⟩_{H^{−ε}(∂Ω)×H^{ε}(∂Ω)} = ∫_{∂Ω} u′ g(∇u, ν_0) = −∫_{∂Ω∩M} (g(∇u, ν_0))² g(V, ν_0),

and we finally obtain

λ′ = −∫_{∂Ω∩M} (g(∇u, ν_0))² g(V, ν_0).
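As a sanity check of this Hadamard-type formula (a one-dimensional toy model, not from the paper): on the interval (0, L) with Dirichlet conditions, moving the endpoint L with unit speed, the formula predicts dλ/dL = −(u′(L))² for the L²-normalized first eigenfunction, which matches λ(L) = π²/L² directly:

```python
import math

def lam(L):
    # First Dirichlet eigenvalue of the interval (0, L)
    return (math.pi / L) ** 2

L, h = 1.7, 1e-6
num_deriv = (lam(L + h) - lam(L - h)) / (2 * h)

# Hadamard formula: dlambda/dL = -(u'(L))^2 with the L^2-normalized
# eigenfunction u(x) = sqrt(2/L) sin(pi x / L), so
# (u'(L))^2 = (2/L) * (pi/L)^2 = 2 pi^2 / L^3.
hadamard = -2 * math.pi ** 2 / L ** 3

assert abs(num_deriv - hadamard) < 1e-5
```

Here the "boundary integral" degenerates to a single point, and the normal velocity g(V, ν_0) is 1 at the moving endpoint and 0 at the fixed one.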
Extremal domains versus the isoperimetric problem
As we said, extremal domains are the critical points of the functional Ω → λ Ω under a volume constraint Vol g Ω = κ. The problem of finding extremal domains for the first eigenvalue of the Laplace-Beltrami operator is considered, by the mathematical community, very close to the isoperimetric problem.
Given a compact Riemannian manifold M and a positive number κ < Vol g (M ), where Vol g (M ) denotes the volume of the manifold M , the isoperimetric problem consists in studying, among the compact hypersurfaces Σ ⊂ M enclosing a region Ω of volume κ, those which minimize the area functional Ω → Vol g (∂Ω ∩M ) (note that we do not take in account the area of ∂Ω coming from the boundary of M ). The solutions of the isoperimetric problem are (where they are smooth enough) constant mean curvature hypersurfaces and intersect the boundary of the manifold orthogonally (see for example [24]). In fact, constant mean curvature hypersurfaces intersecting ∂M orthogonally are exactly the critical points of the area functional Ω → Vol g (∂Ω ∩M ) under a volume constraint Vol g Ω = κ.
In the case of a manifold M without boundary, it is well known that the determination of the isoperimetric profile is related to the Faber-Krähn profile, obtained by minimizing the first eigenvalue λ_Ω among domains of prescribed volume (see [4]). For this reason it is natural to expect that the solutions to the isoperimetric problem for small volumes are close, in some sense, to the solutions of the Faber-Krähn minimization problem. And such closeness can be expected also for the corresponding critical points.
The results known up to now about extremal domains underline such expectations. In the case of a manifold without boundary, the constructions of extremal domains in [21,12] are the parallel of the constructions of constant mean curvature topological spheres in a Riemannian manifold M done in [28,22]. And in the case of a manifold with boundary, our construction is the parallel of the constructions of constant mean curvature topological half-spheres in a Riemannian manifold M done in [15] for dimension 3.
Nevertheless, Proposition 2.5 and Corollary 2.6 show a very interesting difference between extremal domains and critical points of the area functional, based on the following: Remark 2.7. A significant fact contained in the statement of Proposition 2.5 is that the shape derivative of the first eigenvalue of the Laplace-Beltrami operator with mixed boundary condition in the boundary edge domain Ω_0 does not contain a singular term supported by the "corner part" of the boundary ∂Ω_0, as is the case for the area functional, see (21).
In order to understand the consequences of this remark, let us compare the Euler-Lagrange equations of the two problems: criticality for λ_1 reads

dλ_t/dt |_{t=0} = −∫_{∂Ω_0∩M} (g(∇u_0, ν_0))² g(V, ν_0) dvol_g = 0,    (20)

whereas for the area functional we have

(d/dt) Vol_g(∂Ω_t ∩ M)|_{t=0} = ∫_{∂Ω_0∩M} H_0 g(V, ν_0) + ∫_Γ g(V, τ_0) = 0,    (21)

where (Ω_t)_t is a volume preserving deformation of Ω_0 given by the vector field V, H_0 is the mean curvature of ∂Ω_0 ∩ M, ν_0 is the normal vector on ∂Ω_0 ∩ M, and τ_0 is the normal vector to Γ tangent to ∂Ω_0 ∩ M. For the area functional, the consequence of (21) is that in order to be critical Ω_0 must satisfy, denoting by ν_1 the normal vector to Γ tangent to ∂M:

H_0 ≡ constant, and g(τ_0, ν_1) = 0, or equivalently ω ≡ π/2 on Γ,

the first condition being obtained with vector fields V supported in M, whereas the second condition is obtained thanks to vector fields V supported in a neighborhood of Γ. For λ_1, using only vector fields V supported in M, we obtain as a consequence of (20) that in order to be critical Ω_0 must satisfy:
g(∇u 0 , ν 0 ) = constant on ∂Ω 0 ∩M .(22)
The fact that the contact angle is π/2 on Γ is already contained in the above equation (see Corollary 2.6), and therefore domains that are critical for λ_1 in the sense of Definition 1.2 (i.e. for any vector field V tangent to ∂M) are the same as domains that are critical for λ_1 restricted to vector fields supported in M, which is not the case for the area functional.

In other words, one can easily build surfaces that have constant mean curvature but intersect the boundary ∂M with an angle different from π/2 (and therefore are not extremal sets for the relative perimeter under volume constraint), whereas every set satisfying (22) intersects the boundary ∂M with an angle equal to π/2. These properties rely on the fact that the operator given by the mean curvature is local, while the Dirichlet-to-Neumann operator is nonlocal.
3 Analysis of the problem

3.1 Notations and formulation of the problem

Euclidean notations. We define the following notations:
R n+1 + = {x = (x 0 , x ′ ) = (x 0 , x 1 , . . . , x n ) ∈ R n+1 : x 0 > 0}
will be the upper Euclidean half-space,
B + 1 = B 1 ∩ R n+1 +
will be the upper Euclidean unit half-ball and
S n + = {x ∈ S n : x 0 > 0}
will be the upper Euclidean unit hemisphere. Given a continuous function f : S n + −→ (0, ∞), we also denote
B + f := x ∈ R n+1 + : 0 < |x| < f (x/|x|) .
Riemannian notations in (M, g). Let p be a point of ∂M. We denote by E_1, ..., E_n the orthonormal basis of T_p ∂M associated to the geodesic normal coordinates x_1, ..., x_n in ∂M around p. If the point q ∈ ∂M has coordinates x′ ∈ R^n, we set
Θ(x ′ ) := n i=1 x i E i ∈ T p ∂M .(23)
The point q ∈ ∂M whose geodesic coordinates are given by x ′ is
q = Φ(x ′ ) = Exp ∂M p (Θ(x ′ )) .
Given a continuous function f : S^n_+ → (0, ∞) whose L^∞ norm is small (say smaller than the distance from p to its cut locus), we define

B^+_{g,f}(p) := { Exp^M_{Φ(x′)}(x_0 N(Φ(x′))) : x ∈ R^{n+1}_+, 0 < |x| < f(x/|x|) }.
The subscript g is meant to remind the reader that this definition depends on the metric.
Formulation of the problem. Our aim is to show that, for all ε > 0 small enough, we can find a point p ε ∈ ∂M and a (smooth) function v = v(p ε , ε) : S n + −→ R with 0 Neumann condition at the boundary of S n + such that
Vol(B^+_{g,ε(1+v)}(p)) = ε^{n+1} Vol(B^+_1)    (24)
and the over-determined elliptic problem
∆_g φ + λ φ = 0 in B^+_{g,ε(1+v)}(p),    φ = 0 on ∂B^+_{g,ε(1+v)}(p) ∩ M,    g(∇φ, ν) = 0 on ∂B^+_{g,ε(1+v)}(p) ∩ ∂M,    g(∇φ, ν) = constant on ∂B^+_{g,ε(1+v)}(p) ∩ M    (25)
has a nontrivial positive solution, where ν is the normal vector on ∂B + g,ε(1+v) (p). Notice that the 0 Neumann boundary condition on v is justified by Corollary 2.6. Indeed, the half ball B + g,ε (p) intersects ∂M orthogonally, and then, since an extremal domain also intersects ∂M orthogonally, the deformation v should satisfy a fortiori a 0 Neumann boundary condition.
Dilation of the metric
We follow the strategy of [21], paying attention to the fact that we are working in a more general situation because our domains are boundary edge domains. Our first aim is to give a sense to the problem when ε = 0. Observe that, considering the dilated metricḡ := ε −2 g, Problem (24)-(25) is equivalent to finding a point p ∈ ∂M and a function v : S n + −→ R with 0 Neumann condition at the boundary of S n + such that
Vol_ḡ(B^+_{ḡ,1+v}(p)) = Vol(B^+_1)    (26)
and for which the over-determined elliptic problem
∆_ḡ φ̄ + λ̄ φ̄ = 0 in B^+_{ḡ,1+v}(p),    φ̄ = 0 on ∂B^+_{ḡ,1+v}(p) ∩ M,    ḡ(∇φ̄, ν̄) = 0 on ∂B^+_{ḡ,1+v}(p) ∩ ∂M,    ḡ(∇φ̄, ν̄) = constant on ∂B^+_{ḡ,1+v}(p) ∩ M    (27)
has a nontrivial positive solution, where ν̄ is the normal vector on ∂B^+_{ḡ,1+v}(p). The relation between the solutions of the two problems is simply given by φ = ε^{−n/2} φ̄ and λ = ε^{−2} λ̄.
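For the record, the eigenvalue relation follows from the behavior of the Laplace-Beltrami operator under the constant conformal rescaling ḡ = ε^{−2} g (a standard computation, spelled out here for convenience):

```latex
\bar g = \varepsilon^{-2} g
  \;\Longrightarrow\;
  \Delta_{\bar g} = \varepsilon^{2}\,\Delta_{g},
\qquad\text{so}\qquad
\Delta_{\bar g}\bar\phi + \bar\lambda\,\bar\phi = 0
  \;\Longleftrightarrow\;
  \Delta_{g}\bar\phi + \varepsilon^{-2}\bar\lambda\,\bar\phi = 0 .
```

Hence λ = ε^{−2} λ̄; the amplitude factor relating φ and φ̄ is a matter of matching the L² normalizations, since eigenfunctions are only defined up to a multiplicative constant.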
Let us define the coordinates y = (y_0, y′) = (y_0, y_1, ..., y_n) ∈ B^+_1 by

Ψ̄(y) := Exp^M_{Φ̄(y′)}(ε y_0 N̄(Φ̄(y′))), where Φ̄(y′) := Exp^{∂M}_p(ε Σ_{i=1}^n y_i E_i),

for p ∈ ∂M, and N̄ is the unit normal vector field on ∂M for the metric ḡ pointing into M. Using Proposition 5.1 of the Appendix, in the new coordinates y the metric ḡ can be written as
ḡ_00 = 1,    ḡ_0j = 0,

ḡ_ij = δ_ij + 2 ε g(∇_{E_i} N, E_j) y_0 + ε² R_{0i0j} (y_0)² + ε² g(∇_{E_i} N, ∇_{E_j} N) (y_0)² + 2 ε² Σ_k R_{k0ij} y_k y_0 + (1/3) ε² Σ_{k,ℓ} R̃_{ikjℓ} y_k y_ℓ + O(ε³)    (28)

for i, j, k, ℓ = 1, ..., n, where R and R̃ are respectively the curvature tensors of M and ∂M, and

R_{0i0j} = g(R(N, E_i) N, E_j),    R_{k0ij} = g(R(E_k, N) E_i, E_j),    R̃_{ikjℓ} = g̃(R̃(E_i, E_k) E_j, E_ℓ).
In the coordinates y and the metricḡ, the problem can be continuously extended for ε = 0 and in this case it becomes
∆φ̄ + λ̄ φ̄ = 0 in B^+_{1+v},    φ̄ = 0 on ∂B^+_{1+v} ∩ R^{n+1}_+,    ⟨∇φ̄, ν̄⟩ = 0 on ∂B^+_{1+v} ∩ ∂R^{n+1}_+    (29)
where ∆ denotes the usual Laplacian in R n+1 and ·, · the usual scalar product in R n+1 , with the normalization
∫_{B^+_{1+v}} φ̄² = 1    (30)
and the volume constraint Vol(B + 1+v ) = Vol(B + 1 ). In particular, when v = 0 we have
∆φ_1 + λ_1 φ_1 = 0 in B^+_1,    φ_1 = 0 on ∂B^+_1 ∩ R^{n+1}_+,    ⟨∇φ_1, ν⟩ = 0 on ∂B^+_1 ∩ ∂R^{n+1}_+    (31)
where λ_1 is the first eigenvalue of the unit Euclidean ball and φ_1 is the restriction to B^+_1 of the solution φ̃_1 to

∆φ̃_1 + λ_1 φ̃_1 = 0 in B_1,    φ̃_1 = 0 on ∂B_1,

chosen to be positive and to have L²(B_1) norm equal to √2 (so that the restriction φ_1 satisfies the normalization (30)).
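For n + 1 = 2 one can check λ_1 numerically: by symmetrization, the mixed eigenvalue of the half-disk B^+_1 equals the first Dirichlet eigenvalue of the unit disk, namely j_{0,1}², the square of the first zero of the Bessel function J_0. A sketch using only the power series of J_0 (an illustration, not from the paper):

```python
import math

def J0(x):
    # Bessel J0 via its power series: sum_k (-1)^k (x/2)^{2k} / (k!)^2
    s, term = 1.0, 1.0
    for k in range(1, 40):
        term *= -(x * x / 4.0) / (k * k)
        s += term
    return s

# Locate the first zero j_{0,1} ~ 2.4048 of J0 by bisection on [2, 3],
# where J0 changes sign.
a, b = 2.0, 3.0
for _ in range(60):
    m = 0.5 * (a + b)
    if J0(a) * J0(m) <= 0:
        b = m
    else:
        a = m
lam1 = (0.5 * (a + b)) ** 2   # first Dirichlet eigenvalue of the unit disk

assert abs(lam1 - 5.7832) < 1e-3
```

In higher dimensions the same role is played by the first zero of the Bessel function of order (n − 1)/2.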
Volume constraint and differentiability with respect to (ε,v)
In this section, we deal with the volume condition (which leads to replace the variable v byv subject to the condition of having a zero mean), and prove the differentiability of (λ,φ) with respect to (ε,v). The result is similar to Proposition 3.2 in [21], and we use the same strategy, though we have to pay attention to the singularities at the boundary of our domain. Let us define the space
C^{2,α}_{m,NC}(S^n_+) := { v ∈ C^{2,α}(S^n_+) : ∫_{S^n_+} v = 0, ∂_N v = 0 on ∂S^n_+ },
where ∂ N v = 0 denotes the 0 Neumann condition at the boundary of S n + .
Proposition 3.1. Given a point p ∈ ∂M, there exists ε_0 > 0, locally uniform in p, such that for all ε ∈ (0, ε_0) and all functions v̄ ∈ C^{2,α}_{m,NC}(S^n_+) with ‖v̄‖_{C^{2,α}(S^n_+)} ≤ ε_0, there exist a unique positive function φ̄ = φ̄(p, ε, v̄) ∈ C^{2,α}(B^+_{ḡ,1+v}(p)), a constant λ̄ = λ̄(p, ε, v̄) ∈ R and a constant v_0 = v_0(p, ε, v̄) ∈ R such that
Volḡ(B + g,1+v (p)) = Vol(B + 1 )(32)
where v := v 0 +v andφ is a solution to the problem
∆ḡφ +λφ = 0 in B + g,1+v (p) φ = 0 on ∂B + g,1+v (p) ∩M g(∇φ,ν) = 0 on ∂B + g,1+v (p) ∩ ∂M(33)
which is normalized by
B + g,1+v (p)φ 2 dvolḡ = 1 .(34)
In additionφ,λ and v 0 depend smoothly on the functionv and the parameter ε, can be extended smoothly to ε = 0 by (29), and in particular (φ,λ, v 0 ) = (φ 1 , λ 1 , 0) when (ε,v) = (0, 0).
Proof. The proof of this result is similar to the proof of Proposition 3.2 in [21], and is based on the implicit function theorem. Therefore we only describe the differences from [21], which are the choice of coordinates and the regularity theory for the Laplace-Beltrami operator in domains with singularities.
For the choice of coordinates we use the following: given (v₀, v̄) ∈ ℝ × C^{2,α}_{m,NC}(Sⁿ₊) and v = v₀ + v̄, we consider the parameterization of B⁺_{g,1+v}(p) = B⁺_{g,ε(1+v)}(p) given by

Ψ̂(y) := Exp^M_{Φ̂(y′)}( (1 + v₀ + χ(y) v̄(y/|y|)) y⁰ N(Φ̂(y′)) ),  where  Φ̂(y′) := Exp^{∂M}_p( (1 + v₀ + χ(y) v̄(y/|y|)) Σ_{i=1}^{n} yⁱ Eᵢ ).
Here y = (y⁰, y′) ∈ B⁺₁, and χ is a cutoff function identically equal to 0 when |y| ≤ 1/2 and identically equal to 1 when |y| ≥ 3/4, introduced to avoid the singularity at the origin of the polar coordinates. In these coordinates the metric

ĝ := Ψ̂* ḡ   (35)

can be written as

ĝ = (1 + v₀)² Σ_{i,j} (δ_ij + C_ij) dyⁱ dyʲ,
where the coefficients C_ij = C^{ε,v}_ij ∈ C^{1,α}(B⁺₁) are functions of y depending on ε, v = v₀ + v̄ and the first partial derivatives of v. It is important here to notice that

(ε, v₀, v̄) ↦ C^{ε,v}_ij ∈ C^{1,α}(B⁺₁)

are smooth maps, as in [21]. Now for all ψ ∈ C^{2,α}(B⁺₁) such that ∫_{B⁺₁} ψ φ₁ = 0 we define

N(ε, v̄, ψ, v₀) := ( ∆ψ + λ₁ψ + (∆_ĝ − ∆ + μ)(φ₁ + ψ),  Vol_ĝ(B⁺₁) − Vol(B⁺₁) ),

where μ is given by

μ = − ∫_{B⁺₁} φ₁ (∆_ĝ − ∆)(φ₁ + ψ),
so that the first entry of N is L²(B⁺₁)-orthogonal to φ₁ (for the Euclidean metric). Thanks to the choice of coordinates, the mapping N is a smooth map from a neighborhood of (0, 0, 0, 0) in [0, ∞) × C^{2,α}_{m,NC}(Sⁿ₊) × C^{2,α}_{⊥,0}(B⁺₁) × ℝ into a neighborhood of (0, 0) in C^{0,α}_⊥(B⁺₁) × ℝ. Here the subscript ⊥ indicates that the functions in the corresponding space are L²(B⁺₁)-orthogonal to φ₁, and the subscript 0 indicates that the functions satisfy the mixed condition at the boundary of B⁺₁. The differential of N with respect to (ψ, v₀), computed at (0, 0, 0, 0) and given by

∂_{(ψ,v₀)} N(0, 0, 0, 0) = ( ∆ + λ₁,  n Vol(B⁺₁) ),

is invertible from C^{2,α}_{⊥,0}(B⁺₁) × ℝ into C^{0,α}_⊥(B⁺₁) × ℝ by Proposition 2.4.
Then the implicit function theorem applies as in [21] and completes the proof of the result.
Strategy for the proof of Theorem 1.3
We define the operator
F(p, ε, v̄) = ḡ(∇φ̄, ν̄)|_{∂B⁺_{g,1+v}(p) ∩ M̊} − (1/Vol_ḡ(∂B⁺_{g,1+v}(p) ∩ M̊)) ∫_{∂B⁺_{g,1+v}(p) ∩ M̊} ḡ(∇φ̄, ν̄) dvol_ḡ,

where ν̄ denotes the unit normal vector field to ∂B⁺_{g,1+v}(p) ∩ M̊ and (φ̄, v₀) is the solution of (32)-(33)-(34). Recall that v = v₀ + v̄. The operator F is locally well defined in a neighborhood of (p, 0, 0) in ∂M × [0, ∞) × C^{2,α}_{m,NC}(Sⁿ₊), and after canonical identification of ∂B⁺_{g,1+v}(p) ∩ M̊ with Sⁿ₊ we can consider that it takes its values in C^{1,α}(Sⁿ₊). Moreover, it is easy to see that the zero mean condition is preserved, and then we will write that F takes its values in C^{1,α}_m(Sⁿ₊). Our aim is to find (p, ε, v̄) such that F(p, ε, v̄) = 0. Observe that, under this condition, φ̄ = φ̄(ε, v̄) will be the solution to the problem (27).
Following the proof of the previous result, we have the alternative expression for F :
F(p, ε, v̄) = ĝ(∇φ̄, ν̂)|_{∂B⁺₁ ∩ ℝ^{n+1}₊} − (1/Vol_ĝ(∂B⁺₁ ∩ ℝ^{n+1}₊)) ∫_{∂B⁺₁ ∩ ℝ^{n+1}₊} ĝ(∇φ̄, ν̂) dvol_ĝ,

where this time ν̂ is the unit normal vector field to ∂B⁺₁ with respect to the metric ĝ defined by (35). Our aim is to solve the equation F(p, ε, v̄) = 0 for some (p, ε, v̄). The first question we should consider is the following: if we fix a point p ∈ ∂M, can we find for all ε small enough a function v̄ = v̄(ε) such that

F(p, ε, v̄(ε)) = 0 ?
The answer will be negative, because we will see that the kernel K of

∂_v̄F(p, 0, 0) : C^{2,α}_{m,NC}(Sⁿ₊) → C^{1,α}_m(Sⁿ₊)

is nontrivial. Nevertheless, we will obtain a characterization of K, proving that it is given by the space of linear functions (restricted to the half-sphere) depending only on the coordinates y¹, ..., yⁿ, i.e. functions

Sⁿ₊ → ℝ,  y ↦ ⟨a, y⟩,

for some a = (a⁰, a′) ∈ ℝ^{n+1} with a⁰ = 0. Moreover, we will prove that ∂_v̄F(p, 0, 0) is an isomorphism from K⊥ to the image of ∂_v̄F(p, 0, 0), and then the implicit function theorem will give the following result: for all ε small enough there exist an element k(ε) ∈ K and a function v̄(ε) such that

F(p, ε, v̄(ε)) = k(ε).
Clearly, since we fixed the point p, the function v̄ and the element k depend also on p, and in fact we have to write F(p, ε, v̄(p, ε)) = k(p, ε).
In the last section we will show that it is possible to apply the implicit function theorem to the equation k(p, ε) = 0, obtaining that for all ε small enough there exists a point p_ε such that k(p_ε, ε) = 0. This will complete the proof of the result.
4 Solving the problem

4.1 Computation of the linearization of F with respect to v̄ at (p, ε, v̄) = (p, 0, 0)

In Section 3.3 we established the existence of a unique positive function φ̄ ∈ C^{2,α}(B⁺_{1+v}) (close to φ₁), a constant λ̄ ∈ ℝ (close to λ₁) and a constant v₀ ∈ ℝ (close to 0), solutions to (32)-(33)-(34). Recall that λ₁ is the first eigenvalue of −∆ in the half-ball B⁺₁ with 0 mixed boundary condition and φ₁ is the associated eigenfunction, normalized to be positive and to have L²(B⁺₁) norm equal to 1. For all v̄ ∈ C^{2,α}_{m,NC}(Sⁿ₊) let ψ be the (unique) solution of

∆ψ + λ₁ψ = 0 in B⁺₁,  ψ = −∂_rφ₁ v̄ on ∂B⁺₁ ∩ ℝ^{n+1}₊,  ⟨∇ψ, ν⟩ = 0 on ∂B⁺₁ ∩ ∂ℝ^{n+1}₊   (36)

which is L²(B⁺₁)-orthogonal to φ₁. We define

L₀(v̄) := (∂_rψ + ∂²_rφ₁ v̄)|_{∂B⁺₁ ∩ ℝ^{n+1}₊}.   (37)

Clearly we have L₀ : C^{2,α}_{m,NC}(Sⁿ₊) → C^{1,α}_m(Sⁿ₊).

Proposition 4.1. The linearization of the operator F with respect to v̄ computed at (p, 0, 0), i.e. ∂_v̄F(p, 0, 0), is equal to L₀.

Proof. When ε = 0 we have already seen that ḡ in the coordinates y is the Euclidean metric. If v ∈ C^{2,α}_m(Sⁿ) we can define the operator

F̃(v) = ⟨∇φ̂, ν̂⟩|_{∂B_{1+v}} − (1/Vol(∂B_{1+v})) ∫_{∂B_{1+v}} ⟨∇φ̂, ν̂⟩,
where ν̂ denotes the unit normal vector field to ∂B_{1+v} and φ̂ is the solution, normalized by ∫_{B_{1+v}} φ̂² = 2, of

∆φ̂ + λ̂φ̂ = 0 in B_{1+v},  φ̂ = 0 on ∂B_{1+v}.   (38)
After identification of ∂B_{1+v} with Sⁿ we can consider the operator F̃ as well defined from C^{2,α}_m(Sⁿ) into C^{1,α}_m(Sⁿ). In the proof of Proposition 4.3 in [21] it is proved that the linearization of F̃ with respect to v at v = 0 is given by the operator

L̃₀ : C^{2,α}_m(Sⁿ) → C^{1,α}_m(Sⁿ),  v ↦ (∂_rψ̃ + ∂²_rφ̃₁ v)|_{∂B₁},   (39)

where φ̃₁ is the first eigenfunction of −∆ in B₁ with 0 Dirichlet boundary condition, normalized by ∫_{B₁} φ̃₁² = 2, and ψ̃ is the (unique) solution of

∆ψ̃ + λ₁ψ̃ = 0 in B₁,  ψ̃ = −∂_rφ̃₁ v on ∂B₁,   (40)

which is L²(B₁)-orthogonal to φ̃₁. Notice that φ₁ and ψ are then the restrictions of φ̃₁ and ψ̃ to the half-ball B⁺₁. Let w be a function in C^{2,α}_{m,NC}(Sⁿ₊). We extend w to a function w̃ over all of Sⁿ by even reflection: for (y⁰, y¹, ..., yⁿ) ∈ Sⁿ₊ we set

w̃(−y⁰, y¹, ..., yⁿ) = w̃(y⁰, y¹, ..., yⁿ) = w(y⁰, y¹, ..., yⁿ).

Observe that w̃ ∈ C^{2,α}(Sⁿ) because w satisfies the Neumann condition at the boundary of Sⁿ₊, and its mean is 0 because the means over Sⁿ₊ and over the complement of Sⁿ₊ are both 0. We conclude that w̃ ∈ C^{2,α}_{m,Sym}(Sⁿ), where the subscript Sym means that the function is symmetric with respect to the hyperplane {x⁰ = 0}, and m means as usual that the function has mean 0. We have thus defined the mapping

α : C^{2,α}_{m,NC}(Sⁿ₊) → C^{2,α}_{m,Sym}(Sⁿ),  w ↦ w̃,   (41)

and it is easy to see that this mapping is an isomorphism.
If we consider the operator F̃ defined only on C^{2,α}_{m,Sym}(Sⁿ), it is natural that its linearization with respect to v at v = 0 is given by the operator L̃₀ restricted to C^{2,α}_{m,Sym}(Sⁿ), with image in C^{1,α}_{m,Sym}(Sⁿ). We observe that if v ∈ C^{2,α}_{m,Sym}(Sⁿ), then the solution of (38) is symmetric with respect to the hyperplane {x⁰ = 0} and its normal derivative with respect to x⁰ computed at {x⁰ = 0} is 0. Then from the definitions of F and F̃ we conclude that
F(p, 0, v̄) = F̃(α(v̄))|_{∂B⁺₁ ∩ ℝ^{n+1}₊}
where α is the isomorphism defined in (41). We define also the mapping
β : C^{1,α}_{m,Sym}(Sⁿ) → C^{1,α}_{m,NC}(Sⁿ₊),  v ↦ v|_{Sⁿ₊},
and we observe that it is an isomorphism. We claim that
L₀ = β ∘ L̃₀ ∘ α.
We remark that the operator β ∘ L̃₀ ∘ α is defined on C^{2,α}_{m,NC}(Sⁿ₊) and its image is contained in C^{1,α}_{m,NC}(Sⁿ₊). We have to prove that

L₀(w) = L̃₀(w̃)|_{∂B⁺₁ ∩ ℝ^{n+1}₊}.

By the symmetry of the function w̃ with respect to the hyperplane {x⁰ = 0}, the solution of (40) with v = w̃ is symmetric with respect to the hyperplane {x⁰ = 0}; hence ∂_{x⁰}ψ̃|_{x⁰=0} = 0 and L̃₀(w̃) is symmetric with respect to the hyperplane {x⁰ = 0}. So the restriction of ψ̃ to the half-ball B⁺₁ is the solution of (36) with v̄ = w, and L₀(w) is exactly the restriction of L̃₀(w̃) to ∂B⁺₁ ∩ ℝ^{n+1}₊. This completes the proof of the claim. Using this relation we conclude that

L₀(w) = L̃₀(α(w))|_{∂B⁺₁ ∩ ℝ^{n+1}₊}.
This completes the proof of the proposition.
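The role of the Neumann condition in the regularity of the even extension w̃ used above can be seen already in a one-dimensional model: the even extension of a function f is differentiable at the interface precisely when f′(0) = 0. A small sympy sketch (the test functions are our illustrative choices, not from the paper):

```python
import sympy as sp

t = sp.symbols('t', real=True)

def onesided_derivs(f):
    """One-sided derivatives at 0 of the even extension t -> f(|t|),
    computed directly as limits of difference quotients."""
    right = sp.limit((f - f.subs(t, 0)) / t, t, 0, '+')
    left = sp.limit((f.subs(t, -t) - f.subs(t, 0)) / t, t, 0, '-')
    return right, left

# Neumann condition f'(0) = 0: the even extension is differentiable at 0.
r1, l1 = onesided_derivs(sp.cos(sp.pi * t))

# f'(0) = 1 != 0: the even extension has a corner at 0.
r2, l2 = onesided_derivs(sp.sin(t))

print(r1, l1, r2, l2)
```

For cos(πt) the two one-sided derivatives agree (both 0), while for sin(t) they are 1 and −1, so the extension fails to be C¹ without the Neumann condition.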
4.2 Study of the operator L₀
Proposition 4.2. The operator

L₀ : C^{2,α}_{m,NC}(Sⁿ₊) → C^{1,α}_{m,NC}(Sⁿ₊)

is a self-adjoint, first order elliptic operator. Its kernel K is given by the space of linear functions depending only on the coordinates y¹, ..., yⁿ, i.e. functions

Sⁿ₊ → ℝ,  y ↦ ⟨a, y⟩,

for some a = (a⁰, a′) ∈ ℝ^{n+1} with a⁰ = 0. Moreover, L₀ has closed range and is an isomorphism from K⊥ to Im(L₀), where K⊥ is the space L²-orthogonal to K in C^{2,α}_{m,NC}(Sⁿ₊) and Im(L₀) denotes the range of L₀ in C^{1,α}_{m,NC}(Sⁿ₊).
Proof. Let L̃₀ be the operator defined in (39) and α the isomorphism defined in (41). In Proposition 4.2 of [21] it is proved that:

• L̃₀ is a self-adjoint, first order elliptic operator,
• its kernel is given by the space of linear functions restricted to Sⁿ, and
• there exists a constant c > 0 such that
∥v∥_{C^{2,α}(Sⁿ)} ≤ c ∥L̃₀(v)∥_{C^{1,α}(Sⁿ)},   (42)

provided that v is L²(Sⁿ)-orthogonal to the kernel of L̃₀.
The last elliptic estimate implies that the operator L̃₀ has closed range, and using the other two properties we conclude that L̃₀ is an isomorphism from the space L²-orthogonal to its kernel onto its range.
We are interested in the operator L̃₀ defined only on the domain C^{2,α}_{m,Sym}(Sⁿ), and from now on L̃₀ will denote this restriction. The image of L̃₀ is naturally given by functions that are symmetric with respect to the hyperplane {x⁰ = 0}, so we have

L̃₀ : C^{2,α}_{m,Sym}(Sⁿ) → C^{1,α}_{m,Sym}(Sⁿ).

We conclude that the new operator L̃₀ is a self-adjoint, first order elliptic operator, with kernel K̃ given by the space of linear functions which are symmetric with respect to the hyperplane {x⁰ = 0}, i.e. functions

Sⁿ → ℝ,  y ↦ ⟨a, y⟩,

for some a = (a⁰, a′) ∈ ℝ^{n+1} with a⁰ = 0. Inequality (42) holds naturally also for the new operator L̃₀, provided v is L²(Sⁿ)-orthogonal to K̃.
From the proof of Proposition 4.1 we have

L₀ = β ∘ L̃₀ ∘ α.

With this characterization of the operator L₀ and the properties of L̃₀, we deduce that the kernel of L₀ is given by the space K of functions

Sⁿ₊ → ℝ,  y ↦ ⟨a, y⟩,

for some a = (a⁰, a′) ∈ ℝ^{n+1} with a⁰ = 0, and that L₀ has closed range and is an isomorphism from K⊥ to Im(L₀), where K⊥ is the space L²-orthogonal to K in C^{2,α}_{m,NC}(Sⁿ₊) and Im(L₀) denotes the range of L₀ in C^{1,α}_{m,NC}(Sⁿ₊).
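Geometrically, this kernel comes from translations of the ball along the boundary hyperplane. For the explicit three-dimensional Dirichlet eigenfunction φ̃₁(r) = sin(πr)/r (our illustrative choice of dimension and normalization), the kernel property of a linear function v can be checked directly: with v = ω₁ a degree-1 spherical harmonic (so that −∆_{S²}ω₁ = 2ω₁), the function ψ̃ = −∂_rφ̃₁ · ω₁ solves (40) and the operator of (39) annihilates v. A sympy sketch:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
w1 = sp.symbols('omega1')  # degree-1 spherical harmonic on S^2

phi = sp.sin(sp.pi * r) / r    # radial Dirichlet eigenfunction, lambda_1 = pi^2
psi = -sp.diff(phi, r) * w1    # candidate solution of (40) for v = omega1

# psi solves Delta psi + pi^2 psi = 0: radial Helmholtz equation in the
# omega1 mode (angular eigenvalue 2 on S^2).
helmholtz = (sp.diff(psi, r, 2) + (2 / r) * sp.diff(psi, r)
             - 2 * psi / r**2 + sp.pi**2 * psi)

# boundary condition of (40) on r = 1: psi = -phi'(1) * v.
bc = sp.simplify(psi.subs(r, 1) + sp.diff(phi, r).subs(r, 1) * w1)

# kernel property: (d_r psi + phi''(1) v)|_{r=1} = 0.
L0v = sp.simplify((sp.diff(psi, r) + sp.diff(phi, r, 2) * w1).subs(r, 1))

print(sp.simplify(helmholtz), bc, L0v)
```

The boundary condition and the kernel identity hold exactly, and the Helmholtz residual vanishes identically, reflecting that translating B₁ does not change its first eigenvalue.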
4.3 Solving the problem on the space orthogonal to the kernel of L₀

Lemma 4.3. Let p ∈ ∂M. There exists a function f_p ∈ C^{1,α}([0,1]) such that
F(p, ε, 0)(y⁰, y′) = ε f_p(y⁰) + O(ε²)
for all ε small enough.
Proof. We keep the notations of the proof of Proposition 3.1, with v̄ ≡ 0. Since v̄ ≡ 0, we have

N(ε, 0, 0, 0) = ( (∆_ĝ − ∆ + μ)φ₁,  Vol_ĝ(B⁺₁) − Vol(B⁺₁) ),  with  μ = − ∫_{B⁺₁} φ₁ (∆_ĝ − ∆)φ₁.

If in addition v₀ = 0, we can estimate

ĝ_ij = δ_ij + Ĝ_ij ε y⁰ + O(ε²),

where the Ĝ_ij are real constants. Hence, by the symmetry of the problem,

N(ε, 0, 0, 0)(y⁰, y′) = ε (ϕ(y⁰, |y′|), V) + O(ε²),

where ϕ ∈ C^{0,α}([0,1]²) and V is a real number. The implicit function theorem immediately implies that the solution of N(ε, 0, ψ, v₀) = 0 satisfies

∥ψ(ε, p, 0)∥_{C^{2,α}} + |v₀(ε, p, 0)| ≤ c ε,

and in addition there exists a function ψ̃_p ∈ C^{2,α}([0,1]²) such that

ψ(ε, p, 0)(y⁰, y′) = ε ψ̃_p(y⁰, |y′|) + O(ε²).
To complete the proof, observe that ν̂ = (1 + v₀)^{-1} ∂_r on ∂B⁺₁ ∩ ℝ^{n+1}₊ when v̄ ≡ 0. Therefore there exists a function f_p ∈ C^{2,α}([0,1]²) such that

ĝ(∇φ̄, ν̂)(y⁰, y′) = ∂_rφ₁ + ε f_p(y⁰, |y′|) + O(ε²)

(be careful that ĝ is defined with v₀ = v₀(ε, p, 0) and v̄ ≡ 0). Since ∂_rφ₁ is constant along ∂B⁺₁ ∩ ℝ^{n+1}₊, we conclude that there exists a function f_p ∈ C^{2,α}([0,1]) such that

F(p, ε, 0)(y⁰, y′) = ε f_p(y⁰) + O(ε²).
This completes the proof of the Lemma.
Proposition 4.4. There exists ε₀ > 0 such that, for all ε ∈ [0, ε₀] and for all p in a compact subset of ∂M, there exists a unique function v̄ = v̄(p, ε) ∈ K⊥ such that

F(p, ε, v̄(p, ε)) ∈ K.

The function v̄(p, ε) depends smoothly on p and ε, and

v̄(p, ε)(y⁰, y′) = ε ṽ_p(y⁰) + O(ε²)

for a suitable function ṽ_p ∈ C^{2,α}([0,1]).
Proof. We fix p in a compact subset of ∂M and define

F̄(p, ε, v̄, a) := F(p, ε, v̄) + ⟨a, ·⟩.

By Proposition 3.1, F̄ is a C¹ map from a neighborhood of (p, 0, 0, 0) in ∂M × [0, ∞) × K⊥ × ∂ℝ^{n+1}₊ into a neighborhood of 0 in C^{1,α}(Sⁿ₊). Moreover we have:

• F̄(p, 0, 0, 0) = 0,
• the differential of F̄ with respect to v̄ computed at (p, 0, 0, 0) is given by L₀ restricted to K⊥, and
• the image of the linear map a ↦ ⟨a, ·⟩, a = (a⁰, a′) with a⁰ = 0, coincides with K.

Thanks to the result of Proposition 4.2, the implicit function theorem can be applied to the equation F̄(p, ε, v̄, a) = 0 at (p, 0, 0, 0), solving for (v̄, a) as functions of the variable ε. We obtain the existence of v̄(p, ε) ∈ C^{2,α}_{m,NC}(Sⁿ₊) and a(p, ε) ∈ ∂ℝ^{n+1}₊, depending smoothly on ε, such that

F̄(p, ε, v̄(p, ε), a(p, ε)) = 0,

which means, by the definition of F̄, that

F(p, ε, v̄(p, ε)) ∈ K.
The fact that v̄ depends smoothly on p and ε is standard. The ε-expansion of v̄ follows at once from Lemma 4.3.
4.4 Projecting over the kernel of L₀: appearance of the mean curvature of ∂M

Thanks to Proposition 4.4 we are able to build, for all p in a compact subset of ∂M and ε small enough, a function v̄(p, ε) ∈ K⊥ such that F(p, ε, v̄(p, ε)) ∈ K.
Now, naturally, we project the operator F over K, and we then have to find, for each ε, the point p_ε at which this projection of F over K vanishes. In other words, for all ε small enough we want to find a point p_ε ∈ ∂M such that

∫_{Sⁿ₊} F(p_ε, ε, v̄(p_ε, ε)) ⟨b, ·⟩ = 0  for all b ∈ ∂ℝ^{n+1}₊.
The main result of this section is the following.

Proposition 4.5. For all p ∈ ∂M and all b = (0, b′) ∈ ∂ℝ^{n+1}₊ with |b| = 1, we have the following ε-expansion:

∫_{Sⁿ₊} F(p, ε, v̄(p, ε)) ⟨b, ·⟩ = C ε² g̃(∇_g̃H(p), Θ(b′)) + O(ε³),

where C is a real constant, H is the mean curvature of ∂M, g̃ is the metric of ∂M induced by g, and Θ has been defined in (23).
Proof. Take p ∈ ∂M, ε small enough, v̄ ∈ C^{2,α}_{m,NC} with small norm, and b ∈ ∂ℝ^{n+1}₊. We denote by L_ε the linearization of F with respect to v̄, and by L²_ε the second derivative of F with respect to v̄, both computed at the point (p, ε, 0):

L_ε = ∂_v̄F(p, ε, 0)  and  L²_ε = ∂²_v̄F(p, ε, 0).

We have

∫_{Sⁿ₊} F(p, ε, v̄) ⟨b, ·⟩ = ∫_{Sⁿ₊} (F(p, ε, 0) + L₀v̄) ⟨b, ·⟩ + ∫_{Sⁿ₊} (F(p, ε, v̄) − F(p, ε, 0) − L_εv̄) ⟨b, ·⟩ + ∫_{Sⁿ₊} (L_ε − L₀)v̄ ⟨b, ·⟩.
Now we apply this formula to the function v̄ = v̄(p, ε) given by Proposition 4.4. We have v̄ ∈ K⊥, so L₀v̄ ∈ K⊥, and then

∫_{Sⁿ₊} L₀v̄ ⟨b, ·⟩ = 0.
We obtain

∫_{Sⁿ₊} F(p, ε, v̄) ⟨b, ·⟩ = ∫_{Sⁿ₊} F(p, ε, 0) ⟨b, ·⟩ + ∫_{Sⁿ₊} (F(p, ε, v̄) − F(p, ε, 0) − L_εv̄) ⟨b, ·⟩ + ∫_{Sⁿ₊} (L_ε − L₀)v̄ ⟨b, ·⟩,   (43)

where v̄ = v̄(p, ε) is the function given by Proposition 4.4. We now need two intermediate lemmas.
Lemma 4.6. For all p ∈ ∂M and all b = (0, b′) ∈ ∂ℝ^{n+1}₊ we have the following ε-expansion:

∫_{Sⁿ₊} F(p, ε, 0) ⟨b, ·⟩ = C ε² g̃(∇_g̃H(p), Θ(b′)) + |b| O(ε³),

where Θ is defined in (23) and

C = −2 (∫_{Sⁿ₊} y⁰ (y¹)²) (1/∂_rφ₁(1)) ∫_{B⁺₁} r |∂_rφ₁|²,  where r = |y|.
Proof. We recall that

F(p, ε, v̄) = ĝ(∇φ̄, ν̂)|_{∂B⁺₁ ∩ ℝ^{n+1}₊} − (1/Vol_ĝ(∂B⁺₁ ∩ ℝ^{n+1}₊)) ∫_{∂B⁺₁ ∩ ℝ^{n+1}₊} ĝ(∇φ̄, ν̂) dvol_ĝ,

where the metric ĝ has been defined in (35) for the coordinates y. Then, since ⟨b, ·⟩ has zero mean,

∫_{Sⁿ₊} F(p, ε, v̄) ⟨b, ·⟩ = ∫_{Sⁿ₊} ĝ(∇φ̄, ν̂) ⟨b, ·⟩.

When v̄ = 0 we have ν̂ = (1 + v₀)^{-1} ∂_r on ∂B⁺₁ ∩ ℝ^{n+1}₊, where r = |y|. Then

∫_{Sⁿ₊} F(p, ε, v̄) ⟨b, ·⟩ = (1 + v₀)^{-1} ∫_{Sⁿ₊} (∂φ̄/∂r) ⟨b, ·⟩ = ((1 + v₀)^{-1}/∂_rφ₁(1)) ∫_{Sⁿ₊} (∂φ̄/∂r) ⟨∇φ₁, b⟩,   (44)
where we used the fact that φ₁ is a radial function. Using this last property and Green's identities we have:

∫_{Sⁿ₊} (∂φ̄/∂r) ⟨∇φ₁, b⟩ = ∫_{B⁺₁} (∆ + λ₁)φ̄ ⟨∇φ₁, b⟩ − ∫_{B⁺₁} φ̄ (∆ + λ₁)⟨∇φ₁, b⟩
 = ∫_{B⁺₁} (∆ + λ₁)φ̄ ⟨∇φ₁, b⟩
 = ∫_{B⁺₁} (∆ − ∆_ĝ)φ̄ ⟨∇φ₁, b⟩ + (λ₁ − λ̄) ∫_{B⁺₁} φ̄ ⟨∇φ₁, b⟩
 = ∫_{B⁺₁} (∆ − ∆_ĝ)φ₁ ⟨∇φ₁, b⟩ + ∫_{B⁺₁} (∆ − ∆_ĝ)(φ̄ − φ₁) ⟨∇φ₁, b⟩ + (λ₁ − λ̄) ∫_{B⁺₁} (φ̄ − φ₁) ⟨∇φ₁, b⟩.
Let us compute the first term. Recall that

∆_ĝ = Σ_{i,j=0}^{n} ĝ^{ij} ∂_{y_i}∂_{y_j} + Σ_{i,j=0}^{n} (∂_{y_i}ĝ^{ij}) ∂_{y_j} + (1/2) Σ_{i,j=0}^{n} ĝ^{ij} (∂_{y_i} log|ĝ|) ∂_{y_j}.
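This is the standard coordinate expression of the Laplace-Beltrami operator; for the reader's convenience, it follows from the divergence form by the product rule (a routine expansion, recalled here for completeness):

```latex
\Delta_{\hat g} u
  = \frac{1}{\sqrt{|\hat g|}}\,\partial_{y_i}\!\left(\sqrt{|\hat g|}\;\hat g^{ij}\,\partial_{y_j} u\right)
  = \hat g^{ij}\,\partial_{y_i}\partial_{y_j} u
    + \left(\partial_{y_i}\hat g^{ij}\right)\partial_{y_j} u
    + \frac{1}{2}\,\hat g^{ij}\left(\partial_{y_i}\log|\hat g|\right)\partial_{y_j} u ,
```

using ∂_{y_i}√|ĝ| = (1/2)√|ĝ| ∂_{y_i} log|ĝ| (summation over repeated indices understood).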
From (28) we have that the coefficients of the metric ĝ can be expanded, for i, j, k, ℓ = 1, ..., n, as

ĝ_00(y) = (1 + v₀)²,  ĝ_0j(y) = 0,
ĝ_ij(y) = (1 + v₀)² δ_ij + 2(1 + v₀) ε g(∇_{E_i}N, E_j) y⁰ + (1 + v₀)² ε² R_{0i0j} (y⁰)² + (1 + v₀)² ε² g(∇_{E_i}N, ∇_{E_j}N) (y⁰)² + 2(1 + v₀)² ε² Σ_k R_{k0ij} y^k y⁰ + (1/3)(1 + v₀)² ε² Σ_{k,ℓ} R̃_{ikjℓ} y^k y^ℓ + O(ε³).
Keeping in mind that v₀ = v₀(p, ε) = O(ε), the coefficients of the inverse metric then read

ĝ^{00}(y) = (1 + v₀)^{-2},  ĝ^{0j}(y) = 0,
ĝ^{ij}(y) = (1 + v₀)^{-2} [ δ_ij − 2(1 + v₀) ε g(∇_{E_i}N, E_j) y⁰ − ε² R_{0i0j} (y⁰)² − ε² g(∇_{E_i}N, ∇_{E_j}N) (y⁰)² − 2ε² Σ_k R_{k0ij} y^k y⁰ − (1/3) ε² Σ_{k,ℓ} R̃_{ikjℓ} y^k y^ℓ ] + O(ε³).
Using the fact that R_{k0ii} = 0, we have

log|ĝ| = 2n log(1 + v₀) − 2ε(1 + v₀) H(p) y⁰ + ε² [ ( −Ric(N) + 4 Σ_{i≠j} g(∇_{E_i}N, E_i) g(∇_{E_j}N, E_j) + Σ_i g(∇_{E_i}N, ∇_{E_i}N) − 4 Σ_{i≠j} g(∇_{E_i}N, E_j) g(∇_{E_j}N, E_i) ) (y⁰)² + (1/3) R̃_{kℓ} y^k y^ℓ ] + O(ε³),

where Ric denotes the Ricci curvature of ∂M and R̃_{kℓ} = Σ_{i=1}^{n} R̃_{ikiℓ}.
A straightforward computation (still keeping in mind that v₀ = O(ε)) shows that

(∆ − ∆_ĝ)φ₁ = −λ₁ (1 − (1 + v₀)^{-2}) φ₁
 + 2(1 + v₀)^{-1} ε Σ_{i,j} g(∇_{E_i}N, E_j) y⁰ [ (yⁱyʲ/r²) ∂²_rφ₁ + (δ_ij/r) ∂_rφ₁ − (yⁱyʲ/r³) ∂_rφ₁ ]
 + ε (1 + v₀)^{-1} H(p) (y⁰/r) ∂_rφ₁
 + ε² Σ_{k,i,j,ℓ} [ (R_{0i0j} + g(∇_{E_i}N, ∇_{E_j}N)) (y⁰)² + 2 R_{k0ij} y^k y⁰ + (1/3) R̃_{ikjℓ} y^k y^ℓ ] · [ (yⁱyʲ/r²) ∂²_rφ₁ + (δ_ij/r) ∂_rφ₁ − (yⁱyʲ/r³) ∂_rφ₁ ]
 + ε² Σ_{k,i,j} [ 2 R_{i0ij} y⁰ + (1/3) R̃_{ikji} y^k + (1/6) R̃_{ik} y^k ] (yʲ/r) ∂_rφ₁
 + ε² [ −Ric(N) + 4 Σ_{i≠j} g(∇_{E_i}N, E_i) g(∇_{E_j}N, E_j) + Σ_i g(∇_{E_i}N, ∇_{E_i}N) − 4 Σ_{i≠j} g(∇_{E_i}N, E_j) g(∇_{E_j}N, E_i) ] ((y⁰)²/r) ∂_rφ₁,
where i, j, k = 1, ..., n. Observe that we have used the fact that R(X, X) ≡ 0 and the symmetries of the curvature tensor, for which R_{ijkℓ} = R_{kℓij}. Now, in the computation of

∫_{B⁺₁} (∆ − ∆_ĝ)φ₁ ⟨∇φ₁, b⟩,

observe that the terms in the expansion of (∆ − ∆_ĝ)φ₁ which contain an even number of coordinates different from y⁰, such as y⁰ or yⁱyʲy^ky^ℓ or (y⁰)²yⁱyʲ etc., do not contribute to the result since, once multiplied by ⟨∇φ₁, b⟩ (keep in mind that b = (0, b′)), their average over Sⁿ₊ is 0. Therefore, we can write
∫_{B⁺₁} (∆ − ∆_ĝ)φ₁ ⟨∇φ₁, b⟩ = ε² Σ_{σ≠0} ∫_{B⁺₁} ∂_rφ₁ b_σ (y^σ/r) · [ 2 Σ_{k,i,j} R_{k0ij} ( (yⁱyʲy^ky⁰/r²) ∂²_rφ₁ − (yⁱyʲy^ky⁰/r³) ∂_rφ₁ ) + 2 Σ_{i,j} R_{i0ij} y⁰ (yʲ/r) ∂_rφ₁ ] + O(ε³).
We make use of the technical Lemmas 5.2 and 5.3 of the Appendix to conclude that

∫_{B⁺₁} (∆ − ∆_ĝ)φ₁ ⟨∇φ₁, b⟩ = C̃ ε² g̃(∇_g̃H(p), Θ(b′)) + O(ε³),   (45)

where

C̃ = −2 (∫_{Sⁿ₊} y⁰ (y¹)²) ∫_{B⁺₁} r |∂_rφ₁|².
Now we have to compute the terms

∫_{B⁺₁} (∆ − ∆_ĝ)(φ̄ − φ₁) ⟨∇φ₁, b⟩  and  (λ₁ − λ̄) ∫_{B⁺₁} (φ̄ − φ₁) ⟨∇φ₁, b⟩.
We observe that the coefficients of the metric, for i, j = 1, ..., n, are given by

ĝ_ij(y) = δ_ij + Ĝ_ij ε y⁰ + O(ε²)

for some real constants Ĝ_ij. Then the first order term in ε of φ̄ − φ₁ is radial in the coordinates y¹, ..., yⁿ, i.e. there exists a function h ∈ C^{2,α}([0,1]²) such that

(φ̄ − φ₁)(y⁰, y′) = ε h(y⁰, |y′|) + O(ε²).
Let ρ := |y′|. Using the same computation given above, we find

(∆ − ∆_ĝ)(φ̄ − φ₁) = (1 − (1 + v₀)^{-2}) ∆(φ̄ − φ₁) + O(ε²) [ (y⁰yⁱyʲ/ρ²) ∂²_ρh + (y⁰/ρ) δ_ij ∂_ρh − (y⁰yⁱyʲ/ρ³) ∂_ρh + ∂_{y⁰}h ] + O(ε³)
 = O(ε²) [ h̃(y⁰, ρ) + (y⁰yⁱyʲ/ρ²) ∂²_ρh + (y⁰/ρ) δ_ij ∂_ρh − (y⁰yⁱyʲ/ρ³) ∂_ρh + ∂_{y⁰}h ] + O(ε³)

for some function h̃ ∈ C^{0,α}([0,1]²), where the O(ε²) factors do not depend on the coordinates. As in the previous computation, terms which contain an even number of coordinates different from y⁰ do not contribute to the result since, once multiplied by ⟨∇φ₁, b⟩, their average over Sⁿ₊ is 0. Therefore
∫_{B⁺₁} (∆ − ∆_ĝ)(φ̄ − φ₁) ⟨∇φ₁, b⟩ = O(ε³).
As for the last term to be estimated, the previous computation immediately implies that

∫_{B⁺₁} (φ̄ − φ₁) ⟨∇φ₁, b⟩ = O(ε²)

and then

(λ₁ − λ̄) ∫_{B⁺₁} (φ̄ − φ₁) ⟨∇φ₁, b⟩ = O(ε³).
We conclude that

∫_{Sⁿ₊} (∂φ̄/∂r) ⟨∇φ₁, b⟩ = ∫_{B⁺₁} (∆ − ∆_ĝ)φ₁ ⟨∇φ₁, b⟩ + |b| O(ε³) = C̃ ε² g̃(∇_g̃H(p), Θ(b′)) + |b| O(ε³).
The Lemma follows at once from (44), keeping in mind that v 0 = O(ε).
Lemma 4.7. Let v̄ = v̄(p, ε) ∈ C^{2,α}_{m,NC}(Sⁿ₊) be such that in the coordinates y = (y⁰, y′) we have

v̄(y⁰, y′) = ε ṽ_p(y⁰) + O(ε²)

for some function ṽ_p ∈ C^{2,α}([0,1]). Then there exist two functions δ_p, σ_p ∈ C^{2,α}([0,1]) such that

((L_ε − L₀)v̄)(y⁰, y′) = ε² δ_p(y⁰) + O(ε³)

and

F(p, ε, v̄) − F(p, ε, 0) − L_εv̄ = ε² σ_p(y⁰) + O(ε³).
Proof. Clearly both L_ε and L₀ are first order differential operators, and the dependence on ε is smooth. Now, the difference between the coefficients of ḡ written in the coordinates y defined in (28) and the coefficients of the Euclidean metric can be estimated by

ḡ_ij(y⁰, y′) − δ_ij = Ḡ_ij ε y⁰ + O(ε²).

If the function v̄ is such that

v̄(y⁰, y′) = ε ṽ_p(y⁰) + O(ε²)

for some function ṽ_p ∈ C^{2,α}([0,1]), it is then clear that

(L_ε − L₀)v̄ = ε (L_ε − L₀)ṽ_p + O(ε³),

where now the function ṽ_p is considered as a function of the coordinates (y⁰, y′) through the simple relation ṽ_p(y⁰, y′) = ṽ_p(y⁰). Moreover, if we consider the operator F restricted to functions v̄ that depend only on the first variable y⁰, it is clear that the linearization of F at (p, ε, 0) maps the subset of functions in C^{2,α}_{m,NC} that depend only on the first variable y⁰ into the subset of functions in C^{1,α}_{m,NC} that depend only on the first variable y⁰. Then there exists a function δ_p ∈ C^{1,α}([0,1]) such that

((L_ε − L₀)ṽ_p)(y⁰, y′) = ε δ_p(y⁰) + O(ε²),  and then  ((L_ε − L₀)v̄)(y⁰, y′) = ε² δ_p(y⁰) + O(ε³).
Now let us estimate the second term. Taking into account that v̄ = O(ε), we have

F(p, ε, v̄) = F(p, ε, 0) + L_εv̄ + L²_ε(v̄, v̄) + O(ε³)

and then

F(p, ε, v̄) − F(p, ε, 0) − L_εv̄ = L²_ε(v̄, v̄) + O(ε³).

If the function v̄ is such that v̄(y⁰, y′) = ε ṽ_p(y⁰) + O(ε²), then

F(p, ε, v̄) − F(p, ε, 0) − L_εv̄ = ε² L²_ε(ṽ_p, ṽ_p) + O(ε³),

where again ṽ_p is considered as a function of the coordinates (y⁰, y′) through ṽ_p(y⁰, y′) = ṽ_p(y⁰); and, as for L_ε, it is easy to see that L²_ε maps the subset of functions in C^{2,α}_{m,NC} that depend only on the first variable y⁰ into the subset of functions in C^{1,α}_{m,NC} that depend only on the first variable y⁰. Then there exists a function σ_p ∈ C^{1,α}([0,1]) such that

F(p, ε, v̄) − F(p, ε, 0) − L_εv̄ = ε² σ_p(y⁰) + O(ε³).
This completes the proof of the Lemma.
We are now able to conclude the proof of Proposition 4.5. Using Lemma 4.7 we get

∫_{Sⁿ₊} (F(p, ε, v̄) − F(p, ε, 0) − L_εv̄) ⟨b, ·⟩ + ∫_{Sⁿ₊} (L_ε − L₀)v̄ ⟨b, ·⟩ = O(ε³).
Then, from (43) and using Lemma 4.6, we have that for all p ∈ ∂M and all b ∈ ∂ℝ^{n+1}₊ with |b| = 1 the following ε-expansion holds:

∫_{Sⁿ₊} F(p, ε, v̄(p, ε)) ⟨b, ·⟩ = C ε² g̃(∇_g̃H(p), Θ(b′)) + O(ε³).
This completes the proof of the Proposition.
Proof of Theorem 1.3
Let b = (0, b′) ∈ ∂ℝ^{n+1}₊ with |b| = 1 and define

G_b(p, ε) := ε^{-2} ∫_{Sⁿ₊} F(p, ε, v̄(p, ε)) ⟨b, ·⟩ = C g̃(∇_g̃H(p), Θ(b′)) + O(ε).

Clearly, if ε ≠ 0, we have that

∫_{Sⁿ₊} F(p, ε, v̄(p, ε)) ⟨b, ·⟩ = 0 ⟺ G_b(p, ε) = 0,

and G_b is a function defined on ∂M × [0, +∞) with values in ℝ.
By the assumption of our main Theorem 1.3, ∂M has a nondegenerate critical point p₀ of the mean curvature. Then G_b(p₀, 0) = 0 and the differential of G_b with respect to p computed at (p₀, 0) is invertible. By the implicit function theorem, for all ε small enough there exists a point p_ε ∈ ∂M close to p₀ such that G_b(p_ε, ε) = 0 for all b ∈ ∂ℝ^{n+1}₊ with |b| = 1. In addition we have dist(p₀, p_ε) ≤ c ε.
We then conclude that F(p_ε, ε, v̄(p_ε, ε)) ∈ K⊥, where K is the kernel of the operator L₀. But by the construction of v̄ we also have F(p_ε, ε, v̄(p_ε, ε)) ∈ K, and then

F(p_ε, ε, v̄(p_ε, ε)) = 0.
This means that the normal derivative of the first eigenfunction of the Laplace-Beltrami operator on Ω_ε = B⁺_{g,ε}(p_ε) with mixed boundary condition is constant on ∂Ω_ε ∩ M̊, and then Ω_ε is extremal.

The only remaining point in the proof of Theorem 1.3 is the analyticity of ∂Ω_ε ∩ M̊ when M itself is analytic. This is a classical consequence of the extremality condition; see [20].
Appendix
Expansion of the metric
Take the local coordinates x⁰, x¹, ..., xⁿ in a neighborhood of a point p ∈ ∂M that we introduced in (4). We denote the corresponding coordinate vector fields by X_j := Ψ_*(∂_{x_j}) for j = 0, 1, ..., n. We want to write the expansion of the coefficients g_ij of the metric Ψ*g in these coordinates. According to our notation, E_j is the coordinate vector field X_j evaluated at p.
Proposition 5.1. At the point of coordinates x = (x⁰, x¹, ..., xⁿ), the following expansion holds:

g_00 = 1,  g_0j = 0,
g_ij = δ_ij + 2 g(∇_{E_i}N, E_j) x⁰ + R_{0i0j} (x⁰)² + g(∇_{E_i}N, ∇_{E_j}N) (x⁰)² + 2 Σ_k R_{k0ij} x^k x⁰ + (1/3) Σ_{k,ℓ} R̃_{ikjℓ} x^k x^ℓ + O(|x|³)

for i, j, k, ℓ = 1, ..., n, where

R_{0i0j} = g(R(N, E_i)N, E_j),  R_{k0ij} = g(R(E_k, N)E_i, E_j),  R̃_{ikjℓ} = g̃(R̃(E_i, E_k)E_j, E_ℓ).

Here R and R̃ are respectively the curvature tensors of M and ∂M.
The result of this proposition is very well known. For example, the same kind of coordinates that we use in this paper are also used in [23], and Proposition 5.1 of [23] combined with the classical expansion of a metric in its geodesic normal coordinates (see for example [27]) immediately implies our Proposition 5.1. Nevertheless, in order to make the reading easier, we write the proof of the proposition.
Proof. We consider the mapping F. The curve x⁰ ↦ F(x⁰, x) being a geodesic, we have g(X₀, X₀) ≡ 1. This also implies that ∇_{X₀}X₀ ≡ 0, and hence we get

∂_{x⁰} g(X₀, X_j) = g(∇_{X₀}X₀, X_j) + g(∇_{X₀}X_j, X₀) = g(∇_{X₀}X_j, X₀).

The vector fields X₀ and X_j being coordinate vector fields, we have ∇_{X₀}X_j = ∇_{X_j}X₀, and we conclude that

2 ∂_{x⁰} g(X₀, X_j) = 2 g(∇_{X_j}X₀, X₀) = ∂_{x_j} g(X₀, X₀) = 0.
Therefore, g(X₀, X_j) does not depend on x⁰, and since on ∂M this quantity is 0 for j = 1, ..., n, we conclude that the metric g can be written as

g = (dx⁰)² + ḡ_{x⁰},

where ḡ_{x⁰} is a family of metrics on ∂M smoothly depending on x⁰ (this is nothing but Gauss's Lemma). If g̃ is the metric of ∂M induced by g, we certainly have

ḡ_{x⁰} = g̃ + O(x⁰).
We now derive the next term in the expansion of ḡ_{x⁰} in powers of x⁰. To this aim, we compute

∂_{x⁰} g(X_i, X_j) = g(∇_{X_i}X₀, X_j) + g(∇_{X_j}X₀, X_i)

for all i, j = 1, ..., n. Since X₀ = N on ∂M, we get

∂_{x⁰} ḡ_{x⁰}|_{x⁰=0} = 2 g(∇_·N, ·)

by definition of the second fundamental form. This already implies that

ḡ_{x⁰} = g̃ + 2 g(∇_·N, ·) x⁰ + O((x⁰)²).
Using the fact that X₀ and X_j are coordinate vector fields, we can compute

∂²_{x⁰} g(X_i, X_j) = g(∇_{X₀}∇_{X_i}X₀, X_j) + g(∇_{X₀}∇_{X_j}X₀, X_i) + 2 g(∇_{X_i}X₀, ∇_{X_j}X₀).   (46)
By definition of the curvature tensor, we can write

∇_{X₀}∇_{X_j} = R(X₀, X_j) + ∇_{X_j}∇_{X₀} + ∇_{[X₀,X_j]},

which, using the fact that X₀ and X_j are coordinate vector fields, simplifies to

∇_{X₀}∇_{X_j} = R(X₀, X_j) + ∇_{X_j}∇_{X₀}.

Since ∇_{X₀}X₀ ≡ 0, we get

∇_{X₀}∇_{X_j}X₀ = R(X₀, X_j) X₀.

Inserting this into (46) yields

∂²_{x⁰} g(X_i, X_j) = 2 g(R(X₀, X_i)X₀, X_j) + 2 g(∇_{X_i}X₀, ∇_{X_j}X₀).
Evaluation at x⁰ = 0 gives

∂²_{x⁰} ḡ_{x⁰}|_{x⁰=0} = 2 g(R(N, ·)N, ·) + 2 g(∇_·N, ∇_·N).

This implies that

ḡ_{x⁰} = g̃ + 2 g(∇_·N, ·) x⁰ + [ g(∇_·N, ∇_·N) + g(R(N, ·)N, ·) ] (x⁰)² + O((x⁰)³).   (47)
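A quick sanity check of (47) in a model case (our illustrative example, not part of the argument): take M = {z ∈ ℝ^{n+1} : |z| ≥ 1}, so that ∂M = Sⁿ and N = ∂_r points into M. Fermi coordinates then give the exact metric g = (dx⁰)² + (1 + x⁰)² g_{Sⁿ}, while the shape operator is the identity and the ambient curvature vanishes, so the expansion closes with no error term:

```latex
\bar g_{x^0} = (1+x^0)^2\,\tilde g
  = \tilde g + 2\,\tilde g\,x^0 + \tilde g\,(x^0)^2 ,
\qquad\text{matching (47) with}\qquad
g(\nabla_\cdot N, \cdot) = \tilde g,\quad
g(\nabla_\cdot N, \nabla_\cdot N) = \tilde g,\quad
R \equiv 0 .
```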
Now that we have the first terms of the expansion of ḡ_{x⁰} in powers of x⁰, we expand these terms with respect to the geodesic normal coordinates (x¹, ..., xⁿ) of ∂M in a neighborhood of p. Recall that for i, j, k, ℓ = 1, ..., n,

g̃_ij = δ_ij + (1/3) Σ_{k,ℓ} R̃_{ikjℓ} x^k x^ℓ + O(|x|³),   (48)

where R̃_{ikjℓ} = g̃(R̃(E_i, E_k)E_j, E_ℓ). The proof of this fact can be found for example in [27]. Moreover, for k = 1, ..., n we have
∂_{x^k} g(∇_{X_i}N, X_j) = g(∇_{X_k}∇_{X_i}N, X_j) + g(∇_{X_i}N, ∇_{X_k}X_j)
 = g(∇_{X_k}∇_N X_i, X_j) + g(∇_{X_i}N, ∇_{X_k}X_j)
 = g(R(X_k, N)X_i, X_j) + g(∇_N∇_{X_k}X_i, X_j) + g(∇_{X_i}N, ∇_{X_k}X_j),

and, evaluated at p,

∂_{x^k} g(∇_{X_i}N, X_j)|_p = g(R(E_k, N)E_i, E_j).   (49)
From (47), using (48) and (49), we find the expansion of the metric in the coordinates x 0 , x 1 , ..., x n up to the term of order |x| 2 .
Technical Lemmas
Lemma 5.2. For all σ = 1, ..., n, we have

Σ_{i,j,k} ∫_{Sⁿ₊} R_{k0ij} x⁰ xⁱ xʲ x^k x^σ = 0.
Proof. To see this, we consider all the terms of the above sum, obtained by fixing the 4-tuple (i, j, k, σ). We observe that if in such a 4-tuple there is an index that appears an odd number of times, then

∫_{Sⁿ₊} x⁰ xⁱ xʲ x^k x^σ = 0.

Then

Σ_{i,j,k} ∫_{Sⁿ₊} R_{k0ij} x⁰ xⁱ xʲ x^k x^σ = Σ_i ∫_{Sⁿ₊} (R_{σ0ii} + R_{i0iσ} + R_{i0σi}) x⁰ (xⁱ)² (x^σ)² = 0

by the symmetries of the curvature tensor.

Lemma 5.3. For all σ = 1, ..., n, we have

Σ_{i,j} ∫_{Sⁿ₊} R_{i0ij} x⁰ xʲ x^σ = − ∫_{Sⁿ₊} x⁰ (x¹)² H_{,σ}.

Proof. Again, we find that
∫_{Sⁿ₊} x⁰ xʲ x^σ dvol_g̃ = 0 unless the indices j and σ are equal. Hence

Σ_{i,j} ∫_{Sⁿ₊} R_{i0ij} x⁰ xʲ x^σ = ∫_{Sⁿ₊} x⁰ (x^σ)² Σ_i R_{i0iσ} = − ∫_{Sⁿ₊} x⁰ (x¹)² H_{,σ}.
This completes the proof of the result.
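The parity argument used in these two lemmas can be illustrated numerically: any quadrature set on Sⁿ₊ that is invariant under the reflections xⁱ ↦ −xⁱ annihilates every monomial containing an odd power of some xⁱ, with exact floating-point cancellation. A small sketch on S²₊ (the sample points and the monomials are our illustrative choices):

```python
import itertools
import random

random.seed(0)

def base_points(m):
    """Random points on the open upper hemisphere of S^2 (x0 > 0)."""
    pts = []
    while len(pts) < m:
        x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
        if x1*x1 + x2*x2 < 1.0:
            pts.append(((1.0 - x1*x1 - x2*x2) ** 0.5, x1, x2))
    return pts

def symmetrized_sum(f, pts):
    """Sum of f over each point together with its reflections in x1 and x2.
    The four reflected copies are summed as a group, so a monomial that is
    odd in x1 or in x2 cancels exactly in floating point."""
    total = 0.0
    for (x0, x1, x2) in pts:
        total += sum(f(x0, s1 * x1, s2 * x2)
                     for s1, s2 in itertools.product((1, -1), repeat=2))
    return total

pts = base_points(200)

# x1 appears with an odd power: exact cancellation (the mechanism of Lemma 5.2).
odd = symmetrized_sum(lambda x0, x1, x2: x0 * (x1*x1*x1) * x2, pts)

# all tangential powers even: no cancellation.
even = symmetrized_sum(lambda x0, x1, x2: x0 * (x1*x1) * (x2*x2), pts)

print(odd, even)  # odd cancels to exactly 0.0, even is strictly positive
```

The cancellation is exact (not merely small) because IEEE multiplication is sign-symmetric, so each group of four reflected evaluations sums to 0.0.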
Figure 1: M can be a Euclidean domain (bounded or not). If p is a nondegenerate critical point for the mean curvature of ∂M, then it is possible to construct an extremal domain as a perturbation of a half-ball centered at p.
Figure 2: Our coordinates are defined as (x⁰, x), x being the normal geodesic coordinates on ∂M and x⁰ the coordinate associated to the normal direction.
Figure 3: A boundary edge domain in M

Proposition 2.2 is related to the Faber-Krähn profile, where one looks for the least value of the first eigenvalue of the Laplace-Beltrami operator amongst domains with prescribed volume:

FK_κ := inf { λ₁(Ω) : Ω ⊂ M, Vol_g(Ω) = κ }.
Acknowledgements. This work was partially supported by the project Projet ANR-12-BS01-0007 OPTIFORM financed by the French Agence Nationale de la Recherche (ANR).
RECOVERY OF THE NONLINEARITY FROM THE MODIFIED SCATTERING MAP
4 Apr 2023
Jason Murphy
We consider a class of one-dimensional nonlinear Schrödinger equations of the form (i∂t + ∆)u = [1 + a]|u| 2 u. For suitable localized functions a, such equations admit a small-data modified scattering theory, which incorporates the standard logarithmic phase correction. In this work, we prove that the small-data modified scattering behavior uniquely determines the inhomogeneity a.
Introduction
We consider one-dimensional nonlinear Schrödinger equations of the form
(i∂ t + ∆)u = [1 + a]|u| 2 u, u| t=0 = u 0 ,(1.1)
where the inhomogeneity a : R → R is a localized function of x ∈ R. For suitable functions a, equation (1.1) admits a small-data modified scattering theory for initial data chosen from a weighted Sobolev space. In this paper, we prove that the modified scattering map uniquely determines the inhomogeneity a.
We first describe the class of inhomogeneities considered in this work: Definition 1.1 (Admissible). We say a : R → R is admissible if a ∈ L 1 ∩ L ∞ , xa ∈ L 2 , and ∂ x a ∈ L 1 .
For admissible inhomogeneities a, we may obtain the following modified scattering result for small initial data in a weighted Sobolev space, which incorporates the typical logarithmic-type phase correction. In the notation below, F denotes the Fourier transform and e it∆ = F −1 e −itξ 2 F is the Schrödinger group. For ε sufficiently small, we may use Theorem 1.2 to define the modified scattering map S a : B ε → L ∞ by S a (u 0 ) = w + , where w + is as in (1.2).
Our main result shows that the modified scattering map uniquely determines the inhomogeneity a. Theorem 1.4 fits in the context of a wide body of work on the recovery of nonlinearities (and external potentials) for nonlinear dispersive equations, particularly the question of recovery from scattering data; we refer the reader to [1-3, 7, 9, 12, 16, 18, 21-24, 26-33] for a broad selection of works in this direction. The chief novelty in our work stems from the fact that we consider a class of equations for which the usual (unmodified) scattering fails. That is, the long-time behavior of solutions is not simply given by the underlying linear dynamics; instead, due to insufficient time decay in the nonlinear term, one must incorporate a logarithmic phase correction in order to describe the long-time asymptotic behavior. Consequently, the structure of the modified scattering map is more complicated to describe. Nonetheless, as we will explain below, this modified map suffices to uniquely determine the inhomogeneity present in the nonlinearity.
Before discussing the proof of Theorem 1.4, let us briefly describe the proof of modified scattering for (1.1) (Theorem 1.2). Modified scattering for cubic nonlinear Schrödinger equations in one dimension is an important topic that has been addressed in many different settings (see e.g. [4][5][6]8,10,11,14,17,19,20]). In the setting of (1.1), the inhomogeneous cubic term may be viewed as a short-range perturbation to the long-range nonlinearity |u| 2 u; indeed, the inhomogeneity a(x) does not appear in the phase correction itself (cf. (1.2)). Our proof of modified scattering follows the basic scheme set out in [11] (based on taking the Fourier transform of the Duhamel formula and using an integrating factor to remove the non-integrable cubic part), using local smoothing estimates (similar to those appearing in [5]) to handle the inhomogeneous cubic term. For the details, see Section 3.
In Section 4, we prove the main result, Theorem 1.4. Before discussing specific details of the proof, let us first recall the standard approach to recovering the nonlinearity from the usual scattering map (going back at least as far as [16,25]). To fix ideas, let us consider the problem of recovering an unknown, localized coefficient in a 1d nonlinear Schrödinger equation of the form
(i∂ t + ∆)u = a|u| 2 u, u| t=0 = u 0 . (1.3)
For a ∈ L 1 ∩ L ∞ , one can prove that the usual (unmodified) scattering behavior holds for small initial data in L 2 (see e.g. [18]); that is, there exists a map S a such that
lim t→∞ u(t) − e it∆ S a (u 0 ) L 2 = 0,
where u is the solution to (1.3). In fact, using the Duhamel formula, one obtains the following implicit formula for S a :
S a (u 0 ) = u 0 − i ∞ 0 e −it∆ [a|u(t)| 2 u(t)] dt.
Specializing to u 0 = εϕ (with ϕ ∈ S and 0 < ε ≪ 1), pairing this identity with ϕ, and approximating u(t) by e it∆ u 0 (the Born approximation), one can show that
S a (εϕ), ϕ = ε ϕ, ϕ − iε 3 ∞ 0 R a(x)|e it∆ ϕ(x)| 4 dx dt + O(ε 4 ).
It follows that knowledge of S a suffices to determine the functionals
∞ 0 R a(x)|e it∆ ϕ(x)| 4 dx dt for ϕ ∈ S. (1.4)
The problem then reduces to showing that knowledge of the functionals (1.4) uniquely determines the coefficient a.
In the setting of Theorem 1.4, the overall structure of the argument is similar; however, the analysis becomes more complicated due to the fact that the form of the modified scattering map is different than that of the standard scattering map. In particular, the modified scattering map is no longer easily viewed as a perturbation of the identity. Instead, in Proposition 4.1, we show that for ϕ ∈ S and 0 < ε ≪ 1, we have the expansion
⟨S_a(εϕ), ϕ̂⟩ = ε⟨ϕ̂, ϕ̂⟩ + (1/(2i)) log(1 + 1/(2ε)) ⟨|S_a(εϕ)|² S_a(εϕ), ϕ̂⟩ + ε³ Q_ε[ϕ] − iε³ ∫₀^∞ ∫_R a(x)|e^{it∆}ϕ(x)|⁴ dx dt + O(ε⁴),
where ϕ̂ is the Fourier transform of ϕ and Q_ε is a multilinear expression in ϕ (which, importantly, is independent of a). Thus, despite the more complicated structure of S_a, we find that knowledge of S_a still essentially determines the functionals appearing in (1.4), and the problem once again reduces to showing that the functionals (1.4) determine the coefficient a.
In earlier works (e.g. [16,25]), this final step is completed by evaluating the functional along a sequence of test functions concentrating at a point and utilizing the dominated convergence theorem in order to determine a pointwise. In the present setting, the low-power nonlinearity poses an additional challenge; indeed, we cannot use dominated convergence directly, as we cannot guarantee that e^{it∆}ϕ ∈ L⁴_{t,x}(R × R) even for ϕ ∈ S. Instead, inspired in part by [12], we proceed by specializing to the case of Gaussian data, for which the free evolution may be computed explicitly. In this way, we find that knowledge of (1.4) suffices to determine the convolution a * K for an explicit kernel K, and the problem reduces to verifying directly that K̂ ≠ 0 almost everywhere. This final step is completed by evaluating a Gaussian integral.
The rest of this paper is organized as follows: In section 2, we set up notation and collect some preliminary lemmas. In Section 3, we establish modified scattering for (1.1) (Theorem 1.2). Finally, in Section 4, we prove the main result, Theorem 1.4.
Acknowledgements. J.M. was supported by NSF grant DMS-2137217 and a Simons Collaboration Grant. G.C. would like to thank the Department of Mathematics and Statistics at Missouri S&T, where part of this work was completed, for its hospitality.
Notation and preliminary results
We write A B to denote A ≤ CB for some C > 0. We indicate dependence on parameters via subscripts, e.g. A a B means A ≤ CB for some C = C(a) > 0.
We write H^{k,ℓ} to denote the weighted Sobolev space with norm
‖u‖_{H^{k,ℓ}} = ‖⟨∂_x⟩^k ⟨x⟩^ℓ u‖_{L^2},
where ⟨·⟩ is the Japanese bracket notation, i.e. ⟨x⟩ = √(1 + x²). We write S for Schwartz space.
We denote the Fourier transform of a function f : R^d → C by
F_d f(ξ) = (2π)^{−d/2} ∫_{R^d} e^{−ix·ξ} f(x) dx,
with the inverse Fourier transform given by
F_d^{−1} f(x) = (2π)^{−d/2} ∫_{R^d} e^{ix·ξ} f(ξ) dξ.
If d = 1, we will omit the subscript. We also write F f =f and F −1 f =f . We caution the reader that factors of 2π will be uniformly omitted throughout the computations below.
The Schrödinger group is given by the Fourier multiplier operator
e^{it∆} = F^{−1} e^{−itξ²} F.
This operator admits the factorization identity
e^{it∆} = M(t) D(t) F M(t),
where
M(t) = e^{ix²/(4t)} and [D(t)f](x) = (2it)^{−1/2} f(x/(2t)). The Galilean operator J(t) is defined via
J(t) = x + 2it∂_x = e^{it∆} x e^{−it∆}. (2.1)
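As a quick sanity check (ours, not part of the paper), the factorization identity can be verified in closed form on Gaussian data, for which every factor is explicit. The helper names below and the choice ϕ(x) = e^{−x²/4} are our own, and we use the unitary-convention Gaussian transform F[e^{−αz²}](ξ) = (2α)^{−1/2} e^{−ξ²/(4α)}:

```python
import cmath

def free_evolution_exact(x, t):
    # Exact Schrodinger evolution of phi(z) = exp(-z^2/4):
    # e^{it Delta} phi (x) = (1 + it)^{-1/2} exp(-x^2 / (4(1 + it)))
    return cmath.exp(-x**2 / (4 * (1 + 1j * t))) / cmath.sqrt(1 + 1j * t)

def free_evolution_factored(x, t):
    # e^{it Delta} = M(t) D(t) F M(t), with M(t) = e^{i x^2/(4t)} and
    # [D(t)f](x) = (2it)^{-1/2} f(x/(2t)).
    # Here M(t) phi(z) = exp(-alpha z^2) with alpha = (t - i)/(4t), and
    # F[exp(-alpha z^2)](xi) = (2 alpha)^{-1/2} exp(-xi^2/(4 alpha))
    # (unitary convention; computed in closed form for the Gaussian).
    alpha = (t - 1j) / (4 * t)
    xi = x / (2 * t)  # argument produced by the dilation D(t)
    hat_Mphi = cmath.exp(-xi**2 / (4 * alpha)) / cmath.sqrt(2 * alpha)
    return cmath.exp(1j * x**2 / (4 * t)) * hat_Mphi / cmath.sqrt(2j * t)

# Compare the two expressions at a few sample points.
for t in (0.5, 1.0, 3.0, 10.0):
    for x in (-2.0, -0.3, 0.0, 1.0, 4.0):
        assert abs(free_evolution_exact(x, t) - free_evolution_factored(x, t)) < 1e-12
```

The prefactors match because (2it)(2α) = 1 + it, and for t > 0 both principal square roots combine without a branch mismatch.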
Given a solution u to (1.1), we will perform much of the analysis on the associated profile f (t) = e −it∆ u(t). Suitable bounds on the profile imply estimates for the solution itself, as is seen in the following lemma.
Lemma 2.1. Let f(t) = e^{−it∆}u(t). Then for any 0 < c < 1/4,
‖u(t)‖_{L^∞_x} ≲_c |t|^{−1/2} { ‖f̂(t)‖_{L^∞} + |t|^{−c} ‖f̂(t)‖_{H^1} }.
Proof. We write
u(t) = M (t)D(t)F M (t)f (t) = M (t)D(t)f (t) + M (t)D(t)F [M (t) − 1]f (t).
We now observe that
M (t)D(t)f (t) L ∞ |t| − 1 2 f (t) L ∞ ,
which is acceptable. For the remaining term, we use Hausdorff-Young, the pointwise estimate |M (t) − 1| |x| 2c |t| −c , and Cauchy-Schwarz to obtain
M (t)D(t)F [M (t) − 1]f (t) L ∞ |t| − 1 2 −c |x| 2c f L 1 |t| − 1 2 −c x f L 2 , which is acceptable.
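The pointwise estimate used here follows (up to constants) from the elementary bound |e^{iθ} − 1| ≤ 2|θ|^c for 0 < c ≤ 1, applied with θ = x²/(4t). A brute-force numerical confirmation of that bound (our own check, not from the paper):

```python
import cmath

def bound_holds(theta, c):
    # |e^{i theta} - 1| = 2|sin(theta/2)| <= min(2, |theta|) <= 2|theta|^c
    # (the small slack absorbs floating-point rounding)
    return abs(cmath.exp(1j * theta) - 1) <= 2 * abs(theta) ** c + 1e-12

# Sweep theta over (0, 500] for several exponents c in (0, 1].
for c in (0.1, 0.25, 0.5, 0.9):
    assert all(bound_holds(k * 0.01, c) for k in range(1, 50001))
```

For |θ| ≤ 1 one uses |e^{iθ} − 1| ≤ |θ| ≤ |θ|^c, and for |θ| > 1 simply |e^{iθ} − 1| ≤ 2 ≤ 2|θ|^c.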
Next we introduce a smoothing estimate, which is the dual of the classical Kato smoothing estimate. This estimate will be used to analyze the inhomogeneous cubic term. Such estimates appear in more general settings in [5].
Lemma 2.2. Let φ : R → C satisfy
|φ(k)| ≲ |k|^{1/2}. (2.2)
Then for all t ≥ 0, we have
‖ ∫₀^t e^{−iξ²s} φ(ξ) F̂(s, ξ) ds ‖_{L²_ξ} ≲ ‖F‖_{L¹_x L²_s (R×[0,t])}. (2.3)
Proof. We argue by duality. We will first prove that
R e ixξφ (ξ)e iξ 2 s h(ξ) dξ L ∞ x L 2 s (R×[0,t]) h L 2 (2.4)
for any h ∈ L 2 . Without loss of generality, we restrict the integral to ξ > 0. Changing variables via ξ 2 = λ and using Plancherel (in time) and (2.2), we obtain
∞ 0 e ixξφ (ξ)e iξ 2 s h(x) dξ 2 L 2 s ([0,t]) ∞ 0 e ix √ λφ ( √ λ)e isλ h( √ λ) 1 √ λ dλ 2 L 2 s (R) ∞ 0 φ ( √ λ) √ λ h( √ λ) 2 dλ h 2 L 2 ,
uniformly in x, which yields (2.4). Now, given h ∈ L 2 and F ∈ L 1 x L 2 s , we use (2.4) and Hölder to estimate
t 0 h(ξ)e iξ 2 sφ (ξ)F (s, ξ) ds dξ = t 0 R R e ixξ e iξ 2 sφ (ξ)h(ξ) dξ F (s, x) dx ds R e ixξφ (ξ)e iξ 2 s h(ξ) dξ L ∞ x L 2 s (R×[0,t]) F L 1 x L 2 s (R×[0,t]) h L 2 F L 1 x L 2 s (R×[0,t])
, which implies the desired estimate.
The Direct Problem
In this section we prove Theorem 1.2. The proof follows largely along standard lines (see e.g. [11]), with some modifications to handle the inhomogeneous cubic term.
We let u₀ ∈ H^{1,1} with ‖u₀‖_{H^{1,1}} = ε > 0, and let u : [0, ∞) × R → C be the corresponding solution to (1.1). We define the profile f(t) = e^{−it∆}u(t). By standard well-posedness arguments and Sobolev embedding, one can derive that
sup_{t∈[0,1]} { ‖u(t)‖_{H^1} + ‖J(t)u(t)‖_{L^2} } ≲ ε. (3.1)
Using (1.1), we have that
i∂_t f̂(t, ξ) = F[e^{−it∆}(|u|²u)](ξ) + F[e^{−it∆}(a|u|²u)](ξ).
In particular, we have the following straightforward estimates, which will be useful for t ∈ [0, 1]: by Hausdorff-Young and Plancherel,
∂ tf L ∞ ξ [1 + a]|u| 2 u L 1 x u 3 L 3 x u 3 H 1 x ε 3 and ∂ ξf L 2 ξ J(t)([1 + a]|u| 2 u) L 2 u 2 L ∞ Ju L 2 + u 3 L ∞ t∇a L 2 ε 3 .
Next, we isolate the component of i∂ tf that fails to be integrable as t → ∞. Evaluating the Fourier transform and changing variables via ξ − σ → σ, we obtain
F e −it∆ |u| 2 u (ξ) = e it[ξ 2 −(ξ−η) 2 +(η−σ) 2 −σ 2 )]f (t, ξ − η)f (t, η − σ)f (t, σ) dσ dη = e 2itησ G ξ [f (t), f (t), f (t)](η, σ) dσ dη, where G ξ [f, g, h](η, σ) :=f (ξ − η)ĝ(η − ξ + σ)ĥ(ξ − σ). (3.2)
We continue from above, using Plancherel and the identity
F 2 [e 2itησ ] = 1 2t e −i ησ 2t to obtain F e −it∆ |u| 2 u (ξ) = 1 2t e −i ησ 2s F −1 2 G ξ [f (t), f (t), f (t)] (η, σ) dσ dη.
Noting thatf (−ξ) =f (ξ), so that
G ξ [f (t), f (t), f (t)](0, 0) = |f (t, ξ)| 2f (t, ξ),
we therefore find that
F e −it∆ |u| 2 u (ξ) = 1 2t |f (t, ξ)| 2f (t, ξ) + 1 2t e −i ησ 2t − 1 F −1 2 {G ξ [f (t), f (t), f (t)]}(η, σ) dσ dη.
Combining the computations above, we derive that
i∂ t f (t, ξ) = 1 2t |f (t, ξ)| 2f (t, ξ) + F e −it∆ a|u| 2 u (ξ) + 1 2t e −i ησ 2t − 1 F −1 2 {G ξ [f (t), f (t), f (t)]}(η, σ) dσ dη.
We now define
w(t) = e^{iB(t)} f̂(t), where B(t, ξ) := ∫₀^t |f̂(s, ξ)|² ds/(2s+1). (3.3)
It follows that
i∂_t w(t, ξ) = e^{iB(t,ξ)} [ i∂_t f̂(t, ξ) − (2t+1)^{−1} |f̂(t, ξ)|² f̂(t, ξ) ] (3.4)
= e^{iB(t,ξ)} [ (2t(2t+1))^{−1} |f̂(t, ξ)|² f̂(t, ξ) (3.5)
+ F[e^{−it∆}(a|u|²u)](ξ) (3.6)
+ (2t)^{−1} ∫∫ (e^{−iησ/(2t)} − 1) F₂^{−1}{G_ξ[f(t), f(t), f(t)]}(η, σ) dσ dη ]. (3.7)
Using (3.4) and (3.1), we find that
∂ t w H 1 ε 3 uniformly for t ∈ [0, 1]. (3.8)
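The effect of the integrating factor in (3.3) can be illustrated on a toy model ODE (our illustration, not from the paper): for i f′(t) = |f|² f/(2t+1), the modulus |f| is conserved while the phase rotates like −(|f(0)|²/2) log(2t+1), so f(t) itself has no limit, but w(t) = e^{iB(t)} f(t) is constant:

```python
import cmath
import math

def rhs(t, f):
    # Toy model: i f'(t) = |f|^2 f/(2t+1), i.e. f'(t) = -i |f|^2 f/(2t+1)
    return -1j * abs(f) ** 2 * f / (2 * t + 1)

def evolve(f0, T, dt):
    """RK4 for f, plus midpoint quadrature for B(t) = int_0^t |f|^2 ds/(2s+1)."""
    t, f, B = 0.0, f0, 0.0
    for _ in range(round(T / dt)):
        k1 = rhs(t, f)
        k2 = rhs(t + dt / 2, f + dt / 2 * k1)
        k3 = rhs(t + dt / 2, f + dt / 2 * k2)
        k4 = rhs(t + dt, f + dt * k3)
        f += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        B += abs(f) ** 2 * dt / (2 * (t + dt / 2) + 1)  # midpoint-in-t; |f| is constant
        t += dt
    return f, B

f0 = 0.8
fT, BT = evolve(f0, T=50.0, dt=0.005)
w = cmath.exp(1j * BT) * fT                   # the "modified profile" w = e^{iB} f

assert abs(abs(fT) - f0) < 1e-7               # |f| is conserved
assert abs(BT - f0**2 / 2 * math.log(101)) < 1e-3   # B(T) = (|f0|^2/2) log(2T+1)
assert abs(w - f0) < 1e-3                     # integrating factor undoes the rotation
```

This is exactly the mechanism used above: the non-integrable cubic term only rotates the phase of f̂, and e^{iB} removes that rotation so that w can converge.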
We obtain estimates for t ∈ [1, ∞) using a bootstrap argument. In particular, assuming that the solution satisfies estimates of the form
f (t) L ∞ ξ ≤ 2Cε and f (t) H 1 ≤ 2C t δ ε (3.9)
uniformly in t ≥ 1, the estimates obtained below will demonstrate that the solution satisfies the improved bounds
f (t) L ∞ ξ ≤ Cε and f (t) H 1 ≤ C t δ ε.
Here δ = O(ε 2 ) is a small parameter. Observe that by Lemma 2.1, the assumptions (3.9) also guarantee that
u(t) L ∞ t − 1 2 ε. Noting that f (t) L ∞ ξ ≡ w(t) L ∞ ξ ,
we begin by using the expansion (3.5)-(3.7) to estimate ∂_t w in L^∞_ξ. In particular, we will prove that if (3.9) holds, then
∂ t w L ∞ ξ t −1− 1 10 ε 3 uniformly for t ≥ 1. (3.10)
First, by (3.9) we immediately see that
1 2t(2t+1) |f | 2f L ∞ ξ t −2 ε 3 ,
which is acceptable. Next, using (3.9), Hausdorff-Young, and Lemma 2.1, we estimate
F e −it∆ (a|u| 2 u) L ∞ ξ a|u| 2 u L 1 a L 1 u 3 L ∞ a t − 3 2 ε 3 ,
which is acceptable. Finally, we turn to (3.7). We begin by using the pointwise estimate
|e ix − 1| ≤ |x| 1 5 to obtain (3.7) L ∞ ξ |t| −1− 1 5 |η| 1 5 |σ| 1 5 |F −1 2 {G ξ [f, f, f ]}(η, σ)| dσ dη L ∞ ξ . (3.11)
To estimate the right-hand side of (3.11), we rely on the following general trilinear estimate. We state the result in more generality than is needed here, as this formulation will be useful in the next section.
Lemma 3.1 (Trilinear Estimate).
Define G ξ (·, ·, ·) as in (3.2). Then
|η| 1 5 |σ| 1 5 |F −1 2 {G ξ [f, g, h]}(η, σ)| dσ dη f H 0,1 g H 0,1 h H 0,1 uniformly in ξ. Proof. Recall that G ξ [f, g, h](x, y) =f (ξ − x)ĝ(x − ξ + y)ĥ(ξ − y).
Thus, writing e iab db = δ a=0 , we have
F −1 2 {G ξ [f, g, h]}(η, σ) = · · · e i[xη+yσ−v(ξ−x)−z(x−ξ+y)−r(ξ−y)] f (v)ḡ(z)h(r) dx dy dr dv dz = ḡ(z)e izξ f (v)e −ivξ e i[x(v+η−z)] h(r)e −irξ e i[y(r+σ−z)] dy dr dx dv dz = ḡ(z)h(z − σ)e iξσ f (v)e −ivξ e i[x(v+η−z)] dx dv dz = f (z − η)ḡ(z)h(z − σ)e iξ[η+σ−z] dz. (3.12)
It follows that
|F −1 2 {G ξ [f, g, h]}(η, σ)| ≤ |f (z − η)h(z − σ)g(z)| dz
uniformly in ξ, and hence
‖ |x|^c f ‖_{L^1} ≲ ‖ ⟨x⟩ f ‖_{L^2},
which is a consequence of Cauchy-Schwarz.
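Spelling out the Cauchy–Schwarz step behind this inequality (a routine verification, included for completeness):

```latex
\| |x|^c f \|_{L^1}
  = \int_{\mathbb{R}} |x|^c \langle x\rangle^{-1}\,\langle x\rangle |f(x)|\,dx
  \le \Big( \int_{\mathbb{R}} |x|^{2c}\langle x\rangle^{-2}\,dx \Big)^{1/2}
      \| \langle x\rangle f \|_{L^2},
```

and the first factor is finite precisely because $|x|^{2c}\langle x\rangle^{-2} \sim |x|^{2c-2}$ as $|x|\to\infty$, which is integrable at infinity if and only if $2c - 2 < -1$, i.e. $c < \tfrac12$ (integrability near the origin holds for any $c > 0$).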
Continuing from (3.11) and applying Lemma 3.1 and (3.9), we obtain
(3.7) L ∞ ξ |t| −1− 1 5 f (t) 3 H 0,1 |t| −1− 1 5 +3δ ε 3 ,
which is acceptable (provided δ is sufficiently small). This completes the proof of (3.10), which suffices to close the bootstrap estimate forf in L ∞ .
To complete the proof of (3.9), it suffices to close the bootstrap estimate for the H^1-norm of f̂. Without loss of generality, we estimate the Ḣ^1-norm only.
Using the Duhamel formula, we first write
∂ ξf (t, ξ) = ∂ ξû0 (ξ) (3.13) − i t 0 ∂ ξ F e −is∆ |u| 2 u (ξ) ds (3.14) − i t 0 ∂ ξ F e −is∆ a|u| 2 u (ξ) ds. (3.15)
The term in (3.13) is O(ε) in L 2 ξ , which is acceptable. Using the same computations as above, we may write
(3.14) = −i t 0 e 2isησ ∂ ξ G ξ [f (s), f (s), f (s)](η, σ) dσ dη ds. (3.16)
Recalling the definition of G_ξ (see (3.2)), it follows from the product rule that ∂_ξ G_ξ[f, f, f] is a linear combination of terms of the form G_ξ[xf, f, f]. After distributing the derivative, we can use the identity xf(s) = xe^{−is∆}u(s) = e^{−is∆}J(s)u(s) (cf. (2.1)) and undo the computations that led to (3.16) to see that (3.14) may be written as a sum of terms of the form
∫₀^t F[e^{−is∆} O(u²) Ju](ξ) ds.
In particular, by (3.1) and (3.9), we may estimate
(3.14) L 2 ξ t 0 u(s) 2 L ∞ ξ J(s)u(s) L 2 ds t 0 s −1+δ ε 3 ds t δ ε 3 ,
which is acceptable. It remains to estimate (3.15). We begin by writing
∂ ξ t 0 F e −is∆ a|u| 2 u (ξ) ds = ∂ ξ t 0 e isξ 2 F a|u| 2 u (ξ) ds = t 0 e isξ 2 ∂ ξ F a|u| 2 u (ξ) ds (3.17) + 2i t 0 ξse isξ 2 F a|u| 2 u (ξ) ds. (3.18)
Using (3.1) and (3.9), we first estimate
(3.17) L 2 ξ t 0 x a|u| 2 u L 2 x ds t 0 xa L 2 u 3 L ∞ ds t 0 ε 3 s − 3 2 ds ε 3 ,
which is acceptable. Next, we let ϕ be a smooth cutoff to |ξ| ≤ 1 and decompose
(3.18) = 2i t 0 ξϕ(ξ)se isξ 2 F a|u| 2 u (ξ) ds (3.19) + 2i t 0 [1 − ϕ(ξ)]se isξ 2 ξF a|u| 2 u (ξ) ds. (3.20)
Applying Lemma 2.2 (with φ(ξ) = ξϕ(ξ) andF (s, ξ) = sF (a|u| 2 u)), (3.1), (3.9), and Minkowski's integral inequality, we deduce that
(3.19) L 2 ξ sa|u| 2 u L 1 x L 2 s (R×[0,t]) a L 1 s|u| 2 u L ∞ x L 2 s (R×[0,t]) s|u| 2 u L 2 s L ∞ x ([0,t]×R) ε 3 s s − 3 2 L 2 s ([0,t]) ε 3 log t ,
which is acceptable. Similarly, applying Lemma 2.2 (with φ(ξ) = 1−ϕ(ξ) andF (s, ξ) = sξF (a|u| 2 u)), we find that
(3.20) L 2 ξ s|u| 2 u ∂ x a L 1 x L 2 s (R×[0,t]) + sa|u| 2 ∂ x u L 1 x L 2 s (R×[0,t]) .
(3.21)
For the first term, we proceed as we did for (3.19). This yields
s|u| 2 u∂ x a L 1 x L 2 s (R×[0,t]) ε 3 ∂ x a L 1 log t ,
which is acceptable. For the second term, we write
s∂ x u = 1 2i [J(s)u(s) − xu(s)]
Then, using (3.9) (noting that Ju L 2 = f Ḣ1 by (2.1)), we estimate
sa|u| 2 ∂ x u L 1 x L 2 s (R×[0,t]) x a L 2 |u| 2 x −1 [Ju − xu] L 2 s,x ([0,t]×R) u 2 L 4 s L ∞ x ([0,t]×R) Ju L ∞ s L 2 x ([0,t]×R) + x x u L ∞ s L 2 x ([0,t]×R) ε 3 t δ ,
which is acceptable. Combining the estimates above, we can close the bootstrap for the H^1-component of f̂. Thus the desired bounds for f̂ hold for all t ≥ 0, and in particular we obtain the bound (3.10).
With (3.10) in hand, we establish the existence of w₊ in L^∞_ξ such that
w(t) − w + L ∞ ξ t − 1 10 ε 3 (3.22)
uniformly for t ≥ 0, which suffices to complete the proof of Theorem 1.2.
The Inverse Problem
The goal of this section is to prove Theorem 1.4. Our first step is a careful analysis of the scattering map u 0 → S a (u 0 ) for a fixed admissible inhomogeneity a.
⟨S_a(εϕ), ϕ̂⟩ = ε⟨ϕ̂, ϕ̂⟩ + (1/(2i)) log(1 + 1/(2ε)) ⟨|S_a(εϕ)|² S_a(εϕ), ϕ̂⟩ + ε³ Q_ε[ϕ] − iε³ ∫₀^∞ ∫_R a(x)|e^{it∆}ϕ(x)|⁴ dx dt + O(ε⁴), (4.1)
where
Q_ε[ϕ] := ∫_ε^∞ ∫∫∫ (2it)^{−1} [e^{−iησ/(2t)} − 1] ϕ(z − η) ϕ(z − σ) ϕ̄(z) ϕ̄(z − η − σ) dz dη dσ dt. (4.2)
Proof. We write u 0 = εϕ and let u be the solution to (1.1) with u| t=0 = u 0 . We define the profile f (t) = e −it∆ u(t) and the modified profile w(t) = e iB(t)f (t) as in (3.3). In particular, there exists w + ∈ L ∞ ξ such that w(t) → w + = S a (u 0 ) in L ∞ ξ as t → ∞. By construction, we have
w + L ∞ ξ ε.
We begin by using (3.5)-(3.7) from the preceding section to write
iw + (ξ) = iû 0 (ξ) + ε 0 i∂ t w(t, ξ) dt + ∞ ε 1 2t(2t+1) |w(t, ξ)| 2 w(t, ξ) dt (4.3) + ∞ ε e iB(t,ξ) G t [f (t), f (t), f (t)](ξ) dt (4.4) + ∞ ε e iB(t,ξ) F [e −it∆ {a|u(t)| 2 u(t)}](ξ) dt,(4.5)
where
G t [f, g, h](ξ) := 1 2t [e −i ησ 2t − 1]F −1 2 {G ξ [f, g, h]}(η, σ) dη dσ,(4.6)
with G ξ (·, ·, ·) as in (3.2). The termû 0 (ξ) is O(ε). The analysis now proceeds by separating the remaining components in (4.3)-(4.4) that are O(ε 3 ) in L ∞ ξ from those that are o(ε 3 ) as ε → 0. We first observe that by (3.8), we have that
ε 0 ∂ t w dt L ∞ ξ ε 4 .
For the remaining term in (4.3), we claim that
∞ ε 1 2t(2t+1) |w(t, ξ)| 2 w(t, ξ) dt = 1 2 log(1 + 1 2ε )|w + (ξ)| 2 w + (ξ) + O(ε 4 ) (4.7)
in L ∞ ξ . To see this, we use (3.22) to estimate
|w(t)| 2 w(t) − |w + | 2 w + L ∞ ξ { w(t) 2 L ∞ ξ + w + 2 L ∞ ξ } w(t) − w + L ∞ ξ ε 5 t − 1 10 , which yields ∞ ε 1 2t(2t+1) |w(t)| 2 w(t) − |w + | 2 w + dt L ∞ ξ ε 5 ∞ ε 1 2t(2t+1) t − 1 10 dt ε 5 | log ε| = O(ε 4 ).
As ∫_ε^∞ dt/(2t(2t+1)) = (1/2) log(1 + 1/(2ε)), we conclude that (4.7) holds. Collecting the estimates so far, we have found
(4.3) = iû₀(ξ) + (1/2) log(1 + 1/(2ε)) |w₊(ξ)|² w₊(ξ) + O(ε⁴). (4.8)
We turn to the terms in (4.4)-(4.5). We first show that the phase exp{iB(t)} can be removed up to errors that are higher order in ε (at the price of logarithmic time growth). In particular, we have
e iB(t) − 1 L ∞ ξ B(t) L ∞ ξ t 0 f (s) 2 L ∞ ξ ds 2s+1 ε 2 log t .
(4.9)
We now use (4.9) to show that
(4.4) + (4.5) = ∞ ε G t [f (t), f (t), f (t)](ξ) dt + ∞ ε F [e −it∆ {a|u(t)| 2 u(t)}](ξ) dt + O(ε 4 ) (4.10)
uniformly in ξ. To this end, we will verify the following two estimates:
∞ ε log t G t [f (t), f (t), f (t)] L ∞ ξ dt ε 14 5 , (4.11) ∞ ε log t F [e −it∆ {a|u(t)| 2 u(t)}] L ∞ ξ dt ε 3 . (4.12)
Using Lemma 3.1, we first have
|(4.11)| ∞ ε |t| −1− 1 5 log t |η| 1 5 |σ| 1 5 |F −1 2 {G ξ [f (t), f (t), f (t)]}(η, σ)| dη dσ ∞ ε |t| −1− 1 5 log t f (t) 3 H 0,1 dt ε 3 ∞ ε |t| −1− 1 5 t 3δ log t dt ε 14 5 .
Next, by Hausdorff-Young and Lemma 2.1,
|(4.12)| ∞ ε log t a|u(t)| 2 u(t) L 1 dt ∞ ε log t a L 1 u(t) 3 L ∞ dt ε 3 ∞ ε log t t − 3 2 dt ε 3 .
Combining the preceding estimates with (4.9), we derive (4.10). We now analyze each term in (4.10) more closely. We show that up to acceptable errors, we may replace the full solution with its initial data:
Lemma 4.2. The following approximations hold. First, ∞ ε G t [f (t), f (t), f (t)] dt = ∞ ε G t [u 0 , u 0 , u 0 ] dt + O(ε 4 ) (4.13)
in L^∞_ξ. Next, for any test function ψ,
∫_ε^∞ ⟨F[e^{−it∆}{a|u|²u}], ψ⟩ dt = ∫_ε^∞ ⟨a|e^{it∆}u₀|² e^{it∆}u₀, e^{it∆}ψ̌⟩ dt + O(ε⁴). (4.14)
Proof. We begin with (4.13). Writing
f (t) = u 0 + t 0 ∂ s f (s) ds,
we find that it suffices to prove that
∞ ε G t g, h, t 0 ∂ s f (s) ds dt = O(ε 4 ) in L ∞ ξ , where g, h ∈ u 0 , t 0 ∂ s f ds .
For each such term, we use Lemma 3.1 to estimate
∞ ε G t g, h, t 0 ∂ s f (s) ds (ξ) dt ∞ ε |t| −1− 1 5 |η| 1 5 |σ| 1 5 F −1 2 G ξ g, h, t 0 ∂ s f (s) ds (η, σ) dη dσ dt ∞ ε |t| −1− 1 5 g H 0,1 h H 0,1 t 0 ∂ s f (s) ds H 0,1 dt uniformly in ξ.
Noting that the estimates in the preceding section imply
x t 0 ∂ s f (s) ds L 2 t 3δ ε 3 ,
we see that
g H 0,1 + h H 0,1 ε + ε 3 t 3δ .
It follows that
∞ ε G t g, h, t 0 ∂ s f (s) ds L ∞ ξ dt ∞ ε |t| −1− 1 5 {ε 5 t 3δ + ε 9 t 9δ } dt ε 24 5 ,
which is acceptable. We turn to (4.14). Fixing a test function ψ, we see that it suffices to prove
∫_ε^∞ ⟨a[|u(t)|²u(t) − |e^{it∆}u₀|² e^{it∆}u₀], e^{it∆}ψ̌⟩ dt = O(ε⁴).
To prove this, we first note that by the Duhamel formula for (1.1), we have
u(t) − e^{it∆}u₀ = N(t) := −i ∫₀^t e^{i(t−s)∆}[(1 + a)|u|²u](s) ds.
Thus, by the dispersive estimate, Sobolev embedding, unitarity of e^{it∆}, and Lemma 2.1,
we have ∞ ε a[|u(t)| 2 u(t) − |e it∆ u 0 | 2 e it∆ u 0 ], e it∆ψ dt ∞ 0 a |u(t)| 2 + |e it∆ u 0 | 2 |u(t) − e it∆ u 0 | · e it∆ψ L 1 x dt ∞ 0 a L 2 x N (t) L 2 x u(t) 2 L ∞ x + e it∆ u 0 2 L ∞ x e it∆ψ L ∞ x dt a ε 2 ∞ 0 t − 3 2 ψ H 1,1 t 0 1 + a L ∞ x |u(s)| 2 u(s) L 2 x ds dt a,ψ ε 2 ∞ 0 t − 3 2 t 0 u(s) 2 L ∞ u(s) L 2 ds dt a,ψ ε 5 ∞ 0 t − 3 2 t 0 s −1 ds dt a,ψ ε 5 ∞ 0 t − 3 2 log t dt a,ϕ ε 5 ,
which is acceptable.
We return to the expansion for w + given in (4.3)-(4.5) and pair the expression withφ. We insert (4.8) for (4.3) and combine (4.10) with Lemma 4.2 to replace the terms (4.4)-(4.5). Recalling u 0 = εϕ, this yields
S a (εϕ),φ = ε φ,φ + 1 2i log(1 + 1 2ε ) |w + | 2 w + ,φ − iε 3 ∞ ε G t [ϕ, ϕ, ϕ],φ dt − iε 3 ∞ ε a(x)|e it∆ ϕ(x)| 4 dx dt + O(ε 4 ).
Comparing the identity above with (4.1), we see that to complete the proof of Proposition 4.1 it suffices to verify the following:
ε 0 R a(x)|e it∆ ϕ(x)| 4 dx dt = O(ε), (4.15) ∞ ε 1 i G t [ϕ, ϕ, ϕ],φ dt = Q ε [ϕ],(4.16)
where Q ε is as in (4.2). The estimate (4.15) follows from the straightforward bound ε 0 a(x)|e it∆ ϕ(x)| 4 dx dt ε a L ∞ e it∆ ϕ 4
L ∞ t L 4 x a ε ϕ 4 H 1 ,
where we have applied Sobolev embedding and the unitarity of e^{it∆}.
The identity (4.16) follows from a straightforward calculation: recalling the definition in (4.6) and the identity in (3.12), we have [e −i ησ 2t − 1]ϕ(z − η)φ(z)ϕ(z − σ)φ(ξ)e iξ[η+σ−z] dz dξ dη dσ dt
= ∞ ε 1 2it [e −i ησ 2t − 1]ϕ(z − η)φ(z)ϕ(z − σ)φ(z − η − σ) dz dη dσ dt = Q ε [ϕ],
as desired.
We now turn to the proof of our main result, Theorem 1.4.
Proof of Theorem 1.4. We let a and b be admissible in the sense of Definition 1.1 and suppose that the modified scattering maps S a and S b agree on their common domain. We now fix ϕ ∈ S and sufficiently small ε > 0 and apply the main identity (4.1) in Proposition 4.1 to both S a (εϕ) and S b (εϕ). As S a (εϕ) = S b (εϕ), this implies
∞ 0 R a(x)|e it∆ ϕ(x)| 4 dx dt = ∞ 0 R b(x)|e it∆ ϕ(x)| 4 dx dt + O(ε)
for any ε > 0. It follows that ∫₀^∞ ∫_R (a − b)(x)|e^{it∆}ϕ(x)|⁴ dx dt = 0 for all ϕ ∈ S, so it suffices to show that (4.17) forces a ≡ 0. Specializing to Gaussian data ϕ(x) = exp{−x²/4}, for which e^{it∆}ϕ(x) = (1 + it)^{−1/2} exp{−x²/(4(1+it))} (see [34]), and writing K_ϕ(x) = ∫₀^∞ |e^{it∆}ϕ(x)|⁴ dt, we find, in particular,
K_ϕ(x) = ∫₀^∞ (1 + t²)^{−1} exp{−x²/(1+t²)} dt.
Now suppose that (4.17) holds. Then, by translation invariance for the linear Schrödinger equation, we have that ∫_R a(x) K_ϕ(x − x₀) dx = 0 for all x₀ ∈ R.
Thus, to deduce that a ≡ 0, it suffices to verify that K̂_ϕ ≠ 0 almost everywhere. In fact, for ξ ≠ 0, we can compute K̂_ϕ(ξ) explicitly as a Gaussian integral:
K̂_ϕ(ξ) = ∫₀^∞ (1 + t²)^{−1} ∫_R exp{−ixξ − x²/(1+t²)} dx dt = √π ∫₀^∞ (1 + t²)^{−1/2} exp{−ξ²(1+t²)/4} dt.
As K̂_ϕ(ξ) is the integral of a positive function, it is strictly positive for every ξ ≠ 0, and the result follows.
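As a numerical cross-check (our own sketch, not part of the paper's argument): truncating both t-integrals at the same height T makes the two expressions for the Fourier transform equal by Fubini, so a direct quadrature comparison, together with positivity, can be verified by computer. The grid sizes and sample frequencies below are arbitrary choices:

```python
import math

# Kernel K(x) = int_0^T (1+t^2)^{-1} exp(-x^2/(1+t^2)) dt for Gaussian data,
# with the SAME truncation height T used on both sides of the comparison.
T, L, nt, nx = 5.0, 20.0, 400, 1600
dt, dx = T / nt, 2 * L / nx
ts = [(i + 0.5) * dt for i in range(nt)]        # midpoint nodes in t
xs = [-L + (j + 0.5) * dx for j in range(nx)]   # midpoint nodes in x

K = [sum(math.exp(-x * x / (1 + t * t)) / (1 + t * t) for t in ts) * dt
     for x in xs]

def K_hat_quadrature(xi):
    # int e^{-i xi x} K(x) dx; K is even, so the transform is real.
    return sum(math.cos(xi * x) * k for x, k in zip(xs, K)) * dx

def K_hat_closed(xi):
    # Integrating the Gaussian in x first gives
    # sqrt(pi) * int_0^T (1+t^2)^{-1/2} exp(-xi^2 (1+t^2)/4) dt.
    return math.sqrt(math.pi) * sum(
        math.exp(-xi * xi * (1 + t * t) / 4) / math.sqrt(1 + t * t)
        for t in ts) * dt

for xi in (0.5, 1.0, 2.0):
    lhs, rhs = K_hat_quadrature(xi), K_hat_closed(xi)
    assert rhs > 0                       # manifestly positive Gaussian integral
    assert abs(lhs - rhs) < 1e-2 * rhs   # the two computations agree
```

The strict positivity of the closed-form expression is what rules out zeros of K̂_ϕ and hence forces a ≡ 0 above.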
Theorem 1.2 (Modified scattering). Let a : R → R be admissible in the sense of Definition 1.1. If ‖u₀‖_{H^{1,1}} is sufficiently small, then there exists a unique forward-global solution u to (1.1) and w₊ ∈ L^∞_ξ such that
lim_{t→∞} ‖ exp{ i ∫₀^t |F e^{−is∆}u(s)|² ds/(2s+1) } F e^{−it∆}u(t) − w₊ ‖_{L^∞_ξ} = 0. (1.2)
Using Theorem 1.2, we may define the modified scattering map.
Definition 1.3 (Modified scattering map). Let a be admissible in the sense of Definition 1.1. Given ε > 0, define B_ε = {u₀ ∈ H^{1,1} : ‖u₀‖_{H^{1,1}} < ε}.
Theorem 1.4 (The modified scattering map determines the nonlinearity). Suppose a and b are admissible in the sense of Definition 1.1. Let S_a : B → L^∞ and S_b : B′ → L^∞ denote the corresponding modified scattering maps. If S_a = S_b on B ∩ B′, then a ≡ b.
[…] |f(z−η) h(z−σ) g(z)| dz dσ dη uniformly in ξ. The result now follows from the fact that for any 0 < c < 1/2, […]
[…] is a linear combination of terms of the form G_ξ[xf, f, f]. After distributing the derivative, we can use the identity xf(s) = x e^{−is∆} u(s) = e^{−is∆} J(s) u(s) (cf. (2.[…]
Proposition 4.1 (Structure of S_a). Let a be admissible in the sense of Definition 1.1. Let ϕ ∈ S(R) and ε > 0 be sufficiently small. Let u : [0, ∞) × R → C be the solution to (1.1) with u|_{t=0} = εϕ. Then […]
(4.3) = i û_0(ξ) + (1/2) log(1 + 1/(2ε)) |w_+(ξ)|² w_+(ξ) + O(ε⁴).  (4.8)
[…] ⟨F[e^{−it∆}{a|u|²u}], ψ⟩ dt = ∫_ε^∞ ⟨a |e^{it∆}u_0|² e^{it∆}u_0, e^{it∆}ψ̃⟩ dt + O(ε⁴).  (4.14)
[…]|²u − |e^{it∆}u_0|² e^{it∆}u_0], e^{it∆}ψ̃⟩ dt = O(ε⁴). To prove this, we first note that by the Duhamel formula for (1.1), we have u(t) − e^{it∆}u_0 = N(t) := −i ∫_0^t e^{i(t−s)∆}[(1 + a)|u|²u](s) ds.
[…] a(x) |e^{it∆}ϕ(x)|⁴ dx dt for all ϕ ∈ S. Thus the proof of Theorem 1.4 reduces to showing that if a is admissible in the sense of Definition 1.1 and

∫_0^∞ ∫_R a(x) |e^{it∆}ϕ(x)|⁴ dx dt = 0 for all ϕ ∈ S,  (4.17)

then a ≡ 0. Given ϕ ∈ S, we define the function K_ϕ(x) = ∫_0^∞ |e^{it∆}ϕ(x)|⁴ dt and first prove that K_ϕ ∈ L². To see this, we use Minkowski's integral inequality followed by the dispersive estimate and Sobolev embedding to estimate
[1] A. Sá Barreto, G. Uhlmann, and Y. Wang, Inverse scattering for critical semilinear wave equations. Pure Appl. Anal. 4 (2022), no. 2, 191-223.
[2] A. Sá Barreto and P. Stefanov, Recovery of a cubic non-linearity in the wave equation in the weakly non-linear regime. Comm. Math. Phys. 392 (2022), no. 1, 25-53.
[3] R. Carles and I. Gallagher, Analyticity of the scattering operator for semilinear dispersive equations. Comm. Math. Phys. 286 (2009), no. 3, 1181-1209.
[4] G. Chen and F. Pusateri, The 1-dimensional nonlinear Schrödinger equation with a weighted L^1 potential. Anal. PDE 15 (2022), no. 4, 937-982.
[5] G. Chen and F. Pusateri, On the 1d cubic NLS with a non-generic potential. Preprint arXiv:2205.01487.
[6] P. Deift and X. Zhou, Long-time asymptotics for solutions of the NLS equation with initial data in a weighted Sobolev space. Dedicated to the memory of Jürgen K. Moser. Comm. Pure Appl. Math. 56 (2003), no. 8, 1029-1077.
[7] V. Enss and R. Weder, The geometrical approach to multidimensional inverse scattering. J. Math. Phys. 36 (1995), no. 8, 3902-3921.
[8] N. Hayashi and P. Naumkin, Asymptotics for large time of solutions to the nonlinear Schrödinger and Hartree equations. Amer. J. Math. 120 (1998), no. 2, 369-389.
[9] C. Hogan, J. Murphy, and D. Grow, Recovery of a cubic nonlinearity for the nonlinear Schrödinger equation. J. Math. Anal. Appl. 522 (2023), no. 1, Article 127016.
[10] M. Ifrim and D. Tataru, Global bounds for the cubic nonlinear Schrödinger equation (NLS) in one space dimension. Nonlinearity 28 (2015), no. 8, 2661-2675.
[11] J. Kato and F. Pusateri, A new proof of long-range scattering for critical nonlinear Schrödinger equations. Differential Integral Equations 24 (2011), no. 9-10, 923-940.
[12] R. Killip, J. Murphy, and M. Visan, The scattering map determines the nonlinearity. To appear in Proc. Amer. Math. Soc. Preprint arXiv:2207.02414.
[13] Z. Lee and X. Yu, A note on recovering the nonlinearity for generalized higher-order Schrödinger equations. Preprint arXiv:2303.06312.
[14] H. Lindblad and A. Soffer, Scattering and small data completeness for the critical nonlinear Schrödinger equation. Nonlinearity 19 (2006), no. 2, 345-353.
[15] S. Masaki, J. Murphy, and J. Segata, Modified scattering for the one-dimensional cubic NLS with a repulsive delta potential. Int. Math. Res. Not. IMRN 2019, no. 24, 7577-7603.
[16] C. S. Morawetz and W. A. Strauss, On a nonlinear scattering operator. Comm. Pure Appl. Math. 26 (1973), 47-54.
[17] J. Murphy, A review of modified scattering for the 1d cubic NLS. In Harmonic analysis and nonlinear partial differential equations, 119-146, RIMS Kokyuroku Bessatsu B88, Res. Inst. Math. Sci. (RIMS), Kyoto, 2021.
[18] J. Murphy, Recovery of a spatially-dependent coefficient from the NLS scattering map. Preprint arXiv:2209.07680.
[19] I. Naumkin, Sharp asymptotic behavior of solutions for cubic nonlinear Schrödinger equations with a potential. J. Math. Phys. 57 (2016), no. 5, 051501, 31 pp.
[20] I. Naumkin, Nonlinear Schrödinger equations with exceptional potentials. J. Differential Equations 265 (2018), no. 9, 4575-4631.
[21] B. Pausader and W. A. Strauss, Analyticity of the nonlinear scattering operator. Discrete Contin. Dyn. Syst. 25 (2009), no. 2, 617-626.
[22] H. Sasaki, The inverse scattering problem for Schrödinger and Klein-Gordon equations with a nonlocal nonlinearity. Nonlinear Anal. 66 (2007), 1770-1781.
[23] H. Sasaki, Inverse scattering for the nonlinear Schrödinger equation with the Yukawa potential. Comm. Partial Differential Equations 33 (2008), no. 7-9, 1175-1197.
[24] H. Sasaki and M. Watanabe, Uniqueness on identification of cubic convolution nonlinearity. J. Math. Anal. Appl. 309 (2005), no. 1, 294-306.
[25] W. A. Strauss, Nonlinear scattering theory. In Scattering Theory in Mathematical Physics, edited by J. A. Lavita and J. P. Marchand. D. Reidel, Dordrecht, Holland/Boston, 1974, pp. 53-178.
[26] M. Watanabe, Inverse scattering for the nonlinear Schrödinger equation with cubic convolution nonlinearity. Tokyo J. Math. 24 (2001), no. 1, 59-67.
[27] M. Watanabe, Time-dependent method for non-linear Schrödinger equations in inverse scattering problems. J. Math. Anal. Appl. 459 (2018), no. 2, 932-944.
[28] R. Weder, Inverse scattering for the nonlinear Schrödinger equation. Comm. Partial Differential Equations 22 (1997), no. 11-12, 2089-2103.
[29] R. Weder, Inverse scattering for the non-linear Schrödinger equation: reconstruction of the potential and the non-linearity. Math. Methods Appl. Sci. 24 (2001), no. 4, 245-254.
[30] R. Weder, L^p-L^{p'} estimates for the Schrödinger equation on the line and inverse scattering for the nonlinear Schrödinger equation with a potential. J. Funct. Anal. 170 (2000), no. 1, 37-68.
[31] R. Weder, Inverse scattering for the nonlinear Schrödinger equation. II. Reconstruction of the potential and the nonlinearity in the multidimensional case. Proc. Amer. Math. Soc. 129 (2001), no. 12, 3637-3645.
[32] R. Weder, Inverse scattering for the non-linear Schrödinger equation: reconstruction of the potential and the non-linearity. Math. Methods Appl. Sci. 24 (2001), no. 4, 245-254.
[33] R. Weder, Multidimensional inverse scattering for the nonlinear Klein-Gordon equation with a potential. J. Differential Equations 184 (2002), no. 1, 62-77.
[34] M. Visan, Dispersive Equations. In Dispersive Equations and Nonlinear Waves, Oberwolfach Seminars 45, Birkhäuser/Springer Basel AG, Basel, 2014.
Georgia Institute of Technology
Email address: [email protected]
Missouri University of Science & Technology
Email address: [email protected]
Topics in the Haystack: Extracting and Evaluating Topics beyond Coherence
Anton Thielmann [email protected]
Chair of Data Science and Applied Statistics
TU Clausthal
Germany
Quentin Seifert
Chair of Spatial Data Science and Statistical Learning
University of Göttingen
Germany
Arik Reuter
Chair of Data Science and Applied Statistics
TU Clausthal
Germany
Elisabeth Bergherr
Chair of Spatial Data Science and Statistical Learning
University of Göttingen
Germany
Benjamin Säfken
Chair of Data Science and Applied Statistics
TU Clausthal
Germany
Extracting and identifying latent topics in large text corpora has gained increasing importance in Natural Language Processing (NLP). Most models, whether probabilistic models similar to Latent Dirichlet Allocation (LDA) or neural topic models, follow the same underlying approach of topic interpretability and topic extraction. We propose a method that incorporates a deeper understanding of both sentence and document themes, and goes beyond simply analyzing word frequencies in the data. This allows our model to detect latent topics that may include uncommon words or neologisms, as well as words not present in the documents themselves. Additionally, we propose several new evaluation metrics based on intruder words and similarity measures in the semantic space. We present correlation coefficients with human identification of intruder words and achieve near-human level results at the word-intrusion task. We demonstrate the competitive performance of our method with a large benchmark study, and achieve superior results compared to state-of-the-art topic modeling and document clustering models.
1 Introduction
Identifying latent topics in large text corpora is a central task in Natural Language Processing (NLP). With the ever-growing availability of textual data in virtually all languages and about every possible topic, automated topic extraction is gaining increasing importance. Hence, the approaches are manifold. For almost all models, a topic is intuitively defined by a set of words, with each word having a probability of occurrence for the given topic. Different topics can share words, and a document can be linked to more than one topic. Generative probabilistic models, such as probabilistic latent semantic analysis (PLSA) [23] and Latent Dirichlet Allocation (LDA) [12], are still widely used and have inspired multiple adaptations, e.g. [1,11,14,38,41], all drawing heavily on word co-occurrences. Due to its popularity and generally good performance on benchmark datasets, the interpretation of a topic from LDA is seldom challenged. Neural topic models, e.g. [18,52], further improve upon the existing methods by integrating word embeddings or variational autoencoders [45] into the modeling approach, but still rely heavily on the ideas from [12].
New methods that challenge the typical idea of topic modeling also integrate word- and document-embeddings [4,20,43]. However, improvement over the current state of the art is usually measured in terms of performance as determined by evaluation metrics on standard benchmark datasets. While older models were still evaluated using likelihood-based perplexity metrics [28,29,41], empirical results showed a negative correlation between perplexity-based metrics and human evaluation of a topic model [13]. Additionally, Chang et al. [13] first introduced the idea of intruder words. According to this idea, a topic is considered coherent, or simply put, good, if a randomly chosen word not belonging to that topic can clearly be identified by humans. As human evaluation of models is cost- and time-intensive, researchers used new evaluation methods that correlated with human evaluation [30,37]. Hoyle et al. [24] even found no contemporary model at all that used human feedback as a form of model evaluation. Newer models were hence evaluated using coherence scores [4,18,20,43,45]. However, Hoyle et al. [24] found severe flaws in coherence scores. First, they find that coherence scores exaggerate differences between models; second, they validate the findings from Bhatia et al. [6] and find much lower Pearson correlations between automated coherence scores and human evaluation as compared to [30].
We identify two shortcomings in the current state-of-the-art in topic modelling. The first is the significant gap in validated automatic evaluation methods for topic models. The second stems from the continued reliance on evaluation methods based on word co-occurrences and outdated definitions of topics from older models. Current methods rely on limited corpora from which the topic representations are created. However, integrating larger corpora into the modeling process can enhance topic quality by including contextually relevant words that were missing from the original corpus.
Contributions The contributions of this paper are hence twofold and can be summarized as follows:
- We propose the Context Based Topic Model (CBTM) that, with only a few adaptations, integrates linguistic ideas into its modeling. Soft-clustering on the document level is integrated, such that P(document | topic) is modeled.
- We introduce new topic modeling performance metrics. The proposed metrics are validated by demonstrating impressive correlations with human judgement.
- We conduct a benchmark study comparing the presented approach to state-of-the-art topic modeling and document clustering methods and outperform common benchmark models on both coherence scores and the presented new metrics for topic evaluation.
The remainder of the paper is structured as follows: First, a short introduction into the used linguistic ideas and the definition of topics is presented. Second, the method of extracting latent topics from documents, incorporating the aforementioned definitions, is presented. Third, new evaluation metrics are introduced and validated by presenting correlations with human annotators. Fourth, the proposed model is applied to two common data sets and compared with state-of-the-art topic models. Finally, a discussion of the limitations as well as a conclusion is given in sections 6 and 7.

Fig. 1. The word best representing a sentence (or document) does not necessarily need to be included in that text. The figure represents a New York Times headline from the financial crisis in 2009: "Lehman had to die so Global Finance could live". All words present in that text and additional words are mapped into a high-dimensional feature space. The dimensions are reduced to visually demonstrate that words not occurring in that sentence, e.g. "banking crisis", are better suited to summarize that sentence than words present in the sentence, e.g. "global".
2 On the Nature of Topics

While there have been numerous approaches to extracting latent topics from large text corpora, little effort has been made in adapting those models to more refined definitions of a topic. We propose a topic model that follows ideas from linguistic definitions of topics [16,17]. We present two ideas from linguistic theory in order to construct more humanly interpretable topics:
i) A word that most accurately expresses the topic of a document may not necessarily occur in that document.
ii) Only using nouns and noun phrases is more appropriate for representing understandable topics.
i) closely follows Guijarro [21]: "a topic is, above all, a textual category that is determined by the context and not by purely formal or structural aspects." Therefore, the topic of a document or even a sentence may go beyond the mere occurrence of all the words in that document. That is, a word that most accurately expresses the topic of a document may not necessarily occur in that document. We leverage a simple example from a New York Times headline to demonstrate that: "Lehman had to die so Global Finance could live"
That sentence pertains to the financial crisis and the collapse of the Lehman Brothers bank, but neither phrase is explicitly mentioned. A bag-of-words model that only considers words present in the document corpus would not be able to accurately capture the document's topic. Contextually relevant words, even if not present in the document, can provide better representations. Figure 1 shows the described example. Comparing the cosine distance in a reduced embedding space between the complete embedded sentence (TEXT) and each embedded word demonstrates how words and phrases not occurring in that text can be a meaningful summary of that text. "Banking crisis" is a more meaningful representation of the sentence than e.g. "global" and lies closer to the text in the semantic space.
Common topic models, such as [9,12,18,45], as well as document clustering methods, such as [4,20,43], face a limitation in that they only consider words that appear in the reference corpus when generating topic representations. This limitation can lead to incorrect topic interpretations, as shown in the example above. Through expanding the reference corpus and leveraging pre-trained embedding models, we make sure that "the indispensability of frame knowledge for understanding texts" [5] is accounted for.
ii) closely follows Beghto [5], according to whom one of the features of generalized titles is the absence of verbal forms. Following the idea that a title is the highest macroproposition of a textual unit [5], we apply this idea to the construction of topics and hence propose to only consider nouns and noun phrases for the proposed method of topic extraction.
3 Methodology
Let V = {w_1, ..., w_n} be the vocabulary of words and D = {d_1, ..., d_M} be a corpus, i.e. a collection of documents. Each document is a sequence of words d_i = [w_{i1}, ..., w_{i n_i}], where w_{ij} ∈ V and n_i denotes the length of document d_i. Further, let D = {δ_1, ..., δ_M} be the set of documents represented in the embedding space, such that δ_i is the vector representation of d_i, and let W = {ω_1, ..., ω_n} be the vocabulary's representation in the same embedding space. Hence, each word w_i, represented in the embedding space as ω_i ∈ R^L, has the same dimensionality L as a document vector δ_i ∈ R^L. There are different representations of topics, but mostly a topic t_k from a set of topics T = {t_1, ..., t_K} is represented as a discrete probability distribution over the vocabulary [12], such that t_k is often expressed as (φ_{k,1}, ..., φ_{k,n})^T with Σ_{i=1}^n φ_{k,i} = 1 for every k.
Based upon the idea expressed in section 2, we form clusters from the document embeddings D and subsequently extract topics t_k that represent these clusters best. Hence, after transforming the raw documents into document vectors, they are clustered. Due to the curse of dimensionality [2], we reduce the dimensions before clustering using UMAP [36], closely following [4] and [20]. However, we allow each document to belong to more than one cluster, resulting in document-topic matrices θ and word-topic matrices β, similar to LDA [12]. The documents are clustered with a Gaussian mixture model [40], as it not only allows for soft-clustering but also has the advantage of optimizing hyperparameters via, for instance, the Akaike information criterion or the Bayesian information criterion. As a result, CBTM, in contrast to [4,20,43], offers not only word-topic distributions but also document-topic distributions.
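The clustering stage described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: it substitutes scikit-learn's PCA for UMAP to keep the sketch dependency-light, and the function name `cluster_documents` is ours. The number of mixture components is chosen by minimising the BIC, mirroring the model-selection option mentioned in the text.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def cluster_documents(doc_embeddings, k_range=range(2, 8), n_components=5, seed=0):
    """Reduce document embeddings, then soft-cluster them with a Gaussian
    mixture model; the component count is selected by minimising the BIC.

    Returns the soft-assignment matrix theta (documents x clusters) and the
    fitted mixture model.
    """
    X = np.asarray(doc_embeddings, dtype=float)
    reduced = PCA(n_components=n_components, random_state=seed).fit_transform(X)
    best_gmm, best_bic = None, np.inf
    for k in k_range:
        gmm = GaussianMixture(n_components=k, random_state=seed).fit(reduced)
        bic = gmm.bic(reduced)
        if bic < best_bic:
            best_gmm, best_bic = gmm, bic
    theta = best_gmm.predict_proba(reduced)  # P(cluster | document); rows sum to 1
    return theta, best_gmm
```

Because `predict_proba` returns a full posterior over components, every document receives a soft assignment, in line with the document-topic matrix θ described above.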
Topic Extraction
To find the words that best represent the corpus' topics, we first extract the centroids of the k clusters, µ_k ∈ R^L, in the original embedding space. Second, we filter the given vocabulary for nouns and enhance this vocabulary with any specified external vocabulary of nouns, resulting in a new dictionary V̂ = {w_1, ..., w_n, w_{n+1}, ..., w_{n+z}}. The word vectors ω_i closest to µ_k in the embedding space are the words that represent cluster k's centroid best [4]; it can thus happen that a word w ∉ V represents a topic ideally, while always w ∈ V̂. To compute the words best representing a topic, we compute the cosine similarity between every word in V̂ and all cluster centroids in the embedding space. For a single word w, its embedding ω and a single cluster with centroid µ, we hence compute:
sim(ω, µ) = (ω · µ) / (‖ω‖ ‖µ‖),  (1)

where ω · µ = Σ_{i=1}^L ω_i µ_i and ‖ω‖ ‖µ‖ = √(Σ_{i=1}^L ω_i²) √(Σ_{i=1}^L µ_i²). L denotes the vectors' dimension in the feature space, which is identical for ω and µ.
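The word-ranking step in Eq. (1) can be sketched as follows; a minimal illustration assuming word embeddings and a cluster centroid are given as plain arrays (the helper name `top_words` is ours, not the authors'):

```python
import numpy as np

def top_words(word_embeddings, vocab, centroid, top_z=10):
    """Rank the (expanded) vocabulary by cosine similarity to a cluster
    centroid, as in Eq. (1), and return the top-Z (word, similarity) pairs."""
    W = np.asarray(word_embeddings, dtype=float)
    mu = np.asarray(centroid, dtype=float)
    sims = W @ mu / (np.linalg.norm(W, axis=1) * np.linalg.norm(mu))
    order = np.argsort(-sims)[:top_z]
    return [(vocab[i], float(sims[i])) for i in order]
```

Since the ranking runs over the expanded dictionary V̂, words that never occur in the corpus can still surface as topic representatives.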
To avoid having words in a topic that are semantically overly similar, e.g. economics and economy, each topic can be cleaned. The cosine similarity between the top Z words contained in a topic can be computed, and all words that exceed a certain threshold, e.g. 0.85, are removed in descending order of their similarity with the cluster's centroid. An additional advantage of the corpus expansion is the possibility to model documents in one language but create topics in a different language when using a multi-language embedding model.
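The cleaning step could be sketched like this, under the assumption that the candidate words arrive sorted by centroid similarity (best first); `clean_topic` is a hypothetical helper name and the 0.85 threshold is the example value from the text:

```python
import numpy as np

def clean_topic(words, embeddings, threshold=0.85):
    """Greedily drop words whose cosine similarity to an already kept,
    higher-ranked word exceeds the threshold. `words` is assumed to be
    sorted by similarity to the cluster centroid, best first."""
    E = np.asarray(embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)  # unit-normalise rows
    kept, kept_idx = [], []
    for i, w in enumerate(words):
        if all(float(E[i] @ E[j]) <= threshold for j in kept_idx):
            kept.append(w)
            kept_idx.append(i)
    return kept
```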
4 Evaluation
Given the described approach, we effectively lose any notion of co-occurrence-based coherence for model evaluation. The words best describing a cluster of documents, or topic, do not necessarily have to occur together often in documents. In fact, a word capturing the topic of a single document optimally does not necessarily have to be contained in that same document. Additionally, by enhancing the corpus, it is possible that neologisms are the words best representing a topic. Imagine, e.g., a set of documents being equally about software and hardware issues. The neologism softwarehardware would be an understandable and reasonable word describing that topic, but would perform poorly in any word-co-occurrence-based evaluation measure.

Fig. 2. The expressivity of a model is captured by averaging over the topic centroids' cosine similarity to the null space, defined as the centroid of all embedded stopwords. For visualization the vector dimensions are heavily reduced, but the overall expressivity is still visualized. Due to the dimensionality reduction, the axes are just labelled "X" and "Y" respectively. The visualized topics are created from the 20 Newsgroups data set with the CBTM method, together with a single topic, "would", created with an LDA model. Each topic's top word is annotated at the topic's position in the reduced embedding space.
Evaluation Metrics
For evaluation, we hence propose new measures that are not based on word co-occurrence, and use existing measures leveraging word embeddings [47]. We validate the intruder-based metrics by computing correlations with human annotations.
Topic Expressivity (EXPRS) First, we propose a novel measure inherently representing the meaningfulness of a topic. For that, we leverage stopwords, which are widely recognized to fulfill a grammatical purpose while transporting nothing about the meaning of a document [42,54]. Hence, we compute the vector embeddings of all stopwords and calculate a centroid embedding. Subsequently, we compute the cosine similarity between a topic centroid and the stopword centroid (see Figure 2).
The weighted topic vector centroid, γ_k, is computed by taking the top Z words and normalizing their weights, such that Σ_{i=1}^Z φ_{k,i} = 1. The complete vector is hence computed as γ_k = (1/Z) Σ_{i=1}^Z φ_{k,i} ω_i, and the overall metric, which we call the model's expressivity, where we sum over all K topics, is defined as:

EXPRS(γ, ψ) = (1/K) Σ_{i=1}^K sim(γ_i, ψ)  (2)
with ψ being the centroid vector representation of all stopwords. Note that γ_i ≠ µ_i, as µ_i is the centroid of the document cluster while γ_i is the centroid of topic t_i.

Fig. 3. The intruder word detection in the embedding space. A topic covering "religion" and an intruder word, "medicine", are plotted with heavily reduced dimensions, using a PCA. The intruder word clearly separates from the otherwise coherent topic, even in a two-dimensional space. Due to the dimension reduction, the axes are just labelled "X" and "Y" respectively. The topic is again created with the CBTM method on the 20 Newsgroups data set.
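The expressivity score of Eq. (2) can be sketched as follows, assuming topic word embeddings, their weights, and stopword embeddings are given as arrays (`expressivity` is our name for this sketch):

```python
import numpy as np

def _cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expressivity(topics_word_embs, topics_word_weights, stopword_embs):
    """Eq. (2): average cosine similarity between each weighted topic
    centroid gamma_k and the centroid psi of all embedded stopwords."""
    psi = np.asarray(stopword_embs, dtype=float).mean(axis=0)
    scores = []
    for W, phi in zip(topics_word_embs, topics_word_weights):
        W = np.asarray(W, dtype=float)
        phi = np.asarray(phi, dtype=float)
        phi = phi / phi.sum()                       # normalise the top-Z weights
        gamma = (phi[:, None] * W).sum(axis=0) / len(phi)
        scores.append(_cos(gamma, psi))
    return float(np.mean(scores))
```

A low score means the topic centroids sit far from the stopword "null space", i.e. the topics are expressive.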
Embedding Coherence (COH) A measure, generally introduced by Aletras and Stevenson [3] and reformulated by Fang et al. [19], resembling classical coherence scores, is constructed by computing the similarity between the top Z words in a topic. While Aletras and Stevenson [3] compute the word vectors using word co-occurrences, we follow Fang et al. [19] and use the created word embeddings. In contrast to classical coherence, we compute the similarity between every pair of top-Z words and do not implement a sliding-window approach. Hence, for Z words, we sum over Z(Z−1)/2 cosine similarities:

COH(t_k) = Σ_{i=1}^{Z−1} Σ_{j=i+1}^{Z} sim(ω_i, ω_j),  (3)

where the overall average coherence of a model is hence computed as:

(2 / (K(Z−1)Z)) Σ_{k=1}^K COH(t_k).
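A minimal sketch of Eq. (3), summing over all unordered pairs of top words (the helper name `embedding_coherence` is ours):

```python
import numpy as np
from itertools import combinations

def embedding_coherence(topic_embs):
    """Eq. (3): sum of pairwise cosine similarities between the top-Z
    word embeddings of a single topic (no sliding window)."""
    E = np.asarray(topic_embs, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)  # unit-normalise rows
    return float(sum(float(E[i] @ E[j]) for i, j in combinations(range(len(E)), 2)))
```

Averaging these topic scores with the prefactor 2/(K(Z−1)Z) then yields the model-level coherence.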
Word embedding-based Weighted Sum Similarity (WESS) A metric representing the diversity, or the similarity between the topics, of a topic model was introduced by [47] as the Word embedding-based Weighted Sum Similarity and is slightly adjusted for comparing models with a different number of topics as:

WESS(T) = (2 / ((K−1)K)) Σ_{i=1}^{K−1} Σ_{j=i+1}^{K} sim(γ_i, γ_j),  (4)

where γ_i represents the weighted topic centroid for topic i. While this metric certainly captures the similarity between topics, it also reflects the diversity of the model. Hence, if WESS(T) is close to 1, the model has created topics that are extremely similar to one another. Additionally, we propose three different new metrics, leveraging the idea of intruder words [13] and similarly integrating an idea of topic diversity. First, a metric that is based upon unweighted topic centroids.
Intruder Shift (ISH) Given the top Z words from a topic, we calculate the topic's unweighted centroid, denoted as γ̃_i. Subsequently, we randomly select a word from that topic and replace it with a randomly selected word from a randomly selected different topic. The centroid of the resulting words is again computed, denoted as γ̂_i. Given a coherent topic and generally diverse topics, one would expect a larger shift in the topic's centroid. Therefore we calculate the intruder shift of every topic and average over the number of topics:

ISH(T) = (1/K) Σ_{i=1}^K sim(γ̃_i, γ̂_i)  (5)
Hence, one would expect a coherent and diverse topic model to have a lower ISH score than an incoherent and non-diverse topic model.
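The intruder-shift computation of Eq. (5) can be sketched as follows; a minimal illustration in which `intruder_shift` is our hypothetical function name and the random draws use NumPy's generator API:

```python
import numpy as np

def intruder_shift(topics_word_embs, rng=None):
    """Eq. (5): for every topic, swap one randomly chosen top word for a
    random word from a different topic and compare the unweighted centroids
    before and after the swap via cosine similarity."""
    rng = np.random.default_rng(rng)
    K = len(topics_word_embs)
    sims = []
    for i in range(K):
        W = np.asarray(topics_word_embs[i], dtype=float)
        other = int(rng.choice([j for j in range(K) if j != i]))
        donor = np.asarray(topics_word_embs[other], dtype=float)
        intruder = donor[int(rng.integers(len(donor)))]
        perturbed = W.copy()
        perturbed[int(rng.integers(len(W)))] = intruder  # inject the intruder
        g, g_hat = W.mean(axis=0), perturbed.mean(axis=0)
        sims.append(float(g @ g_hat / (np.linalg.norm(g) * np.linalg.norm(g_hat))))
    return float(np.mean(sims))
```

For a coherent, diverse model the perturbed centroid moves noticeably, so the resulting similarity, and hence the ISH score, stays well below 1.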
Intruder Accuracy (INT)
The second intruder-word based metric follows the classical approach of identifying an intruder word more closely. Given the Z top words of a topic, we again randomly select an intruder word ω̂ from a randomly drawn topic. Subsequently, we calculate the cosine similarity for every possible pair of words within the set of the top Z words. Then we calculate the cosine similarity of each top word and the intruder ω̂. Finally, our metric reports the fraction of top words to which the intruder has the least similar word embedding:

INT(t_k) = (1/Z) Σ_{i=1}^Z 1(∀j : sim(ω_i, ω̂) < sim(ω_i, ω_j))  (6)
Hence we return the number of words from the set where the farthest word from them in the embedding space is the intruder word, divided by the number of words, Z, taken into account (See Figure 3 for a visualization).
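Eq. (6) can be sketched as follows; a minimal illustration with our hypothetical helper name `intruder_accuracy`:

```python
import numpy as np

def intruder_accuracy(topic_embs, intruder_emb):
    """Eq. (6): fraction of top words for which the least similar companion
    (among the other top words and the intruder) is the intruder itself."""
    E = np.asarray(topic_embs, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    w = np.asarray(intruder_emb, dtype=float)
    w = w / np.linalg.norm(w)
    Z = len(E)
    hits = 0
    for i in range(Z):
        sim_to_intruder = float(E[i] @ w)
        # the indicator of Eq. (6): the intruder must be strictly farther
        # from word i than every other top word
        if all(sim_to_intruder < float(E[i] @ E[j]) for j in range(Z) if j != i):
            hits += 1
    return hits / Z
```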
Average Intruder Similarity (ISIM) As a last metric, we propose the average cosine similarity between every word in a topic and an intruder word:
ISIM(t_k) = (1/Z) Σ_{i=1}^Z sim(ω_i, ω̂)  (7)
To account for any randomness induced in the metrics ISH, INT and ISIM by the random choice of a particular intruder from a particular topic, we propose to calculate those metrics multiple times with differently chosen random intruder words and subsequently average the results. Hence, the robustness against the specific selection of intruder words is increased.
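Eq. (7) and the repeated-draw averaging just described can be sketched together; the function names `isim` and `averaged_metric` are ours:

```python
import numpy as np

def isim(topic_embs, intruder_emb):
    """Eq. (7): average cosine similarity between each top word and an intruder."""
    E = np.asarray(topic_embs, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    w = np.asarray(intruder_emb, dtype=float)
    w = w / np.linalg.norm(w)
    return float(np.mean(E @ w))

def averaged_metric(metric, topics_embs, n_draws=20, rng=None):
    """Average an intruder-based metric over several random intruder draws,
    as suggested in the text, to damp the noise of any single draw."""
    rng = np.random.default_rng(rng)
    K = len(topics_embs)
    vals = []
    for _ in range(n_draws):
        k = int(rng.integers(K))                         # topic under test
        other = int(rng.choice([j for j in range(K) if j != k]))
        donor = topics_embs[other]
        intruder = donor[int(rng.integers(len(donor)))]  # random intruder word
        vals.append(metric(topics_embs[k], intruder))
    return float(np.mean(vals))
```

The same `averaged_metric` wrapper can be reused for ISH- or INT-style scores, since each only needs a topic and a sampled intruder.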
Validation of Metrics
To validate the intruder-word based evaluation metrics, we take the publicly available data from Chang et al. [13]. Similar to Lau et al. [30], we compute the metrics over all topics and all models provided in [13] for the 20 Newsgroups dataset. However, for clear interpretability, we exclude all words that include hyphens, due to the representations from [13]. Hence, we compute the metrics for 7,004 topics in total. For all metrics, we compute the accuracy with respect to the true intruder and the humanly detected intruder, as well as the Pearson-r. While the important measure here is the correlation with the human annotations, reporting the correlations with the true intruder word ensures that the metrics are not inherently biased towards machine selection. For the accuracy, we consider a pre-selected or human-selected intruder to be correctly identified if the score for this word is, respectively, the lowest or highest among all displayed top words. The results are shown in Table 1. For all results it must be noted that the human answers contain some ambiguity: as reported by Lau et al. [30], the Pearson-r between the human answers was 0.77. Hence, the result for INT, with a maximum correlation of 0.728, is highly credible and outperforms the reported correlations [30] for coherence evaluation metrics. Interestingly, ISIM performs best when considering the accuracy for the true intruder word, but significantly worse when considering the human-selected word. We find that, independent of the chosen model, the newly introduced metrics strongly outperform the results reported by Lau et al. [30] at the topic level, with reported Pearson correlations of around r = 0.6.
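The accuracy and correlation computations described above can be sketched as follows (a simplified illustration; the variable names are ours, not those of the evaluation code):

```python
import numpy as np

def intruder_accuracy(scores, intruder_idx, lower_is_intruder=True):
    """scores: (n_topics, n_displayed) metric score per displayed word.
    An intruder counts as correctly identified when its score is the
    lowest (similarity-style metrics) or highest among all words."""
    scores = np.asarray(scores, dtype=float)
    picked = scores.argmin(axis=1) if lower_is_intruder else scores.argmax(axis=1)
    return float(np.mean(picked == np.asarray(intruder_idx)))

def pearson_r(metric_scores, human_scores):
    # Pearson correlation between a metric and human judgements
    return float(np.corrcoef(metric_scores, human_scores)[0, 1])
```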
Results
To evaluate the proposed model, we compare the model results with different benchmark models. We also demonstrate the validity of our two hypotheses on corpus expansion and noun phrases stated in Section 2.
As comparison models, we use BERTopic [20] and Top2Vec [4] as closely related models and representatives of clustering-based topic models, LDA [12] as a model not leveraging pre-trained embeddings, CTM [9] as a generative probabilistic model leveraging pre-trained embeddings, a simple K-Means model closely following the architecture from [20] but replacing HDBSCAN with a K-Means clustering approach, ETM [18] leveraging word2vec [31], and NeuralLDA and ProdLDA [45]. All models are fit using the OCTIS framework [46]. Where applicable, the same pre-trained embedding model as for CBTM, all-MiniLM-L6-v2 [39], is used. Note that we perform extensive hyperparameter tuning for all models except for CBTM. A detailed description of the benchmark models' hyperparameters and the hyperparameter tuning can be found in the Appendix. As the corpus for expanding the reference corpus in CBTM for topic extraction, we use the Brown corpus taken from nltk [10], which we also use for filtering the vocabulary for noun phrases.

Table 1. Metric evaluation: accuracy and Pearson correlation with the reported true (Intruder) and humanly selected (Human) intruder word from Chang et al. [13] for all models and all topics on the 20 Newsgroups dataset. As embedding models we consider the Paraphrase-MiniLM-L6-v2 model [39], the All-MiniLM-L12-v2 model [53], the All-mpnet-base-v2 model [44], the Multi-qa-mpnet-base-dot-v1 model [44] and the All-distilroberta-v1 model [34], as well as a word2vec model pre-trained on the GoogleNews corpus and a GloVe model pre-trained on a Wikipedia corpus. The three best results for the human correlation and accuracy are marked in bold. The metric evaluation for different embedding models produces impressive results, given the correlation between participants of 0.77. The Paraphrase-MiniLM-L6-v2 model performs best considering INT and ISIM, closely followed by the GloVe model.
We compute the proposed metrics from Section 4, except for the ISH metric due to its inferior performance on the intruder word detection task (Table 1). Additionally, we compute normalized pointwise mutual information (NPMI) scores [30] with the input corpus as the reference corpus, as well as Topic Diversity (WESS) and Word-embedding Pairwise Coherence scores (COHPW) using the OCTIS framework [46]. All word-embedding based metrics are computed with the paraphrase-MiniLM-L6-v2 model [39] due to the results from Table 1, except for WESS and COHPW, where we use OCTIS' default pre-trained word2vec [31] model.

To confirm our two hypotheses from Section 2, that expanding the reference corpus and only considering nouns for topic extraction can increase topic quality, we perform several analyses. We compare the presented method with and without reference corpus expansion and with and without noun phrase filtering. The averaged results over three datasets can be seen in Table 2.

Table 2. Comparison of noun-based topic extraction vs. non-noun-based topic extraction for the CBTM model. The reported metrics are averaged over the results for three datasets: the 20 Newsgroups dataset, the BBC News dataset and the M10 dataset. All datasets are taken from OCTIS. All models are fitted using the all-MiniLM-L6-v2 model [39]. Given the results from Table 1, paraphrase-MiniLM-L6-v2 is used for the embedding-based evaluation metrics. We report the baseline metrics for a model not using an expanded corpus and using all word types, and report the differences to that baseline. We find that especially expanding the reference corpus leads to better topics, as reflected by nearly all metrics. As expected, the NPMI coherence scores are considerably worse when expanding the reference corpus; this is due to the fact that we used the original corpus the models were fit on as the NPMI coherence reference corpus. Additionally, we find that only considering nouns for topic words can increase the evaluation metrics, especially when we clean the topics.
Hypothesis I: Corpus Expansion Our results confirm the hypothesis that expanding the reference corpus leads to better topics, as depicted by nearly all metrics. Unsurprisingly, we find that the NPMI coherence scores, which only use the reference corpus for computing the coherence, decrease when expanding the reference corpus during topic extraction. Additionally, we find that using a smaller pre-trained model for computing the metrics, such as the word2vec [31] model leveraged for COHPW and WESS, also shows a decrease in performance when expanding the reference corpus. That is presumably due to the smaller vocabulary size used in these models.
Hypothesis II: Noun Phrases
We find that the noun-based models perform worse than the models that consider all word types, across the different embedding models used to construct the evaluation metrics. However, when cleaning the topics, topic quality increases when using only nouns as compared to using all word types. Additionally, we find that expanding the reference corpus and only considering nouns achieves better performance than no expansion with all word types.
Benchmarks For comparing CBTM with other models, we use two standard benchmark datasets, 20 Newsgroups and Reuters [33], as shown in Table 3. We fix the number of topics to the true numbers of topics, 20 and 90, respectively (see the Appendix for additional benchmarks on two further datasets). CBTM outperforms all models concerning INT, COH and COHPW for both datasets and all configurations. Additionally, CBTM performs well on topic diversity for the 20 Newsgroups dataset and on EXPRS for both datasets. Interestingly, it also performs very well concerning classical NPMI coherence scores for the 20 Newsgroups dataset when not expanding the reference corpus. As expected, the models closely related to CBTM also perform well on both datasets. However, while Top2Vec, BERTopic and the used K-Means model are closely related to the proposed CBTM, CBTM achieves much better results concerning all metrics. Interestingly, CTM performs very well on smaller datasets (see supplemental material for additional benchmarks). Additionally, our results do not confirm that models using a hard clustering approach perform considerably worse on a multi-label dataset (Reuters) compared to models that integrate soft clustering (see e.g. the CTM/ETM vs. Top2Vec/BERTopic results).
Conclusion
We develop a novel model for topic extraction that goes beyond the mere occurrence of words in the reference corpus. We are able to show that expanding the reference corpus improves model performance. Additionally, we can confirm that restricting the word types for topic extraction by only considering nouns can also lead to improved topic quality, under certain conditions. CBTM outperforms commonly used state-of-the-art topic models on multiple benchmark datasets, even though the comparison models underwent extensive hyperparameter tuning while no hyperparameter tuning was performed for CBTM (see supplemental material for details on the hyperparameter tuning).

Table 3. Benchmark results on the 20 Newsgroups and Reuters datasets. All models are fit using the all-MiniLM-L6-v2 pre-trained embedding model [39] where applicable. paraphrase-MiniLM-L6-v2 is used for the evaluation metrics ISIM, INT, TOP DIV and EXPRS. For the metrics available in OCTIS we use the default embeddings, which are word2vec embeddings pre-trained on the Google News corpus. Extensive hyperparameter tuning is performed for the comparison models (see Appendix). All models, except BERTopic and Top2Vec, are fit with a pre-specified number of 20 or 90 topics, respectively. BERTopic and Top2Vec detect the optimal number of topics automatically, hence we fit these models as intended by the authors. However, we additionally fit a K-Means model using the class-based tf-idf topic extraction method from BERTopic with 20 and 90 topics, respectively, and hierarchically reduce the number of topics in Top2Vec.
Given that almost all newly introduced topic models are evaluated automatically [24], automatic evaluation metrics are of utmost importance. Hoyle et al. [24] even postulated that automatic topic model evaluation is broken, as the currently used metrics have overall low correlations with human judgement of topic quality. We present multiple novel evaluation metrics, closely following state-of-the-art human evaluation of topic model quality, and achieve strong correlations with human evaluation. We greatly improve upon the correlation with human evaluation compared to the currently most often used metric, NPMI, achieving correlations of around r = 0.73 compared to NPMI correlations of r = 0.63. The proposed approach of using word embeddings and cosine similarity achieves impressive results given the overall lower agreement between human responses (Pearson-r = 0.77).
Additionally, we introduce a novel evaluation metric, based upon the centroid cluster of stopwords in the embedding space. Given the approach of enhancing the reference corpus, the described model might be especially useful when evaluating short texts or identifying sparsely represented topics in a corpus [48,49]. Through the inherent sparsity of the data, the words best describing a topic might not be included in the reference corpus and an enhancement could thus greatly improve the creation of topics.
Limitations
Automated evaluation of topic model quality is inherently difficult. That difficulty is considerably increased by the fact that there is no gold standard or even a ground truth for the quality of a topic. Chang et al. [13] introduced the reasonable approach of evaluating the coherence of a set of words with intruder words. However, one cannot expect 100% agreement between people when it comes to judging whether a word is an intruder word in a topic. The proposed evaluation metrics achieve impressive agreement with human annotations; they cannot, however, reflect human ambiguity or extreme subtlety in perceived topic quality. Additionally, as with all evaluation metrics based upon human evaluation, and hence on experimental results achieved with human participants, the metrics might reflect a selection bias (WEIRD) [22]. Further embedding models could be evaluated and tested, and larger human evaluation studies could be conducted.
Recent findings about the dominance of certain dimensions in transformer embeddings [51] suggest an inherent bias in transformer embeddings that could negatively affect similarity measures in the semantic space. Our results do not suggest that such a bias negatively influences the modeling results; however, this study does not look into these dimensionality effects, which could be the topic of further research.
Moreover, the creation of transformer models solely for the purpose of topic extraction, which emphasize, for example, the beginnings of passages due to their increased importance for the underlying topics [25,26], could greatly improve upon the existing methods.
A Supplemental Methodology
To make reading easier, we provide a full notation list; all used variables and their notation can be found in Table 4. All modeling steps of the proposed method are presented here in extensive form. First, the target corpus is embedded. This can be done using contextualized transformer embeddings, as e.g. Bianchi et al. [8] showed that contextualized embeddings can improve topic quality; however, approaches as used by Sia et al. [43], where every word is embedded singularly and the documents are represented as centroid vectors of all occurring words, are also possible. Second, the dimensions of the embedded documents, δ_i, are reduced due to the curse of dimensionality. Afterwards, the reduced embeddings, δ̂_i, are clustered, e.g. using a GMM such that soft clustering is possible, and the centroids for each document cluster, μ_k, are computed. Next, the corpus is filtered for nouns, and all nouns present in the corpus, supplemented by all nouns present in an expansion corpus, are embedded. Note that the same embedding procedure must be chosen as for the documents (see e.g. [4,20]). Then, the similarity between all candidate words and all document cluster centroids is computed. Based on the candidate embeddings and their similarity to the document clusters μ_k, the topic centroids γ_k are computed and, similar to LDA, we obtain a document-topic matrix, θ, and a word-topic matrix, β. Last, a cleaning step can be performed to remove overly similar words from the topics.
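The centroid and similarity steps described above can be sketched in numpy, assuming precomputed (reduced) document embeddings and soft cluster responsibilities; the function name and the responsibility-weighted mean are our illustrative choices, while the actual model obtains the responsibilities from a GMM on UMAP-reduced embeddings:

```python
import numpy as np

def topics_from_clusters(doc_emb, resp, cand_emb, top_z=10):
    """doc_emb: (M, d) reduced document embeddings (the delta_hat_i)
    resp:     (M, K) soft cluster responsibilities, rows sum to 1
    cand_emb: (V, d) embeddings of candidate (noun) words, possibly from
              an expanded corpus, embedded like the documents"""
    # mu_k: responsibility-weighted mean of each document cluster
    mu = (resp.T @ doc_emb) / resp.sum(axis=0)[:, None]
    # cosine similarity between every candidate word and every mu_k
    cand_n = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    mu_n = mu / np.linalg.norm(mu, axis=1, keepdims=True)
    beta = cand_n @ mu_n.T                    # (V, K) word-topic scores
    # top words per topic: candidates closest to the cluster centroid
    top = np.argsort(-beta, axis=0)[:top_z]   # (top_z, K) indices
    return mu, beta, top
```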
B Human Topic Evaluation
As automated evaluation of topic model quality is inherently difficult, designing good questionnaires and adequately operationalizing what researchers are interested in is critically important. Lund et al. [35] introduced a topic-word matching task, weighting and selecting answers from participants that have high confidence and performed well on test questions. This approach reduces ambiguity in answers, but also induces a bias towards highly confident participants and neglects the subtle differences in perceived quality from humans.
Newman et al. [37] chose a straightforward approach of letting humans rate the created topics' quality. Choosing a 3-point scale for model evaluation, however, can induce unreliability of responses [27]. Bhatia et al. [6,7] introduce a document-level topic model evaluation leveraging the intruder-topic task, also introduced in Chang et al. [13]; however, for direct annotation they also resort to a 3-point ordinal scale. Clark et al. [15] even question human judgement altogether; however, their questionnaire design not only does not provide a midpoint, but can additionally strongly induce a bias in preference due to a highly biasing follow-up question [15] (see e.g. [32]).
B.1 Additional Benchmark Results
In addition to the 20 Newsgroups and Reuters datasets, we fit all models on the M10 and BBC News datasets. Both datasets are taken from OCTIS [46]. CBTM again outperforms most other models on nearly all metrics. Interestingly, CTM achieves good results for the BBC News dataset, which is comparably small with fewer than 2,000 documents. For the M10 dataset, which is comprised of scientific papers and hence more difficult, we find that expanding the reference corpus strongly improves the model performance.
C Experimental Setup
For all tested models, we use the same pre-trained embedding model, all-MiniLM-L6-v2 [39], where applicable. NPMI coherence scores are calculated as presented in [30]. For the best possible comparison, we use the same dimensionality reduction for CBTM as is used in Top2Vec [4] and BERTopic [20]: we use UMAP [36] and reduce the dimensions to 5, explicitly using the same hyperparameters as in the mentioned models. The same is done for the simple K-Means model.
C.1 Hyperparameter Tuning
For CBTM we do not implement any form of hyperparameter tuning. The Gaussian Mixture Model is fit using scikit-learn's default parameters: the convergence threshold for the Expectation Maximization (EM) algorithm is 0.0001, each component has its own general covariance matrix, and 1e-6 is added to the covariance diagonals for regularization purposes. The maximum number of iterations of the EM algorithm is set to 100, and K-Means is used to initialize the weights.
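The configuration described above corresponds, as a sketch, to the following scikit-learn call; the number of components is illustrative, not part of the original setup:

```python
from sklearn.mixture import GaussianMixture

# GMM configuration as described above; 20 components chosen purely as an example
gmm = GaussianMixture(
    n_components=20,          # one mixture component per topic (illustrative)
    covariance_type="full",   # each component has its own general covariance matrix
    tol=1e-4,                 # EM convergence threshold
    reg_covar=1e-6,           # added to covariance diagonals for regularization
    max_iter=100,             # maximum number of EM iterations
    init_params="kmeans",     # K-Means initialization of the weights
)
```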
Hence, the results achieved by CBTM could be further optimized, e.g. by optimizing the GMM with respect to the Bayesian or Akaike Information Criterion. Additionally, the pre-trained embedding could be fine-tuned, which is true for all models leveraging pre-trained embeddings and could additionally improve the models' performance [8,50].
For LDA, ProdLDA, NeuralLDA, ETM and CTM, we optimize over various hyperparameters with Bayesian optimization as provided by the OCTIS package [46]. We use model perplexity, measured based on the evidence lower bound of a validation sample of documents, as the objective function, in order not to rely on metrics such as NPMI coherence or WESS that measure either cohesion or separation of topics. LDA is optimized over the parameters of the two symmetric Dirichlet priors on the topic-specific word distribution and the document-specific topic distribution. For ProdLDA, NeuralLDA and CTM, the learning rate, as well as the number of layers and the number of neurons per layer in the inference network, are considered. Finally, for ETM, we tune the learning rate, the number of hidden units in the encoder and the embedding size. Since BERTopic and Top2Vec are highly insensitive to different hyperparameter settings of the underlying HDBSCAN algorithm and also do not provide a way to measure the (marginal) likelihood of the data, we choose the default hyperparameters for those models.

Table 5. Benchmark results on the M10 dataset. All models are fit using the all-MiniLM-L6-v2 pre-trained embedding model [39] where applicable. paraphrase-MiniLM-L6-v2 is used for the evaluation metrics ISIM, INT, TOP DIV and EXPRS. For the metrics available in OCTIS we use the default embeddings, which are word2vec embeddings pre-trained on the Google News corpus. Extensive hyperparameter tuning is performed for the comparison models (see Appendix). All models, except BERTopic and Top2Vec, are fit with a pre-specified number of 10 topics. BERTopic and Top2Vec detect the optimal number of topics automatically, hence we fit these models as intended by the authors. However, we additionally fit a K-Means model using the class-based tf-idf topic extraction method from BERTopic with 10 topics and hierarchically reduce the number of topics in Top2Vec.
While finding the optimal hyperparameters for these models might improve their performance relative to the models for which we implemented hyperparameter tuning, the same is true for CBTM.

Table 6. Benchmark results on the BBC News dataset. All models are fit using the all-MiniLM-L6-v2 pre-trained embedding model [39] where applicable. paraphrase-MiniLM-L6-v2 is used for the evaluation metrics ISIM, INT, TOP DIV and EXPRS. For the metrics available in OCTIS we use the default embeddings, which are word2vec embeddings pre-trained on the Google News corpus. Extensive hyperparameter tuning is performed for the comparison models (see Appendix). All models, except BERTopic and Top2Vec, are fit with a pre-specified number of 10 topics. BERTopic and Top2Vec detect the optimal number of topics automatically, hence we fit these models as intended by the authors. However, we additionally fit a K-Means model using the class-based tf-idf topic extraction method from BERTopic with 5 topics and hierarchically reduce the number of topics in Top2Vec.

19 interpretation, truth, assert, argue, claim, consideration, logic, insist, complain, belief
20 secure, encryption, security, encrypt, privacy, protect, protection, scheme, enforcement, access

Table 7. The CBTM model fit on the 20 Newsgroups dataset. The reference corpus is expanded with the Brown corpus taken from the nltk package [10].
Coherence Measures
Fig. 1. The word best representing a sentence (or document) does not necessarily need to be included in that text. The figure shows a New York Times headline from the financial crisis in 2009: "Lehman had to die so Global Finance could live". All words present in that text and additional words are mapped into a high-dimensional feature space. The dimensions are reduced to visually demonstrate that words not occurring in that sentence, e.g. "banking crisis", are better suited to summarize it than words present in the sentence, e.g. "global".
21. Guijarro, A.J.M.: Towards a definition and hierarchization of topic. In: Talk and Text: Studies on Spoken and Written Discourse, ed. by A. Rothwell, A. Guijarro & J. Albentosa, pp. 97-116 (2000)
22. Henrich, J., Heine, S.J., Norenzayan, A.: Most people are not WEIRD. Nature 466(7302), 29-29 (2010)
23. Hofmann, T.: Unsupervised learning by probabilistic latent semantic analysis. Machine Learning 42(1), 177-196 (2001)
24. Hoyle, A., Goel, P., Hian-Cheong, A., Peskov, D., Boyd-Graber, J., Resnik, P.: Is automated topic model evaluation broken? The incoherence of coherence. Advances in Neural Information Processing Systems 34 (2021)
25. Kieras, D.E.: Initial mention as a signal to thematic content in technical passages. Memory & Cognition 8(4), 345-353 (1980)
26. Kieras, D.E.: Topicalization effects in cued recall of technical prose. Memory & Cognition 9(6), 541-549 (1981)
27. Krosnick, J.A.: Questionnaire design. In: The Palgrave Handbook of Survey Research, pp. 439-455. Springer (2018)
28. Lafferty, J., Blei, D.: Correlated topic models. Advances in Neural Information Processing Systems 18 (2005)
29. Larochelle, H., Lauly, S.: A neural autoregressive topic model. Advances in Neural Information Processing Systems 25 (2012)
30. Lau, J.H., Newman, D., Baldwin, T.: Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In: Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. pp. 530-539 (2014)
31. Le, Q., Mikolov, T.: Distributed representations of sentences and documents. In: International Conference on Machine Learning. pp. 1188-1196. PMLR (2014)
32. Lehman, D.R., Krosnick, J.A., West, R.L., Li, F.: The focus of judgment effect: A question wording effect due to hypothesis confirmation bias. Personality and Social Psychology Bulletin 18(6), 690-699 (1992)
33. Lewis, D.D.: Reuters-21578 text categorization collection data set (1997)
34. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
35. Lund, J., Armstrong, P., Fearn, W., Cowley, S., Byun, C., Boyd-Graber, J., Seppi, K.: Automatic evaluation of local topic quality. arXiv preprint arXiv:1905.13126 (2019)
36. McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
37. Newman, D., Lau, J.H., Grieser, K., Baldwin, T.: Automatic evaluation of topic coherence. In: Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. pp. 100-108 (2010)
38. Ramage, D., Hall, D., Nallapati, R., Manning, C.D.: Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora. In: Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. pp. 248-256 (2009)
39. Reimers, N., Gurevych, I.: Sentence-BERT: Sentence embeddings using siamese BERT-networks. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (Nov 2019), https://arxiv.org/abs/1908.10084
40. Reynolds, D.A.: Gaussian mixture models. Encyclopedia of Biometrics 741(659-663) (2009)
41. Rosen-Zvi, M., Griffiths, T., Steyvers, M., Smyth, P.: The author-topic model for authors and documents. arXiv preprint arXiv:1207.4169 (2012)
42. Salton, G.: Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information. Reading: Addison-Wesley (1989)
Table 4. Variable list

V: Vocabulary
D: Corpus
M: Number of documents in the corpus
d_i: Document i
w_i: Word i in V
ω_i: Word i represented in the embedding space
δ_i: Document i represented in the embedding space
δ̂_i: d_i represented in the reduced embedding space
t_k: Topic k
T: Set of topics
φ_{k,i}: Probability of word i in topic k
γ_k: Topic centroid vector of topic k
μ_k: Mean of document cluster k
θ: Document cluster/topic matrix
β: Word cluster/topic matrix
ψ: Null space/centroid of all stopwords
† HDBSCAN results with > 10 topics. * Only Nouns. + Expanded topic corpus.

Model       NPMI (↑)  COHPW (↑)  COH (↑)  TOP DIV (↑)  WESS (↓)  EXPRS (↓)  ISIM (↓)  INT (↑)
Kmeans      -0.108    0.063      0.254    0.940        0.354     0.458      0.149     0.320
BERTopic†   -0.318    0.056      0.231    0.628        0.424     0.514      0.165     0.219
Top2Vec†    -0.345    0.083      0.315    0.060        0.547     0.478      0.220     0.326
Top2Vec     -0.270    0.100      0.335    0.780        0.496     0.454      0.198     0.484
LDA         -0.176    0.035      0.244    0.830        0.330     0.440      0.208     0.177
ProdLDA     -0.251    0.074      0.222    0.970        0.425     0.508      0.170     0.220
NeuralLDA   -0.571    0.030      0.186    0.373        0.582     0.581      0.185     0.118
ETM         -0.204    0.044      0.255    0.330        0.591     0.500      0.268     0.151
CTM         -0.322    0.060      0.239    0.950        0.247     0.353      0.172     0.271
CBTM+       -0.8411   0.338      0.512    0.855        0.322     0.383      0.179     0.827
CBTM*       -0.5762   0.441      0.419    0.770        0.394     0.420      0.193     0.719
CBTM*+      -0.8033   0.358      0.451    0.825        0.339     0.395      0.166     0.818

† HDBSCAN results with > 5 topics. * Only Nouns. + Expanded topic corpus.

Topic  Words
1   game, league, player, play, baseball, sport, pitch, hockey, team, batf
2   application, program, software, workstation, code, window, file, programming, print, tool
3   bullet, firearm, weapon, attack, shoot, kill, action, armed, protect, protection
4   homosexual, homosexuality, sexual, insist, reject, accept, morality, contrary, disagree, oppose
5   machine, chip, circuit, electronic, hardware, equipment, device, computer, workstation, processor
6   vehicle, auto, engine, rear, tire, driver, truck, motor, wheel, bike
7   israeli, conflict, oppose, attack, peace, struggle, arab, turkish, armenian, kill
8   action, consideration, complain, oppose, bother, rule, issue, policy, insist, accept
9   complain, respond, response, consideration, suggestion, idea, bother, challenge, influence, accept
10  orbit, satellite, solar, planet, shuttle, mission, earth, rocket, moon, plane
11  mailing, mail, send, email, contact, message, telephone, address, customer, request
12  printer, print, font, format, digital, make, manufacture, manufacturer, machine, workstation
13  sell, sale, purchase, offer, brand, customer, supply, vendor, deal, price
14  send, inform, publish, message, newsgroup, reader, mailing, post, topic, mail
15  lose, result, score, loss, beat, challenge, division, note, gain, fall
16  belief, faith, doctrine, accept, truth, religion, notion, religious, trust, interpretation
17  hardware, computer, device, drive, machine, monitor, electronic, chip, shareware, modem
18  patient, complain, care, affect, effect, issue, treat, suffer, response, treatment
-0.868
0.088
0.333
1.000
0.297
0.490
0.139
0.667
BERTopic † -0.307
0.053
0.232
0.623
0.423
0.513
0.166
0.218
Top2Vec †
-0.339
0.082
0.314
0.059
0.542
0.477
0.218
0.329
TOP2Vec
-0.324
0.097
0.334
0.920
0.419
0.435
0.173
0.528
LDA
-0.150
0.029
0.208
0.840
0.447
0.480
0.202
0.098
ProdLDA
-0.290
0.050
0.212
0.960
0.484
0.541
0.171
0.199
NeuralLDA -0.460
0.077
0.190
1.000
0.574
0.558
0.170
0.136
ETM
-0.184
0.043
0.249
0.600
0.510
0.489
0.252
0.182
CTM
-0.299
0.050
0.232
1.000
0.236
0.369
0.148
0.241
CBTM
+
-0.851
0.351
0.456
0.810
0.368
0.444
0.186
0.701
CBTM *
0.055
0.440
0.402
0.765
0.433
0.518
0.202
0.602
CBTM * +
-0.772
0.373
0.403
0.795
0.397
0.474
0.181
0.656
†
See Table 4 in the Appendix for a complete variable and notation list.
The cosine similarity between the words "economy" and "economies", using the paraphrase-MiniLM-L6-v2 embedder [39], is for instance 0.9.
The word2vec model is trained on the GoogleNews corpus. The number of top words, Z, taken into account for the metrics EXPRS, COH, WESS, INT and ISIM is 10. For INT and ISIM, we randomly select an intruder word from a randomly selected topic 50 times and report the averages.
flda: matrix factorization through latent dirichlet allocation. D Agarwal, B C Chen, Proceedings of the third ACM international conference on Web search and data mining. the third ACM international conference on Web search and data miningAgarwal, D., Chen, B.C.: flda: matrix factorization through latent dirichlet allocation. In: Proceedings of the third ACM international conference on Web search and data mining. pp. 91-100 (2010)
On the surprising behavior of distance metrics in high dimensional space. C C Aggarwal, A Hinneburg, D A Keim, International conference on database theory. SpringerAggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International conference on database theory. pp. 420-434. Springer (2001)
Evaluating topic coherence using distributional semantics. N Aletras, M Stevenson, Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013)-Long Papers. the 10th International Conference on Computational Semantics (IWCS 2013)-Long PapersAletras, N., Stevenson, M.: Evaluating topic coherence using distributional semantics. In: Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013)- Long Papers. pp. 13-22 (2013)
D Angelov, arXiv:2008.09470Top2vec: Distributed representations of topics. arXiv preprintAngelov, D.: Top2vec: Distributed representations of topics. arXiv preprint arXiv:2008.09470 (2020)
Deep Structural Causal Models for Tractable Counterfactual Inference

Nick Pawlowski
Imperial College London

Daniel C. Castro
Imperial College London

Ben Glocker
Imperial College London
[email protected]
We formulate a general framework for building structural causal models (SCMs) with deep learning components. The proposed approach employs normalising flows and variational inference to enable tractable inference of exogenous noise variables-a crucial step for counterfactual inference that is missing from existing deep causal learning methods. Our framework is validated on a synthetic dataset built on MNIST as well as on a real-world medical dataset of brain MRI scans. Our experimental results indicate that we can successfully train deep SCMs that are capable of all three levels of Pearl's ladder of causation: association, intervention, and counterfactuals, giving rise to a powerful new approach for answering causal questions in imaging applications and beyond. The code for all our experiments is available at https://github.com/biomedia-mira/deepscm.
Introduction
Many questions in everyday life as well as in scientific inquiry are causal in nature: "How would the climate have changed if we'd had less emissions in the '80s?", "How fast could I run if I hadn't been smoking?", or "Will my headache be gone if I take that pill?". None of those questions can be answered with statistical tools alone, but require methods from causality to analyse interactions with our environment (interventions) and hypothetical alternate worlds (counterfactuals), going beyond joint, marginal, and conditional probabilities [1]. Even though these are natural lines of reasoning, their mathematical formalisation under a unified theory is relatively recent [2].
In some statistics-based research fields, such as econometrics or epidemiology, the use of causal inference methods has been established for some time [3,4]. However, causal approaches have been introduced into deep learning (DL) only very recently [5]. For example, research has studied the use of causality for disentanglement [6,7], causal discovery [8,9], and for deriving causality-inspired explanations [10,11] or data augmentations [12]. Causal DL models could be capable of learning relationships from complex high-dimensional data and of providing answers to interventional and counterfactual questions, although current work on deep counterfactuals is limited by modelling only direct cause-effect relationships [11] or instrumental-variable scenarios [13], or by not providing a full recipe for tractable counterfactual inference [14].
The integration of causality into DL research promises to enable novel scientific advances as well as to tackle known shortcomings of DL methods: DL is known to be susceptible to learning spurious correlations and amplifying biases [e.g. 15], and to be exceptionally vulnerable to changes in the input distribution [16]. By explicitly modelling causal relationships and acknowledging the difference between causation and correlation, causality becomes a natural field of study for improving the transparency, fairness, and robustness of DL-based systems [17,18]. Further, the tractable inference of deep counterfactuals enables novel research avenues that aim to study causal reasoning on a per-instance rather than population level, which could lead to advances in personalised medicine as well as in decision-support systems, more generally.
In this context, our work studies the use of DL-based causal mechanisms and establishes effective ways of performing counterfactual inference. Our main contributions are: 1) a unified framework for structural causal models using modular deep mechanisms; 2) an efficient approach to estimating counterfactuals by inferring exogenous noise via variational inference or normalising flows; 3) case studies exemplifying how to apply deep structural causal models and perform counterfactual inference. The paper is organised as follows: we first review structural causal models and discuss how to leverage deep mechanisms and enable tractable counterfactual inference. Second, we compare our work to recent progress in deep causal learning in light of Pearl's ladder of causation [19]. Finally, we apply deep structural causal models to a synthetic experiment as well as to modelling brain MRI scans, demonstrating the practical utility of our framework in answering counterfactual questions.
Deep Structural Causal Models
We consider the problem of modelling a collection of K random variables x = (x_1, . . . , x_K). By considering causal relationships between them, we aim to build a model that not only is capable of generating convincing novel samples, but also satisfies all three rungs of the causation ladder [19]. The first level, association, describes reasoning about passively observed data. This level deals with correlations in the data and questions of the type "What are the odds that I observe...?", which relates purely to marginal, joint, and conditional probabilities. Intervention concerns interactions with the environment. It requires knowledge beyond just observations, as it relies on structural assumptions about the underlying data-generating process. Characteristic questions ask about the effects of certain actions: "What happens if I do...?". Lastly, counterfactuals deal with retrospective hypothetical scenarios. Counterfactual queries leverage functional models of the generative processes to imagine alternative outcomes for individual data points, answering "What if I had done A instead of B?". Arguably, such questions are at the heart of scientific reasoning (and beyond), yet are less well-studied in the field of machine learning. The three levels of causation can be operationalised by employing structural causal models (SCMs), recapitulated in the next section.
Background on structural causal models
A structural causal model G := (S, P(ε)) consists of a collection S = (f_1, . . . , f_K) of structural assignments x_k := f_k(ε_k; pa_k) (called mechanisms), where pa_k is the set of parents of x_k (its direct causes), and a joint distribution P(ε) = ∏_{k=1}^K P(ε_k) over mutually independent exogenous noise variables (i.e. unaccounted sources of variation). As assignments are assumed acyclic, relationships can be represented by a directed acyclic graph (DAG) with edges pointing from causes to effects, called the causal graph induced by G. Every SCM G entails a unique joint observational distribution P_G(x), satisfying the causal Markov assumption: each variable is independent of its non-effects given its direct causes. It therefore factorises as P_G(x) = ∏_{k=1}^K P_G(x_k | pa_k), where each conditional distribution P_G(x_k | pa_k) is determined by the corresponding mechanism and noise distribution [1].
Crucially, unlike conventional Bayesian networks, the conditional factors above are imbued with a causal interpretation. This enables G to be used to predict the effects of interventions, defined as substituting one or multiple of its structural assignments, written as 'do( · · · )'. In particular, a constant reassignment of the form do(x k := a) is called an atomic intervention, which disconnects x k from all its parents and represents a direct manipulation disregarding its natural causes.
While the observational distribution relates to statistical associations and interventions can predict causal effects, SCMs further enable reasoning about counterfactuals. These are hypothetical retrospective interventions, given an observed outcome: 'What would x i have been if x j were different, given that we observed x?'. This type of question effectively offers explanations of the data, since we can analyse the changes resulting from manipulating each variable. Counterfactual queries can be mathematically formulated as a three-step procedure [2, Ch. 7]:
1. Abduction: Predict the 'state of the world' (the exogenous noise, ε) that is compatible with the observations, x, i.e. infer P_G(ε | x).
[Figure 1: Classes of deep causal mechanisms considered in this work: (a) invertible explicit likelihood; (b) amortised explicit likelihood; (c) amortised implicit likelihood. Bi-directional arrows indicate invertible transformations, optionally conditioned on other inputs (edges ending in black circles). Black and white arrowheads refer resp. to the generative and abductive directions, while dotted arrows depict an amortised variational approximation. Here, f_k is the forward model, e_k is an encoder that amortises abduction in non-invertible mechanisms, g_k is a 'high-level' non-invertible branch (e.g. a probabilistic decoder), and h_k is a 'low-level' invertible mapping (e.g. reparametrisation).]
2. Action: Perform an intervention (e.g. do(x_k := x̃_k)) corresponding to the desired manipulation, resulting in a modified SCM G̃ = G_{x; do(x̃_k)} = (S̃, P_G(ε | x)) [1, Sec. 6.4].

3. Prediction: Compute the quantity of interest based on the distribution entailed by the counterfactual SCM, P_{G̃}(x).
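The abduction–action–prediction procedure can be made concrete on a toy linear SCM (a hypothetical two-variable example for illustration, not a model from the paper): x_1 := ε_1 and x_2 := 2·x_1 + ε_2. Because both mechanisms are invertible in their noise, abduction is exact:

```python
def counterfactual(x1_obs, x2_obs, x1_cf):
    """Three-step counterfactual inference on the toy SCM
    x1 := e1, x2 := 2*x1 + e2 (both mechanisms invertible in the noise)."""
    # 1. Abduction: invert each mechanism to recover the exogenous noise.
    e1 = x1_obs
    e2 = x2_obs - 2.0 * x1_obs
    # 2. Action: replace the assignment of x1 by the constant x1_cf (do-operator).
    x1 = x1_cf
    # 3. Prediction: push the abducted noise through the modified model.
    x2 = 2.0 * x1 + e2
    return x1, x2
```

For instance, having observed (x_1, x_2) = (1.0, 2.5), abduction gives ε_2 = 0.5, so under do(x_1 := 3.0) the counterfactual outcome is x̃_2 = 6.5, rather than the interventional prediction obtained by resampling ε_2.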
With these operations in mind, the next section explores a few options for building flexible, expressive, and counterfactual-capable functional mechanisms for highly structured data.
Deep mechanisms
In statistical literature (e.g. epidemiology, econometrics, sociology), SCMs are typically employed with simple linear mechanisms (or generalised linear models, involving an output non-linearity). Analysts attach great importance to the regression weights, as under certain conditions these may be readily interpreted as estimates of the causal effects between variables. While this approach generally works well for scalar variables and can be useful for decision-making, it is not flexible enough to model higher-dimensional data such as images. Solutions to this limitation have been proposed by introducing deep-learning techniques into causal inference [8,14].
We call an SCM that uses deep-learning components to model the structural assignments a deep structural causal model (DSCM). In DSCMs, the inference of counterfactual queries becomes more complex due to the potentially intractable abduction step (inferring the posterior noise distribution, as defined above). To overcome this, we propose to use recent advances in normalising flows and variational inference to model mechanisms for composable DSCMs that enable tractable counterfactual inference. While here we focus on continuous data, DSCMs also fully support discrete variables without the need for relaxations (see Appendix C). We consider three types of mechanisms that differ mainly in their invertibility, illustrated in Fig. 1.
Invertible, explicit: Normalising flows model complex probability distributions using transformations from simpler base distributions with the same dimensionality [20]. For an observed variable x, diffeomorphic transformation f, and base variable ε ∼ P(ε) such that x = f(ε), the output density p(x) can be computed as p(x) = p(ε) · |det ∇f(ε)|^{-1}, evaluated at ε = f^{-1}(x) [21,22]. For judicious choices of f, the Jacobian ∇f may take special forms with efficiently computable determinant, providing a flexible and tractable probabilistic model whose parameters can be trained via exact maximum likelihood. Furthermore, flows can be made as expressive as needed by composing sequences of simple transformations. For more information on flow-based models, refer to the comprehensive survey by Papamakarios et al. [22]. Note that this class of models also subsumes the typical location-scale and inverse cumulative distribution function transformations used in the reparametrisation trick [23,24], as well as the Gumbel trick for discrete variable relaxations [25,26].
Although normalising flows were originally proposed for unconditional distributions, they have been extended to conditional densities [27], including in high dimensions [28,29], by parametrising the transformation as x = f(ε; pa_X), assumed invertible in the first argument. In particular, conditional flows can be adopted in DSCMs to represent invertible, explicit-likelihood mechanisms (Fig. 1a):

x_i := f_i(ε_i; pa_i),   p(x_i | pa_i) = p(ε_i) · |det ∇_{ε_i} f_i(ε_i; pa_i)|^{-1}, evaluated at ε_i = f_i^{-1}(x_i; pa_i) .   (1)
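As a minimal sketch of the change-of-variables rule behind Eq. (1), consider a hypothetical one-dimensional affine flow (illustrative only, not from the paper): with x = f(ε) = μ + s·ε and a standard-normal base density, the flow density p(ε) / |s| at ε = (x − μ)/s recovers the N(μ, s²) density exactly.

```python
import math

def base_logpdf(e):
    # Standard normal base density: log N(e; 0, 1).
    return -0.5 * e * e - 0.5 * math.log(2.0 * math.pi)

def flow_logpdf(x, mu, s):
    """log p(x) via change of variables for the affine flow x = mu + s*e."""
    e = (x - mu) / s                  # invert the flow: e = f^{-1}(x)
    log_det = math.log(abs(s))        # |det grad f(e)| = |s| in one dimension
    return base_logpdf(e) - log_det

def normal_logpdf(x, mu, s):
    # Analytic log N(x; mu, s^2), for comparison with the flow density.
    return -0.5 * ((x - mu) / s) ** 2 - math.log(s * math.sqrt(2.0 * math.pi))
```

Stacking several such invertible steps, each with a tractable log-determinant, yields the expressive flows used as mechanisms in Eq. (1).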
Amortised, explicit: Such invertible architectures typically come with heavy computational requirements when modelling high-dimensional observations, because all intermediate operations act in the space of the data. Instead, it is possible to use arbitrary functional forms for the structural assignments, at the cost of losing invertibility and tractable likelihoods p(x_k | pa_k). Here, we propose to separate the assignment f_k into a 'low-level', invertible component h_k and a 'high-level', non-invertible part g_k, with a corresponding noise decomposition ε_k = (u_k, z_k), such that

x_k := f_k(ε_k; pa_k) = h_k(u_k; g_k(z_k; pa_k), pa_k),   P(ε_k) = P(u_k) P(z_k) .   (2)
In such a decomposition, the invertible transformation h k can be made shallower, while the upstream non-invertible g k maps from a lower-dimensional space and is expected to capture more of the high-level structure of the data. Indeed, a common implementation of this type of model for images would involve a probabilistic decoder, where g k may be a convolutional neural network, predicting the parameters of a simple location-scale transformation performed by h k [24].
As the conditional likelihood p(x k |pa k ) in this class of models is no longer tractable because z k cannot be marginalised out, it may alternatively be trained with amortised variational inference. Specifically, we can introduce a variational distribution Q(z k |x k , pa k ) to formulate a lower bound on the true marginal conditional log-likelihood, which will be maximised instead:
log p(x_k | pa_k) ≥ E_{Q(z_k | x_k, pa_k)} [log p(x_k | z_k, pa_k)] − D_KL [Q(z_k | x_k, pa_k) ‖ P(z_k)] .   (3)
The argument of the expectation in this lower bound can be calculated similarly to Eq. (1):
p(x_k | z_k, pa_k) = p(u_k) · |det ∇_{u_k} h_k(u_k; g_k(z_k; pa_k), pa_k)|^{-1}, evaluated at u_k = h_k^{-1}(x_k; g_k(z_k; pa_k), pa_k) .   (4)

The approximate posterior distribution Q(z_k | x_k, pa_k) can for example be realised by an encoder function, e_k(x_k; pa_k), that outputs the parameters of a simple distribution over z_k (Fig. 1b), as in the auto-encoding variational Bayes (AEVB) framework [24].
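A single-sample Monte Carlo estimate of the bound in Eq. (3) can be sketched for a scalar mechanism with Gaussian encoder and a location-scale h_k (all function names here are illustrative stand-ins, not the paper's implementation):

```python
import math, random

def elbo_sample(x, pa, enc, dec_mean, dec_scale=1.0):
    """One-sample estimate of the bound in Eq. (3) for a scalar mechanism
    x = dec_mean(z, pa) + dec_scale * u, with base noise u ~ N(0, 1)."""
    m, s = enc(x, pa)                    # encoder predicts q(z | x, pa) = N(m, s^2)
    z = m + s * random.gauss(0.0, 1.0)   # reparametrised sample of z
    # Reconstruction term log p(x | z, pa), via the invertible h_k as in Eq. (4).
    u = (x - dec_mean(z, pa)) / dec_scale
    log_px = -0.5 * u * u - 0.5 * math.log(2.0 * math.pi) - math.log(dec_scale)
    # Closed-form KL[N(m, s^2) || N(0, 1)] against the standard-normal prior on z.
    kl = 0.5 * (m * m + s * s - 1.0) - math.log(s)
    return log_px - kl
```

Averaging this estimate over minibatches and maximising it with respect to the encoder and decoder parameters is the standard AEVB training loop.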
Amortised, implicit: While the models above rely on (approximate) maximum likelihood as a training objective, it is admissible to train a non-invertible mechanism as a conditional implicit-likelihood model (Fig. 1c), optimising an adversarial objective [30–32]. Specifically, a deterministic encoder e_j would strive to fool a discriminator function attempting to tell apart tuples of encoded real data (x_j, e_j(x_j; pa_j), pa_j) and generated samples (f_j(ε_j; pa_j), ε_j, pa_j).
Deep counterfactual inference
Now equipped with effective deep models for representing mechanisms in DSCMs, we discuss the inference procedure allowing us to compute answers to counterfactual questions.
Abduction: As presented in Section 2.1, the first step in computing counterfactuals is abduction, i.e. to predict the exogenous noise, ε, based on the available evidence, x. Because each noise variable is assumed to affect only the respective observed variable, (ε_k)_{k=1}^K are conditionally independent given x, therefore this posterior distribution factorises as P_G(ε | x) = ∏_{k=1}^K P_G(ε_k | x_k, pa_k). In other words, it suffices to infer the noise independently for each mechanism, given the observed values of the variable and of its parents.
For invertible mechanisms, the noise variable can be obtained deterministically and exactly by just inverting the mechanism: ε_i = f_i^{-1}(x_i; pa_i). Similarly, implicit-likelihood mechanisms can be approximately inverted by using the trained encoder function: ε_j ≈ e_j(x_j; pa_j).
Some care must be taken in the case of amortised, explicit-likelihood mechanisms, as the 'high-level' noise z_k and 'low-level' noise u_k are not independent given x_k. Recalling that this mechanism is trained along with a conditional probabilistic encoder, Q(z_k | e_k(x_k; pa_k)), the noise posterior can be approximated as follows, where δ_w(·) denotes the Dirac delta distribution centred at w:

P_G(ε_k | x_k, pa_k) = P_G(z_k | x_k, pa_k) P_G(u_k | z_k, x_k, pa_k) ≈ Q(z_k | e_k(x_k; pa_k)) · δ_w(u_k), with w = h_k^{-1}(x_k; g_k(z_k; pa_k), pa_k) .   (5)
Action: The causal graph is then modified according to the desired hypothetical intervention(s), as in the general case (Section 2.1). For each intervened variable x_k, its structural assignment is replaced either by a constant, x_k := x̃_k (making it independent of its former parents, i.e. its direct causes pa_k, and of its exogenous noise ε_k), or by a surrogate mechanism x_k := f̃_k(ε_k; p̃a_k), forming a set of counterfactual assignments, S̃. This then defines a counterfactual SCM G̃ = (S̃, P_G(ε | x)).
Prediction: Finally, we can sample from G. Noise variables that were deterministically inverted (either exactly or approximately) can simply be plugged back into the respective forward mechanism to determine the new output value. Notice that this step is redundant for observed variables that are not descendants of the ones being intervened upon, as they will be unaffected by the changes.
As mentioned above, the posterior distribution over (z_k, u_k) for an amortised, explicit-likelihood mechanism does not factorise (Eq. (5)), and the resulting distribution over the counterfactual x̃_k cannot be characterised explicitly. However, sampling from it is straightforward, such that we can approximate the counterfactual distribution via Monte Carlo as follows, for each sample s:

z_k^(s) ∼ Q(z_k | e_k(x_k; pa_k)),   u_k^(s) = h_k^{-1}(x_k; g_k(z_k^(s); pa_k), pa_k),   x̃_k^(s) = h_k(u_k^(s); g_k(z_k^(s); p̃a_k), p̃a_k) .   (6)
Consider an uncorrelated Gaussian decoder for images as a concrete example, predicting vectors of means and variances for each pixel of x_k: g_k(z_k; pa_k) = (μ(z_k; pa_k), σ²(z_k; pa_k)). Exploiting the reparametrisation trick, counterfactuals that preserve x_k's mechanism can be computed simply as

u_k^(s) = (x_k − μ(z_k^(s); pa_k)) ⊘ σ(z_k^(s); pa_k),   x̃_k^(s) = μ(z_k^(s); p̃a_k) + σ(z_k^(s); p̃a_k) ⊙ u_k^(s),

where ⊘ and ⊙ denote element-wise division and multiplication, respectively. In particular, in the constant-variance setting adopted for our experiments, counterfactuals further simplify to

x̃_k^(s) = x_k + [μ(z_k^(s); p̃a_k) − μ(z_k^(s); pa_k)] .
This showcases how true image counterfactuals are able to retain pixel-level details. Typical conditional generative models would output only µ(z k ; pa k ) (which is often blurry in vanilla variational auto-encoders [33]), or would in addition have to sample P (u k ) (resulting in noisy images).
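The Gaussian-decoder counterfactual above can be sketched with NumPy; the decoder outputs are stand-in arrays here, not the output of the paper's trained network:

```python
import numpy as np

def gaussian_decoder_counterfactual(x, mu_obs, sigma_obs, mu_cf, sigma_cf):
    """Counterfactual image for a per-pixel Gaussian decoder:
    abduct u = (x - mu) / sigma, then push u through the modified decoder."""
    u = (x - mu_obs) / sigma_obs      # element-wise abduction of u_k
    return mu_cf + sigma_cf * u       # element-wise prediction step

# Stand-in observation and decoder means under factual and counterfactual parents.
x = np.array([0.2, 0.5, 0.9])
mu_obs = np.array([0.3, 0.4, 0.8])
mu_cf = np.array([0.35, 0.45, 0.85])
ones = np.ones(3)

# In the constant-variance case this reduces to x + (mu_cf - mu_obs).
x_cf = gaussian_decoder_counterfactual(x, mu_obs, ones, mu_cf, ones)
```

Because the residual u is carried over from the observation, per-pixel detail is retained, unlike sampling a fresh u from the prior, which would add noise.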
Related Work
Deep generative modelling has seen a wide range of contributions since the popularisation of variational auto-encoders (VAEs) [24], generative adversarial networks (GANs) [34], and normalising flows [21]. These models have since been employed to capture conditional distributions [27,29,32,35], and VAEs and GANs were also extended to model structured data by incorporating probabilistic graphical models [36–38]. In addition, deep generative models have been heavily used for (unsupervised) representation learning with an emphasis on disentanglement [39–42]. However, even when these methods faithfully capture the distribution of observed data, they are capable of fulfilling only the association rung of the ladder of causation.
Interventions build on the associative capabilities of probabilistic models to enable queries related to changes in causal mechanisms. By integrating a causal graph into the connectivity of a deep model, it is possible to perform interventions with GANs [14] and causal generative NNs [8]. VAEs can also express causal links using specific covariance matrices between latent variables, which however restrict the dependences to be linear [6]. Despite reaching the second rung of the causal ladder, these methods lack tractable abduction capabilities and therefore cannot generate counterfactuals. Some machine-learning tasks such as explainability, image-to-image translation, or style transfer are closely related to counterfactual queries of the sort 'How would x (have to) change if we (wished to) modify y?'. Here, y could be the style of a picture for style transfer [43], the image domain (e.g. drawing to photo) for image-to-image translation [44], the age of a person in natural images [45] or medical scans [46], or a predicted output for explainability [11]. However, these approaches do not explicitly model associations, interventions, nor causal structure. Potentially closest to our work is a method for counterfactual explainability of visual models, which extends CausalGANs [14] to predict reparametrised distributions over image attributes following an assumed causal graph [10]. However, this approach performs no abduction step, instead resampling the noise of attributes downstream from the intervention(s), and does not include a generative model of imaging data. To the best of our knowledge, the proposed DSCM framework is the first flexible approach enabling end-to-end training and tractable inference on all three levels of the ladder of causation for high-dimensional data.
[Figure 2: Causal graphs considered for the Morpho-MNIST experiment, over thickness t, intensity i, and image x: (a) independent, (b) conditional, and (c) full model.]
Case Study 1: Morpho-MNIST
We consider the problem of modelling the causal model of a synthetic dataset based on MNIST digits [47], where stroke thickness causes the brightness of the digit: thicker digits are brighter whereas thinner digits are dimmer. This simple dataset allows for examining the three levels of causation in a controlled and measurable environment. We use morphological transformations on MNIST [48] to generate a dataset with known causal structure and access to the 'true' process of generating counterfactuals. The SCM for this synthetic dataset is as follows:
t := f_T^*(ε_T^*) = 0.5 + ε_T^*,   ε_T^* ∼ Γ(10, 5) ,
i := f_I^*(ε_I^*; t) = 191 · σ(0.5 · ε_I^* + 2 · t − 5) + 64 ,   ε_I^* ∼ N(0, 1) ,
x := f_X^*(ε_X^*; i, t) = SetIntensity(SetThickness(ε_X^*; t); i) ,   ε_X^* ∼ MNIST ,   (7)

where SetIntensity(· ; i) and SetThickness(· ; t) refer to the operations that act on an image of a digit and set its intensity to i and thickness to t (see Appendix A.1 for details), x is the resulting image, ε^* is the exogenous noise for each variable, and σ(·) is the logistic sigmoid.
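The attribute part of the ground-truth SCM in Eq. (7) can be sampled directly; the image-level operations are omitted, and Γ(10, 5) is read here as shape 10 with rate 5 (an assumption about the parameterisation):

```python
import numpy as np

def sample_attributes(n, seed=0):
    """Sample (thickness, intensity) pairs from the attribute SCM of Eq. (7)."""
    rng = np.random.default_rng(seed)
    # eps_T ~ Gamma(shape=10, rate=5); NumPy uses a scale parameter, so scale = 1/5.
    eps_t = rng.gamma(shape=10.0, scale=1.0 / 5.0, size=n)
    t = 0.5 + eps_t
    eps_i = rng.standard_normal(n)
    # Logistic sigmoid maps to (0, 1), so intensity lies in (64, 255).
    i = 191.0 / (1.0 + np.exp(-(0.5 * eps_i + 2.0 * t - 5.0))) + 64.0
    return t, i

t, i = sample_attributes(1000)
```

Thicker digits push the sigmoid argument up via the 2·t term, which is exactly the causal dependence ("thicker digits are brighter") that the full model in Fig. 2c is meant to recover.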
We use this setup to study the capabilities of our framework in comparison to models with less causal structure. We adapt the true causal graph from Eq. (7) and model thickness and intensity using (conditional) normalising flows and employ a conditional VAE for modelling the image. In particular, we adopt the causal graphs shown in Fig. 2 and test a fully independent model (Fig. 2a), a conditional decoder model (Fig. 2b), as well as our full causal model (Fig. 2c). All our experiments were implemented within PyTorch [49] using the Pyro probabilistic programming framework [50], and implementation details can be found in Appendices A.2 and B.2.
We quantitatively compare the associative capabilities of all models by evaluating their evidence lower bound (Eq. (3)), log-likelihoods and reconstruction errors as shown in Table 1. We find that performance improves consistently with the model's capabilities: enabling conditional image generation improves p(x|t, i), and adding a causal dependency between t and i improves p(i|t). Further, we examine samples of the conditional and unconditional distributions in Appendix A.3.1.
The interventional distributions can be directly compared to the true generative process. Figure 3 shows that the densities predicted by our full model after intervening on t closely resemble the true distributions.

[Figure 3: joint densities p(t, i) without intervention and under interventions on t, for the true data and for the learned model.]

Lastly, we examine the full model's ability to generate counterfactuals. The other two models were omitted as they are incapable of accomplishing interventions, a prerequisite for counterfactual inference. Examples of previously unseen images and generated counterfactuals are shown in Fig. 4. We see that our model is capable of generating convincing counterfactuals that preserve the digit identity while changing thickness and intensity consistently with the underlying causal model.
Case Study 2: Brain Imaging
Our real-world application touches upon fundamental scientific questions in the context of medical imaging: how would a person's anatomy change if particular traits were different? We illustrate with a (simplified) example that our DSCM framework may provide the means to answer such counterfactual queries, which may enable entirely new research into better understanding the physical manifestation of lifestyle, demographics, and disease. Here, we model the appearance of brain MRI scans given the person's age and biological sex, as well as brain and ventricle volumes, using population data from the UK Biobank [51]. Ventricle and total brain volumes are two quantities that are closely related to brain age [52] and can be observed relatively easily. We adopt the causal graph shown in Fig. 5a and otherwise follow the same training procedure as for the MNIST experiments.
The learned DSCM is capable of all three levels of the causal hierarchy. We present the analysis of lower levels in Appendix B.3.1 and focus here on counterfactuals, shown in Fig. 5b (more examples in Appendix B.3.2). The difference maps show plausible counterfactual changes: increasing age causes slightly larger ventricles while decreasing the overall brain volume (first column). In contrast, directly changing brain volume has an opposite effect on the ventricles compared to changing age (second column). Intervening on ventricle volume has a much more localised effect (third column), while intervening on the categorical variable of biological sex has smaller yet more diffuse effects. Note how the anatomical 'identity' (such as the cortical folding) is well preserved after each intervention.
Conclusion
We introduce a novel general framework for fitting SCMs with deep mechanisms. Our deep SCM (DSCM) framework fulfils all three rungs of Pearl's ladder of causation-in particular, it is the first to enable efficient abduction of exogenous noise, permitting principled counterfactual inference. We demonstrate the potential of DSCMs with two case studies: a synthetic task of modelling Morpho-MNIST digits with a known causal structure and a real-world example with brain MRI.
The ability to correctly generate plausible counterfactuals could greatly benefit a wide variety of possible applications, e.g.: explainability, where differences between observed and counterfactual data can suggest causal explanations of outcomes; data augmentation, as counterfactuals can extrapolate beyond the range of observed data (e.g. novel combinations of attributes); and domain adaptation, since including the source of the data as an indicator variable in the causal model could enable generating counterfactual examples in a relevant target domain.
The proposed method does not come without limitations to be investigated in future work. Like the related approaches, the current setup requires all variables to be observed when computing a counterfactual, which may limit its applicability in certain scenarios. This could be alleviated by imputing the missing data via MCMC or learning auxiliary distributions. Further work should study more closely the dynamic behaviour of deep mechanisms in SCMs. While not observed in our experiments, neural networks may not learn to cleanly separate the roles of their inputs on the output as expected, which could require custom counterfactual regularisation similar to losses used in image-to-image translation [46] and explainability work [11]. The use of such flexible models also raises questions about the identifiability of the 'true' mechanism, as counterfactuals may not be uniquely defined. Lastly, it would be interesting to examine whether this framework can be applied to causal discovery, attempting to uncover plausible causal structures from data.
Broader Impact
Causal inference can be applied to a wide range of applications, promising to provide a deeper understanding of the observed data and prevent the fitting of spurious correlations. Our research presents a methodological contribution to the causal literature proposing a framework that combines causal models and deep learning to facilitate modelling high-dimensional data.
Because of the general applicability of deep learning and causal inference, our framework could have a broad impact of enabling fairer machine learning models explicitly modelling causal mechanisms, reducing spurious correlations and tackling statistical and societal biases. The resulting models offer better interpretability due to counterfactual explanations and could yield novel understanding through causal discovery.
However, causal modelling relies on strong assumptions and cannot always unambiguously determine the true causal structure of observational data. It is therefore necessary to carefully consider and communicate the assumptions being made by the analyst. In this light, our methodology is susceptible to being used to wrongly claim the discovery of causal structures due to careless application or intentional misuse. In particular, the use of 'black-box' components as causal mechanisms may exacerbate concerns about identifiability, already present even for simple linear models. Whereas deep causal models can be useful for deriving insights from data, we must be cautious about their use in consequential decision-making, such as in informing policies or in the context of healthcare.
[55] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
A Synthetic Morpho-MNIST Experiment
A.1 Data Generation
We use the original MNIST dataset [47] together with the morphometric measurements introduced with Morpho-MNIST [48] to add functionality to measure intensity as well as set the intensity and thickness to a given value.
We implement MeasureIntensity by following the processing steps proposed by Castro et al. [48], and measure the intensity i of an image as the median intensity of pixels within the extracted binary mask. Once the intensity is measured, the entire image is rescaled to match the target intensity, with values clamped between 0 and 255 (images are assumed to be in unsigned 8-bit format).
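As a rough illustration of these two steps, the following sketch measures the median foreground intensity and rescales an image to a target intensity. The function names and the simple thresholding used for mask extraction are our own simplifications, not the Morpho-MNIST implementation:

```python
import numpy as np

def measure_intensity(img, threshold=32):
    """Median intensity of pixels inside a binary foreground mask.

    `threshold` is a hypothetical stand-in for the mask-extraction
    procedure of Castro et al.; the original is more involved.
    """
    mask = img >= threshold
    return float(np.median(img[mask]))

def set_intensity(img, target, threshold=32):
    """Rescale the entire image so the masked median matches `target`,
    clamping values to the unsigned 8-bit range [0, 255]."""
    current = measure_intensity(img, threshold)
    rescaled = img.astype(np.float64) * (target / current)
    return np.clip(rescaled, 0, 255).astype(np.uint8)
```

Because the whole image is scaled by a single factor, re-measuring the intensity of the output recovers the target (up to clamping and rounding).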
Originally, Morpho-MNIST only proposed relative thinning and thickening operations. We expand those operations to absolute values by calculating the amount of dilation or erosion based on the ratio between target thickness and measured thickness.
Finally, we follow Eq. (7) to modify each image within the MNIST dataset and randomly split the original training set into a training and a validation set; random samples from the resulting test set are shown in Fig. A.1.
A.2 Experimental Setup
We use (conditional) normalising flows for all variables apart from the images, which we model using (conditional) deep encoder-decoder architectures. The flows consist of components that constrain the support of the output distribution (where applicable) and components relevant for fitting the distribution. We use unit Gaussians as base distributions for all exogenous noise distributions P ( ) and, if available, we use the implementations in PyTorch [49] or Pyro [50] for all transformations. Otherwise, we adapt the available implementations, referring to [53] for details. We indicate with θ the modules with learnable parameters.
We model the mechanisms of the thickness t and intensity i as
t := f_T(ε_T) = (exp ∘ AffineNormalisation ∘ Spline_θ)(ε_T),  (A.1)
i := f_I(ε_I; t) = (AffineNormalisation ∘ sigmoid ∘ ConditionalAffine_θ(t))(ε_I).  (A.2)
In the independent model, where i is not conditioned on t, we use instead
i := f_I(ε_I) = (AffineNormalisation ∘ sigmoid ∘ Spline_θ ∘ Affine_θ)(ε_I).  (A.3)
We found that including normalisation layers helps learning dynamics 5 and therefore include flows that perform commonly used normalisation transformations. For a doubly bounded variable y we learn the flows in unconstrained space and then constrain them by a sigmoid transform, rescaling to the original range using fixed affine transformations with bias min(Y) and scale [max(Y) − min(Y)].
We constrain singly bounded values by applying an exponential transform to the unbounded values and an affine normalisation equivalent to a whitening operation in unbounded log-space. We denote these fixed normalisation transforms as AffineNormalisation and use a hat to refer to the unconstrained, normalised values (e.g. pâ_k). The Spline_θ transformation refers to first-order neural spline flows [53], Affine_θ is an element-wise affine transformation, and sigmoid refers to the logistic function. ConditionalAffine_θ(·) is a regular affine transform whose transformation parameters are predicted by a context neural network taking · as input. In the case of f_I(ε_I; t), the context network is a simple linear transform. Further, we model x using a low-level flow:
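A minimal sketch of such a conditional affine transform in plain PyTorch (a stand-in for the Pyro-based implementation; the class and method names are ours) shows both directions needed for counterfactual inference: the forward pass generates a value from exogenous noise, and the inverse performs abduction:

```python
import torch
import torch.nn as nn

class ConditionalAffine(nn.Module):
    """Element-wise affine flow y = loc(c) + exp(log_scale(c)) * u,
    with loc and log-scale predicted from the context c.

    For f_I(eps_I; t) the paper uses a simple linear context network,
    which is what we sketch here.
    """
    def __init__(self, context_dim, dim=1):
        super().__init__()
        self.net = nn.Linear(context_dim, 2 * dim)

    def forward(self, u, context):
        loc, log_scale = self.net(context).chunk(2, dim=-1)
        return loc + log_scale.exp() * u

    def inverse(self, y, context):
        # Abduction: recover the exogenous noise given an observation.
        loc, log_scale = self.net(context).chunk(2, dim=-1)
        return (y - loc) * (-log_scale).exp()
```

The exact invertibility of the transform is what makes abduction of the exogenous noise tractable.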
h_X(u_X; pa_X) = (Preprocessing ∘ ConditionalAffine_θ(pâ_X))(u_X),  (A.4)
where the ConditionalAffine transform practically reparametrises the noise distribution into another Gaussian distribution and Preprocessing describes a fixed preprocessing transformation. We follow the same preprocessing as used with RealNVP [54]. The context network for the conditional affine transformation is the high-level mechanism g_X(z_X; pa_X), implemented as a decoder network that outputs the bias of the affine transformation, while the log-variance is fixed to log σ² = −5.
We implement the decoder network as a CNN:
g_X(z_X; pa_X) = (Conv_θ(1; 1; 1; 0) ∘ ConvTranspose_θ(1; 4; 2; 1) ∘ ReLU ∘ BN_θ ∘ ConvTranspose_θ(64; 4; 2; 1) ∘ Reshape(64, 7, 7) ∘ ReLU ∘ BN_θ ∘ Linear_θ(1024) ∘ ReLU ∘ BN_θ ∘ Linear_θ(1024))([z_X, pa_X]),  (A.5)
where the operators describe neural network layers as follows: BN is batch normalisation; ReLU is the ReLU activation function; Conv(c; k; s; p) and ConvTranspose(c; k; s; p) are a convolution or transposed convolution with c output channels, a kernel of size k, a stride of s, and a padding of p; Linear(h) is a linear layer with h output neurons; and Reshape(·) reshapes its input into the given shape ·. Lastly, [z_X, pa_X] denotes the concatenation of z_X and pa_X, with z_X ∈ R^16.
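In PyTorch, this architecture could be sketched roughly as follows. Note one assumption: Eq. (A.5) does not pin down the width of the layer feeding Reshape(64, 7, 7), so we take it to be 64·7·7 so the shapes line up:

```python
import torch
import torch.nn as nn

# Sketch of the decoder g_X; input is [z_X, pa_X] with z_X in R^16
# and pa_X the two covariates (t, i). The 64*7*7 linear width is an
# assumption needed to make the reshape consistent.
decoder = nn.Sequential(
    nn.Linear(16 + 2, 1024),
    nn.BatchNorm1d(1024), nn.ReLU(),
    nn.Linear(1024, 64 * 7 * 7),
    nn.BatchNorm1d(64 * 7 * 7), nn.ReLU(),
    nn.Unflatten(1, (64, 7, 7)),                                    # Reshape(64, 7, 7)
    nn.ConvTranspose2d(64, 64, kernel_size=4, stride=2, padding=1),  # 7x7 -> 14x14
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),   # 14x14 -> 28x28
    nn.Conv2d(1, 1, kernel_size=1, stride=1, padding=0),
)
```

Each strided transposed convolution doubles the spatial resolution, taking the 7×7 feature map to the 28×28 MNIST image size.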
Equivalently, we implement the encoder function as a simple CNN that outputs the mean and log-variance of an independent Gaussian:
e_X(x; pa_X) = ([Linear_θ(16), Linear_θ(16)] ∘ [LeakyReLU(0.1), pa_X] ∘ BN_θ ∘ Linear_θ(100) ∘ Reshape(128 · 7 · 7) ∘ LeakyReLU(0.1) ∘ BN_θ ∘ Conv_θ(128; 4; 2; 1) ∘ LeakyReLU(0.1) ∘ BN_θ ∘ Conv_θ(64; 4; 2; 1))(x),  (A.6)
where LeakyReLU(α) is the leaky ReLU activation function with a leakiness of α.
We use Adam [55] for optimisation with a batch size of 256 and a learning rate of 10^-4 for the encoder-decoder and 0.005 for the covariate flows. We set the number of particles (MC samples) for estimating the ELBO to 4 and use 32 MC samples for estimating reconstructions and counterfactuals. We train all models for 1000 epochs and report the results of the model with the best validation loss.
A.3 Additional Results
Here we further illustrate the associative, interventional, and counterfactual capabilities of the trained independent, conditional, and full models. (Continued on the next page.)
A.3.1 Association
E_{Q(z_X | e_X(x; pa_X))} [g_X(z_X; pa_X)],
where e X and g X are the image encoder and decoder networks. All models seem capable of producing faithful reconstructions. Since t causes i, notice how p(t|i) (left) is markedly different from p(t|do(i)) (middle), which collapses to p(t). On the other hand, p(i|do(t)) and p(i|t) (right) are identical.
A.3.3 Counterfactual
[Counterfactual image grids for three test digits, with interventions do(t ∈ {1.0, 3.0, 5.0}) and do(i ∈ {64, 160, 255}).] We observe that all counterfactuals preserve the digits' identity and style. Our model even generates sensible counterfactual images (with some artefacts) in very low-density regions, e.g. '0' with do(i = 64) (thick but dim), and very far from the original, e.g. '2' with do(t = 5.0).
B Brain Modelling
B.1 Data Generation
The original three-dimensional (3D) T1-weighted brain MRI scans have been pre-processed by the data providers of the UK Biobank Imaging study using the FSL neuroimaging toolkit [56]. The pre-processing involves skull removal, bias field correction, and automatic segmentation of brain structures. In addition, we have rigidly registered all scans to the standard MNI atlas space using an in-house image registration tool, which enabled us to extract anatomically corresponding mid-axial 2D slices that were used for the experiments presented in this paper. The 2D slices were normalised in intensity by mapping the minimum and maximum values inside the brain mask to the range [0, 255]. Background pixels outside the brain were set to zero. Age and biological sex for each subject were retrieved from the UK Biobank database along with the pre-computed brain and ventricle volumes. These volumes are derived from the 3D segmentation maps obtained with FSL, and although these are image-derived measurements, they may serve as reasonable proxies of the true measurements within our (simplified yet plausible) causal model of the physical manifestation of the brain anatomy.
B.2 Experimental Setup
The setup for the brain imaging experiment closely follows the MNIST example described in Appendix A.2. We randomly split the available 13,750 brain images into train, validation, and test sets with respective ratios of 70%, 15%, and 15%. During training, we randomly crop the brain slices from their original size of 233 px × 197 px to 192 px × 192 px and use centre crops during validation and testing. The cropped images are downsampled by a factor of 3 to a size of 64 px × 64 px.
We use the same low-level mechanism for the image x as with MNIST images but change the encoder and decoder functions to a deeper architecture with 5 scales, each consisting of 3 blocks of (LeakyReLU(0.1) ∘ BN_θ ∘ Conv_θ), as well as a linear layer that converts to and from the 100-dimensional latent space. We directly learn the binary probability of the sex s and use the following invertible transforms to model the age a, brain volume b, and ventricle volume v:
a := f_A(ε_A) = (exp ∘ AffineNormalisation ∘ Spline_θ)(ε_A),  (B.1)
b := f_B(ε_B; s, a) = (exp ∘ AffineNormalisation ∘ ConditionalAffine_θ([s, a]))(ε_B),  (B.2)
v := f_V(ε_V; a, b) = (exp ∘ AffineNormalisation ∘ ConditionalAffine_θ([b, a]))(ε_V),  (B.3)
where the context networks are implemented as fully-connected networks with 8 and 16 hidden units, respectively, and a LeakyReLU(0.1) nonlinearity.
B.3 Additional Results
Likewise, we present more detailed analyses of the model trained on UK Biobank brain images and covariates, in terms of modelling the observational distribution and computing various counterfactual queries. (Continued on the next page.)
(a) Age vs. brain volume: p(a, b | s). Here we see differences in head size across biological sexes (reflected in brain volume), as well as a downward trend in brain volume as age progresses. (b) Age vs. ventricle volume: p(a, v | b ∈ · ). As expected from the literature [52], we observe a consistent increase in ventricle volume with age, in addition to a proportionality relationship with the overall brain volume.
C Discrete counterfactuals
As mentioned in the main text, the DSCM framework supports not only low-and high-dimensional continuous data, but also discrete variables. In particular, discrete mechanisms with a Gumbel-max parametrisation have been shown to lead to counterfactuals satisfying desirable properties [57]. For example, they are invariant to category permutations and are stable, such that increasing the odds only of the observed outcome cannot produce a different counterfactual outcome. More computational details and properties of the Gumbel distribution are found in Maddison and Tarlow [58].
Consider a discrete random variable over K categories, y, with a conditional likelihood described by logits λ, assumed to be a function g Y of its parents, pa Y :
P(y = k | pa_Y) = e^{λ_k} / Σ_{l=1}^{K} e^{λ_l},    λ = g_Y(pa_Y).  (C.1)
Under the Gumbel-max parametrisation, the mechanism generating y can be described as
y := f_Y(ε_Y; pa_Y) = argmax_{1≤l≤K} (ε_Y^l + λ_l),    ε_Y^l ∼ Gumbel(0, 1).  (C.2)
Samples from the Gumbel(0, 1) distribution can be generated by computing − log(− log U ), where U ∼ Unif(0, 1).
The Gumbel distribution has certain special properties [58] that enable tractable abduction. Given that we observed y = k, samples can be generated from the exact posterior P ( Y |y = k, pa Y ):
ε_Y^k = G_k + log Σ_l e^{λ_l} − λ_k,    G_k ∼ Gumbel(0, 1),
ε_Y^l = −log(e^{−G_l − λ_l} + e^{−ε_Y^k − λ_k}) − λ_l,    G_l ∼ Gumbel(0, 1),  ∀ l ≠ k.  (C.3)
Finally, given an upstream counterfactual intervention such that λ̃ = g_Y(p̃a_Y), the counterfactual outcome for y can be determined simply as
ỹ = f_Y(ε_Y; p̃a_Y) = argmax_{1≤l≤K} (ε_Y^l + λ̃_l).  (C.4)
Note that this entire derivation applies to a truly discrete variable, without the need for continuous relaxations as commonly used in deep generative models [25,26], as the likelihood is given in closed form and no gradients of expectations are necessary.
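The whole procedure of Eqs. (C.2)-(C.4) fits in a few lines of NumPy (the function names are ours). By construction, the abducted noise always reproduces the observed category under the original logits, and increasing only the observed category's logit cannot change the counterfactual outcome:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gumbel(size=None):
    # -log(-log U), with U ~ Unif(0, 1)
    return -np.log(-np.log(rng.uniform(size=size)))

def gumbel_max_sample(logits):
    # Eq. (C.2): y = argmax_l (eps_l + lambda_l)
    eps = sample_gumbel(len(logits))
    return int(np.argmax(eps + logits)), eps

def abduct(logits, k):
    """Eq. (C.3): sample exogenous noise from P(eps | y = k, pa_Y)."""
    K = len(logits)
    eps = np.empty(K)
    # Posterior for the winning coordinate k.
    eps[k] = sample_gumbel() + np.log(np.exp(logits).sum()) - logits[k]
    # Remaining coordinates are truncated Gumbels below eps_k + lambda_k.
    for l in range(K):
        if l == k:
            continue
        g = sample_gumbel()
        eps[l] = -np.log(np.exp(-g - logits[l])
                         + np.exp(-eps[k] - logits[k])) - logits[l]
    return eps

def counterfactual(eps, new_logits):
    # Eq. (C.4): re-run the mechanism with the abducted noise.
    return int(np.argmax(eps + new_logits))
```

Because the truncated coordinates are strictly below ε_k + λ_k, re-running the mechanism with the original logits deterministically recovers the observed outcome, which is exactly the consistency property abduction requires.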
Figure 2: Computational graphs of the structural causal models for the Morpho-MNIST example. The image is denoted by x, stroke thickness by t, and image intensity by i. The corresponding causal diagrams are displayed in the top-right corners.
Figure 3: Distributions of thickness and intensity in the true data (left), and learned by the full (centre) and conditional (right) models. Contours depict the observational (red, shaded) and interventional joint densities for do(t := f_T(ε_T) + 1) (blue, solid) and do(t := f_T(ε_T) − 0.5) (green, dashed).
Figure 4: Counterfactuals generated by the full model. (left) Counterfactual 'trajectories' of two original samples, A and B, as their thickness and intensity are modified, overlaid on the learned joint density p(t, i). (right) Original and counterfactual images corresponding to samples A and B.
Figure 5: Brain imaging example. Variables are image (x), age (a), sex (s), and brain (b) and ventricle (v) volumes. (a) Assumed causal graph. (b) Original image, counterfactuals, and difference maps; the counterfactuals show different interventions on the same original brain.
Figure A.1: Random exemplars from the synthetically generated Morpho-MNIST test dataset.
Figure A.2: Random samples generated by the independent, conditional, and full model. Note how all models appear to have the same unconditional generation capacity.
Figure A.3: Conditional samples generated by the independent, conditional, and full model. The high-level noise, z_X, is shared for all samples from each model, ensuring the same 'style' of the generated digit. The independent model generates images independent of the thickness and intensity values, resulting in identical samples. For the conditional and full models, thickness and intensity change consistently along each column and row, respectively.
Figure A.4: Reconstructions, computed as Monte Carlo averages approximating E_{Q(z_X | e_X(x; pa_X))}[g_X(z_X; pa_X)].
Figure A.5: Comparison of the target covariates and the corresponding values measured from the generated images. The leftmost column refers to the accuracy of the SetThickness and SetIntensity transforms used in generating the synthetic dataset, and the remaining three columns describe the fidelity of samples generated by each of the learned models. While images sampled from the independent model are trivially inconsistent with the sampled covariates, the conditional and full models show comparable conditioning performance.
Figure A.6: Difference between conditioning and intervening, based on the trained full model. The joint density p(t, i) is shown as contours in the background, for reference, and the 'violin' shapes represent the density of one variable when conditioning or intervening on three different values of the other variable.
Figure A.7: Original samples and counterfactuals from the full model. The first column shows the original image and true values of the non-imaging data. The even rows show the difference maps between the original image and the corresponding counterfactual image.
Figure B.1: Random exemplars from the test set of the adopted UK Biobank dataset.
Figure B.2: Random samples from the model trained on the UK Biobank dataset.
Figure B.3: Conditional samples from the model trained on the UK Biobank dataset. Images in each 3×3 block share the same high-level noise vector, z_X. Each row consistently changes the brain size, whereas each column changes the ventricle volume.
Figure B.4: Original samples and reconstructions from the model trained on the UK Biobank dataset.
Figure B.5: Densities for the true data (KDE) and for the learned model. The overall trends and interactions present in the true data distribution seem faithfully captured by the model.
Figure B.6: Original samples and counterfactuals from the model trained on the UK Biobank dataset. The first column shows the original image and true values of the non-imaging data. The even rows show the difference maps between the original image and the corresponding counterfactual image.
Table 1: Comparison of the associative abilities of the models shown in Fig. 2. The image is denoted by x, thickness by t, and intensity by i. Quantities with ≥ are lower bounds. MAE refers to the mean absolute error between pixels of the original image and of its reconstruction.

Model         log p(x, t, i) ≥   log p(x | t, i) ≥   log p(t)   log p(i | t)   MAE(x, x̃)
Independent       −5925.26           −5919.14          −0.93        −5.19          4.50
Conditional       −5526.50           −5520.37          −0.93        −5.19          4.26
Full              −5692.94           −5687.71          −0.93        −4.30          4.43
SCMs are also known as (nonlinear) structural equation models or functional causal models.
Note that here we assume full observability, i.e. no variables are missing when predicting counterfactuals. We discuss challenges of handling partial evidence in Section 6.
Ventricles are fluid-filled cavities identified as the dark areas in the centre of the brain.
We observed that not normalising the inputs can lead to the deep models prioritising learning the dependence on the variable with largest magnitude. This phenomenon should be investigated further.
Acknowledgements We thank Thanos Vlontzos for helpful comments on a draft of this paper.
Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. Elements of Causal Inference: Foundations and Learning Algorithms. MIT Press, Cambridge, MA, 2017.
Judea Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2nd edition, 2009.
H. Wold. Causality and econometrics. Econometrica, 22(2):162-177, 1954. URL http://www.jstor.org/stable/1907540.
Sander Greenland, Judea Pearl, and James M. Robins. Causal diagrams for epidemiologic research. Epidemiology, 10(1):37-48, 1999. URL http://www.jstor.org/stable/3702180.
Bernhard Schölkopf. Causality for machine learning. arXiv preprint arXiv:1911.10500, 2019.
Mengyue Yang, Furui Liu, Zhitang Chen, Xinwei Shen, Jianye Hao, and Jun Wang. CausalVAE: Structured causal disentanglement in variational autoencoder. arXiv preprint arXiv:2004.08697, 2020.
Giambattista Parascandolo, Niki Kilbertus, Mateo Rojas-Carulla, and Bernhard Schölkopf. Learning independent causal mechanisms. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of PMLR, pages 4036-4044. PMLR, 2018.
Olivier Goudet, Diviyan Kalainathan, Philippe Caillou, Isabelle Guyon, David Lopez-Paz, and Michèle Sebag. Learning functional causal models with generative neural networks. In Explainable and Interpretable Models in Computer Vision and Machine Learning, pages 39-80. Springer International Publishing, Cham, 2018. doi: 10.1007/978-3-319-98131-4_3.
Yoshua Bengio, Tristan Deleu, Nasim Rahaman, Nan Rosemary Ke, Sebastien Lachapelle, Olexa Bilaniuk, Anirudh Goyal, and Christopher Pal. A meta-transfer objective for learning to disentangle causal mechanisms. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=ryxWIgBFPS.
Álvaro Parafita Martínez and Jordi Vitrià Marca. Explaining visual models by causal attribution. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pages 4167-4175. IEEE, 2019.
Sumedha Singla, Brian Pollack, Junxiang Chen, and Kayhan Batmanghelich. Explanation by progressive exaggeration. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=H1xFWgrFPS.
Divyansh Kaushik, Eduard Hovy, and Zachary C. Lipton. Learning the difference that makes a difference with counterfactually-augmented data. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=Sklgs0NFvr.
Jason Hartford, Greg Lewis, Kevin Leyton-Brown, and Matt Taddy. Deep IV: A flexible approach for counterfactual prediction. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of PMLR, pages 1414-1423. PMLR, 2017.
Murat Kocaoglu, Christopher Snyder, Alexandros G. Dimakis, and Sriram Vishwanath. CausalGAN: Learning causal implicit generative models with adversarial training. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=BJE-4xW0W.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979-2989, 2017.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014. URL http://arxiv.org/abs/1312.6199.
Matt J. Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. In Advances in Neural Information Processing Systems 30 (NIPS 2017), pages 4066-4076, 2017.
Adarsh Subbaswamy, Peter Schulam, and Suchi Saria. Preventing failures due to dataset shift: Learning predictive models that transport. In Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics (AISTATS 2019), volume 89 of PMLR, pages 3118-3127. PMLR, 2019.
Judea Pearl. The seven tools of causal inference, with reflections on machine learning. Communications of the ACM, 62(3):54-60, 2019. doi: 10.1145/3241036.
Esteban G. Tabak and Cristina V. Turner. A family of nonparametric density estimation algorithms. Communications on Pure and Applied Mathematics, 66(2):145-164, 2013. doi: 10.1002/cpa.21423.
Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of PMLR, pages 1530-1538. PMLR, 2015.
George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. arXiv preprint arXiv:1912.02762, 2019.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, volume 32 of PMLR, pages 1278-1286. PMLR, 2014.
Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014. URL http://arxiv.org/abs/1312.6114.
Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-softmax. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=rkE3y85ee.
Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete distribution: A continuous relaxation of discrete random variables. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=S1jE5L5gl.
Brian L. Trippe and Richard E. Turner. Conditional density estimation with Bayesian normalising flows. In NIPS 2017 Workshop on Bayesian Deep Learning, 2017. URL http://arxiv.org/abs/1802.04908.
Structured output learning with conditional generative flows. You Lu, Bert Huang, Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence. the Thirty-Fourth AAAI Conference on Artificial Intelligence2020You Lu and Bert Huang. Structured output learning with conditional generative flows. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020).
. AAAI. To appearAAAI, 2020. URL http://arxiv.org/abs/1905.13288. To appear.
Christina Winkler, Daniel Worrall, Emiel Hoogeboom, Max Welling, arXiv:1912.00042Learning likelihoods with conditional normalizing flows. arXiv preprintChristina Winkler, Daniel Worrall, Emiel Hoogeboom, and Max Welling. Learning likelihoods with conditional normalizing flows. arXiv preprint arXiv:1912.00042, 2019.
Adversarial feature learning. Jeff Donahue, Philipp Krähenbühl, Trevor Darrell, International Conference on Learning Representations. Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. In International Conference on Learning Representations, 2017. URL https://openreview. net/forum?id=BJtNZAFgg.
Adversarially learned inference. Ishmael Vincent Dumoulin, Ben Belghazi, Olivier Poole, Alex Mastropietro, Martin Lamb, Aaron Arjovsky, Courville, International Conference on Learning Representations. Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially learned inference. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=B1ElR4cgg.
Mehdi Mirza, Simon Osindero, arXiv:1411.1784Conditional generative adversarial nets. arXiv preprintMehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
Autoencoding beyond pixels using a learned similarity metric. Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, Ole Winther, PMLRProceedings of The 33rd International Conference on Machine Learning. The 33rd International Conference on Machine LearningPMLR48Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of PMLR, pages 1558-1566. PMLR, 2016.
Generative adversarial nets. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, Advances in Neural Information Processing Systems 27 (NIPS 2014). Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27 (NIPS 2014), pages 2672-2680, 2014.
Learning structured output representation using deep conditional generative models. Kihyuk Sohn, Honglak Lee, Xinchen Yan, Advances in Neural Information Processing Systems 28 (NIPS 2015). Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems 28 (NIPS 2015), pages 3483-3491, 2015.
Composing graphical models with neural networks for structured representations and fast inference. Matthew J Johnson, David K Duvenaud, Alex Wiltschko, Ryan P Adams, Sandeep R Datta, Advances in Neural Information Processing Systems 29 (NIPS 2016). Matthew J. Johnson, David K. Duvenaud, Alex Wiltschko, Ryan P. Adams, and Sandeep R. Datta. Composing graphical models with neural networks for structured representations and fast inference. In Advances in Neural Information Processing Systems 29 (NIPS 2016), pages 2946-2954, 2016.
Variational message passing with structured inference networks. Wu Lin, Mohammad Emtiyaz Khan, Nicolas Hubacher, International Conference on Learning Representations. Wu Lin, Mohammad Emtiyaz Khan, and Nicolas Hubacher. Variational message passing with structured inference networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=HyH9lbZAW.
Graphical generative adversarial networks. Chongxuan Li, Max Welling, Jun Zhu, Bo Zhang, Advances in Neural Information Processing Systems. 31Chongxuan Li, Max Welling, Jun Zhu, and Bo Zhang. Graphical generative adversarial networks. In Advances in Neural Information Processing Systems 31 (NeurIPS 2018), pages 6069-6080, 2018.
Deep convolutional inverse graphics network. D Tejas, William F Kulkarni, Pushmeet Whitney, Josh Kohli, Tenenbaum, Advances in Neural Information Processing Systems 28 (NIPS 2015). Tejas D. Kulkarni, William F. Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convo- lutional inverse graphics network. In Advances in Neural Information Processing Systems 28 (NIPS 2015), pages 2539-2547, 2015.
Info-GAN: Interpretable representation learning by information maximizing generative adversarial nets. Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Advances in Neural Information Processing Systems 29 (NIPS 2016). Ilya Sutskever, and Pieter AbbeelXi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Info- GAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems 29 (NIPS 2016), pages 2172-2180, 2016.
Shakir Mohamed, and Alexander Lerchner. β-VAE: Learning basic visual concepts with a constrained variational framework. Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, International Conference on Learning Representations. Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. β-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Sy2fzU9gl.
Attribute-based regularization of VAE latent spaces. Ashis Pati, Alexander Lerch, arXiv:2004.05485arXiv preprintAshis Pati and Alexander Lerch. Attribute-based regularization of VAE latent spaces. arXiv preprint arXiv:2004.05485, 2020.
Image style transfer using convolutional neural networks. Leon A Gatys, Alexander S Ecker, Matthias Bethge, Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. the 2016 IEEE Conference on Computer Vision and Pattern RecognitionLeon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Image style transfer using convolu- tional neural networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, pages 2414-2423, 2016.
Image-to-image translation with conditional adversarial networks. Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A Efros, Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. the 2017 IEEE Conference on Computer Vision and Pattern RecognitionPhillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, pages 1125-1134, 2017.
Face aging with conditional generative adversarial networks. G Antipov, M Baccouche, J Dugelay, 2017 IEEE International Conference on Image Processing (ICIP). G. Antipov, M. Baccouche, and J. Dugelay. Face aging with conditional generative adversarial networks. In 2017 IEEE International Conference on Image Processing (ICIP), pages 2089- 2093, 2017.
Learning to synthesise the ageing brain without longitudinal data. Tian Xia, Agisilaos Chartsias, Chengjia Wang, Sotirios A Tsaftaris, arXiv:1912.02620arXiv preprintTian Xia, Agisilaos Chartsias, Chengjia Wang, and Sotirios A. Tsaftaris. Learning to synthesise the ageing brain without longitudinal data. arXiv preprint arXiv:1912.02620, 2019.
Gradient-based learning applied to document recognition. Yann Lecun, Léon Bottou, Yoshua Bengio, Patrick Haffner, 10.1109/5.726791Proceedings of the IEEE. 8611Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. doi: 10.1109/5.726791.
Morpho-MNIST: Quantitative assessment and diagnostics for representation learning. C Daniel, Jeremy Castro, Bernhard Tan, Ender Kainz, Ben Konukoglu, Glocker, Journal of Machine Learning Research. 20178Daniel C. Castro, Jeremy Tan, Bernhard Kainz, Ender Konukoglu, and Ben Glocker. Morpho- MNIST: Quantitative assessment and diagnostics for representation learning. Journal of Machine Learning Research, 20(178), 2019.
PyTorch: An imperative style, high-performance deep learning library. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary Devito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala, Advances in Neural Information Processing Systems. 32Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), pages 8024-8035, 2019.
Pyro: Deep universal probabilistic programming. Eli Bingham, Jonathan P Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul Szerlip, Paul Horsfall, Noah D Goodman, Journal of Machine Learning Research. 2028Eli Bingham, Jonathan P. Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul Szerlip, Paul Horsfall, and Noah D. Goodman. Pyro: Deep universal probabilistic programming. Journal of Machine Learning Research, 20(28), 2019.
UK Biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age. Cathie Sudlow, John Gallacher, Naomi Allen, Valerie Beral, Paul Burton, John Danesh, Paul Downey, Paul Elliott, Jane Green, Martin Landray, PLoS Medicine. 123Cathie Sudlow, John Gallacher, Naomi Allen, Valerie Beral, Paul Burton, John Danesh, Paul Downey, Paul Elliott, Jane Green, Martin Landray, et al. UK Biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Medicine, 12(3), 2015.
Ageing and the brain. Ruth Peters, Postgraduate Medical Journal. 82964Ruth Peters. Ageing and the brain. Postgraduate Medical Journal, 82(964):84-88, 2006.
Neural spline flows. Conor Durkan, Artur Bekasov, Iain Murray, George Papamakarios, Advances in Neural Information Processing Systems. 32Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural spline flows. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), pages 7511-7522, 2019.
Density estimation using real nvp. Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio, Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp, 2016. URL https://arxiv.org/abs/1605.08803.
| zyda_arxiv-0635000 |
THE L ∞ -POSITIVITY PRESERVING PROPERTY AND STOCHASTIC COMPLETENESS
22 Dec 2021
Andrea Bisterzo
Ludovico Marini
We say that a Riemannian manifold satisfies the L p -positivity preserving property if (−∆ + 1)u ≥ 0 in a distributional sense implies u ≥ 0 for all u ∈ L p . While geodesic completeness of the manifold at hand ensures the L p -positivity preserving property for all p ∈ (1, +∞), when p = +∞ some assumptions are needed. In this paper we show that the L ∞ -positivity preserving property is in fact equivalent to stochastic completeness, i.e., the fact that the minimal heat kernel of the manifold preserves probability. The result is achieved via some monotone approximation results for distributional solutions of −∆ + 1 ≥ 0, which are of independent interest.
Introduction
Let (M, g) be an n-dimensional Riemannian manifold with Riemannian measure dµ g . In the following ∆ = div ∇ is the (negatively defined) Laplace-Beltrami operator and, unless explicitly stated, all integrals are taken with respect to the Riemannian volume measure dµ g .
The aim of this paper is to study qualitative properties for certain solutions of elliptic PDEs involving the following Schrödinger operator (1.1) L := ∆ − 1.
We say that u ∈ L 1 loc (M ) solves (−∆ + 1)u ≥ 0 in the sense of distributions if

∫_M u (−∆ + 1)ϕ dµ_g ≥ 0

for all ϕ ∈ C ∞ c (M ) with ϕ ≥ 0. Note that this is equivalent to saying that (−∆ + 1)u is a positive Radon measure. If more regularity is assumed, namely u ∈ W 1,2 loc (M ), we speak of a weak solution of (−∆ + 1)u ≥ 0 if

∫_M g(∇ϕ, ∇u) + uϕ dµ_g ≥ 0

for all ϕ ∈ C ∞ c (M ) with ϕ ≥ 0. Finally, if u ∈ C 2 (M ), then (−∆ + 1)u ≥ 0 is intended in a strong, pointwise sense. Naturally, if u ∈ C 2 (M ) is a strong solution of the inequality, it is also a weak and thus a distributional solution.
We begin with the following definition:

Definition 1.1. We say that (M, g) has the L p -positivity preserving property, for p ∈ [1, +∞], if every u ∈ L p (M ) satisfying (−∆ + 1)u ≥ 0 in the sense of distributions is non-negative almost everywhere.

The definition was proposed by Güneysu in [13] although the case p = 2 of this property appears in previous works of Kato, [20], and Braverman, Milatovic and Shubin, [4]. In this last paper, the authors proved that the validity of the L 2 -positivity preserving property implies the essential self-adjointness of the Schrödinger operator −∆ + V for all L 2 loc nonnegative potentials V . Recall that −∆+V : C ∞ c (M ) → L 2 (M ) is essentially self-adjoint if it has a unique self-adjoint extension to L 2 (M ) (its closure). On the other hand, the operator −∆ + V is known to be essentially self-adjoint on geodesically complete manifolds, see [29, Corollary 2.9] or [4] and [16] for the case of operators acting on Hermitian vector bundles. This motivated the conjecture, usually referred to as the BMS conjecture, that the L 2 -positivity preserving property holds on every geodesically complete Riemannian manifold. This conjecture has remained open for 20 years and has only recently been solved in the positive by Pigola and Veronelli in [25]. For a complete introduction to the topic we refer to the nice survey [14], to Chapter XIV.5 of [15] and Appendix B of [4].
The case p = +∞ of Definition 1.1 is instead related to stochastic completeness. Recall that a manifold is said to be stochastically complete if the Brownian paths on M have almost surely infinite lifetime or, equivalently, if the minimal positive heat kernel associated to the Laplace-Beltrami operator preserves probability. For the scope of this article, however, we shall adopt the following (equivalent) definition, which is more relevant from the point of view of PDEs.
Definition 1.2. A Riemannian manifold (M, g) is said to be stochastically complete if the only bounded, non-negative C 2 solution of ∆u ≥ u on M is u ≡ 0.
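As a toy illustration of this definition (our own example, not taken from the paper): on M = R the equation ∆u = u reduces to the ODE u″ = u, whose general solution can be computed symbolically, assuming sympy is available:

```python
import sympy as sp

x = sp.symbols('x', real=True)
u = sp.Function('u')

# general solution of u'' = u on the real line
sol = sp.dsolve(sp.Eq(u(x).diff(x, 2), u(x)), u(x))
rhs = sol.rhs

# the returned expression really solves the ODE
assert sp.simplify(rhs.diff(x, 2) - rhs) == 0
```

Every solution is a combination of e^x and e^{−x}, and any nonzero combination is unbounded on R; hence the only bounded, non-negative solution is u ≡ 0, consistent with the stochastic completeness of Euclidean space.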
There are countless characterizations of stochastic completeness; a comprehensive account is beyond the scope of this paper, and we refer the reader to [9,11,23,24] or the very recent [12]. See also Section 2 below. Stochastic completeness is implied by several geometric, analytic and probabilistic conditions. For instance, stochastic completeness is ensured by conditions on the curvature tensor. In this direction, the most general result is the one of Hsu in [19], a particular case of which states that geodesically complete manifolds whose Ricci curvature satisfies

Ric(x) ≥ −C r^2(x)
outside a compact set are in fact stochastically complete.
As a matter of fact, the L ∞ -positivity preserving property implies stochastic completeness of the manifold at hand, as observed by Güneysu in [13]. In particular, stochastically incomplete manifolds provide counterexamples to the validity of the L ∞ -positivity preserving property. As an example, take a Cartan-Hadamard manifold whose Ricci curvature diverges at −∞ faster than quadratically; for the computations we refer to [21].
In recent years there has been an effort to better understand the L p -positivity preserving property and to find geometric and analytic conditions ensuring its validity. If one takes M = R n with the usual Euclidean metric, the L 2 -positivity preserving property was first proved by Kato, [20], using the theory of operators on tempered distributions. In a Riemannian setting, however, one does not dispose of tempered distributions and it is thus necessary to take other paths. Following an idea of Davies in [4], if the manifold admits a family of smooth cutoff functions with a good control on the Laplacian, it is possible to prove the L p -positivity preserving property. In this direction we mention the results by Braverman, Milatovic and Shubin in [4]; by Güneysu in [13,15]; by Bianchi and Setti in [2]; and by the second author and Veronelli in [21]. Using a completely different strategy, Pigola and Veronelli, [25], were finally able to prove that the L p -positivity preserving property for p ∈ (1, +∞) holds on geodesically complete manifolds, thus verifying that the BMS conjecture is true.

Remark 1.3. Without geodesic completeness the L p -positivity preserving property generally fails for every p ∈ [1, +∞]. To see this, take B 1 ⊆ R 2 , the Euclidean open ball of radius 1. Then the radial function u(r) = −r, which belongs to all L p spaces, is a non-positive function which satisfies (−∆ + 1)u ≥ 0.
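The radial computation behind this remark can be checked symbolically (a sanity check of ours, assuming sympy is available): in polar coordinates on R², ∆u = u″ + u′/r for radial u, and for u(r) = −r one gets (−∆ + 1)u = 1/r − r, which is non-negative on the punctured unit ball even though u is non-positive.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
u = -r  # the radial function of Remark 1.3

lap_u = sp.diff(u, r, 2) + sp.diff(u, r) / r   # radial Laplacian in dimension 2
expr = sp.simplify(-lap_u + u)                 # (-Δ + 1)u

assert sp.simplify(expr - (1/r - r)) == 0
# non-negative at sample points of (0, 1):
assert all(expr.subs(r, q) >= 0 for q in [sp.Rational(1, 10), sp.Rational(1, 2), sp.Rational(9, 10)])
```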
The proof of Pigola and Veronelli uses some new regularity results for non-negative subharmonic distributions to prove that the L p -positivity preserving property is implied by a Liouville-type property for L p -subharmonic distributions. When p ∈ (1, +∞), this property is known to hold on geodesically complete manifolds thanks to a result of Yau, [31]. This strategy, however, fails when p = 1 or p = +∞ since there are known counterexamples to the Liouville-type property.

Remark 1.4. To the best of our knowledge, when p = +∞ the most general condition known so far ensuring the validity of the L ∞ -positivity preserving property is the one of Theorem II in [21]. This condition, which requires geodesic completeness and
Ric(x) ≥ −C r^2(x)
outside a compact set, is essentially the celebrated condition of Hsu, [19], for stochastic completeness.
The above observation suggests a much closer relation between stochastic completeness and the L ∞ -positivity preserving property. The main result of this article is in fact the following:
Theorem A. Let (M, g) be a Riemannian manifold, then M has the L ∞ -positivity preserving property if and only if it is stochastically complete.
Theorem A together with the result of Pigola and Veronelli, [25], give the full picture of the L p -positivity preserving property when p ∈ (1, +∞]. When p = 1, the best result we have is the one of the second author with Veronelli, [21,Theorem II], which ensures the L 1 -positivity preserving property if the manifold is complete and the Ricci curvature essentially grows like
Ric(x) ≥ −C r^2(x) outside of a compact set. Using a construction suggested to us by Veronelli we also prove the following:
Theorem B. For every ε > 0, there exists a 2-dimensional Riemannian manifold (M, g) whose Gaussian curvature satisfies
K(x) ∼ −C r(x)^{2+ε},
such that the L 1 -positivity preserving property fails on M . Here r(x) denotes the Riemannian distance from some fixed pole. Remark 1.5. Theorem B together with Remark 1.3 show that the result of Theorem II in [21] alluded in the above is optimal. Remark 1.6. Using a simple trick introduced in [18], the counterexample in dimension 2 of Theorem B can be used to construct counterexamples to the L 1 -positivity preserving property in arbitrary dimensions n ≥ 2. It suffices to take the product of the 2 dimensional model manifold M with an arbitrary n − 2 dimensional closed Riemannian manifold. Extending the function which provides the counterexample on M to the whole product produces a counterexample in a manifold of dimension n.
In order to prove that stochastic completeness implies the L ∞ -positivity preserving property, we show that it is essentially a problem of regularity for the distributional, L ∞ solutions of Lu ≥ 0, where L is defined in (1.1). Using a Brezis-Kato inequality we reduce the problem to proving the following:
Proposition C. Let (M, g) be a Riemannian manifold and let u ∈ L ∞ (M ) satisfying Lu ≥ 0 in the sense of distributions. Then, there exists some w ∈ C ∞ (M ) with sup M w < +∞ such that u ≤ w and Lw ≥ 0 in the strong sense.
This latter result follows from a monotone approximation theorem for the distributional solutions of Lu ≥ 0 which is of independent interest.
Theorem D. Let (M, g) be a Riemannian manifold, let Ω ⋐ M be a relatively compact open set and let u ∈ L ∞ (Ω) satisfy Lu ≥ 0 in the sense of distributions. Then, there exists a sequence {u k } ⊂ C ∞ (Ω) such that:
(i) u k ց u pointwise a.e.;
(ii) Lu k ≥ 0 for all k;
(iii) u k → u in L 1 (Ω);
(iv) sup Ω u k ≤ 2 ess sup Ω u.
Using a trick due to Protter and Weinberger, [26], it is sufficient to prove a monotone approximation result for the distributional solutions of ∆ α v ≥ 0, where ∆ α v := α^{−2} div(α^2 ∇v) and α is a smooth positive function to be specified later. The monotone approximation for the weighted Laplacian is obtained using a strategy outlined by Bonfiglioli and Lanconelli in [3] together with some mean value representation formulas for the solutions of ∆ α v = 0. Theorem D generalizes a result of Pigola and Veronelli in [25] where the monotone approximation was proved only on coordinate charts. If the manifold at hand admits a minimal, positive Green function for the operator ∆ α (i.e. it is α-non-parabolic) and if this Green function vanishes at infinity (i.e. it is strongly α-non-parabolic), as a byproduct of the proof of Theorem D we obtain a global, monotone approximation result.

The paper is organized as follows. In Section 2 we study the relation between the L ∞ -positivity preserving property and stochastic completeness, showing that the former property implies the latter and that the converse is true up to a claim which is proved later on. Section 3 is devoted to the monotone approximation results. We first observe that the conclusions of Theorem D can be inferred from an equivalent statement for the operator ∆ α . We prove some mean value representation formulae for α-harmonic functions and show how these can be used to produce a monotone approximating sequence with the desired properties. As a corollary of Theorem D, we obtain the desired claim which concludes the proof of Theorem A. We end the section by observing that if we make some assumptions on the geometry of M , the monotone approximation results have a global nature. Finally, in Section 4 we construct a class of Riemannian manifolds on which the L 1 -positivity preserving property fails, thus proving Theorem B.
As mentioned above, when the Green function of ∆ α vanishes at infinity the approximation of Theorem D becomes global: for every u ∈ L ∞ (M ) with Lu ≥ 0 in the sense of distributions there exists a sequence {u k } ⊂ C ∞ (M ) such that:
(i) u k ց u pointwise a.e.;
(ii) Lu k ≥ 0 for all k;
(iii) u k → u in L 1 (M );
(iv) sup M u k ≤ 2 ess sup M u.
L ∞ -positivity preserving property and stochastic completeness
The aim of this section is to investigate the connection between the L ∞ -positivity preserving property and stochastic completeness. As pointed out in the introduction, there are several possible definitions one can give for stochastic completeness. We cite here the ones relevant to our exposition. Namely, (M, g) is stochastically complete if and only if any of the following equivalent conditions holds:
(i) for every λ > 0, the only bounded, non-negative C 2 solution of ∆u ≥ λu is u ≡ 0;
(ii) for every λ > 0, the only bounded, non-negative C 2 solution of ∆u = λu is u ≡ 0;
(iii) the only bounded, non-negative C 2 solution of ∆u = u is u ≡ 0. For a proof of the equivalence we refer to Theorem 6.2 in [9].
Remark 2.1. Note that the regularity required in the above and in Definition 1.2 can be relaxed to C 0 (M ) ∩ W 1,2 loc (M ); see for instance Section 2 of [1]. This fact is a consequence of a stronger version of Theorem 2.6 below.
We begin with the following observation due to Güneysu, [13].

Proposition 2.2. If (M, g) has the L ∞ -positivity preserving property, then it is stochastically complete.

Proof. To see this, take u ∈ C 2 (M ) a bounded and non-negative function satisfying ∆u ≥ u. Then, if we set v = −u we have v ∈ L ∞ (M ) and (−∆ + 1)v ≥ 0.
By the L ∞ -positivity preserving property we conclude that v ≥ 0; since u is non-negative, this yields v ≡ 0 and hence u ≡ 0.
Remark 2.3. It is worthwhile noticing that stochastic completeness is in general unrelated to geodesic completeness. It is possible to find Riemannian manifolds which are geodesically but not stochastically complete, such as Cartan-Hadamard manifolds whose Ricci curvature diverges at −∞ faster than quadratically. On the other hand, R n \{0} endowed with the Euclidean metric is stochastically complete but geodesically incomplete.
Proposition 2.2 and the above remark explain the failure of the result of Pigola and Veronelli, [25], in the case p = ∞. For the converse implication of Theorem A, let M be stochastically complete and let u ∈ L ∞ (M ) satisfy (−∆ + 1)u ≥ 0 in the sense of distributions; we have to show that the negative part u − := max{−u, 0} vanishes. Since L(−u) ≥ 0 we conclude that Lu − ≥ 0 in the sense of distributions. If u − happened to be a C 2 (M ) function, stochastic completeness would allow us to conclude that u − ≡ 0, hence u ≥ 0. Note that, according to Remark 2.1, u − ∈ C 0 (M ) ∩ W 1,2 loc (M ) would be sufficient. In general, however, this is not the case and, as a matter of fact, it is a stronger requirement than what we actually need. Indeed, if we find w ∈ C 2 (M ) such that sup M w < +∞, 0 ≤ u − ≤ w and Lw ≥ 0, then stochastic completeness applied to w implies that w, hence u − , is identically zero.
The existence of such a function w is implied by the following corollary of Theorem D, whose proof is postponed to the next section.

Corollary 2.5. Let u ∈ L ∞ (M ) satisfy Lu ≥ 0 in the sense of distributions. Then, for every relatively compact open set Ω ⋐ M there exists u Ω ∈ C ∞ (Ω) such that u ≤ u Ω ≤ 2 ess sup M u on Ω and Lu Ω ≥ 0 in the strong sense.

Via a compactness argument we use the functions u Ω to construct the function w. The following theorem, proved by Sattinger in [28], also comes to our aid as it allows us to obtain L-harmonic functions from super/sub solutions of Lu = 0.

Theorem 2.6. Let u 1 , u 2 ∈ C ∞ (M ) satisfy
Lu 1 ≥ 0, Lu 2 ≤ 0, u 1 ≤ u 2 on M . Then, there exists some w ∈ C ∞ (M ) such that u 1 ≤ w ≤ u 2 and Lw = 0.
Remark 2.7. Theorem 2.6 is a weaker formulation of a much more general theorem, proved by Ratto, Rigoli and Véron, [27], for a wider class of functions, namely u 1 , u 2 ∈ C 0 (M ) ∩ W 1,2 loc (M ). This result goes under the name of sub and supersolution method or monotone iteration scheme. Note that the results of [27] hold for a larger class of second order elliptic operators. For a survey on the subject, we refer to Heikkilä and Lakshmikantham, [17].
Using the functions constructed locally in Corollary 2.5 together with an exhaustion procedure we now construct the function w. Let u ∈ L ∞ (M ) be non-negative with Lu ≥ 0 in the sense of distributions and set c := ess sup M u ≥ 0. Take {Ω h } an exhaustion of M by relatively compact sets such that
Ω 1 ⋐ Ω 2 ⋐ . . . ⋐ Ω h ⋐ Ω h+1 ⋐ . . . ⋐ M,
∂Ω h is smooth and M = ∪ h Ω h . On each set Ω h we apply Corollary 2.5 and we obtain a sequence of functions u h ∈ C ∞ (Ω h ) such that
(1) u ≤ u h ≤ 2c in Ω h ; (2) Lu h ≥ 0 strongly on Ω h .
Since Lc ≤ 0, we use Theorem 2.6 on each Ω h to obtain w h ∈ C ∞ (Ω h ) satisfying
(1) Lw h = 0; (2) u h ≤ w h ≤ 2c.
We conclude by showing that {w h } h is bounded respect to the C ∞ (M )-topology and thus converges, up to a subsequence, to some w ∈ C ∞ (M ).
To this end, let K ⊂ M be a compact set and k ∈ N, k ≥ 2. By Schauder estimates for the operator L we have
||w_h||_{C^k(K)} ≤ A ( ||w_h||_{L^∞(K)} + ||Lw_h||_{C^{k−2,α}(K)} )
for some α ∈ (0, 1). See for instance Section 6.1 of [8]. In particular there exists a constant C = C(K, n, k) > 0 such that w h C k (K) < C for every h ∈ N. Here
||w_h||_{C^k(K)} = ||w_h||_{L^∞(K)} + ||∇w_h||_{L^∞(K)} + · · · + ||∇^k w_h||_{L^∞(K)}.
Since {w h } h is pre-compact, it converges in the C ∞ (M ) topology up to a subsequence, denoted again with {w h } h . Let w ∈ C ∞ (M ) be the C ∞ limit, we have that u ≤ w ≤ 2c and Lw = 0.
This concludes the proof of Theorem A, apart from the proof of Corollary 2.5.
Monotone approximation results
This section is devoted to the proof of Theorem D. Instead of proving Theorem D directly, we prove an equivalent monotone approximation result for another elliptic differential operator closely related to L. We begin by taking a function α ∈ C ∞ (M ) satisfying
(3.1) Lα = 0, α > 0.
The existence of such a function is ensured by [7], and is equivalent to the fact that λ 1 (−L) > 0, where λ 1 (−L) denotes the bottom of the spectrum of the operator −L. In our case it is easy to see that λ 1 (−L) ≥ 1. Using α we define the following drifted Laplacian
(3.2) ∆_α : u → α^{−2} div(α^2 ∇u).
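As a concrete instance (an illustration of ours, not from the paper): on M = R one may take α(x) = cosh x, which is positive and satisfies (3.1), and then (3.2) becomes ∆_α u = u″ + 2 tanh(x) u′, a Laplacian with drift 2∇log α. A symbolic check, assuming sympy is available:

```python
import sympy as sp

x = sp.symbols('x')
alpha = sp.cosh(x)  # positive and solves Lα = α'' − α = 0 on the real line
assert sp.simplify(alpha.diff(x, 2) - alpha) == 0

u = sp.Function('u')(x)
drift = sp.diff(alpha**2 * sp.diff(u, x), x) / alpha**2  # ∆_α u, cf. (3.2)

# expands to u'' + 2 tanh(x) u'
assert sp.simplify(drift - (u.diff(x, 2) + 2*sp.tanh(x)*u.diff(x))) == 0
```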
With a trivial density argument, one has that ∆ α is symmetric in L 2 with respect to the measure α 2 dµ g . Then, using the following idea due to Protter and Weinberger, [26], we establish the relation between ∆ α and L. See also Lemma 2.3 of [25].
Lemma 3.1. If u ∈ L 1 (Ω) with Ω ⋐ M , then

(∆ − 1)u ≥ 0 ⇔ ∆_α(u/α) ≥ 0,
where both inequalities are intended in the sense of distributions.
Proof. Fix 0 ≤ ϕ ∈ C ∞ c (Ω); by direct computation we have
α ∆_α(ϕ/α) = α^{−1} div(α^2 ∇(ϕ/α)) = α^{−1} div(α∇ϕ − ϕ∇α) = ∆ϕ − ϕ ∆α/α = Lϕ, (3.3)
where in the last equation we have used (3.1). Thus, using (3.3) and the symmetry of ∆ α we conclude
⟨∆_α(u/α), ϕ/α⟩_{L^2(α^2 dµ_g)} = ∫_Ω (u/α) ∆_α(ϕ/α) α^2 dµ_g = ∫_Ω u (∆ − 1)ϕ dµ_g = ((∆ − 1)u, ϕ)_{L^2}.
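The key identity (3.3) can be verified symbolically in the one-dimensional model, where ∆ = d²/dx² and, by (3.2), ∆_α f = α^{−2}(α² f′)′: for arbitrary smooth α and ϕ one gets α ∆_α(ϕ/α) = ϕ″ − ϕ α″/α, which equals Lϕ whenever α″ = α, i.e. Lα = 0. A sketch assuming sympy is available:

```python
import sympy as sp

x = sp.symbols('x')
alpha = sp.Function('alpha', positive=True)(x)
phi = sp.Function('phi')(x)

def delta_alpha(f):
    # drifted Laplacian on the line: α^{-2} (α^2 f')', cf. (3.2)
    return sp.diff(alpha**2 * sp.diff(f, x), x) / alpha**2

lhs = alpha * delta_alpha(phi / alpha)
rhs = phi.diff(x, 2) - phi * alpha.diff(x, 2) / alpha  # = Δφ − φ·Δα/α

assert sp.simplify(lhs - rhs) == 0
```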
Using Equation (3.3) and setting v = α −1 u, it is possible to obtain Theorem D from an equivalent statement for the operator ∆ α . In this perspective, our goal is to prove the following:
Theorem 3.2. Let Ω ⋐ M and let v ∈ L ∞ (Ω) satisfy ∆ α v ≥ 0 in the sense of distributions. Then, there exists a sequence {v k } ⊂ C ∞ (Ω) such that:
(i) v k ց v pointwise a.e.;
(ii) ∆ α v k ≥ 0 for all k;
(iii) v k → v in L 1 (Ω);
(iv) sup Ω v k ≤ ess sup Ω v.
3.1. Representation formula for α-harmonic functions. We begin by establishing some mean value representation formulae involving the Green function of the operator ∆ α on Ω with Dirichlet boundary conditions. Recall that G : Ω × Ω \ {x = y} → R is a symmetric, L 1 (Ω × Ω) function satisfying the following properties: (a) G ∈ C ∞ (Ω × Ω \ {x = y}) and G(x, y) > 0 for all x, y ∈ Ω with x ≠ y; (b) lim x→y G(x, y) = +∞ and G(x, y) = 0 if x ∈ ∂Ω (or y ∈ ∂Ω); (c) ∆ α G(x, y) = −δ x (y) with respect to α 2 dµ g , that is,
ϕ(x) = −∫_Ω G(x, y) ∆_α ϕ(y) α^2(y) dµ_y   ∀ϕ ∈ C ∞ c (Ω).
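Property (c) can be checked in the simplest flat model (an illustration under our own simplifying assumptions, not from the paper): take α ≡ 1 and Ω = (0, 1) ⊂ R, where the Dirichlet Green function of d²/dx² is G(x, y) = min(x, y)(1 − max(x, y)). For ϕ(y) = sin(πy), which vanishes on ∂Ω (enough for the boundary terms to vanish), the representation ϕ(x) = −∫_Ω G(x, y) ϕ″(y) dy can be verified symbolically, assuming sympy is available:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
phi = sp.sin(sp.pi * y)

# Dirichlet Green function of d^2/dx^2 on (0,1): G(x,y) = min(x,y)(1 - max(x,y))
left = sp.integrate(y * (1 - x) * (-phi.diff(y, 2)), (y, 0, x))    # region y < x
right = sp.integrate(x * (1 - y) * (-phi.diff(y, 2)), (y, x, 1))   # region y > x

assert sp.simplify(sp.expand(left + right - sp.sin(sp.pi * x))) == 0
```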
For r > 0 and x ∈ Ω, we define the following set
(3.4) B_r(x) := {y ∈ Ω | G(x, y) > r^{-1}} ∪ {x}.
We adopt the convention G(x, x) = +∞, so that B_r(x) = {y ∈ Ω | G(x, y) > r^{-1}}. Observe that the sets B_r(x) ⊂ Ω are open and relatively compact; moreover, for almost all r > 0, ∂B_r(x) is a smooth hypersurface. This is a consequence of Sard's theorem. In the following, dσ and dµ denote the Riemannian surface and volume measures of ∂B_r(x) and B_r(x), respectively.
Proposition 3.3. For every v ∈ C^∞(Ω) and almost every r > 0, the following representation formula holds:
(3.5) v(x) = ∫_{∂B_r(x)} v(y) |∇G(x, y)| α²(y) dσ_y − ∫_{B_r(x)} (G(x, y) − 1/r) ∆_α v(y) α²(y) dµ_y.
Proof. By the Green identity we have
v(x) = −∫_{B_r(x)} G(x, y) ∆_α v(y) α²(y) dµ_y + ∫_{∂B_r(x)} [G(x, y) ∂v/∂ν(y) − v(y) ∂G/∂ν(x, y)] α²(y) dσ_y.
Since ∂G/∂ν = −|∇G| and G ≡ 1/r on ∂B_r(x), we obtain
v(x) = ∫_{∂B_r(x)} v(y) |∇G(x, y)| α²(y) dσ_y + (1/r) ∫_{∂B_r(x)} ∂v/∂ν(y) α²(y) dσ_y − ∫_{B_r(x)} G(x, y) ∆_α v(y) α²(y) dµ_y
     = ∫_{∂B_r(x)} v(y) |∇G(x, y)| α²(y) dσ_y − ∫_{B_r(x)} (G(x, y) − 1/r) ∆_α v(y) α²(y) dµ_y,
where in the last step we used the divergence theorem to write (1/r) ∫_{∂B_r(x)} ∂v/∂ν α² dσ_y = (1/r) ∫_{B_r(x)} ∆_α v α² dµ_y.
In particular, if v ∈ C²(Ω) is α-harmonic, i.e. ∆_α v = 0 on Ω, then
(3.6) v(x) = ∫_{∂B_r(x)} |∇G(x, y)| v(y) α²(y) dσ_y.
The formulae (3.5) and (3.6) are a generalization of standard representation formulae for the Laplace-Beltrami operator. See for instance the Appendix of [3], [22], or the very recent [6].
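As a concrete illustration (not part of the original argument), take α ≡ 1 and, formally, the full-space Green function of the Laplacian on R^n, n ≥ 3:

```latex
G(x,y)=\frac{|x-y|^{2-n}}{(n-2)\,\omega_{n-1}},\qquad
B_r(x)=\Big\{\,y:\ |x-y|<\rho(r):=\Big(\frac{r}{(n-2)\,\omega_{n-1}}\Big)^{\frac{1}{n-2}}\Big\},
```

where ω_{n−1} is the area of the unit sphere in R^n, so the super-level sets are round balls. On ∂B_r(x),

```latex
|\nabla G|=\frac{|x-y|^{1-n}}{\omega_{n-1}}
\quad\Longrightarrow\quad
\int_{\partial B_r(x)}|\nabla G|\,d\sigma
=\frac{\rho(r)^{1-n}}{\omega_{n-1}}\cdot\omega_{n-1}\,\rho(r)^{n-1}=1,
```

and (3.6) reduces to the classical spherical mean value property for harmonic functions.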
3.2. Distributional vs. potential α-subharmonic solutions. Before proving the monotone approximation result, we observe that α-subharmonicity in the distributional sense is closely related to α-subharmonicity in the sense of potential theory.
Definition 3.4. We say that an upper semicontinuous function u : Ω → [−∞, +∞) is α-subharmonic in the sense of potential theory on Ω if the following conditions hold:
(i) {x ∈ Ω | u(x) > −∞} ≠ ∅;
(ii) for all V ⋐ Ω and for every h ∈ C²(V) ∩ C⁰(V̄) such that ∆_α h = 0 in V and u ≤ h on ∂V, one has u ≤ h in V.
The key observation, first noted by Sjögren in [30, Theorem 1] in the Euclidean setting, is that every distributional α-subharmonic function is almost everywhere equal to a function which is α-subharmonic in the sense of potential theory. Note that in [30, Theorem 1], Sjögren considers a wider class of elliptic differential operators; the drifted Laplace-Beltrami operator falls into that class.
More precisely, if v ∈ L¹(Ω) satisfies ∆_α v ≥ 0 in the sense of distributions, then v is equal almost everywhere to an α-subharmonic function in the sense of potential theory. Naturally, if v has some better regularity property, for example if it is continuous, the equality holds everywhere. This fact holds true also in the Riemannian case; we sketch the proof for clarity of exposition.
Recall that for every ϕ ∈ C ∞ c (Ω) we have
ϕ(x) = −∫_Ω G(x, y) ∆_α ϕ(y) α²(y) dµ_y.
Furthermore, since ∆_α v = dν^v is a positive Radon measure, we have
∫_Ω v(x) ∆_α ϕ(x) α²(x) dµ_x = ∫_Ω ϕ(x) dν^v_x   for every ϕ ∈ C_c^∞(Ω).
The measure dν^v is often referred to as the ∆_α-Riesz measure of v. By a direct computation we have
∫_Ω v(x) ∆_α ϕ(x) α²(x) dµ_x = ∫_Ω ϕ(x) dν^v_x = −∫_Ω ∫_Ω G(x, y) ∆_α ϕ(y) α²(y) dµ_y dν^v_x = ∫_Ω (−∫_Ω G(x, y) dν^v_x) ∆_α ϕ(y) α²(y) dµ_y,
hence,
∫_Ω (v(y) + ∫_Ω G(x, y) dν^v_x) ∆_α ϕ(y) α²(y) dµ_y = 0,
for every 0 ≤ ϕ ∈ C_c^∞(Ω). In other words, the function v + ∫_Ω G(x, ·) dν^v_x is α-harmonic in the sense of distributions. By [30, Theorem 1] of Sjögren we know that distributionally α-harmonic functions are almost everywhere equal to a function which is α-harmonic in the sense of potential theory. When the operator at hand is the Euclidean Laplacian, this result is usually referred to as Weyl's lemma. We conclude that
(3.7) v = h − ∫_Ω G(x, ·) dν^v_x   almost everywhere,
where h is α-harmonic in a strong sense. On the other hand, one can prove that the function
(3.8) −G ∗ dν^v = −∫_Ω G(x, ·) dν^v_x
is α-subharmonic in the sense of potential theory, which concludes the sketch of the proof. For this latter statement, we refer to Section 6 of [3].
3.3. Proof of Theorem 3.2.
In order to prove Theorem 3.2, we adopt a strategy laid out by Bonfiglioli and Lanconelli in [3], where they obtained some monotone approximation results for a wide class of second order elliptic operators on R n . To do so, we begin by defining the following mean integral operators. If v is an upper semicontinuous function on Ω, x ∈ Ω and r > 0, we set
(3.9) m_r(v)(x) := ∫_{∂B_r(x)} v(y) |∇_y G(x, y)| α²(y) dσ_y.
In particular, if v is an α-subharmonic function in the sense of distributions we have the following results, which are an adaptation to the case of Riemannian manifolds of [3].
Proposition 3.5. Given a Riemannian manifold (M, g) and Ω ⋐ M, if v ∈ L¹(Ω) is α-subharmonic in the sense of distributions, then:
(i) v(x) ≤ m_r(v)(x) for almost every x ∈ Ω and almost every r > 0;
(ii) if 0 < s < r, then m_s(v)(x) ≤ m_r(v)(x) almost everywhere in Ω;
(iii) for almost every x ∈ Ω we have lim_{r→0} m_r(v)(x) = v(x);
(iv) for every r > 0, m_r(v) is α-subharmonic in the sense of potential theory on Ω.
Proof. By the observation in the previous section, up to the choice of a good representative, we can assume that v is α-subharmonic in the sense of potential theory, cf. Definition 3.4.
(i) Fix x_0 ∈ Ω and r > 0, and consider ϕ ∈ C⁰(∂B_r(x_0)) such that v ≤ ϕ on ∂B_r(x_0). Let h : B_r(x_0) → R be the solution of
(3.10) ∆_α h = 0 in B_r(x_0), h = ϕ on ∂B_r(x_0).
Since v is α-subharmonic in the sense of potential theory, v ≤ h in B_r(x_0). By Proposition 3.3 we have
(3.11) v(x_0) ≤ h(x_0) = ∫_{∂B_r(x_0)} ϕ(y) |∇_y G(x_0, y)| dσ^α_y, where dσ^α_y = α²(y) dσ_y.
Since v is upper semicontinuous on ∂B_r(x_0), there exists a sequence {ϕ_i}_i ⊂ C⁰(∂B_r(x_0)) such that ϕ_i(y) ց v(y) almost everywhere on ∂B_r(x_0). Applying (3.11) to each ϕ_i, we obtain by dominated convergence that
v(x_0) ≤ ∫_{∂B_r(x_0)} v(y) |∇_y G(x_0, y)| dσ^α_y = m_r(v)(x_0).
(ii) Fix 0 < s < r and let ϕ and h be as in (i), so that v ≤ h in B_r(x_0). By Proposition 3.3 we have
m_s(v)(x_0) ≤ ∫_{∂B_s(x_0)} h(y) |∇_y G(x_0, y)| dσ^α_y = h(x_0) = ∫_{∂B_r(x_0)} ϕ(y) |∇_y G(x_0, y)| dσ^α_y.
Taking a monotone sequence of continuous functions ϕ_i ց v on the boundary and proceeding as above, we conclude
m_s(v)(x_0) ≤ ∫_{∂B_r(x_0)} ϕ_i(y) |∇_y G(x_0, y)| dσ^α_y −→ m_r(v)(x_0).
(iii) This property is a consequence of the fact that v is (almost everywhere) equal to an upper semicontinuous function. Fix x_0 ∈ Ω and ε > 0; there exists a small enough neighborhood V(x_0) of x_0 such that v(y) < v(x_0) + ε on V(x_0). Taking r > 0 small enough, we have
m_r(v)(x_0) ≤ v(x_0) + ε.
Recall that the constant function 1 is α-harmonic on Ω.
By (i), v(x_0) ≤ m_r(v)(x_0), hence m_r(v)(x_0) − ε ≤ v(x_0) ≤ m_r(v)(x_0).
Letting ε, and thus r, go to 0, we obtain the desired property.
(iv) This last property is a consequence of the decomposition of α-subharmonic functions observed in (3.7). Integrating both sides of (3.7) against |∇G| α², we obtain
m_r(v)(x) = h(x) − m_r(G ∗ dν^v)(x).
The desired property follows from the fact that the mean integral −m_r(G ∗ dν^v) is α-subharmonic in the sense of potential theory. For details we refer to Section 6 of [3].
The next step is to take a convolution of the mean integral functions m_r(v) so as to obtain smooth functions which produce the desired approximating sequence {v_k}_k.
Proof of Theorem 3.2.
Let ϕ ∈ C¹_c([0, 1]) be a non-negative function with unit L¹-norm, and define
(3.12) v_k(x) := k ∫₀^{+∞} ϕ(ks) m_s(v)(x) ds.
As shown in [3], the functions defined by (3.12) are eventually smooth. The monotonicity of the approximating sequence follows immediately from the monotonicity of m_r(v) with respect to r. Combining this with property (i) of Proposition 3.5 we obtain (i). The proof of (ii) is a consequence of (iii) in Proposition 3.5. Both of these proofs are straightforward computations; we refer to [3, Theorem 7.1] for the details. The convergence in L¹(Ω) follows from (i) and (ii), using the fact that |v_k| ≤ max{|v|, |v_1|} and dominated convergence. For the uniform estimate (iv), it is enough to observe that 1 is an α-harmonic function on Ω and that ϕ has unit L¹-norm, hence,
v_k(x) = k ∫₀^{+∞} ϕ(ks) m_s(v)(x) ds ≤ ess sup_Ω v · k ∫₀^{+∞} ϕ(ks) m_s(1)(x) ds = ess sup_Ω v.
This concludes the proof of Theorem 3.2.
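As a purely illustrative numerical analogue of the scheme (3.12) (our own construction, not from the paper): in one dimension with α ≡ 1, subharmonicity reduces to convexity and m_s(v)(x) = (v(x+s) + v(x−s))/2, so the monotone approximation from above can be observed directly. All names below (bump, v_k, etc.) are ours.

```python
import numpy as np

def bump(u):
    # Smooth, non-negative profile supported in (0, 1); normalised below.
    out = np.zeros_like(u)
    inside = (u > 0) & (u < 1)
    out[inside] = np.exp(-1.0 / (u[inside] * (1.0 - u[inside])))
    return out

s = np.linspace(0.0, 1.0, 4001)
ds = s[1] - s[0]
Z = bump(s).sum() * ds                      # L^1 normalisation of the profile

def v(x):
    return np.abs(x)                        # convex = subharmonic in 1D

def m(x, r):
    # 1D analogue of m_r(v)(x): mean over the boundary points {x - r, x + r}
    return 0.5 * (v(x + r) + v(x - r))

def v_k(x, k):
    # v_k(x) = k * Integral_0^infty phi(k s) m_s(v)(x) ds, with phi = bump / Z
    return (k * bump(k * s) / Z * m(x, s)).sum() * ds

x0 = 0.1
vals = [v_k(x0, k) for k in (2, 4, 8, 16)]
assert all(a >= b - 1e-9 for a, b in zip(vals, vals[1:]))  # decreasing in k
assert all(val >= v(x0) - 1e-9 for val in vals)            # approximation from above
```

As k grows, ϕ(k·) concentrates near 0, so v_k averages the means m_s over ever smaller radii and decreases towards v, exactly the behaviour stated in Theorem 3.2 (i).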
Remark 3.6. Note that in the last estimate, one actually has v_k(x) ≤ ess sup_{B_{1/k}(x)} v ≤ ess sup_Ω v.
This observation will be crucial later on.
3.4. Proof of Theorem D. Finally, we deduce the proof of Theorem D from Theorem 3.2. If {v_k}_k is the approximating sequence for the function v = u/α, we define u_k := α v_k. By Equation (3.3), {u_k}_k is an approximating sequence for u, as it satisfies (i)-(iii) of Theorem D. The proof is trivial and is therefore omitted. A little more effort is required to show that if sup_Ω v_k ≤ ess sup_Ω v, then sup_Ω u_k ≤ 2 ess sup_Ω u for k large enough.
To this end, fix x ∈ Ω. As noted in Remark 3.6 we have
u_k(x) = α(x) v_k(x) ≤ α(x) ess sup_{B_{1/k}(x)} v ≤ (α(x) / inf_{B_{1/k}(x)} α) ess sup_Ω u.
Furthermore, for every y ∈ B 1/k (x) we estimate
(3.13) α(x)/α(y) ≤ |α(x) − α(y)|/α(y) + 1 ≤ r_k(x) sup_Ω |∇α| / inf_Ω α + 1,
where r_k(x) = sup{d(x, z) : z ∈ B_{1/k}(x)}. Next, we show that the function r_k(x) can be uniformly bounded so that (3.13) is bounded above by 2.
Lemma 3.7. There exists some k_0 ∈ N such that
r_k(x) ≤ inf_Ω α / sup_Ω |∇α| =: c   ∀x ∈ Ω, ∀k ≥ k_0.
Proof. Suppose by contradiction that there exists a sequence of points {x k } k ⊂ Ω such that r k (x k ) > c for every k ∈ N. By definition of r k (x k ), there exists a sequence of points {y k } k ⊂ B 1/k (x k ) such that d(y k , x k ) > c.
Since Ω is relatively compact, up to a subsequence we can assume that x_k → x_∞ ∈ Ω̄ and y_k → y_∞ ∈ Ω̄. Since y_k ∈ B_{1/k}(x_k) we have
(3.14) G(x_k, y_k) > k → +∞.
Note also that the Green function G is smooth, and hence continuous, on Ω × Ω \ {x = y}. Since d(x_k, y_k) > c, we have d(x_∞, y_∞) ≥ c; in particular we deduce that x_∞ ∉ ∂Ω, because the Green function G vanishes on the boundary of Ω. Since x_∞ ∈ Ω is not on the boundary, fix k ∈ N. By (3.14) and continuity of the Green function we have G(y_∞, x_∞) ≥ k, which implies that y_∞ ∈ B_{1/k}(x_∞). In particular we have d(x_∞, y_∞) ≤ r_k(x_∞) → 0, which is a contradiction since d(x_∞, y_∞) ≥ c. Indeed, for every x ∈ Ω, lim_{k→+∞} r_k(x) = 0.
Clearly, r_k(x) is a monotone decreasing sequence in k. If its limit were some r_0 ≠ 0, this would imply that r_k(x) ≥ r_0 for all k. In particular the geodesic ball B_{r_0}(x) would be contained in B_{1/k}(x) for all k ∈ N. This, however, is a contradiction since ∩_{k=1}^∞ B_{1/k}(x) = {x}.
Thanks to Lemma 3.7, up to taking k large enough, we have α(x) ≤ 2α(y) for all x ∈ Ω and all y ∈ B_{1/k}(x); hence,
u_k(x) ≤ (α(x) / inf_{B_{1/k}(x)} α) ess sup_Ω u ≤ 2 ess sup_Ω u   ∀x ∈ Ω.
This concludes the proof of Theorem D.
3.5. Remarks on the global case. A careful analysis of the above proofs shows that the monotone approximation results can be obtained globally on the whole manifold M as long as there exists a minimal positive Green function for the operator ∆_α and the super-level sets B_r(x) are compact. Not all Riemannian manifolds, however, satisfy these conditions. We recall the following

in the sense of distributions. If t > t_ε, by direct computation we have
u′(t) = 2(1 + ε) t^{1+2ε} e^{t^{2+2ε}},
u″(t) = 2(1 + ε) e^{t^{2+2ε}} (2(1 + ε) t^{2+4ε} + (1 + 2ε) t^{2ε}),
thus
∆U − U = u″(t) + (j′(t)/j(t)) u′(t) − u(t) = e^{t^{2+2ε}} (2(1 + ε) ε t^{2ε} − 1) + e^{t_ε^{2+2ε}} ≥ 0.
On the other hand, if t < t_ε the function U is identically zero, so that ∆U − U ≥ 0 also for t ∈ (0, t_ε). To see that ∆U ≥ U in the sense of distributions on the whole manifold, we take 0 ≤ ϕ ∈ C^∞_c(M) and set M̃ := M \ B_{t_ε}(0). Then we compute
∫_M U (∆ϕ − ϕ) = ∫_{M̃} U (∆ϕ − ϕ) = −∫_{M̃} g(∇ϕ, ∇U) + ∫_{∂M̃} U ∂ϕ/∂ν − ∫_{M̃} U ϕ
 = −∫_{M̃} g(∇ϕ, ∇U) − ∫_{M̃} U ϕ
 = ∫_{M̃} ∆U ϕ − ∫_{∂M̃} (∂U/∂ν) ϕ − ∫_{M̃} U ϕ
 = ∫_{M̃} ∆U ϕ + ∫_{∂B_{t_ε}(0)} (∂U/∂t) ϕ − ∫_{M̃} U ϕ
 = ∫_{M̃} (∆U − U) ϕ + ∫_{∂B_{t_ε}(0)} u′ ϕ ≥ 0.
On the other hand we have:
∫_M |U| dV_g = ω_m ∫₀^{+∞} u(t) j(t) dt = ∫_{t_ε}^{+∞} t^{-1-ε} dt < +∞.
In conclusion, if we set V = −U we have V ∈ L¹(M) and (−∆ + 1)V ≥ 0 but V ≤ 0, which contradicts the validity of the L¹-positivity preserving property on M.
Definition 1.1. Let p ∈ [1, +∞]. We say that (M, g) has the L^p-positivity preserving property if every u ∈ L^p(M) satisfying
(1.2) (−∆ + 1)u ≥ 0 in the sense of distributions
is non-negative a.e.
Date: December 23, 2021.
The combination of these results led Braverman, Milatovic and Shubin to formulate the following
Conjecture (BMS). If (M, g) is a geodesically complete Riemannian manifold then the L²-positivity preserving property holds.
Theorem D. Let (M, g) be a Riemannian manifold and let u ∈ L 1 loc (M ) be a solution of Lu ≥ 0 in the sense of distributions. Then for every Ω ⋐ M there exists a sequence {u k } ⊂ C ∞ (Ω) such that:
Corollary E. Let (M, g) be a strongly α-non-parabolic Riemannian manifold and let u ∈ L 1 loc (M ) be a solution of Lu ≥ 0 in the sense of distributions. Then there exists a sequence {u k } ⊂ C ∞ (M ) such that:
Remark 1.7. Results such as Proposition C, Theorem D and Corollary E still hold if the constant 1 in the operator (1.1) is replaced by another positive constant. Actually, negative constants are also allowed as long as −L remains a positive operator.
Proposition 2.2. If (M, g) has the L^∞-positivity preserving property, then it is stochastically complete.
2.1. From stochastic completeness to the L^∞-positivity preserving property. The goal of this section is to set the ground towards proving the converse of Proposition 2.2. To this end, let (M, g) be a stochastically complete Riemannian manifold and take u ∈ L^∞(M) satisfying (−∆ + 1)u ≥ 0 in the sense of distributions. Our purpose is to show that u is non-negative almost everywhere or, equivalently, that the negative part u⁻ = max{0, −u} = (−u)⁺ vanishes a.e. The next ingredient in our proof is the following Brezis-Kato inequality due to Pigola and Veronelli [25, Proposition 4.1].
Theorem 2.4 (Brezis-Kato). Given a Riemannian manifold (M, g), if u ∈ L¹_loc(M) satisfies Lu ≥ 0 in the sense of distributions, then u⁺ ∈ L¹_loc(M) and Lu⁺ ≥ 0 in the sense of distributions.
Corollary 2.5. Let (M, g) be a Riemannian manifold and let u ∈ L^∞(M) be a distributional solution of Lu ≥ 0. Then, for every relatively compact Ω ⋐ M there exists some u_Ω ∈ C^∞(Ω) which solves Lu_Ω ≥ 0 in a strong sense and such that u ≤ u_Ω ≤ 2 ess sup_Ω u.
Theorem 2.8. Let (M, g) be a Riemannian manifold and let u ∈ L^∞(M) satisfy Lu ≥ 0 in the sense of distributions. Then, there exists w ∈ C^∞(M) such that u ≤ w, Lw ≥ 0 in a strong sense and sup_M w < +∞.
Proof. We begin by observing that if u ∈ L^∞(M) then, setting c = ‖u‖_{L^∞(M)}, we have Lc = −c ≤ 0 on M.
Theorem 3.2. Let (M, g) be a Riemannian manifold and let v ∈ L¹_loc(M) be a solution of ∆_α v ≥ 0 in the sense of distributions. Then, for every Ω ⋐ M there exists a sequence {v_k}
Proposition 3.5. Given a Riemannian manifold (M, g) and Ω ⋐ M, if v ∈ L¹(Ω) is α-subharmonic in the sense of distributions, then:
(i) v(x) ≤ m_r(v)(x) for almost every x ∈ Ω and almost every r > 0;
(ii) if 0 < s < r, then m_s(v)(x) ≤ m_r(v)(x) almost everywhere in Ω;
(iii) for almost every x ∈ Ω we have lim_{r→0} m_r(v)(x) = v(x);
(iv) for every r > 0, m_r(v) is α-subharmonic in the sense of potential theory on Ω.
Definition 3.8. A Riemannian manifold (M, g) is said to be α-non-parabolic if there exists a minimal positive Green function G for the operator ∆_α. Moreover, if this Green function satisfies
(3.15) lim_{y→∞} G(x, y) = 0,
the manifold M is said to be strongly α-non-parabolic.
Note that compact Riemannian manifolds are always α-parabolic, thus we focus on the complete, non-compact case. It is also known that if (M, g) is a geodesically complete, α-non-parabolic manifold, then a volume growth condition holds, involving the volume of the geodesic ball of radius t and center x with respect to the measure α² dµ_g. See for instance Theorem 9.7 of [10]. Furthermore, if we assume a non-negative m-Bakry-Émery Ricci tensor
Ric^m_f := Ric + Hess(f) − (1/m) df ⊗ df ≥ 0
with f = −2 log α, it is possible to prove some Li-Yau type estimates for the heat kernel; see Theorems 5.6 and 5.8 in [5]. Integrating these estimates in time, we obtain bounds for the Green function. In particular, if (3.16) holds true and Ric^m_f ≥ 0, the previous estimate implies that the manifold at hand is strongly α-non-parabolic. It would be interesting to investigate which geometric conditions on the manifold (M, g) imply the existence of a function α such that (3.16) and Ric^m_f ≥ 0 hold true.
A counterexample to the L¹-positivity preserving property
This section is devoted to the proof of Theorem B. Fix ε > 0 and consider the 2-dimensional model manifold M = R⁺ ×_σ S¹, that is, R⁺ × S¹ with the metric g = dt² + σ²(t) dθ². Here dθ² is the standard round metric on S¹ and σ = σ_ε is a C^∞((0, +∞)) function satisfying
Here t_ε = (2(1 + ε)ε)^{-1/2ε} and the function j is defined as
As a result, outside of a compact set we have the following asymptotic estimate for the Gaussian curvature:
= −(1 + ε)(2t^{2ε} + 4(1 + ε)t^{2+4ε} + (2 + ε) t^{-2}) g ∼ −4(1 + ε)² t^{2+4ε} g as t → +∞.
Next we define the function U(t, θ) = u(t) = (e^{t^{2+2ε}} − e^{t_ε^{2+2ε}})₊ and prove that it satisfies ∆U ≥ U
[1] Alías, L. J., Mastrolia, P., and Rigoli, M. Maximum principles and geometric applications. Springer Monographs in Mathematics. Springer, Cham, 2016.
[2] Bianchi, D., and Setti, A. G. Laplacian cut-offs, porous and fast diffusion on manifolds and other applications. Calc. Var. Partial Differential Equations 57, 1 (2018), Paper No. 4, 33.
[3] Bonfiglioli, A., and Lanconelli, E. Subharmonic functions in sub-Riemannian settings. Journal of the European Mathematical Society 15, 2 (2013), 387-441.
[4] Braverman, M., Milatovich, O., and Shubin, M. Essential selfadjointness of Schrödinger-type operators on manifolds. Uspekhi Mat. Nauk 57, 4(346) (2002), 3-58.
[5] Charalambous, N., and Lu, Z. Heat kernel estimates and the essential spectrum on weighted manifolds. J. Geom. Anal. 25, 1 (2015), 536-563.
[6] Cupini, G., and Lanconelli, E. On mean value formulas for solutions to second order linear pdes. Annali della Scuola Normale Superiore di Pisa. Classe di scienze 22, 2 (2021), 777-809.
[7] Fischer-Colbrie, D., and Schoen, R. The structure of complete stable minimal surfaces in 3-manifolds of non-negative scalar curvature. Communications on Pure and Applied Mathematics 33, 2 (1980), 199-211.
[8] Gilbarg, D., and Trudinger, N. S. Elliptic partial differential equations of second order. Classics in Mathematics. Springer-Verlag, Berlin, 2001. Reprint of the 1998 edition.
[9] Grigor'yan, A. Analytic and geometric background of recurrence and non-explosion of the Brownian motion on Riemannian manifolds. Bull. Amer. Math. Soc. (N.S.) 36, 2 (1999), 135-249.
[10] Grigor'yan, A. Heat kernels on weighted manifolds and applications. In The ubiquitous heat kernel, vol. 398 of Contemp. Math. Amer. Math. Soc., Providence, RI, 2006, pp. 93-191.
[11] Grigor'yan, A. Heat kernel and analysis on manifolds, vol. 47 of AMS/IP Studies in Advanced Mathematics. American Mathematical Society, Providence, RI; International Press, Boston, MA, 2009.
[12] Grillo, G., Ishige, K., and Muratori, M. Nonlinear characterizations of stochastic completeness. J. Math. Pures Appl. (9) 139 (2020), 63-82.
[13] Güneysu, B. Sequences of Laplacian cut-off functions. J. Geom. Anal. 26, 1 (2016), 171-184.
[14] Güneysu, B. The BMS conjecture. Ulmer Seminare 20 (2017), 97-101. ArXiv preprint: arXiv:1709.07463.
[15] Güneysu, B. Covariant Schrödinger semigroups on Riemannian manifolds, vol. 264 of Operator Theory: Advances and Applications. Birkhäuser/Springer, Cham, 2017.
[16] Güneysu, B., and Post, O. Path integrals and the essential self-adjointness of differential operators on noncompact manifolds. Mathematische Zeitschrift 275, 1-2 (2013), 331-348.
[17] Heikkilä, S., and Lakshmikantham, V. Monotone iterative techniques for discontinuous nonlinear differential equations. Routledge, 2017.
[18] Honda, S., Mari, L., Rimoldi, M., and Veronelli, G. Density and non-density of C^∞_c ↪ W^{k,p} on complete manifolds with curvature bounds. Nonlinear Anal. 211 (2021), Paper No. 112429, 26.
[19] Hsu, P. Heat semigroup on a complete Riemannian manifold. Ann. Probab. 17, 3 (1989), 1248-1254.
[20] Kato, T. Schrödinger operators with singular potentials. Israel J. Math. 13 (1972), 135-148 (1973).
[21] Marini, L., and Veronelli, G. Some functional properties on Cartan-Hadamard manifolds of very negative curvature, 2021. ArXiv preprint: arXiv:2105.09024.
[22] Ni, L. Mean value theorems on manifolds. Asian J. Math. 11, 2 (2007), 277-304.
[23] Pigola, S., Rigoli, M., and Setti, A. G. A remark on the maximum principle and stochastic completeness. Proc. Amer. Math. Soc. 131, 4 (2003), 1283-1288.
[24] Pigola, S., Rigoli, M., and Setti, A. G. Maximum principles on Riemannian manifolds and applications. Mem. Amer. Math. Soc. 174, 822 (2005), x+99.
[25] Pigola, S., and Veronelli, G. L^p Positivity Preserving and a conjecture by M. Braverman, O. Milatovic and M. Shubin, 2021. ArXiv preprint: arXiv:2105.14847.
[26] Protter, M. H., and Weinberger, H. F. Maximum principles in differential equations. Springer-Verlag, New York, 1984. Corrected reprint of the 1967 original.
[27] Ratto, A., Rigoli, M., and Véron, L. Scalar curvature and conformal deformation of hyperbolic space. J. Funct. Anal. 121, 1 (1994), 15-77.
[28] Sattinger, D. H. Monotone methods in nonlinear elliptic and parabolic boundary value problems. Indiana University Mathematics Journal 21, 11 (1972), 979-1000.
[29] Shubin, M. Essential self-adjointness for semi-bounded magnetic Schrödinger operators on non-compact manifolds. Journal of Functional Analysis 186, 1 (2001), 92-116.
[30] Sjögren, P. On the adjoint of an elliptic linear differential operator and its potential theory. Arkiv för Matematik 11, 1 (1973), 153-165.
[31] Yau, S. T. Some function-theoretic properties of complete Riemannian manifold and their applications to geometry. Indiana Univ. Math. J. 25, 7 (1976), 659-670.
(A. Bisterzo) Dipartimento di Matematica e Applicazioni, Università degli Studi di Milano-Bicocca, Via R. Cozzi 55, I-20125, Milano. Email address: [email protected]
(L. Marini) Dipartimento di Matematica e Applicazioni, Università degli Studi di Milano-Bicocca, Via R. Cozzi 55, I-20125, Milano. Email address: [email protected]
The dark Stodolsky effect: constraining effective dark matter operators with spin-dependent interactions
25 May 2023
Guillaume Rostagni [email protected]
Institute for Particle Physics Phenomenology
Department of Physics
Durham University
Durham, UK
Jack D Shergold [email protected]
Institute for Particle Physics Phenomenology
Department of Physics
Durham University
Durham, UK
Prepared for submission to JCAP
We present a comprehensive discussion of the Stodolsky effect for dark matter (DM), and discuss two techniques to measure the effect and constrain the DM parameter space. The Stodolsky effect is the spin-dependent shift in the energy of a Standard Model (SM) fermion sitting in a bath of neutrinos. This effect, which scales linearly in the effective coupling, manifests as a small torque on the SM fermion spin and has historically been proposed as a method of detecting the cosmic neutrino background. We generalise this effect to DM, and give expressions for the induced energy shifts for DM candidates from spin-0 to spin-3/2, considering all effective operators up to mass dimension-6. In all cases, the effect scales inversely with the DM mass, but requires an asymmetric background. We show that a torsion balance experiment is sensitive to energy shifts of ∆E ≳ 10^{-28} eV, whilst a more intricate setup using a SQUID magnetometer is sensitive to shifts of ∆E ≳ 10^{-32} eV. Finally, we compute the energy shifts for a model of scalar DM, and demonstrate that the Stodolsky effect can be used to constrain regions of parameter space that are not presently excluded.
Introduction
There is now overwhelming evidence for dark matter (DM) on both galactic [1-5] and cosmological [6, 7] distance scales, which is estimated to constitute ∼ 26% of the total energy density of the universe [8]. Despite this, the exact nature of DM remains a mystery, with all evidence for its existence coming from its gravitational interactions with visible matter. Nevertheless, the possibility remains that DM could interact with Standard Model (SM) fields non-gravitationally, which could allow us to better study its nature.
In order for new fields in a SM extension to be considered as DM candidates, they must be cold in the present epoch and capable of reproducing the observed relic density, electrically neutral, and unable to decay into SM particles over cosmological timescales. This leaves an overwhelmingly large number of DM candidate theories, which are tedious to constrain individually. Effective field theories (EFTs) are an incredibly powerful tool to constrain DM in a model-independent way [9-12]; by making use of the symmetries of the interaction Lagrangian, EFTs reduce the landscape of underlying theories to a finite number of permitted operators. These operators are typically classified by the spin of the DM particle, along with their mass dimension and coupling to the SM, which can then be constrained and mapped onto the candidate DM theory on a case-by-case basis.
To that end, several experiments have already been set up or proposed to directly search for DM using a variety of techniques, each of which has sensitivity to different ranges of parameter space: scattering on ultracold nuclei [13-15]; scattering in Xenon time projection chambers [16-18]; axion telescopes [19-23]; scattering in particle accelerators [24]; atom interferometers [25-27]. At the same time, DM has also been indirectly constrained using a variety of astrophysical [28-35] and cosmological [7, 8, 36, 37] probes.
In this paper, we propose two experiments to observe the spin-dependent energy shift induced by a DM background, which is more commonly known as the Stodolsky effect and has historically been discussed in the context of cosmic neutrino background (CνB) detection [38-41]. The Stodolsky effect has several features which make it a promising avenue for DM detection. First, unlike scattering, the magnitude of the energy shift depends on the DM-SM coupling linearly rather than quadratically, leading to an effect which is less suppressed by tiny coupling constants. Second, whilst many detection techniques depend heavily on the mass of the DM particle under consideration, the Stodolsky effect depends primarily on the velocity of the background particle. For neutrinos, this leads to an energy shift that is largely independent of the neutrino mass [41], which if also true for dark matter would allow us to probe a wide region of parameter space. On the contrary, the Stodolsky effect for neutrinos requires either a neutrino-antineutrino or left-right helicity asymmetry in the background, the former of which is expected to be absent in the standard CνB scenario. As we will see, analogous requirements persist for DM backgrounds, potentially restricting the range of models that can give rise to the Stodolsky effect. Even so, both chiral and asymmetric [42] models of DM exist, which alongside models with finite chemical potential generate an asymmetry during DM production. We additionally note that there are several mechanisms (e.g. finite chemical potential, DM reflection at the surface of the Earth [43], gravitational potentials [44]) through which either asymmetry may develop post-production.
The remainder of this paper will be structured as follows. In Section 2 we will review the Stodolsky effect for neutrinos and introduce the general formalism that will be used throughout. Following this, in Section 3 we will compute the magnitude of the Stodolsky effect for all effective DM operators ranging from spin-0 to spin-3/2, up to dimension-6. Finally, we will discuss the experimental signatures of the Stodolsky effect and the feasibility of this technique for DM detection in Section 4, before concluding in Section 5.
The Stodolsky effect
We begin by reviewing the Stodolsky effect for the CνB, which has been discussed in several previous works [38-41]. This will closely follow the formalism of [41], with the exception that we will more carefully treat the external states as partially localised wavepackets, rather than eigenstates of definite momentum. Additionally, we will assume that the neutrinos are monochromatic in the CνB reference frame, which is a good approximation when their momentum distribution is narrow.
Working in the mass basis, the effective low energy Hamiltonian density for neutrino-electron interactions after applying a Fierz transformation is
H_int(x) = (G_F/√2) Σ_{i,j} ν̄_i γ^μ (1 − γ⁵) ν_j ē γ_μ (V_ij − A_ij γ⁵) e, (2.1)

where G_F is the Fermi constant, V_ij and A_ij are the effective vector and axial couplings, respectively, and i, j ∈ {1, 2, 3} denote the neutrino mass eigenstate. To leading order in H_int, the energy shift of electron helicity state h_e is given by

ΔE_e(p⃗_e, h_e) = Σ_{ν,i,h_ν} Σ_{N_ν} ⟨e_{h_e}, ν_{i,h_ν}| ∫ d³x H_int(x) |e_{h_e}, ν_{i,h_ν}⟩, (2.2)

where h_e and h_ν denote the electron and neutrino helicities, respectively, whilst Σ_ν is the instruction to sum over neutrinos and antineutrinos. Similarly, Σ_{N_ν} is a sum over all neutrinos in the background with the degrees of freedom specified by the preceding sum. The external states are incoherent superpositions of momentum eigenstates, defined by [45][46][47]

|ψ(p_ψ, x_ψ, h_ψ)⟩ = ∫ d³q_ψ/(2π)³ (1/√(2E_{q_ψ})) ω_ψ(p_ψ, q_ψ) e^{−i q⃗_ψ·x⃗_ψ} |{q_ψ, h_ψ}⟩, (2.3)

with ψ ∈ {e, ν}, where E_{q_ψ} is the energy of the momentum eigenstate with momentum q_ψ and ω_ψ is a wavepacket function centred on the momentum p_ψ. The wavepacket states are normalised to unity, which also sets the normalisation of ω_ψ. We use relativistic normalisation for the momentum eigenstates

|{p_ψ, h_ψ}⟩ = √(2E_p) a†_ψ(p⃗_ψ, h_ψ)|0⟩, (2.4)

where a†_ψ(p⃗, h) is the particle creation operator for species ψ with momentum and helicity p⃗ and h, respectively, whilst its Hermitian conjugate is the corresponding annihilation operator. We denote the antiparticle creation and annihilation operators with b†_ψ and b_ψ, respectively. These satisfy the anticommutation relations

{a_i(p⃗, h), a†_j(q⃗, h′)} = {b_i(p⃗, h), b†_j(q⃗, h′)} = (2π)³ δ³(p⃗ − q⃗) δ_ij δ_{hh′}, (2.5)

… ω_{ν_i}(p_{ν_i}, q_{ν_i}) ω*_{ν_i}(p_{ν_i}, q′_{ν_i}) e^{−i(q⃗_{ν_i} − q⃗′_{ν_i})·x⃗_{ν_i}} ⟨{q′_e, h_e}, {q′_{ν_i}, h_{ν_i}}| H_int(x) |{q_{ν_i}, h_{ν_i}}, {q_e, h_e}⟩, (2.8)
where we have introduced the shorthand
dΠ = [d³q_e/(2π)³] [d³q′_e/(2π)³] [d³q_{ν_i}/(2π)³] [d³q′_{ν_i}/(2π)³] (1/√(2E_{q_e})) (1/√(2E_{q′_e})) (1/√(2E_{q_{ν_i}})) (1/√(2E_{q′_{ν_i}})). (2.9)
In line with [45,47], we now average ∆E e over the regions in which the wavepackets are localised, i.e. we take
ΔE_e(p⃗_e, h_e) → (1/V²) ∫ d³x_e d³x_{ν_i} ΔE_e(p⃗_e, h_e), (2.10)

… = Σ_{N_ν} (1/4V²) ∫ d³x [d³q_e d³p_{ν_i}/(2π)⁶] (1/(E_e E_{ν_i})) |ω_e(p_e, q_e)|² |ω_{ν_i}(p_{ν_i}, q_{ν_i})|² ⟨H_int⟩, (2.11)
where
⟨H_int⟩ = ⟨{q_e, h_e}, {q_{ν_i}, h_{ν_i}}| H_int(x) |{q_{ν_i}, h_{ν_i}}, {q_e, h_e}⟩. (2.12)
Recalling the normalisation ∫ d³q/(2π)³ |ω(p, q)|² = 1 allows us to identify |ω(p, q)|²/V as the phase space density for a single particle. The sum over all particles in the background can therefore be used to replace the wavepacket functions with momentum distribution functions
Σ_{N_ν} |ω_{ν_i}(p_{ν_i}, q_{ν_i})|²/V = n_ν(ν_{i,h_ν}) f_{ν_i}(q⃗_{ν_i}),   |ω_e(p_e, q_e)|²/V = (1/V)(2π)³ δ³(p⃗_e − q⃗_e), (2.13)
where n ν pν i,hν q is the number density of background neutrino eigenstate i with helicity h ν . Finally, after noting that nothing in xH int y depends on position and considering an electron at rest in the lab frame, we find
ΔE_e(0⃗, h_e) = (1/4m_e) Σ_{ν,i,h_ν} n_ν(ν_{i,h_ν}) ∫ d³p_{ν_i}/(2π)³ f_{ν_i}(p⃗_{ν_i}) (1/E_{ν_i}) ⟨H_int⟩|_{|p⃗_e|=0} = (1/4m_e) Σ_{ν,i,h_ν} n_ν(ν_{i,h_ν}) ⟨ (1/E_{ν_i}) ⟨H_int⟩ ⟩, (2.14)
where m_e is the electron mass and the outermost angled brackets denote an averaged quantity, which must be done in order to account for the relative motion of the Earth to the CνB reference frame. The averaging procedure differs slightly between the CνB and DM, as we do not know the velocity of the former. We therefore use the flux averages from [41] for the CνB, whilst those for DM are discussed at length in Appendix A. Expanding out the external states, applying the appropriate anticommutation relations and taking the traces of Dirac spinor chains yields [41]

⟨H_int⟩ = 2√2 G_F A_ii m_e h_e [ m_{ν_i} h_{ν_i} (S_e·S_{ν_i}) − (S_e·p_{ν_i}) ] + f(V_ii), (2.15)
where h "˘1 denotes the particle spin eigenvalue, m ν i denotes the neutrino mass and f pV ii q contains terms that do not depend on the electron spin, which will not contribute to the Stodolsky effect. Note that (2.15) takes the opposite sign for external antineutrino states, whilst for external Majorana neutrino states the expectation value is twice as large. The spin vector for massive fermions is given by
S µ "ˆ⃗ p¨⃗ s m , ⃗ s`p ⃗ p¨⃗ sq⃗ p mpE`mq˙,(2.16)
for a particle with spin vector s⃗ in its own reference frame. By inspection, we see that S satisfies (p·S) = 0. If we restrict our discussion to helicity eigenstates then s⃗ will be directed along p⃗, such that (2.16) reduces to
S µ "ˆ| ⃗ p| m , E m ⃗ p |⃗ p|˙,(2.17)
and we instead identify h = ±1 with the particle helicity¹. Naturally, we cannot use (2.17) for a particle at rest. The energy splitting between the two electron spin states is then found by taking the difference between the energy shifts for each spin state, which after performing the flux averaging on (2.15) gives
ΔE^D_e = (√2 G_F/3) |β⃗_C| Σ_i A_ii [ 2 Σ_{s_ν} (2 − |β⃗_{ν_i}|²)(n_ν(ν^D_{i,s_ν}) − n_ν(ν̄^D_{i,s_ν})) + (1/|β⃗_{ν_i}|)(3 − |β⃗_{ν_i}|²)(n_ν(ν^D_{i,L}) − n_ν(ν^D_{i,R}) + n_ν(ν̄^D_{i,R}) − n_ν(ν̄^D_{i,L})) ], (2.18)
for a Dirac neutrino background, where the subscripts L and R denote left and right helicity neutrinos, respectively, with R/L corresponding to h_{ν_i} = ±1 (∓1) for (anti)neutrinos. Additionally, β⃗_C is the relative velocity between the Earth and CνB reference frame, which may be time dependent, and β⃗_{ν_i} is the lab frame neutrino velocity. For completeness, we note that whilst the term scaling as |β⃗_{ν_i}|⁻¹ appears divergent, it in fact tends to zero as |β⃗_{ν_i}| → 0 as a consequence of a vanishing helicity asymmetry for slow neutrinos². Similarly, we find for a Majorana neutrino background
ΔE^M_e = (2√2 G_F/3) |β⃗_C| Σ_i (A_ii/|β⃗_{ν_i}|) (3 − |β⃗_{ν_i}|²) (n_ν(ν^M_{i,L}) − n_ν(ν^M_{i,R})). (2.19)
We immediately see that the Stodolsky effect for neutrinos requires either a non-zero neutrino-antineutrino or helicity asymmetry, but depends only on the neutrino velocity and scales linearly with G_F. These features allow an experiment utilising the Stodolsky effect to probe a vast region of DM parameter space, as the effect is less suppressed than scattering in weakly coupled regions, whilst depending only on the dark matter velocity, |β⃗_DM| ≈ 1.2×10⁻³ [48], independent of the DM mass.
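To get a feel for the scale of the effect, the Majorana splitting (2.19) can be evaluated numerically. The sketch below uses purely illustrative inputs (an O(1) axial coupling, slow relic neutrinos, and a helicity asymmetry of order the full relic density per state); none of these values are fixed by the discussion above.

```python
import math

# Hedged numerical sketch of the Majorana splitting (2.19); all inputs are
# illustrative assumptions, not values fixed by the text.
G_F = 1.166e-23          # Fermi constant in eV^-2
HBARC_EV_CM = 1.9733e-5  # hbar*c in eV*cm, used to convert cm^-3 -> eV^3

def majorana_splitting(beta_C, beta_nu, A_ii, delta_n_cm3):
    """Delta E_e^M = (2*sqrt(2)*G_F/3) * beta_C * (A_ii/beta_nu) * (3 - beta_nu**2) * (n_L - n_R)."""
    delta_n = delta_n_cm3 * HBARC_EV_CM**3   # helicity asymmetry in natural units (eV^3)
    return (2 * math.sqrt(2) * G_F / 3) * beta_C * (A_ii / beta_nu) * (3 - beta_nu**2) * delta_n

# Assumed inputs: Earth-CnuB relative speed ~1.2e-3 c, non-relativistic
# neutrinos with a similar speed, and a ~56 cm^-3 helicity asymmetry.
dE = majorana_splitting(beta_C=1.2e-3, beta_nu=1.2e-3, A_ii=0.5, delta_n_cm3=56.0)
print(f"Delta E ~ {dE:.1e} eV")   # an extremely small shift for these assumed inputs
```

The result is many orders of magnitude below current energy-resolution capabilities, which is why proposals in this direction rely on cumulative torque or precession effects rather than direct spectroscopy.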
We are now ready to move onto the Stodolsky effect for DM, which we will henceforth refer to as the dark Stodolsky effect (DSE) to distinguish it from the effect for neutrinos. By analogy with (2.2), the energy shift of an at rest SM fermion ψ in a DM background will be given by
ΔE_ψ(0⃗, h_ψ) = (1/4m_ψ) Σ_{d.o.f.} n_DM ⟨ (1/E_DM) ⟨H_int⟩ ⟩, (2.20)
where the sum runs over the DM degrees of freedom. For the remainder of this paper we will focus on the object appearing inside the angled brackets, which will typically be some kinematic structure depending on the effective DM operator under consideration. When evaluating these expectation values we will only keep the terms that depend on S_ψ, as no other terms will contribute to the DSE. The energy splitting of the two SM fermion spin states can then be found by starting with our master equation (2.20), and taking the difference in the energy shifts for the two spin states. This will typically enter as an overall factor of two.
Effective dark matter operators
We now turn our attention to the rich landscape of effective DM operators that can give rise to the DSE. For the remainder of this work, we will consider an effective DM Lagrangian of the form
L DM " L SM`Lkin`Lint , (3.1)
where L SM is the complete SM Lagrangian, L kin contains the kinetic and mass terms for the DM field, and L int contains effective SM-DM interaction operators. This will take the form
L int "´g ψχ Λ d´4 O µν... DM O SM µν... ,(3.2)
where g_ψχ denotes the coupling between the SM fermion and DM field, Λ is the new physics scale, and d is the combined mass dimension of the SM and DM effective operators, O_SM and O_DM, respectively. We will only work with Lagrangians that are Lorentz invariant, Hermitian, invariant under the SM gauge group and irreducible by the equations of motion, the procedure for which is discussed in Appendix B. By inspection of the expectation value, we immediately see that in order for an operator to contribute to the DSE it must contain at least two copies of the field operator corresponding to each external field. For bosonic DM, this gives a minimum combined mass dimension for O_SM and O_DM of d = 5, whilst for fermionic DM, the minimum mass dimension is d = 6. As such, we will include all effective DM operators up to d = 6. However, we will not consider DM operators with d > 6, which become increasingly suppressed by the new physics scale Λ with increasing d.
For an operator O_SM = ψ̄ Γ^{μν…} ψ, after expanding out the field operators and external states, and applying the appropriate (anti)commutation relations, we will find the general form for the expectation value containing a trace over the SM fermion Dirac structure
xH int y " g ψχ Λ d´4 P µν... χ Trru ψūψ Γ µν... s,(3.3)
where P_χ contains details of the DM kinematics, which may itself contain Dirac traces, Γ denotes some string of gamma matrices, and we have used the shorthand u_ψ = u_ψ(p_ψ, s_ψ).
The trace can be simplified in a basis independent way by making use of the identities

u(p, h) ū(p, h′) = ½ (p̸ + m)(1 + h γ⁵ S̸) δ_{hh′}, (3.4)
v(p, h) v̄(p, h′) = ½ (p̸ − m)(1 + h γ⁵ S̸) δ_{hh′}, (3.5)
with { A " γ µ A µ for some general four vector A. There are a total of five independent gamma matrix structures that can be included in the fermion trace 1, γ 5 , γ µ , γ µ γ 5 , σ µν , (3.6) where σ µν " i 2 rγ µ , γ ν s, γ 5 " i 4! ε αβµν γ α γ β γ µ γ ν and ε αβµν is the Levi-Civita symbol. Of these, only some will give rise to expectation values that depend on the SM fermion spin, and the remainder can be neglected. Explicitly, we find
Tr[ u_ψ ū_ψ Γ^{μν…} ] =
  2m_ψ,                            Γ = 1,
  0,                               Γ = γ⁵,
  2p^μ_ψ,                          Γ = γ^μ,
  2m_ψ h_ψ S^μ_ψ,                  Γ = γ^μγ⁵,
  2h_ψ ε_{αβμν} p^α_ψ S^β_ψ,       Γ = σ^{μν}, (3.7)
such that of the five independent gamma matrix structures appearing in (3.6), we only need to consider γ^μγ⁵ and σ^{μν}. There is an additional Lorentz invariant structure that we need to consider,

ε_{αβμν} P^{αβ}_χ ū_ψ σ^{μν} u_ψ, (3.8)

which will clearly depend on S_ψ. This can be rewritten in terms of γ⁵ as

−2i P^{μν}_χ ū_ψ σ_{μν} γ⁵ u_ψ, (3.9)
and so we will consider the structure ψ̄σ^{μν}γ⁵ψ as an additional 'independent' operator throughout. Finally, we note that there are several operator combinations, e.g.
O DM O SM " |ϕ| 2ψ γ 5 ψ,(3.10)
containing some complex scalar DM field φ, that couple left to right-chiral SM fermions and appear to be dimension-5. However, in order for the SM component to be gauge invariant under SU(2)_L, we require an additional insertion of the SM fermion mass. As a result, the operator carries an extra factor of m_ψ/Λ and will effectively scale as one of dimension-6. However, as we do not specify the new physics scale, we will treat such operators as dimension-5 throughout. By extension, we will define the dimension of any operator considered in the remainder of this work as the sum of the mass dimensions of its field content and the number of derivatives.
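The projector identity (3.4) and the trace results (3.7) can be verified numerically. The following sketch assumes the Dirac basis and the metric diag(+,−,−,−); the mass and momentum values are arbitrary, and only the first four entries of (3.7) are checked (the σ^{μν} entry depends on the sign convention for ε_{αβμν}).

```python
import numpy as np

# Numerical check of the trace identities in (3.7), using the spin projector
# u(p,h) ubar(p,h) = (1/2)(pslash + m)(1 + h*gamma5*Sslash) from (3.4).
I2 = np.eye(2); Z2 = np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]]); sz = np.array([[1, 0], [0, -1]])
g0 = np.block([[I2, Z2], [Z2, -I2]])                      # gamma^0, Dirac basis
gi = [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]  # gamma^1..3
g5 = np.block([[Z2, I2], [I2, Z2]])                        # gamma^5
gammas = [g0] + gi

def slash(a):
    # a_mu gamma^mu = a^0 g0 - avec . gvec for metric diag(+,-,-,-)
    return a[0] * g0 - sum(a[k + 1] * gi[k] for k in range(3))

m, pz, h = 0.5, 1.3, +1                   # arbitrary test values
E = np.hypot(m, pz)
p = np.array([E, 0.0, 0.0, pz])
S = np.array([pz / m, 0.0, 0.0, E / m])   # helicity spin vector (2.17)
proj = 0.5 * (slash(p) + m * np.eye(4)) @ (np.eye(4) + h * g5 @ slash(S))

assert abs(np.trace(proj) - 2 * m) < 1e-12        # Gamma = 1       -> 2 m_psi
assert abs(np.trace(proj @ g5)) < 1e-12           # Gamma = gamma5  -> 0
for mu in range(4):
    assert abs(np.trace(proj @ gammas[mu]) - 2 * p[mu]) < 1e-12              # gamma^mu        -> 2 p^mu
    assert abs(np.trace(proj @ gammas[mu] @ g5) - 2 * m * h * S[mu]) < 1e-12  # gamma^mu gamma5 -> 2 m h S^mu
print("trace identities (3.7) verified")
```

The same machinery also confirms p·S = 0 and S² = −1 for the spin vector (2.17).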
Spin-0
New scalar fields are popular candidates for DM [49][50][51], which typically take the form of axion or Higgs-like particles. Axions are well motivated DM candidates, naturally arising in any extension to the SM where an approximate global symmetry is spontaneously broken, where they play the role of the pseudo Nambu-Goldstone boson associated with the symmetry breaking. On the other hand, Higgs-like extensions to the SM require very few additional parameters. In fact, a real singlet scalar coupled to the SM Higgs is the minimal renormalisable extension to the SM capable of explaining DM [52]. In our EFT approach, we will make no reference to the underlying theory and simply consider some complex scalar field ϕ, for which the corresponding field decompositions are 12) and the analogous field decomposition for a real scalar DM candidate is found by setting b " a. Unlike neutrinos, the creation and annihilation operators for bosonic DM follow commutation relations " a i p⃗ pq, a : j p⃗ qq ı "
ϕpxq " ż d 3 p p2πq 3 1 a 2E p`a p⃗ pqe´i p¨x`b: p⃗ pqe ip¨x˘, (3.11) ϕ˚pxq " ż d 3 p p2πq 3 1 a 2E p`a : p⃗ pqe ip¨x`b p⃗ pqe´i p¨x˘,(3." b i p⃗ pq, b : j p⃗ qq ı " p2πq 3 δ p3q p⃗ p´⃗ qqδ ij , (3.13)
with all other commutators equal to zero. As it turns out, there is only one scalar operator up to dimension-6 that gives rise to the DSE, with interaction Lagrangian
L ϕ int "´i g ψϕ Λ 2 pϕ˚Ð Ñ B µ ϕqpψγ µ γ 5 ψq,(3.14)
where φ* ∂↔_μ φ = φ*(∂_μ φ) − (∂_μ φ*)φ. The corresponding Hamiltonian density is found via a Legendre transformation
H ϕ int " ÿ ϕ 9 ϕ BL ϕ int B 9 ϕ´L ϕ int " ig ψϕ Λ 2´ϕ˚p ⃗ ∇ϕq´p ⃗ ∇ϕ˚qϕ¯¨`ψ ⃗ γγ 5 ψ˘, (3.15)
where the sum runs over ϕ and ϕ˚. In a background of pure ϕ scalars, the relevant expectation value that contributes to the DSE can be computed using the appropriate field decompositions and commutators to find
xH ϕ int y "´2 g ψϕ Λ 2 ⃗ p ϕ¨`ūψ ⃗ γγ 5 u ψ˘"´4 g ψϕ Λ 2 m ψ h ψ p⃗ p ϕ¨⃗ S ψ q, (3.16)
where in going from the first to the second equality we have used the trace identity given in (3.7). If a background of pure φ* scalars is considered instead, the expectation value (3.16) takes the opposite sign. Plugging this into our master equation (2.20), we therefore find the energy shift of the SM fermion state with spin h_ψ
∆E ϕ ψ p ⃗ 0, h ψ q "´g ψϕ Λ 2 h ψ C p⃗ p ϕ¨⃗ S ψ q E ϕ G pn ϕ pϕq´n ϕ pϕ˚qq,(3.17)
with n ϕ pϕq and n ϕ pϕ˚q the number densities of background species ϕ and ϕ˚, respectively. Replacing the average with the expression given in (A.7) yields
∆E ϕ ψ p ⃗ 0, h ψ q "´2 g ψϕ Λ 2 h ψ β C pn ϕ pϕq´n ϕ pϕ˚qq, (3.18)
where β C is the magnitude of the relative velocity between the laboratory and DM reference frames. By taking the difference between the energy shift for each SM fermion spin state, we find an energy splitting
∆E ϕ ψ " ∆E ϕ ψ p ⃗ 0, 1q´∆E ϕ ψ p ⃗ 0,´1q "´4 g ψϕ Λ 2 β C pn ϕ pϕq´n ϕ pϕ˚qq. (3.19)
The energy splitting (3.19) is therefore independent of the DM kinematics, potentially allowing us to constrain scalar DM with masses ranging over many orders of magnitude. Notably, however, we still require a matter-antimatter asymmetry in order to generate a DSE for scalar DM. The culprit in this case is the derivative appearing between the scalar fields in (3.14), which generates an overall minus sign between the positive and negative frequency field modes. This differs from the neutrino case presented in Section 2, where the asymmetry results from the anticommutation relations for fermionic operators. Finally, for completeness we note that the corresponding energy splittings for a real scalar DM background are found by setting n_φ(φ) = n_φ(φ*), such that (3.19) vanishes identically.
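An order-of-magnitude evaluation of (3.19) is straightforward because the splitting depends only on the particle-antiparticle asymmetry and the relative frame velocity. The coupling, new physics scale, scalar mass, and the assumption of a maximal asymmetry below are all illustrative choices, not values taken from the text.

```python
# Order-of-magnitude sketch of the scalar splitting (3.19); coupling, scale,
# mass and asymmetry are illustrative assumptions.
HBARC_EV_CM = 1.9733e-5      # hbar*c in eV*cm, to convert cm^-3 -> eV^3

g = 1.0                      # dimensionless coupling g_psi_phi (assumed)
Lam = 1e12                   # new physics scale Lambda = 1 TeV, in eV (assumed)
beta_C = 1.2e-3              # lab/DM relative speed in units of c
rho = 0.3e9                  # local DM energy density ~0.3 GeV/cm^3, in eV/cm^3
m_phi = 1.0                  # scalar mass in eV (assumed)

n_asym_cm3 = rho / m_phi                  # maximal phi - phi* asymmetry, cm^-3
n_asym = n_asym_cm3 * HBARC_EV_CM**3      # convert to natural units, eV^3
dE = 4 * g / Lam**2 * beta_C * n_asym     # |Delta E| from (3.19), in eV
print(f"|Delta E| ~ {dE:.1e} eV")
```

Because n ∝ 1/m_φ at fixed energy density, lighter scalars give proportionally larger splittings for the same asymmetry fraction.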
Table 1. Lorentz invariant, Hermitian, gauge invariant and irreducible spin-1/2 DM operators contributing to the DSE up to dimension-6, along with their corresponding expectation values in a background of Dirac fermions and antifermions, denoted by |χ⟩ and |χ̄⟩, respectively. We leave the global factors of the coupling, new physics scale and SM fermion spin eigenvalue, h_ψ, implicit.

O^χ_1 = (χ̄γ^μχ)(ψ̄γ_μγ⁵ψ):   |χ⟩ → 4m_ψ (p_χ·S_ψ);   |χ̄⟩ → −4m_ψ (p_χ·S_ψ)
O^χ_2 = (χ̄γ^μγ⁵χ)(ψ̄γ_μγ⁵ψ):   |χ⟩ → 4m_ψ m_χ h_χ (S_χ·S_ψ);   |χ̄⟩ → 4m_ψ m_χ h_χ (S_χ·S_ψ)
O^χ_3 = (χ̄σ^{μν}χ)(ψ̄σ_{μν}ψ):   |χ⟩ → 8h_χ [ (p_χ·S_ψ)(S_χ·p_ψ) − (p_χ·p_ψ)(S_χ·S_ψ) ];   |χ̄⟩ → −8h_χ [ (p_χ·S_ψ)(S_χ·p_ψ) − (p_χ·p_ψ)(S_χ·S_ψ) ]
O^χ_4 = i(χ̄σ^{μν}χ)(ψ̄σ_{μν}γ⁵ψ):   |χ⟩ → −8h_χ ε_{αβμν} p^α_χ p^β_ψ S^μ_χ S^ν_ψ;   |χ̄⟩ → 8h_χ ε_{αβμν} p^α_χ p^β_ψ S^μ_χ S^ν_ψ
Spin-1/2
We now turn our attention to spin-1/2 dark matter, popular candidates for which include sterile neutrinos [53][54][55], which may also explain short baseline anomalies [56], and neutralinos [57,58], which naturally arise from supersymmetric models.
As we have already seen for neutrinos, the DSE for spin-1/2 backgrounds differs considerably from the effect for scalar DM, as it can additionally depend on the helicity composition of the background. Furthermore, as the product of four fermion field operators has mass dimension-6, there can be no derivative couplings for fermions at the order considered here. As such, we only need to consider Lorentz structures containing products of linearly independent gamma matrices (3.6) and Levi-Civita symbols, the latter of which can be treated as an additional gamma matrix structure, σ^{μν}γ⁵. In all cases, the absence of derivative couplings, along with the anticommutators for fermionic operators, will necessarily lead to energy splittings that require a background asymmetry.
Considering a spin-1/2 DM candidate χ, we tabulate all irreducible operators contributing to the DSE up to dimension-6, along with their corresponding expectation values, in Table 1. For each, we consider the case where the background consists of Dirac χ and anti-χ, which we denote by |χ⟩ and |χ̄⟩, respectively. The corresponding expectation values in Majorana χ backgrounds are found by summing those in |χ⟩ and |χ̄⟩ backgrounds. In addition to the operators shown in Table 1, we could also have considered the operators O^χ_5 = i(χ̄σ^{μν}γ⁵χ)(ψ̄σ_{μν}ψ) and O^χ_6 = (χ̄σ^{μν}γ⁵χ)(ψ̄σ_{μν}γ⁵ψ). However, we show in Appendix B that these are exactly equal to O^χ_4 and O^χ_3, respectively.
We have already seen the operators O^χ_1 and O^χ_2 in Section 2, which gave rise to the neutrino-antineutrino and helicity asymmetry terms for neutrinos, respectively. The third operator in Table 1, O^χ_3, with interaction Lagrangian

L^{χ_3}_int = −(g_ψχ/Λ²)(χ̄σ^{μν}χ)(ψ̄σ_{μν}ψ), (3.20)

leads to an energy shift

ΔE^{χ_3}_ψ(0⃗, h_ψ) = (2g_ψχ m_ψ/Λ²) h_ψ Σ_{h_χ} h_χ [ ⟨(p_χ·S_ψ)(S_χ·p_ψ)/E_χ⟩ − ⟨(p_χ·p_ψ)(S_χ·S_ψ)/E_χ⟩ ] (n_χ(χ_{h_χ}) − n_χ(χ̄_{h_χ})), (3.21)
which after replacing the averages with (A.10) and (A.11) yields
∆E χ 3 ψ p ⃗ 0, h ψ q " 7g ψχ 4Λ 2 h ψ pn χ pχ R q´n χ pχ L q´n χ pχ L q`n χ pχ R qq`O`β 2 C , 1´β r˘, (3.22)
where β r " β C {β c » 1 is the ratio of the relative frame velocity and galactic circular velocity, β c . The resulting energy splitting between the SM fermion spin states has magnitude
∆E χ 3 ψ " 7g ψχ 2Λ 2 h ψ pn χ pχ R q´n χ pχ L q´n χ pχ L q`n χ pχ R qq ,(3.23)
to leading order in small quantities, where the subscripts L and R denote the number densities of left and right helicity DM fermions, which satisfy n_χ(χ_R) + n_χ(χ_L) = n_χ(χ) and n_χ(χ̄_L) + n_χ(χ̄_R) = n_χ(χ̄), respectively. Remarkably, this energy shift is not suppressed by the velocity scale provided that β_r ≈ 1. Despite this, energy shifts of the form (3.23) are exceedingly difficult to generate; whilst the helicity asymmetry requirement of (3.23) naively appears comparable to that of (2.18) for neutrinos, this is not the case. The first difference is seen when considering Majorana fermions, for which n_χ(χ_R) = n_χ(χ̄_L) and n_χ(χ_L) = n_χ(χ̄_R). This leads to (3.23) vanishing identically, whilst (2.18) becomes (2.19), which importantly is non-zero. The second difference is more subtle. A chiral theory such as the weak interaction will naturally lead to scenarios in which

n_χ(χ_R) ≈ n_χ(χ̄_L) ≠ n_χ(χ_L) ≈ n_χ(χ̄_R), (3.24)

in particular when the DM fermion is produced relativistically, such that its helicity and chirality coincide³. This helicity profile is sufficient to generate a DSE through the operator O^χ_2, but not O^χ_3, which requires a further fermion-antifermion asymmetry (e.g. through a chemical potential) to give a non-zero energy splitting. This significantly restricts the number of models that can generate a DSE through operators of the form O^χ_3. Finally, we note that as discussed in Appendix B of [41], background helicity asymmetries vanish for very cold DM as a consequence of the relative frame velocity. This is true irrespective of the DM spin, and so should be taken into account whenever an operator requires a non-zero helicity asymmetry to contribute to the DSE. The final operator appearing in Table 1, O^χ_4, generates an energy shift scaling with
ΔE^{χ_4}_ψ(0⃗, s_ψ) ∝ ε_{αβμν} p^α_χ p^β_ψ S^μ_χ S^ν_ψ, (3.25)
which considering only helicity eigenstates and making use of the identities given in Appendix C vanishes identically. We note, however, that this operator may give rise to a non-zero DSE for an alternative experimental setup where the SM fermion is not at rest in the lab frame.
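The asymmetry bookkeeping for O^χ_3 versus O^χ_2 can be made concrete with a few lines of code. The combination entering (3.23) and a schematic stand-in for the O^χ_2 helicity-asymmetry combination (our construction, not a formula from the text) both vanish or survive exactly as argued above.

```python
# Bookkeeping check for the background-asymmetry requirements discussed above.
# o3_combination is the density combination entering the splitting (3.23);
# o2_combination is a schematic stand-in (assumption) for the helicity
# asymmetry probed by O^chi_2, where antiparticle helicities carry h = -/+ 1.
def o3_combination(n_R, n_L, nbar_L, nbar_R):
    return n_R - n_L - nbar_L + nbar_R

def o2_combination(n_R, n_L, nbar_L, nbar_R):
    return (n_R - n_L) + (nbar_L - nbar_R)

# Majorana condition: n(chi_R) = n(chibar_L) and n(chi_L) = n(chibar_R)
assert o3_combination(3.0, 1.0, 3.0, 1.0) == 0.0   # (3.23) vanishes identically
assert o2_combination(3.0, 1.0, 3.0, 1.0) != 0.0   # the O2-type term survives, cf. (2.19)

# Chiral production profile (3.24): n(chi_R) ~ n(chibar_L) != n(chi_L) ~ n(chibar_R)
assert o3_combination(5.0, 1.0, 5.0, 1.0) == 0.0   # O3 still needs a chemical potential ...
assert o2_combination(5.0, 1.0, 5.0, 1.0) != 0.0   # ... whilst O2 already gives a DSE

# Adding a fermion-antifermion asymmetry on top of the chiral profile revives O3
assert o3_combination(5.0, 1.0, 2.0, 0.4) != 0.0
print("asymmetry requirements consistent with the text")
```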
Spin-1
Vector bosons remain popular in many models of DM, with candidates including additional U(1) gauge bosons [60][61][62][63][64], superpartners to neutrinos [57,58], and Kaluza-Klein states in theories with extra dimensions [58,65,66]. It is also entirely possible to generate dark hadronic vector states in non-Abelian extensions to the SM [67].
The DSE for vector bosons is similar to that for scalar bosons, and may depend on either the total background DM density or require an asymmetry in the presence of derivative couplings. They differ, however, in the fact that vector bosons carry an additional Lorentz index, which expands the number of contributing operators. Here we consider a massive⁴ vector field X^μ, with field decomposition
X µ pxq " ż d 3 p p2πq 3 1 a 2E p ÿ l`a p⃗ p, lqϵ µ pp, lqe´i p¨x`b: p⃗ p, lqϵμpp, lqe ip¨x˘, (3.26) Xμpxq " ż d 3 p p2πq 3 1 a 2E p ÿ l`a : p⃗ p, lqϵμpp, lqe ip¨x`b p⃗ p, lqϵ µ pp, lqe´i p¨x˘, (3.27)
where the creation and annihilation operators satisfy the commutation relations (3.13), whilst ε^μ(p, l) = ε^μ_l is the polarisation vector with polarisation l ∈ {−1, 0, 1}. Considering the helicity eigenstates for a state with momentum along the +z direction, these take the form
ε^μ(p, 1) = ε^μ_+ = (1/√2)(0, 1, i, 0)ᵀ,   ε^μ(p, −1) = ε^μ_− = ε*^μ_+,   ε^μ(p, 0) = ε^μ_L = (1/m)(|p⃗|, 0, 0, E)ᵀ, (3.28)
which we will refer to as the right, left and longitudinal polarisation states, respectively, together satisfying ε(p, l)·ε(p, l′)* = −δ_{ll′}. The polarisation vectors for momenta along other directions are found by applying the appropriate rotation matrix. These will need to be considered in order to perform the averaging appropriately.

We tabulate all irreducible operators for vector DM contributing to the DSE up to dimension-6 in Table 2. As before, we consider each of the cases where the background consists of a complex vector field, X^μ, and its conjugate, X*^μ, which we denote by |X⟩ and |X*⟩, respectively. The corresponding expectation values in real X backgrounds are found by summing those in |X⟩ and |X*⟩ backgrounds. The first operator in Table 2, O^X_1, is analogous to the one appearing in L^φ_int. This has already been discussed in detail in Section 3.1; the only difference here is the overall sign of the energy shift, generated by the contraction of two polarisation vectors. As a result, the energy splitting between the two SM fermion spin states will be the same for O^X_1 as for its scalar counterpart, and sensitivity to the individual energy shifts is required to distinguish between the two operators.
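The defining properties of the polarisation vectors (3.28), orthonormality ε(p,l)·ε(p,l′)* = −δ_{ll′} and transversality p·ε = 0, are easy to confirm numerically. The metric convention diag(+,−,−,−) and the test values of m and p_z are assumptions of this sketch.

```python
import numpy as np

# Check of the massive polarisation vectors (3.28) for momentum along +z.
def mdot(a, b):
    # Minkowski product a . b* with metric diag(+,-,-,-)
    return a[0] * np.conj(b[0]) - np.dot(a[1:], np.conj(b[1:]))

m, pz = 1.0, 3.0                      # arbitrary test values
E = np.hypot(m, pz)
p = np.array([E, 0, 0, pz], dtype=complex)

eps_plus = np.array([0, 1, 1j, 0]) / np.sqrt(2)        # right-handed
eps_minus = np.conj(eps_plus)                          # left-handed = (eps_+)*
eps_long = np.array([pz, 0, 0, E], dtype=complex) / m  # longitudinal

pols = [eps_plus, eps_minus, eps_long]
for i, e1 in enumerate(pols):
    assert abs(mdot(p, e1)) < 1e-12                    # transversality: p . eps = 0
    for j, e2 in enumerate(pols):
        expect = -1.0 if i == j else 0.0
        assert abs(mdot(e1, e2) - expect) < 1e-12      # eps(l) . eps(l')* = -delta_ll'
print("polarisation identities verified")
```

Note that ε_L is real, which is what forces several of the longitudinal-mode contributions discussed below to vanish.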
The second operator in Table 2, O X 2 , generates an energy shift
∆E X 2 ψ p ⃗ 0, h ψ q " g ψX 2m ψ Λ h ψ ÿ l X B 1 E X Im " ε αβµν p α ψ S β ψ ϵ˚µ l X ϵ ν l X ı F`n X pX l X q´n X pXl X q˘, (3.29)
for charged vector bosons, and zero otherwise. We immediately see that the longitudinal modes of X µ with real polarisation vectors do not contribute to the DSE. This leaves the remaining two polarisation states, which after substituting in the average (A.13) and taking the difference between the two energy shifts gives an SM fermion energy splitting
Table 2. Operators, backgrounds and expectation values ⟨H_int⟩ for vector DM:

O^X_1 = i(X*_α ∂↔_μ X^α)(ψ̄γ^μγ⁵ψ):   |X⟩ → 4m_ψ (p⃗_X·S⃗_ψ);   |X*⟩ → −4m_ψ (p⃗_X·S⃗_ψ)
O^X_2 = i X*_μ X_ν (ψ̄σ^{μν}ψ):   |X⟩ → 2 Im[ ε_{αβμν} ε*^α_{l_X} ε^β_{l_X} p^μ_ψ S^ν_ψ ];   |X*⟩ → −2 Im[ ε_{αβμν} ε*^α_{l_X} ε^β_{l_X} p^μ_ψ S^ν_ψ ]
O^X_3 = X*_μ X_ν (ψ̄σ^{μν}γ⁵ψ):   |X⟩ → 4 Im[ (ε*_{l_X}·S_ψ)(ε_{l_X}·p_ψ) ];   |X*⟩ → −4 Im[ (ε*_{l_X}·S_ψ)(ε_{l_X}·p_ψ) ]
O^X_4 = i[ X*^μ(∂_μ X^ν) − (∂_μ X*^ν)X^μ ](ψ̄γ_νγ⁵ψ):   |X⟩ → −4m_ψ Re[ (ε*_{l_X}·S_ψ)(ε⃗_{l_X}·p⃗_X) ];   |X*⟩ → 4m_ψ Re[ (ε*_{l_X}·S_ψ)(ε⃗_{l_X}·p⃗_X) ]
O^X_5 = [ X*^μ(∂_μ X^ν) + (∂_μ X*^ν)X^μ ](ψ̄γ_νγ⁵ψ):   |X⟩ → 4m_ψ Im[ (ε*_{l_X}·S_ψ)(ε⃗_{l_X}·p⃗_X) ];   |X*⟩ → 4m_ψ Im[ (ε*_{l_X}·S_ψ)(ε⃗_{l_X}·p⃗_X) ]
O^X_6 = i(X*^μ X^ν + X^μ X*^ν)(ψ̄ ∂↔_μ γ_ν γ⁵ ψ):   |X⟩ → −8m_ψ Re[ (ε*_{l_X}·S_ψ)(ε⃗_{l_X}·p⃗_ψ) ];   |X*⟩ → −8m_ψ Re[ (ε*_{l_X}·S_ψ)(ε⃗_{l_X}·p⃗_ψ) ]
O^X_7 = iε^{αβμν}[ X*_α(∂_β X_μ) − (∂_β X*_μ)X_α ](ψ̄γ_νγ⁵ψ):   |X⟩ → 0;   |X*⟩ → 0
O^X_8 = ε^{αβμν}[ X*_α(∂_β X_μ) + (∂_β X*_μ)X_α ](ψ̄γ_νγ⁵ψ):   |X⟩ → 4m_ψ Im[ ε_{αβiν} ε*^α_{l_X} ε^β_{l_X} p^i_X S^ν_ψ ];   |X*⟩ → 4m_ψ Im[ ε_{αβiν} ε*^α_{l_X} ε^β_{l_X} p^i_X S^ν_ψ ]

ΔE^{X_2}_ψ = (7g_ψX/8m_XΛ) (n_X(X_−) − n_X(X_+) − n_X(X*_−) + n_X(X*_+)), (3.30)

to leading order. Similar to (3.23), the energy shift from O^X_2 is not suppressed by the velocity scale, but requires both a polarisation and a matter-antimatter asymmetry in order to give a non-zero contribution to the DSE. The former requirement is more difficult to satisfy for vector bosons than for fermions, which permit chiral Lagrangians that preferentially produce fermions of a single helicity at high energies. For vector DM, a polarisation asymmetry must therefore be generated through another mechanism, such as scattering on a polarised fermionic background. The third operator in Table 2 gives rise to an energy shift
ΔE^{X_3}_ψ(0⃗, s_ψ) ∝ ⟨ (1/E_X) Im[ (p_ψ·ε*_{l_X})(S_ψ·ε_{l_X}) ] ⟩, (3.31)
which vanishes for an SM fermion at rest in the lab frame. We note, however, that there may be a contribution to the energy shift from the right and left polarisation states for other experimental setups. On the other hand, the longitudinal state cannot contribute for any setup as its polarisation vector is real. The fourth operator in Table 2 is unique, and leads to an energy splitting
∆E X 4 ψ " 2g ψX Λ 2 ÿ l X B 1 E X Re " pϵl X¨S ψ qp⃗ ϵ l X¨⃗ p X q ‰ F`n X pX l X q´n X pXl X q"´4 g ψX Λ 2 β C pn X pX L q´n X pXLqq ,(3.32)
which depends solely on the density of longitudinally polarised background states. This energy splitting is most closely related to the one generated by O^X_1, which instead depends on the total asymmetry between X^μ and its conjugate. As such, it must always be the case that ΔE^{X_1}_ψ ≥ ΔE^{X_4}_ψ, which may serve to distinguish the two. Of the remaining operators, all have vanishing contributions to the Stodolsky effect for our experimental setup: the contribution from O^X_5 is proportional to the imaginary part of the kinematic structure found in (3.32), which is real valued after averaging over background momenta; the contribution from O^X_6 is proportional to p⃗_ψ, which is zero for our setup; the contribution due to O^X_7 vanishes at the kinematic level, as
⟨H^{X_7}_int⟩ ∝ ε_{αβμν} Re[ ε*^α_{l_X} ε^β_{l_X} ] = 0, (3.33)
whilst the energy splitting due to O^X_8 scales with

⟨H^{X_8}_int⟩ ∝ ε_{αβiν} ε*^α_{l_X} ε^β_{l_X} p^i_X S^ν_ψ, (3.34)
which is zero for the longitudinal states since ε*_L = ε_L, and for the right and left helicity states as only their spatial components are non-zero. Notice that the operator giving rise to the Zeeman effect, O_F = F_μν ψ̄σ^{μν}ψ, where F_μν is the field strength tensor, does not appear in Table 2. This is because it only contains a single copy of the vector field, and as a result has a zero expectation value for incoherent background DM states (2.3). Instead, the Zeeman effect occurs in a coherent background, defined as the minimum uncertainty state and by extension the state which is the closest to a classical background. Importantly, bosonic field operators have non-zero expectation values in coherent backgrounds. Such coherent states can be formed by any boson, leading to SM fermion spin-dependent energy shifts that are generated by lower dimension operators than those for incoherent states. It is possible, therefore, that the energy shifts arising from coherent states are significantly larger than those considered here. It is also worth noting that none of the operators in Table 2 describe U(1) gauge bosons, but that the wider class of operators generating energy splittings for coherent backgrounds can. The operator O_F is such an example. We will explore these states in a future work.
Spin-3/2
With the exception of the gravitino [58], there are no known spin-3/2 fermions in renormalisable theories [68]. Despite this, spin-3/2 DM has been shown capable of reproducing the observed relic density [69], and can be produced as bound states in non-Abelian extensions to the SM. In particular, the spin-3/2 baryons are the lightest states of a dark SU(3) with a single quark flavour [67,70].
Whilst sharing many properties with spin-1/2 fermions, the additional spin degrees of freedom carried by Rarita-Schwinger (RS) fermions give rise to operators with richer Lorentz structures. This in turn leads to a larger number of operators that generate a DSE, the energy shift from which will depend on up to four helicity states. As we will see, the contribution to the energy shift from each helicity state will differ in both sign and magnitude for RS fermions, which may serve as an additional tool to help distinguish them from spin-1/2 fermions. In this section, we consider a spin-3/2 fermion Ψ with field decomposition
Ψ µ pxq " ż d 3 p p2πq 3 1 a 2E p ÿ λ`a pp, λqξμ pp, λqe´i p¨x`b: pp, λqξμ pp, λqe ip¨x˘, (3.35) Ψ µ pxq " ż d 3 p p2πq 3 1 a 2E p ÿ λ`a : pp, λqξμ pp, λqe ip¨x`b pp, λqξμ pp, λqe´i p¨x˘,(3.36)
where λ P t 3 2 , 1 2 ,´1 2 ,´3 2 u is the helicity of the RS fermion, and again we set a " b for Majorana fermions, whilst with the sum running over the values of l P t´1, 0, 1u and h "˘1 for which l`h 2 " λ. Finally, the Clebsch-Gordan coefficients for an RS field can be found in [60], and are given by
ξμ pp, λq " ÿ tl,su C λ l,h ϵ µ pp, lqupp, hq,(3.λ "`3 2 : C 3 2 1,1 " 1, λ "`1 2 : C 1 2 1,´1 " c 1 3 , C 1 2 0,1 " c 2 3 , (3.39) λ "´3 2 : C´3 2 1,´1 " 1, λ "´1 2 : C´1 2 1,1 " c 1 3 , C´1 2 0,´1 " c 2 3 ,(3.40)
with all other coefficients equal to zero. Once more, we tabulate all irreducible operators for RS fermion DM contributing to the DSE up to dimension-6 in Table 3. As before, we consider backgrounds of RS fermions, Ψ, and anti-RS fermions, Ψ̄, which we denote by |Ψ⟩ and |Ψ̄⟩, respectively. The corresponding expectation values in backgrounds of RS fermions that satisfy the Majorana condition are found by summing those in |Ψ⟩ and |Ψ̄⟩ backgrounds. We additionally introduce the shorthand
Σ_C f(l_Ψ, h_Ψ) = Σ_{l_Ψ,h_Ψ} (C^{λ_Ψ}_{l_Ψ,h_Ψ})² f(l_Ψ, h_Ψ), (3.41)
with f some arbitrary function depending on the helicity structure of the background, and note that the argument used to exclude the operators O^χ_5 and O^χ_6 in Section 3.2 applies here to the equivalent operators with spin-3/2 fields.
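The Clebsch-Gordan coefficients entering Σ_C can be checked for consistency: each helicity λ should receive contributions only from (l, h) with l + h/2 = λ, and the squared coefficients should sum to one. A minimal sketch:

```python
from math import isclose, sqrt

# Completeness check of the 1 (x) 1/2 -> 3/2 Clebsch-Gordan coefficients used
# to build the Rarita-Schwinger wavefunctions (3.37).
C = {
    ( 3/2): {( 1, +1): 1.0},
    ( 1/2): {( 1, -1): sqrt(1/3), (0, +1): sqrt(2/3)},
    (-1/2): {(-1, +1): sqrt(1/3), (0, -1): sqrt(2/3)},
    (-3/2): {(-1, -1): 1.0},
}
for lam, coeffs in C.items():
    for (l, h) in coeffs:
        assert l + h / 2 == lam                      # only l + h/2 = lambda contributes
    assert isclose(sum(c**2 for c in coeffs.values()), 1.0)  # normalisation
print("Clebsch-Gordan coefficients consistent")
```

The squared weights 1/3 and 2/3 are precisely the factors that suppress the ±1/2 helicity contributions in the splittings below.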
Table 3. Lorentz invariant, Hermitian, gauge invariant and irreducible spin-3/2 DM operators contributing to the DSE up to dimension-6, along with their corresponding expectation values in a background of RS and anti-RS fermions, denoted by |Ψ⟩ and |Ψ̄⟩, respectively. We leave the global factors of the coupling, new physics scale and SM fermion spin eigenvalue, h_ψ, implicit.

O^Ψ_1 = (Ψ̄_α γ^μ Ψ^α)(ψ̄γ_μγ⁵ψ):   |Ψ⟩ → −4m_ψ (p_Ψ·S_ψ);   |Ψ̄⟩ → 4m_ψ (p_Ψ·S_ψ)
O^Ψ_2 = (Ψ̄_α γ^μγ⁵ Ψ^α)(ψ̄γ_μγ⁵ψ):   |Ψ⟩ → −4m_ψ m_Ψ Σ_C h_Ψ (S_Ψ·S_ψ);   |Ψ̄⟩ → −4m_ψ m_Ψ Σ_C h_Ψ (S_Ψ·S_ψ)
O^Ψ_3 = i(Ψ̄_μ Ψ_ν)(ψ̄σ^{μν}ψ):   |Ψ⟩ → 4m_Ψ Σ_C Im[ ε_{αβμν} p^α_ψ S^β_ψ ε*^μ_{l_Ψ} ε^ν_{l_Ψ} ];   |Ψ̄⟩ → −4m_Ψ Σ_C Im[ ε_{αβμν} p^α_ψ S^β_ψ ε*^μ_{l_Ψ} ε^ν_{l_Ψ} ]
O^Ψ_4 = (Ψ̄_α σ^{μν} Ψ^α)(ψ̄σ_{μν}ψ):   |Ψ⟩ → −8 Σ_C h_Ψ [ (p_Ψ·S_ψ)(S_Ψ·p_ψ) − (p_Ψ·p_ψ)(S_Ψ·S_ψ) ];   |Ψ̄⟩ → 8 Σ_C h_Ψ [ (p_Ψ·S_ψ)(S_Ψ·p_ψ) − (p_Ψ·p_ψ)(S_Ψ·S_ψ) ]
O^Ψ_5 = (Ψ̄_μ Ψ_ν)(ψ̄σ^{μν}γ⁵ψ):   |Ψ⟩ → 8m_Ψ Σ_C Im[ (S_ψ·ε*_{l_Ψ})(p_ψ·ε_{l_Ψ}) ];   |Ψ̄⟩ → −8m_Ψ Σ_C Im[ (S_ψ·ε*_{l_Ψ})(p_ψ·ε_{l_Ψ}) ]
O^Ψ_6 = i(Ψ̄_α σ^{μν} Ψ^α)(ψ̄σ_{μν}γ⁵ψ):   |Ψ⟩ → 8 Σ_C h_Ψ ε_{αβμν} p^α_Ψ p^β_ψ S^μ_Ψ S^ν_ψ;   |Ψ̄⟩ → −8 Σ_C h_Ψ ε_{αβμν} p^α_Ψ p^β_ψ S^μ_Ψ S^ν_ψ
The first operator, O^Ψ_1, gives the same energy shift as the similar spin-1/2 operator O^χ_1 up to an overall sign, which results from the contraction of two polarisation vectors. It therefore only requires a matter-antimatter asymmetry in order to generate a DSE, but cannot tell us anything about the helicity structure of the background. This naturally makes it difficult to distinguish from O^χ_1.
The remaining operators are far more interesting. Consider O^Ψ_2, which gives rise to an energy shift

∆E^{Ψ_2}_ψ(p⃗ = 0, h_ψ) = −(g_{ψΨ}/Λ²) m_Ψ h_ψ Σ_{λ_Ψ} Σ_C h_Ψ ⟨(1/E_Ψ)(S_Ψ·S_ψ)⟩ [ n_Ψ(Ψ_{λ_Ψ}) + n_Ψ̄(Ψ̄_{λ_Ψ}) ]
  = (7 g_{ψΨ}/8Λ²) h_ψ [ n_Ψ(Ψ_{++}) + n_Ψ̄(Ψ̄_{++}) − n_Ψ(Ψ_{−−}) − n_Ψ̄(Ψ̄_{−−}) + (1/3)( n_Ψ(Ψ_{+−}) + n_Ψ̄(Ψ̄_{+−}) − n_Ψ(Ψ_{−+}) − n_Ψ̄(Ψ̄_{−+}) ) ],  (3.42)

where the subscripts ±± and ±∓ refer to the ±3/2 and ±1/2 helicity states, respectively. Taking the difference between the energy shifts for each spin state gives the energy splitting due to O^Ψ_2,

∆E^{Ψ_2}_ψ = (7 g_{ψΨ}/4Λ²) [ n_Ψ(Ψ_{++}) + n_Ψ̄(Ψ̄_{++}) − n_Ψ(Ψ_{−−}) − n_Ψ̄(Ψ̄_{−−}) + (1/3)( n_Ψ(Ψ_{+−}) + n_Ψ̄(Ψ̄_{+−}) − n_Ψ(Ψ_{−+}) − n_Ψ̄(Ψ̄_{−+}) ) ],  (3.43)
which requires a non-zero helicity asymmetry in order to generate a DSE, akin to O^χ_2. This is easily achieved in a chiral theory similar to the weak interaction. Owing to the Clebsch-Gordan coefficients, however, the contribution to the DSE from the ±1/2 helicity states is suppressed by a factor of three, which for the same total DM density leads to a reduced energy shift. As such, if the mass, and by extension the number density, of the DM is known, the reduced energy splitting could serve as a tool to distinguish between spin-1/2 and spin-3/2 DM backgrounds. Although difficult to observe, we also note that the energy shifts of the individual spin states differ by an overall sign between O^Ψ_2 and O^χ_2. The operator O^Ψ_3 yields a similarly suppressed energy splitting
∆E Ψ 3 ψ " 7g ψΨ 4Λ 2 "`n Ψ pΨ``q´n Ψ pΨ``q´n Ψ pΨ´´q`n Ψ pΨ´´q1 3`n Ψ pΨ`´q´n Ψ pΨ`´q´n Ψ pΨ´`q`n Ψ pΨ´`q˘ı, ,(3.44)
which is only non-zero in a background with both a fermion-antifermion and a helicity asymmetry. In this case, we note the analogous lower spin operator is in fact bosonic, O^X_3, which should result in a slightly larger splitting for the same background density. However, the biggest difference is in the generation of (3.30) and (3.44); as previously discussed, a helicity asymmetry cannot arise at the Lagrangian level for bosons, but is possible in chiral theories of fermions, which, if relativistic at production, prefer a given helicity. Consequently, it is much easier to generate the DSE from O^Ψ_3. The remaining three operators, O^Ψ_4, O^Ψ_5 and O^Ψ_6, are analogous to O^χ_3, O^X_3 and O^χ_4, respectively, such that only the first contributes a DSE for the experimental setup considered here. In the same way as (3.43), the energy shifts due to O^Ψ_4 differ from their analogues by an overall sign and a small suppression factor from the Clebsch-Gordan coefficients. Finally, we have omitted the pseudoscalar analogues of O^Ψ_3 and O^Ψ_5, proportional to Ψ̄_µ γ⁵ Ψ_ν, from Table 3, as the expectation values of their Hamiltonians vanish trivially using (3.7). The discussion here is easily extended to higher spin states, which we naively expect will differ only in the overall sign and magnitude of their DSEs. In particular, the magnitude of the DSE for most operators should decrease with increasing spin, as progressively smaller Clebsch-Gordan coefficients will suppress the contribution from the intermediate helicity states.
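The factor-of-three suppression of the ±1/2 helicity states quoted above can be cross-checked directly from the 1 ⊗ 1/2 Clebsch-Gordan decomposition used to build the RS polarisation tensors in (3.38)-(3.40). A minimal sketch of this check (ours, not the authors' code), weighting each squared coefficient by the spin-1/2 helicity 2h:

```python
# Cross-check of the Clebsch-Gordan suppression: Sum_{l,h} (C^lam_{l,h})^2 * (2h)
# equals +-1 for lam = +-3/2 but only +-1/3 for the intermediate lam = +-1/2 states.
from sympy import S, Rational, simplify
from sympy.physics.quantum.cg import CG

def weighted_helicity(lam):
    """Helicity-weighted sum over the 1 (x) 1/2 -> 3/2 decomposition."""
    total = S(0)
    for l in (-1, 0, 1):
        for h in (S.Half, -S.Half):
            c = CG(1, l, S.Half, h, Rational(3, 2), lam).doit()  # vanishes unless l + h = lam
            total += c**2 * 2 * h
    return simplify(total)

vals = {lam: weighted_helicity(lam)
        for lam in (Rational(3, 2), S.Half, -S.Half, Rational(-3, 2))}
# vals: {3/2: 1, 1/2: 1/3, -1/2: -1/3, -3/2: -1}
```

The intermediate states carry weight 1/3 rather than 1, which is exactly the suppression appearing inside the brackets of (3.42)-(3.44).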
Experimental feasibility
Directly observing the energy splittings induced by the DSE is a remarkable challenge due to their small magnitude. Take for example the splitting due to the CνB, whose magnitude is expected to be of order
|∆E ψ | " G F β C n ν,0 » 5ˆ10´3 9 eV,(4.1)
assuming maximal neutrino-antineutrino asymmetry, where we have used β_C ≈ 10⁻³ and n_{ν,0} = 56 cm⁻³ is the predicted relic neutrino density per degree of freedom. This is approximately thirty orders of magnitude smaller than the energy splitting due to the Zeeman effect in a 1 G magnetic field. Clearly then, this effect is nigh impossible to observe on the scale of a single target. To that end, we identify two methods utilising macroscopic targets through which the DSE may be observed. Both of these rely on the same property; as a result of the energy splitting due to the DM background, the SM fermion Hamiltonian, H_ψ, and spin operators orthogonal to the DM wind, S_⊥, no longer commute, leading to a spin precession
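As a quick numeric check of (4.1) (our own back-of-envelope sketch, using only the numbers quoted in the text and ħc for unit conversion):

```python
# Order-of-magnitude check of the CnuB splitting (4.1).
hbar_c_GeV_cm = 1.9733e-14          # GeV*cm, for converting cm^-3 to natural units
G_F = 1.1664e-5                     # Fermi constant, GeV^-2
beta_C = 1e-3                       # relative frame velocity
n_nu_cm3 = 56.0                     # relic neutrino density per degree of freedom, cm^-3

n_nu_natural = n_nu_cm3 * hbar_c_GeV_cm**3   # GeV^3
dE_eV = G_F * beta_C * n_nu_natural * 1e9    # GeV -> eV
# dE_eV ~ 5e-39 eV, some thirty orders below a 1 G Zeeman splitting (~1e-8 eV).
```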
dS K dt " irH ψ , S K s " Op∆E ψ q,(4.2)
which can equivalently be interpreted as a torque. A ferromagnet with polarisation transverse to the DM wind will therefore experience a macroscopic acceleration as a result of the spin precession, which can be observed with a Cavendish-style torsion balance. Alternatively, a target initially polarised along an external magnetic field will develop some transverse magnetisation as a consequence of the DM background, which may be measurable with a SQUID magnetometer. We will explore each of these methods in turn.
Torsion balance
The possibility of using a torsion balance to observe the tiny energy splittings due to the CνB was first identified by Stodolsky in [38] and has since been discussed in several works [39][40][41]. A single SM fermion interacting with the DM background will experience a torque τ_ψ ≈ |∆E_ψ|, such that a macroscopic target consisting of N_ψ fermions with degree of polarisation P will experience a total torque
τ_tot ≈ P N_ψ |∆E_ψ| = (N_A P M / (m_A A)) × { Z|∆E_e|  (ψ = e),   |∆E_N|  (ψ = N) },  (4.3)
where N denotes an atomic nucleus, N_A is the Avogadro number, whilst M, A and Z denote the total mass, the mass number and the atomic number of the target, respectively. We have additionally introduced the "Avogadro mass" m_A = 1 g mol⁻¹. To estimate the sensitivity of a torsion balance to this energy splitting, we consider the same setup as [41], using a torsion balance consisting of N_m spherical, uniformly dense ferromagnets a distance R away from some central axis. To maximise the sensitivity, we additionally assume that opposing ferromagnets are polarised antiparallel to one another. For this setup, the torsion balance will experience a linear acceleration

a ≈ (N_A P N_m / (m_A A R)) × { Z|∆E_e|  (ψ = e),   |∆E_N|  (ψ = N) }.  (4.4)
As such, if accelerations as small as a_0 can be measured, the experiment is sensitive to energy splittings

|∆E_ψ| ≳ (a_0 m_A A R / (N_A P N_m)) × { 1/Z  (ψ = e),   1  (ψ = N) }
  = (5.2×10⁻²⁸ eV) [ a_0 / (10⁻¹⁵ cm s⁻²) ] [ R / (1 cm) ] [ 2/N_m ] (A/P) × { 1/Z  (ψ = e),   1  (ψ = N) },  (4.5)
where for our reference sensitivity we have used a_0 = 10⁻¹⁵ cm s⁻², which has recently been achieved in torsion balance tests of the weak equivalence principle [71]. By comparison with (4.1), we see that this torsion balance experiment is insensitive to the CνB, but may still be able to observe DM for which the background number density n_DM ≫ n_{ν,0}. In particular, as the background DM number density scales as n_DM = ρ_DM/m_DM, where ρ_DM ≈ 0.4 GeV cm⁻³ is the local dark matter energy density [72], low mass DM scenarios are ideal candidates for detection using this method. Finally, we note that a torsion balance consisting of test masses suspended by superconducting magnets has been considered in [73], with an estimated sensitivity to accelerations as small as a_0 ≈ 10⁻²³ cm s⁻². This, in turn, would allow us to probe energy splittings of order 10⁻³⁶ eV.
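The reference prefactor in (4.5) follows from the quoted baseline values alone; a short sketch of the arithmetic (ours, working in CGS and converting erg to eV):

```python
# Reproduces the 5.2e-28 eV reference sensitivity in (4.5) for
# a0 = 1e-15 cm/s^2, R = 1 cm, N_m = 2, A/P = 1, Z = 1.
N_A = 6.02214e23        # Avogadro number, mol^-1
m_A = 1.0               # "Avogadro mass", g/mol
a0 = 1e-15              # reference acceleration sensitivity, cm/s^2
R = 1.0                 # lever arm, cm
N_m = 2                 # number of ferromagnets

dE_erg = a0 * (m_A / N_A) * R / N_m   # minimal detectable splitting, erg
dE_eV = dE_erg * 6.2415e11            # 1 erg = 6.2415e11 eV
# dE_eV ~ 5.2e-28 eV, matching the prefactor in (4.5).
```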
SQUID magnetometer
The DM wind resulting from the relative motion of the Earth through the background acts similarly to a magnetic field, leading to the spin precession (4.2). As such, if the target spins are initially aligned along some fixed external magnetic field B⃗_ext that is not colinear with the DM wind, the presence of the background will cause the spins to shift away from the axis of B⃗_ext and give rise to a small transverse magnetisation. The spins will then precess around the combined magnetic field and DM wind with some characteristic frequency, which can be detected using a highly sensitive SQUID magnetometer. This idea has previously been discussed in the context of axion DM in [74], and is the basis of the CASPEr experiment [75].
Following the calculations in Appendix D, we find that the transverse magnetisation of a target consisting of N ψ spins evolves as
|M K ptq| " 2ρN A m A P A |R sin`ω ψ,0 2 t˘| 1`R 2 c 1`R 2 cos 2´ω ψ,0 2 t¯ˆ# Zµ e , ψ " e, µ N , ψ " N, (4.6)
where ρ is the mass density of the target, µ ψ denotes the magnetic moment of species ψ, R " ∆E ψ {∆E ψ,B is the ratio of the DM and Zeeman energy splittings, and ω ψ,0 " ∆E ψ,B ? 1`R 2 . In (4.6) we have assumed that the DM wind is exactly perpendicular to ⃗ B ext , which maximises the transverse magnetisation, and that both the external magnetic field and DM wind directions are constant in time 5 . We give the full expression for |M K ptq| and discuss the time dependence in Appendix D.
The transverse magnetisation has a maximum of
|M_⊥(t_max)| = (2ρ N_A P / (m_A A)) × ( |R| / (1 + R²) ) × { Z µ_e  (ψ = e),   µ_N  (ψ = N) },   t_max = (2k + 1)π/ω_{ψ,0},  (4.7)

for R ≤ 1, with k ∈ {0, 1, 2, ...}. Supposing that a magnetometer can precisely measure transverse magnetic fields with magnitude B_0, we will have sensitivity to energy splittings with magnitude
|∆E_ψ| ≳ (B_0 |B⃗_ext| m_A A / (ρ N_A P)) × { 1/Z  (ψ = e),   1  (ψ = N) }
  = (1.0×10⁻³² eV) [ B_0 / (10⁻¹⁶ T) ] [ |B⃗_ext| / (10⁻¹⁰ T) ] [ (7.9 g cm⁻³) / ρ ] (A/P) × { 1/Z  (ψ = e),   1  (ψ = N) },  (4.8)

Figure 1. Evolution of the transverse magnetic field generated by the DM background, normalised by the SQUID sensitivity B_0 = 10⁻¹⁶ T. From bottom to top, the blue, orange and green curves correspond to energy splittings ∆E = 2×10⁻³² eV, 2×10⁻³⁰ eV and 2×10⁻²⁸ eV, respectively, whilst the solid and dotted curves correspond in turn to the cases where β_{C,∥} = 0 and β_{C,∥} = 1/√3. Finally, we assume an applied magnetic field |B⃗_ext| = 10⁻¹⁰ T, and an iron target with magnetic moment µ_ψ = 3.15×10⁻⁸ eV T⁻¹, equal to the nuclear magneton.
for R ≪ 1, where we have used ∆E_{ψ,B} = 2µ_ψ |B⃗_ext|. For our reference scenario we have chosen B_0 = 5×10⁻¹⁴ T, corresponding to the SQUID magnetometer discussed in [76], and used the density of iron in place of ρ. It is clear that the SQUID magnetometer setup is at the very least as sensitive as the torsion balance setup discussed in Section 4.1, but can be made more sensitive by decreasing the applied magnetic field. The ideal setup would therefore be to initially apply a strong external magnetic field to align the target spins, and then steadily decrease the applied field to maximise the acquired transverse polarisation.
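The 1.0×10⁻³² eV prefactor in (4.8) can be reproduced if one identifies the combination B_0 |B⃗_ext| m_A/(ρ N_A) with the field energy B_0 B_ext/µ_0 times the volume per target atom; that identification is our own reading of the formula, not spelled out in the text, but it matches the quoted number to a few per cent:

```python
# Assumed reconstruction of the (4.8) prefactor, in SI units:
# |dE| ~ B0 * B_ext * (volume per atom) / mu0, for A = P = Z = 1 and an iron target.
mu0 = 4e-7 * 3.14159265  # vacuum permeability, T^2 m^3 / J
B0 = 1e-16               # SQUID sensitivity baseline in (4.8), T
B_ext = 1e-10            # applied field, T
rho = 7.9e3              # density of iron, kg/m^3
m_A = 1.0e-3             # kg/mol
N_A = 6.02214e23         # mol^-1

V_atom = m_A / (rho * N_A)                      # m^3 per target atom
dE_eV = B0 * B_ext * V_atom / mu0 / 1.602e-19   # J -> eV
# dE_eV ~ 1.0e-32 eV, the reference value in (4.8).
```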
One should also notice that ω_{ψ,0} = ω_{ψ,L} √(1 + R²), where ω_{ψ,L} = 2πf_{ψ,L} is the angular Larmor frequency of the system. The overall magnetisation of the system would therefore initially precess about the applied magnetic field with frequency ω_{ψ,L}, increasing to √2 ω_{ψ,L} as the applied field is turned down. This effect could also be interpreted as a field dependent gyromagnetic ratio. Given, however, that R ≪ 1 for most reasonable scenarios, we do not expect this to have an observable effect on the signal. We show the time dependence of the SQUID magnetometer signal in Figure 1, including the case where the DM wind is not exactly orthogonal to the applied magnetic field. In particular, we show the signal when the fraction of the relative frame velocity along B⃗_ext is β_{C,∥} = 1/√3, or equivalently when the relative velocity is split equally along each direction. Importantly, this does not have a drastic effect on the magnitude of the signal, and so should not severely impact the sensitivity of this method outside the extreme case where β_{C,∥} → 1.
Example: a scalar DM model
To give a rough estimate of the constraints that can be placed on DM using this method, we consider the two component DM model given in [51], which features a heavy, leptophilic dark vector mediator Z′_µ and a complex scalar ϕ with interaction Lagrangian

L_{Z′} = g²_ϕ Z′_µ Z′^µ |ϕ|² − i g_ϕ (ϕ* ∂↔_µ ϕ) Z′^µ − Z′_µ l̄ γ^µ (g_L P_L + g_R P_R) l,  (4.9)

where l ∈ {e, µ, τ}, g_ϕ, g_L and g_R are dimensionless couplings, and P_{R/L} = (1 ± γ⁵)/2 are the right and left chirality projection operators. Focusing on the case with l = e, and integrating out the heavy Z′, leads to the effective low energy Lagrangian

L_{Z′} = −i (g_ϕ (g_R + g_L) / (2m²_{Z′})) (ϕ* ∂↔_µ ϕ) ē γ^µ e − i (g_ϕ (g_R − g_L) / (2m²_{Z′})) (ϕ* ∂↔_µ ϕ) ē γ^µ γ⁵ e + ... .  (4.10)
Of interest to us is the second term, which by comparison with (3.19) generates an electron energy splitting with magnitude
|∆E e | " 2g ϕ |g R´gL | m 2 Z 1 β C |n ϕ pϕq´n ϕ pϕ˚q|. (4.11)
Next, rewriting |n_ϕ(ϕ) − n_ϕ(ϕ*)| = |δ_ϕ| ρ_DM/m_ϕ, where δ_ϕ ∈ [−1, 1] parameterises the asymmetry between ϕ and ϕ*, and considering purely axial couplings, g_R = −g_L = g_A, we find
|∆E e | " 4 Λ 2 Z 1 ρ DM m ϕ β C |δ ϕ |. " p1.2¨10´3 4 eVq « Λ´1 Z 1 356 TeV´1 ff 2 " 10 MeV m ϕ ȷ |δ ϕ |,(4.12)
for β C " 7.6ˆ10´4 [77], where we have defined the effective new physics scale Λ Z 1 " m Z 1 { a g ϕ |g A | and assumed that ϕ makes up the entire local relic density, ρ DM » 0.4 GeV cm´3. If we instead assume production via freeze-out, we can estimate the local DM density of ϕ in terms of Λ Z 1 and m ϕ , which for m 2 Z 1 " m 2 ϕ " m 2 e has a different scaling to (4.12)
|∆E e | " p1.2¨10´3 4 eVq « 356 TeV´1 Λ´1 Z 1 ff 2 " 10 MeV m ϕ ȷ 3 |δ ϕ |. (4.13)
In both cases, the reference value Λ⁻¹_{Z′} = 356 TeV⁻¹ corresponds to the approximate value required to reproduce the relic density for m_ϕ = 10 MeV. More generally, we require

Λ⁻¹_{Z′} ≳ (356 TeV⁻¹) [ 10 MeV / m_ϕ ]^{1/2},  (4.14)
so as not to overclose the universe. It is instructive to recast both (4.12) and (4.13) in terms of the constraints that can be placed on the effective new physics scale using the DSE. Given a sensitivity to energy shifts |∆E_0| ≳ 10⁻³² eV, corresponding to the SQUID magnetometer considered in (4.8), we find the constraint on the effective new physics scale
Λ⁻¹_{Z′} ≳ (38.8 TeV⁻¹) [ 10 MeV / m_ϕ ]^{3/2} [ 10⁻³² eV / |∆E_0| ]^{1/2} √|δ_ϕ|,  (4.15)

Figure 2. Constraint projections on the effective DM coupling, Λ⁻¹_{Z′} = √(g_ϕ g_A)/m_{Z′}, from the SQUID magnetometer for the generic (green) and freeze-out (orange) production scenarios, where we assume δ_ϕ = 1. We compare these with the constraints from direct detection experiments [78][79][80] (blue), assuming constant DM form factors, and anomalous supernova cooling constraints (red), which we compute following the method of [51] for the 18 M_⊙ progenitor discussed in [81]. For comparison, we show the combination of parameters that reproduce the local relic density for a freeze-out scenario with the black curve, corresponding to the saturation of (4.14).
assuming the energy splitting from freeze-out production (4.13), which for O(1) values of the asymmetry parameter, i.e. supposing that the dark sector matter-antimatter asymmetry follows that of the visible sector, is just one order of magnitude away from being able to probe the Λ_{Z′} that reproduces the measured relic density at m_ϕ = 10 MeV. If we instead assume that ϕ makes up the entirety of dark matter independent of m_ϕ and Λ_{Z′}, corresponding to the energy splitting (4.12), we find the constraint
Λ⁻¹_{Z′} ≲ (3.3×10³ TeV⁻¹) [ m_ϕ / 10 MeV ]^{1/2} [ |∆E_0| / 10⁻³² eV ]^{1/2} (1/√|δ_ϕ|),  (4.16)
which is once more roughly an order of magnitude away from the freeze-out band (4.14). We show the constraints that could be placed on Λ⁻¹_{Z′} using a SQUID magnetometer in Figure 2 as a function of m_ϕ, for both the freeze-out (FO) and unspecified production scenarios, and compare these with the existing constraints from direct detection experiments [78][79][80] and anomalous supernova cooling, computed following the method of [51]. As expected, this experiment significantly outperforms existing direct detection experiments for m_ϕ ≲ 30 MeV. Additionally, if freeze-out is assumed, the SQUID magnetometer experiment is instead able to place constraints on the minimum value of Λ⁻¹_{Z′}, owing to the linear scaling of the DSE with the effective coupling. Importantly, this includes regions that are currently unconstrained by SN 1987a. As an aside, notice that the energy splitting due to ϕ backgrounds far exceeds that expected from the CνB for the parameter ranges considered here, assuming the same asymmetry for both. It is therefore entirely possible that the DSE completely washes out the Stodolsky effect for neutrinos. One could also envisage scenarios in which the opposite is true, and the DSE is overwhelmed by the CνB, or those in which one acts as a significant background to the other. This should be taken into consideration when using this technique, especially as it is difficult to distinguish between the operators responsible for the DSE. Nevertheless, the observation of either the DSE or the Stodolsky effect for neutrinos would be a strong indicator of as-yet-unobserved physics.
Conclusions
Despite comprising ≈ 26% of the energy density of the universe, detecting DM is an incredible challenge that has yet to be accomplished. Here we have explored the possibility of constraining DM models using the DSE: tiny energy splittings between the spin states of SM fermions induced by an incoherent DM background. Throughout, we have used an EFT formalism and identified all effective DM operators up to dimension-6, for DM candidates with spin-0 to spin-3/2, that can give rise to the DSE. Our key finding is that the energy splittings due to the DSE scale linearly with the effective DM coupling, inversely with the DM mass, and are roughly independent of the DM kinematics. Importantly, this differs from traditional DM direct detection experiments, where the sensitivity typically decreases with decreasing DM mass. On the other hand, every operator discussed here requires either a particle-antiparticle or helicity asymmetry in the background to give a non-zero contribution to the DSE. This technique therefore favours chiral models and those with a sizeable chemical potential during production; however, we note that either asymmetry may develop post-production through several mechanisms, e.g. DM reflection at the surface of the Earth, or scattering on polarised backgrounds.
In this work, we have identified two methods through which these tiny energy splittings can be observed. The first utilises an extremely sensitive, polarised torsion balance, which experiences a torque due to the energy splittings induced by the DM background. For a conservative setup, this experiment is sensitive to energy splittings of ∆E_ψ ≈ 10⁻²⁸ eV, but could have a sensitivity to splittings as small as ∆E_ψ ≈ 10⁻³⁶ eV for a more optimistic setup. The second utilises a SQUID magnetometer to detect the time-varying magnetisation of a target due to the DM background, which acts similarly to an external magnetic field on the target. We estimate that this experiment will be sensitive to splittings of ∆E_ψ ≈ 10⁻³² eV.
Finally, we have explored a scalar DM model, considering both the case where the new scalar constitutes the entire local DM density regardless of the model parameters, and the more realistic scenario where it is produced via freeze-out. In both scenarios, we showed the SQUID magnetometer proposal is able to exclude regions of parameter space that are not already ruled out by direct detection experiments or SN 1987a, provided that there is a sizeable asymmetry in the DM background. For the range of parameters considered, we also demonstrated that the DSE for the scalar DM model far exceeded the Stodolsky effect for neutrinos, provided that the asymmetry in both backgrounds was comparable. Clearly, the DSE is a powerful tool to constrain DM models in otherwise difficult-to-test regions of parameter space.
version of this work. We are also grateful to Yuber F. Perez-Gonzalez, Lucien Heurtier and Animesh Datta for some helpful comments during the preparation of this manuscript. Jack D. Shergold is supported by an STFC studentship under the STFC training grant ST/T506047/1.
A Lab frame averaging
In this appendix we will describe the averaging procedure used to compute the energy shifts in the lab frame. We begin by assuming that the DM is described by an isothermal spherical halo, with galaxy frame velocity distribution
f p⃗ pq "ˆ2 π m 2 DM σ 2˙3 2 e´| ⃗ p| 2 2m 2 DM σ 2 , (A.1)
where ⃗ p is the DM momentum in the galactic reference frame, m DM is its mass and σ is the velocity dispersion. The normalisation factor is found by requiring that ş d 3 p p2πq 3 f p⃗ pq " 1. As a result of the frame transformation, DM particles in the lab frame will not follow (A.1) but instead the transformed distribution function f lab , such that the average of some lab frame quantity X lab will be given by
xX lab y " ż d 3 p p2πq 3 X lab f lab p⃗ pq " 1 p2πq 3 ż X lab f lab p⃗ pq|⃗ p| 2 sin θ d|⃗ p| dθ dϕ. (A.2)
To find f_lab(p⃗), we first note that since all velocities involved are small, the momentum of the DM particle in the lab frame, p⃗_lab, can be written in terms of the relative frame velocity β⃗_C as

p⃗_lab ≈ p⃗ + m_DM β⃗_C = |p⃗| (cos ϕ sin θ, sin ϕ sin θ, cos θ)ᵀ + m_DM β_C (0, 0, 1)ᵀ,  (A.3)

where β_C = |β⃗_C|, and we have chosen β⃗_C ∥ ẑ for simplicity. This choice makes no difference at the level of averaging, but becomes important when considering experimental setups. We will therefore write our final expressions for averaged quantities in terms of a general orientation of β⃗_C. Next, since f_lab(p⃗_lab) = f(p⃗), the lab frame distribution function will satisfy

f_lab(p⃗) = f(p⃗ − m_DM β⃗_C) = (2π/(m²_DM σ²))^{3/2} e^{−(|p⃗|² + m²_DM β_C²)/(2m²_DM σ²)} e^{|p⃗| β_C cos θ/(m_DM σ²)},  (A.4)
which can be readily plugged into (A.2) to compute averaged lab frame quantities. In addition to the distribution function, we must also write the lab frame polarisation vectors in terms of DM reference frame quantities. To do so, we rotate the polarisation vectors (3.28) to point along an arbitrary axis, and then use ⃗ p lab to rewrite angles in the lab frame in terms of those in the DM frame, yielding
ϵ μ " pϵ μ q˚" 1 ? 2¨0 1 |⃗ p lab | cos ϕ p|⃗ p| cos θ`β C m DM q´i sin ϕ 1 |⃗ p lab | sin ϕ p|⃗ p| cos θ`β C m DM q`i cos φ |⃗ p| |⃗ p lab | sin θ‹ ‹ ‹ ‚ , (A.5) ϵ µ L "¨| ⃗ p lab | m DM |⃗ p| |⃗ p lab | cos ϕ sin θ |⃗ p| |⃗ p lab | sin ϕ sin θ 1 |⃗ p lab | p|⃗ p| cos θ`β C m DM q‹ ‹ ‹ ‹ ‚ , (A.6)
again assuming β⃗_C ∥ ẑ. Relaxing the assumption β⃗_C ∥ ẑ, we find the averages relevant to the operators considered in this work:

⟨(1/E_DM)(p⃗_DM·S⃗_ψ)⟩ = 2β_C s_{ψ,∥},  (A.7)

⟨(1/E_DM)(p_DM·S_ψ)⟩ = −2β_C s_{ψ,∥},  (A.8)

⟨(1/E_DM)(S_DM·S_ψ)⟩ = [ ((1 − 8β_r²)/(8β_r²)) Erf(2β_r) − e^{−4β_r²}/(2√π β_r) ] s_{ψ,∥}/m_DM ≈ −(7/8) s_{ψ,∥}/m_DM + O(1 − β_r),  (A.9)

⟨(1/E_DM)(p_DM·S_ψ)(S_DM·p_ψ)⟩ = [ ((1 − 16β_r² − 64β_r⁴)/(16β_r⁴)) Erf(2β_r) − ((1 + 8β_r²)/(4√π β_r)) e^{−4β_r²} ] β_c² m_ψ s_{ψ,∥} ≈ −5β_c² m_ψ s_{ψ,∥} + O(1 − β_r),  (A.10)

⟨(1/E_DM)(p_DM·p_ψ)(S_DM·S_ψ)⟩ = [ ((1 − 8β_r²)/(8β_r²)) Erf(2β_r) − e^{−4β_r²}/(2√π β_r) ] m_ψ s_{ψ,∥} ≈ −(7/8) m_ψ s_{ψ,∥} + O(1 − β_r),  (A.11)

⟨(1/E_DM) ε_{αβµν} p^α_DM p^β_ψ S^µ_DM S^ν_ψ⟩ = 0,  (A.12)

⟨(1/E_DM) ε_{αβµν} p^α_ψ S^β_ψ ϵ*^µ_± ϵ^ν_±⟩ = ±i [ ((1 − 8β_r²)/(8β_r²)) Erf(2β_r) − e^{−4β_r²}/(2√π β_r) ] (m_ψ/m_DM) s_{ψ,∥} ≈ ∓(7i/8)(m_ψ/m_DM) s_{ψ,∥} + O(1 − β_r),  (A.13)

⟨(1/E_DM) ε_{αβµν} p^α_ψ S^β_ψ ϵ*^µ_L ϵ^ν_L⟩ = 0,  (A.14)

⟨(1/E_DM)(p_ψ·ϵ_±)(S_ψ·ϵ*_±)⟩ = 0,  (A.15)

⟨(1/E_DM)(p_ψ·ϵ_L)(S_ψ·ϵ_L)⟩ = −2(m_ψ/m_DM) β_C s_{ψ,∥},  (A.16)

⟨(1/E_DM)(p⃗_X·ϵ⃗_±)(ϵ*_±·S_ψ)⟩ = 0,  (A.17)

⟨(1/E_DM)(p⃗_X·ϵ⃗_L)(ϵ*_L·S_ψ)⟩ = −2β_C s_{ψ,∥},  (A.18)

⟨(1/E_DM) ε_{αβiν} ϵ*^α_± ϵ^β_± p^i_X S^ν_ψ⟩ = 0,  (A.19)

⟨(1/E_DM) ε_{αβiν} ϵ*^α_L ϵ^β_L p^i_X S^ν_ψ⟩ = 0,  (A.20)

with s_{ψ,∥} = (β⃗_C·s⃗_ψ)/β_C and β_r = β_C/β_c, where β_c = √2 σ is the circular velocity of the galaxy. The presence of s_{ψ,∥} indicates that only the spin state directed along the DM wind experiences an energy shift. We will not include this factor explicitly in the main text.
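The −7/8 suppression quoted in (A.9), (A.11) and throughout Section 3 is just the bracket evaluated at β_r = 1; a one-line numeric check (ours):

```python
# The common bracket in (A.9)/(A.11)/(A.13), evaluated at beta_r = beta_C/beta_c = 1,
# reproduces the -7/8 factor appearing in (3.42)-(3.44).
import math

def bracket(beta_r):
    return (((1.0 - 8.0 * beta_r**2) / (8.0 * beta_r**2)) * math.erf(2.0 * beta_r)
            - math.exp(-4.0 * beta_r**2) / (2.0 * math.sqrt(math.pi) * beta_r))

val = bracket(1.0)
# val ~ -0.876, i.e. -7/8 up to O(1 - beta_r) corrections.
```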
B Operator basis
Here we outline the identities used to reduce the effective DM operator bases to those appearing in Section 3. We begin by noting that, excluding field indices, the Lorentz structures that can enter into our effective DM operators are the ones already given in (3.6), along with the partial derivative, ∂_µ, and the Levi-Civita tensor, ε_{αβµν}. Considering operators up to dimension-6 with a fermionic SM part, we can therefore have at most a single derivative entering. As such, for spin-0 and spin-1/2 DM particles, the only way that the Levi-Civita tensor can enter a Lagrangian is through operators that contain at least three gamma matrices, which can then be reduced to simpler Lorentz structures via the Chisholm identity and the definition of γ⁵:

γ^α γ^β γ^µ = η^{αβ} γ^µ + η^{βµ} γ^α − η^{αµ} γ^β − i ε^{σαβµ} γ_σ γ⁵,  (B.1)

γ⁵ = (i/4!) ε_{αβµν} γ^α γ^β γ^µ γ^ν,  (B.2)
where η^{µν} is the metric tensor. These can also be used to derive the identity

ε_{αβµν} σ^{µν} = −2i σ_{αβ} γ⁵,  (B.3)

which generates the additional Lorentz structure given in (3.9), allowing us to express operators containing a Levi-Civita tensor in terms of operators containing the more convenient σ^{µν}γ⁵ structure. The trace of the product of fermion spinors with this structure is

Tr[ u_ψ ū_ψ σ^{µν} γ⁵ ] = 2i ( p^µ_ψ S^ν_ψ − S^µ_ψ p^ν_ψ ).  (B.4)
Additionally, we can use this to demonstrate that O^χ_5 = O^χ_4 via

i(χ̄ σ_{µν} γ⁵ χ)(ψ̄ σ^{µν} ψ) = −(1/2) ε^{µναβ} (χ̄ σ_{αβ} χ)(ψ̄ σ_{µν} ψ) = i(χ̄ σ_{αβ} χ)(ψ̄ σ^{αβ} γ⁵ ψ),  (B.5)

and that O^χ_6 = O^χ_3 using

(χ̄ σ_{µν} γ⁵ χ)(ψ̄ σ^{µν} γ⁵ ψ) = −(1/4) ε^{µναβ} ε_{µνδγ} (χ̄ σ_{αβ} χ)(ψ̄ σ^{δγ} ψ) = (χ̄ σ_{αβ} χ)(ψ̄ σ^{αβ} ψ).  (B.6)

Many operators containing derivatives can be reduced using symmetry currents, and through integration by parts. Take, for example, the scalar operator

(∂_µ |ϕ|²) ψ̄ γ^µ γ⁵ ψ = ∂_µ [ |ϕ|² ψ̄ γ^µ γ⁵ ψ ] − |ϕ|² ∂_µ( ψ̄ γ^µ γ⁵ ψ ).  (B.7)

The first term on the right-hand side is a total derivative and so will not contribute to the classical action, whilst the second term contains the derivative of the axial current, which can be re-expressed as 2i m_ψ |ϕ|² ψ̄ γ⁵ ψ using the equations of motion for spin-1/2 fields. We can therefore perform the operator reduction

(∂_µ |ϕ|²) ψ̄ γ^µ γ⁵ ψ ⟶ |ϕ|² ψ̄ γ⁵ ψ,  (B.8)
which as per (3.7) does not contribute to the DSE. Note that similar structures containing vector currents vanish from the requirement ∂_µ(ψ̄ γ^µ ψ) = 0. Further reductions in the effective operator basis are obtained using equations of motion. In particular, we make use of the spin-1 and spin-3/2 equations of motion, which lead to the constraints

∂_µ X^µ = 0,   ∂_µ Ψ^µ = 0,   γ_µ Ψ^µ = 0,  (B.9)

allowing us to eliminate or simplify operators in which spin-1 fields share an index with a derivative. Manipulation using the third identity, γ_µ Ψ^µ = 0, yields the basis given in Table 3. The operator bases used throughout this work are those in which the equations of motion have been applied maximally. This avoids the need to apply Hamilton's equations to the Hamiltonian when computing the energy shifts, which is a far more involved task than using the Euler-Lagrange equations. For completeness, we also specify the spin-independent members of our operator bases: at spin-0, these are |ϕ|² ψ̄ ψ, i|ϕ|² ψ̄ γ⁵ ψ, and i(ϕ† ∂↔_µ ϕ)(ψ̄ γ^µ ψ); for spin-1 DM, we have |X|² ψ̄ ψ, i|X|² ψ̄ γ⁵ ψ, along with the vector current analogues of each axial-vector operator appearing in Table 2, hermitianised appropriately with factors of the imaginary unit. Finally, the full spin-1/2 basis includes products of fermion bilinears not given in Table 1, whilst the complete basis for RS fermions is given in [68]. The same spin-0 basis, along with a similar spin-1 basis, can also be found in [82].
C Levi-Civita identity
Here we derive an identity that can be used to evaluate contractions of four-vectors and a Levi-Civita tensor of the form
ε_{αβµν} A^α B^β C^µ D^ν,  (C.1)
in a more practical manner, where A, B, C and D are some unspecified four-vectors. First we note that this contraction will always contain at least three spatial components, and so will carry a global factor of (−1)³ regardless of the four-vectors considered. Next, recalling that we can write a contraction of the Levi-Civita symbol and a series of vectors as a matrix determinant, we have
ε αβµν A α B β C µ D ν " p´1q 3ˇA 0 A 1 A 2 A 3 B 0 B 1 B 2 B 3 C 0 C 1 C 2 C 3 D 0 D 1 D 2 D 3ˇ, (C.2)
which can be neatly re-expressed in terms of scalar triple products as
ε αβµν A α B β C µ D ν "´A 0 r ⃗ B, ⃗ C, ⃗ Ds`B 0 r ⃗ A, ⃗ C, ⃗ Ds´C 0 r ⃗ A, ⃗ B, ⃗ Ds`D 0 r ⃗ A, ⃗ B, ⃗ Cs, (C.3)
where r ⃗ A, ⃗ B, ⃗ Cs " ⃗ A¨p ⃗ Bˆ⃗ Cq is the scalar triple product which is unchanged by cyclic permutations, and antisymmetric under the interchange of any two elements. One could also choose to take the determinant in other ways which may better suit the experimental setup.
To demonstrate the use of this identity, we consider a contraction that may occur between a dark fermion χ and a SM fermion ψ, which have four-momenta and spin, p and S, respectively
ε_{αβµν} p^α_χ p^β_ψ S^µ_χ S^ν_ψ.  (C.4)
If the SM fermion is at rest in the lab frame, we can use (C.3) to reduce (C.4) to
ε αβµν p α χ p β ψ S µ χ S ν ψ " m ψ r⃗ p χ , ⃗ S χ , ⃗ S ψ s. (C.5)
In the most general case, this can then be evaluated by explicitly plugging in values for the relevant spins and momenta. However, if we instead consider the DM to be in a helicity eigenstate, we will have ⃗ S χ ∥ ⃗ p χ , from which it follows that ε αβµν p α χ p β ψ S µ χ S ν ψ " 0, (C.6) by making use of the antisymmetric property of the scalar triple product.
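Both forms of the identity, and the vanishing of (C.6), are easy to verify numerically; a small illustrative sketch (ours, with randomly chosen four-vectors and an on-shell helicity eigenstate):

```python
# Check of (C.2)/(C.3) against each other, and of the helicity-eigenstate result (C.6).
import numpy as np

rng = np.random.default_rng(1)

def triple(u, v, w):
    """Scalar triple product [u, v, w] = u . (v x w)."""
    return float(np.dot(u, np.cross(v, w)))

def contract_det(A, B, C, D):
    # eps_{abmn} A^a B^b C^m D^n = (-1)^3 det([A; B; C; D]), as in (C.2)
    return -float(np.linalg.det(np.stack([A, B, C, D])))

def contract_triple(A, B, C, D):
    # The triple-product expansion (C.3)
    return (-A[0] * triple(B[1:], C[1:], D[1:]) + B[0] * triple(A[1:], C[1:], D[1:])
            - C[0] * triple(A[1:], B[1:], D[1:]) + D[0] * triple(A[1:], B[1:], C[1:]))

A, B, C, D = rng.normal(size=(4, 4))
match = np.isclose(contract_det(A, B, C, D), contract_triple(A, B, C, D))

# DM helicity eigenstate: spatial spin parallel to momentum -> contraction vanishes.
m_chi, m_psi = 1.0, 0.5
p3 = rng.normal(size=3)
E = np.sqrt(m_chi**2 + p3 @ p3)
p_chi = np.array([E, *p3])
S_chi = np.array([np.linalg.norm(p3) / m_chi, *(E / m_chi * p3 / np.linalg.norm(p3))])
p_psi = np.array([m_psi, 0.0, 0.0, 0.0])   # SM fermion at rest in the lab frame
S_psi = np.array([0.0, *rng.normal(size=3)])
helicity_contraction = contract_triple(p_chi, p_psi, S_chi, S_psi)
# match is True, and helicity_contraction ~ 0, reproducing (C.6).
```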
D Fermion spin precession
Here we derive the spin precession of an SM fermion in a combined magnetic and DM background field that gives rise to the transverse magnetisation (4.6). To do so, we need to set up the differential equation that governs the evolution of the SM fermion spin. There will be two components to this: the precession due to the DM background, and the precession due to an external magnetic field. Both of these are due to the same effect: a non-diagonal Hamiltonian resulting from the energy splittings due to background fields. We begin with the time-dependent Schrödinger equation, which for our system takes the form

i ∂_t ψ(x, t) = ( H_kin(x) + V_DM + V_B ) ψ(x, t),  (D.1)

where ψ(x, t) is the fermion wavefunction, H_kin(x) is its kinetic Hamiltonian, which is spin and time-independent, whilst V_DM and V_B are the potentials due to the DM background and applied magnetic field, respectively, which are spin-dependent and we will treat as constant in time⁶. This motivates the factorisation

ψ(x, t) = X(x) T(t),  (D.2)
where X(x) is a scalar, containing the spatial components of the wavefunction, and T(t) is an eigenspinor of the form

T(t) = ( T₊(t), T₋(t) )ᵀ,   |T(t)|² = 1.  (D.3)
This factorisation makes (D.1) separable, but it is easier to note that

H_kin(x) X(x) = E_kin X(x),  (D.4)

such that we can absorb E_kin as a time-independent, spin-diagonal contribution to the potential. The overall factor of X(x) can then be factored out, allowing us to write

( i ∂_t − H ) T(t) = 0,  (D.5)

where H is the total Hamiltonian, including the spin-diagonal contribution from H_kin. If the magnetic field is defined such that it points along z, then the z oriented spin state will experience an energy shift. Additionally, the up and down spin states should experience a shift of opposite sign. The potential due to the magnetic field should therefore be proportional to the spin operator along z, that is

V_B = (∆E_{ψ,B}/2) S_z,   S_z = diag(1, −1),  (D.6)

where ∆E_{ψ,B} is the energy shift due to the magnetic field. We see that this has the desired properties: if we act on an S_z eigenstate with eigenvalue⁷ s_z = ±1, we get the eigenvalue s_z ∆E_{ψ,B}/2. Next, we seek to do the same for the potential due to the DM, which should be directed along the DM wind. Explicitly,

V_DM = (∆E_ψ/2)( β_{C,x} S_x + β_{C,y} S_y + β_{C,z} S_z ) = (∆E_ψ/2) ( β_{C,z} ,  β_{C,x} − iβ_{C,y} ;  β_{C,x} + iβ_{C,y} ,  −β_{C,z} ),  (D.7)
where $\beta_{C,i} = (\vec{\beta}_C\cdot\vec{e}_i)/\beta_C \in [-1,1]$ is the fraction of the relative frame velocity along the direction $i$. We should also include a diagonal term due to the spin-independent effects from the DM; however, this can simply be absorbed into $E_{\rm kin}$. The total Hamiltonian is then
$$ H = \frac{1}{2}\begin{pmatrix} 2E_{\rm kin} + \Delta E_{\psi,B} + \Delta E_{\psi}\beta_{C,z} & \Delta E_{\psi}(\beta_{C,x} - i\beta_{C,y}) \\ \Delta E_{\psi}(\beta_{C,x} + i\beta_{C,y}) & 2E_{\rm kin} - \Delta E_{\psi,B} - \Delta E_{\psi}\beta_{C,z} \end{pmatrix}, \tag{D.8} $$
such that the solution to (D.5) is given by
T˘ptq "
« s˘cos´ω ψ 2 t¯´p β C,x¯i β C,y qRs¯˘p1˘β C,z Rqsȃ 1`2β C,z R`R 2 i sin´ω ψ 2 t¯ff e´i E kin t , (D.9)
where ω ψ " ∆E ψ,B a 1`2β C,z R`R 2 is the (angular) precession frequency of the system, proportional to the Larmor frequency, s˘" T˘p0q are the initial values of the SM fermion eigenspinor, and R " ∆E ψ {∆E ψ,B P r´1, 1s is the ratio of the energy shifts due to each of the background potentials. We further note that s`is always real, whilst s´may be complex.
To compute the spin precession using $T_{\pm}$, we note that the time derivative of some operator $\mathcal{O}$ is given by Heisenberg's equation of motion,
$$ \frac{d\mathcal{O}}{dt} = i[H, \mathcal{O}]. \tag{D.10} $$
Plugging (D.9) into the resulting expressions (D.11) for the spin expectation values, we find
$$ \frac{ds_x}{dt} = \left(\frac{\beta_{C,x}(1+\beta_{C,z}R)}{\sqrt{1+2\beta_{C,z}R+R^2}}\sin(\omega_\psi t) + \beta_{C,y}\cos(\omega_\psi t)\right)\Delta E_\psi, \tag{D.12} $$
$$ \frac{ds_y}{dt} = \left(\frac{\beta_{C,y}(1+\beta_{C,z}R)}{\sqrt{1+2\beta_{C,z}R+R^2}}\sin(\omega_\psi t) - \beta_{C,x}\cos(\omega_\psi t)\right)\Delta E_\psi, \tag{D.13} $$
$$ \frac{ds_z}{dt} = \left(-\frac{(1-\beta_{C,z}^2)R}{\sqrt{1+2\beta_{C,z}R+R^2}}\sin(\omega_\psi t)\right)\Delta E_\psi, \tag{D.14} $$
where we have related $s_{\pm}$ to the initial values of $s_x$, $s_y$ and $s_z$ using
$$ s_{x,0} = T(0)^\dagger S_x T(0) = 2s_+\,{\rm Re}(s_-) = 0, \tag{D.15} $$
$$ s_{y,0} = T(0)^\dagger S_y T(0) = 2s_+\,{\rm Im}(s_-) = 0, \tag{D.16} $$
$$ s_{z,0} = T(0)^\dagger S_z T(0) = |s_+|^2 - |s_-|^2 = 1, \tag{D.17} $$
and assumed that the spins are initially aligned with the external magnetic field. Notice that all three of (D.12), (D.13) and (D.14), especially those along $x$ and $y$, are proportional to the energy splitting due to the background DM field, and so will vanish in its absence. These equations are readily solved to find the expectation values of the SM fermion spin as a function of time:
$$ s_x(t) = \frac{2R}{\sqrt{1+2\beta_{C,z}R+R^2}}\left[\frac{\beta_{C,x}(1+\beta_{C,z}R)}{\sqrt{1+2\beta_{C,z}R+R^2}}\sin^2\left(\frac{\omega_\psi}{2}t\right) + \frac{\beta_{C,y}}{2}\sin(\omega_\psi t)\right], \tag{D.18} $$
$$ s_y(t) = \frac{2R}{\sqrt{1+2\beta_{C,z}R+R^2}}\left[\frac{\beta_{C,y}(1+\beta_{C,z}R)}{\sqrt{1+2\beta_{C,z}R+R^2}}\sin^2\left(\frac{\omega_\psi}{2}t\right) - \frac{\beta_{C,x}}{2}\sin(\omega_\psi t)\right], \tag{D.19} $$
$$ s_z(t) = 1 - \frac{2R^2(1-\beta_{C,z}^2)}{1+2\beta_{C,z}R+R^2}\sin^2\left(\frac{\omega_\psi}{2}t\right), \tag{D.20} $$
such that the magnitude of the spin along the transverse direction evolves according to
$$ |s_\perp(t)| = \sqrt{s_x(t)^2 + s_y(t)^2} = \frac{2\left|R\sin\left(\frac{\omega_\psi}{2}t\right)\right|}{1+2\beta_{C,z}R+R^2}\sqrt{1-\beta_{C,z}^2}\,\sqrt{1+2\beta_{C,z}R+R^2\left[\cos^2\left(\frac{\omega_\psi}{2}t\right)+\beta_{C,z}^2\sin^2\left(\frac{\omega_\psi}{2}t\right)\right]}, \tag{D.21} $$
which vanishes identically when $|\beta_{C,z}| = 1$, or equivalently when the DM wind is colinear with the magnetic field. Consequently, the expression equivalent to (D.21) for a general magnetic field orientation is found by making the replacement $\beta_{C,z} \to \beta_{C,\parallel}$, where $\beta_{C,\parallel}$ is the fraction of the relative frame velocity along the external magnetic field direction. The corresponding transverse magnetisation is simply $|M_\perp(t)| = n_\psi\,\mu_\psi\,|s_\perp(t)|$, where $n_\psi$ is the number density of SM fermions in the target, and $\mu_\psi$ is their magnetic moment. Given that $\omega_{\psi,0} = \Delta E_{\psi,B}\sqrt{1+R^2}$, this reduces to (4.6) when $\beta_{C,\parallel} = 0$. For completeness, we note that (D.21) has a maximum of
$$ |s_\perp(t_{\rm max})| = \frac{2|R|\sqrt{1-\beta_{C,\parallel}^2}\,\sqrt{1+2\beta_{C,\parallel}R+\beta_{C,\parallel}^2R^2}}{1+2\beta_{C,\parallel}R+R^2}, \qquad t_{\rm max} = \frac{(2k+1)\pi}{\omega_\psi}, \tag{D.22} $$
where $k \in \{0, 1, 2, \ldots\}$. This recovers (4.7) for $\beta_{C,\parallel} = 0$.
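The closed forms (D.18)-(D.22) can be checked against a direct numerical solution of (D.5). The following sketch is our own cross-check with arbitrary assumed parameter values: it evolves an initially spin-up eigenspinor with $T(t) = e^{-iHt}T(0)$ and compares the resulting spin expectation values, transverse magnitude and maximum with the analytic expressions:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

# Arbitrary illustrative parameters (assumptions, not values from the paper)
E_kin, dE_B, dE_psi = 0.7, 1.0, 0.5
bx, by, bz = 0.36, 0.48, 0.8               # unit vector: 0.36^2 + 0.48^2 + 0.8^2 = 1
R = dE_psi / dE_B
A = 1 + 2 * bz * R + R**2
omega = dE_B * np.sqrt(A)                  # precession frequency omega_psi

H = (E_kin * np.eye(2) + 0.5 * dE_B * SZ
     + 0.5 * dE_psi * (bx * SX + by * SY + bz * SZ))

def spin_numeric(t):
    """Evolve T(0) = (1, 0)^T with exp(-iHt) and take <S_i> = T^dag S_i T."""
    w, U = np.linalg.eigh(H)
    T = U @ (np.exp(-1j * w * t) * (U.conj().T @ np.array([1.0, 0.0])))
    return np.array([np.real(T.conj() @ S @ T) for S in (SX, SY, SZ)])

def spin_closed(t):
    """Closed forms (D.18)-(D.20)."""
    s2, s1 = np.sin(omega * t / 2)**2, np.sin(omega * t)
    pref, P = 2 * R / np.sqrt(A), (1 + bz * R) / np.sqrt(A)
    return np.array([pref * (bx * P * s2 + 0.5 * by * s1),
                     pref * (by * P * s2 - 0.5 * bx * s1),
                     1 - 2 * R**2 * (1 - bz**2) / A * s2])

def s_perp(t):
    """Transverse magnitude (D.21)."""
    c2, s2 = np.cos(omega * t / 2)**2, np.sin(omega * t / 2)**2
    return (2 * np.abs(R * np.sin(omega * t / 2)) / A * np.sqrt(1 - bz**2)
            * np.sqrt(1 + 2 * bz * R + R**2 * (c2 + bz**2 * s2)))

t = 1.37                                   # arbitrary probe time
sn, sc = spin_numeric(t), spin_closed(t)

# Maximum (D.22), attained at t_max = (2k+1) pi / omega
t_max = np.pi / omega
s_max = (2 * np.abs(R) * np.sqrt(1 - bz**2)
         * np.sqrt(1 + 2 * bz * R + bz**2 * R**2) / A)
print(sn, sc, s_perp(t_max), s_max)
```

As a design note, the kinetic term only contributes an overall phase $e^{-iE_{\rm kin}t}$, so the expectation values are independent of `E_kin`, and the maximum is reached when $\sin(\omega_\psi t/2) = \pm 1$.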
such that the time derivative of each of the expectation values $s_i$ is
$$ \frac{ds_i}{dt} = T(t)^\dagger\left(\frac{dS_i}{dt}\right)T(t). \tag{D.11} $$
Table 2. Lorentz invariant, Hermitian, gauge invariant and irreducible spin-1 DM operators contributing to the DSE up to dimension-6, along with their corresponding expectation values in a background of complex vector bosons and the conjugate field, denoted by $|X\rangle$ and $|X^*\rangle$, respectively. We leave the global factors of the coupling, new physics scale and SM fermion spin eigenvalue, $h_\psi$, implicit.
¹ For simplicity, we will only consider helicity eigenstates for the remainder of this paper.
² The apparent divergence is an artefact of the frame transformation, and is discussed at length in Section 5.2 and Appendix B of [41].
³ As helicity is a good quantum number, it is conserved in time. The helicity profile of the DM background today should therefore be the same as at production, in the absence of significant late-time interactions. See [41] and [59] for the argument as applied to the CνB.
⁴ In order to be cold, the dark matter background must be massive.
⁵ For a given choice of axes, at least one of either the DM wind or magnetic field direction must have some time dependence, due to the evolution of the relative velocity between the laboratory and DM reference frames.
⁶ In truth, at least one of these must be time-dependent. If we fix our coordinate system in the lab frame, then due to the relative motion of the Earth with respect to the DM reference frame, the direction of the background wind will change in time. However, this can alternatively be accounted for by weighting the collected data by the projection of the relative velocity onto the magnetic field direction. See supplementary material S10.1 of [83] for details of the weighting, and [84] for a full parametrisation of the relevant coordinate systems.
⁷ We adopt the convention $S_i = \sigma_i$, with $i \in \{x, y, z\}$ and $\sigma$ denoting a Pauli matrix.
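The convention of footnote 7 is what fixes the factors of two in (D.6)-(D.8): with $S_i = \sigma_i$ the spin eigenvalues are $s_z = \pm 1$ rather than $\pm 1/2$, and the algebra reads $[S_a, S_b] = 2i\epsilon_{abc}S_c$. A minimal sanity check (our own sketch, not from the paper):

```python
import numpy as np

# With S_i = sigma_i (footnote 7), the spin algebra carries a factor of two
# relative to the S_i = sigma_i / 2 convention: [S_a, S_b] = 2 i eps_abc S_c.
S = {
    "x": np.array([[0, 1], [1, 0]], dtype=complex),
    "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "z": np.array([[1, 0], [0, -1]], dtype=complex),
}

comm_xy = S["x"] @ S["y"] - S["y"] @ S["x"]
ok_algebra = np.allclose(comm_xy, 2j * S["z"])

# Each S_i has eigenvalues +/- 1, matching the s_z = +/- 1 used below (D.6)
ok_eigen = all(np.allclose(np.linalg.eigvalsh(S[i]), [-1, 1]) for i in "xyz")
print(ok_algebra, ok_eigen)
```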
Acknowledgments
We would like to thank Martin Bauer for some very useful comments about the effective operator basis, and Xiao-Dong Ma for highlighting two redundant operators in a previous version.
V. C. Rubin and W. K. Ford, Jr., Rotation of the Andromeda Nebula from a Spectroscopic Survey of Emission Regions, Astrophys. J. 159 (1970) 379-403.
M. S. Roberts and R. N. Whitehurst, The rotation curve and geometry of M31 at large galactocentric distances, Astrophys. J. 201 (1975) 327-346.
V. C. Rubin, N. Thonnard and W. K. Ford, Jr., Rotational properties of 21 SC galaxies with a large range of luminosities and radii, from NGC 4605 (R = 4 kpc) to UGC 2885 (R = 122 kpc), Astrophys. J. 238 (1980) 471.
A. Bosma, 21-cm line studies of spiral galaxies. 2. The distribution and kinematics of neutral hydrogen in spiral galaxies of various morphological types, Astron. J. 86 (1981) 1825.
D. Clowe, M. Bradač, A. H. Gonzalez, M. Markevitch, S. W. Randall, C. Jones et al., A direct empirical proof of the existence of dark matter, Astrophys. J. 648 (2006) L109.
M. Davis, G. Efstathiou, C. S. Frenk and S. D. M. White, The Evolution of Large Scale Structure in a Universe Dominated by Cold Dark Matter, Astrophys. J. 292 (1985) 371-394.
BOSS collaboration, S. Alam et al., The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: cosmological analysis of the DR12 galaxy sample, Mon. Not. Roy. Astron. Soc. 470 (2017) 2617-2652, [1607.03155].
Planck collaboration, N. Aghanim et al., Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641 (2020) A6, [1807.06209].
S. Liem, G. Bertone, F. Calore, R. Ruiz de Austri, T. M. P. Tait, R. Trotta et al., Effective field theory of dark matter: a global analysis, JHEP 09 (2016) 077, [1603.05994].
U. Banerjee, J. Chakrabortty, S. Prakash, S. U. Rahaman and M. Spannowsky, Effective Operator Bases for Beyond Standard Model Scenarios: An EFT compendium for discoveries, JHEP 01 (2021) 028, [2008.11512].
J. C. Criado, A. Djouadi, M. Perez-Victoria and J. Santiago, A complete effective field theory for dark matter, JHEP 07 (2021) 081, [2104.14443].
J. Aebischer, W. Altmannshofer, E. E. Jenkins and A. V. Manohar, Dark matter effective field theory and an application to vector dark matter, JHEP 06 (2022) 086, [2202.06968].
CDMS collaboration, D. S. Akerib et al., First results from the cryogenic dark matter search in the Soudan Underground Lab, Phys. Rev. Lett. 93 (2004) 211301, [astro-ph/0405033].
CDMS-II collaboration, Z. Ahmed et al., Dark Matter Search Results from the CDMS II Experiment, Science 327 (2010) 1619-1621, [0912.3592].
SuperCDMS collaboration, R. Agnese et al., Search for Low-Mass Weakly Interacting Massive Particles with SuperCDMS, Phys. Rev. Lett. 112 (2014) 241302, [1402.7137].
LUX collaboration, D. S. Akerib et al., First results from the LUX dark matter experiment at the Sanford Underground Research Facility, Phys. Rev. Lett. 112 (2014) 091303, [1310.8214].
XENON collaboration, E. Aprile et al., Search for New Physics in Electronic Recoil Data from XENONnT, Phys. Rev. Lett. 129 (2022) 161805, [2207.11330].
XMASS collaboration, K. Abe et al., A direct dark matter search in XMASS-I, Phys. Lett. B 789 (2019) 45-53, [1804.02180].
CAST collaboration, V. Anastassopoulos et al., New CAST Limit on the Axion-Photon Interaction, Nature Phys. 13 (2017) 584-590, [1705.02290].
ADMX collaboration, T. Braine et al., Extended Search for the Invisible Axion with the Axion Dark Matter Experiment, Phys. Rev. Lett. 124 (2020) 101303, [1910.08638].
ADMX collaboration, C. Bartram et al., Search for Invisible Axion Dark Matter in the 3.3-4.2 µeV Mass Range, Phys. Rev. Lett. 127 (2021) 261803, [2110.06096].
MADMAX collaboration, P. Brun et al., A new experimental approach to probe QCD axion dark matter in the mass range above 40 µeV, Eur. Phys. J. C 79 (2019) 186, [1901.07401].
IAXO collaboration, E. Armengaud et al., Physics potential of the International Axion Observatory (IAXO), JCAP 06 (2019) 047, [1904.09155].
M. Bauer, P. Foldenauer, P. Reimitz and T. Plehn, Light Dark Matter Annihilation and Scattering in LHC Detectors, SciPost Phys. 10 (2021) 030, [2005.13551].
L. Badurina et al., AION: An Atom Interferometer Observatory and Network, JCAP 05 (2020) 011, [1911.11755].
AEDGE collaboration, Y. A. El-Neaj et al., AEDGE: Atomic Experiment for Dark Matter and Gravity Exploration in Space, EPJ Quant. Technol. 7 (2020) 6, [1908.00802].
MAGIS-100 collaboration, J. Coleman, Matter-wave Atomic Gradiometer Interferometric Sensor (MAGIS-100) at Fermilab, PoS ICHEP2018 (2019) 021, [1812.00482].
AMS collaboration, M. Aguilar et al., The Alpha Magnetic Spectrometer (AMS) on the international space station: Part II - Results from the first seven years, Phys. Rept. 894 (2021) 1-116.
M. C. Weisskopf, H. D. Tananbaum, L. P. van Speybroeck and S. L. O'Dell, Chandra x-ray observatory (CXO): overview, Proc. SPIE Int. Soc. Opt. Eng. 4012 (2000) 2, [astro-ph/0004127].
L. Struder et al., The European Photon Imaging Camera on XMM-Newton: The pn-CCD camera, Astron. Astrophys. 365 (2001) L18-26.
MAGIC collaboration, V. A. Acciari et al., Combined searches for dark matter in dwarf spheroidal galaxies observed with the MAGIC telescopes, including new data from Coma Berenices and Draco, Phys. Dark Univ. 35 (2022) 100912, [2111.15009].
H.E.S.S. collaboration, H. Abdallah et al., Search for dark matter signals towards a selection of recently detected DES dwarf galaxy satellites of the Milky Way with H.E.S.S., Phys. Rev. D 102 (2020) 062001, [2008.00688].
VERITAS collaboration, B. Zitzer, The VERITAS Dark Matter Program, PoS ICRC2017 (2018) 904, [1708.07447].
LIGO Scientific, KAGRA and Virgo collaborations, R. Abbott et al., Constraints on dark photon dark matter using data from LIGO's and Virgo's third observing run, Phys. Rev. D 105 (2022) 063030, [2105.13085].
D. Clowe, M. Bradac, A. H. Gonzalez, M. Markevitch, S. W. Randall, C. Jones et al., A direct empirical proof of the existence of dark matter, Astrophys. J. Lett. 648 (2006) L109-L113, [astro-ph/0608407].
WMAP collaboration, G. Hinshaw et al., Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Parameter Results, Astrophys. J. Suppl. 208 (2013) 19, [1212.5226].
F. Beutler, C. Blake, M. Colless, D. H. Jones, L. Staveley-Smith, L. Campbell et al., The 6dF Galaxy Survey: Baryon Acoustic Oscillations and the Local Hubble Constant, Mon. Not. Roy. Astron. Soc. 416 (2011) 3017-3032, [1106.3366].
L. Stodolsky, Speculations on Detection of the Neutrino Sea, Phys. Rev. Lett. 34 (1975) 110.
G. Duda, G. Gelmini and S. Nussinov, Expected signals in relic neutrino detectors, Phys. Rev. D 64 (2001) 122001, [hep-ph/0107027].
V. Domcke and M. Spinrath, Detection prospects for the Cosmic Neutrino Background using laser interferometers, JCAP 06 (2017) 055, [1703.08629].
M. Bauer and J. D. Shergold, Limits on the cosmic neutrino background, 2207.12413.
K. Petraki and R. R. Volkas, Review of asymmetric dark matter, Int. J. Mod. Phys. A 28 (2013) 1330028, [1305.4939].
A. Arvanitaki and S. Dimopoulos, The Cosmic Neutrino Background on the Surface of the Earth, 2212.00036.
G. Baym and J.-C. Peng, Evolution of primordial neutrino helicities in cosmic gravitational inhomogeneities, Phys. Rev. D 103 (2021) 123019, [2103.11209].
M. Ghosh, Y. Grossman, W. Tangarife, X.-J. Xu and B. Yu, Neutrino forces in neutrino backgrounds, 2209.07082.
D. Blas, I. Esteban, M. C. Gonzalez-Garcia and J. Salvado, On neutrino-mediated potentials in a neutrino background, 2212.03889.
A. Y. Smirnov and X.-J. Xu, Neutrino bound states and bound systems, JHEP 08 (2022) 170, [2201.00939].
T. Marrodán Undagoitia and L. Rauch, Dark matter direct-detection experiments, J. Phys. G 43 (2016) 013001, [1509.08767].
L. D. Duffy and K. van Bibber, Axions as Dark Matter Particles, New J. Phys. 11 (2009) 105008, [0904.3346].
C. Boehm and P. Fayet, Scalar dark matter candidates, Nucl. Phys. B 683 (2004) 219-263, [hep-ph/0305261].
C. Boehm, X. Chu, J.-L. Kuo and J. Pradler, Scalar dark matter candidates revisited, Phys. Rev. D 103 (2021) 075005, [2010.02954].
C. P. Burgess, M. Pospelov and T. ter Veldhuis, The Minimal model of nonbaryonic dark matter: A Singlet scalar, Nucl. Phys. B 619 (2001) 709-728, [hep-ph/0011335].
S. Dodelson and L. M. Widrow, Sterile-neutrinos as dark matter, Phys. Rev. Lett. 72 (1994) 17-20, [hep-ph/9303287].
A. Kusenko, Sterile neutrinos, dark matter, and the pulsar velocities in models with a Higgs singlet, Phys. Rev. Lett. 97 (2006) 241301, [hep-ph/0609081].
K. Petraki and A. Kusenko, Dark-matter sterile neutrinos in models with a gauge singlet in the Higgs sector, Phys. Rev. D 77 (2008) 065014, [0711.4646].
K. N. Abazajian et al., Light Sterile Neutrinos: A White Paper, 1204.5379.
G. Jungman, M. Kamionkowski and K. Griest, Supersymmetric dark matter, Phys. Rept. 267 (1996) 195-373, [hep-ph/9506380].
J. L. Feng, Dark Matter Candidates from Particle Physics and Methods of Detection, Ann. Rev. Astron. Astrophys. 48 (2010) 495-545, [1003.0904].
A. J. Long, C. Lunardini and E. Sabancilar, Detecting non-relativistic cosmic neutrinos by capture on tritium: phenomenology and physics potential, JCAP 08 (2014) 038, [1405.7654].
Particle Data Group collaboration, P. A. Zyla et al., Review of Particle Physics, PTEP 2020 (2020) 083C01.
P. Langacker, The Physics of Heavy Z′ Gauge Bosons, Rev. Mod. Phys. 81 (2009) 1199-1228, [0801.1345].
M. Bauer, P. Foldenauer and J. Jaeckel, Hunting All the Hidden Photons, JHEP 07 (2018) 094, [1803.05466].
M. Fabbrichesi, E. Gabrielli and G. Lanfranchi, The Dark Photon, 2005.01515.
A. Caputo, A. J. Millar, C. A. J. O'Hare and E. Vitagliano, Dark photon limits: A handbook, Phys. Rev. D 104 (2021) 095029, [2105.04565].
G. Servant and T. M. P. Tait, Is the lightest Kaluza-Klein particle a viable dark matter candidate?, Nucl. Phys. B 650 (2003) 391-419, [hep-ph/0206071].
H.-C. Cheng, J. L. Feng and K. T. Matchev, Kaluza-Klein dark matter, Phys. Rev. Lett. 89 (2002) 211301, [hep-ph/0207125].
R. Garani, M. Redi and A. Tesi, Dark QCD matters, JHEP 12 (2021) 139, [2105.03429].
R. Ding and Y. Liao, Spin 3/2 Particle as a Dark Matter Candidate: an Effective Field Theory Approach, JHEP 04 (2012) 054, [1201.0506].
M. A. G. Garcia, Y. Mambrini, K. A. Olive and S. Verner, Case for decaying spin-3/2 dark matter, Phys. Rev. D 102 (2020) 083533, [2006.03325].
O. Antipin, M. Redi, A. Strumia and E. Vigiani, Accidental Composite Dark Matter, JHEP 07 (2015) 039, [1503.08749].
T. A. Wagner, S. Schlamminger, J. H. Gundlach and E. G. Adelberger, Torsion-balance tests of the weak equivalence principle, Class. Quant. Grav. 29 (2012) 184002, [1207.2442].
J. I. Read, The Local Dark Matter Density, J. Phys. G 41 (2014) 063101, [1404.1938].
C. Hagmann, Cosmic neutrinos and their detection, in American Physical Society (APS) Meeting of the Division of Particles and Fields (DPF 99), 1999, astro-ph/9905258.
P. W. Graham and S. Rajendran, New Observables for Direct Detection of Axion Dark Matter, Phys. Rev. D 88 (2013) 035023, [1306.6088].
D. Budker, P. W. Graham, M. Ledbetter, S. Rajendran and A. Sushkov, Proposal for a Cosmic Axion Spin Precession Experiment (CASPEr), Phys. Rev. X 4 (2014) 021030, [1306.6089].
S. K. Lamoreaux, Solid state systems for electron electric dipole moment and other fundamental measurements, Phys. Rev. A 66 (2002) 022109, [nucl-ex/0109014].
VERA collaboration, T. Hirota et al., The first VERA astrometry catalog, Publ. Astron. Soc. Jpn. 72 (2020) 50, [2002.03089].
SENSEI collaboration, L. Barak et al., SENSEI: Direct-Detection Results on sub-GeV Dark Matter from a New Skipper-CCD, Phys. Rev. Lett. 125 (2020) 171802, [2004.11378].
R. Essig, T. Volansky and T.-T. Yu, New Constraints and Prospects for sub-GeV Dark Matter Scattering off Electrons in Xenon, Phys. Rev. D 96 (2017) 043017, [1703.00910].
XENON collaboration, E. Aprile et al., Light Dark Matter Search with Ionization Signals in XENON1T, Phys. Rev. Lett. 123 (2019) 251801, [1907.11485].
T. Fischer, S. Chakraborty, M. Giannotti, A. Mirizzi, A. Payez and A. Ringwald, Probing axions with the neutrino signal from the next galactic supernova, Phys. Rev. D 94 (2016) 085012, [1605.08780].
X.-G. He, X.-D. Ma and G. Valencia, FCNC B and K meson decays with light bosonic Dark Matter, JHEP 03 (2023) 037, [2209.05223].
A. Garcon et al., Constraints on bosonic dark matter from ultralow-field nuclear magnetic resonance, Sci. Adv. 5 (2019) eaax4539, [1902.04644].
A. Bandyopadhyay and D. Majumdar, On Diurnal and Annual Variations of Directional Detection Rates of Dark Matter, Astrophys. J. 746 (2012) 107, [1006.3231].
Quantum Quench dynamics in Non-local Luttinger Model: Rigorous Results
20 Nov 2017
Zhituo Wang [email protected]
Institute for Advanced Study in Mathematics
Research Center for Operator Algebras
Harbin Institute of Technology
150006HarbinChina
East China Normal University
We investigate, in the Luttinger model with a fixed box potential, the time evolution of an inhomogeneous state prepared as a localized fermion added to the noninteracting ground state. We prove that, if the state is evolved with the interacting Hamiltonian, the averaged density has two peaks moving in opposite directions with a constant but renormalized velocity. We also prove that a dynamical 'Landau quasi-particle weight' appears in the oscillating part of the averaged density, vanishing asymptotically at large times. The results are proved with the Mattis-Lieb diagonalization method. A simpler proof based on the exact bosonization formulas is also provided.
INTRODUCTION
Recent experiments on cold atoms [1] have motivated increasing interest in the dynamical properties of many-body quantum systems which are closed and isolated from any reservoir or environment [2]. Nonequilibrium properties can be investigated by quantum quenches, in which the system is prepared in an eigenstate of the non-interacting Hamiltonian and its subsequent time evolution, driven by an interacting many-body Hamiltonian, is observed. As the resulting dynamical behavior is the cumulative effect of the interactions between an infinite or very large number of particles, the computation of local observables averaged over time-evolved states typically poses great analytical difficulties; therefore, apart from some analyses in two dimensions (see, for instance, [3,4]), the problem is mainly studied in one dimension [5]-[30]. A major difference with respect to the equilibrium case lies in the fact that at equilibrium a form of universality holds, ensuring that a number of properties are essentially insensitive to the model details. At non-equilibrium the behavior depends instead on model details; for instance, integrability in spin chains dramatically affects the non-equilibrium behavior [13], [40], [41], while it does not alter the T = 0 equilibrium properties [43]. This extreme sensitivity to details or approximations calls for exact analytical results at non-equilibrium, to provide a benchmark for experiments and approximate computations.
One of the interacting fermionic systems in which non-equilibrium properties can be investigated is the Luttinger model [32,33] (see also [34][35][36]), which provides a great deal of information in the equilibrium case. In the Luttinger model the quadratic dispersion relation of the non-relativistic fermions is replaced with a linear dispersion relation, leading to the "anomaly" in the ground-state density distribution. This anomaly is proved to be universal for a large class of one-dimensional fermionic systems, called Luttinger liquids [31]. The Luttinger model has been of great interest in mathematical physics ever since the exact solution found by Mattis and Lieb [34], and it is a key tool for investigating the mathematical properties of condensed matter systems.
It is important to stress that there exist two versions of this model, the local Luttinger model (LLM) and the non-local Luttinger model (NLLM); in the former a local delta-like interaction is present, while in the latter the interaction is short ranged but non-local.
The finite range of the interaction acts as an ultraviolet cut-off. At equilibrium these two models are often confused, as they have similar behavior due to the above mentioned insensitivity to model details; there is, however, no reason to expect that this is true also at non-equilibrium. It should also be stressed that the LLM is plagued by ultraviolet divergences typical of a QFT, and an ad-hoc regularization is necessary to get physical predictions; the short time or distance behavior depends on the chosen regularization.
In this paper we study the evolution of inhomogeneous states in the non-local Luttinger model with a fixed box potential, using the Mattis-Lieb diagonalization method, which was proved to be mathematically rigorous [35,36]. We then perform a rigorous analysis of the asymptotic behavior in the infinite volume limit. The main result (see Theorem 2.2) shows that, when the interaction is turned on, the dynamics is ballistic with a constant but renormalized velocity, and the interaction produces a dynamical 'Landau quasi-particle weight' in the oscillating part, vanishing asymptotically with time. The expressions we get do not require any ultraviolet regularization and correctly capture also the short time dynamics. We also invite the physically oriented reader to read this article along with the short letter [18], in which we studied the quench dynamics of the non-local Luttinger model without giving full details of the proof. In the current article we provide full details of the proof and specialize to the box potential, for which the change of velocity due to the many-body interaction is more transparent; we also provide a simpler proof of the main theorem with the exact Bosonization formulas.
The quantum quench of homogeneous states in the NLLM was studied in [20], [21], in which steady states were found; however, mathematical rigor is lacking in these works.
The quenched evolution of the NLLM prepared in a domain wall initial state was studied in [42], and the universality of the quantum Landauer conductance for the final states was proved in a mathematically rigorous way.
The plan of the paper is the following. We introduce the NLLM with box potential in §II. In §III we prove Theorem 2.2 with the Mattis-Lieb diagonalization method. Some details of the proof are presented in the Appendix. The proof of Theorem 2.2 based on the Bosonization method is given in §IV.
THE LUTTINGER MODEL AND MAIN RESULTS
A. The Luttinger model with box potential
The non-local Luttinger model (NLLM) is defined by the Hamiltonian:
$$H_\lambda = \int_{-L/2}^{L/2} dx\; i v_F\big( :\psi^+_{x,1}\partial_x \psi^-_{x,1}: - :\psi^+_{x,2}\partial_x \psi^-_{x,2}: \big) + \lambda \int_{-L/2}^{L/2}\!\!\int_{-L/2}^{L/2} dx\,dy\; v(x-y)\, :\psi^+_{x,1}\psi^-_{x,1}:\,:\psi^+_{y,2}\psi^-_{y,2}: \qquad (2.1)$$
where $\psi^\pm_{x,\omega} = \frac{1}{\sqrt{L}}\sum_k a^\pm_{k,\omega} e^{\pm ikx}$, $\omega = 1,2$, $k = \frac{2\pi n}{L}$, $n\in\mathbb{Z}$, the $a^\pm_{k,\omega}$ are fermionic creation and annihilation operators, $:\,:$ denotes Wick ordering, and $v_F$ is the Fermi velocity. We choose units so that $v_F = 1$. The two-body interaction potential $v(x-y)$ is given by:
$$v(x-y) = \frac{\sin(x-y)}{x-y}, \qquad (2.2)$$
whose Fourier transform reads:
$$\hat v(p) = \begin{cases} v_0 & \text{for } |p| \le 1, \\ 0 & \text{for } |p| > 1. \end{cases} \qquad (2.3)$$
The potential v(x), or equivalently its transform v̂(p), is called the box potential, and v0 is called its strength. The equilibrium Luttinger model with box potential was first considered in [44].
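As a quick numerical sanity check of (2.2)-(2.3): assuming the convention $v(x) = \frac{1}{2\pi}\int \hat v(p)e^{ipx}dp$, the box $\hat v(p)$ with strength $v_0 = \pi$ inverts exactly to $\sin x / x$ (the value $v_0 = \pi$ and the sample points below are illustrative choices, not fixed by the paper):

```python
import numpy as np

# Inverse Fourier transform of the box potential (2.3):
# (1/2pi) * Integral_{-1}^{1} v0 * e^{ipx} dp = (v0/pi) * sin(x)/x,
# which reproduces v(x) = sin(x)/x of (2.2) when v0 = pi.
v0 = np.pi
N = 200000
h = 2.0 / N
p = -1.0 + h * (np.arange(N) + 0.5)      # midpoint grid on the box support [-1, 1]

for x in (0.5, 1.0, 2.5, 7.0):           # sample points away from x = 0
    vx = np.sum(v0 * np.cos(p * x)) * h / (2 * np.pi)   # e^{ipx} -> cos(px) by symmetry
    assert abs(vx - np.sin(x) / x) < 1e-6, (x, vx)
print("box potential inverts to sin(x)/x")
```

The finite support of v̂(p) is what makes the interaction non-local but short ranged in x.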
In the Fourier space the Luttinger Hamiltonian can be written as
$$H = H_0 + V = \sum_{k>0} k\big[(a^+_{k,1}a^-_{k,1} + a^-_{-k,1}a^+_{-k,1}) + (a^+_{-k,2}a^-_{-k,2} + a^-_{k,2}a^+_{k,2})\big] + \frac{\lambda}{L}\sum_{p>0}\hat v(p)\big[\rho_1(p)\rho_2(-p) + \rho_1(-p)\rho_2(p)\big] + \frac{\lambda}{L}\hat v(0)\, N_1 N_2 \qquad (2.4)$$
where, for p > 0,
$$\rho_\omega(p) = \sum_k a^+_{k+p,\omega}a^-_{k,\omega}, \qquad N_\omega = \sum_{k>0}\big(a^+_{k,\omega}a^-_{k,\omega} - a^-_{-k,\omega}a^+_{-k,\omega}\big). \qquad (2.5)$$
It is well known that the canonical commutation relations do not have a unique Fock-space representation in a system with infinitely many degrees of freedom. One therefore introduces a cutoff function χΛ(k), with Λ a large positive number, such that χΛ(k) = 1 for |k| ≤ Λ and χΛ(k) = 0 otherwise; the regularized operators ρω(p) must be understood as lim_{Λ→∞} Σ_k χΛ(k)χΛ(k+p) a⁺_{k+p,ω}a⁻_{k,ω}. The Hamiltonian H, as well as ρω(p), can be regarded as operators acting on a Hilbert space H constructed as follows. Let H0 be the linear span of vectors obtained by applying finitely many creation or annihilation operators to |0⟩ = Π_{k≤0} a⁺_{k,1}a⁺_{−k,2}|vac⟩. In this way we get an abstract linear space on which a scalar product between any pair of vectors is introduced; H is defined as the completion of H0 in this scalar product, and the operators H and ρω(p), regarded as operators on H with domain H0, are self-adjoint. The basic property of the Luttinger model is the validity of the following anomalous commutation relations, first proved in [34], for p, p′ > 0:
$$[\rho_1(-p), \rho_1(p')] = [\rho_2(p), \rho_2(-p')] = \frac{pL}{2\pi}\,\delta_{p,p'}. \qquad (2.7)$$
Remark that this commutator is not well defined directly on the Fock space, due to the infinitely many degrees of freedom of the system. One should therefore introduce a cutoff Λ, so that the commutator
$$-\sum_{k=-\Lambda+p}^{\Lambda} a^+_{k,\omega}a^-_{k,\omega} + \sum_{k=-\Lambda}^{\Lambda-p} a^+_{k,\omega}a^-_{k,\omega} = \sum_{k=-\Lambda}^{-\Lambda+p} a^+_{k,\omega}a^-_{k,\omega} - \sum_{k=\Lambda-p}^{\Lambda} a^+_{k,\omega}a^-_{k,\omega} \qquad (2.8)$$
acting on any state of H is equal, in the limit Λ → ∞, to pL/2π. Moreover one can verify that
$$\rho_2(p)|0\rangle = 0, \qquad \rho_1(-p)|0\rangle = 0. \qquad (2.9)$$
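The value pL/2π in (2.7)-(2.8) is simply a count of momentum modes: since k = 2πn/L, a momentum window of width p contains exactly pL/2π allowed values of k. A minimal illustration (the values of L, Λ, and p are arbitrary):

```python
import numpy as np

L = 50.0
spacing = 2 * np.pi / L              # allowed momenta k = 2*pi*n/L, n integer
n_cut = 200                          # cutoff Lambda = 2*pi*n_cut/L (illustrative)

for m in (1, 3, 10):
    p = m * spacing                  # p = 2*pi*m/L
    # modes in the boundary window (Lambda - p, Lambda] of (2.8),
    # counted via their integer labels n to avoid float round-off
    count = sum(1 for n in range(-400, 401) if n_cut - m < n <= n_cut)
    assert count == m == round(p * L / (2 * np.pi))
print("each boundary sum in (2.8) contains p*L/(2*pi) modes")
```

Acting on states with only finitely many excitations above |0⟩, each boundary sum contributes this mode count, which is the origin of the anomalous term in (2.7).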
Other important commutation relations (see [34,45] for proofs) are as follows:
$$[H_0, \rho_\omega(\pm p)] = \pm\varepsilon_\omega\, p\,\rho_\omega(\pm p), \qquad [\rho_\omega(p), \psi^\pm_{\omega,x}] = e^{ipx}\,\psi^\pm_{\omega,x}, \qquad (2.10)$$
where $\omega = 1,2$; $\varepsilon_\omega = 1$ for $\omega = 1$ and $\varepsilon_\omega = -1$ for $\omega = 2$.
B. The Mattis-Lieb diagonalization
The Hamiltonian (2.4) can be diagonalized with the method of Mattis-Lieb [34], as follows. First of all we introduce the operator
$$T = \frac{1}{L}\sum_{p>0}\big[\rho_1(p)\rho_1(-p) + \rho_2(-p)\rho_2(p)\big] \qquad (2.11)$$
and write $H = (H_0 - T) + (V + T) = H_1 + H_2$. Note that $H_1$ is already diagonal, in that it commutes with the $\rho_\omega$.
The key for the diagonalization of H 2 is the introduction of a bounded operator S acting on the Hilbert space H:
$$S = \frac{2\pi}{L}\sum_{p\neq 0}\phi(p)\,p^{-1}\rho_1(p)\rho_2(-p), \qquad \tanh\phi(p) = -\frac{\lambda \hat v(p)}{2\pi}. \qquad (2.12)$$
Using the following Bogolyubov transformations for the operators ρ ω (±p):
$$e^{iS}\rho_{1,2}(\pm p)\,e^{-iS} = \rho_{1,2}(\pm p)\cosh\phi(p) + \rho_{2,1}(\pm p)\sinh\phi(p), \qquad (2.13)$$
we can easily prove that H 2 can be written in diagonal form:
$$e^{iS}H_2\,e^{-iS} = \tilde H_2 := \frac{2\pi}{L}\sum_p \operatorname{sech}2\phi(p)\big[\rho_1(p)\rho_1(-p) + \rho_2(-p)\rho_2(p)\big] + E_0. \qquad (2.14)$$
From Formula (2.12) one easily sees that the operator S, and hence the transformation in (2.14), is well defined only for |λv̂(p)| < 2π; the model is unstable for |λv̂(p)| > 2π.
Define
$$D = \tilde H_2 - T = \frac{2\pi}{L}\sum_p \sigma(p)\big[\rho_1(p)\rho_1(-p) + \rho_2(-p)\rho_2(p)\big] + E_0, \qquad (2.15)$$
and we have $[H_0, D] = 0$. The diagonalization formula for the Hamiltonian reads:
$$e^{iS}\, e^{iHt}\, e^{-iS} = e^{i(H_0+D)t}. \qquad (2.16)$$

C. The time evolution of the one-particle state and the main theorem
Define
$$\psi^\pm_{\omega,x,t,\delta} = e^{iH_0 t}\,\psi^\pm_{\omega,x}\,e^{-iH_0 t} = \frac{1}{\sqrt L}\sum_k a^\pm_{k,\omega}\, e^{\pm i(kx - \varepsilon_\omega k t) - \delta|k|}, \qquad (2.17)$$
where δ → 0 + , ε 1 = +, ε 2 = −. By direct calculation we find that:
$$\langle 0|\,\psi^{\varepsilon_\omega}_{\omega,x,t,\delta}\,\psi^{-\varepsilon_\omega}_{\omega,y,s,\delta}\,|0\rangle = \frac{(2\pi)^{-1}}{i\varepsilon_\omega(x-y) - i(t-s) + \delta}. \qquad (2.18)$$
The relation between the creation or annihilation Fermionic operators and the quasiparticle operators is
$$\psi_x = e^{ip_F x}\psi_{x,1} + e^{-ip_F x}\psi_{x,2}, \qquad (2.19)$$
where $p_F$ is the Fermi momentum, and we set $e^{ip_F x}\psi_{x,1} = \tilde\psi_{x,1}$ and $e^{-ip_F x}\psi_{x,2} = \tilde\psi_{x,2}$. In momentum space this simply means that the momentum k is measured from the Fermi points, that is, $\tilde c_{k,\omega} = c_{k+\varepsilon_\omega p_F,\omega}$. The ground state of H is $|GS\rangle = e^{iS}|0\rangle$, where $|0\rangle$ is the ground state of $H_0$, and the inhomogeneous one-particle initial state is given by:
$$|I_t\rangle = e^{iH_\lambda t}\big(\tilde\psi^+_{1,x} + \tilde\psi^+_{2,x}\big)|0\rangle. \qquad (2.20)$$
Let n(z) be the density operator, defined as the limit δ → 0, ε → 0 of the following expression:
$$\frac12\sum_{\rho=\pm}\big(\tilde\psi^+_{1,z+\rho\varepsilon}\tilde\psi^-_{2,z} + \tilde\psi^+_{2,z+\rho\varepsilon}\tilde\psi^-_{1,z} + \tilde\psi^+_{1,z+\rho\varepsilon}\tilde\psi^-_{1,z} + \tilde\psi^+_{2,z+\rho\varepsilon}\tilde\psi^-_{2,z}\big). \qquad (2.21)$$
Note that the sum over ρ = ± is a point-splitting regularization, which plays the same role as Wick ordering in avoiding divergences. We are interested in the average value of the density operator with respect to the one-particle initial state (2.20), formally defined by:
$$G(x,z,t,\delta) := \langle I_t|\,n(z)\,|I_t\rangle \qquad (2.22)$$
$$= \sum_{\omega,\omega'=1,2}\Big[\langle 0|\,\tilde\psi^-_{\omega,x}\, e^{iHt}\,\tilde\psi^+_{\omega,z+\rho\varepsilon}\tilde\psi^-_{\omega',z}\, e^{-iHt}\,\tilde\psi^+_{\omega',x}\,|0\rangle + \langle 0|\,\tilde\psi^-_{\omega,x}\, e^{iHt}\,\tilde\psi^+_{\omega',z+\rho\varepsilon}\tilde\psi^-_{\omega',z}\, e^{-iHt}\,\tilde\psi^+_{\omega,x}\,|0\rangle\Big].$$
As a first step we consider the non-interacting case. Let $|I_{0,t}\rangle := e^{iH_0t}(\tilde\psi^+_{1,x} + \tilde\psi^+_{2,x})|0\rangle$; we have:
Theorem 2.1 When λ = 0, H = H0, we have
$$\lim_{L\to\infty}\langle I_{0,t}|\,n(z)\,|I_{0,t}\rangle = \frac{1}{2\pi^2}\,\frac{\cos 2p_F(x-z)}{(x-z)^2 - t^2} + \frac{1}{4\pi^2}\Big[\frac{1}{((x-z)-t)^2} + \frac{1}{((x-z)+t)^2}\Big]. \qquad (2.23)$$
Proof 2.1 We consider first the term with ω = 1, ω′ = 2. Using the explicit expressions of the fermionic operators and taking the limit ε → 0, we easily find that this term equals $e^{2ip_F(x-z)}(4\pi^2)^{-1}[(x-z)^2 - t^2]^{-1}$; a similar result is found for the second term. The third and fourth terms vanish, as the point-splitting sum over ρ = ± cancels them, while the last two terms give $(4\pi^2)^{-1}[(x-z)\mp t]^{-2}$. Combining all these terms we obtain Formula (2.23), which proves the theorem.
Remark 2.1
The physical meaning of Theorem 2.1 is quite clear: when the interaction is turned off, the average of the density is the sum of two terms, an oscillating and a non-oscillating part (when the particle is added to the vacuum there are no oscillations, since p_F = 0).
At t = 0 the density is peaked at z = x, where the average is singular. As time increases, the particle peaks move to the left and to the right with constant velocity v_F = 1 (ballistic motion); that is, the average of the density is singular at z = x ± t, and a "light cone dynamics" is found.
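The light-cone statement can be visualized by evaluating the non-oscillating part of (2.23) with each singular factor smoothed as $1/(u^2+\delta^2)$ (the smoothing δ and all numerical values are illustrative, introduced only to make the peaks finite): the profile is maximal at z = x ± t.

```python
import numpy as np

x, t, delta = 0.0, 5.0, 0.05
z = np.linspace(-10.0, 10.0, 4001)            # grid step 0.005

# delta-smoothed non-oscillating part of (2.23)
n_free = (1.0 / (((x - z) - t) ** 2 + delta ** 2)
          + 1.0 / (((x - z) + t) ** 2 + delta ** 2)) / (4 * np.pi ** 2)

right_peak = z[z > 0][np.argmax(n_free[z > 0])]
left_peak = z[z < 0][np.argmax(n_free[z < 0])]
assert abs(right_peak - (x + t)) < 0.01       # peak moving right with velocity 1
assert abs(left_peak - (x - t)) < 0.01        # peak moving left with velocity 1
print("density peaks at z = x +/- t (light cone)")
```

As δ → 0 the two maxima sharpen into the singularities at z = x ± t described above.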
When we turn on the interaction and let the system be driven by the full interacting Hamiltonian, the ground state and the dynamics change significantly. The explicit expression of (2.22) can be derived with the Mattis-Lieb diagonalization method, followed by a rigorous analysis of the asymptotic behavior for L → ∞ and large t. We have

Theorem 2.2 Let the interacting box potential (see (2.3)) be turned on in the Hamiltonian, and let $\gamma_0 = \frac{v_0}{4\pi}$ and $\omega_0 = \sqrt{1 - \big(\frac{v_0}{2\pi}\big)^2}$.
The average of the density operator with respect to the one particle initial state |I λ,t > in the limit L → ∞ reads:
$$\lim_{L\to\infty}\langle I_{\lambda,t}|\,n(z)\,|I_{\lambda,t}\rangle = \frac{1}{4\pi^2}\Big[\frac{1}{((x-z)-t)^2} + \frac{1}{((x-z)+t)^2}\Big] + \frac{1}{2\pi^2}\cos 2p_F(x-z)\,\frac{e^{Z(t)}}{(x-z)^2 - (\omega_0 t)^2}. \qquad (2.24)$$
where
$$Z(t) = \gamma_0\int_0^1 \frac{dp}{p}\big(\cos 2\omega_0 p t - 1\big) \qquad (2.25)$$
is the Landau quasi-particle factor, such that Z(0) = 0 and
$$\exp Z(t) \sim \mathrm{cst}\,\Big(\frac{1}{2\omega_0 t}\Big)^{\gamma_0} \qquad (2.26)$$
for t ≥ 1.
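The decay (2.26) follows from the cosine-integral identity $\int_0^w \frac{\cos y - 1}{y}dy = \mathrm{Ci}(w) - C - \ln w$, with C ≈ 0.5772 the Euler constant and Ci(w) → 0 at large w, so that $e^{Z(t)} \approx e^{-\gamma_0 C}(2\omega_0 t)^{-\gamma_0}$. A numerical check (the value of v0 is an arbitrary illustration; only |v0| < 2π matters for stability):

```python
import numpy as np

EULER_C = 0.5772156649015329

def Z(t, gamma0, omega0):
    # Z(t) = gamma0 * Integral_0^1 dp/p (cos(2*omega0*p*t) - 1), eq. (2.25).
    # The integrand tends to 0 as p -> 0+, so a midpoint rule is safe.
    n = 200000
    p = (np.arange(n) + 0.5) / n
    return gamma0 * np.sum((np.cos(2.0 * omega0 * p * t) - 1.0) / p) / n

v0 = 2.0
gamma0 = v0 / (4 * np.pi)                        # gamma_0 = v0 / 4pi
omega0 = np.sqrt(1.0 - (v0 / (2 * np.pi)) ** 2)  # renormalized velocity

for t in (20.0, 80.0):
    w = 2.0 * omega0 * t
    leading = gamma0 * (-np.log(w) - EULER_C)    # large-t prediction for Z(t)
    assert abs(Z(t, gamma0, omega0) - leading) < 0.01
print("exp(Z(t)) decays as (2*omega0*t)^(-gamma0)")
```

This is the power-law suppression of the quasi-particle weight stated in (2.26).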
PROOF OF THEOREM 2.2
We consider first the term:
$$\langle 0|\,\tilde\psi^-_{1,x}\, e^{iHt}\,\tilde\psi^+_{1,z}\tilde\psi^-_{2,z}\, e^{-iHt}\,\tilde\psi^+_{2,x}\,|0\rangle, \qquad (3.27)$$
where for simplicity we momentarily drop the phase factors $e^{\pm ip_F x}$; these factors are very easy to restore. The rest of this subsection is devoted to the calculation of (3.27).
Let I be the identity operator on H. Using the fact that $e^{-i\varepsilon S}e^{i\varepsilon S} = I$ and $e^{-iHt}e^{iHs}|_{t=s} = I$, we can write (3.27) as
$$\langle 0|\,\psi^-_{1,x}\, e^{-i\varepsilon S}\big(e^{i\varepsilon S}e^{iHt}e^{-i\varepsilon S}\big)\big(e^{i\varepsilon S}\psi^+_{1,z}e^{-i\varepsilon S}\big)\big(e^{i\varepsilon S}\psi^-_{2,z}e^{-i\varepsilon S}\big)\big(e^{i\varepsilon S}e^{-iHs}e^{-i\varepsilon S}\big)e^{i\varepsilon S}\,\psi^+_{2,x}\,|0\rangle\Big|_{\varepsilon=1,\,s=t}. \qquad (3.28)$$
Lemma 3.1 LetÎ 1 be an operator valued function of ρ 1 (±p) and ψ ± 1 andÎ 2 be an operator valued function of ρ 2 (±p) and ψ ± 2 , then we have the following factorization Formula for (3.27):
$$G_1 = I_1 I_2, \qquad (3.29)$$
where $I_1 = \langle 0|\hat I_1|0\rangle$ and $I_2 = \langle 0|\hat I_2|0\rangle$.
Proof 3.1 We shall prove this lemma by deriving the explicit expressions of $\hat I_1$ and $\hat I_2$. Using the diagonalization formula (2.16), formula (3.28) can be written as:
$$\langle 0|\,\psi^-_{1,x}\, e^{-i\varepsilon S} e^{i(H_0+D)t} e^{i\varepsilon S}\,\psi^+_{1,z}\, e^{-i\varepsilon S} e^{-i(H_0+D)t} e^{-i\varepsilon S}\cdot e^{i\varepsilon S} e^{i(H_0+D)s} e^{iS}\,\psi^-_{2,z}\, e^{-iS} e^{-i(H_0+D)s} e^{i\varepsilon S}\,\psi^+_{2,x}\,|0\rangle\Big|_{\varepsilon=1,\ s=t}. \qquad (3.30)$$
Now we consider the term $e^{i\varepsilon S}\psi^+_{1,z}e^{-i\varepsilon S}$. It is a well known result [34] that:
$$e^{i\varepsilon S}\psi^\mp_{1,z}e^{-i\varepsilon S} = \psi^\mp_{1,z}\, W^\pm_{1,z} R^\pm_{1,z}, \qquad (3.31)$$
where
$$W^\pm_{1,z} = \exp\Big\{\mp\frac{2\pi}{L}\sum_{p>0}\frac1p\big[\rho_1(p)e^{-ipz} - \rho_1(-p)e^{ipz}\big](\cosh\varepsilon\phi - 1)\Big\},$$
$$R^\pm_{1,z} = \exp\Big\{\pm\frac{2\pi}{L}\sum_{p>0}\frac1p\big[\rho_2(p)e^{-ipz} - \rho_2(-p)e^{ipz}\big]\sinh\varepsilon\phi\Big\}. \qquad (3.32)$$
Similarly one has
$$e^{i\varepsilon S}\psi^\mp_{2,z}e^{-i\varepsilon S} = \psi^\mp_{2,z}\, W^\pm_{2,z} R^\pm_{2,z}, \qquad (3.33)$$
where
$$W^\pm_{2,z} = \exp\Big\{\mp\frac{2\pi}{L}\sum_{p>0}\frac1p\big[\rho_1(p)e^{-ipz} - \rho_1(-p)e^{ipz}\big]\sinh\varepsilon\phi\Big\},$$
$$R^\pm_{2,z} = \exp\Big\{\pm\frac{2\pi}{L}\sum_{p>0}\frac1p\big[\rho_2(p)e^{-ipz} - \rho_2(-p)e^{ipz}\big](\cosh\varepsilon\phi - 1)\Big\}. \qquad (3.34)$$
Then we consider the term
$$e^{-i\varepsilon S}\, e^{i(H_0+D)t}\, W^-_{1,z} R^-_{1,z}\, e^{-i(H_0+D)t}\, e^{i\varepsilon S}, \qquad (3.35)$$
which, after inserting the identity operators $I = e^{i\varepsilon S}e^{-i\varepsilon S}$ and $I = e^{-i(H_0+D)t}e^{i(H_0+D)t}$, is equal to
$$\big[e^{-i\varepsilon S} e^{i(H_0+D)t}\, W^-_{1,z}\, e^{-i(H_0+D)t} e^{i\varepsilon S}\big]\cdot\big[e^{-i\varepsilon S} e^{i(H_0+D)t}\, R^-_{1,z}\, e^{-i(H_0+D)t} e^{i\varepsilon S}\big]. \qquad (3.36)$$
Combining the above formula with (2.13) and (2.17), we find that (3.36) can be written as a product of
$$e^{-i\varepsilon S} e^{i(H_0+D)t}\, W^\pm_{1,z}\, e^{-i(H_0+D)t} e^{i\varepsilon S} = \exp\Big\{\pm\frac{2\pi}{L}\sum_p\frac{\cosh\phi - 1}{p}\big[(\rho_1(-p)\cosh\varepsilon\phi - \rho_2(-p)\sinh\varepsilon\phi)e^{ipx - ipt(\sigma+1)} - (\rho_1(p)\cosh\varepsilon\phi - \rho_2(p)\sinh\varepsilon\phi)e^{-ipx + ipt(\sigma+1)}\big]\Big\} =: \widetilde W^\pm_{1,z} \qquad (3.39)$$
and
$$e^{-i\varepsilon S} e^{i(H_0+D)t}\, R^\pm_{1,z}\, e^{-i(H_0+D)t} e^{i\varepsilon S} = \exp\Big\{\pm\frac{2\pi}{L}\sum_p\frac{\sinh\phi}{p}\big[(\rho_2(-p)\cosh\varepsilon\phi - \rho_1(-p)\sinh\varepsilon\phi)e^{ipy + ips(\sigma+1)} - (\rho_2(p)\cosh\varepsilon\phi - \rho_1(p)\sinh\varepsilon\phi)e^{-ipy - ips(\sigma+1)}\big]\Big\} =: \widetilde R^\pm_{1,z}. \qquad (3.40)$$
Using again (3.31), (3.33) and (3.46), we have:
$$e^{-i\varepsilon S} e^{i(H_0+D)t} e^{iS}\,\psi^+_{1,z}\, e^{-iS} e^{-i(H_0+D)t} e^{i\varepsilon S} = z_a\, A_{1+} A_{1-} A_{2+} A_{2-}\,\psi^+_{1,z,t,\delta}\,\widetilde W^{-1}_{1t}\widetilde R^{-1}_{1t}\, W^{-1}_{1t\varepsilon} R^{-1}_{1t\varepsilon}\,\widehat W^{-1}_{1t\varepsilon}\widehat R^{-1}_{1t\varepsilon}, \qquad (3.41)$$
and
$$e^{-iS} e^{iS} e^{iHs} e^{-iS}\, e^{iS}\psi^-_{2,z} e^{-iS}\, e^{iS} e^{-iHs} e^{-iS} e^{iS} = z_b\,\widetilde W_{2s\varepsilon}\widetilde R_{2s\varepsilon}\widehat W_{2s\varepsilon}\widehat R_{2s\varepsilon}\widetilde W_2\widetilde R_2\,\psi_{2,z,s,\delta}\, B_{1-} B_{1+} B_{2-} B_{2+}, \qquad (3.42)$$
where $\widetilde W^{-1}_{1,2,t,\varepsilon}, \widetilde R^{-1}_{1,2,t,\varepsilon}$ and $\widehat W^{-1}_{1,2,t,\varepsilon}, \widehat R^{-1}_{1,2,t,\varepsilon}$ are operators depending on $\rho_{1,2}(\pm p)$, respectively, and $z_a, z_b$ are functions of p. The explicit expressions of the above factors are given in the Appendix.
Then one easily finds that the terms depending on $\rho_1(\pm p)$ and $\psi^\pm_1$ factorize with respect to the terms depending on $\rho_2(\pm p)$ and $\psi^\pm_2$. Let
$$I_1 := \langle 0|\hat I_1|0\rangle := \langle 0|\,\psi_{1,x}\, A_{1+}A_{1-}\,\psi^+_{1,z,t}\,\widetilde W^{-1}_1\widetilde W^{-1}_{1t}\widehat W^{-1}_{1t}\,\widetilde W_{2t}\widetilde W_{2t}\widehat W_{2t}\, B_{1+}B_{1-}\,|0\rangle, \qquad (3.43)$$
and
$$I_2 := \langle 0|\hat I_2|0\rangle := \langle 0|\, A_{2+}A_{2-}\,\widetilde R^{-1}_1\widetilde R^{-1}_1\widehat R^{-1}_1\,\widetilde R_2\widetilde R_2\widehat R_2\,\psi_{2,z,t}\, B_{2+}B_{2-}\,\psi^\dagger_{2,x}\,|0\rangle, \qquad (3.44)$$
and using the fact that $z_a = z_b^{-1}$ we have
$$G_1 = I_1 I_2, \qquad (3.45)$$
which proves Lemma 3.1.
A. Calculation of I 1 and I 2
In this part we derive the explicit expressions for I1 and I2. It is useful to introduce the following proposition, which can be easily proved using (2.10):
Proposition 3.1 Let f(p,t) be an arbitrary regular function. Then we have:
$$e^{iH_0t}\, e^{f(p,t)\rho_\omega(\pm p)}\, e^{-iH_0t} = e^{f(p,t)\,e^{\pm\varepsilon_\omega i(\sigma+1)pt}\,\rho_\omega(\pm p)}, \qquad \omega = 1,2;\ \varepsilon_1 = +,\ \varepsilon_2 = -. \qquad (3.46)$$
The basic idea for calculating I1 and I2 is to use the Hausdorff formula repeatedly to move the operators ρ1(−p) and ρ2(p) to the rightmost position in the expressions (3.43) and (3.44), and to move ρ1(p) and ρ2(−p) to the leftmost position. By Formula (2.9) and its adjoint, these operators annihilate |0⟩ and ⟨0|, respectively; the surviving terms are those independent of ρ_{1,2}(±p). Setting ε = 1, we have:
$$I_1 = \exp\Big\{\frac{2\pi}{L}\sum_p\frac1p\Big[(e^{-ip(\sigma+1)(t+s)} - 1)(2\cosh^2\phi\sinh^2\phi + \cosh^3\phi\sinh\phi) + (e^{ip(\sigma+1)(t+s)} - 1)\cosh\phi\sinh^3\phi + e^{-ip\sigma t}(-\cosh^2\phi - \sinh^2\phi) + e^{ip(x-z)+ip(\sigma+1)s}(\cosh\phi\sinh\phi + \cosh^2\phi) - e^{ip(x-z)+ips} + e^{ip(x-z)-ip(\sigma+1)t}(-\sinh\phi - \sinh^2\phi)\Big]\Big\}\;\langle 0|\psi_{1,x}\psi^+_{1,z,t,\delta}|0\rangle, \qquad (3.47)$$
and
$$I_2 = \langle 0|\psi^+_{2,z,t,\delta}\psi_{2,x}|0\rangle\;\exp\Big\{\frac{2\pi}{L}\sum_p\frac1p\Big[(e^{-ip(\sigma+1)(t+s)} - 1)\cosh\phi\sinh^3\phi + (e^{ip(\sigma+1)(t+s)} - 1)(\cosh^3\phi\sinh\phi + 2\cosh^2\phi\sinh^2\phi) + e^{-ip\sigma t}(\cosh^2\phi + \sinh^2\phi) - e^{ip(x-z)-ipt} + e^{ip(x-z)+ip(\sigma+1)s}(-\cosh\phi\sinh\phi - \sinh^2\phi) + e^{ip(x-z)-ip(\sigma+1)t}(\sinh\phi + \cosh^2\phi)\Big]\Big\}. \qquad (3.48)$$
Combining (3.47) with (3.48) and setting s = t, we get:
$$\langle 0|\,\tilde\psi^-_{1,x}\, e^{iHt}\,\tilde\psi^+_{1,z}\tilde\psi^-_{2,z}\, e^{-iHt}\,\tilde\psi^+_{2,x}\,|0\rangle \qquad (3.49)$$
$$= \langle 0|\psi_{1,x}\psi^+_{1,z,t,\delta}|0\rangle\,\langle 0|\psi^+_{2,z,t,\delta}\psi_{2,x}|0\rangle \times \exp\Big\{\sum_p\frac1p\Big[(e^{ip(x-z)+ip(\sigma+1)t} - e^{ip(x-z)+ipt}) + (e^{ip(x-z)-ip(\sigma+1)t} - e^{ip(x-z)-ipt}) + 2\sinh\phi\cosh\phi(\sinh\phi + \cosh\phi)^2(\cos 2p(\sigma+1)t - 1)\Big]\Big\}.$$
It is useful to derive the asymptotic behavior of the second line in (3.49), and we have:
$$\lim_{\delta\to0}\lim_{L\to\infty}\langle 0|\psi_{1,x}\psi^+_{1,z,t,\delta}|0\rangle\,\langle 0|\psi^+_{2,z,t,\delta}\psi_{2,x}|0\rangle = \frac{1}{4\pi^2}\,\frac{1}{(x-z)^2 - t^2}. \qquad (3.50)$$
With the same method we can derive the explicit expressions for the other terms in (2.22). Restoring the phase factors $e^{\pm ip_F(x-z)}$ and combining all the terms of (2.22), we obtain the desired result:
$$\langle I_{\lambda,t}|\,n(z)\,|I_{\lambda,t}\rangle = \frac{1}{4\pi^2}\Big[\frac{1}{((x-z)-t)^2} + \frac{1}{((x-z)+t)^2}\Big] \qquad (3.51)$$
$$+ \frac{1}{4\pi^2}\,\frac{e^{Z(t)}}{(x-z)^2 - t^2}\Big[e^{2ip_F(x-z)}e^{Q_a(x,z,t)} + e^{-2ip_F(x-z)}e^{Q_b(x,z,t)}\Big],$$
where
$$Z(t) = \sum_p\frac2p\,\sinh\phi\cosh\phi(\sinh\phi + \cosh\phi)^2(\cos 2p(\sigma+1)t - 1), \qquad (3.52)$$
$$Q_a = \sum_p\frac1p\big[(e^{ip(x-z)+ip(\sigma_p+1)t} - e^{ip(x-z)+ipt}) + (e^{ip(x-z)-ip(\sigma_p+1)t} - e^{ip(x-z)-ipt})\big],$$
$$Q_b = \sum_p\frac1p\big[(e^{-ip(x-z)+ip(\sigma_p+1)t} - e^{-ip(x-z)+ipt}) + (e^{-ip(x-z)-ip(\sigma_p+1)t} - e^{-ip(x-z)-ipt})\big]. \qquad (3.53)$$
Using the definitions of the hyperbolic functions we find that
$$\sinh\phi = \frac12\Big[\sqrt{\frac{1+v(p)/4\pi}{1+v(p)/2\pi}} - 1\Big], \qquad \cosh\phi = \frac12\Big[\sqrt{\frac{1+v(p)/4\pi}{1+v(p)/2\pi}} + 1\Big], \qquad (3.54)$$
where v(p) is the box potential with strength v0 (see Formula (2.3)). We then have the following expression for the critical exponent:
$$\gamma(p) = 2\sinh\phi(p)\cosh\phi(p)\big(\sinh\phi(p) + \cosh\phi(p)\big)^2 = \frac{v(p)}{4\pi}. \qquad (3.55)$$
Taking the limit L → ∞ means replacing the discrete sum over p by an integral over a continuous variable. We have:
$$Z(t) = \int_0^\infty\frac{\gamma(p)\,dp}{p}\big(\cos 2\omega_0 p t - 1\big) = \gamma_0\int_0^1\frac{dp}{p}\big(\cos 2\omega_0 p t - 1\big), \qquad (3.56)$$
where $\gamma_0 := \frac{v_0}{4\pi}$ and $\omega_0 := \sqrt{1 - (v_0/2\pi)^2}$; the second equality holds because γ(p) = 0 for p ∈ (1, ∞). Letting y = 2ω0pt and w = 2ω0t, Z(t) can be written as $Z(t) = \gamma_0\int_0^w\frac{dy}{y}(\cos y - 1)$. There are three cases to be considered, depending on the range of t:
• when t ≪ 1, corresponding to the short time behavior, we have y ≪ 1 and w ≪ 1 (recall that v(p) vanishes for p > 1). In this case we have
$$Z(t) = \gamma_0\int_0^w\frac{dy}{y}(\cos y - 1) \sim \gamma_0\int_0^w dy\Big(-\frac y2 + O(y^3)\Big) \ll 1. \qquad (3.58)$$
So Z(t) is well defined for y ≪ 1; furthermore it vanishes as y → 0⁺, and we have $e^{Z(t)}|_{t\to0^+}\to 1$.
• when t ∈ (0, 1], we can repeat the analysis above and easily prove that Z(t) is a bounded function.
• when t ∈ [1, ∞): let p0 > 0 be the minimal value of p and u = 2ω0p0t; we have
$$Z(t) = \gamma_0\Big[\int_0^u\frac{\cos y - 1}{y}dy + \int_u^w\frac{\cos y}{y}dy - (\ln 2\omega_0 t - \ln u)\Big] = \gamma_0\big(-\ln 2\omega_0 t - C + o(1)\big), \qquad (3.59)$$
where C = 0.577215··· is the Euler constant and $\int_0^u\frac{\cos y - 1}{y}dy$ is a bounded function. Remark that (3.59) is well defined for u → 0, due to the cancellation of ln u.
So we have
$$e^{Z(t)} \sim \mathrm{cst}\cdot\Big[\frac{1}{2\omega_0 t}\Big]^{\gamma_0}, \qquad \text{for } t \ge 1. \qquad (3.61)$$
Now we derive the asymptotic formulas for Qa and Qb. Replacing the discrete sums over p in (3.53) by integrals and performing the integrations, we easily find that:
$$Q_a = Q_b = \ln\frac{(x-z)^2 - t^2}{(x-z)^2 - \omega_0^2 t^2}. \qquad (3.62)$$
Collecting all the above terms we have:
$$\lim_{L\to\infty}\langle I_{\lambda,t}|\,n(z)\,|I_{\lambda,t}\rangle = \frac{1}{4\pi^2}\Big[\frac{1}{((x-z)-t)^2} + \frac{1}{((x-z)+t)^2}\Big] + \frac{1}{2\pi^2}\cos 2p_F(x-z)\,\frac{e^{Z(t)}}{(x-z)^2 - (\omega_0 t)^2}, \qquad (3.63)$$
which proves Theorem 2.2.

PROOF OF THEOREM 2.2 WITH THE BOSONIZATION METHOD

While the Mattis-Lieb method for solving the Luttinger model is mathematically rigorous, it is technically quite involved. Another very popular method for studying one-dimensional interacting fermion models, called Bosonization, states that certain two-dimensional models of fermions are equivalent to corresponding bosonic models: the fermionic and bosonic Hilbert spaces are isomorphic, and the fermionic operators can be expressed in terms of the bosonic operators. While the Bosonization method can significantly reduce the difficulty of the calculation, it has the reputation of not being mathematically rigorous. A rigorous proof of the Bosonization formulas was given very recently in a paper by Langmann and Moosavi [45]. In this section we shall prove Theorem 2.2 with the exact Bosonization formulas in [45]. This can be considered as a verification of the use of the Bosonization formulas in the non-equilibrium setting.
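The velocity renormalization in Theorem 2.2 can be visualized the same way as in the free case: smoothing the oscillating-part denominator of (2.24) (the smoothing δ and the value of v0 below are purely illustrative), its envelope peaks at z = x ± ω0t, strictly inside the free light cone, since ω0 < 1 for v0 ≠ 0.

```python
import numpy as np

x, t, delta = 0.0, 8.0, 0.05
v0 = 3.0                                         # illustrative interaction strength
omega0 = np.sqrt(1.0 - (v0 / (2 * np.pi)) ** 2)  # renormalized velocity of Theorem 2.2
z = np.linspace(0.01, 12.0, 6000)                # right-moving side only

def envelope(vel):
    # smoothed |1 / ((x - z)^2 - (vel * t)^2)|
    return 1.0 / np.sqrt(((x - z) ** 2 - (vel * t) ** 2) ** 2 + delta ** 2)

free_peak = z[np.argmax(envelope(1.0))]          # non-oscillating part: velocity 1
int_peak = z[np.argmax(envelope(omega0))]        # oscillating part: velocity omega0

assert abs(free_peak - t) < 0.05
assert abs(int_peak - omega0 * t) < 0.05
assert int_peak < free_peak                      # interaction slows the oscillating front
print("oscillating peak at omega0 * t, inside the free light cone")
```

The two fronts travel at different speeds, which is the qualitative signature distinguishing the interacting dynamics from the free one.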
First of all we shall derive Formula (3.51). Following the notations in [45], we have:

Proposition 4.1 Let ρω be the bosonic operators introduced before and let $R^{\varepsilon_\omega}_\omega$ be the Klein factor; then we can express the fermionic operators ψ⁻ in terms of the bosonic operators and the Klein factor as follows:
$$\psi^-_\omega(x,\delta) = N_\delta\, e^{i\pi\varepsilon_\omega x Q_\omega/L}\, R^{-\varepsilon_\omega}_\omega\, e^{i\pi\varepsilon_\omega x Q_\omega/L} \times \exp\Big\{\varepsilon_\omega\sum_{p>0}\frac{2\pi}{Lp}\big[\rho_\omega(p)e^{-ipx-\delta|p|} - \rho_\omega(-p)e^{ipx-\delta|p|}\big]\Big\}, \qquad (4.64)$$
where ω, ω′ = 1, 2, ε1 = +, ε2 = −, Qω = ρω(0) and $N_\delta = \frac{1}{[L(1-e^{-2\pi\delta/L})]^{1/2}}$ is the normalization factor. $R^\pm_\omega$ is the Klein factor, with $R^-_\omega = (R^+_\omega)^\dagger$. These operators obey the following commutation relations (see [45] for the detailed derivation):
$$[\rho_\omega(p), R_{\omega'}] = \varepsilon_\omega\,\delta_{\omega,\omega'}\,\delta_{p,0}\,R_\omega, \qquad [H_0, R_\omega] = \frac{\varepsilon_\omega\pi}{L}\{\rho_\omega(0), R_\omega\}, \qquad (4.65)$$
$$\langle 0|R^{q_1}_\omega R^{q_2}_{\omega'}|0\rangle = \delta_{\omega,\omega'}\delta_{q_1,0}\delta_{q_2,0}, \qquad R^{q_1}_1 R^{q_2}_2 = (-1)^{q_1 q_2} R^{q_2}_2 R^{q_1}_1, \qquad [Q_\omega, R^{q_1}_1 R^{q_2}_2] = q_\omega R^{q_1}_1 R^{q_2}_2, \quad q_\omega\in\mathbb{Z}.$$
We shall not repeat the proof here and the interested reader is invited to look at [45] for details.
Let $\hat Z^-_\omega = e^{i\pi\varepsilon_\omega x Q_\omega/L}\, R^{-\varepsilon_\omega}_\omega\, e^{i\pi\varepsilon_\omega x Q_\omega/L}$ and $\hat Z^+_\omega$ be its adjoint; we can write the fermionic operators as:
$$\psi^\pm_\omega(x,\delta) = N_\delta\,\hat Z^\pm_\omega\, e^{\mp\varepsilon_\omega\sum_{p>0}\frac{2\pi}{Lp}[\rho_\omega(p)e^{-ipx-\delta|p|} - \rho_\omega(-p)e^{ipx-\delta|p|}]}. \qquad (4.66)$$
We calculate first the term $\langle 0|\tilde\psi^-_{1,x}e^{iHt}\tilde\psi^+_{1,z}\tilde\psi^-_{2,z}e^{-iHt}\tilde\psi^+_{2,x}|0\rangle$ in (2.22), dropping the phase factor $e^{ip_F(x-z)}$ for the moment. Inserting the identity operators $I = e^{iHt}e^{-iHt}$ and $I = e^{iS}e^{-iS}$, we derive Formula (3.28), which is the starting point of our analysis.
First of all, it is easy to find that
$$e^{iS}\hat Z^\pm_\omega\, e^{-iS} = \hat Z^\pm_\omega. \qquad (4.67)$$
Using the fact that
$$[H_0 + D,\, R^\pm_\omega] = \pm\frac{2\pi(\sigma(0)+1)}{L}\, R^\pm_\omega\big(2\varepsilon_\omega\rho_\omega(0) + 1\big), \qquad (4.69)$$
and
$$e^{i(H_0+D)t}\, R^\pm_\omega\, e^{-i(H_0+D)t} = R^\pm_\omega\,\exp\Big[\pm\frac{2\pi(\sigma(0)+1)}{L}\big(2\varepsilon_\omega\rho_\omega(0) + 1\big)t\Big], \qquad (4.70)$$
we have:
$$e^{-iS}e^{i(H_0+D)t}e^{iS}\,\psi^+_{1,z}\,e^{-iS}e^{-i(H_0+D)t}e^{iS} = N_\delta\,\hat Z_1(t)\,\exp\Big\{\frac{2\pi}{L}\sum_{p>0}\frac{e^{-\delta p}}{p}\big[A_1\rho_1(p) + A_{-1}\rho_1(-p) + A_2\rho_2(p) + A_{-2}\rho_2(-p)\big]\Big\}, \qquad (4.71)$$
where
$$A_{\pm1} = \pm e^{\mp ip[z+(\sigma+1)t]}\sinh^2\phi \mp e^{\mp ip[z-(\sigma+1)t]}\cosh^2\phi,$$
$$A_{\pm2} = \pm e^{\mp ip[z-(\sigma+1)t]}\sinh\phi\cosh\phi \mp e^{\mp ip[z+(\sigma+1)t]}\cosh\phi\sinh\phi,$$
$$B_{\pm1} = \pm e^{\mp ip[z+(\sigma+1)t]}\sinh\phi\cosh\phi \mp e^{\mp ip[z-(\sigma+1)t]}\cosh\phi\sinh\phi,$$
$$B_{\pm2} = \pm e^{\mp ip[z-(\sigma+1)t]}\sinh^2\phi \mp e^{\mp ip[z+(\sigma+1)t]}\cosh^2\phi,$$
$$\hat Z_1(t) = e^{i\pi x\rho_1(0)/L}\,\exp\Big[-\frac{2\pi(\sigma(0)+1)}{L}\big(2\rho_1(0)+1\big)t\Big]\, R^{-1}_1\, e^{i\pi x\rho_1(0)/L},$$
$$\hat Z_2(t) = e^{i\pi x\rho_2(0)/L}\,\exp\Big[\frac{2\pi(\sigma(0)+1)}{L}\big(-2\rho_2(0)+1\big)t\Big]\, R_2\, e^{i\pi x\rho_2(0)/L}. \qquad (4.73)$$
When p = 0, using the facts that $\rho_\omega(0)|0\rangle = 0$ and $\langle 0|R^{q_1}_\omega R^{q_2}_{\omega'}|0\rangle = \delta_{\omega,\omega'}\delta_{q_1,0}\delta_{q_2,0}$, we have
$$\langle 0|\,\hat Z_1\hat Z^+_1(t)\hat Z_2(t)\hat Z^\dagger_2\,|0\rangle = 1. \qquad (4.74)$$
So the nontrivial contributions come from the p > 0 part. Using the Hausdorff formula repeatedly, we can factorize the terms depending on ρ1(±p) from those depending on ρ2(±p):
$$\langle 0|\; N_\delta\exp\Big\{\frac{2\pi}{L}\sum_{p>0}\frac1p\big[e^{-\delta p}e^{-ipx}\rho_1(p) - e^{-\delta p}e^{ipx}\rho_1(-p)\big]\Big\} \times N_\delta\exp\Big\{\frac{2\pi}{L}\sum_{p>0}\frac{e^{-\delta p}}{p}\big[A_{+1}\rho_1(p) + A_{-1}\rho_1(-p) + A_{+2}\rho_2(p) + A_{-2}\rho_2(-p)\big]\Big\} \times N_\delta\exp\Big\{\frac{2\pi}{L}\sum_{p>0}\frac{e^{-\delta p}}{p}\big[B_{+1}\rho_1(p) + B_{-1}\rho_1(-p) + B_{+2}\rho_2(p) + B_{-2}\rho_2(-p)\big]\Big\} \times N_\delta\, e^{\frac{2\pi}{L}\sum_{p>0}\frac1p[e^{-\delta p}e^{ipx}\rho_2(p) - e^{-\delta p}e^{-ipx}\rho_2(-p)]}\;|0\rangle =: N^4_\delta\, I_1 I_2, \qquad (4.75)$$
where
$$I_1 = \langle 0|\, e^{\frac{2\pi}{L}\sum_{p>0}\frac{e^{-\delta p}}{p}[e^{-ipx}\rho_1(p) - e^{ipx}\rho_1(-p)]}\; e^{\frac{2\pi}{L}\sum_{p>0}\frac{e^{-\delta p}}{p}[A_{+1}\rho_1(p) + A_{-1}\rho_1(-p)]}\; e^{\frac{2\pi}{L}\sum_{p>0}\frac{e^{-\delta p}}{p}[B_{+1}\rho_1(p) + B_{-1}\rho_1(-p)]}\,|0\rangle, \qquad (4.76)$$
$$I_2 = \langle 0|\, e^{\frac{2\pi}{L}\sum_{p>0}\frac{e^{-\delta p}}{p}[A_{+2}\rho_2(p) + A_{-2}\rho_2(-p)]}\; e^{\frac{2\pi}{L}\sum_{p>0}\frac{e^{-\delta p}}{p}[B_{+2}\rho_2(p) + B_{-2}\rho_2(-p)]}\; e^{\frac{2\pi}{L}\sum_{p>0}\frac{e^{-\delta p}}{p}[e^{-ipx}\rho_2(p) - e^{ipx}\rho_2(-p)]}\,|0\rangle. \qquad (4.77)$$
Following exactly the same procedure as in Section III A, namely using repeatedly the Hausdorff formula and the annihilation formulas, we have:
$$I_1 I_2 = \exp\Big\{\frac{2\pi}{L}\sum_{p>0}\frac{e^{-2\delta p}}{p}\big[(e^{ip(x-z)+ip(\sigma+1)t} - 1) + (e^{ip(x-z)-ip(\sigma+1)t} - 1) + 2\sinh\phi\cosh\phi(\sinh\phi + \cosh\phi)^2(\cos 2p(\sigma+1)t - 1)\big]\Big\}. \qquad (4.78)$$
In order to reproduce the expressions in (3.51) we need to extract from the above formula the noninteracting 2-point correlation function (see [45]), as follows. We write the terms $e^{ip(x-z)\pm ip(\sigma+1)t} - 1$ in the above formula as
$$\big(e^{ip(x-z)\pm ip(\sigma+1)t} - e^{ip(x-z)\pm ipt}\big) + \big(e^{ip(x-z)\pm ipt} - 1\big);$$
the first term gives the factors Q, while the second term contributes to the non-interacting correlation function:
$$N^4_\delta\exp\Big\{\frac{2\pi}{L}\sum_{p>0}\frac{e^{-2\delta p}}{p}\big[(e^{ip(x-z)+ipt} - 1) + (e^{-ip(x-z)+ipt} - 1)\big]\Big\}. \qquad (4.79)$$
Here
$$z_b = \exp\Big\{\frac{2\pi}{L}\sum_p\frac1p\big(1 - e^{-ip\sigma t}\big)\Big\} = z_a^{-1}, \qquad (5.76)$$
$$A_{2\pm} = \exp\Big\{\pm\frac{2\pi}{L}\sum_p\frac1p\,\rho_2(\pm p)\sinh\varepsilon\phi\,\big(e^{\mp ipx\pm ipt} - e^{\mp ipx\pm ipt(\sigma+1)}\big)\Big\},$$
$$\widetilde W_{2t\varepsilon} = \exp\Big\{-\frac{2\pi}{L}\sum_p\frac1p(\cosh\phi - 1)\sinh\varepsilon\phi\,\big[\rho_1(p)e^{-ipz-ipt(\sigma+1)} - \rho_1(-p)e^{ipz+ipt(\sigma+1)}\big]\Big\},$$
$$\widetilde R_{2t\varepsilon} = \exp\Big\{-\frac{2\pi}{L}\sum_p\frac{\sinh\phi\sinh\varepsilon\phi}{p}\big[\rho_2(p)e^{-ipz+ipt(\sigma+1)} - \rho_2(-p)e^{ipz-ipt(\sigma+1)}\big]\Big\},$$
$$\widehat R_{2t\varepsilon} = \exp\Big\{\frac{2\pi}{L}\sum_p\frac1p(\cosh\phi - 1)\cosh\varepsilon\phi\,\big[\rho_2(p)e^{-ipz-ipt(\sigma+1)} - \rho_2(-p)e^{ipz+ipt(\sigma+1)}\big]\Big\}.$$
Let f(p,t) be an arbitrary regular function, and define σ(p) = sech 2φ − 1 and ω(p) = σ(p) + 1 = sech 2φ. We have the commutation relation
$$[H_0 + D,\, \rho_\omega(\pm p)] = \pm\varepsilon_\omega\, p\,(\sigma(p)+1)\,\rho_\omega(\pm p), \qquad \omega = 1,2,\ \varepsilon_1 = +,\ \varepsilon_2 = -, \qquad (3.37)$$
which implies that
$$e^{i(H_0+D)t}\, e^{f(p,t)\rho_\omega(\pm p)}\, e^{-i(H_0+D)t} = e^{e^{\pm\varepsilon_\omega i(\sigma+1)pt} f(p,t)\,\rho_\omega(\pm p)}. \qquad (3.38)$$
B. The asymptotic behavior for L → ∞. In this section we derive the asymptotic behavior of Formula (3.51) in the limit L → ∞, using the expressions (3.54) for sinh φ(p) and cosh φ(p).
− e −δp e −ipz [cosh φρ 1 (p) + sinh φρ 2 (p)] +e −δp e ipz [cosh φρ 1 (−p) + sinh φρ 2 (−p)] , e −δp e −ipz [cosh φρ 2 (p) + sinh φρ 1 (p)] +e −δp e ipz [cosh φρ 2 (−p) + sinh φρ 1 (−p)] .(4.68)
and
$$e^{-iS}e^{i(H_0+D)t}e^{iS}\,\psi^-_{2,z}\,e^{-iS}e^{-i(H_0+D)t}e^{iS} = N_\delta\,\hat Z_2(t)\,\exp\Big\{\frac{2\pi}{L}\sum_{p>0}\frac{e^{-\delta p}}{p}\big[B_1\rho_1(p) + B_{-1}\rho_1(-p) + B_2\rho_2(p) + B_{-2}\rho_2(-p)\big]\Big\}, \qquad (4.72)$$
Now we derive the asymptotic formula for (4.79). Using the Poisson summation formula, in the limit L → ∞ and δ → 0 this expression reduces to $\frac{1}{4\pi^2}\,\frac{1}{(x-z)^2 - t^2}$.
By the same procedure we can calculate all the other terms in (2.22) and derive Formula (3.51). The asymptotic expressions for the terms in the exponential can be derived with the same procedure as in the last section, and we shall not repeat it here. So we have proved Theorem 2.2 with the exact Bosonization formulas.
Appendix: expressions of the factors in Formulas (3.43) and (3.44). With some very long but elementary calculation, we find that the expressions of the terms in Formulas (3.43) and (3.44) read: (±p) cosh εφ(∓e^{∓ipx±ipt} ± e^{∓ipx±ipt(σ+1)})
(±p) sinh εφ(∓e ∓ipz∓ipt ± e ∓ipz∓ipt(σ+1) ) (±p) cosh εφ(±e ∓ipz∓ipt(σ+1) ∓ e ∓ipz∓ipt ) .cosh εφ − 1)e −ipz+ipt ρ 1 (p) − (cosh εφ − 1)e ipz−ipt ρ 1 (−p) ] εφe −ipz+ipt ρ 2 (p) − sinh εφe ipz−ipt ρ 2 (−p) ]. cosh φ − 1) cosh εφe −ipz+ip(σ+1)t ρ 1 (p) (5.86) − (cosh φ − 1) cosh εφe ipz−ip(σ+1)t ρ 1 (−p) ] cosh φ − 1) sinh εφe −ipz+ip(σ+1)t ρ 2 (p) − (cosh φ − 1) sinh εφe ipz−ip(σ+1)t ρ 2 (−p) ] 1 (p) sinh εφe −ipx−ipt(σ+1) − ρ 1 (−p) sinh εφe ipx+ipt(σ+1) ], ρ 2 (−p) cosh εφe ipx+ipt(σ+1) − ρ 2 (p) cosh εφe −ipx−ipt(σ+1) ] . εφ − 1)(e −ipz−ipt ρ 2 (p) − e ipz+ipt ρ 2 (−p)). sinh φ cosh εφ [e −ipz+ip(σ+1)t ρ 1 (p) (5.88) − e ipz−ip(σ+1)t ρ 1 (−p) ],
[1] I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008).
[2] A. Polkovnikov, K. Sengupta, A. Silva, and M. Vengalattore, Rev. Mod. Phys. 83, 863 (2011).
[3] M. Rigol, V. Dunjko, and M. Olshanii, Nature 452, 854 (2008).
[4] N. Nessi, A. Iucci, and M. A. Cazalilla, Phys. Rev. Lett. 113, 210402 (2014).
[5] T. Antal, Z. Rácz, and G. M. Schütz, Phys. Rev. E 59, 4912 (1999).
[6] M. Rigol and A. Muramatsu, Phys. Rev. Lett. 93, 230404 (2004).
[7] M. Rigol and A. Muramatsu, Phys. Rev. Lett. 94, 240403 (2005).
[8] M. Rigol, A. Muramatsu, and M. Olshanii, Phys. Rev. A 74, 053616 (2006).
[9] M. Cazalilla, Phys. Rev. Lett. 97, 156403 (2006).
[10] A. Iucci and M. Cazalilla, Phys. Rev. A 80, 063619 (2009).
[11] S. R. Manmana, S. Wessel, R. M. Noack, and A. Muramatsu, Phys. Rev. Lett. 98, 210405 (2007).
[12] P. Calabrese and J. Cardy, Phys. Rev. Lett. 96, 136801 (2006).
[13] S. Langer, F. Heidrich-Meisner, J. Gemmer, I. P. McCulloch, and U. Schollwöck, Phys. Rev. B 79, 214409 (2009).
[14] D. Fioretto and G. Mussardo, New J. Phys. 12, 055015 (2010).
[15] J. Lancaster and A. Mitra, Phys. Rev. E 81, 061134 (2010).
[16] A. Mitra and T. Giamarchi, Phys. Rev. Lett. 107, 150602 (2011).
[17] J. Lancaster, T. Giamarchi, and A. Mitra, Phys. Rev. B 84, 075143 (2011).
[18] V. Mastropietro and Z. Wang, Phys. Rev. B 91, 085123 (2015).
[19] B. Dóra, M. Haque, and G. Zaránd, Phys. Rev. Lett. 106, 156406 (2011).
[20] J. Rentrop, D. Schuricht, and V. Meden, New J. Phys. 14, 075001 (2012).
[21] C. Karrasch, J. Rentrop, D. Schuricht, and V. Meden, Phys. Rev. Lett. 109, 126406 (2012).
[22] S. A. Hamerla and G. S. Uhrig, New J. Phys. 15, 073012 (2013).
[23] D. M. Kennes and V. Meden, Phys. Rev. B 88, 165131 (2013).
[24] T. Sabetta and G. Misguich, Phys. Rev. B 88, 245114 (2013).
[25] L. Bonnes, F. H. L. Essler, and A. M. Läuchli, Phys. Rev. Lett. 113, 187203 (2014).
[26] V. Alba and F. Heidrich-Meisner, Phys. Rev. B 90, 075144 (2014).
[27] R. Sachdeva, T. Nag, A. Agarwal, and A. Dutta, Phys. Rev. B 90, 045421 (2014).
[28] W. Liu and N. Andrei, Phys. Rev. Lett. 112, 257204 (2014).
[29] C. Karrasch, J. E. Moore, and F. Heidrich-Meisner, Phys. Rev. B 89, 075139 (2014).
[30] D. M. Kennes, C. Klöckner, and V. Meden, Phys. Rev. Lett. 113, 116401 (2014).
[31] F. D. M. Haldane, Phys. Rev. Lett. 45, 1358 (1980).
[32] J. M. Luttinger, J. Math. Phys. 4, 1154 (1963).
[33] S. Tomonaga, Prog. Theor. Phys. 5, 544 (1950).
[34] D. C. Mattis and E. H. Lieb, J. Math. Phys. 6, 304 (1965).
[35] V. Mastropietro, Il Nuovo Cimento 109, 1 (1994).
[36] V. Mastropietro and D. C. Mattis (Editors), Luttinger Model, World Scientific (2014).
[37] P. Calabrese and J. Cardy, J. Stat. Mech. P10004 (2007).
[38] M. Ganahl, E. Rabel, F. H. L. Essler, and H. G. Evertz, Phys. Rev. Lett. 108, 077206 (2012).
[39] G. Benfatto, P. Falco, and V. Mastropietro, Phys. Rev. Lett. 104, 075701 (2010).
[40] J. Sirker, R. G. Pereira, and I. Affleck, Phys. Rev. B 83, 035115 (2011).
[41] T. Prosen, Phys. Rev. Lett. 106, 217206 (2011).
[42] E. Langmann, J. L. Lebowitz, V. Mastropietro, and P. Moosavi, Comm. Math. Phys. 349, 551-582 (2017).
[43] V. Mastropietro, Phys. Rev. E 87, 042121 (2013).
[44] A. Theumann, J. Math. Phys. 8, 2460 (1967).
[45] E. Langmann and P. Moosavi, J. Math. Phys. 56, 091902 (2015).
| zyda_arxiv-0690000 |
A Sequential Quadratic Programming Method with High Probability Complexity Bounds for Nonlinear Equality Constrained Stochastic Optimization
January 3, 2023
Albert S Berahas
Miaolan Xie
Baoyu Zhou
A step-search sequential quadratic programming method is proposed for solving nonlinear equality constrained stochastic optimization problems. It is assumed that constraint function values and derivatives are available, but only stochastic approximations of the objective function and its associated derivatives can be computed via inexact probabilistic zeroth- and first-order oracles. Under reasonable assumptions, a high-probability bound on the iteration complexity of the algorithm to approximate first-order stationarity is derived. Numerical results on standard nonlinear optimization test problems illustrate the advantages and limitations of our proposed method.
Introduction
In this paper, we propose a step-search 1 sequential quadratic programming (SQP) algorithm for solving nonlinear equality-constrained stochastic optimization problems of the form

$$\min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad c(x) = 0, \qquad (1.1)$$

where $f : \mathbb{R}^n \to \mathbb{R}$ and $c : \mathbb{R}^n \to \mathbb{R}^m$ are both continuously differentiable. We consider the setting in which exact function and derivative information of the objective function is unavailable; instead, only random estimates of the objective function $\bar f(x; \Xi_0(x)) \approx f(x)$ and its first-order derivative $\bar g(x; \Xi_1(x)) \approx \nabla f(x)$ are available via inexact probabilistic oracles, where $\Xi_0(x)$ (with probability space $(\Omega_0, \mathcal{F}_{\Omega_0}, P_0)$) and $\Xi_1(x)$ (with probability space $(\Omega_1, \mathcal{F}_{\Omega_1}, P_1)$) denote the underlying randomness in the objective function and gradient estimates, respectively. On the other hand, the constraint function value $c(x)$ and its Jacobian $\nabla c(x)^T$ are assumed to be available. Such deterministically constrained stochastic optimization problems arise in multiple science and engineering applications, including but not limited to computer vision [37], multi-stage optimization [39], natural language processing [30], network optimization [9], and PDE-constrained optimization [35]. The majority of the methods proposed in the literature for solving deterministically equality-constrained stochastic optimization problems follow either projection or penalty approaches. The former type of methods (e.g., stochastic projection methods [21, 23-25]) require that the feasible region satisfy strict conditions, to ensure well-definedness, that are not satisfied by general nonlinear functions and thus are not readily applicable. In contrast, the latter, stochastic penalty methods [14,34], do not impose such conditions on the feasible region.
These methods transform constrained problems into unconstrained problems via a constraint penalization term in the objective function, and apply stochastic algorithms to solve the transformed unconstrained optimization problems. Stochastic penalty methods are easy to implement and well studied; however, the empirical performance of such methods is sensitive to parameter choices and ill-conditioning, and is usually inferior to paradigms that treat the constraints as constraints.
Recently, a class of stochastic SQP methods has been developed for solving (1.1). These methods outperform stochastic penalty methods empirically and have convergence guarantees in expectation [7,28]. In [7], the authors propose an objective-function-free stochastic SQP method with adaptive step sizes for the fully stochastic regime. In contrast, in [28], the authors propose a stochastic step-search (referred to as line search in [28]) SQP method for the setting in which the errors in the function and derivative approximations can be diminished. We note that several algorithmic choices in the two papers [7,28], e.g., merit functions and merit parameters, are different. Several other extensions have been proposed [3,6,8,17,27,32], and very few of these works (or others in the literature) derive worst-case iteration complexity (or sample complexity) bounds, due to the difficulties that arise because of the constrained setting and the stochasticity. Notable exceptions are [16], where the authors provide convergence rates (and complexity guarantees) for the algorithm proposed in [7], and [3,29], which provide complexity bounds for variants of stochastic SQP methods under additional assumptions and in the setting in which the errors can be diminished. We note that, with the exception of [32], all methods mentioned above assume access to unbiased estimates of the gradients (and function values where necessary), whereas in this paper, we propose an algorithm that can handle biased function and gradient estimates.
For all aforementioned methods, the most vital ingredient is the quality and reliability of the random estimates of the objective function and its derivatives. In our setting, neither the objective function nor its derivatives are assumed to be directly accessible; only stochastic approximations of them are accessible to the algorithm, in the form of inexact probabilistic zeroth-order and first-order oracles (precise definitions are introduced in Section 2.3). Such oracles have been proposed and utilized in several works; e.g., [1,12,20,22]. Moreover, these probabilistic oracles and their variants have been proposed for direct-search methods [20,36], trust-region methods [1,10,15,19], and step-search methods [2,13,28,33]. We note that only [28] considers the setting with (equality) constraints, but iteration complexity (or sample complexity) results are not provided.
Contributions
In this paper, we design, analyze, and implement a step-search SQP (SS-SQP) method for solving nonlinear equality-constrained stochastic optimization problems where exact constraint function values and derivatives are available, but only stochastic approximations of the objective function and its associated derivatives can be computed. These stochastic approximations are computed via inexact probabilistic zeroth- and first-order oracles, which are similar to those in [22], with parameters controlling the accuracy and reliability of the approximations, and allowing for biased approximations. Our proposed algorithm is inspired by state-of-the-art line search SQP methods [11] in conjunction with the recent stochastic adaptive step-search framework developed in [22] for the unconstrained stochastic setting. At every iteration, the algorithm constructs a model of the reduction in the merit function that serves the dual purpose of a measure of sufficient progress (part of the step size computation) and a proxy for convergence. To mitigate the challenges that arise due to the noise in the objective function evaluations, our step-search method employs a relaxed sufficient decrease condition similar to that proposed in [4]. Under reasonable assumptions, we provide a high-probability worst-case iteration complexity bound for the proposed algorithm. Specifically, we prove that with overwhelmingly high probability, our proposed algorithm generates a first-order $\epsilon$-stationary iterate in $O(\epsilon^{-2})$ iterations, where $\epsilon$ is bounded away from zero and its lower bound is dictated by the noise and bias in the zeroth- and first-order oracles. The complexity bound derived matches that of the deterministic algorithm provided in [16].
There are two key differences between our paper and [16]: (i) our algorithm requires access to the objective function whereas the method in [16] is objective-function-free; and (ii) our first-order oracle provides estimates with sufficient accuracy only with some probability and can provide arbitrarily bad estimates otherwise. Finally, numerical results on standard nonlinear equality-constrained test problems [18] illustrate the efficiency and efficacy of our proposed algorithm.
Notation
Let $\mathbb{R}$ denote the set of real numbers, $\mathbb{R}^n$ the set of $n$-dimensional real vectors, $\mathbb{R}^{m \times n}$ the set of $m$-by-$n$-dimensional real matrices, $\mathbb{N}$ the set of natural numbers, and $\mathbb{S}^n$ the set of $n$-by-$n$-dimensional real symmetric matrices. For any $a \in \mathbb{R}$, let $\mathbb{R}_{>a}$ ($\mathbb{R}_{\ge a}$) denote the set of real numbers strictly larger than (larger than or equal to) $a$. We use $\|\cdot\|$ to denote the $\ell_2$-norm. We use $k \in \mathbb{N}$ as the iteration counter of the algorithm and, for brevity, use a subscript $k$ to denote information at the $k$th iterate, e.g., $f_k := f(x_k)$. All quantities with over-bars are stochastic, e.g., $\bar f(x; \Xi_0(x))$ and $\bar g(x; \Xi_1(x))$ (see Section 2.3), and $\bar f(x; \xi_0(x))$ (resp. $\bar g(x; \xi_1(x))$) denotes a realization of $\bar f(x; \Xi_0(x))$ (resp. $\bar g(x; \Xi_1(x))$).
Organization
The rest of this paper is organized as follows. The algorithmic framework is introduced in Section 2. The analysis of the algorithm is established in Section 3. We report numerical results in Section 4. Concluding remarks and future research directions are given in Section 5.
Algorithm
To solve (1.1), we design an iterative algorithm based on the SQP paradigm that generates: (i) a primal iterate sequence $\{x_k\}$, (ii) a primal trial iterate sequence $\{x_k^+\}$, (iii) a primal search direction sequence $\{\bar d_k\}$, (iv) a dual iterate sequence $\{\bar y_k\}$, (v) a step size sequence $\{\alpha_k\}$, (vi) a merit parameter sequence $\{\bar\tau_k\}$, and (vii) a trial merit parameter sequence $\{\bar\tau_k^{\rm trial}\}$. We discuss each of these sequences below. We make the following assumption throughout the remainder of this paper.

Assumption 2.1. The objective function $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable and bounded below over $\mathcal{X}$. The objective gradient function $\nabla f : \mathbb{R}^n \to \mathbb{R}^n$ is $L$-Lipschitz continuous and bounded over $\mathcal{X}$. The constraint function $c : \mathbb{R}^n \to \mathbb{R}^m$ (where $m \le n$) is continuously differentiable and bounded over $\mathcal{X}$, and each gradient $\nabla c_i : \mathbb{R}^n \to \mathbb{R}^n$ is $\gamma_i$-Lipschitz continuous and bounded over $\mathcal{X}$ for all $i \in \{1, \ldots, m\}$. The singular values of $J := \nabla c^T$ are bounded away from zero over $\mathcal{X}$.

Assumption 2.1 is a standard assumption in the deterministic constrained optimization literature [31]. Under Assumption 2.1, there exist constants $\{\kappa_g, \kappa_c, \kappa_J, \kappa_\sigma\} \subset \mathbb{R}_{>0}$ and $f_{\inf} \in \mathbb{R}$ such that for all $k \in \mathbb{N}$,
$$f_{\inf} \le f_k, \quad \|\nabla f_k\| \le \kappa_g, \quad \|c_k\|_1 \le \kappa_c, \quad \|J_k\| \le \kappa_J, \quad \text{and} \quad \|(J_k J_k^T)^{-1}\| \le \kappa_\sigma.$$
We should note that by Assumption 2.1, linear independence constraint qualifications (LICQ) hold. Moreover, under Assumption 2.1, for all x ∈ R n , d ∈ R n and α ∈ R ≥0 it follows that
$$f(x + \alpha d) \le f(x) + \alpha \nabla f(x)^T d + \tfrac{L}{2} \alpha^2 \|d\|^2 \quad \text{and} \quad \|c(x + \alpha d)\|_1 \le \|c(x) + \alpha \nabla c(x)^T d\|_1 + \tfrac{\Gamma}{2} \alpha^2 \|d\|^2, \quad \text{where } \Gamma = \sum_{i=1}^m \gamma_i. \qquad (2.1)$$
In this paper, we are particularly interested in finding some primal-dual iterate (x, y) ∈ R n × R m that satisfies the first-order stationarity conditions of (1.1). To this end, let L : R n × R m → R be the Lagrangian of (1.1), defined as
$$L(x, y) = f(x) + y^T c(x), \qquad (2.2)$$
where y ∈ R m are the dual variables. The first-order stationarity conditions for (1.1), which are necessary by Assumption 2.1 (due to the inclusion of the LICQ), are
$$0 = \begin{bmatrix} \nabla_x L(x, y) \\ \nabla_y L(x, y) \end{bmatrix} = \begin{bmatrix} \nabla f(x) + \nabla c(x) y \\ c(x) \end{bmatrix}. \qquad (2.3)$$
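To make the stationarity conditions (2.3) concrete, the residual can be assembled numerically. The sketch below is illustrative only (the quadratic objective, affine constraint, and multiplier value are hypothetical choices, not from the paper):

```python
import numpy as np

def kkt_residual(grad_f, jac_c, c_val, y):
    # Stack [∇f(x) + ∇c(x) y ; c(x)] as in (2.3); jac_c holds ∇c(x)^T row-wise.
    return np.concatenate([grad_f + jac_c.T @ y, c_val])

# Hypothetical example: f(x) = 0.5*||x||^2, c(x) = x_1 + x_2 - 1.
x = np.array([0.5, 0.5])
grad_f = x                            # ∇f(x) = x
jac_c = np.array([[1.0, 1.0]])        # Jacobian ∇c(x)^T
c_val = np.array([x.sum() - 1.0])
y = np.array([-0.5])                  # multiplier that zeros the Lagrangian gradient
res = kkt_residual(grad_f, jac_c, c_val, y)
print(np.linalg.norm(res))            # 0.0: (x, y) is first-order stationary
```

At this pair, both the Lagrangian gradient and the constraint value vanish, so the residual norm is zero.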
In the remainder of this section we introduce the key algorithmic components: the merit function and its associated models, the search direction computation and merit parameter updating mechanism, and the inexact probabilistic zeroth-and first-order oracles. The main algorithm is Algorithm 1.
Merit function
The merit function $\phi : \mathbb{R}^n \times \mathbb{R}_{>0} \to \mathbb{R}$ is defined as

$$\phi(x, \tau) := \tau f(x) + \|c(x)\|_1, \qquad (2.4)$$

where $\tau \in \mathbb{R}_{>0}$, the merit parameter, acts as a balancing parameter between the objective function and the constraint violation. Given the gradient (approximation) $g \in \mathbb{R}^n$ and a search direction $d \in \mathbb{R}^n$, the model of the merit function $l : \mathbb{R}^n \times \mathbb{R}_{>0} \times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ is defined as

$$l(x, \tau, g, d) := \tau (f(x) + g^T d) + \|c(x) + \nabla c(x)^T d\|_1.$$

Given a search direction $d \in \mathbb{R}^n$ that satisfies linearized feasibility, i.e., $c(x) + \nabla c(x)^T d = 0$, the reduction in the model of the merit function $\Delta l : \mathbb{R}^n \times \mathbb{R}_{>0} \times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ is defined as

$$\Delta l(x, \tau, g, d) := l(x, \tau, g, 0) - l(x, \tau, g, d) = -\tau g^T d + \|c(x)\|_1 - \|c(x) + \nabla c(x)^T d\|_1 = -\tau g^T d + \|c(x)\|_1. \qquad (2.5)$$
We use the reduction in the model of the merit function (2.5) to monitor the progress made by our proposed algorithm. We discuss this in more detail in Section 2.2.
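As a quick illustration (a sketch with assumed toy data, not from the paper), the model reduction (2.5) can be computed directly once a direction satisfying linearized feasibility is available:

```python
import numpy as np

def model_reduction(tau, g, d, c, jac_c):
    # ∆l(x, τ, g, d) = -τ g^T d + ||c(x)||_1, valid when c(x) + ∇c(x)^T d = 0 (2.5).
    assert np.allclose(c + jac_c @ d, 0.0), "d must satisfy linearized feasibility"
    return -tau * g.dot(d) + np.linalg.norm(c, 1)

# Hypothetical data: jac_c @ d = -0.2 cancels c = 0.2, so linearized feasibility holds.
g = np.array([1.0, 0.0])
c = np.array([0.2])
jac_c = np.array([[1.0, 1.0]])
d = np.array([-0.3, 0.1])
print(model_reduction(0.5, g, d, c, jac_c))  # ≈ 0.35
```

Here the reduction is positive both because the direction is a descent direction for the (scaled) objective model and because the full linearized constraint violation is eliminated.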
Algorithmic components
We now establish how to: (i) compute the primal search direction sequence $\{\bar d_k\}$, (ii) update the merit parameter sequence $\{\bar\tau_k\}$, and (iii) update the primal iterate sequence $\{x_k\}$. These sequences depend on the sequence of objective gradient approximations $\{\bar g(x_k; \Xi_1(x_k))\}$. Let $\bar g(x_k; \xi_1(x_k))$ denote the realization of $\bar g(x_k; \Xi_1(x_k))$.
To simplify the notation, in this subsection we drop the dependence on the randomness, e.g., $\bar g_k = \bar g(x_k; \xi_1(x_k))$. At each iteration $k \in \mathbb{N}$, the primal search direction $\bar d_k \in \mathbb{R}^n$ and the dual variable $\bar y_k \in \mathbb{R}^m$ are computed by solving the linear system of equations

$$\begin{bmatrix} H_k & J_k^T \\ J_k & 0 \end{bmatrix} \begin{bmatrix} \bar d_k \\ \bar y_k \end{bmatrix} = - \begin{bmatrix} \bar g_k \\ c_k \end{bmatrix}, \qquad (2.6)$$
where {H k } satisfies the following assumption.
Assumption 2.2. For all $k \in \mathbb{N}$, $H_k \in \mathbb{S}^n$ is chosen independently from $\bar g_k$. Moreover, there exist constants $\{\kappa_H, \zeta\} \subset \mathbb{R}_{>0}$ such that for all $k \in \mathbb{N}$, $\|H_k\| \le \kappa_H$ and $u^T H_k u \ge \zeta \|u\|^2$ for any $u \in \operatorname{Null}(J_k)$.
It is well known that under Assumptions 2.1 and 2.2, there is a unique solution $(\bar d_k, \bar y_k)$ to (2.6), and thus the vectors $\bar d_k \in \mathbb{R}^n$ and $\bar y_k \in \mathbb{R}^m$ are well-defined [31].
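For intuition, (2.6) is a standard saddle-point (KKT) system and can be solved directly. The sketch below is illustrative (the instance data and the choice $H_k = I$ are assumptions, not the paper's); it uses a dense factorization, though structured solvers would be used at scale:

```python
import numpy as np

def sqp_direction(H, J, g, c):
    # Solve [[H, J^T], [J, 0]] [d; y] = -[g; c], i.e., system (2.6)/(3.1).
    n, m = H.shape[0], J.shape[0]
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    sol = np.linalg.solve(K, -np.concatenate([g, c]))
    return sol[:n], sol[n:]

# Hypothetical instance: H = I, one affine constraint.
H = np.eye(2)
J = np.array([[1.0, 1.0]])
g = np.array([1.0, 0.0])
c = np.array([0.2])
d, y = sqp_direction(H, J, g, c)
print(d, y)  # d ≈ [-0.6, 0.4], y ≈ [-0.4]
```

Note that the computed direction satisfies $J d = -c$ exactly, i.e., it eliminates the linearized constraint violation, which is what makes the simplified form of $\Delta l$ in (2.5) applicable.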
Next, we present the merit parameter updating mechanism. Given constants $\{\epsilon_\tau, \sigma\} \subset (0, 1)$, for all $k \in \mathbb{N}$, we compute $\bar\tau_k$ via

$$\bar\tau_k \leftarrow \begin{cases} \bar\tau_{k-1} & \text{if } \bar\tau_{k-1} \le \bar\tau_k^{\rm trial}; \\ \min\big\{(1 - \epsilon_\tau)\bar\tau_{k-1},\, \bar\tau_k^{\rm trial}\big\} & \text{otherwise}, \end{cases} \qquad (2.7)$$

where

$$\bar\tau_k^{\rm trial} \leftarrow \begin{cases} \infty & \text{if } \bar g_k^T \bar d_k + \max\{\bar d_k^T H_k \bar d_k, 0\} \le 0; \\ \dfrac{(1 - \sigma)\|c_k\|_1}{\bar g_k^T \bar d_k + \max\{\bar d_k^T H_k \bar d_k, 0\}} & \text{otherwise}. \end{cases} \qquad (2.8)$$
The merit parameter updating mechanism ensures that the sequence of merit parameter values is non-increasing. Moreover, the updating mechanism is designed to ensure that the reduction in the model of the merit function is sufficiently positive. By (2.7) and (2.8), it follows that (see Lemma 3.7)

$$\Delta l(x_k, \bar\tau_k, \bar g_k, \bar d_k) \ge \bar\tau_k \max\{\bar d_k^T H_k \bar d_k, 0\} + \sigma \|c_k\|_1. \qquad (2.9)$$
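The update (2.7)-(2.8) is straightforward to implement. A minimal sketch follows (the input values, parameter choices $\sigma = 0.5$, $\epsilon_\tau = 0.1$, and $H_k = I$ are illustrative assumptions):

```python
import numpy as np

def update_merit_param(tau_prev, g, d, H, c, sigma=0.5, eps_tau=0.1):
    # Trial value (2.8), then the nonincreasing update (2.7).
    denom = g.dot(d) + max(d @ H @ d, 0.0)
    tau_trial = np.inf if denom <= 0.0 else (1.0 - sigma) * np.linalg.norm(c, 1) / denom
    if tau_prev <= tau_trial:
        return tau_prev
    return min((1.0 - eps_tau) * tau_prev, tau_trial)

# Hypothetical data: g^T d + d^T H d = -0.6 + 0.52 <= 0, so tau_trial = inf
# and the merit parameter is left unchanged.
g, d = np.array([1.0, 0.0]), np.array([-0.6, 0.4])
tau = update_merit_param(1.0, g, d, np.eye(2), np.array([0.2]))
print(tau)  # 1.0
```

When the denominator is positive, the trial value caps the merit parameter so that (2.9) holds; the factor $(1 - \epsilon_\tau)$ forces a geometric decrease whenever the parameter must change.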
In the deterministic setting, the reduction in the model of the merit function is zero only at iterates that satisfy (2.3). After updating the merit parameter $\bar\tau_k$, we evaluate $\Delta l(x_k, \bar\tau_k, \bar g_k, \bar d_k)$, the stochastic model reduction of the merit function, and use it to check for sufficient progress. Specifically, given a step size $\alpha_k$, we compute a candidate iterate $x_k^+ := x_k + \alpha_k \bar d_k$ and check whether sufficient progress can be made via the following modified sufficient decrease condition

$$\bar\phi(x_k^+, \bar\tau_k; \xi_0(x_k^+)) \le \bar\phi(x_k, \bar\tau_k; \xi_0(x_k)) - \alpha_k \theta \Delta l(x_k, \bar\tau_k, \bar g_k, \bar d_k) + 2\bar\tau_k \epsilon_f, \qquad (2.10)$$
where $\bar\phi(x_k^+, \bar\tau_k; \xi_0(x_k^+))$ and $\bar\phi(x_k, \bar\tau_k; \xi_0(x_k))$ are merit function estimates, $\theta \in (0, 1)$ is a user-defined parameter, and $\epsilon_f$ is an upper bound on the expected noise in the objective function approximations. We note that $\bar\phi(x_k^+, \bar\tau_k; \xi_0(x_k^+))$ and $\bar\phi(x_k, \bar\tau_k; \xi_0(x_k))$ are realizations of the zeroth-order oracle described in detail in Section 2.3. The positive term on the right-hand side allows for a relaxation in the sufficient decrease condition, i.e., the merit function may increase after a step, and serves to correct for the noise in the merit function approximations. If (2.10) is satisfied, we accept the candidate point $x_k^+$ by setting $x_{k+1} \leftarrow x_k^+$, and potentially increase the step size for the next iteration, i.e., $\alpha_{k+1} \ge \alpha_k$. If (2.10) is not satisfied, the algorithm does not accept the candidate iterate; instead, it sets $x_{k+1} \leftarrow x_k$ and shrinks the step size for the next iteration, i.e., $\alpha_{k+1} < \alpha_k$. This step update rule is the centerpiece of our step-search method, and is fundamentally different from traditional line-search strategies; see [5,13,22] and the references therein. Contrary to line search methods, which compute a search direction and then look for a step size along that direction, in our approach the search direction changes in every iteration.
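The acceptance test (2.10) and the accompanying step-size update can be sketched as follows (the values of $\theta$, $\gamma$, $\epsilon_f$, and the sample inputs are illustrative assumptions):

```python
def accept_step(phi_plus, phi_curr, alpha, delta_l, tau, theta=0.5, eps_f=1e-3):
    # Modified sufficient decrease (2.10): the 2*tau*eps_f term relaxes the test
    # to absorb noise in the two merit function estimates.
    return phi_plus <= phi_curr - alpha * theta * delta_l + 2.0 * tau * eps_f

def next_step_size(alpha, accepted, gamma=0.5, alpha_max=1.0):
    # Successful step: grow (capped at alpha_max); unsuccessful: shrink.
    return min(alpha_max, alpha / gamma) if accepted else gamma * alpha

ok = accept_step(phi_plus=0.90, phi_curr=1.0, alpha=0.5, delta_l=0.35, tau=1.0)
print(ok, next_step_size(0.5, ok), next_step_size(0.5, False))  # True 1.0 0.25
```

Note that when the test fails, only the step size changes; a fresh direction is computed at the same iterate in the next iteration, which is what distinguishes step search from line search.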
We conclude this section by drawing a few parallels to the unconstrained setting. First, in the unconstrained setting (with $H_k = I$), the quantity $\Delta l(x_k, \bar\tau_k, \bar g_k, \bar d_k)$ reduces to $\|\bar g_k\|^2$, which provides a sufficient descent measure and is an approximate first-order stationarity measure. In the constrained setting, the reduction in the model of the merit function plays a similar role. Second, in the unconstrained optimization setting, (2.10) recovers the sufficient decrease condition used by noisy unconstrained optimization algorithms; see [4, Eq. (3.11)].
Probabilistic oracles
In many real-world applications exact objective function and derivative information cannot be readily computed. Instead, in lieu of these quantities, approximations are available via inexact probabilistic zeroth-and first-order oracles. These oracles produce approximations of different accuracy and reliability, and are formally introduced below.
Oracle 0 (Probabilistic zeroth-order oracle). Given $x \in \mathbb{R}^n$, the oracle computes $\bar f(x; \xi_0(x))$, a realization of $\bar f(x; \Xi_0(x))$, which is a (random) estimate of the objective function value $f(x)$, where $\Xi_0(x)$ denotes the underlying randomness (which may depend on $x$) with associated probability space $(\Omega_0, \mathcal{F}_{\Omega_0}, P_0)$. Let $e(x; \Xi_0(x)) := |\bar f(x; \Xi_0(x)) - f(x)|$. For any $x \in \mathbb{R}^n$, $e(x; \Xi_0(x))$ is a "one-sided" sub-exponential random variable with parameters $\{\nu, b\} \subset \mathbb{R}_{\ge 0}$, whose mean is bounded by some constant $\epsilon_f \in \mathbb{R}_{\ge 0}$. Specifically, for all $x \in \mathbb{R}^n$ and $\lambda \in [0, 1/b]$,

$$\mathbb{E}_{\Xi_0(x)}\big[e(x; \Xi_0(x))\big] \le \epsilon_f \quad \text{and} \quad \mathbb{E}_{\Xi_0(x)}\Big[\exp\big(\lambda\big(e(x; \Xi_0(x)) - \mathbb{E}[e(x; \Xi_0(x))]\big)\big)\Big] \le \exp\Big(\tfrac{\lambda^2 \nu^2}{2}\Big). \qquad (2.11)$$

The stochastic approximation of the merit function value is defined as $\bar\phi(x, \tau; \xi_0(x)) = \tau \bar f(x; \xi_0(x)) + \|c(x)\|_1$.
Oracle 1 (Probabilistic first-order oracle). Given $x \in \mathbb{R}^n$ and $\alpha \in \mathbb{R}_{>0}$, the oracle computes $\bar g(x; \xi_1(x))$, a realization of $\bar g(x; \Xi_1(x))$, which is a (random) estimate of the gradient of the objective function $\nabla f(x)$, such that

$$P_{\Xi_1(x)}\Big[\big\|\bar g(x; \Xi_1(x)) - \nabla f(x)\big\| \le \max\Big\{\epsilon_g,\; \kappa_{\rm FO}\,\alpha \sqrt{\Delta l\big(x, \bar\tau(x; \Xi_1(x)), \bar g(x; \Xi_1(x)), \bar d(x; \Xi_1(x))\big)}\Big\}\Big] \ge 1 - \delta,$$

where $\Xi_1(x)$ denotes the underlying randomness (which may depend on $x$) with associated probability space $(\Omega_1, \mathcal{F}_{\Omega_1}, P_1)$, $(1 - \delta) \in (\tfrac{1}{2}, 1]$ is the probability that the oracle produces a gradient estimate that is "sufficiently accurate" (related to the reliability of the oracle), and $\{\epsilon_g, \kappa_{\rm FO}\} \subset \mathbb{R}_{\ge 0}$ are constants intrinsic to the oracle (related to the precision of the oracle).
In the rest of the paper, to simplify the notation, we drop the dependence on $x$ in $\xi_0(x)$ and $\xi_1(x)$. Moreover, we use $\xi_k^+$ to represent $\xi_0(x_k^+)$, the randomness in the zeroth-order oracle evaluated at the trial point $x_k^+$.
Remark 2.3.
We make a few remarks about Oracles 0 and 1:
• Oracles 0 and 1 are similar to those defined in [12,22]. For a full discussion and examples of the oracles, we refer interested readers to [22,Section 5].
• Oracle 1 is a natural generalization of the ones defined in [12,22] to the constrained setting. In particular, the right-hand side of Oracle 1 reduces to $\max\{\epsilon_g, \kappa_{\rm FO}\,\alpha \|\bar g(x; \Xi_1)\|\}$ in the unconstrained setting, which is precisely what is used in [12,22].

• The presence of $\epsilon_g \in \mathbb{R}_{\ge 0}$ in the max term in Oracle 1 allows the gradient approximations to be biased; the magnitude of the bias is proportional to $\epsilon_g$.
Algorithmic framework
We are ready to introduce our stochastic step-search SQP method (SS-SQP) in Algorithm 1.
Remark 2.4. We make the following remarks about SS-SQP:
• (Step-search) Algorithm 1 is a step-search algorithm, whose main difference from traditional line-search methods is that only a single trial iterate is tested at every iteration. That is, if (2.10) is not satisfied, the step size is reduced and a new search direction and candidate iterate are computed in the next iteration. This strategy has been employed in other papers; e.g., see [5,13,22,28]. We should note that at every iteration, even if the iterate does not change, our algorithm requires new objective function and gradient estimates in the next iteration.
Algorithm 1 Adaptive Step-Search SQP (SS-SQP)

Require: initial iterate $x_0 \in \mathbb{R}^n$; initial merit parameter $\bar\tau_{-1} \in \mathbb{R}_{>0}$; maximum step size $\alpha_{\max} \in (0, 1]$; initial step size $\alpha_0 \in (0, \alpha_{\max}]$; parameter $\epsilon_f \in \mathbb{R}_{\ge 0}$ of the zeroth-order oracle (Oracle 0); and other constant parameters $\{\gamma, \theta, \sigma, \epsilon_\tau\} \subset (0, 1)$
1: for all $k \in \mathbb{N}$ do
2:   Generate $\bar g_k = \bar g(x_k; \xi_k^1)$ via Oracle 1 with $\alpha = \alpha_k$, $\bar d_k = \bar d(x_k; \xi_k^1)$ as in (2.6), and $\bar\tau_k = \bar\tau(x_k; \xi_k^1)$ as in (2.7)-(2.8)
3:   Let $x_k^+ = x_k + \alpha_k \bar d_k$, and generate $\bar\phi(x_k, \bar\tau_k; \xi_k^0)$ and $\bar\phi(x_k^+, \bar\tau_k; \xi_k^+)$ via Oracle 0
4:   if (2.10) holds then
5:     Set $x_{k+1} \leftarrow x_k^+$ and $\alpha_{k+1} \leftarrow \min\{\alpha_{\max}, \gamma^{-1}\alpha_k\}$
6:   else
7:     Set $x_{k+1} \leftarrow x_k$ and $\alpha_{k+1} \leftarrow \gamma\alpha_k$
8:   end if
9: end for

• (Modified sufficient decrease condition (2.10))
The $2\bar\tau_k \epsilon_f$ term on the right-hand side of (2.10) is a correction term added to compensate for the inexactness of the probabilistic zeroth-order oracle (Oracle 0). This correction provides a relaxation to the sufficient decrease requirement. In contrast to traditional sufficient decrease conditions, the modified condition (2.10) allows for a relaxation that is proportional to the noise level of Oracle 0.
• (Objective function evaluations; Line 3) The randomness associated with the evaluation of the objective function value at the candidate iterate x + k (Line 3) is not the same as that of the evaluation at the current point x k . Moreover, we note that even for unsuccessful iterations (where the iterates do not change) the objective function values are re-evaluated.
• (Objective gradient evaluations; Line 2) In order to generate an estimate of the gradient of the objective function that satisfies the conditions of Oracle 1, one can employ a procedure (a loop) similar to [38,Algorithm 2]. The idea is to refine the estimate progressively in order to generate one that satisfies the condition. Indeed, in many real-world problems, including empirical risk minimization in machine learning, one can improve the gradient approximation by progressively using a larger number of samples.
• (Maximum step size $\alpha_{\max}$) We pick $\alpha_{\max} \in (0, 1]$ mainly to simplify our analysis. That being said, the unit upper bound on $\alpha_{\max}$ is motivated by the deterministic constrained setting. In the deterministic setting (without any noise), the merit function decrease is upper bounded by a nonsmooth function, whose only point of nonsmoothness is at $\alpha = 1$, which complicates the analysis; see [7, Lemma 2.13].
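Putting the pieces together, the following minimal end-to-end sketch runs the SS-SQP loop of Algorithm 1 on a hypothetical toy instance (quadratic objective, one affine constraint, $H_k = I$, Gaussian oracle noise; all parameter values are illustrative assumptions, not the paper's recommendations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: min 0.5*||x||^2  s.t.  x1 + x2 = 1, with solution x* = (0.5, 0.5).
J = np.array([[1.0, 1.0]])
c_fun = lambda x: np.array([x[0] + x[1] - 1.0])
noise, eps_f = 1e-3, 1e-3                     # assumed oracle noise level and bound
gamma, theta, sigma, eps_tau = 0.5, 0.5, 0.5, 0.1
x, alpha, alpha_max, tau = np.array([2.0, 0.0]), 1.0, 1.0, 1.0

def phi_bar(x, tau):                          # noisy merit estimate (Oracle 0)
    return tau * (0.5 * x.dot(x) + rng.normal(0.0, noise)) + np.abs(c_fun(x)).sum()

for k in range(100):
    g = x + rng.normal(0.0, noise, 2)         # noisy gradient (Oracle 1), ∇f(x) = x
    K = np.block([[np.eye(2), J.T], [J, np.zeros((1, 1))]])
    sol = np.linalg.solve(K, -np.concatenate([g, c_fun(x)]))
    d, y = sol[:2], sol[2:]                   # search direction, dual estimate (2.6)
    denom = g.dot(d) + max(d.dot(d), 0.0)     # d^T H d with H = I
    tau_trial = np.inf if denom <= 0 else (1 - sigma) * np.abs(c_fun(x)).sum() / denom
    if tau > tau_trial:                       # merit parameter update (2.7)-(2.8)
        tau = min((1 - eps_tau) * tau, tau_trial)
    dl = -tau * g.dot(d) + np.abs(c_fun(x)).sum()   # model reduction (2.5)
    x_plus = x + alpha * d
    if phi_bar(x_plus, tau) <= phi_bar(x, tau) - alpha * theta * dl + 2 * tau * eps_f:
        x, alpha = x_plus, min(alpha_max, alpha / gamma)   # successful step
    else:
        alpha = gamma * alpha                               # unsuccessful step

print(x)  # approaches the solution (0.5, 0.5)
```

On this instance a full step from the starting point lands near the solution in one iteration; the remaining iterations illustrate how the acceptance test and step-size updates behave near a stationary point under noise.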
Before we proceed, we define the stochastic process related to the algorithm. Let $M_k$ denote $\{\Xi_k^0, \Xi_k^+, \Xi_k^1\}$ with realizations $\{\xi_k^0, \xi_k^+, \xi_k^1\}$. The algorithm generates a stochastic process

$$\big\{\big(\bar G_k, \bar D_k, \bar T_k, \bar\phi(X_k, \bar T_k; \Xi_k^0), \bar\phi(X_k^+, \bar T_k; \Xi_k^+), X_k, A_k\big)\big\}$$

with realizations $\{(\bar g_k, \bar d_k, \bar\tau_k, \bar\phi(x_k, \bar\tau_k; \xi_k^0), \bar\phi(x_k^+, \bar\tau_k; \xi_k^+), x_k, \alpha_k)\}$, adapted to the filtration $\{\mathcal{F}_k : k \ge 0\}$, where $\mathcal{F}_k = \sigma(M_0, M_1, \ldots, M_k)$ and $\sigma$ denotes the $\sigma$-algebra. At iteration $k$, $\bar G_k$ is the random gradient, $\bar D_k$ is the random primal search direction, $\bar T_k$ is the random merit parameter, $\bar\phi(X_k, \bar T_k; \Xi_k^0)$ and $\bar\phi(X_k^+, \bar T_k; \Xi_k^+)$ are the random noisy merit function evaluations at the current point and the candidate point, respectively, $X_k$ is the random iterate at iteration $k$, and $A_k$ is the random step size. Note that $\bar G_k$, $\bar D_k$, $\bar T_k$ are dictated by $\Xi_k^1$ (Oracle 1) and the noisy merit function evaluations are dictated by $\Xi_k^0$, $\Xi_k^+$ (Oracle 0).
Theoretical analysis
In this section, we analyze the behavior of Algorithm 1. For brevity, throughout this section, we assume Assumptions 2.1 and 2.2 hold and do not restate this fact in every lemma and theorem. We begin by presenting some preliminary results, definitions, and assumptions and then proceed to present a worst-case iteration complexity bound for Algorithm 1.
Preliminaries, definitions & assumptions
We first define some deterministic quantities that are used in the analysis of Algorithm 1, and which are never explicitly computed in the implementation of the algorithm. Let (d k , y k ) ∈ R n × R m be the solution of the deterministic counterpart of (2.6), i.e.,
$$\begin{bmatrix} H_k & J_k^T \\ J_k & 0 \end{bmatrix} \begin{bmatrix} d_k \\ y_k \end{bmatrix} = - \begin{bmatrix} \nabla f_k \\ c_k \end{bmatrix}. \qquad (3.1)$$
The norm of the gradient of the Lagrangian (defined in (2.2)) of (1.1), which is used as a first-order stationarity measure, can be upper bounded at every primal-dual iterate $(x_k, y_k)$ as

$$\left\| \begin{bmatrix} \nabla f_k + J_k^T y_k \\ c_k \end{bmatrix} \right\| = \left\| \begin{bmatrix} -H_k d_k \\ -J_k d_k \end{bmatrix} \right\| \le (\kappa_H + \kappa_J) \|d_k\|, \qquad (3.2)$$

where the equality is by (3.1) and the inequality follows by Assumptions 2.1 and 2.2. Thus, (3.2) implies that $\|d_k\|$, the norm of the primal search direction, can be used as a proxy for the first-order stationarity measure. The following lemma shows that the tuple $(d_k, y_k)$ is bounded for all $k \in \mathbb{N}$.
Lemma 3.1. There exist constants $\{\kappa_d, \kappa_y\} \subset \mathbb{R}_{>0}$ such that $\|d_k\| \le \kappa_d$ and $\|y_k\| \le \kappa_y$ for all $k \in \mathbb{N}$.
Proof. By the Cauchy-Schwarz inequality and (3.1), we have

$$\left\| \begin{bmatrix} d_k \\ y_k \end{bmatrix} \right\| = \left\| \begin{bmatrix} H_k & J_k^T \\ J_k & 0 \end{bmatrix}^{-1} \begin{bmatrix} \nabla f_k \\ c_k \end{bmatrix} \right\| \le \left\| \begin{bmatrix} H_k & J_k^T \\ J_k & 0 \end{bmatrix}^{-1} \right\| \left\| \begin{bmatrix} \nabla f_k \\ c_k \end{bmatrix} \right\|,$$
where both terms on the right-hand side of the inequality are bounded by Assumptions 2.1 and 2.2, which concludes the proof.
Moreover, we define $\tau_k \in \mathbb{R}_{>0}$ and $\tau_k^{\rm trial} \in \mathbb{R}_{>0}$, the deterministic counterparts of (2.7) and (2.8),

$$\tau_k \leftarrow \begin{cases} \bar\tau_k & \text{if } \bar\tau_k \le \tau_k^{\rm trial}; \\ \min\big\{(1 - \epsilon_\tau)\bar\tau_k,\, \tau_k^{\rm trial}\big\} & \text{otherwise}, \end{cases} \qquad (3.3)$$

where

$$\tau_k^{\rm trial} \leftarrow \begin{cases} \infty & \text{if } \nabla f_k^T d_k + \max\{d_k^T H_k d_k, 0\} \le 0; \\ \dfrac{(1 - \sigma)\|c_k\|_1}{\nabla f_k^T d_k + \max\{d_k^T H_k d_k, 0\}} & \text{otherwise}. \end{cases} \qquad (3.4)$$
We emphasize again that $\{(\tau_k, \tau_k^{\rm trial})\}_{k \in \mathbb{N}}$ are introduced only for the purposes of the analysis, and in Algorithm 1 they are never computed (not even in the setting in which the true gradient is used, i.e., $\bar g_k = \nabla f(x_k)$). We also note that this definition is not the same as that in [7,16]. The difference is that in the computation of $\tau_k$, the comparison is made to $\bar\tau_k$ instead of $\bar\tau_{k-1}$. This is important for the analysis, since it guarantees $\tau_k \le \bar\tau_k$.
We assume that the merit parameter sequence $\{\bar\tau_k\}$ generated in the stochastic setting is bounded away from zero (Assumption 3.2). Such an assumption has been adopted in previous literature [6-8, 16, 17]; we refer readers to [7, Section 3.2] and [16, Section 4.2] for detailed discussions. Finally, we note that we only assume that $\{\bar\tau_k\}$ is bounded away from zero, and never require knowledge of $\bar\tau_{\min}$ in the algorithm.

Assumption 3.2. Let $\{\bar\tau_k\}$ be the merit parameter sequence generated by Algorithm 1. There exists a constant $\bar\tau_{\min} \in \mathbb{R}_{>0}$ such that, for every realization of Algorithm 1, $\bar\tau_k \ge \bar\tau_{\min}$ for all $k \in \mathbb{N}$.
Next, we state and prove a useful property of the deterministic merit parameter sequence $\{\tau_k\}$ defined in (3.3). Lemma 3.3. Suppose Assumption 3.2 holds. Then there exists a positive constant $\tau_{\min} \in \mathbb{R}_{>0}$ such that, for every realization of Algorithm 1, $\tau_k \ge \tau_{\min}$ for all $k \in \mathbb{N}$.
Proof. By [7, Lemma 2.16], $\{\tau_k^{\rm trial}\} \subset \mathbb{R}_{>0} \cup \{+\infty\}$ is always bounded away from zero. We define $\tau_{\min}^{\rm trial} \in \mathbb{R}_{>0}$ such that $\tau_{\min}^{\rm trial} \le \tau_k^{\rm trial}$ for all $k \in \mathbb{N}$. By (3.3)-(3.4) and Assumption 3.2, one may pick $\tau_{\min} = \min\{(1 - \epsilon_\tau)\tau_{\min}^{\rm trial}, \bar\tau_{\min}\}$ to conclude the proof.
Our final assumption relates to the zeroth-order oracle (Oracle 0).
Assumption 3.4. Let $E_k$ and $E_k^+$ be the errors in the objective function evaluations from Oracle 0, i.e.,

$$E_k := \big|\bar f(X_k; \Xi_k^0) - f(X_k)\big| \quad \text{and} \quad E_k^+ := \big|\bar f(X_k^+; \Xi_k^+) - f(X_k^+)\big|.$$

We assume that either $\{E_k\}$ and $\{E_k^+\}$ are deterministically bounded by $\epsilon_f \in \mathbb{R}_{\ge 0}$, or that the sums of the errors $\{E_k + E_k^+\}$ are independent over different iterations.
Next, we introduce several definitions necessary for the analysis of Algorithm 1. Specifically, we define true/false iterations (Definition 1), successful/unsuccessful iterations (Definition 2) and large/small steps (Definition 3), and introduce three indicator variables respectively.
Definition 1. An iteration $k \in \mathbb{N}$ is true if

$$\|\bar g_k - \nabla f_k\| \le \max\Big\{\epsilon_g,\; \kappa_{\rm FO}\,\alpha_k \sqrt{\Delta l(x_k, \bar\tau_k, \bar g_k, \bar d_k)}\Big\} \quad \text{and} \quad e_k + e_k^+ \le 2\epsilon_f, \qquad (3.5)$$

where $\Delta l(x_k, \bar\tau_k, \bar g_k, \bar d_k)$ is defined in (2.5) and the constants $\epsilon_f$, $\epsilon_g$ and $\kappa_{\rm FO}$ are the same ones as in Oracles 0 and 1. If (3.5) does not hold, we call the iteration a false iteration. We use the random indicator variable $I_k$ to denote whether an iteration is true.
Definition 2. Given a constant $\theta \in (0, 1)$, let $\bar\phi(x_k, \bar\tau_k; \xi_k^0)$ and $\bar\phi(x_k^+, \bar\tau_k; \xi_k^+)$ be obtained by Oracle 0. If inequality (2.10) holds, then iteration $k$ is successful; otherwise, it is an unsuccessful iteration. We use the random indicator variable $\Theta_k$ to denote whether an iteration is successful.
Definition 3. For any $k \in \mathbb{N}$, if $\min\{\alpha_k, \alpha_{k+1}\} \ge \bar\alpha$, where $\bar\alpha$ is some problem-dependent positive real number (defined explicitly in Lemma 3.15), then we call step $k$ a large step and set the indicator variable $U_k = 1$. Otherwise, we call step $k$ a small step and set $U_k = 0$.
We show that under appropriate conditions, if the step is a small step and the iteration is true, then the iteration is guaranteed to be successful (see Lemma 3.15). The last definition concerns the stopping time ($T_{\epsilon_{\Delta l}}$) and a measure of progress ($\{Z_k\}$).
Definition 4. For any realization of Algorithm 1, define

$$T_{\epsilon_{\Delta l}} = \min\{k : \Delta l(x_k, \tau_k, \nabla f_k, d_k) \le \epsilon_{\Delta l}\},$$

the number of iterations required to reach a first-order $\epsilon$-stationary iterate, where $\epsilon = \Omega(\epsilon_{\Delta l})$. We discuss the explicit relationship between $\epsilon$ and $\epsilon_{\Delta l}$ in Remark 3.5. Moreover, for all $k \in \mathbb{N}$, let

$$Z_k := \phi(x_k, \bar\tau_k) - \phi_{\min} - (\bar\tau_k f_{\inf} - \bar\tau_{\min} f_{\inf}),$$

where $\phi_{\min}$ is a lower bound of $\phi(\cdot, \bar\tau_{\min})$ over $\mathcal{X}$ and $\bar\tau_{\min}$ is defined in Assumption 3.2.
Remark 3.5.
A key ingredient of our algorithm is the stopping time $T_{\epsilon_{\Delta l}}$, which is defined in terms of $\Delta l(x_k, \tau_k, \nabla f_k, d_k)$. In fact, by (3.2), Assumption 3.2 and Lemma 3.9 (see below), the stopping time $T_{\epsilon_{\Delta l}}$ defined in Definition 4 is the number of iterations needed to achieve a first-order $\epsilon$-stationary iterate, i.e.,

$$\max\{\|\nabla f_k + J_k^T y_k\|, \|c_k\|\} \le \epsilon, \quad \text{where } \epsilon = \frac{\max\{\kappa_H, 1\}}{\sqrt{\kappa_l \tau_{\min}}} \cdot \sqrt{\epsilon_{\Delta l}}. \qquad (3.6)$$

We note that (3.6) is the same stationarity measure as that used in [16, Eq. (5)], and is a non-standard first-order stationarity measure compared to $\left\|\begin{bmatrix} \nabla f_k + J_k^T y_k \\ c_k \end{bmatrix}\right\|$. That said, one can show that

$$\left\| \begin{bmatrix} \nabla f_k + J_k^T y_k \\ c_k \end{bmatrix} \right\| \le 2 \max\{\|\nabla f_k + J_k^T y_k\|, \|c_k\|\} \le \frac{2 \max\{\kappa_H, \kappa_J\}}{\sqrt{\kappa_l \tau_{\min}}} \cdot \sqrt{\epsilon_{\Delta l}} = \Omega(\epsilon).$$

Throughout this paper we focus on (and provide complexity bounds for) (3.6), as it provides a stronger result for feasibility ($\|c_k\|$) when $\epsilon < 1$.
Main Technical Results
We build toward the main result of the paper (Theorem 3.18) through a sequence of technical lemmas. Our first lemma shows that Z k (defined in Definition 4) is always nonnegative.
Lemma 3.6. For all $k \in \mathbb{N}$, $Z_k \ge 0$.

Proof. It follows from (2.4) and Definition 4 that

$$\begin{aligned} Z_k &= \phi(x_k, \bar\tau_k) - \phi_{\min} - (\bar\tau_k f_{\inf} - \bar\tau_{\min} f_{\inf}) \\ &= \big(\bar\tau_k (f_k - f_{\inf}) + \|c_k\|_1\big) - \phi_{\min} + \bar\tau_{\min} f_{\inf} \\ &\ge \big(\bar\tau_{\min} (f_k - f_{\inf}) + \|c_k\|_1\big) - \phi_{\min} + \bar\tau_{\min} f_{\inf} \\ &= \big(\bar\tau_{\min} f_k + \|c_k\|_1\big) - \phi_{\min} = \phi(x_k, \bar\tau_{\min}) - \phi_{\min} \ge 0, \end{aligned}$$
which concludes the proof.
The next lemma reveals the critical role of the merit parameter update.
Lemma 3.7. For all $k \in \mathbb{N}$, (2.9) is satisfied. Furthermore, if $\bar\tau_k \ne \bar\tau_{k-1}$, then $0 < \bar\tau_k \le (1 - \epsilon_\tau)\bar\tau_{k-1}$.

Proof. By Algorithm 1, we have $\bar\tau_k \le \bar\tau_k^{\rm trial}$. Moreover, by (2.5), (2.7) and (2.8), it follows that (2.9) is satisfied for all $k \in \mathbb{N}$. By (2.7), if $\bar\tau_k \ne \bar\tau_{k-1}$, then $\bar\tau_k = \min\{(1 - \epsilon_\tau)\bar\tau_{k-1}, \bar\tau_k^{\rm trial}\} \le (1 - \epsilon_\tau)\bar\tau_{k-1}$. Moreover, when $c_k = 0$, it follows from Assumption 2.2, (2.6) and (2.8) that $\bar d_k \in \operatorname{Null}(J_k)$ and $\bar g_k^T \bar d_k + \max\{\bar d_k^T H_k \bar d_k, 0\} = \bar g_k^T \bar d_k + \bar d_k^T H_k \bar d_k = c_k^T \bar y_k = 0$, which implies $\bar\tau_k^{\rm trial} = \infty$. Therefore, we have $\bar\tau_k^{\rm trial} > 0$ for all $k \in \mathbb{N}$. Finally, by $\bar\tau_{-1} \in \mathbb{R}_{>0}$ and (2.7), we have $\bar\tau_k > 0$ for all $k \in \mathbb{N}$.
The next lemma provides a useful lower bound for the reduction in the model of the merit function, $\Delta l(x_k, \bar\tau_k, \bar g_k, \bar d_k)$, in terms of the primal search direction ($\|\bar d_k\|^2$) and a measure of infeasibility ($\|c_k\|$).

Lemma 3.8. There exists a constant $\kappa_l \in \mathbb{R}_{>0}$ such that for all $k \in \mathbb{N}$, $\Delta l(x_k, \bar\tau_k, \bar g_k, \bar d_k) \ge \kappa_l \bar\tau_k (\|\bar d_k\|^2 + \|c_k\|)$.
Proof. For any iteration $k \in \mathbb{N}$, by [7, Lemma 3.4], there exists a constant $\kappa_l \in \mathbb{R}_{>0}$ such that
$$-\bar\tau_k\left(\bar g_k^T \bar d_k + \tfrac{1}{2}\max\{\bar d_k^T H_k \bar d_k, 0\}\right) + \|c_k\|_1 \ge \kappa_l \bar\tau_k \left(\|\bar d_k\|^2 + \|c_k\|_1\right).$$
By $\bar\tau_k \in \mathbb{R}_{>0}$ (from Lemma 3.7), this implies that
$$\Delta l(x_k, \bar\tau_k, \bar g_k, \bar d_k) = -\bar\tau_k \bar g_k^T \bar d_k + \|c_k\|_1 \ge -\bar\tau_k\left(\bar g_k^T \bar d_k + \tfrac{1}{2}\max\{\bar d_k^T H_k \bar d_k, 0\}\right) + \|c_k\|_1,$$
which concludes the proof.
Lemma 3.9. There exists a constant $\kappa_l \in \mathbb{R}_{>0}$ such that for all $k \in \mathbb{N}$,
$$\Delta l(x_k, \tau_k, \nabla f_k, d_k) \ge \kappa_l \tau_k \left(\|d_k\|^2 + \|c_k\|\right).$$

Proof. The proof follows the same logic as that of Lemma 3.8, with the stochastic quantities replaced by their deterministic counterparts. By [7, Lemma 3.4], the desired inequality is satisfied with the same constant $\kappa_l$ as in Lemma 3.8.
The next lemma bounds the errors in the stochastic search directions and dual variables, respectively, in terms of the errors in the gradient approximations.

Lemma 3.10. There exists a constant $\omega_1 \in \mathbb{R}_{>0}$ such that for all $k \in \mathbb{N}$, $\|\bar d_k - d_k\| \le \zeta^{-1}\|\bar g_k - \nabla f_k\|$ and $\|\bar y_k - y_k\| \le \omega_1\|\bar g_k - \nabla f_k\|$, where $\zeta$ is defined in Assumption 2.2.
Proof. By the Cauchy-Schwarz inequality, Assumption 2.2, (3.1), and the fact that $(\bar d_k - d_k) \in \operatorname{Null}(J_k)$, it follows that
$$\|\bar d_k - d_k\|\|\bar g_k - \nabla f_k\| \ge (\bar d_k - d_k)^T(\nabla f_k - \bar g_k) = (\bar d_k - d_k)^T\left(H_k(\bar d_k - d_k) + J_k^T(\bar y_k - y_k)\right) = (\bar d_k - d_k)^T H_k (\bar d_k - d_k) \ge \zeta\|\bar d_k - d_k\|^2,$$
which proves that $\|\bar d_k - d_k\| \le \zeta^{-1}\|\bar g_k - \nabla f_k\|$. Next, by (3.1) and Assumption 2.1 it follows that
$$\bar y_k - y_k = -(J_k J_k^T)^{-1} J_k\left((\bar g_k - \nabla f_k) + H_k(\bar d_k - d_k)\right).$$
By the triangle inequality, the Cauchy-Schwarz inequality, Assumptions 2.1 and 2.2, and the fact that $\|\bar d_k - d_k\| \le \zeta^{-1}\|\bar g_k - \nabla f_k\|$, it follows that
$$\|\bar y_k - y_k\| = \left\|(J_k J_k^T)^{-1} J_k\left((\bar g_k - \nabla f_k) + H_k(\bar d_k - d_k)\right)\right\| \le \left\|(J_k J_k^T)^{-1} J_k\right\|\left(\|\bar g_k - \nabla f_k\| + \|H_k\|\|\bar d_k - d_k\|\right) \le \kappa_\sigma \kappa_J (1 + \kappa_H \zeta^{-1})\|\bar g_k - \nabla f_k\|.$$
Setting $\omega_1 = \kappa_\sigma \kappa_J (1 + \kappa_H \zeta^{-1})$ concludes the proof.
The next lemma relates the inner product of the stochastic gradient and stochastic search direction to the stochastic reduction in the model of the merit function. We consider two cases, corresponding to the two cases in the max term of Oracle 1.

Lemma 3.11. For all $k \in \mathbb{N}$:
- If $\|\bar g_k - \nabla f_k\| \le \kappa_{FO}\alpha_k\sqrt{\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k)}$, then
$$\bar\tau_k|\bar g_k^T \bar d_k| \le \left(\frac{\max\{\kappa_H, \kappa_y\}}{\kappa_l} + \frac{\sqrt{\bar\tau_k}(1 + \kappa_H\zeta^{-1})\kappa_{FO}\alpha_k}{\sqrt{\kappa_l}}\right)\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k).$$
- If $\|\bar g_k - \nabla f_k\| \le \epsilon_g$, then
$$\bar\tau_k|\bar g_k^T \bar d_k| \le \frac{\max\{\kappa_H, \kappa_y\} + 1}{\kappa_l}\,\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \frac{\bar\tau_k(1 + \kappa_H\zeta^{-1})^2}{4}\,\epsilon_g^2.$$
Proof. If $\|\bar g_k - \nabla f_k\| \le \kappa_{FO}\alpha_k\sqrt{\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k)}$, then by the triangle inequality, (2.6), Assumption 2.2, and Lemmas 3.1, 3.8 and 3.10, it follows that
$$\begin{aligned} \bar\tau_k|\bar g_k^T \bar d_k| &= \bar\tau_k\left|\left(H_k \bar d_k + J_k^T y_k + J_k^T(\bar y_k - y_k)\right)^T \bar d_k\right| \le \bar\tau_k\left(|\bar d_k^T H_k \bar d_k| + |y_k^T J_k \bar d_k| + |(\bar y_k - y_k)^T J_k \bar d_k|\right) \\ &\le \bar\tau_k\left(\kappa_H\|\bar d_k\|^2 + \|y_k\|\|c_k\| + \|(\bar g_k - \nabla f_k) + H_k(\bar d_k - d_k)\|\|\bar d_k\|\right) \\ &\le \max\{\kappa_H, \kappa_y\}\,\bar\tau_k\left(\|\bar d_k\|^2 + \|c_k\|\right) + \bar\tau_k\left(\|\bar g_k - \nabla f_k\| + \kappa_H\|\bar d_k - d_k\|\right)\|\bar d_k\| \\ &\le \frac{\max\{\kappa_H, \kappa_y\}}{\kappa_l}\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \bar\tau_k(1 + \kappa_H\zeta^{-1})\|\bar g_k - \nabla f_k\|\|\bar d_k\| \\ &\le \left(\frac{\max\{\kappa_H, \kappa_y\}}{\kappa_l} + \frac{\sqrt{\bar\tau_k}(1 + \kappa_H\zeta^{-1})\kappa_{FO}\alpha_k}{\sqrt{\kappa_l}}\right)\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k), \end{aligned}$$
which completes the first part of the proof. Using similar logic, if $\|\bar g_k - \nabla f_k\| \le \epsilon_g$, then by the triangle inequality, (2.6), Assumption 2.2, Lemmas 3.1, 3.3, 3.8 and 3.10, and the fact that $ab \le a^2 + \tfrac{b^2}{4}$ for any $\{a, b\} \subset \mathbb{R}$, it follows that
$$\begin{aligned} \bar\tau_k|\bar g_k^T \bar d_k| &\le \frac{\max\{\kappa_H, \kappa_y\}}{\kappa_l}\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \bar\tau_k(1 + \kappa_H\zeta^{-1})\|\bar g_k - \nabla f_k\|\|\bar d_k\| \\ &\le \frac{\max\{\kappa_H, \kappa_y\}}{\kappa_l}\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \bar\tau_k(1 + \kappa_H\zeta^{-1})\epsilon_g\|\bar d_k\| \\ &\le \frac{\max\{\kappa_H, \kappa_y\}}{\kappa_l}\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \frac{\sqrt{\bar\tau_k}(1 + \kappa_H\zeta^{-1})}{\sqrt{\kappa_l}}\epsilon_g\sqrt{\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k)} \\ &\le \frac{\max\{\kappa_H, \kappa_y\} + 1}{\kappa_l}\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \frac{\bar\tau_k(1 + \kappa_H\zeta^{-1})^2}{4}\epsilon_g^2, \end{aligned}$$
which completes the proof.
The next lemma provides useful upper bounds on the errors related to the stochastic search directions (and gradients), for the same two cases as in Lemma 3.11.

Lemma 3.12. For all $k \in \mathbb{N}$:
- If $\|\bar g_k - \nabla f_k\| \le \kappa_{FO}\alpha_k\sqrt{\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k)}$, then
$$|\nabla f_k^T d_k - \bar g_k^T \bar d_k| \le \left(\frac{(1 + \kappa_H\zeta^{-1})\kappa_{FO}\alpha_k}{\sqrt{\kappa_l\bar\tau_k}} + \frac{\kappa_{FO}^2\alpha_k^2}{\zeta}\right)\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k)$$
and
$$|d_k^T H_k d_k - \bar d_k^T H_k \bar d_k| \le \left(\frac{2\kappa_H\zeta^{-1}\kappa_{FO}\alpha_k}{\sqrt{\kappa_l\bar\tau_k}} + \frac{\kappa_H\kappa_{FO}^2\alpha_k^2}{\zeta^2}\right)\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k).$$
- If $\|\bar g_k - \nabla f_k\| \le \epsilon_g$, then
$$|\nabla f_k^T d_k - \bar g_k^T \bar d_k| \le \frac{(1 + \kappa_H\zeta^{-1})\epsilon_g}{\sqrt{\kappa_l\tau_k}}\sqrt{\Delta l(x_k,\tau_k,\nabla f_k,d_k)} + \zeta^{-1}\epsilon_g^2$$
and
$$|d_k^T H_k d_k - \bar d_k^T H_k \bar d_k| \le \frac{2\kappa_H\zeta^{-1}\epsilon_g}{\sqrt{\kappa_l\tau_k}}\sqrt{\Delta l(x_k,\tau_k,\nabla f_k,d_k)} + \kappa_H\zeta^{-2}\epsilon_g^2.$$
Proof. We begin with the case $\|\bar g_k - \nabla f_k\| \le \kappa_{FO}\alpha_k\sqrt{\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k)}$. By the triangle and Cauchy-Schwarz inequalities, Assumption 2.1, and Lemmas 3.1, 3.8 and 3.10,
$$\begin{aligned} |\nabla f_k^T d_k - \bar g_k^T\bar d_k| &= \left|(\bar g_k - \nabla f_k)^T\bar d_k + (\nabla f_k - \bar g_k)^T(\bar d_k - d_k) + \bar g_k^T(\bar d_k - d_k)\right| \\ &= \left|(\bar g_k - \nabla f_k)^T\bar d_k + (\nabla f_k - \bar g_k)^T(\bar d_k - d_k) - (H_k\bar d_k + J_k^T\bar y_k)^T(\bar d_k - d_k)\right| \\ &\le |(\bar g_k - \nabla f_k)^T\bar d_k| + |(\nabla f_k - \bar g_k)^T(\bar d_k - d_k)| + |\bar d_k^T H_k(\bar d_k - d_k)| + |\bar y_k^T J_k(\bar d_k - d_k)| \\ &\le \|\bar g_k - \nabla f_k\|\|\bar d_k\| + \|\nabla f_k - \bar g_k\|\|\bar d_k - d_k\| + \kappa_H\|\bar d_k\|\|\bar d_k - d_k\| \\ &\le (1 + \kappa_H\zeta^{-1})\sqrt{\frac{\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k)}{\kappa_l\bar\tau_k}}\,\|\bar g_k - \nabla f_k\| + \zeta^{-1}\|\bar g_k - \nabla f_k\|^2 \\ &\le \left(\frac{(1+\kappa_H\zeta^{-1})\kappa_{FO}\alpha_k}{\sqrt{\kappa_l\bar\tau_k}} + \zeta^{-1}\kappa_{FO}^2\alpha_k^2\right)\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k), \end{aligned}$$
where we also used $J_k(\bar d_k - d_k) = 0$. Additionally, under Assumption 2.2 it follows that
$$\begin{aligned} |d_k^T H_k d_k - \bar d_k^T H_k\bar d_k| &= \left|2\bar d_k^T H_k(\bar d_k - d_k) - (\bar d_k - d_k)^T H_k(\bar d_k - d_k)\right| \\ &\le 2|\bar d_k^T H_k(\bar d_k - d_k)| + |(\bar d_k - d_k)^T H_k(\bar d_k - d_k)| \le 2\kappa_H\|\bar d_k\|\|\bar d_k - d_k\| + \kappa_H\|\bar d_k - d_k\|^2 \\ &\le 2\kappa_H\zeta^{-1}\sqrt{\frac{\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k)}{\kappa_l\bar\tau_k}}\,\|\bar g_k - \nabla f_k\| + \kappa_H\zeta^{-2}\|\bar g_k - \nabla f_k\|^2 \\ &\le \left(\frac{2\kappa_H\zeta^{-1}\kappa_{FO}\alpha_k}{\sqrt{\kappa_l\bar\tau_k}} + \kappa_H\zeta^{-2}\kappa_{FO}^2\alpha_k^2\right)\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k), \end{aligned}$$
which completes the first part of the proof. If $\|\bar g_k - \nabla f_k\| \le \epsilon_g$, then, following similar logic, by the triangle and Cauchy-Schwarz inequalities, (3.1), and Lemmas 3.1, 3.9 and 3.10,
$$\begin{aligned} |\nabla f_k^T d_k - \bar g_k^T\bar d_k| &= \left|(\bar g_k - \nabla f_k)^T(\bar d_k - d_k) + (\bar g_k - \nabla f_k)^T d_k + \nabla f_k^T(\bar d_k - d_k)\right| \\ &= \left|(\bar g_k - \nabla f_k)^T(\bar d_k - d_k) + (\bar g_k - \nabla f_k)^T d_k - (H_k d_k + J_k^T y_k)^T(\bar d_k - d_k)\right| \\ &\le |(\bar g_k - \nabla f_k)^T(\bar d_k - d_k)| + |(\bar g_k - \nabla f_k)^T d_k| + |d_k^T H_k(\bar d_k - d_k)| + |y_k^T J_k(\bar d_k - d_k)| \\ &\le \zeta^{-1}\|\bar g_k - \nabla f_k\|^2 + (1 + \kappa_H\zeta^{-1})\|d_k\|\|\bar g_k - \nabla f_k\| \\ &\le \zeta^{-1}\|\bar g_k - \nabla f_k\|^2 + \frac{1+\kappa_H\zeta^{-1}}{\sqrt{\kappa_l\tau_k}}\sqrt{\Delta l(x_k,\tau_k,\nabla f_k,d_k)}\,\|\bar g_k - \nabla f_k\| \\ &\le \zeta^{-1}\epsilon_g^2 + \frac{(1+\kappa_H\zeta^{-1})\epsilon_g}{\sqrt{\kappa_l\tau_k}}\sqrt{\Delta l(x_k,\tau_k,\nabla f_k,d_k)}. \end{aligned}$$
Additionally, under Assumption 2.2 it follows that
$$\begin{aligned} |d_k^T H_k d_k - \bar d_k^T H_k\bar d_k| &= \left|(\bar d_k - d_k)^T H_k(\bar d_k - d_k) + 2d_k^T H_k(\bar d_k - d_k)\right| \le \kappa_H\|\bar d_k - d_k\|^2 + 2\kappa_H\|d_k\|\|\bar d_k - d_k\| \\ &\le \kappa_H\zeta^{-2}\|\bar g_k - \nabla f_k\|^2 + \frac{2\kappa_H\zeta^{-1}}{\sqrt{\kappa_l\tau_k}}\sqrt{\Delta l(x_k,\tau_k,\nabla f_k,d_k)}\,\|\bar g_k - \nabla f_k\| \\ &\le \kappa_H\zeta^{-2}\epsilon_g^2 + \frac{2\kappa_H\zeta^{-1}\epsilon_g}{\sqrt{\kappa_l\tau_k}}\sqrt{\Delta l(x_k,\tau_k,\nabla f_k,d_k)}, \end{aligned}$$
which completes the proof.
The next lemma provides a bound on the change in the merit function across an iteration.

Lemma 3.13. For all $k \in \mathbb{N}$,
$$\phi(x_k + \alpha_k\bar d_k, \bar\tau_k) - \phi(x_k, \bar\tau_k) \le -\alpha_k\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \alpha_k\bar\tau_k(\nabla f_k - \bar g_k)^T\bar d_k + \frac{\bar\tau_k L + \Gamma}{2}\alpha_k^2\|\bar d_k\|^2.$$

Proof. By Algorithm 1, for any $k \in \mathbb{N}$, $0 < \alpha_k \le \alpha_{\max} \le 1$. Moreover, by the triangle inequality, (2.1), (2.4) and (2.6), it follows that
$$\begin{aligned} \phi(x_k + \alpha_k\bar d_k, \bar\tau_k) - \phi(x_k, \bar\tau_k) &= \bar\tau_k\left(f(x_k + \alpha_k\bar d_k) - f_k\right) + \left(\|c(x_k + \alpha_k\bar d_k)\|_1 - \|c_k\|_1\right) \\ &\le \bar\tau_k\left(\alpha_k\nabla f_k^T\bar d_k + \tfrac{L}{2}\alpha_k^2\|\bar d_k\|^2\right) + \left(\|c_k + \alpha_k J_k\bar d_k\|_1 - \|c_k\|_1 + \tfrac{\Gamma}{2}\alpha_k^2\|\bar d_k\|^2\right) \\ &\le \alpha_k\bar\tau_k\nabla f_k^T\bar d_k + |1 - \alpha_k|\|c_k\|_1 + \alpha_k\|c_k + J_k\bar d_k\|_1 - \|c_k\|_1 + \frac{\bar\tau_k L + \Gamma}{2}\alpha_k^2\|\bar d_k\|^2 \\ &= \alpha_k\bar\tau_k\nabla f_k^T\bar d_k - \alpha_k\|c_k\|_1 + \frac{\bar\tau_k L + \Gamma}{2}\alpha_k^2\|\bar d_k\|^2 \\ &= \alpha_k\bar\tau_k\bar g_k^T\bar d_k - \alpha_k\|c_k\|_1 + \alpha_k\bar\tau_k(\nabla f_k - \bar g_k)^T\bar d_k + \frac{\bar\tau_k L + \Gamma}{2}\alpha_k^2\|\bar d_k\|^2 \\ &= -\alpha_k\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \alpha_k\bar\tau_k(\nabla f_k - \bar g_k)^T\bar d_k + \frac{\bar\tau_k L + \Gamma}{2}\alpha_k^2\|\bar d_k\|^2, \end{aligned}$$
which completes the proof.
Due to the quality and reliability of the zeroth- and first-order oracles (Oracles 0 and 1), one can only guarantee convergence to a neighborhood of the solution. Assumption 3.14 provides a lower bound on the size of this convergence neighborhood in terms of $\varepsilon$ (and $\varepsilon_{\Delta l}$).
Assumption 3.14. Let
$$\varepsilon > \max\left\{\frac{\epsilon_g}{\eta},\ \frac{\sqrt{\epsilon_f\,\omega_7}}{\omega_8}\right\}\cdot\frac{\max\{\kappa_H, 1\}}{\sqrt{\kappa_l\tau_{\min}}},$$
which is equivalent to $\varepsilon_{\Delta l} > \max\left\{\frac{\epsilon_g}{\eta},\ \frac{\sqrt{\epsilon_f\,\omega_7}}{\omega_8}\right\}$ by Remark 3.5, where
$$0 < \eta < 2(1-\theta)\min\left\{\frac{1}{\eta_1 + \eta_2},\ \frac{1}{\eta_3 + \eta_4}\right\}$$
and $\{\eta_1, \eta_2, \eta_3, \eta_4\} \subset \mathbb{R}_{>0}$ are defined as
η 1 = (1−θ)(1+ τ )τ −1 1+ κ H ζ √ κ l τ min η 2 = (1 − θ) 2τ −1 1 + κ H ζ 2 (1+ τ ) 2τ −1 κ l τ min + τ + 4τ −1 1+ τ ω 2 κ l + (1−θ) 2 (1+ τ ) ζ η 3 = (1−θ)τ −1 τ −1 1+ 3κ H ζ +(1−σ)τ min 1+ κ H ζ (1−σ)τ min √ κ l τ min and η 4 = (1−θ) 2τ 2 −1 τ −1 1+ 3κ H ζ +(1−σ)τ min 1+ κ H ζ 2 (1−σ) 2 τ 3 min κ l + 4τ −1 κ l + 4(1 − θ) 2 τ 2 −1 1+ κ H ζ (1−σ)τ min ζ +τ −1 ζ
with $p \in \left(\tfrac{1}{2}, 1\right]$, and $\{\omega_2, \omega_3, \omega_4, \omega_5, \omega_6, \omega_7, \omega_8\} \subset \mathbb{R}_{>0}$ defined as
ω 2 = max{κ H ,κy}+1 κ l , ω 3 = (1+κ H ζ −1 )κ FO √τ −1 αmax √ κ l +τ −1 κ 2 FO α 2 max ζ , ω 4 = max τ max{κ H ,κy} κ l + √τ −1( 1+κ H ζ −1 )κFOαmax √ κ l + ω 3 , τ −1 (1−σ)τ min (1+3κ H ζ −1 )κ FO √τ −1 αmax √ κ l + (1 + κ H ζ −1 )τ −1 κ 2 FO α 2 max ζ , ω 5 = (1 + τ )τ −1 η ζ + 1+κ H ζ −1 √ κ l τ min + ττ−1(1+κH ζ −1 ) 2 η 4 , ω 6 =τ 2 −1 · (1+κ H ζ −1 ) η ζ + 1+3κ H ζ −1 √ κ l τ min (1−σ)τ min +τ −1 η ζ + 1+κ H ζ −1 √ κ l τ min , ω 7 = 4τ −1 (p− 1 2 )θ max 1+ τ ω 2 1−ηω 5 , 1 1−ηω 6 , 1 + ω 3 + ω 4 , and ω 8 = max τ −1 κ l κ FO + L 2κ l + Γ 2τ min κ l 1−θ ,τ min L+Γ 2τ min κ l 1−θ−η τ −1 κ l max 1+ τ ω 2 1−ηω 5 , 1 √ 1−ηω 6 .
Assumption 3.14 involves many constants and is indeed hard to parse. We make all constants explicit in order to show the exact dependence of the convergence neighborhood on the problem quantities. What is important is that the lower bound on $\varepsilon$ is proportional to the bias in the gradient approximations and to the square root of the noise level in the function approximations.
We are now ready to present the key lemma of this section. In Lemma 3.15, we first define $(p, \bar\alpha, h(\cdot))$, where $p \in (\tfrac{1}{2}, 1]$ is a lower bound on the probability of a true iteration conditioned on the past (before the stopping time), $\bar\alpha \in \mathbb{R}_{>0}$ is the large step threshold, and $h : \mathbb{R}_{>0} \to \mathbb{R}_{>0}$ is a monotonically increasing function (in $\alpha$) that bounds the potential progress made at any given iteration. Moreover, we prove five results that can be summarized as follows: (i) a lower bound (proportional to $\epsilon_f$) on the potential progress with step size $\bar\alpha$; (ii) conditioned on the past, the next iteration is true with probability at least $p$; (iii) a bound on the progress made in any true and successful iteration; (iv) true iterations with small step sizes are successful; and (v) a bound (proportional to $\epsilon_f$) on the damage incurred at any iteration.
Lemma 3.15. Suppose Assumptions 2.1, 2.2, 3.2, 3.4 and 3.14 hold, and define
- $p = 1 - \delta$ when $\epsilon_f$ is a deterministic bound on the noise, and $p = 1 - \delta - \exp\left(-\min\left\{\frac{u^2}{2\nu^2}, \frac{u}{2b}\right\}\right)$ otherwise (with $u = \inf_{x\in\mathcal{X}}\{\epsilon_f - \mathbb{E}[E(x)]\}$, where $E(x) = |f(x;\Xi_0(x)) - f(x)|$);
- $\bar\alpha = \min\left\{\dfrac{1-\theta}{\sqrt{\bar\tau_{-1}/\kappa_l}\,\kappa_{FO} + \frac{L}{2\kappa_l} + \frac{\Gamma}{2\tau_{\min}\kappa_l}},\ \dfrac{2\tau_{\min}\kappa_l\left(1 - \theta - \eta\sqrt{\bar\tau_{-1}/\kappa_l}\max\left\{\sqrt{\frac{1+\epsilon_\tau\omega_2}{1-\eta\omega_5}},\ \frac{1}{\sqrt{1-\eta\omega_6}}\right\}\right)}{\tau_{\min}L + \Gamma}\right\}$;
- $h(\alpha) = \alpha\theta\varepsilon_{\Delta l}^2\min\left\{\frac{1-\eta\omega_5}{1+\epsilon_\tau\omega_2},\ 1-\eta\omega_6,\ \frac{1}{1+\omega_3+\omega_4}\right\}$.
Then, the following results hold:

(i) $h(\bar\alpha) > \frac{4\bar\tau_{-1}\epsilon_f}{p - 1/2}$.

(ii) $P[I_k = 1 \mid \mathcal{F}_{k-1}] \ge p$ with some $p \in \left(\frac{1}{2} + \frac{4\bar\tau_{-1}\epsilon_f}{h(\bar\alpha)},\ 1\right]$.

(iii) If iteration $k$ is true and successful, then $Z_{k+1} \le Z_k - h(\alpha_k) + 4\bar\tau_{-1}\epsilon_f$.

(iv) If $\alpha_k \le \bar\alpha$ and iteration $k$ is true, then iteration $k$ is also successful.

(v) $Z_{k+1} \le Z_k + 2\bar\tau_{-1}\epsilon_f + \bar\tau_{-1}(e_k + e_k^+)$.
Proof. First, we note that: (1) due to the constants and the form, $p$ is a valid probability, i.e., $p \in (\tfrac{1}{2}, 1]$; (2) $\bar\alpha > 0$ is guaranteed by the restriction on $\eta$ in Assumption 3.14; and (3) $h : \mathbb{R}_{>0} \to \mathbb{R}_{>0}$ is a positive function that measures the potential progress made when iterations are true and successful. We proceed by proving the five statements separately.
(i) This result follows directly from the definition of $h(\bar\alpha)$ and the lower bound on $\varepsilon_{\Delta l}$ in Assumption 3.14.
(ii) This proof is essentially the same as that of [22, Proposition 1(ii)]. Let
$$J_k := \mathbb{1}\left\{\|G_k - \nabla f(X_k)\| \le \max\left\{\epsilon_g,\ \kappa_{FO}A_k\sqrt{\Delta l(X_k, T_k, G_k, D_k)}\right\}\right\}.$$
Clearly, by Definition 1,
$$P[I_k = 0 \mid \mathcal{F}_{k-1}] = P\left[J_k = 0 \text{ or } E_k + E_k^+ > 2\epsilon_f \mid \mathcal{F}_{k-1}\right] \le P[J_k = 0 \mid \mathcal{F}_{k-1}] + P\left[E_k + E_k^+ > 2\epsilon_f \mid \mathcal{F}_{k-1}\right].$$
The first term on the right-hand side of the inequality is bounded above by $\delta$, by the first-order probabilistic oracle (Oracle 1). The second term is zero in the case where $\epsilon_f$ is a deterministic bound on the noise. Otherwise, since $E_k$ and $E_k^+$ individually satisfy the one-sided sub-exponential bound in (2.11) with parameters $\epsilon_f$ and $(\nu, b)$, one can show that $E_k + E_k^+$ satisfies (2.11) with parameters $2\epsilon_f$ and $(2\nu, 2b)$. Hence, by the one-sided Bernstein inequality, the second term is bounded above by
$$\exp\left(-\min\left\{\frac{u^2}{2\nu^2},\ \frac{u}{2b}\right\}\right), \quad \text{with } u = \inf_{x\in\mathcal{X}}\{\epsilon_f - \mathbb{E}[E(x)]\}.$$
As a result, $P[I_k = 1 \mid \mathcal{F}_{k-1}] \ge p$ for all $k$, with $p$ as defined in the statement. The range $p \in \left(\frac{1}{2} + \frac{4\bar\tau_{-1}\epsilon_f}{h(\bar\alpha)}, 1\right]$ follows from the definitions of $h(\cdot)$ and $\bar\alpha$ in the statement, together with the inequality on $\varepsilon_{\Delta l}$ in Assumption 3.14.
(iii) Suppose iteration $k$ is true and successful. Since iteration $k$ is true, by Definition 1 we have
$$\|\bar g_k - \nabla f_k\| \le \max\left\{\epsilon_g,\ \kappa_{FO}\alpha_k\sqrt{\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k)}\right\},$$
and we consider the two cases separately. We further subdivide the analysis into the cases $\nabla f_k^T d_k \le 0$ and $\nabla f_k^T d_k > 0$.
Case A: When $\|\bar g_k - \nabla f(x_k)\| \le \kappa_{FO}\alpha_k\sqrt{\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k)}$, by Lemma 3.10,
$$\|\bar d_k - d_k\| \le \zeta^{-1}\|\bar g_k - \nabla f(x_k)\| \le \zeta^{-1}\kappa_{FO}\alpha_k\sqrt{\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k)}.$$
Case A.1: If $\nabla f_k^T d_k \le 0$, by the fact that $\bar\tau_k \ge \tau_k$, the triangle inequality, (2.5) and Lemma 3.12, it follows that
$$\begin{aligned}\Delta l(x_k,\tau_k,\nabla f_k,d_k) - \Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) &= \bar\tau_k\bar g_k^T\bar d_k - \tau_k\nabla f_k^T d_k \le \bar\tau_k\left(\bar g_k^T\bar d_k - \nabla f_k^T d_k\right) \le \bar\tau_k\left|\bar g_k^T\bar d_k - \nabla f_k^T d_k\right| \\ &\le \bar\tau_k\left(\frac{(1+\kappa_H\zeta^{-1})\kappa_{FO}\alpha_k}{\sqrt{\kappa_l\bar\tau_k}} + \frac{\kappa_{FO}^2\alpha_k^2}{\zeta}\right)\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k).\end{aligned} \tag{3.7}$$
Case A.2: If $\nabla f_k^T d_k > 0$, by the triangle inequality, (2.5) and Lemma 3.12,
$$\begin{aligned}\Delta l(x_k,\tau_k,\nabla f_k,d_k) - \Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) &= \bar\tau_k\bar g_k^T\bar d_k - \tau_k\nabla f_k^T d_k \le \left|\bar\tau_k\bar g_k^T\bar d_k - \tau_k\nabla f_k^T d_k\right| \\ &\le \left|(\bar\tau_k - \tau_k)\nabla f_k^T d_k\right| + \bar\tau_k\left|\bar g_k^T\bar d_k - \nabla f_k^T d_k\right| \\ &\le \left|(\bar\tau_k - \tau_k)\nabla f_k^T d_k\right| + \bar\tau_k\left(\frac{(1+\kappa_H\zeta^{-1})\kappa_{FO}\alpha_k}{\sqrt{\kappa_l\bar\tau_k}} + \frac{\kappa_{FO}^2\alpha_k^2}{\zeta}\right)\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k).\end{aligned} \tag{3.8}$$
We proceed to bound the term $|(\bar\tau_k - \tau_k)\nabla f_k^T d_k|$; we consider three cases corresponding to the merit parameter updating formulas ((2.7)-(2.8) and (3.3)-(3.4)).

Case A.2.1: If $\tau_k = \bar\tau_k$, then $|(\bar\tau_k - \tau_k)\nabla f_k^T d_k| = 0$.

Case A.2.2: If $\tau_k = (1 - \epsilon_\tau)\bar\tau_k$, then by the triangle inequality and Lemmas 3.11 and 3.12,
$$\begin{aligned}|(\bar\tau_k - \tau_k)\nabla f_k^T d_k| &= \epsilon_\tau\bar\tau_k|\nabla f_k^T d_k| \le \epsilon_\tau\bar\tau_k\left(|\bar g_k^T\bar d_k| + |\nabla f_k^T d_k - \bar g_k^T\bar d_k|\right) \\ &\le \epsilon_\tau\left(\frac{\max\{\kappa_H,\kappa_y\}}{\kappa_l} + \frac{\sqrt{\bar\tau_k}(1+\kappa_H\zeta^{-1})\kappa_{FO}\alpha_k}{\sqrt{\kappa_l}}\right)\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) \\ &\quad + \epsilon_\tau\bar\tau_k\left(\frac{(1+\kappa_H\zeta^{-1})\kappa_{FO}\alpha_k}{\sqrt{\kappa_l\bar\tau_k}} + \frac{\kappa_{FO}^2\alpha_k^2}{\zeta}\right)\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k).\end{aligned}$$

Case A.2.3: If $\bar\tau_k > \tau_k = \frac{(1-\sigma)\|c_k\|_1}{\nabla f_k^T d_k + \max\{d_k^T H_k d_k, 0\}}$, then by (2.7)-(2.8),
$$\nabla f_k^T d_k + \max\{d_k^T H_k d_k, 0\} > \frac{(1-\sigma)\|c_k\|_1}{\bar\tau_k} \ge \bar g_k^T\bar d_k + \max\{\bar d_k^T H_k\bar d_k, 0\}. \tag{3.9}$$
By Lemma 3.3, we have $\tau_k \ge \tau_{\min}$ for all $k \in \mathbb{N}$. Moreover, it follows from (2.5) and Lemma 3.9 that $0 \le \Delta l(x_k, \tau_k, \nabla f_k, d_k)$, which implies $\tau_k\nabla f_k^T d_k \le \|c_k\|_1$. Using the facts that $\tau_k \in \mathbb{R}_{>0}$ and $\nabla f_k^T d_k > 0$,
$$\frac{|\nabla f_k^T d_k|}{\|c_k\|_1} = \frac{\nabla f_k^T d_k}{\|c_k\|_1} \le \frac{1}{\tau_k}. \tag{3.10}$$
By Lemma 3.12, (3.9) and (3.10), it follows that
$$\begin{aligned}|(\bar\tau_k - \tau_k)\nabla f_k^T d_k| &= \left(\bar\tau_k - \frac{(1-\sigma)\|c_k\|_1}{\nabla f_k^T d_k + \max\{d_k^T H_k d_k, 0\}}\right)|\nabla f_k^T d_k| \\ &\le \frac{\left(\nabla f_k^T d_k + \max\{d_k^T H_k d_k, 0\}\right) - \left(\bar g_k^T\bar d_k + \max\{\bar d_k^T H_k\bar d_k, 0\}\right)}{\nabla f_k^T d_k + \max\{d_k^T H_k d_k, 0\}}\cdot\bar\tau_k|\nabla f_k^T d_k| \\ &\le \frac{\left|\left(\nabla f_k^T d_k + \max\{d_k^T H_k d_k, 0\}\right) - \left(\bar g_k^T\bar d_k + \max\{\bar d_k^T H_k\bar d_k, 0\}\right)\right|}{(1-\sigma)\|c_k\|_1}\cdot\bar\tau_k^2|\nabla f_k^T d_k| \\ &\le \frac{\bar\tau_k^2}{(1-\sigma)\tau_k}\left(|\nabla f_k^T d_k - \bar g_k^T\bar d_k| + \left|\max\{d_k^T H_k d_k, 0\} - \max\{\bar d_k^T H_k\bar d_k, 0\}\right|\right) \\ &\le \frac{\bar\tau_k^2}{(1-\sigma)\tau_k}\left(|\nabla f_k^T d_k - \bar g_k^T\bar d_k| + |d_k^T H_k d_k - \bar d_k^T H_k\bar d_k|\right) \\ &\le \frac{\bar\tau_k^2}{(1-\sigma)\tau_{\min}}\left(\frac{(1+3\kappa_H\zeta^{-1})\kappa_{FO}\alpha_k}{\sqrt{\kappa_l\bar\tau_k}} + (1+\kappa_H\zeta^{-1})\zeta^{-1}\kappa_{FO}^2\alpha_k^2\right)\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k).\end{aligned}$$
Combining (3.7), (3.8) and Cases A.2.1-A.2.3, and using $\alpha_k \le \alpha_{\max}$ and $\tau_{\min} \le \bar\tau_k \le \bar\tau_{-1}$, it follows that
$$\Delta l(x_k,\tau_k,\nabla f_k,d_k) - \Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) \le (\omega_3 + \omega_4)\,\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k),$$
where $\{\omega_3, \omega_4\} \subset \mathbb{R}_{>0}$ are as defined in Assumption 3.14 ($\omega_3$ bounds the coefficients in (3.7)-(3.8), and $\omega_4$ the maximum of the coefficients from Cases A.2.2 and A.2.3). Since $\{\omega_3, \omega_4\} \subset \mathbb{R}_{>0}$, this yields
$$\frac{\Delta l(x_k,\tau_k,\nabla f_k,d_k)}{1+\omega_3+\omega_4} \le \Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k).$$
By the fact that iteration $k$ is successful and Definition 2, it follows that
$$\bar\phi(x_k^+,\bar\tau_k;\xi_k^+) - \bar\phi(x_k,\bar\tau_k;\xi_k) \le -\alpha_k\theta\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + 2\bar\tau_k\epsilon_f \le -\alpha_k\theta\frac{\Delta l(x_k,\tau_k,\nabla f_k,d_k)}{1+\omega_3+\omega_4} + 2\bar\tau_{-1}\epsilon_f.$$
Hence, it follows that
$$\begin{aligned} Z_{k+1} - Z_k &= \phi(x_{k+1},\bar\tau_{k+1}) - \phi(x_k,\bar\tau_k) - \bar\tau_{k+1}f_{\inf} + \bar\tau_kf_{\inf} \\ &\le \phi(x_{k+1},\bar\tau_{k+1}) - \bar\phi(x_k,\bar\tau_k;\xi_k) - \bar\tau_{k+1}f_{\inf} + \bar\tau_kf_{\inf} + \bar\tau_ke_k \\ &= \phi(x_{k+1},\bar\tau_{k+1}) - \bar\phi(x_{k+1},\bar\tau_k;\xi_k^+) + \bar\phi(x_{k+1},\bar\tau_k;\xi_k^+) - \bar\phi(x_k,\bar\tau_k;\xi_k) - \bar\tau_{k+1}f_{\inf} + \bar\tau_kf_{\inf} + \bar\tau_ke_k \\ &\le -\alpha_k\theta\frac{\Delta l(x_k,\tau_k,\nabla f_k,d_k)}{1+\omega_3+\omega_4} + 2\bar\tau_{-1}\epsilon_f + (\bar\tau_{k+1} - \bar\tau_k)(f(x_{k+1}) - f_{\inf}) + \bar\tau_k(e_k + e_k^+) \\ &\le -\alpha_k\theta\frac{\Delta l(x_k,\tau_k,\nabla f_k,d_k)}{1+\omega_3+\omega_4} + 2\bar\tau_{-1}\epsilon_f + \bar\tau_k(e_k + e_k^+). \end{aligned} \tag{3.11}$$
Case B: When $\|\bar g_k - \nabla f(x_k)\| \le \epsilon_g$, by the condition that $k < T_{\varepsilon_{\Delta l}}$ and Definition 4, it follows that $\sqrt{\Delta l(x_k,\tau_k,\nabla f_k,d_k)} > \varepsilon_{\Delta l} > \epsilon_g/\eta$. By Lemma 3.10,
$$\|\bar d_k - d_k\| \le \zeta^{-1}\|\bar g_k - \nabla f_k\| \le \zeta^{-1}\epsilon_g < \zeta^{-1}\eta\sqrt{\Delta l(x_k,\tau_k,\nabla f_k,d_k)}.$$
Case B.1: If $\nabla f_k^T d_k \le 0$, by the fact that $\bar\tau_k \ge \tau_k$, the triangle inequality, (2.5) and Lemma 3.12, it follows that
$$\begin{aligned}\Delta l(x_k,\tau_k,\nabla f_k,d_k) - \Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) &= \bar\tau_k\bar g_k^T\bar d_k - \tau_k\nabla f_k^T d_k \le \bar\tau_k\left|\bar g_k^T\bar d_k - \nabla f_k^T d_k\right| \\ &\le \bar\tau_k\left(\zeta^{-1}\epsilon_g^2 + \frac{(1+\kappa_H\zeta^{-1})\epsilon_g}{\sqrt{\kappa_l\tau_k}}\sqrt{\Delta l(x_k,\tau_k,\nabla f_k,d_k)}\right) \\ &\le \bar\tau_k\left(\frac{\eta}{\zeta} + \frac{1+\kappa_H\zeta^{-1}}{\sqrt{\kappa_l\tau_k}}\right)\eta\,\Delta l(x_k,\tau_k,\nabla f_k,d_k).\end{aligned} \tag{3.12}$$
Case B.2: If $\nabla f_k^T d_k > 0$, by the triangle inequality, (2.5) and Lemma 3.12,
$$\begin{aligned}\Delta l(x_k,\tau_k,\nabla f_k,d_k) - \Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) &= \bar\tau_k\bar g_k^T\bar d_k - \tau_k\nabla f_k^T d_k \le \left|\bar\tau_k\bar g_k^T\bar d_k - \tau_k\nabla f_k^T d_k\right| \\ &\le \left|(\bar\tau_k - \tau_k)\nabla f_k^T d_k\right| + \bar\tau_k\left|\bar g_k^T\bar d_k - \nabla f_k^T d_k\right| \\ &\le \left|(\bar\tau_k - \tau_k)\nabla f_k^T d_k\right| + \bar\tau_k\left(\zeta^{-1}\epsilon_g^2 + \frac{(1+\kappa_H\zeta^{-1})\epsilon_g}{\sqrt{\kappa_l\tau_k}}\sqrt{\Delta l(x_k,\tau_k,\nabla f_k,d_k)}\right) \\ &\le \left|(\bar\tau_k - \tau_k)\nabla f_k^T d_k\right| + \bar\tau_k\left(\frac{\eta}{\zeta} + \frac{1+\kappa_H\zeta^{-1}}{\sqrt{\kappa_l\tau_k}}\right)\eta\,\Delta l(x_k,\tau_k,\nabla f_k,d_k).\end{aligned} \tag{3.13}$$
We proceed to bound the term $|(\bar\tau_k - \tau_k)\nabla f_k^T d_k|$.

Case B.2.1: If $\tau_k = \bar\tau_k$, then $|(\bar\tau_k - \tau_k)\nabla f_k^T d_k| = 0$.

Case B.2.2: If $\tau_k = (1 - \epsilon_\tau)\bar\tau_k$, then by Lemmas 3.11 and 3.12 and Assumption 3.14,
$$\begin{aligned}|(\bar\tau_k - \tau_k)\nabla f_k^T d_k| &= \epsilon_\tau\bar\tau_k|\nabla f_k^T d_k| \le \epsilon_\tau\bar\tau_k\left(|\bar g_k^T\bar d_k| + |\nabla f_k^T d_k - \bar g_k^T\bar d_k|\right) \\ &\le \epsilon_\tau\omega_2\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \frac{\epsilon_\tau\bar\tau_k(1+\kappa_H\zeta^{-1})^2}{4}\epsilon_g^2 + \epsilon_\tau\bar\tau_k\left(\zeta^{-1}\epsilon_g^2 + \frac{(1+\kappa_H\zeta^{-1})\epsilon_g}{\sqrt{\kappa_l\tau_k}}\sqrt{\Delta l(x_k,\tau_k,\nabla f_k,d_k)}\right) \\ &\le \epsilon_\tau\omega_2\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \epsilon_\tau\bar\tau_k\eta\left(\frac{(1+\kappa_H\zeta^{-1})^2\eta}{4} + \frac{\eta}{\zeta} + \frac{1+\kappa_H\zeta^{-1}}{\sqrt{\kappa_l\tau_k}}\right)\Delta l(x_k,\tau_k,\nabla f_k,d_k).\end{aligned}$$

Case B.2.3: If $\bar\tau_k > \tau_k = \frac{(1-\sigma)\|c_k\|_1}{\nabla f_k^T d_k + \max\{d_k^T H_k d_k, 0\}}$, then following the same logic as in Case A.2.3, by Lemma 3.12, (3.9) and (3.10),
$$\begin{aligned}|(\bar\tau_k - \tau_k)\nabla f_k^T d_k| &\le \frac{\bar\tau_k^2}{(1-\sigma)\tau_k}\left(|\nabla f_k^T d_k - \bar g_k^T\bar d_k| + |d_k^T H_k d_k - \bar d_k^T H_k\bar d_k|\right) \\ &\le \frac{\bar\tau_k^2}{(1-\sigma)\tau_{\min}}\left(\zeta^{-1}\epsilon_g^2 + \frac{(1+\kappa_H\zeta^{-1})\epsilon_g}{\sqrt{\kappa_l\tau_k}}\sqrt{\Delta l(x_k,\tau_k,\nabla f_k,d_k)} + \kappa_H\zeta^{-2}\epsilon_g^2 + \frac{2\kappa_H\zeta^{-1}\epsilon_g}{\sqrt{\kappa_l\tau_k}}\sqrt{\Delta l(x_k,\tau_k,\nabla f_k,d_k)}\right) \\ &\le \frac{\bar\tau_k^2}{(1-\sigma)\tau_{\min}}\left((1+\kappa_H\zeta^{-1})\zeta^{-1}\eta + \frac{1+3\kappa_H\zeta^{-1}}{\sqrt{\kappa_l\tau_k}}\right)\eta\,\Delta l(x_k,\tau_k,\nabla f_k,d_k).\end{aligned}$$
Combining (3.12), (3.13) and Cases B.2.1-B.2.3, it follows that
$$\Delta l(x_k,\tau_k,\nabla f_k,d_k) - \Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) \le \max\left\{\epsilon_\tau\omega_2\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \eta\omega_5\Delta l(x_k,\tau_k,\nabla f_k,d_k),\ \eta\omega_6\Delta l(x_k,\tau_k,\nabla f_k,d_k)\right\}, \tag{3.14}$$
where $\{\omega_2, \omega_5, \omega_6\} \subset \mathbb{R}_{>0}$ are defined in Assumption 3.14 and collect the coefficients above. Thus, it follows that
$$\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) \ge \min\left\{\frac{1-\eta\omega_5}{1+\epsilon_\tau\omega_2},\ 1-\eta\omega_6\right\}\cdot\Delta l(x_k,\tau_k,\nabla f_k,d_k). \tag{3.15}$$
By selecting $\eta$ as in Assumption 3.14, and using the fact that iteration $k$ is successful together with Definition 2,
$$\bar\phi(x_k^+,\bar\tau_k;\xi_k^+) - \bar\phi(x_k,\bar\tau_k;\xi_k) \le -\alpha_k\theta\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + 2\bar\tau_k\epsilon_f \le -\alpha_k\theta\min\left\{\frac{1-\eta\omega_5}{1+\epsilon_\tau\omega_2},\ 1-\eta\omega_6\right\}\Delta l(x_k,\tau_k,\nabla f_k,d_k) + 2\bar\tau_{-1}\epsilon_f.$$
Hence, following similar logic as in (3.11), it follows that
$$\begin{aligned} Z_{k+1} - Z_k &\le \phi(x_{k+1},\bar\tau_{k+1}) - \bar\phi(x_{k+1},\bar\tau_k;\xi_k^+) + \bar\phi(x_{k+1},\bar\tau_k;\xi_k^+) - \bar\phi(x_k,\bar\tau_k;\xi_k) - \bar\tau_{k+1}f_{\inf} + \bar\tau_kf_{\inf} + \bar\tau_ke_k \\ &\le -\alpha_k\theta\min\left\{\frac{1-\eta\omega_5}{1+\epsilon_\tau\omega_2},\ 1-\eta\omega_6\right\}\Delta l(x_k,\tau_k,\nabla f_k,d_k) + 2\bar\tau_{-1}\epsilon_f + (\bar\tau_{k+1} - \bar\tau_k)(f(x_{k+1}) - f_{\inf}) + \bar\tau_k(e_k + e_k^+) \\ &\le -\alpha_k\theta\min\left\{\frac{1-\eta\omega_5}{1+\epsilon_\tau\omega_2},\ 1-\eta\omega_6\right\}\Delta l(x_k,\tau_k,\nabla f_k,d_k) + 2\bar\tau_{-1}\epsilon_f + \bar\tau_k(e_k + e_k^+). \end{aligned}$$
Combining the results for Cases A and B, together with the assumption that the iteration is true, it follows that
$$Z_{k+1} - Z_k \le -\min\left\{\frac{1-\eta\omega_5}{1+\epsilon_\tau\omega_2},\ 1-\eta\omega_6,\ \frac{1}{1+\omega_3+\omega_4}\right\}\alpha_k\theta\,\Delta l(x_k,\tau_k,\nabla f_k,d_k) + 2\bar\tau_{-1}\epsilon_f + \bar\tau_{-1}(e_k + e_k^+) \le -h(\alpha_k) + 4\bar\tau_{-1}\epsilon_f,$$
where the last inequality follows from the conditions $\Delta l(x_k,\tau_k,\nabla f_k,d_k) > \varepsilon_{\Delta l}^2$ and $e_k + e_k^+ \le 2\epsilon_f$.
(iv) We first show that for any $k \in \mathbb{N}$, if $\alpha_k \le \bar\alpha$ and iteration $k$ is true, then
$$\phi(x_k + \alpha_k\bar d_k,\bar\tau_k) \le \phi(x_k,\bar\tau_k) - \alpha_k\theta\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k).$$
Since iteration $k$ is true, by Definition 1 it follows that
$$\|\bar g_k - \nabla f_k\| \le \max\left\{\epsilon_g,\ \kappa_{FO}\alpha_k\sqrt{\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k)}\right\},$$
and we consider the two cases separately.
Case A: When $\|\bar g_k - \nabla f_k\| \le \kappa_{FO}\alpha_k\sqrt{\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k)}$, by
$$\alpha_k \le \bar\alpha \le \frac{1-\theta}{\sqrt{\bar\tau_{-1}/\kappa_l}\,\kappa_{FO} + \frac{L}{2\kappa_l} + \frac{\Gamma}{2\tau_{\min}\kappa_l}},$$
the Cauchy-Schwarz inequality, Assumption 3.2, and Lemmas 3.8 and 3.13,
$$\begin{aligned}\phi(x_k+\alpha_k\bar d_k,\bar\tau_k) - \phi(x_k,\bar\tau_k) &\le -\alpha_k\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \alpha_k\bar\tau_k(\nabla f_k - \bar g_k)^T\bar d_k + \frac{\bar\tau_kL+\Gamma}{2}\alpha_k^2\|\bar d_k\|^2 \\ &\le -\alpha_k\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \alpha_k\bar\tau_k\|\nabla f_k - \bar g_k\|\|\bar d_k\| + \frac{\bar\tau_kL+\Gamma}{2}\alpha_k^2\|\bar d_k\|^2 \\ &\le -\alpha_k\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \sqrt{\frac{\bar\tau_k}{\kappa_l}}\,\kappa_{FO}\alpha_k^2\,\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \frac{\bar\tau_kL+\Gamma}{2\bar\tau_k\kappa_l}\alpha_k^2\,\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) \\ &\le -\left(1 - \left(\sqrt{\bar\tau_{-1}/\kappa_l}\,\kappa_{FO} + \frac{L}{2\kappa_l} + \frac{\Gamma}{2\tau_{\min}\kappa_l}\right)\bar\alpha\right)\alpha_k\,\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) \\ &\le -\alpha_k\theta\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k).\end{aligned}$$
Case B: When $\|\bar g_k - \nabla f_k\| \le \epsilon_g$ and iteration $k$ is true, (3.15) holds. Moreover, by the condition that $k < T_{\varepsilon_{\Delta l}}$ and Definition 4, it follows that
$$\|\bar g_k - \nabla f_k\| \le \epsilon_g < \eta\varepsilon_{\Delta l} < \eta\sqrt{\Delta l(x_k,\tau_k,\nabla f_k,d_k)}.$$
Therefore, by
$$\alpha_k \le \bar\alpha \le \frac{2\tau_{\min}\kappa_l\left(1 - \theta - \eta\sqrt{\bar\tau_{-1}/\kappa_l}\max\left\{\sqrt{\frac{1+\epsilon_\tau\omega_2}{1-\eta\omega_5}},\ \frac{1}{\sqrt{1-\eta\omega_6}}\right\}\right)}{\tau_{\min}L+\Gamma},$$
the Cauchy-Schwarz inequality, Assumption 3.2, (3.15), and Lemmas 3.8 and 3.13,
$$\begin{aligned}\phi(x_k+\alpha_k\bar d_k,\bar\tau_k) - \phi(x_k,\bar\tau_k) &\le -\alpha_k\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \alpha_k\bar\tau_k\|\nabla f_k - \bar g_k\|\|\bar d_k\| + \frac{\bar\tau_kL+\Gamma}{2}\alpha_k^2\|\bar d_k\|^2 \\ &\le -\alpha_k\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \frac{\bar\tau_kL+\Gamma}{2\bar\tau_k\kappa_l}\alpha_k^2\,\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \alpha_k\bar\tau_k\eta\sqrt{\Delta l(x_k,\tau_k,\nabla f_k,d_k)}\sqrt{\frac{\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k)}{\kappa_l\bar\tau_k}} \\ &\le -\alpha_k\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \frac{\bar\tau_kL+\Gamma}{2\bar\tau_k\kappa_l}\alpha_k^2\,\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \alpha_k\eta\sqrt{\frac{\bar\tau_k}{\kappa_l}}\max\left\{\sqrt{\frac{1+\epsilon_\tau\omega_2}{1-\eta\omega_5}},\ \frac{1}{\sqrt{1-\eta\omega_6}}\right\}\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) \\ &\le -\alpha_k\left(1 - \eta\sqrt{\frac{\bar\tau_{-1}}{\kappa_l}}\max\left\{\sqrt{\frac{1+\epsilon_\tau\omega_2}{1-\eta\omega_5}},\ \frac{1}{\sqrt{1-\eta\omega_6}}\right\} - \frac{\tau_{\min}L+\Gamma}{2\tau_{\min}\kappa_l}\bar\alpha\right)\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) \\ &\le -\alpha_k\theta\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k).\end{aligned}$$
Combining Cases A and B, together with the fact that the iteration is true, we conclude the proof of (iv) by
$$\bar\phi(x_k + \alpha_k\bar d_k,\bar\tau_k;\xi_k^+) - \bar\phi(x_k,\bar\tau_k;\xi_k) \le -\alpha_k\theta\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + \bar\tau_ke_k + \bar\tau_ke_k^+ \le -\alpha_k\theta\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + 2\bar\tau_k\epsilon_f.$$
(v) If iteration $k$ is unsuccessful, then by definition $Z_{k+1} = Z_k$, so the inequality holds trivially. On the other hand, if iteration $k$ is successful, then, starting with the second inequality of (3.11),
$$\begin{aligned} Z_{k+1} - Z_k &\le \phi(x_{k+1},\bar\tau_{k+1}) - \bar\phi(x_{k+1},\bar\tau_k;\xi_k^+) + \bar\phi(x_{k+1},\bar\tau_k;\xi_k^+) - \bar\phi(x_k,\bar\tau_k;\xi_k) - \bar\tau_{k+1}f_{\inf} + \bar\tau_kf_{\inf} + \bar\tau_ke_k \\ &\le -\alpha_k\theta\Delta l(x_k,\bar\tau_k,\bar g_k,\bar d_k) + (\bar\tau_{k+1} - \bar\tau_k)(f(x_{k+1}) - f_{\inf}) + 2\bar\tau_k\epsilon_f + \bar\tau_k(e_k + e_k^+) \\ &\le 2\bar\tau_{-1}\epsilon_f + \bar\tau_{-1}(e_k + e_k^+), \end{aligned}$$
which concludes the proof of (v).
The next two lemmas will be used in the iteration complexity analysis that follows.

Lemma 3.17. For any positive integer $t$ and any $\hat p \in (\tfrac{1}{2}, 1)$, we have
$$P\left[T_{\varepsilon_{\Delta l}} > t,\ \sum_{k=0}^{t-1} I_k \ge \hat pt,\ \sum_{k=0}^{t-1}\Theta_kI_kU_k < \left(\hat p - \tfrac{1}{2}\right)t - \tfrac{l}{2}\right] = 0, \quad \text{where } l = \max\left\{-\frac{\ln\alpha_0 - \ln\bar\alpha}{\ln\gamma},\ 0\right\}.$$

Proof. The proof is the same as that of [22, Lemma 3.5].
We are now ready to present the main theorem of the manuscript: the iteration complexity of Algorithm 1.
Theorem 3.18. Suppose Assumptions 2.1, 2.2, 3.2, 3.4 and 3.14 hold and that the conditions of Oracles 0 and 1 are satisfied. Then, for any $s \ge 0$, $\hat p \in \left(\frac{1}{2} + \frac{4\bar\tau_{-1}\epsilon_f + s}{h(\bar\alpha)},\ p\right)$, and $t \ge \frac{R}{\hat p - \frac{1}{2} - \frac{4\bar\tau_{-1}\epsilon_f + s}{h(\bar\alpha)}}$,
$$P\left[T_{\varepsilon_{\Delta l}} \le t\right] \ge 1 - e^{-\frac{(p-\hat p)^2}{2p^2}t} - e^{-\min\left\{\frac{s^2t}{2(2\bar\tau_{-1}\nu)^2},\ \frac{st}{2(2\bar\tau_{-1}b)}\right\}},$$
where $R = \frac{Z_0}{h(\bar\alpha)} + \max\left\{\frac{\ln\bar\alpha - \ln\alpha_0}{2\ln\gamma}, 0\right\}$, and $(p, \bar\alpha, h(\cdot))$ are as defined in Lemma 3.15.
Proof. By the law of total probability,
$$P[T_{\varepsilon_{\Delta l}} > t] = \underbrace{P\left[T_{\varepsilon_{\Delta l}} > t,\ \tfrac{1}{t}\sum_{k=0}^{t-1}\left(2\bar\tau_{-1}\epsilon_f + \bar\tau_{-1}(E_k + E_k^+)\right) > 4\bar\tau_{-1}\epsilon_f + s\right]}_{P[A]} + \underbrace{P\left[T_{\varepsilon_{\Delta l}} > t,\ \tfrac{1}{t}\sum_{k=0}^{t-1}\left(2\bar\tau_{-1}\epsilon_f + \bar\tau_{-1}(E_k + E_k^+)\right) \le 4\bar\tau_{-1}\epsilon_f + s\right]}_{P[B]}.$$
First we bound $P[A]$. For each iteration $k$, since $E_k$ and $E_k^+$ satisfy the one-sided sub-exponential bound (2.11) with parameters $(\nu, b)$, one can show that $\bar\tau_{-1}(E_k + E_k^+)$ satisfies (2.11) with parameters $(2\bar\tau_{-1}\nu, 2\bar\tau_{-1}b)$. Moreover, since $\bar\tau_{-1}(E_k + E_k^+)$ has mean bounded by $2\bar\tau_{-1}\epsilon_f$, applying the (one-sided) Bernstein inequality, for any $s \ge 0$,
$$P[A] \le P\left[\tfrac{1}{t}\sum_{k=0}^{t-1}\bar\tau_{-1}(E_k + E_k^+) > 2\bar\tau_{-1}\epsilon_f + s\right] \le e^{-\min\left\{\frac{s^2t}{2(2\bar\tau_{-1}\nu)^2},\ \frac{st}{2(2\bar\tau_{-1}b)}\right\}}.$$
Let $l = \max\left\{-\frac{\ln\alpha_0 - \ln\bar\alpha}{\ln\gamma}, 0\right\}$. To bound $P[B]$ we apply the law of total probability:
$$P[B] = \underbrace{P\left[T_{\varepsilon_{\Delta l}} > t,\ \tfrac{1}{t}\sum_{k=0}^{t-1}\left(2\bar\tau_{-1}\epsilon_f + \bar\tau_{-1}(E_k + E_k^+)\right) \le 4\bar\tau_{-1}\epsilon_f + s,\ \sum_{k=0}^{t-1}\Theta_kI_kU_k < \left(\hat p - \tfrac{1}{2}\right)t - \tfrac{l}{2}\right]}_{P[B_1]} + \underbrace{P\left[T_{\varepsilon_{\Delta l}} > t,\ \tfrac{1}{t}\sum_{k=0}^{t-1}\left(2\bar\tau_{-1}\epsilon_f + \bar\tau_{-1}(E_k + E_k^+)\right) \le 4\bar\tau_{-1}\epsilon_f + s,\ \sum_{k=0}^{t-1}\Theta_kI_kU_k \ge \left(\hat p - \tfrac{1}{2}\right)t - \tfrac{l}{2}\right]}_{P[B_2]}.$$
We first show that $P[B_2] = 0$. By Lemma 3.15, for any iteration $k < T_{\varepsilon_{\Delta l}}$, it follows that
$$Z_{k+1} \le Z_k - h(\bar\alpha) + 2\bar\tau_{-1}\epsilon_f + \bar\tau_{-1}(E_k + E_k^+) \le Z_k - h(\bar\alpha) + 4\bar\tau_{-1}\epsilon_f \quad \text{if } U_kI_k\Theta_k = 1,$$
and
$$Z_{k+1} \le Z_k + 2\bar\tau_{-1}\epsilon_f + \bar\tau_{-1}(E_k + E_k^+) \quad \text{if } U_kI_k\Theta_k = 0.$$
By the definition of the zeroth-order oracle (Oracle 0), $\mathbb{E}[E_k]$ and $\mathbb{E}[E_k^+]$ are bounded above by $\epsilon_f$ for all $k$. The event $T_{\varepsilon_{\Delta l}} > t$ implies that $Z_t > 0$ (since $Z_t = 0$ can only happen when $T_{\varepsilon_{\Delta l}} \le t$, by the proof of Lemma 3.6). This, together with $\tfrac{1}{t}\sum_{k=0}^{t-1}\left(2\bar\tau_{-1}\epsilon_f + \bar\tau_{-1}(E_k + E_k^+)\right) \le 4\bar\tau_{-1}\epsilon_f + s$, in turn implies the event $\sum_{k=0}^{t-1}\Theta_kI_kU_k < \left(\hat p - \tfrac{1}{2}\right)t - \tfrac{l}{2}$. To see this, assume that $\sum_{k=0}^{t-1}\Theta_kI_kU_k \ge \left(\hat p - \tfrac{1}{2}\right)t - \tfrac{l}{2}$; then
$$\begin{aligned} Z_t &\le Z_0 - \left(\left(\hat p - \tfrac{1}{2}\right)t - \tfrac{l}{2}\right)h(\bar\alpha) + \sum_{k=0}^{t-1}\left(2\bar\tau_{-1}\epsilon_f + \bar\tau_{-1}(E_k + E_k^+)\right) \\ &\le Z_0 - \left(\left(\hat p - \tfrac{1}{2}\right)t - \tfrac{l}{2}\right)h(\bar\alpha) + t\left(4\bar\tau_{-1}\epsilon_f + s\right) \\ &= Z_0 - \left(\left(\hat p - \tfrac{1}{2}\right)h(\bar\alpha) - \left(4\bar\tau_{-1}\epsilon_f + s\right)\right)t + \tfrac{l}{2}h(\bar\alpha) \le 0. \end{aligned}$$
The last inequality above is due to the assumptions that $\hat p > \frac{1}{2} + \frac{4\bar\tau_{-1}\epsilon_f + s}{h(\bar\alpha)}$ and $t \ge \frac{R}{\hat p - \frac{1}{2} - \frac{4\bar\tau_{-1}\epsilon_f + s}{h(\bar\alpha)}}$. Hence, $P[B_2] = 0$.
We now bound $P[B_1]$; by Lemmas 3.16 and 3.17,
$$\begin{aligned} P[B_1] &\le P\left[T_{\varepsilon_{\Delta l}} > t,\ \sum_{k=0}^{t-1}\Theta_kI_kU_k < \left(\hat p - \tfrac{1}{2}\right)t - \tfrac{l}{2}\right] \\ &= P\left[T_{\varepsilon_{\Delta l}} > t,\ \sum_{k=0}^{t-1}\Theta_kI_kU_k < \left(\hat p - \tfrac{1}{2}\right)t - \tfrac{l}{2},\ \sum_{k=0}^{t-1}I_k < \hat pt\right] + P\left[T_{\varepsilon_{\Delta l}} > t,\ \sum_{k=0}^{t-1}\Theta_kI_kU_k < \left(\hat p - \tfrac{1}{2}\right)t - \tfrac{l}{2},\ \sum_{k=0}^{t-1}I_k \ge \hat pt\right] \\ &\le P\left[\sum_{k=0}^{t-1}I_k < \hat pt\right] + P\left[T_{\varepsilon_{\Delta l}} > t,\ \sum_{k=0}^{t-1}\Theta_kI_kU_k < \left(\hat p - \tfrac{1}{2}\right)t - \tfrac{l}{2},\ \sum_{k=0}^{t-1}I_k \ge \hat pt\right] \\ &\le e^{-\frac{(p-\hat p)^2}{2p^2}t} + 0 = e^{-\frac{(p-\hat p)^2}{2p^2}t}. \end{aligned}$$
Combining the bounds on $P[A]$ and $P[B]$ completes the proof.
Corollary 3.19. Under the conditions of Theorem 3.18, for any $s \ge 0$, $\hat p \in \left(\frac{1}{2} + \frac{4\bar\tau_{-1}\epsilon_f + s}{\bar\alpha\theta\omega_p\varepsilon_{\Delta l}^2},\ p\right)$ and $t \ge \frac{\hat R}{\hat p - \frac{1}{2} - \frac{4\bar\tau_{-1}\epsilon_f + s}{\bar\alpha\theta\omega_p\varepsilon_{\Delta l}^2}}$,
$$P\left[T_{\varepsilon_{\Delta l}} \le t\right] \ge 1 - e^{-\frac{(p-\hat p)^2}{2p^2}t} - e^{-\min\left\{\frac{s^2t}{2(2\bar\tau_{-1}\nu)^2},\ \frac{st}{2(2\bar\tau_{-1}b)}\right\}}, \tag{3.16}$$
where $\hat R = \frac{\phi(x_0,\bar\tau_{-1}) - \phi_{\min} - (\bar\tau_{-1} - \tau_{\min})f_{\inf}}{\bar\alpha\theta\omega_p\varepsilon_{\Delta l}^2} + \max\left\{\frac{\ln\bar\alpha - \ln\alpha_0}{2\ln\gamma}, 0\right\}$, or equivalently, by Remark 3.5,
$$\hat R = \frac{\max\{\kappa_H^2, 1\}}{\kappa_l\tau_{\min}}\cdot\frac{\phi(x_0,\bar\tau_{-1}) - \phi_{\min} - (\bar\tau_{-1} - \tau_{\min})f_{\inf}}{\bar\alpha\theta\omega_p\varepsilon^2} + \max\left\{\frac{\ln\bar\alpha - \ln\alpha_0}{2\ln\gamma}, 0\right\},$$
with $\omega_p = \min\left\{\frac{1-\eta\omega_5}{1+\epsilon_\tau\omega_2},\ 1-\eta\omega_6,\ \frac{1}{1+\omega_3+\omega_4}\right\}$, and the remaining constants are defined in Assumption 3.14.
Remark 3.20. We make a few remarks about the main theoretical results of the paper (Theorem 3.18 and Corollary 3.19).
- (Iteration Complexity) By Definition 4 (and Remark 3.5) and Corollary 3.19, we conclude that, with overwhelmingly high probability, the iteration complexity of Algorithm 1 to generate a primal-dual iterate $(x_k, y_k) \in \mathbb{R}^n \times \mathbb{R}^m$ that satisfies $\max\{\|\nabla f_k + J_k^T y_k\|, \|c_k\|\} \le \varepsilon$ is $O(\varepsilon^{-2})$. This iteration complexity is of the same order, in terms of the dependence on $\varepsilon$, as the iteration complexity that can be derived for the deterministic counterpart [16], with the additional restriction that $\varepsilon$ is bounded away from zero (Assumption 3.14) due to the noise and bias in the oracles.
- (Almost Sure Convergence) We note that Algorithm 1 finds an $\varepsilon$-stationary iterate in a finite number of iterations with probability 1, i.e., $P\left[\cap_{k=1}^\infty\cup_{t=k}^\infty\left(T_{\varepsilon_{\Delta l}} > t\right)\right] = 0$. This is a direct consequence of the Borel-Cantelli lemma, since it follows from (3.16) that the probabilities of the failure events are summable, i.e.,
$$\sum_{t=1}^\infty P\left[T_{\varepsilon_{\Delta l}} > t\right] = \sum_{t=1}^\infty\left(1 - P\left[T_{\varepsilon_{\Delta l}} \le t\right]\right) < \infty.$$
- (Unconstrained Setting) The high probability complexity bound in this paper generalizes the unconstrained version. In the unconstrained setting, the parameters reduce to $\sigma = 0$, $\omega_1 = 0$, $\omega_2 = 1$, $\Gamma = 0$, $\zeta = 1$, $\kappa_H = 1$, $\kappa_l = 1$, $\epsilon_\tau = 0$, and $\bar\tau_k = 1$ for all $k \in \mathbb{N}$. Using these values in the results of Corollary 3.19 does not exactly recover the result from the unconstrained setting [22]. That said, the order of the results is the same in terms of the dependence on $\varepsilon$. The gap exists because of complications that arise in the constrained setting related to the adaptivity of the merit parameter. We conclude by emphasizing again that, although the function $h$ and the value $\bar\alpha$ differ by constants from those in [22], our algorithm recovers the complexity bound of its deterministic variant [16].
Numerical Results
In this section, we present numerical results for our proposed algorithm on standard equality constrained nonlinear optimization problems. The goal of the numerical experiments is to investigate the efficiency and robustness of the SS-SQP algorithm across a diverse set of test problems with different levels of noise in the objective function and gradient evaluations. All experiments were conducted in MATLAB. Before we present the numerical results, we describe the test problems, implementation details, and evaluation metrics.
Test Problems
We ran the numerical experiments on a subset of the equality-constrained optimization problems from the CUTEst collection [18]. We selected the problems that satisfy the following criteria: (i) the objective function is not a constant function, (ii) the total number of variables and constraints is not larger than $10^3$, and (iii) the singular values of the constraint Jacobians at all iterates in all runs were greater than $10^{-8}$. This resulted in 35 test problems of various dimensions. We considered noisy versions (noisy objective function and gradient evaluations) of the 35 CUTEst problems. Specifically, whenever an objective function or objective gradient evaluation was required, the approximations $\bar f(x;\xi) \sim \mathcal{N}\left(f(x), \epsilon_{f,N}^2\right)$ and $\bar g(x;\xi') \sim \mathcal{N}\left(\nabla f(x), \frac{\epsilon_{g,N}^2}{n}I\right)$, respectively, were utilized. We considered four different noise levels in the objective function and gradient evaluations, dictated by the constants $\epsilon_{f,N} \in \{0, 10^{-4}, 10^{-2}, 10^{-1}\}$ and $\epsilon_{g,N} \in \{0, 10^{-4}, 10^{-2}, 10^{-1}\}$, respectively. Each CUTEst problem has a unique initial starting point, which was used as the starting point for all runs of all algorithms. Moreover, for each selected tuple of noise levels $(\epsilon_{f,N}, \epsilon_{g,N}) \in \left(\{0, 10^{-4}, 10^{-2}, 10^{-1}\} \times \{10^{-4}, 10^{-2}, 10^{-1}\}\right) \cup \{(0, 0)\}$, where appropriate, we ran each problem with five different random seeds.
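The noise model above is easy to reproduce. The sketch below builds the noisy zeroth- and first-order oracles under the stated distributions; the quadratic `f` is just a stand-in for a CUTEst objective, and the helper name is illustrative.

```python
import numpy as np

def make_noisy_oracles(f, grad_f, n, eps_f=1e-2, eps_g=1e-2, seed=0):
    """Noisy oracles matching the experimental setup:
    f_bar(x) ~ N(f(x), eps_f^2) and g_bar(x) ~ N(grad_f(x), (eps_g^2 / n) I)."""
    rng = np.random.default_rng(seed)
    f_bar = lambda x: f(x) + eps_f * rng.standard_normal()
    g_bar = lambda x: grad_f(x) + (eps_g / np.sqrt(n)) * rng.standard_normal(n)
    return f_bar, g_bar

f = lambda x: np.sum(x**2)       # stand-in objective
grad_f = lambda x: 2 * x
f_bar, g_bar = make_noisy_oracles(f, grad_f, n=2, eps_f=1e-2, eps_g=1e-2)
x = np.ones(2)
noisy_value, noisy_grad = f_bar(x), g_bar(x)   # stochastic estimates of 2 and (2, 2)
```

Setting `eps_f = eps_g = 0` recovers the exact oracles, which corresponds to the noise-less benchmark runs.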
Implementation Details
We compared SS-SQP (Algorithm 1) to the adaptive stochastic SQP algorithm proposed in [7] (which we call AS-SQP) on the noisy CUTEst problems described above. We set the user-defined parameters for SS-SQP as follows: $\epsilon_f = \epsilon_{f,N}$, $\epsilon_g = \epsilon_{g,N}$, $\epsilon_\tau = 10^{-2}$, $\bar\tau_{-1} = \sigma = 0.1$, $\gamma = 0.5$, $\theta = 10^{-4}$, $\alpha_0 = \alpha_{\max} = 1$, and $H_k = I$ for all $k \in \mathbb{N}$. For AS-SQP [7] we set the parameters as follows (this parameter selection was guided by the choice of parameters in [7]): $\bar\tau_{-1} = \sigma = 0.1$, $\bar\xi_{-1} = 1$, $\epsilon = 10^{-2}$, $\theta = 10^4$, and $H_k = I$ and $\beta_k = 1$ for all $k \in \mathbb{N}$. The AS-SQP step size rule requires knowledge (or estimates) of the Lipschitz constants $L$ and $\Gamma$. To this end, we estimated these constants using gradient differences near the initial point, and set $L_k = L$ and $\Gamma_k = \Gamma$ for all $k \in \mathbb{N}$. We note that while the analysis of the SS-SQP algorithm requires that the conditions of Oracle 1 hold, such conditions are not enforced or checked; rather, in each experiment the algorithms were given random gradient estimates with the same fixed, pre-specified accuracy (as described above). That said, a clear distinction between SS-SQP and AS-SQP is that the former requires evaluations of the objective function (for the step search) whereas AS-SQP does not (AS-SQP is an objective-function-free method). We discuss this further when presenting the numerical results.
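Estimating a Lipschitz constant from gradient differences near the initial point can be done as in the hedged sketch below; the sampling scheme (random directions on a small sphere) is illustrative, not the exact procedure used in the experiments.

```python
import numpy as np

def estimate_lipschitz(grad, x0, n_samples=10, radius=1e-2, seed=0):
    """Estimate L as max ||grad(x) - grad(x0)|| / ||x - x0|| over random
    points x near x0 (a standard gradient-difference heuristic)."""
    rng = np.random.default_rng(seed)
    g0, L = grad(x0), 0.0
    for _ in range(n_samples):
        d = rng.standard_normal(x0.size)
        d *= radius / np.linalg.norm(d)   # step of length `radius`
        L = max(L, np.linalg.norm(grad(x0 + d) - g0) / radius)
    return L

grad = lambda x: 4.0 * x                  # gradient of f(x) = 2||x||^2, true L = 4
L_hat = estimate_lipschitz(grad, np.zeros(3))
# L_hat is close to 4 for this quadratic
```

For a quadratic the estimate is exact; for general functions it is a local lower estimate of the true constant.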
Termination Conditions and Evaluation Metrics
In all of our experiments, results are given in terms of infeasibility (‖c(x_k)‖_∞) and stationarity (KKT) (max{‖c(x_k)‖_∞, min_{y∈R^m} ‖∇f(x_k) + ∇c(x_k)y‖_∞}) with respect to different evaluation metrics (iterations and work). We ran all algorithms with a budget of iterations (10^3), and only terminated a run early if an approximate stationary point was found, which we define as x_* ∈ R^n such that ‖c(x_*)‖_∞ ≤ 10^{-6} and min_{y∈R^m} ‖∇f(x_*) + ∇c(x_*)y‖_∞ ≤ 10^{-4}.
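These two stopping measures can be computed as a short sketch. One caveat: the stationarity measure minimizes the inf-norm of the Lagrangian gradient over multipliers y; as a computational shortcut (our assumption, not necessarily what the authors' implementation does) the sketch plugs in the least-squares multiplier instead of solving the inf-norm problem exactly.

```python
import numpy as np

def kkt_residual(grad_f, jac_c, c):
    """Feasibility and stationarity measures for min f(x) s.t. c(x) = 0.

    grad_f : (n,) objective gradient at x
    jac_c  : (n, m) matrix whose columns are the constraint gradients
    c      : (m,) constraint values at x
    """
    # Least-squares multiplier: minimizes ||grad_f + jac_c @ y||_2 over y,
    # used here as a surrogate for the inf-norm minimizer.
    y, *_ = np.linalg.lstsq(jac_c, -grad_f, rcond=None)
    stat = np.linalg.norm(grad_f + jac_c @ y, ord=np.inf)
    feas = np.linalg.norm(c, ord=np.inf)
    return feas, stat, max(feas, stat)

# Example: a single constraint in R^2.
grad_f = np.array([-2.0, 0.3])
jac_c = np.array([[1.0], [0.0]])
c = np.array([0.5])
feas, stat, kkt = kkt_residual(grad_f, jac_c, c)
```

A run would then terminate early once `feas <= 1e-6` and `stat <= 1e-4`, matching the thresholds above.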
We present results in the form of performance profiles with respect to iterations and work (defined as the number of function and gradient evaluations), and use the convergence metric described in [26], i.e., m(x_0) − m(x) ≥ (1 − ε_pp)(m(x_0) − m_b), where m(x) is either ‖c(x)‖_∞ (infeasibility) or max{‖c(x)‖_∞, min_{y∈R^m} ‖∇f(x) + ∇c(x)y‖_∞} (stationarity (KKT)), x_0 is the initial iterate, m_b is the best value of the metric found by any algorithm for a given problem instance within the budget, and ε_pp ∈ (0, 1) is the tolerance. For all experiments presented, we chose ε_pp = 10^{-3}.
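The Moré-Wild convergence test above reduces to a few lines; this is a minimal sketch (the function name is ours) that returns the first iteration at which a given metric history passes the test for a tolerance.

```python
def solved_at(m_hist, m0, m_best, tol=1e-3):
    """First index k with m0 - m_hist[k] >= (1 - tol) * (m0 - m_best),
    i.e. the Moré-Wild convergence test; returns None if never satisfied."""
    target = (1.0 - tol) * (m0 - m_best)
    for k, mk in enumerate(m_hist):
        if m0 - mk >= target:
            return k
    return None
```

For the performance profile, each (solver, problem) pair contributes the iteration or work count at which `solved_at` first succeeds, or a failure if it returns `None` within the budget.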
Noisy Gradients, Exact Functions (ε_f = 0)
In our first set of experiments, we consider problems with exact objective function evaluations and noisy objective gradient evaluations, and compare SS-SQP and AS-SQP. The goal of this experiment is to show the effect of noise in the gradient and the advantages of using (exact) function values. Each row in Figure 1 shows performance profiles for a different noise level in the gradient (bottom row, highest noise level) and each column shows a different evaluation metric. Starting from the noiseless benchmark case (ε_f = 0 and ε_g = 0, the first row of Figure 1), it is clear that the performance of the methods in terms of both infeasibility error and KKT error is similar, with a slight advantage in effectiveness (total problems that can be solved) for SS-SQP in terms of KKT error. As the noise in the gradient is increased, the gap between the performance of the two methods (in terms of all metrics) increases, favoring SS-SQP. This, of course, is not surprising, as SS-SQP uses additional information (exact function values). These results highlight the effect reliable function information can have on the performance of the methods.
Noisy Functions and Gradients
Here we present results with noise in both the objective function and gradient evaluations. As in Figure 1, in Figure 2 different rows show results for different noise levels in the gradient (the bottom row has the highest noise) and different columns show results for different evaluation metrics. Each performance profile has 4 lines: AS-SQP (which is objective-function-free and is not affected by the noise in the function evaluations) and three variants of the SS-SQP method with different levels of noise in the objective function evaluations. One can make the following observations. First, not surprisingly, the performance of the SS-SQP method degrades as the noise in the objective function evaluations increases. Second, AS-SQP and SS-SQP are competitive and achieve similar robustness levels with respect to infeasibility errors. Third, and most interestingly, the performance of the methods depends on the relative errors of the function and gradient evaluations. In particular, when the objective function noise level is sufficiently small compared to the objective gradient bias, SS-SQP performs better. On the other hand, when the function estimates are too noisy compared to the noise level in the gradient evaluations, AS-SQP performs slightly better. These results highlight the power of objective-function-free optimization methods in the presence of noise (especially high noise in the objective function evaluations) and the value of quality (or at least relative-quality) function evaluations in methods that require zeroth-order information.
Conclusion
We have proposed a step-search SQP algorithm (SS-SQP) for solving stochastic optimization problems with deterministic equality constraints, i.e., the setting in which constraint function values and derivatives are available, but only stochastic estimates of the objective function and its associated derivatives can be computed. We showed that under reasonable assumptions on the inexact probabilistic zeroth- and first-order oracles, with overwhelmingly high probability, our algorithm can produce an iterate that satisfies first-order ε-stationarity in O(ε^{-2}) iterations, which matches the iteration complexity of the deterministic counterparts of the SQP algorithm [16]. Numerical results provide strong evidence for the efficiency and efficacy of the proposed method. Some future directions include, but are not limited to, (1) incorporating stochastic constraint evaluations into the algorithm design and analysis, and (2) extending the framework to the setting with inequality constraints. Both avenues are subjects of future work, as they require significant adaptations in the design, analysis, and implementation of the algorithm.
Assumption 2.1. Let X ⊆ R^n be an open convex set including the iterates {x_k} and the trial iterates {x_k^+}.

Lemma 3.10. For all k ∈ N, there exist constants {ζ, ω_1

Lemma 3.15. Suppose Assumptions 3.2, 3.4 and 3.14 hold. For all k < T_ε^{∆l}, let p = 1 − δ when the noise is bounded by ε_f, and p

Lemma 3.16. For all t ≥ 1, and any p̂ ∈ [0, p). The proof is the same as that of [22, Lemma 3.1].

Corollary 3.19. Under the conditions of Theorem 3.18, for any s ≥ 0, p̂ ∈ (1/2
Figure 1: Performance profiles for AS-SQP and SS-SQP on the CUTEst collection [18] with deterministic objective function evaluations (ε_f = 0) and noisy objective gradient evaluations. Each column corresponds to a different evaluation metric (infeasibility and KKT errors vs. iterations and work). The noise in the objective gradient evaluations ε_g increases from top to bottom (first row: ε_g = 0; second row: ε_g = 10^{-4}; third row: ε_g = 10^{-2}; fourth row: ε_g = 10^{-1}).
Figure 2: Performance profiles for AS-SQP and SS-SQP on the CUTEst collection [18] with noise in both the objective function and gradient evaluations. Each column corresponds to a different evaluation metric (infeasibility and KKT vs. iterations and work). The noise in the objective gradient evaluations ε_g increases from top to bottom (first row: ε_g = 10^{-4}; second row: ε_g = 10^{-2}; third row: ε_g = 10^{-1}). The different variants of SS-SQP correspond to different levels of noise in the objective function evaluations.
Acknowledgments

This material is based upon work supported by the Office of Naval Research under award number N00014-21-1-2532. We would like to thank Professors Frank E. Curtis and Katya Scheinberg for their invaluable support and feedback.
References

[1] Afonso S. Bandeira, Katya Scheinberg, and Luis Nunes Vicente. Convergence of trust-region methods based on probabilistic models. SIAM J. Optim., 24(3):1238-1264, 2014.
[2] Stefania Bellavia, Eugenio Fabrizi, and Benedetta Morini. Linesearch Newton-CG methods for convex optimization with noise. arXiv preprint arXiv:2205.06710, 2022.
[3] Albert S. Berahas, Raghu Bollapragada, and Baoyu Zhou. An adaptive sampling sequential quadratic programming method for equality constrained stochastic optimization. arXiv preprint arXiv:2206.00712, 2022.
[4] Albert S. Berahas, Richard H. Byrd, and Jorge Nocedal. Derivative-free optimization of noisy functions via quasi-Newton methods. SIAM J. Optim., 29(2):965-993, 2019.
[5] Albert S. Berahas, Liyuan Cao, and Katya Scheinberg. Global convergence rate analysis of a generic line search algorithm with noise. SIAM J. Optim., 31(2):1489-1518, 2021.
[6] Albert S. Berahas, Frank E. Curtis, Michael J. O'Neill, and Daniel P. Robinson. A stochastic sequential quadratic optimization algorithm for nonlinear equality constrained optimization with rank-deficient Jacobians. arXiv preprint arXiv:2106.13015, 2021.
[7] Albert S. Berahas, Frank E. Curtis, Daniel Robinson, and Baoyu Zhou. Sequential quadratic optimization for nonlinear equality constrained stochastic optimization. SIAM J. Optim., 31(2):1352-1379, 2021.
[8] Albert S. Berahas, Jiahao Shi, Zihong Yi, and Baoyu Zhou. Accelerating stochastic sequential quadratic programming for equality constrained optimization using predictive variance reduction. arXiv preprint arXiv:2204.04161, 2022.
[9] Dimitri Bertsekas. Network optimization: continuous and discrete models, volume 8. Athena Scientific, 1998.
[10] Jose Blanchet, Coralia Cartis, Matt Menickelly, and Katya Scheinberg. Convergence rate analysis of a stochastic trust-region method via supermartingales. INFORMS J. Optim., 1(2):92-119, 2019.
[11] Richard H. Byrd, Frank E. Curtis, and Jorge Nocedal. An inexact SQP method for equality constrained optimization. SIAM J. Optim., 19(1):351-369, 2008.
[12] Liyuan Cao, Albert S. Berahas, and Katya Scheinberg. First- and second-order high probability complexity bounds for trust-region methods with noisy oracles. arXiv preprint arXiv:2205.03667, 2022.
[13] Coralia Cartis and Katya Scheinberg. Global convergence rate analysis of unconstrained optimization methods based on probabilistic models. Math. Program., 169(2):337-375, 2018.
[14] Changan Chen, Frederick Tung, Naveen Vedula, and Greg Mori. Constraint-aware deep neural network compression. In Proceedings of the ECCV, pages 400-415, 2018.
[15] Ruobing Chen, Matt Menickelly, and Katya Scheinberg. Stochastic optimization using a trust-region method and random models. Math. Program., 169(2):447-487, 2018.
[16] Frank E. Curtis, Michael J. O'Neill, and Daniel P. Robinson. Worst-case complexity of an SQP method for nonlinear equality constrained stochastic optimization. arXiv preprint arXiv:2112.14799, 2021.
[17] Frank E. Curtis, Daniel P. Robinson, and Baoyu Zhou. Inexact sequential quadratic optimization for minimizing a stochastic objective function subject to deterministic nonlinear equality constraints. arXiv preprint arXiv:2107.03512, 2021.
[18] Nicholas I. M. Gould, Dominique Orban, and Philippe L. Toint. CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization. Comput. Optim. Appl., 60(3):545-557, 2015.
[19] Serge Gratton, Clément W. Royer, Luís N. Vicente, and Zaikun Zhang. Complexity and global rates of trust-region methods based on probabilistic models. IMA J. Numer. Anal., 38(3):1579-1597, 2018.
[20] Serge Gratton, Clément W. Royer, Luís Nunes Vicente, and Zaikun Zhang. Direct search based on probabilistic descent. SIAM J. Optim., 25(3):1515-1541, 2015.
[21] Elad Hazan and Haipeng Luo. Variance-reduced and projection-free stochastic optimization. In International Conference on Machine Learning, pages 1263-1271. PMLR, 2016.
[22] Billy Jin, Katya Scheinberg, and Miaolan Xie. High probability complexity bounds for line search based on stochastic oracles. arXiv preprint arXiv:2106.06454, 2021.
[23] Harold Joseph Kushner and Dean S. Clark. Stochastic approximation methods for constrained and unconstrained systems, volume 26. Springer Science & Business Media, 2012.
[24] Guanghui Lan. First-order and stochastic optimization methods for machine learning. Springer, 2020.
[25] Haihao Lu and Robert M. Freund. Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute" gradient for structured convex optimization. Math. Program., 187(1):317-349, 2021.
[26] Jorge J. Moré and Stefan M. Wild. Benchmarking derivative-free optimization algorithms. SIAM J. Optim., 20(1):172-191, 2009.
[27] Sen Na, Mihai Anitescu, and Mladen Kolar. Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming. arXiv preprint arXiv:2109.11502, 2021.
[28] Sen Na, Mihai Anitescu, and Mladen Kolar. An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians. Math. Program., pages 1-71, 2022.
[29] Sen Na and Michael W. Mahoney. Asymptotic convergence rate and statistical inference for stochastic sequential quadratic programming. arXiv preprint arXiv:2205.13687, 2022.
[30] Yatin Nandwani, Abhishek Pathak, and Parag Singla. A primal dual formulation for deep learning with constraints. Advances in Neural Information Processing Systems, 32, 2019.
[31] Jorge Nocedal and Stephen Wright. Numerical optimization. Springer Series in Operations Research and Financial Engineering. Springer-Verlag New York, 2006.
[32] Figen Oztoprak, Richard Byrd, and Jorge Nocedal. Constrained optimization in the presence of noise. arXiv preprint arXiv:2110.04355, 2021.
[33] Courtney Paquette and Katya Scheinberg. A stochastic line search method with expected complexity analysis. SIAM J. Optim., 30(1):349-376, 2020.
[34] Sathya N. Ravi, Tuan Dinh, Vishnu Suresh Lokhande, and Vikas Singh. Explicitly imposing constraints in deep networks via conditional gradients gives improved generalization and faster convergence. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4772-4779, 2019.
[35] Tyrone Rees, H. Sue Dollar, and Andrew J. Wathen. Optimal solvers for PDE-constrained optimization. SIAM J. Sci. Comput., 32(1):271-298, 2010.
[36] Lindon Roberts and Clément W. Royer. Direct search based on probabilistic descent in reduced spaces. arXiv preprint arXiv:2204.01275, 2022.
[37] Soumava Kumar Roy, Zakaria Mhammedi, and Mehrtash Harandi. Geometry aware constrained optimization techniques for deep learning. In Proceedings of CVPR, pages 4460-4469, 2018.
[38] Katya Scheinberg and Miaolan Xie. Stochastic adaptive regularization method with cubics: A high probability complexity bound. In OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop).
[39] Alexander Shapiro, Darinka Dentcheva, and Andrzej Ruszczynski. Lectures on stochastic programming: modeling and theory. SIAM, 2021.
Thermal-bath effects in quantum quenches within quantum critical regimes

Francesco Tarantelli and Ettore Vicari

Dipartimento di Fisica dell'Università di Pisa and INFN, Largo Pontecorvo 3, I-56127 Pisa, Italy

(Dated: May 15, 2023)
We address the out-of-equilibrium dynamics arising from quantum-quench (QQ) protocols (instantaneous changes of the Hamiltonian parameters) in many-body systems within their quantum critical regime and in contact with (homogeneously coupled) thermal baths. We consider two classes of QQ protocols. In one of them the thermal bath is used to prepare the initial finite-temperature Gibbs state; then, after quenching, the thermal bath is removed and the dynamics of the system is unitary. We also address a more complex QQ protocol where the thermal bath is not removed after quenching, thus the quantum evolution is also driven by the interaction with the bath, which may be described by appropriate master equations for the density matrix of the system, where a further relevant time scale, or inverse decay rate, characterizes the system-bath coupling. Under these QQ protocols, the critical system develops out-of-equilibrium scaling behaviors, which extend those for isolated critical systems, by introducing further scaling variables proportional to the temperature of the thermal bath and the decay rate of the system-bath interactions. These out-of-equilibrium scaling behaviors are checked by analyzing QQ protocols within fermionic Kitaev wires, or equivalently quantum Ising chains, supplemented with a particular modelization of thermal bath that guarantees the asymptotic thermalization within the Lindblad master equation for the dynamics of open systems.
I. INTRODUCTION
Thanks to the recent experimental progress in the realization and control of the dynamics of quantum many-body systems, see e.g. Refs. [1, 2], the out-of-equilibrium quantum dynamics of many-body systems has become an important theoretical issue. In particular, out-of-equilibrium phenomena have been addressed within the critical regimes of many-body systems at continuous quantum transitions (CQTs) [3-5], where collective behaviors give rise to zero-temperature singularities in the equilibrium low-energy properties of the system, and the universal critical behaviors are determined by a limited number of relevant features, such as the global symmetry, the symmetry-breaking pattern, dimensionality, etc. Within critical regimes and in the appropriate thermodynamic or finite-size scaling (FSS) limits, one can achieve a complete characterization of the complex dynamics of many-body systems by controlling a limited number of renormalization-group (RG) perturbations. The universal scaling behaviors at CQTs extend beyond the equilibrium conditions [5]. Indeed, dynamic protocols entailing out-of-equilibrium evolutions develop scaling behaviors as well, in the appropriate limits, related to the universality class of the CQT. For example, out-of-equilibrium scaling behaviors emerge when analyzing the quantum evolutions arising from a quantum quench (QQ), see e.g. Refs. [5-11], or from slow changes of the Hamiltonian parameters across the transition point, such as the protocols associated with the so-called quantum Kibble-Zurek problem, see e.g. Refs. [5, 12-23].
These out-of-equilibrium issues have been mostly addressed within isolated many-body systems, unitarily driven by their Hamiltonian and the Schrödinger equation. In this paper we extend such studies to investigate how the interaction with a thermal bath, coupled homogeneously to the system, affects the out-of-equilibrium dynamics of many-body systems within the critical regime of a zero-temperature quantum transition, such as that arising from a QQ or a slow crossing of the quantum critical regime.
The role of the temperature within the equilibrium critical behavior at a CQT is generally associated with one of the relevant RG perturbations at the stable fixed point of the RG flow controlling the quantum criticality [3-5, 24]. Therefore, the quantum scaling behavior can be only observed in the zero-temperature limit. More precisely, the quantum scaling limit requires that the zero-temperature critical point is approached keeping the ratio T/∆ fixed, where ∆ is the gap at the quantum critical point, which is generally power-law suppressed. For example, in the FSS limit the gap is suppressed as ∆ ∼ L^{-z} at the critical point, where L is the size of the system and z > 0 is the universal dynamic exponent associated with the universality class of the CQT. Within the equilibrium critical regime the temperature enters the asymptotic FSS laws through a further dependence of the scaling functions on the scaling variable Ξ ≡ T L^z ∼ T/∆.
The role of the temperature becomes less definite when we consider out-of-equilibrium behaviors, because the temperature of the system is an equilibrium concept. However, one may consider the effects of thermal baths in contact with the system during its out-of-equilibrium dynamics. The main feature of a thermal bath is that it eventually drives the system toward thermalization at its temperature T, in the large-time limit of the evolution of the system in contact with the thermal bath. The thermalization process must somehow introduce a further time scale τ in the problem, characterizing the approach of the system to the thermal state when it is put in contact with the thermal bath. Such a time scale is expected to play an important role in the out-of-equilibrium dynamics of the system in contact with the thermal bath. In this paper we investigate these issues within the simplest dynamic protocols giving rise to out-of-equilibrium behaviors, i.e. those entailing instantaneous QQs of the Hamiltonian parameters starting from equilibrium thermal conditions.
A quench protocol is generally performed by suddenly varying a parameter within a family of Hamiltonians, such as

Ĥ(w) = Ĥ_c + w Ĥ_p,   (1)
where Ĥ_c and Ĥ_p are independent of the parameter w, and [Ĥ_c, Ĥ_p] ≠ 0. In a standard QQ protocol for closed systems, one usually starts from the ground state |Φ_0, w_i⟩ of the Hamiltonian Ĥ(w_i) associated with an initial value w_i of the parameter w, with corresponding density matrix ρ_i = |Φ_0, w_i⟩⟨Φ_0, w_i|. At a given time, t = 0 say, the Hamiltonian parameter is suddenly changed from w_i to w ≠ w_i, and the subsequent quantum evolution is supposed to be unitarily driven by the Hamiltonian Ĥ(w), that is |Ψ(t)⟩ = e^{−iĤ(w)t} |Φ_0, w_i⟩ (hereafter we set ℏ = 1). Several interesting issues have been investigated within QQ dynamic protocols. They include the long-time relaxation and the consequent spreading of quantum correlations and entanglement, the statistics of the work, localization effects due to the mutual interplay of interactions and disorder, dynamical phase transitions, the dynamic scaling close to quantum transitions, and effects of dissipation or of measurements due to interactions with an environment (see, e.g., Refs. [5,9,).
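The standard QQ protocol, |Ψ(t)⟩ = e^{−iĤ(w)t}|Φ_0, w_i⟩, can be illustrated with a small numerical sketch; the 2×2 matrices H_c and H_p below are illustrative stand-ins, not the Hamiltonians discussed in this paper.

```python
import numpy as np

# Illustrative 2x2 stand-ins for H_c and H_p in H(w) = H_c + w * H_p.
H_c = np.array([[0.0, 1.0], [1.0, 0.0]])
H_p = np.diag([1.0, -1.0])

def evolve(H, t, psi):
    # U = V exp(-i E t) V^dagger, via the spectral decomposition of H
    E, V = np.linalg.eigh(H)
    U = (V * np.exp(-1j * E * t)) @ V.conj().T
    return U @ psi

def quench_state(w_i, w, t):
    # ground state of H(w_i), then unitary evolution under H(w != w_i)
    _, V = np.linalg.eigh(H_c + w_i * H_p)
    phi0 = V[:, 0]  # eigh returns eigenvalues in ascending order
    return evolve(H_c + w * H_p, t, phi0)
```

Since the evolution is unitary, the norm of |Ψ(t)⟩ and the energy ⟨Ψ(t)|Ĥ(w)|Ψ(t)⟩ are conserved for t > 0, which is a useful consistency check in simulations.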
To focus on the out-of-equilibrium dynamics close to a quantum transition, we assume that the Hamiltonian H c in Eq. (1) is critical, thus w = w c = 0 represents a quantum critical point. We recall that the critical behavior around the CQT point w c = 0 is characterized by a diverging length scale ξ ∼ |w| −ν of the quantum critical modes, and the power-law suppression ∆ ∼ ξ −z of the gap. The out-of-equilibrium dynamics at CQTs develops scaling behaviors controlled by the universality class of the quantum transition, for example when the Hamiltonian parameters are slowly varied across the critical regime [5,21,23], and in the case of soft QQ protocols when both the initial and final values of the quenching parameters are such to maintain the system within the critical regime [5,9,59]. In particular, soft QQs require that the energy scale of the QQ [i.e. the difference of the energy Ψ(t)|Ĥ(w)|Ψ(t) of the evolving state |Ψ(t) for t > 0 and the ground state ofĤ(w)] is sufficiently small, i.e. comparable with the energy gap ∆ ∼ L −z of the spectrum at the transition point in finite-size systems.
To study the effects of a thermal bath in the out-of-equilibrium behavior arising from a QQ within the critical regime, we consider two protocols where the thermal baths are involved in different ways: (i) Within the first protocol the thermal bath is used to prepare the system in a finite-temperature Gibbs state, described by the thermal density matrix (hereafter we set the Boltzmann constant k B = 1)
ρ_t(w_i, T) = Σ_n e^{−E_n(w_i)/T} |Φ_n, w_i⟩⟨Φ_n, w_i|,   (2)
where |Φ_n, w_i⟩ are the eigenstates of Ĥ(w_i). Then the quantum evolution after the quench of the Hamiltonian parameters at t = 0 is unitary and driven by the Hamiltonian Ĥ(w) only, i.e., the thermal bath is removed during the quantum evolution for t > 0. Therefore, the evolution of the density matrix is driven by the equation
∂_t ρ(t) = −i[Ĥ(w), ρ(t)],   ρ(t = 0) = ρ_t(w_i, T).   (3)
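Protocol (i), Eqs. (2)-(3), amounts to preparing a Gibbs state and then evolving it unitarily. A minimal NumPy sketch for a generic finite-dimensional Hamiltonian is below; the 2×2 matrices are illustrative toy inputs, not the models studied in the paper, and the unnormalized weights in Eq. (2) are normalized here so that Tr ρ = 1.

```python
import numpy as np

def gibbs_state(H, T):
    """Thermal density matrix of Eq. (2), normalized to unit trace; energies
    are shifted by E_min for numerical stability of the Boltzmann weights."""
    E, V = np.linalg.eigh(H)
    w = np.exp(-(E - E.min()) / T)
    return (V * (w / w.sum())) @ V.conj().T  # V diag(p) V^dagger

def evolve_density(rho0, H, t):
    """Unitary evolution rho(t) = U rho0 U^dagger solving Eq. (3)."""
    E, V = np.linalg.eigh(H)
    U = (V * np.exp(-1j * E * t)) @ V.conj().T
    return U @ rho0 @ U.conj().T

# Toy example: prepare a thermal state of H_i, quench to H_f, evolve.
H_i = np.array([[0.0, 1.0], [1.0, 0.0]])
H_f = np.array([[0.5, 1.0], [1.0, -0.5]])
rho0 = gibbs_state(H_i, T=0.7)
rho_t = evolve_density(rho0, H_f, t=2.0)
```

Note that the unitary evolution preserves the trace and Hermiticity of ρ(t); a nontrivial time dependence arises only because H_f differs from H_i.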
(ii) In the second protocol the starting point is the same, i.e. the Gibbs state (2), but the thermal bath is not removed after quenching. Therefore, the out-of-equilibrium quantum evolution for t > 0 is not unitary anymore, but it is also driven by the interaction with the thermal bath. Under some conditions, discussed in Refs. [5, 79-84], the nonunitary evolution arising from the thermal baths can be described by a Lindblad master equation governing the time evolution of the density matrix of the system, which can be written as
∂_t ρ = L[ρ] ≡ −i[Ĥ(w), ρ] + γ D_T[ρ],   (4)
where L is a Liouvillian superoperator, and D_T is a dissipative driving whose strength is controlled by the homogeneous coupling γ, playing the role of the decay rate (inverse time scale) associated with the interactions between the system and the bath. The operator D_T is assumed to be such that the Lindblad master equation (4) drives the system toward an equilibrium Gibbs state at temperature T in the large-time limit. We argue that, for both types of protocols and sufficiently small temperatures of the thermal baths, the out-of-equilibrium time evolution within the critical regime develops a nontrivial out-of-equilibrium FSS (OFSS) limit, with peculiar scaling behaviors, similar to those arising for closed systems. The effects of the thermal baths can be taken into account by appropriate extensions of the out-of-equilibrium zero-temperature scaling laws describing soft quantum QQs within the critical regime of isolated systems, already put forward by earlier works [5, 9]. As a theoretical laboratory to check our extended OFSS laws, we consider the quantum Ising chain [4], or the equivalent fermionic Kitaev wire [85], supplemented with a particular modelization of the thermal bath that guarantees the asymptotic thermalization within the Lindblad formulation of the dynamics of open systems with quadratic Hamiltonians [84, 86], such as the fermionic Kitaev wire.
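The right-hand side of Eq. (4) can be sketched with the standard Lindblad dissipator. Note that the paper's thermalizing D_T corresponds to a particular choice of jump operators for quadratic Hamiltonians; the single lowering operator used in the example below is only an illustrative placeholder, not that choice.

```python
import numpy as np

def lindblad_rhs(rho, H, jump_ops, gamma):
    """Right-hand side of Eq. (4): -i[H, rho] + gamma * D[rho], with the
    standard Lindblad dissipator D[rho] = sum_k (L rho L^+ - {L^+ L, rho}/2)."""
    comm = H @ rho - rho @ H
    diss = np.zeros_like(rho)
    for L in jump_ops:
        LdL = L.conj().T @ L
        diss += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return -1j * comm + gamma * diss

# Single-qubit example with a lowering jump operator (illustrative choice).
H = np.diag([0.5, -0.5]).astype(complex)
sm = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
rho = np.diag([0.3, 0.7]).astype(complex)
rhs = lindblad_rhs(rho, H, [sm], gamma=0.2)
```

By construction the generator is trace-preserving and maps Hermitian ρ to a Hermitian time derivative, so any consistent integrator of Eq. (4) keeps ρ(t) a valid density matrix; γ sets the inverse time scale of the approach to the bath's stationary state.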
Our analyses are developed within FSS frameworks, which generally simplify the study of the universal features of critical behaviors, with respect to studies in the thermodynamic limit. In the FSS limit the general requirement of a large length scale ξ of the critical correlations is not subject to further conditions on the system size L. It only requires that ξ ∼ L, while critical behaviors in the thermodynamic limit require ξ ≪ L. Therefore, much larger systems are necessary to probe analogous length scales ξ in the thermodynamic limit. Equilibrium and out-of-equilibrium FSS behaviors are often observed for systems of moderately large size, see e.g. Refs. [5, 9, 57, 87, 88]. Thus FSS behaviors should be more easily accessed by numerical computations and experiments where the quantum dynamics can be monitored for a limited number of particles or spins, such as experiments with quantum simulators in laboratories, e.g., by means of trapped ions [89, 90], ultracold atoms [91, 92], or superconducting qubits [93, 94].
The paper is organized as follows. In Sec. II we present the fermionic Kitaev wire, equivalent to the quantum Ising chain, and the model of thermal bath that we use as theoretical laboratory for our study; we also outline the QQ protocols that we consider and define the observables to monitor the quantum evolution after quenching. In Sec. III we outline the out-of-equilibrium scaling scenarios that are expected to be developed under the dynamic QQ protocols considered, and support them by numerical computations for the fermionic Kitaev wires in contact with the thermalizing bath. Finally, in Sec. IV we summarize, draw our conclusions, and add some remarks on the extension of this study to the dynamic Kibble-Zurek protocols slowly crossing quantum critical regimes. The appendix reports some details on the numerical computations for the QQ protocols within fermionic Kitaev wires in contact with a thermal bath.
II. KITAEV FERMIONIC WIRES AND THERMAL BATHS
A. The fermionic Kitaev chain
We consider fermionic Kitaev wires of L sites with open boundary conditions, whose quantum unitary dynamics is driven by the Hamiltonian [85]
Ĥ_K = −J ∑_{x=1}^{L−1} (ĉ†_x ĉ_{x+1} + ĉ†_x ĉ†_{x+1} + h.c.) − µ ∑_{x=1}^{L} n̂_x, (5)
where ĉ_x is the fermionic annihilation operator associated with the site x of the chain, and n̂_x ≡ ĉ†_x ĉ_x is the particle density operator. In the following we take J as the energy scale, thus we set J = 1.
The Hamiltonian (5) can be mapped into a quantum Ising chain by means of the Jordan-Wigner transformation, see, e.g., Ref. [4]. The corresponding spin model is the quantum Ising chain with open boundary conditions, i.e.

Ĥ_Is = −∑_{x=1}^{L−1} σ̂^{(1)}_x σ̂^{(1)}_{x+1} − g ∑_{x=1}^{L} σ̂^{(3)}_x, (6)

where σ̂^{(k)}_x are the Pauli matrices and g = −µ/2. In the following we prefer to stick with the Kitaev quantum wire, because the thermal baths and observables that we consider are best defined within the fermionic model. However, the general scaling scenarios that will emerge apply to both models.
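The equivalence of the two Hamiltonians can be cross-checked numerically for a small chain. The following sketch (not part of the paper's computations) builds both models by exact diagonalization, using a standard Jordan-Wigner convention for the fermions, and verifies that the spectra coincide up to the constant −µL/2 arising from the relation between n̂_x and σ̂^{(3)}_x.

```python
import numpy as np

# Minimal cross-check of the Kitaev (5) <-> Ising (6) mapping with g = -mu/2,
# via brute-force exact diagonalization at small L.  The Jordan-Wigner
# construction below is a standard textbook convention, assumed here.

L, mu = 4, -1.0
g = -mu / 2

I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sm = np.array([[0.0, 0.0], [1.0, 0.0]])   # single-site lowering block

def chain_op(ops):
    """Tensor a list of L single-site operators into the 2^L space."""
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def site_op(op, x):
    return chain_op([I2] * x + [op] + [I2] * (L - x - 1))

def fermion(x):
    # Jordan-Wigner: c_x = (prod_{j<x} Z_j) * sm_x
    return chain_op([sz] * x + [sm] + [I2] * (L - x - 1))

c = [fermion(x) for x in range(L)]

# Kitaev Hamiltonian (5), J = 1
HK = np.zeros((2**L, 2**L))
for x in range(L - 1):
    hop = c[x].T @ c[x + 1] + c[x].T @ c[x + 1].T   # hopping + pairing
    HK -= hop + hop.T                               # + h.c. (real matrices)
for x in range(L):
    HK -= mu * c[x].T @ c[x]

# Ising Hamiltonian (6)
HIs = np.zeros((2**L, 2**L))
for x in range(L - 1):
    HIs -= site_op(sx, x) @ site_op(sx, x + 1)
for x in range(L):
    HIs -= g * site_op(sz, x)

ek = np.sort(np.linalg.eigvalsh(HK))
ei = np.sort(np.linalg.eigvalsh(HIs)) - mu * L / 2
print(np.allclose(ek, ei))
```

The constant shift −µL/2 comes from rewriting the chemical-potential term through the transverse spin component; the spectrum comparison is insensitive to the residual sign conventions of the Jordan-Wigner string, since those are related by local unitary rotations.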
The Kitaev model undergoes a CQT at µ = µ c = −2 (corresponding to g = g c = 1 in the quantum Ising chain), between a disordered quantum phase for µ < µ c (corresponding to g > 1) and an ordered quantum phase for |µ| < |µ c | (corresponding to |g| < 1). Thus, we define
w = µ − µ c = µ + 2,(7)
so that one can easily see the correspondence between the Kitaev Hamiltonian (5) and the generic one reported in Eq. (1), i.e. Ĥ_c corresponds to the Hamiltonian (5) for µ = µ_c, and Ĥ_p = −∑_{x=1}^{L} n̂_x. The continuous transition at w = w_c belongs to the two-dimensional Ising universality class [4,5], characterized by the length-scale critical exponent ν = 1, related to the RG dimension y_w = 1/ν = 1 of the Hamiltonian parameter w. This implies that, approaching the critical point, the length scale ξ of the critical quantum fluctuations diverges as ξ ∼ |w|^{−ν}. The dynamic exponent z = 1 associated with the unitary quantum dynamics can be obtained from the power law ∆ ∼ ξ^{−z} of the vanishing gap with increasing ξ. Moreover, the RG dimension of the fermionic operators ĉ_j and ĉ†_j at the CQT is y_c = 1/2, and that of the particle density operator n̂_x is y_n = 1 [4,5].
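The closing of the gap as ∆ ∼ L^{−z} with z = 1 at µ_c = −2 can be illustrated with a quick Bogoliubov-de Gennes (BdG) computation. The 2L×2L single-particle matrix below follows a standard convention for quadratic Hamiltonians and is our own sketch, not code from the paper.

```python
import numpy as np

# BdG sketch for the open Kitaev wire (5): the positive eigenvalues of the
# 2L x 2L matrix M are the quasiparticle energies omega_k.  At mu = mu_c = -2
# the lowest one (the gap) closes as ~ 1/L^z with z = 1; away from the
# transition it stays finite.

def bdg_matrix(L, mu, J=1.0):
    h = -mu * np.eye(L)              # chemical-potential + hopping block
    D = np.zeros((L, L))             # antisymmetric pairing block
    for x in range(L - 1):
        h[x, x + 1] = h[x + 1, x] = -J
        D[x, x + 1], D[x + 1, x] = -J, +J
    return np.block([[h, D], [-D, -h]])

def gap(L, mu):
    w = np.linalg.eigvalsh(bdg_matrix(L, mu))
    return np.min(w[w > 1e-12])      # lowest quasiparticle energy

g20, g40 = gap(20, -2.0), gap(40, -2.0)
print(g20 / g40)                     # ~ 2 at criticality: Delta ~ 1/L
print(gap(40, -3.0))                 # O(1) gap in the disordered phase
```

Doubling L at criticality roughly halves the gap, consistent with z = 1, while at µ = −3 (disordered phase) the gap stays of order one.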
B. Modelization of the thermal bath
In our study we consider a modelization of the interaction with a thermal bath within the Lindblad master equation (4), whose asymptotic large-time behavior leads to a Gibbs density matrix at a given finite temperature T. In particular, we consider the proposal developed in Ref. [84], which applies to quantum models described by quadratic Hamiltonians, such as that of the fermionic Kitaev wires. This provides a relatively simple modelization of a thermal bath leading to thermalization in the large-time limit of the corresponding Lindblad master equation for the density matrix of the system. The Kitaev Hamiltonian (5) with open boundary conditions can be diagonalized in the Nambu field space by a Bogoliubov transformation, see e.g. Refs. [84,95,96], so that we can rewrite it as
Ĥ_K = ∑_{k=1}^{L} ω_k b̂†_k b̂_k, (8)
where ω_k are the values of the spectrum associated with the Bogoliubov eigenoperators b̂_k (we are neglecting an irrelevant constant term). Note that both ω_k and b̂_k depend on the Hamiltonian parameter µ. The relation between the fermionic operators ĉ_x and the Bogoliubov eigenoperators b̂_k can be generally written as [84,95,96]
ĉ_x = ∑_{k=1}^{L} (A_{xk} b̂_k + B_{xk} b̂†_k), (9)
where A and B are appropriate L×L matrices depending on µ. Following Refs. [84,86], we write the dissipator D T [ρ] in the Lindblad master equation (4) in terms of the Bogoliubov eigenoperators as
D_T[ρ] = ∑_k [1 − f(ω_k, T)] (2 b̂_k ρ b̂†_k − {b̂†_k b̂_k, ρ}) + ∑_k f(ω_k, T) (2 b̂†_k ρ b̂_k − {b̂_k b̂†_k, ρ}), (10)
where
f(ω_k, T) = [1 + e^{ω_k/T}]^{−1}. (11)
When using this homogeneous dissipator term, the Lindblad master equation (4) ensures the asymptotic large-time thermalization [84]. Therefore,
lim_{t→∞} ρ(t) = ρ_t(w, T), (12)

ρ_t(w, T) ∝ ∑_n e^{−E_n(w)/T} |Φ_n, w⟩⟨Φ_n, w|, (13)
where ρ t (w, T ) is the density matrix representing the thermal state, E n (w) and |Φ n , w are the eigenvalues and eigenstates ofĤ(w). The asymptotic approach to the thermal distribution is controlled by the decay-rate parameter γ [84]. Indeed the Liouvillian gap ∆ L that controls the exponential approach to the asymptotic stationary state of the Lindblad equation is proportional to the decay rate γ, i.e.
∆ L ∼ γ.(14)
The above modelization of thermal baths provides a useful theoretical laboratory to investigate issues related to the out-of-equilibrium dynamics in the presence of thermal baths. Its derivation has been thoroughly discussed in Ref. [84]. We also mention that it has been employed in Refs. [86,97]. Some details of the computations using the Lindblad master equation (4) with the dissipator (10) are reported in the appendix.
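The role of γ as the rate of the approach to thermal equilibrium, cf. Eq. (14), can be made concrete for a single Bogoliubov mode. A minimal sketch, assuming the single-mode rate equation dn/dt = −2γ(n − f) that follows from the dissipator (10) for the occupation n_k = ⟨b̂†_k b̂_k⟩, integrated with explicit Euler steps for illustration:

```python
import numpy as np

# Single-mode relaxation under the thermal dissipator (10):
# the occupation n relaxes toward the Fermi-Dirac weight f(omega, T),
# Eq. (11), at a rate proportional to gamma (cf. Eq. (14)).
# The rate equation dn/dt = -2*gamma*(n - f) is an assumption stated above.

def f_thermal(omega, T):
    return 1.0 / (1.0 + np.exp(omega / T))     # Eq. (11)

def relax(n0, omega, T, gamma, t, steps=200000):
    dt = t / steps
    n = n0
    for _ in range(steps):
        n += -2.0 * gamma * (n - f_thermal(omega, T)) * dt
    return n

omega, T = 1.0, 2.0
f = f_thermal(omega, T)
n_fast = relax(0.9, omega, T, gamma=0.5, t=20.0)
n_slow = relax(0.9, omega, T, gamma=0.05, t=20.0)
print(abs(n_fast - f), abs(n_slow - f))   # larger gamma -> faster thermalization
```

At fixed evolution time, the deviation from the thermal weight is exponentially smaller for the larger decay rate, illustrating that the Liouvillian gap controlling the approach to the stationary state grows with γ.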
C. Quantum-quench protocols
As already anticipated in Sec. I, we consider two protocols, differing for the absence or presence of the contact with the thermal bath during the quantum evolution after quenching, giving respectively rise to unitary or dissipative dynamics after quenching. We call them unitary and dissipative QQ protocols, respectively.
• Unitary QQ protocol: In this simplest QQ protocol the role of the thermal bath is limited to that of preparing the initial Gibbs state ρ_t(w_i, T) at t = 0, reported in Eq. (2). This can be obtained by keeping the thermal bath in contact with the system for a sufficiently long time t_th, i.e., t_th ≫ γ^{−1}. Then at t = 0 the Hamiltonian parameter is instantaneously quenched from w_i < 0 to w ≥ 0 and the thermal bath is removed, so that the subsequent time evolution is that of a closed fermionic wire, i.e. it is unitary and only driven by the Hamiltonian of the system, cf. Eq. (3).
• Dissipative QQ protocol: The quantum evolution starts from the same initial Gibbs state ρ t (w i , T ), but the thermal bath is maintained in contact with the system after the QQ from w i < 0 to w ≥ 0, at t = 0. Therefore, the quantum evolution for t > 0 is driven by the Lindblad master equation (4) with the dissipator term (10). Note that this dynamic protocol entails a further time scale τ = γ −1 , characterizing the asymptotic exponential approach to the large-time stationary Gibbs state associated with the HamiltonianĤ(w) and temperature T .
D. Observables monitoring the time evolution
To characterize the dynamic properties of the quantum evolution after the QQ at t = 0, we consider the subtracted particle-density average
n_s(t, L) = (1/L) Tr[ρ(t) ∑_{x=1}^{L} n̂_x] − n_c(L), (15)
where n_c(L) is the ground-state particle density of the Kitaev wire of size L at the critical point w_c = 0 (in the infinite-size limit n_c = 1/2 − 1/π [95]). Note that the particle density operator n̂_x and the transverse spin component σ̂^{(3)}_x of the quantum Ising chain (6) are trivially related, indeed σ̂^{(3)}_x = 1 − 2n̂_x.
In the definition of n s , the subtraction of n c (L) simplifies the scaling behavior of n s (t, L) within the critical regime, cancelling the leading analytical behavior [5,24]. To monitor the spatial correlations, we also consider
P(x, y, t) = Tr[ρ(t) (ĉ†_x ĉ†_y + ĉ_y ĉ_x)], (16)

C(x, y, t) = Tr[ρ(t) (ĉ†_x ĉ_y + ĉ†_y ĉ_x)]. (17)
Some details on the computation of the above quantities during the time evolution of the QQ protocols are reported in the appendix.
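The infinite-size value n_c = 1/2 − 1/π quoted after Eq. (15) can be recovered numerically from the finite-L ground state. A sketch, assuming a standard BdG convention (for each positive-energy eigenvector (u_k, v_k), the ground-state density is ⟨ĉ†_x ĉ_x⟩ = ∑_k |v_{xk}|²):

```python
import numpy as np

# Ground-state particle density of the critical open Kitaev wire (mu = -2).
# As L grows, it should approach n_c = 1/2 - 1/pi, the infinite-size value.
# The BdG matrix and the (u, v) splitting follow a standard convention for
# quadratic Hamiltonians; this is our own illustration, not the paper's code.

def density(L, mu=-2.0, J=1.0):
    h = -mu * np.eye(L)
    D = np.zeros((L, L))
    for x in range(L - 1):
        h[x, x + 1] = h[x + 1, x] = -J
        D[x, x + 1], D[x + 1, x] = -J, +J
    M = np.block([[h, D], [-D, -h]])
    w, W = np.linalg.eigh(M)
    V = W[L:, w > 1e-10]          # hole components of positive-energy modes
    return np.sum(np.abs(V) ** 2) / L

n_inf = 0.5 - 1.0 / np.pi
print(density(20) - n_inf, density(80) - n_inf)   # shrinking finite-L deviation
```

The deviation from 1/2 − 1/π shrinks with L, consistent with the O(1/L) boundary corrections discussed below for open chains.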
III. OUT-OF-EQUILIBRIUM SCALING
We now discuss the out-of-equilibrium behaviors arising from the QQ protocols outlined in Sec. II C. We show that they develop OFSS behaviors where the effects of the thermal baths are taken into account by appropriate extensions of the out-of-equilibrium zero-temperature scaling laws describing soft QQs in closed systems within their critical regime, already put forward by earlier works [5,9].

FIG. 1 (caption, continued): These curves refer to a system of size L = 60, temperature T = 2 of the thermal bath, quenching from w_i = −0.01 to w = 0, and various values of the decay rate γ (the case γ = 0 corresponds to the evolution of the closed system). We plot the difference n_s(t, L, T) − n_s,eq(L, T), which is expected to vanish in the large-time limit. In this figure and in the following ones, the units that we use are such that ℏ = 1, k_B = 1, and J = 1.
The OFSS behaviors that we put forward for the QQ protocols considered are verified by numerical computations for the fermionic Kitaev wire up to relatively large sizes. See the appendix for details on such calculations.
As a preliminary example of the out-of-equilibrium QQ behaviors that we want to address, in Fig. 1 we show some results for the quantum evolution of the subtracted particle density (15) along the dissipative protocol outlined in Sec. II C, after quenching a fermionic Kitaev wire of size L = 60 from w_i = −0.01 to w = 0, in the presence of a thermal bath at a temperature T = 2, and various values of the decay rate γ. The quantum evolution turns out to have a significant dependence on the decay-rate parameter γ that characterizes the interactions between the system and the thermal bath. Indeed, the curves of the subtracted particle density appear to approach the equilibrium value n_s,eq(w = 0, T = 2) ≈ 0.0004601... (while at t = 0 we have n_s,eq(w = w_i, T = 2) ≈ 0.126598...) faster and faster with increasing γ, actually exponentially as exp(−t/τ) with τ ∼ γ^{−1}, confirming the role of decay rate of the parameter γ within the Lindblad master equation, cf. Eq. (14). Analogous results are obtained for other observables, such as the fermionic correlation functions defined in Sec. II D. In the following we put forward an out-of-equilibrium scaling theory for these phenomena within the quantum critical regime.
A. Zero-temperature scaling in quantum quenches
We now provide a brief summary of the out-of-equilibrium scaling theory for closed systems, describing QQ protocols within the critical regime [5,9]. The initial state is the ground state associated with an initial value w_i < 0, and, after the instantaneous quench at t = 0 from w_i to w, the quantum evolution is driven by the Schrödinger equation.
Out-of-equilibrium scaling laws can be obtained by extending those valid at equilibrium, allowing for a time dependence essentially controlled by the time scaling variable Θ ∼ t∆, which is obtained by assuming that the relevant time scale of the critical modes is proportional to the inverse energy difference ∆ of the lowest states. We refer to Ref. [5] for a thorough presentation of the scaling arguments leading to the asymptotic OFSS behaviors.
Let us consider the out-of-equilibrium evolution (after quenching) of generic observables, such as the expectation value O at time t of a local operator Ô(x) and its fixed-time correlations G_O = ⟨Ô(x)Ô(y)⟩. The general working hypothesis underlying out-of-equilibrium FSS frameworks is that the expectation value of Ô(x) and its correlation functions obey asymptotic homogeneous scaling laws [5], such as

O(t, x, L, w_i, w) ≈ b^{−y_o} O(t/b^z, x/b, L/b, b^{y_w} w_i, b^{y_w} w), (18)
where b is an arbitrary (large) length scale, y_o is the RG dimension of the local operator Ô_x, and the RG exponents y_w and z are determined by the universality class of the CQT (they are the RG dimensions of the Hamiltonian parameter w and the temperature T, respectively). Thus both the initial and final values of w, i.e. w_i and w, take the same RG exponent y_w, being coupled to the RG perturbation Ĥ_p within the Hamiltonian. Note that we do not assume translation invariance, which is generally broken by the presence of boundaries, such as those arising from open boundary conditions. OFSS can be straightforwardly derived by fixing b = L in the above homogeneous scaling law. Then we expect that the expectation value O of a generic local operator Ô_x, its spatial average Ô_a = L^{−d} ∑_x Ô_x, and its two-point correlation function G_O develop the asymptotic OFSS behaviors [5,9]

O(t, x, L, w_i, w) ≈ L^{−y_o} O(Θ, X, Φ_i, Φ),
O_a(t, L, w_i, w) ≈ L^{−y_o} O_a(Θ, Φ_i, Φ), (19)
G_O(t, x_1, x_2, L, w_i, w) ≈ L^{−2y_o} G_O(Θ, X_1, X_2, Φ_i, Φ),

where the scaling variables appearing in the scaling functions O, O_a, and G_O are defined as

Θ ≡ t/L^z, X_i ≡ x_i/L, Φ_i ≡ L^{y_w} w_i, Φ ≡ L^{y_w} w. (20)
The OFSS limit is obtained in the large-L and large-t limit keeping the above scaling variables fixed. These conditions ensure that the system remains within the universal critical regime during the quantum evolution. Note that in the scaling law (20) the dynamic features are essentially encoded in the time dependence of the scaling variable Θ ∼ t∆. The other features, in particular when w_i = w, are analogous to those arising from equilibrium FSS at CQTs [5,24], where the argument Φ = L^{y_w} w of the scaling functions is controlled by the RG dimension y_w of the relevant parameter w at the RG fixed point associated with the CQT. The above OFSS equations can be straightforwardly applied to the observables defined in Sec. II D, after a quench from w_i to w at t = 0, taking into account that the RG dimension of the subtracted particle density is y_n = 1, and that of the fermionic operator ĉ_x is y_c = 1/2. Note that the dominant analytical contributions to the particle density [5,24] coming from the analytical background are canceled in the difference n_s defined in Eq. (15), whose leading asymptotic behavior arises from the quantum critical modes; therefore it is analogous to that of O_a in Eq. (19), with y_o = y_n. Analogously one can apply the OFSS in Eq. (19) to observables and correlation functions constructed with the spin operators of the quantum spin chain (6). The OFSS functions are expected to be universal with respect to the microscopic details of the model, apart from nonuniversal multiplicative rescalings and normalizations of their arguments. Within isolated fermionic Kitaev wires and quantum Ising chains, the OFSS arising from soft QQs has been verified by numerical computations for various boundary conditions, and also along their quantum first-order transition line [5,9].
The OFSS limit is expected to be approached with power-law suppressed corrections. There are various sources of scaling corrections when approaching the OFSS. Of course, they include those that are already present at equilibrium. In particular, the irrelevant RG perturbations are sources of scaling corrections for the asymptotic behavior of the free-energy density [5,99]. In the case of one-dimensional quantum systems undergoing CQTs belonging to the two-dimensional Ising universality class, the leading scaling corrections from irrelevant RG perturbations are suppressed as L −ω with ω = 2 [24,98]. However, other contributions may become more relevant [5,24,99], such as those arising from the presence of analytical backgrounds, from the presence of boundaries (which generally gives rise to O(1/L) corrections), and, in the case of correlation functions, from RG mixings of the source fields [this for example happens in the case of the correlation functions of the fermionic fieldĉ x , for which corrections are O(1/L)].
These scaling corrections have been confirmed by numerical results [5,24]. Therefore, we expect that the asymptotic OFSS of fermionic Kitaev wires and quantum Ising chains with open boundary conditions is generally approached with O(1/L) corrections.
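What an OFSS "data collapse" per Eq. (19) means in practice can be illustrated with synthetic data. In the sketch below we fabricate an observable that obeys O_a(t, L) = L^{−y_o} F(t/L^z) exactly (F is an arbitrary made-up scaling function, not one from the paper) and verify that the rescaled curves from different sizes coincide at equal Θ:

```python
import numpy as np

# Illustration of a scaling collapse: curves L^{y_o} * O_a(Theta * L^z, L)
# from different L fall on a single function of Theta = t / L^z.
# F below is a toy scaling function; y_o = z = 1 as for n_s at this CQT.
# Sizes are chosen as powers of two so the rescalings are float-exact.

y_o, z = 1.0, 1.0
F = lambda Theta: np.exp(-Theta) * np.cos(2 * np.pi * Theta)

def observable(t, L):
    return L ** (-y_o) * F(t / L ** z)

Theta = np.linspace(0.0, 2.0, 50)
curves = {L: L ** y_o * observable(Theta * L ** z, L) for L in (16, 32, 64)}
spread = max(np.max(np.abs(curves[L] - curves[16])) for L in (32, 64))
print(spread)   # ~ 0: perfect collapse by construction
```

With real data the collapse is only asymptotic, and the residual spread shrinks with the O(1/L) corrections discussed above.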
B. OFSS along the unitary QQ protocol
For the simplest unitary protocol reported in Sec. II C, where the quantum evolution is that of the isolated fermionic wire, the request that the dynamics remains within the critical regime implies that the temperature of the initial Gibbs state must be appropriately suppressed in the large-L OFSS limit, to obtain a nontrivial out-ofequilibrium critical limit. This is analogous to what happens within the equilibrium FSS, where one introduces the scaling variable [3][4][5]
Ξ ≡ L^z T, (21)
to allow for a nonzero temperature in the FSS of the observables. Therefore, like equilibrium FSS, we conjecture that the temperature of the initial Gibbs state enters the OFSS associated with the unitary QQ protocol by adding a further dependence on Ξ in the scaling functions (19).
In other words, a nontrivial asymptotic OFSS limit is expected to be realized in the large-L and large-t limits keeping also Ξ fixed, beside the scaling variables already defined in Eq. (20). Therefore, we expect that the OFSS of standard QQ protocols starting from ground states, cf. Eq. (19), changes into

O(t, x, L, w_i, w, T) ≈ L^{−y_o} O(Θ, X, Φ_i, Φ, Ξ), (22)

and analogously for its spatial average O_a and the correlation function G_O. The numerical analysis for the fermionic Kitaev wire under the unitary protocol fully supports this OFSS, obtained by extending the QQ FSS behaviors of closed systems starting from an initial ground state. This is clearly demonstrated by the curves reported in Fig. 2, associated with the quantum evolutions of the subtracted particle density n_s(t) and the fermionic correlation P(x, y, t) (the other fermionic correlation C(x, y, t) develops an analogous OFSS).
C. OFSS along the dissipative QQ protocol
We now discuss the dynamics arising from the dissipative protocol outlined in Sec. II C, when the quantum evolution after quenching is described by the Lindblad master equation (4) with the thermal-like dissipator (10), to modelize the interaction with a thermal bath characterized by a temperature T (which does not change after quenching) and decay rate γ.
We expect that the temperature T of the thermal bath must be rescaled as in the case of the unitary QQ protocol, i.e. we must consider again the associated scaling variable Ξ already defined in Eq. (21). However, since the QQ moves the system out of equilibrium, also the decay rate γ, and the corresponding time scale τ = γ^{−1}, associated with the interactions with the thermal bath is expected to play a relevant role in establishing a corresponding nontrivial OFSS limit. This was already noted in Ref. [97] in the analysis of dynamic protocols entailing the variation of the temperature at the critical point. When keeping τ constant in the FSS limit where the scaling variable Θ = t/L^z is kept fixed, in the large-L limit we eventually have

t = Θ L^z ≫ τ, (23)
which is the condition ensuring thermalization for any finite value Θ > 0. Therefore, when keeping τ fixed, the quantum evolution is not expected to develop a nontrivial OFSS limit. Indeed, in the large-L limit, the system turns out to suddenly approach an equilibrium Gibbs state (associated with the Hamiltonian parameter w and temperature T) with respect to the rescaled time Θ, without any further relevant evolution for any Θ > 0. Therefore, if the temperature is rescaled by keeping Ξ = L^z T fixed, we must recover the equilibrium FSS behavior in the presence of a thermal bath at temperature T, such as that associated with the subtracted particle density [5,24]

n_s,eq(w, L, T) ≈ L^{−y_n} N(Φ, Ξ), (24)

where Φ = L^{y_w} w, and the temperature dependence enters through the associated scaling variable Ξ = L^z T. In Fig. 3 we show some equilibrium data at the critical point w = Φ = 0, versus Ξ, showing the approach to the asymptotic large-L equilibrium FSS (24). The realization of the equilibrium FSS within the QQ protocol at fixed γ is demonstrated by the plots reported in Fig. 4, which show the somewhat trivial convergence toward the equilibrium FSS for any finite Θ > 0.

FIG. 4: Quantum evolution of the subtracted particle density arising from the dissipative QQ protocol, when rescaling all quantities involved in the quench protocol except for the decay rate γ. With increasing L, the curves appear to approach the equilibrium FSS value at finite temperature (where the temperature dependence enters through the scaling variable Ξ = L^z T) faster and faster, reflecting a nonuniform convergence for any Θ > 0. The dashed line shows the equilibrium value of n_s for Φ = 0 and Ξ = 1, which is asymptotically approached by the various curves.
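The equilibrium FSS of Eq. (24) can be probed directly within the BdG framework: at the critical point (Φ = 0) and fixed Ξ = L^z T, the combination L^{y_n} n_s,eq should become approximately L-independent. A sketch, with y_n = z = 1 and the same (assumed) BdG conventions used above; the thermal density uses ⟨ĉ†_x ĉ_x⟩ = ∑_k [|u_{xk}|² f(ω_k) + |v_{xk}|² (1 − f(ω_k))]:

```python
import numpy as np

# Equilibrium FSS check of Eq. (24): L * n_s,eq(L, T = Xi/L) at Phi = 0
# should be roughly the same for different L when Xi = L^z * T is fixed.
# BdG construction follows a standard convention (our illustration).

def bogoliubov(L, mu=-2.0, J=1.0):
    h = -mu * np.eye(L)
    D = np.zeros((L, L))
    for x in range(L - 1):
        h[x, x + 1] = h[x + 1, x] = -J
        D[x, x + 1], D[x + 1, x] = -J, +J
    w, W = np.linalg.eigh(np.block([[h, D], [-D, -h]]))
    pos = w > 1e-10
    return w[pos], W[:L, pos], W[L:, pos]   # omega_k, u_{xk}, v_{xk}

def particle_density(L, T):
    om, U, V = bogoliubov(L)
    f = 1.0 / (1.0 + np.exp(om / T)) if T > 0 else np.zeros_like(om)
    return np.sum(np.abs(U) ** 2 * f + np.abs(V) ** 2 * (1 - f)) / L

Xi = 1.0
vals = {}
for L in (20, 40, 80):
    n_s = particle_density(L, Xi / L) - particle_density(L, 0.0)
    vals[L] = L * n_s
    print(L, vals[L])   # roughly L-independent -> FSS of Eq. (24)
```

The residual L dependence reflects the O(1/L) corrections expected for open boundary conditions.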
The above results suggest that also the decay rate γ of the system-bath interactions must be rescaled to observe a nontrivial OFSS limit as a function of the time scaling variable Θ, to create the conditions for a balanced competition between the critical Hamiltonian driving and the interactions with the thermal bath. As already put forward in the case of other homogeneous dissipative terms in the Lindblad equation [5,55,[100][101][102]], for example associated with particle-decay or particle-pumping dissipative mechanisms, a nontrivial OFSS limit is obtained by rescaling the decay rate of the dissipative term, so that the scaling variable
Γ ≡ L^z γ ∼ γ/∆ (25)
is kept fixed in the OFSS limit, where ∆ is the energy difference of the lowest eigenstates ofĤ(w) at the critical point w = w c = 0. Then an OFSS behavior emerges from the nontrivial competition between the critical unitary dynamics and the dissipative driving arising from the thermal bath.
In conclusion, on the basis of the above scaling arguments, the OFSS arising from the dissipative QQ protocols in the presence of a thermal bath is expected to be given by

O_a(t, L, w_i, w, T, γ) ≈ L^{−y_o} O_a(Θ, Φ_i, Φ, Ξ, Γ), (26)

G_O(t, x_1, x_2, L, w_i, w, T, γ) ≈ L^{−2y_o} G_O(Θ, X_1, X_2, Φ_i, Φ, Ξ, Γ). (27)
In the large-Γ limit the above OFSS behaviors at fixed Ξ are expected to approach the corresponding equilibrium FSS, faster and faster in terms of Θ, matching the behavior at finite γ. Moreover, we also expect that the equilibrium FSS is approached in the large-Θ limit at fixed Γ and Ξ, independently of Γ, but faster and faster with increasing Γ.
Again, the numerical results for the particle density n s (t) and correlation functions P and C fully support the above OFSS equations, i.e. Eq. (26) for n s (t) with y o = y n = 1, and Eq. (27) for P and C with y o = y c = 1/2. Some results are reported in Fig. 5. We also stress that analogous results are expected for other observables, for example the correlation functions of the spin operator of the equivalent formulation provided by the quantum Ising chains.
IV. CONCLUSIONS
We have reported a study of the effects of thermal baths on the out-of-equilibrium dynamics of many-body systems within their quantum critical regime close to a zero-temperature CQT. In particular, we analyze the out-of-equilibrium quantum evolution arising from QQs of the Hamiltonian parameters within two different protocols involving a thermal bath coupled homogeneously to the system. Within the first protocol, named unitary QQ protocol, the thermal bath is used to prepare the system at t = 0 in a finite-temperature Gibbs state; the dynamics after quenching of the Hamiltonian parameters is then assumed unitary, i.e., the thermal bath is removed during the quantum evolution for t > 0. The second protocol, named dissipative QQ protocol, starts from the same initial condition, but the thermal bath is not removed after quenching, and the quantum evolution for t > 0 is assumed to be described by the Lindblad master equation (4). The dissipative term of the Lindblad equation is supposed to simulate a thermal bath, such that the many-body system is driven to a large-time finite-temperature Gibbs state. This dissipative protocol is characterized by a further time scale τ = γ^{−1}, related to the decay rate of the interactions between the system and the bath.
Within OFSS frameworks, we argue that, when the thermal baths are associated with a sufficiently small temperature, their effects can be taken into account by appropriate extensions of the zero-temperature out-of-equilibrium scaling laws describing soft QQs of isolated systems within the critical regime. For the unitary QQ protocol, where the thermal bath only determines the initial Gibbs state and the evolution is unitary, a nontrivial OFSS limit is simply obtained by rescaling the temperature as T ∼ L^{−z}, similarly to equilibrium FSS. Along the dissipative QQ protocol, where the thermal bath is not removed after quenching, the dynamics is more complicated, and the decay rate γ plays a relevant role. Indeed, in addition to the rescaling of the temperature T associated with the thermal bath, one also needs to rescale γ as γ ∼ L^{−z} to obtain a nontrivial OFSS. Otherwise, when keeping γ fixed, the dynamics converges toward the equilibrium FSS at finite temperature, which happens suddenly after quenching with respect to the time scale t_c ∼ L^z of the critical regime. Therefore the scaling behavior when keeping γ fixed becomes somewhat trivial, reproducing the equilibrium FSS for any rescaled time Θ = L^{−z} t > 0 in the large-L limit.
Our scaling arguments are supported by numerical results for the paradigmatic fermionic Kitaev model, or equivalently the quantum Ising chain, at its CQT separating the quantum disordered and ordered phases. We consider a particular modelization of the thermal bath that guarantees the asymptotic thermalization within the Lindblad formulation of the dynamics of open systems. However, we note that the scaling arguments used to arrive at the OFSS laws for critical QQs are general, and therefore we expect that the emerging out-of-equilibrium scenarios also apply to many-body systems at generic CQTs in contact with homogeneous thermal baths, in any dimension.
We finally remark that the out-of-equilibrium scaling arguments we put forward, leading to the OFSS of QQs in the presence of a thermal bath, can be extended to other protocols giving rise to out-of-equilibrium dynamics. Another interesting class of dynamic protocols entails slow variations of the Hamiltonian parameters across the critical regime of a quantum transition, such as those associated with the quantum Kibble-Zurek (KZ) problem (see e.g. Refs. [5,[12][13][14][15][16][17][18][19][20][21][22][23]]). In standard KZ protocols starting from the ground state for an initial parameter w_i < 0, the out-of-equilibrium quantum evolution arises from the linear time dependence of one Hamiltonian parameter, w(t) = t/t_s in Eq. (1), where t_s is the time scale of the KZ protocol. Since w(t) crosses the critical point at t = 0, the system passes through the quantum critical regime, moving away from equilibrium even in the large-t_s limit, and developing peculiar out-of-equilibrium scaling behaviors. In particular, the interplay between the size L of the system and the time scale t_s of the protocol develops OFSS behaviors [5,23] when t_s → ∞ and L → ∞, keeping the scaling variables Ω_t ≡ t/t_s^κ with κ = z/(y_w + z), and Υ ≡ t_s/L^{y_w+z} (thus Ω_t = t/t_s^{1/2} and Υ = t_s/L^2 for the fermionic Kitaev wire or quantum Ising chain) fixed.
KZ-like protocols can be also extended to systems interacting with a thermal bath, such as that outlined in Sec. II B, starting from a Gibbs state for an initial w_i < 0 and the temperature T of the thermal bath. Then we may consider a time evolution driven by the Lindblad master equation (4), with a time-dependent Hamiltonian Ĥ[w(t)] and the dissipator term (10), where also the Bogoliubov operators are assumed to be time dependent, to adapt themselves to the time dependence of w. Analogously to the OFSS of QQs in contact with thermal baths, to define a nontrivial OFSS limit in KZ protocols we expect that both the temperature T and the decay rate γ associated with the bath must be rescaled, as T ∼ L^{−z} and γ ∼ L^{−z}. If only the temperature of the thermal bath is rescaled as T ∼ L^{−z}, while γ > 0 is kept fixed, the time interval associated with a variation of Ω_t in the KZ scaling limit, i.e. ∆t ∼ t_s^κ ∆Ω_t, becomes eventually much larger than the time scale τ ∼ γ^{−1} of the interaction with the thermal bath. Since τ/∆t → 0 in the KZ limit, the system effectively thermalizes at each rescaled time Ω_t. Therefore, in the KZ limit the quantum evolution is expected to pass through equilibrium finite-temperature states, thus effectively resulting in adiabatic evolutions reproducing the equilibrium finite-temperature FSS as a function of L^{y_w} w(t). Therefore, like dissipative QQ protocols, the observation of a nontrivial OFSS in KZ protocols requires the simultaneous rescaling of the time scale τ associated with the interaction with the thermal bath. The necessary rescaling of the decay rate γ of the dissipative term in the Lindblad master equation has also been put forward for KZ protocols in the presence of other dissipative mechanisms, such as those related to particle decay or pumping [100].
Appendix: Numerical computations for the QQ protocols

The dynamics of the system in contact with the thermal bath described by the Lindblad master equation (4) with the dissipator term (10) leads to thermal states, such as those described by the density matrix reported in Eq. (13). To compute the correlation functions of the fermionic operators ĉ_x in thermal states of the Hamiltonian Ĥ(w), one can use the relation with the Bogoliubov eigenoperators b̂_k, cf. Eq. (9), and the thermal correlations of the Bogoliubov operators, i.e.
⟨b̂†_k b̂_q⟩ ≡ Tr[ρ_t(w, T) b̂†_k b̂_q] = δ_{kq} [1 + e^{ω_k/T}]^{−1}, (A1)
corresponding to the standard Fermi-Dirac distribution function. Note also that the other correlations ⟨b̂_k b̂_q⟩ and ⟨b̂†_k b̂†_q⟩ vanish. Then the correlation functions of the original fermionic field ĉ_x can be straightforwardly obtained from Eq. (9).
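Equation (A1) can be checked by brute force for a single mode, which is just a two-level system with Hamiltonian ω b̂†b̂. A minimal sketch, diagonal in the occupation basis:

```python
import numpy as np

# Brute-force check of the Fermi-Dirac weight (A1) for one Bogoliubov mode:
# the Gibbs state of H = omega * b^+ b gives <b^+ b> = 1/(1 + exp(omega/T)).

omega, T = 1.3, 0.7
energies = np.array([0.0, omega])       # occupations n = 0, 1
rho = np.diag(np.exp(-energies / T))
rho /= np.trace(rho)                    # normalized Gibbs state
n_op = np.diag([0.0, 1.0])              # b^+ b in the same basis
occ = np.trace(rho @ n_op)
print(occ, 1.0 / (1.0 + np.exp(omega / T)))   # identical values
```

The explicit trace reproduces Eq. (11) exactly, as it must for a fermionic mode.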
Computations for the unitary protocol
In the unitary QQ protocol, one starts from a Gibbs state associated with the Hamiltonian parameter w i and the temperature T , then at t = 0 one instantaneously changes w i → w and removes the contact with the thermal bath. Therefore the quantum evolution is unitary, described by the Schrödinger equation (3). One may easily obtain closed equations for the evolution of the correlation functions C and P defined in Eqs. (16) and (17).
We introduce the correlations

C_{x,y} = Tr[ρ(t) ĉ†_x ĉ_y], P_{x,y} = Tr[ρ(t) ĉ†_x ĉ†_y], (A2)

whose quantum evolution can be written as

dC_{x,y}/dt = i (C_{x,y+1} − C_{x−1,y} + C_{x,y−1} − C_{x+1,y}) − i (P†_{y,x−1} − P†_{y,x+1}) + i (P_{x,y−1} − P_{x,y+1}), (A3)

dP_{x,y}/dt = −i (P_{x,y+1} + P_{x+1,y} + P_{x,y−1} + P_{x−1,y}) − 2iµ P_{x,y} − i (δ_{x−1,y} − δ_{x+1,y}) − i (C_{x,y−1} − C_{y,x−1} − C_{x,y+1} + C_{y,x+1}). (A4)
The initial conditions are easily obtained from the relations with the thermal correlations of the Bogoliubov operators associated with the initial Gibbs state. Then the fermionic correlation functions are obtained as

C(x, y, t) = 2 Re C_{x,y}(t), P(x, y, t) = 2 Re P_{x,y}(t). (A5)
The above differential equations are solved using the fourth-order Runge-Kutta method. The particle density is obtained from the data of C_{x,x} = Tr[ρ(t) ĉ†_x ĉ_x].
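The fourth-order Runge-Kutta scheme used to integrate Eqs. (A3)-(A4) can be sketched generically; here we validate it on a single mode dy/dt = −iωy with the exact solution y(t) = e^{−iωt} (a stand-in, not the paper's full correlation-matrix integration). Halving the step should reduce the global error by about 2⁴ = 16:

```python
import numpy as np

# Generic RK4 integrator, checked on dy/dt = -i*omega*y against the exact
# solution; fourth-order convergence means the error ratio under step
# halving is ~ 16.

def rk4(f, y0, t_end, steps):
    y, t = y0, 0.0
    dt = t_end / steps
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt * k1 / 2)
        k3 = f(t + dt / 2, y + dt * k2 / 2)
        k4 = f(t + dt, y + dt * k3)
        y = y + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return y

omega = 2.0
f = lambda t, y: -1j * omega * y
exact = np.exp(-1j * omega * 5.0)
e1 = abs(rk4(f, 1.0 + 0j, 5.0, 200) - exact)
e2 = abs(rk4(f, 1.0 + 0j, 5.0, 400) - exact)
print(e1 / e2)   # ~ 16: fourth-order convergence
```

The same stepping applies unchanged to the coupled matrix equations (A3)-(A4), with y replaced by the pair of correlation matrices.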
Computations for the dissipative protocol
For the dissipative QQ protocol, where the thermal bath is kept in contact with the system, the evolution is driven by the Lindblad master equation (4), which can be equivalently written in terms of the time dependence of Heisenberg operators Ô_H(t), i.e. [84,86]

∂_t Ô_H(t) = i [Ĥ(w), Ô_H(t)] + γ D_T[Ô_H(t)], (A6)
where
D_T[Ô_H(t)] = ∑_k f(ω_k) (2 b̂†_k Ô_H(t) b̂_k − {Ô_H(t), b̂_k b̂†_k}) + ∑_k [1 − f(ω_k)] (2 b̂_k Ô_H(t) b̂†_k − {Ô_H(t), b̂†_k b̂_k}), (A7)
where b̂_k are the Bogoliubov operators associated with the Hamiltonian Ĥ(w). The initial state at t = 0 is the Gibbs state for the Hamiltonian parameter w_i; this state corresponds to the steady-state solution of Eq. (A6) with Ĥ(w_i). Then, the change of the Hamiltonian parameter to w ≠ w_i leads to a change of the Bogoliubov operators diagonalizing the Hamiltonian. We call {b̂_k} the operators which diagonalize Ĥ(w),

Ĥ(w) = ∑_{k=1}^{L} ω_k b̂†_k b̂_k, (A8)
where {ω_k} is the Bogoliubov spectrum associated with Ĥ(w). To evaluate the correlations of the Bogoliubov operators {b_k}, one can solve Eq. (A6) for pairs of operators, obtaining [84]

⟨b†_k b_k⟩_t = (1 − e^{−2γt}) f(ω_k) + e^{−2γt} ⟨b†_k b_k⟩_0,
⟨b†_k b_q⟩_t = e^{i(ω_k−ω_q)t − 2γt} ⟨b†_k b_q⟩_0,
⟨b†_k b†_q⟩_t = e^{i(ω_k+ω_q)t − 2γt} ⟨b†_k b†_q⟩_0,
⟨b_k b_q⟩_t = e^{−i(ω_k+ω_q)t − 2γt} ⟨b_k b_q⟩_0.   (A9)
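The closed-form relaxation in Eq. (A9) is straightforward to evaluate numerically. The sketch below (function names and parameter values are illustrative, not from the paper's code) encodes the first two lines: the diagonal occupation relaxes at rate 2γ toward the Fermi-Dirac value f(ω_k), while off-diagonal correlators oscillate and decay to zero.

```python
import numpy as np

def fermi(omega, T):
    # Fermi-Dirac thermal occupation f(omega) at temperature T
    return 1.0 / (np.exp(omega / T) + 1.0)

def occupation(t, omega_k, n0, gamma, T):
    # First line of Eq. (A9): <b†_k b_k>_t relaxes exponentially,
    # at rate 2*gamma, from its initial value n0 toward f(omega_k).
    decay = np.exp(-2.0 * gamma * t)
    return (1.0 - decay) * fermi(omega_k, T) + decay * n0

def offdiag(t, omega_k, omega_q, c0, gamma):
    # Second line of Eq. (A9): <b†_k b_q>_t oscillates at frequency
    # (omega_k - omega_q) and decays to zero at rate 2*gamma.
    return np.exp(1j * (omega_k - omega_q) * t - 2.0 * gamma * t) * c0
```

In the t → ∞ limit every mode therefore thermalizes to f(ω_k), independently of the initial correlations, which is the asymptotic thermal state discussed in the main text.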
The initial values ⟨b†_k b_q⟩_0 of the correlations are computed on the initial Gibbs state associated with w_i, and can be obtained using the relation between the Bogoliubov operators associated with w_i and those associated with w. This relation can be formally derived as follows [84]. Introducing the fermionic Nambu field C† = (ĉ†_1, ..., ĉ†_L, ĉ_1, ..., ĉ_L), its relation with the Bogoliubov operators B(w)† = (b†_1, ..., b†_L, b_1, ..., b_L) corresponding to the Hamiltonian Ĥ_K(w) is given by a unitary transformation, C = T(w) B(w); see e.g. Ref. [84] for more details. Therefore one can formally derive the relation between the Bogoliubov operators corresponding to the Hamiltonian parameters w_i and w from the general relation

B(w_2) = T(w_2)† T(w_1) B(w_1).   (A10)
Finally, to compute the time-dependent observables defined in Sec. II D, one can use the relations between the fermionic correlation functions associated with ĉ_x and those of the Bogoliubov operators b_k, such as

C(x, y) = Σ_{k,q=1}^{L} [ A*_{xk} A_{yq} ⟨b†_k b_q⟩ + B*_{xk} B_{yq} ⟨b_k b†_q⟩ + A*_{xk} B_{yq} ⟨b†_k b†_q⟩ + B*_{xk} A_{yq} ⟨b_k b_q⟩ ],   (A11)

where A and B are the matrices entering Eq. (9).
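The double sum in Eq. (A11) is a set of matrix contractions: for a matrix M_{kq} of Bogoliubov correlators, Σ_{k,q} A*_{xk} M_{kq} A_{yq} = (A* M Aᵀ)_{xy}. A minimal sketch (the function name and the illustrative check are our own; A and B stand for the matrices of Eq. (9)):

```python
import numpy as np

def fermionic_correlations(A, B, bdag_b, b_bdag, bdag_bdag, b_b):
    # Eq. (A11) as matrix contractions: each argument after A, B is an
    # L x L matrix of Bogoliubov correlators, e.g. bdag_b[k, q] = <b†_k b_q>.
    return (A.conj() @ bdag_b @ A.T
            + B.conj() @ b_bdag @ B.T
            + A.conj() @ bdag_bdag @ B.T
            + B.conj() @ b_b @ A.T)

# Illustrative check: if the Bogoliubov modes coincide with the lattice
# fermions (A = identity, B = 0) and the state is thermal with mode
# occupations f_k, the correlation matrix must reduce to diag(f).
L = 4
f = np.array([0.1, 0.2, 0.3, 0.4])
A, B = np.eye(L), np.zeros((L, L))
Z = np.zeros((L, L))
C = fermionic_correlations(A, B, np.diag(f), np.diag(1.0 - f), Z, Z)
```

Combining this contraction with the correlators of Eq. (A9) evaluated at time t gives C(x, y, t) along the dissipative protocol.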
FIG. 1: The quantum evolution of the subtracted particle density n_s(t), cf. Eq. (15), for the dissipative QQ protocol entailing a dissipative dynamics after the QQ at t = 0 of the Hamiltonian parameter w, describing the persistent interaction with the thermal bath, cf. Eqs. (
FIG. 2: OFSS behavior of the subtracted particle density (bottom) and the fermionic correlation function P(x = L/3, y = 2L/3, t), cf. Eq. (16), arising from the unitary QQ protocol, for various lattice sizes L, at fixed Ξ = L^z T = 1, Φ_i = L^{y_w} w_i = −1 and Φ = L^{y_w} w = 0, versus the time scaling variable Θ = t/L^z. These computations nicely support the OFSS behaviors reported in Eq. (22). The inset of the bottom figure shows that the approach to the OFSS limit is consistent with O(1/L) corrections. Analogous results are obtained for other values of the scaling variables.
FIG. 3: Equilibrium FSS of the subtracted particle density n_{s,eq} at the critical point w = 0, versus the rescaled temperature Ξ = L^z T. With increasing L, the data show the expected convergence to the equilibrium FSS reported in Eq. (24) with y_n = 1.
FIG. 5: Quantum evolutions along the dissipative protocol, fully supporting the OFSS reported in Eqs. (26) and (27). We report curves for L n_s (bottom), L P(x = L/3, y = 2L/3, t) (middle), and C(x = L/3, y = 2L/3, t) (top), for various values of L, at fixed Φ_i = −1, Φ = 0, Ξ = 1, and two values of Γ = L^z γ, i.e. Γ = 1, 10 (except for the top figure, where we only report data for Γ = 10 to ensure good readability). The inset of the top figure shows that the OFSS is approached with O(1/L) corrections. Analogous results are obtained for other values of the scaling variables.
Asymptotic thermal states
Acknowledgments

We thank Giulia Piccitto and Davide Rossini for interesting and useful discussions.

Appendix A: Details on the computations

In this section we provide some details of the computations for the fermionic Kitaev wire in the presence of a thermal bath.
I. Bloch, Quantum coherence and entanglement with ultracold atoms in optical lattices, Nature 453, 1016 (2008).
I. M. Georgescu, S. Ashhab, and F. Nori, Quantum simulation, Rev. Mod. Phys. 86, 153 (2014).
S. L. Sondhi, S. M. Girvin, J. P. Carini, and D. Shahar, Continuous quantum phase transitions, Rev. Mod. Phys. 69, 315 (1997).
S. Sachdev, Quantum Phase Transitions (Cambridge University Press, Cambridge, England, 1999).
D. Rossini and E. Vicari, Coherent and dissipative dynamics at quantum phase transitions, Phys. Rep. 936, 1 (2021).
S. Bhattacharyya, S. Dasgupta, and A. Das, Signature of a continuous quantum phase transition in non-equilibrium energy absorption: Footprints of criticality on higher excited states, Sci. Rep. 5, 16490 (2015).
S. Roy, R. Moessner, and A. Das, Locating topological phase transitions using nonequilibrium signatures in local bulk observables, Phys. Rev. B 95, 041105(R) (2017).
M. Heyl, F. Pollmann, and B. Dora, Detecting Equilibrium and Dynamical Quantum Phase Transitions in Ising Chains via Out-of-Time-Ordered Correlators, Phys. Rev. Lett. 121, 016801 (2018).
A. Pelissetto, D. Rossini, and E. Vicari, Dynamic finite-size scaling after a quench at quantum transitions, Phys. Rev. E 97, 052148 (2018).
P. Titum, J. T. Iosue, J. R. Garrison, A. V. Gorshkov, and Z.-X. Gong, Probing Ground-State Phase Transitions through Quench Dynamics, Phys. Rev. Lett. 123, 115701 (2019).
A. Pelissetto and E. Vicari, Scaling behaviors at quantum and classical first-order transitions, arXiv:2302.08238, to appear in 50 years of the renormalization group, dedicated to the memory of Michael E. Fisher, edited by A. Aharony, O. Entin-Wohlman, D. Huse, and L. Radzihovsky, World Scientific.
T. W. B. Kibble, Topology of Cosmic Strings and Domains, J. Phys. A 9, 1387 (1976).
T. W. B. Kibble, Some implications of a cosmological phase transition, Phys. Rep. 67, 183 (1980).
W. H. Zurek, Cosmological Experiments in Superfluid Helium?, Nature 317, 505 (1985).
W. H. Zurek, Cosmological experiments in condensed matter systems, Phys. Rep. 276, 177 (1996).
W. H. Zurek, U. Dorner, and P. Zoller, Dynamics of a quantum phase transition, Phys. Rev. Lett. 95, 105701 (2005).
A. Polkovnikov and V. Gritsev, Breakdown of the adiabatic limit in low-dimensional gapless systems, Nature Phys. 4, 477 (2008).
J. Dziarmaga, Dynamics of a quantum phase transition and relaxation to a steady state, Adv. Phys. 59, 1063 (2010).
A. Dutta, G. Aeppli, B. K. Chakrabarti, U. Divakaran, T. F. Rosenbaum, and D. Sen, Quantum phase transitions in transverse field spin models: From statistical physics to quantum information (Cambridge University Press, 2015).
A. Polkovnikov, K. Sengupta, A. Silva, and M. Vengalattore, Colloquium: Nonequilibrium dynamics of closed interacting quantum systems, Rev. Mod. Phys. 83, 863 (2011).
A. Chandran, A. Erez, S. S. Gubser, and S. L. Sondhi, Kibble-Zurek problem: Universality and the scaling limit, Phys. Rev. B 86, 064304 (2012).
M. M. Rams, J. Dziarmaga, and W. H. Zurek, Symmetry Breaking Bias and the Dynamics of a Quantum Phase Transition, Phys. Rev. Lett. 123, 130603 (2019).
F. De Franco and E. Vicari, Out-of-equilibrium finite-size scaling in generalized Kibble-Zurek protocols crossing quantum phase transitions in the presence of symmetry-breaking perturbations, Phys. Rev. B 107, 115175 (2023).
M. Campostrini, A. Pelissetto, and E. Vicari, Finite-size scaling at quantum transitions, Phys. Rev. B 89, 094516 (2014).
T. Niemeijer, Some exact calculations on a chain of spins 1/2, Physica 36, 377 (1967); Some exact calculations on a chain of spins 1/2 II, Physica 39, 313 (1968).
E. Barouch, B. McCoy, and M. Dresden, Statistical mechanics of the XY model. I, Phys. Rev. A 2, 1075 (1970).
K. Sengupta, S. Powell, and S. Sachdev, Quench dynamics across quantum critical points, Phys. Rev. A 69, 053616 (2004).
G. De Chiara, S. Montangero, P. Calabrese, and R. Fazio, Entanglement entropy dynamics of Heisenberg chains, J. Stat. Mech. (2006) P03001.
L. E. Sadler, J. M. Higbie, S. R. Leslie, M. Vengalattore, and D. M. Stamper-Kurn, Spontaneous symmetry breaking in a quenched ferromagnetic spinor Bose-Einstein condensate, Nature 443, 312 (2006).
M. Rigol, V. Dunjko, V. Yurovsky, and M. Olshanii, Relaxation in a Completely Integrable Many-Body Quantum System: An Ab Initio Study of the Dynamics of the Highly Excited States of 1D Lattice Hard-Core Bosons, Phys. Rev. Lett. 98, 050405 (2007).
M. Rigol, V. Dunjko, and M. Olshanii, Thermalization and its mechanism for generic isolated quantum systems, Nature 452, 854 (2008).
M. Žnidarič, T. Prosen, and P. Prelovšek, Many-body localization in the Heisenberg XXZ magnet in a random field, Phys. Rev. B 77, 064426 (2008).
T. Prosen and M. Žnidarič, Matrix product simulations of non-equilibrium steady states of quantum spin chains, J. Stat. Mech. (2009) P02035.
F. Iglói and H. Rieger, Quantum Relaxation after a Quench in Systems with Boundaries, Phys. Rev. Lett. 106, 035701 (2011).
H. Rieger and F. Iglói, Semiclassical theory for quantum quenches in finite transverse Ising chains, Phys. Rev. B 84, 165117 (2011).
A. Gambassi and A. Silva, Large Deviations and Universality in Quantum Quenches, Phys. Rev. Lett. 109, 250602 (2012).
P. Calabrese, F. H. Essler, and M. Fagotti, Quantum quench in the transverse field Ising chain: I. Time evolution of order parameter correlators, J. Stat. Mech. (2012) P07016.
P. Calabrese, F. H. Essler, and M. Fagotti, Quantum quenches in the transverse field Ising chain: II. Stationary state properties, J. Stat. Mech. (2012) P07022.
B. Blass, H. Rieger, and F. Iglói, Quantum relaxation and finite size effects in the XY chain in a transverse field after global quenches, Eur. Phys. Lett. 99, 30004 (2012).
J.-S. Caux and F. H. L. Essler, Time evolution of local observables after quenching to an integrable model, Phys. Rev. Lett. 110, 257203 (2013).
M. Heyl, A. Polkovnikov, and S. Kehrein, Dynamical quantum phase transitions in the transverse-field Ising model, Phys. Rev. Lett. 110, 135704 (2013).
M. Fagotti, M. Collura, F. H. L. Essler, and P. Calabrese, Relaxation after quantum quenches in the spin-1/2 Heisenberg XXZ chain, Phys. Rev. B 89, 125101 (2014).
W. Fu, L.-Y. Hung, and S. Sachdev, Quantum quenches and competing orders, Phys. Rev. B 90, 024506 (2014).
J. Cardy, Thermalization and Revivals after a Quantum Quench in Conformal Field Theory, Phys. Rev. Lett. 112, 220401 (2014).
R. Nandkishore and D. A. Huse, Many body localization and thermalization in quantum statistical mechanics, Annu. Rev. Condens. Matter Phys. 6, 15 (2015).
A. Chiocchetta, M. Tavora, A. Gambassi, and A. Mitra, Short-time universal scaling and light-cone dynamics after a quench in an isolated quantum system in d spatial dimensions, Phys. Rev. B 94, 134311 (2016).
P. Calabrese and J. Cardy, Quantum quenches in 1 + 1 dimensional conformal field theories, J. Stat. Mech. (2016) 064003.
D. Bernard and B. Doyon, Conformal field theory out of equilibrium: a review, J. Stat. Mech. (2016) 064005.
E. Ilievski, M. Medenjak, T. Prosen, and L. Zadnik, Quasilocal charges in integrable lattice systems, J. Stat. Mech. (2016) 064008.
T. Langen, T. Gasenzer, and J. Schmiedmayer, Prethermalization and universal dynamics in near-integrable quantum systems, J. Stat. Mech. (2016) 064009.
R. Vasseur and J. E. Moore, Nonequilibrium quantum dynamics and transport: from integrability to many-body localization, J. Stat. Mech. (2016) 064010.
A. Nahum, J. Ruhman, S. Vijay, and J. Haah, Quantum Entanglement Growth under Random Unitary Dynamics, Phys. Rev. X 7, 031016 (2017).
M. Heyl, Dynamical quantum phase transitions: a review, Rep. Prog. Phys. 81, 054001 (2018).
D. Nigro, D. Rossini, and E. Vicari, Dynamic scaling of work fluctuations after quenches near quantum transitions, J. Stat. Mech. (2019) 023104.
D. Nigro, D. Rossini, and E. Vicari, Competing coherent and dissipative dynamics close to quantum criticality, Phys. Rev. A 100, 052108 (2019); D. Rossini and E. Vicari, Scaling behavior of stationary states arising from dissipation at continuous quantum transitions, Phys. Rev. B 100, 174303 (2019).
J. Surace, L. Tagliacozzo, and E. Tonni, Operator content of entanglement spectra after global quenches in the transverse field Ising chain, Phys. Rev. B 101, 241107(R) (2020).
D. Rossini and E. Vicari, Dynamics after quenches in one-dimensional quantum Ising-like systems, Phys. Rev. B 102, 054444 (2020).
O. A. Castro-Alvaredo, M. Lencsés, I. M. Szécsényi, and J. Viti, Entanglement oscillations near a quantum critical point, Phys. Rev. Lett. 124, 230601 (2020).
A. Pelissetto, D. Rossini, and E. Vicari, Scaling properties of the dynamics at first-order quantum transitions when boundary conditions favor one of the two phases, Phys. Rev. E 102, 012143 (2020).
P. Ruggiero, P. Calabrese, L. Foini, and T. Giamarchi, Quenches in initially coupled Tomonaga-Luttinger liquids: a conformal field theory approach, SciPost Phys. 11, 055 (2021).
M. Greiner, O. Mandel, T. Esslinger, T. W. Hänsch, and I. Bloch, Quantum phase transition from a superfluid to a Mott insulator in a gas of ultracold atoms, Nature 415, 39 (2002).
T. Kinoshita, T. Wenger, and D. S. Weiss, A quantum Newton's cradle, Nature 440, 900 (2006).
S. Hofferberth, I. Lesanovsky, B. Fischer, T. Schumm, and J. Schmiedmayer, Non-equilibrium coherence dynamics in one-dimensional Bose gases, Nature 449, 324 (2007).
S. Trotzky, Y.-A. Chen, A. Flesch, I. P. McCulloch, U. Schollwöck, J. Eisert, and I. Bloch, Probing the relaxation towards equilibrium in an isolated strongly correlated one-dimensional Bose gas, Nat. Phys. 8, 325 (2012).
M. Cheneau, P. Barmettler, D. Poletti, M. Endres, P. Schauß, T. Fukuhara, C. Gross, I. Bloch, C. Kollath, and S. Kuhr, Light-cone-like spreading of correlations in a quantum many-body system, Nature 481, 484 (2012).
M. Gring, M. Kuhnert, T. Langen, T. Kitagawa, B. Rauer, M. Schreitl, I. Mazets, D. Adu Smith, E. Demler, and J. Schmiedmayer, Relaxation and Prethermalization in an Isolated Quantum System, Science 337, 1318 (2012).
M. Schreiber, S. S. Hodgman, P. Bordia, H. P. Lüschen, M. H. Fischer, R. Vosk, E. Altman, U. Schneider, and I. Bloch, Observation of many-body localization of interacting fermions in a quasirandom optical lattice, Science 349, 842 (2015).
S. Braun, M. Friesdorf, S. S. Hodgman, M. Schreiber, J. P. Ronzheimer, A. Riera, M. del Rey, I. Bloch, J. Eisert, and U. Schneider, Emergence of coherence and the dynamics of quantum phase transitions, Proc. Natl. Acad. Sci. USA 112, 3641 (2015).
Y. S. Patil, S. Chakram, and M. Vengalattore, Measurement-induced localization of an ultracold lattice gas, Phys. Rev. Lett. 115, 140402 (2015).
A. M. Kaufman, M. E. Tai, A. Lukin, M. Rispoli, R. Schittko, P. M. Preiss, and M. Greiner, Quantum thermalization through entanglement in an isolated many-body system, Science 353, 794 (2016).
J. Smith, A. Lee, P. Richerme, B. Neyenhuis, P. W. Hess, P. Hauke, M. Heyl, D. A. Huse, and C. Monroe, Many-body localization in a quantum simulator with programmable random disorder, Nat. Phys. 12, 907 (2016).
P. Bordia, H. Lüschen, U. Schneider, M. Knap, and I. Bloch, Periodically driving a many-body localized quantum system, Nat. Phys. 13, 460 (2017).
J. Zhang, G. Pagano, P. W. Hess, A. Kyprianidis, P. Becker, H. Kaplan, A. V. Gorshkov, Z.-X. Gong, and C. Monroe, Observation of a many-body dynamical phase transition with a 53-qubit quantum simulator, Nature 551, 601 (2017).
T. Tomita, S. Nakajima, I. Danshita, Y. Takasu, and Y. Takahashi, Observation of the Mott insulator to superfluid crossover of a driven-dissipative Bose-Hubbard system, Sci. Adv. 3, e1701513 (2017).
U. Mishra, H. Cheraghi, S. Mahdavifar, R. Jafari, and A. Akbari, Dynamical quantum correlations after sudden quenches, Phys. Rev. A 98, 052338 (2018).
R. Jafari, H. Johannesson, A. Langari, and M. A. Martin-Delgado, Quench dynamics and zero-energy modes: The case of the Creutz model, Phys. Rev. B 99, 054302 (2019).
T. Kohlert, S. Scherg, X. Li, H. P. Lüschen, S. Das Sarma, I. Bloch, and M. Aidelsburger, Observation of many-body localization in a one-dimensional system with a single-particle mobility edge, Phys. Rev. Lett. 122, 170403 (2019).
C. Maier, T. Brydges, P. Jurcevic, N. Trautmann, C. Hempel, B. P. Lanyon, P. Hauke, R. Blatt, and C. F. Roos, Environment-assisted quantum transport in a 10-qubit network, Phys. Rev. Lett. 122, 050501 (2019).
G. Lindblad, On the generators of quantum dynamical semigroups, Commun. Math. Phys. 48, 119 (1976).
V. Gorini, A. Kossakowski, and E. C. G. Sudarshan, Completely positive dynamical semigroups of N-level systems, J. Math. Phys. 17, 821 (1976).
H.-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, New York, 2002).
A. Rivas and S. F. Huelga, Open Quantum System: An Introduction (SpringerBriefs in Physics, Springer, 2012).
L. M. Sieberer, M. Buchhold, and S. Diehl, Keldysh field theory for driven open quantum systems, Rep. Prog. Phys. 79, 096001 (2016).
A. D'Abbruzzo and D. Rossini, Self-consistent microscopic derivation of Markovian master equations for open quadratic quantum systems, Phys. Rev. A 103, 052209 (2021).
A. Yu. Kitaev, Unpaired Majorana fermions in quantum wires, Phys. Usp. 44, 131 (2001).
G. Piccitto, M. Campisi, and D. Rossini, The Ising critical quantum Otto engine, New Journal of Physics 24, 103023 (2022).
F. Tarantelli and E. Vicari, Out-of-equilibrium dynamics arising from slow round-trip variations of Hamiltonian parameters across quantum and classical critical points, Phys. Rev. B 105, 235124 (2022).
M. Schmitt, M. M. Rams, J. Dziarmaga, M. Heyl, and W. Zurek, Quantum phase transition dynamics in the two-dimensional transverse-field Ising model, Sci. Adv. 8, eabl6850 (2022).
R. Islam, E. E. Edwards, K. Kim, S. Korenblit, C. Noh, H. Carmichael, G.-D. Lin, L.-M. Duan, C.-C. Joseph Wang, J. K. Freericks, and C. Monroe, Onset of a quantum phase transition with a trapped ion quantum simulator, Nat. Commun. 2, 377 (2011).
S. Debnath, N. M. Linke, C. Figgatt, K. A. Landsman, K. Wright, and C. Monroe, Demonstration of a small programmable quantum computer with atomic qubits, Nature 536, 63 (2016).
J. Simon, W. S. Bakr, R. Ma, M. E. Tai, P. M. Preiss, and M. Greiner, Quantum simulation of antiferromagnetic spin chains in an optical lattice, Nature 472, 307 (2011).
H. Labuhn, D. Barredo, S. Ravets, S. de Leseleuc, T. Macri, T. Lahaye, and A. Browaeys, Tunable two-dimensional arrays of single Rydberg atoms for realizing quantum Ising models, Nature 534, 667 (2016).
Y. Salathé, M. Mondal, M. Oppliger, J. Heinsoo, P. Kurpiers, A. Potocnik, A. Mezzacapo, U. Las Heras, L. Lamata, E. Solano, S. Filipp, and A. Wallraff, Digital quantum simulation of spin models with circuit quantum electrodynamics, Phys. Rev. X 5, 021027 (2015).
A. Cervera-Lierta, Exact Ising model simulation on a quantum computer, Quantum 2, 114 (2018).
P. Pfeuty, The one-dimensional Ising model with a transverse field, Ann. Phys. 57, 79 (1970).
J.-P. Blaizot and G. Ripka, Quantum Theory of Finite Systems (MIT Press, 1986).
A. Bácsi and B. Dóra, Kibble-Zurek scaling due to environment temperature quench in the transverse field Ising model, Sci. Rep. 13, 1 (2023).
M. Caselle, M. Hasenbusch, A. Pelissetto, and E. Vicari, Irrelevant operators in the two-dimensional Ising model, J. Phys. A 35, 4861 (2002).
A. Pelissetto and E. Vicari, Critical phenomena and renormalization group theory, Phys. Rep. 368, 549 (2002).
D. Rossini and E. Vicari, Dynamic Kibble-Zurek scaling framework for open dissipative many-body systems crossing quantum transitions, Phys. Rev. Research 2, 023611 (2020).
F. Tarantelli and E. Vicari, Quantum critical systems with dissipative boundaries, Phys. Rev. B 104, 075140 (2021).
A. Franchi and F. Tarantelli, Liouvillian gap and out-of-equilibrium dynamics of a sunburst Kitaev ring: from local to uniform dissipation, arXiv:2303.04207.
ELEMENTARY PROOFS OF REPRESENTATION BY TERNARY QUADRATIC FORMS
Benjamin Rainear
Katherine Thompson
arXiv:2206.00589v1 [math.NT] 1 Jun 2022
Mordell in 1958 [15] gave a new proof of the three squares theorem. Those techniques were generalized by Blackwell, et al., in 2016 [1] to characterize the integers represented by the remaining six "Ramanujan-Dickson ternaries". We continue the generalization of these techniques to four additional forms.
Introduction
The theory of quadratic forms has a long and rich history. Of particular interest is the question of representation of an integer by a form. In studying universal and almost universal positive definite forms (which, in particular, concerns four or more variables), knowing which integers are represented by ternary subforms not only is key from both theoretical and computational purposes, but also is a delicate and nontrivial matter. The four-squares theorem of Lagrange appeared in 1770, and the three-squares theorem of Legendre did not appear until 1797. And yet, assuming the three-squares theorem, the proof of the four squares theorem is at most a few lines. Even much more recent results such as the 451 paper of Rouse [18], which gives conditions under which quadratic forms are guaranteed to represent all odd positive integers, makes assumptions about certain ternary subforms-conditional on the Generalized Riemann Hypothesis; in considering the 24888 escalators Rouse used knowledge of ternary subforms to handle 9812 of these cases. Among these was a form where, if instead one took a more standard analytic approach and considered its corresponding theta series, would have required looking at a space of modular forms where the cuspidal subspace was 2604 dimensional.
This project is heavily influenced by recent work of the second author in Blackwell et al. [1]. Many of the results in that paper were not new; however, it was the technique that was unique. Concentrating mainly on the Ramanujan-Dickson ternaries (which, in particular, were ternary forms of determinant at most 10), the authors showed which positive integers were represented by certain positive definite ternary forms. Proving what fails to be represented by a quadratic form is typically simple and straightforward; it is proving that m ∈ N is represented that is challenging. The authors began with a generic quadratic form of determinant D. They then showed a series of congruence conditions simultaneously held which guaranteed that the form in question represented a particular m ∈ N, and also represented values that inequivalent forms of the same determinant D failed to represent.
As noted in the abstract, this paper begins by considering ternary forms not handled in [1]; indeed, the forms considered here are not as famous as the Ramanujan-Dickson forms and the following representation results do not seem to appear anywhere else in the literature. More crucially, we consider determinants D much higher than those considered before, thus forcing many more candidate forms to be simultaneously eliminated. Last, we note that in [1] those excepted values by the other candidates happened to lie in the same congruence class, a congruence class which in turn had little to do directly with the determinant of the form. That is not the case here. The excepted values of the other candidates of determinant D are of the form Dk, where k is a quadratic nonresidue modulo D. While in [1] it was designed so that x = 1, y = z = 0 would be a vector so that Q(x, y, z) produces an excepted value, that is not possible here because of what specifically is not represented by the other forms of determinant D. Therefore, the arguments for specific vector evaluation and subsequent elimination of other candidate forms are much more intricate.
Our main results are:

Theorem 1. A positive integer m is represented by 2x^2 + 2xy + 2xz + 2y^2 + 2yz + 3z^2 if and only if m ≠ 4^k(8ℓ + 1).

Theorem 2. A positive integer m is represented by x^2 + 2y^2 + 2yz + 6z^2 if and only if m ≠ 4^k(8ℓ + 5).

Theorem 3. A positive integer m is represented by x^2 + 3y^2 + 2yz + 5z^2 if and only if m ≠ 4^k(16ℓ + 2).

Theorem 4. A positive integer m is represented by 2x^2 + 2xy + 3y^2 + 2yz + 5z^2 if and only if m ≠ 4^k(8ℓ + 1).
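As a sanity check (not part of the paper), the criterion in Theorem 1 can be verified for small m by brute force. One can check that the Gram matrix of Q_1 = 2x^2 + 2xy + 2xz + 2y^2 + 2yz + 3z^2 has smallest eigenvalue 1, so any representation of m ≤ 100 uses |x|, |y|, |z| ≤ 10. A minimal Python sketch:

```python
# Brute-force check of Theorem 1 for m <= 100 (a sketch, not from the paper).

def form_values(Q, bound, maxval):
    """All values <= maxval attained by Q on the box |x|, |y|, |z| <= bound."""
    vals = set()
    for x in range(-bound, bound + 1):
        for y in range(-bound, bound + 1):
            for z in range(-bound, bound + 1):
                v = Q(x, y, z)
                if v <= maxval:
                    vals.add(v)
    return vals

def excluded(m):
    """True iff m = 4^k (8l + 1) for some integers k, l >= 0."""
    while m % 4 == 0:
        m //= 4
    return m % 8 == 1

Q1 = lambda x, y, z: 2*x*x + 2*x*y + 2*x*z + 2*y*y + 2*y*z + 3*z*z

# Q1(x, y, z) >= x^2 + y^2 + z^2, so the box |x|, |y|, |z| <= 10 captures
# every representation of m <= 100.
vals = form_values(Q1, 10, 100)
for m in range(1, 101):
    assert (m in vals) == (not excluded(m)), m
```

The same loop, with the criteria adjusted, verifies Theorems 2-4 for small m.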
The remainder of the paper is organized as follows: after a brief but more detailed background section, we proceed to the proofs of the theorems in order. As these proofs are constructive, we end with concrete examples.
Background
For general references on the theory of quadratic forms, we refer the reader to [3] and [13].
An n-ary quadratic form over Z is a polynomial Q : Z^n → Z given by

Q(x) = Σ_{1 ≤ i ≤ j ≤ n} a_ij x_i x_j ∈ Z[x_1, ..., x_n].
We say a quadratic form is positive definite if Q( x) ≥ 0 for all x ∈ Z n and if Q( x) = 0 if and only if x = 0.
To each quadratic form there is an associated symmetric matrix A_Q ∈ M_n((1/2)Z) whose entries c_ij are given by

c_ij = a_ij if i = j, and c_ij = a_ij/2 if i ≠ j.
With this, we note that x^T A_Q x = Q(x). When A_Q ∈ M_n(Z) we say that Q is classically integral. We say that the determinant det(Q) = det(A_Q). Last, we say that two forms Q and Q′ are equivalent if there exists M ∈ SL_n(Z) such that M^T A_Q M = A_{Q′}.
From now on, when we use the word "form" we mean "ternary, positive definite, classically integral quadratic form."
In [1] the main idea was as follows: to show that some square-free m ∈ N is represented by a form R of determinant D, suppose

mR(x, y, z) = (Ax + By + mz)^2 + ax^2 + 2hxy + by^2.
If one can show all of the following conditions hold for some integers A, B, h, b, a:
• ab − h^2 = Dm;
• A^2 + a ≡ B^2 + b ≡ 2AB + 2h ≡ 0 (mod m);
• (−Dm/a) = 1 = (−a/p) where p|m is prime;
• R(x) = k where k ∈ N is not represented by forms Q′ of determinant D with Q′ not equivalent to Q,

then R must be equivalent to Q. We continue to use this same basic approach. However, in [1] simplifications were made that can no longer be afforded. Namely, they took b ≡ B ≡ h ≡ 0 (mod m), and they made (A^2 + a)/m = k (therefore, x = (1, 0, 0)). Here, we do not (necessarily) have b ≡ B ≡ h ≡ 0 (mod m), and moreover, we have (with one exception) x = (1, 1, 0).
We note that the choice of b ≡ B ≡ h ≡ 0 (mod m) in [1] was to make 2AB + 2h ≡ 0 (mod m) immediate. One key realization here is that once A and B have been determined so that A^2 + a ≡ B^2 + b ≡ 0 (mod m), even if A, B ≢ 0 (mod m), one still can ensure 2AB + 2h ≡ 0 (mod m) and in fact that AB + h ≡ 0 (mod m). For a prime p|m:
(AB + h)^2 ≡ A^2 B^2 + 2ABh + h^2 (mod p)
(AB + h)^2 ≡ h^2 + 2ABh + h^2 (mod p)
(AB + h)^2 − 2h(h + AB) ≡ 0 (mod p)
(AB + h)(AB − h) ≡ 0 (mod p).
And so, AB + h ≡ 0 (mod p) can be ensured for all p|m, which by the Chinese Remainder Theorem guarantees AB + h ≡ 0 (mod m) has a solution.
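This congruence argument can be illustrated numerically (a sketch, not from the paper): whenever ab ≡ h^2, A^2 ≡ −a, and B^2 ≡ −b modulo a prime p, one gets AB ≡ ±h (mod p), so replacing B by −B if necessary forces AB + h ≡ 0 (mod p). The sample prime p = 17 below is an arbitrary choice for illustration.

```python
# Check AB ≡ ±h (mod p) for every admissible pair (a, b) modulo a sample prime.
p = 17
squares = {r * r % p for r in range(p)}

for a in range(1, p):
    for b in range(1, p):
        # A, B, h exist only when -a, -b, and ab are all squares mod p.
        if (-a) % p not in squares or (-b) % p not in squares or a * b % p not in squares:
            continue
        A = next(r for r in range(p) if r * r % p == (-a) % p)
        B = next(r for r in range(p) if r * r % p == (-b) % p)
        h = next(r for r in range(p) if r * r % p == a * b % p)
        # (AB - h)(AB + h) ≡ A^2 B^2 - h^2 ≡ ab - ab ≡ 0 (mod p), p prime:
        assert (A * B - h) % p == 0 or (A * B + h) % p == 0
```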
Proof of Theorem 1
In this section, we provide a proof of Theorem 1, noting that there are three forms of determinant 7: Q_1 : 2x^2 + 2xy + 2xz + 2y^2 + 2yz + 3z^2, Q_2 : x^2 + y^2 + 7z^2, and Q_3 : x^2 + 2y^2 + 4z^2 + 2yz.

Lemma 1. For any m ≡ 1 (mod 8), Q_1 does not represent m.
Proof. This is a simple exercise left to the reader.
Lemma 2. If m is odd, 4m is represented by Q 1 if and only if m is.
Proof. One direction is trivial. So suppose 4m is represented, where m is odd. Note that then 4m ≡ 4 (mod 8). By a computer search, one can determine that this forces all of x, y, z to be even. Substituting
x = 2X, y = 2Y, z = 2Z we have 4m = 4(2X 2 ) + 4(2XY) + 4(2XZ) + 4(2Y 2 ) + 4(2YZ) + 4(3Z 2 )
and dividing through, we see m is represented by Q 1 .
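The computer search invoked in the proof of Lemma 2 amounts to a finite check modulo 8; a short Python sketch (not from the paper) confirms it:

```python
# If Q1(x, y, z) ≡ 4 (mod 8), then x, y, z must all be even: it suffices to
# check all residue triples modulo 8.
Q1 = lambda x, y, z: 2*x*x + 2*x*y + 2*x*z + 2*y*y + 2*y*z + 3*z*z

for x in range(8):
    for y in range(8):
        for z in range(8):
            if Q1(x, y, z) % 8 == 4:
                assert x % 2 == 0 and y % 2 == 0 and z % 2 == 0
```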
Lemma 3. If m = 4^k(8ℓ + 1) for integers k, ℓ, then m is not represented by Q_1.
Proof. This follows immediately from the previous two lemmas.
Lemma 4. If m = 7n where n ≡ 3, 5, 6 (mod 7), then m is not represented by Q 2 .
Proof. Suppose that Q 2 represents 7n for some n ∈ Z. Then necessarily x ≡ y ≡ 0 (mod 7), and substituting x = 7X and y = 7Y we see
49X^2 + 49Y^2 + 7z^2 = 7n, and hence 7X^2 + 7Y^2 + z^2 = n.
This implies that n is a quadratic residue modulo 7.
Lemma 5. If m = 7n where n ≡ 3, 5, 6 (mod 7) then m is not represented by Q 3 .
Proof. A computer search shows that for Q 3 to represent any number divisible by 7, x ≡ 0 (mod 7). Moreover, modulo 7, (y, z) ∈ {±(1, 5), ±(2, 3), ±(3, 1)}. Consider the first case, writing x = 7X, y = 7Y + 1 and z = 7Z + 5:
49X^2 + 2(7Y + 1)^2 + 4(7Z + 5)^2 + 2(7Y + 1)(7Z + 5) = 7n, and hence 7X^2 + 14Y^2 + 14Y + 14YZ + 28Z^2 + 42Z + 16 = n.
This implies that n ≡ 2 (mod 7). Similarly, substituting y = 7Y + 2, z = 7Z + 3 we have
49X^2 + 2(7Y + 2)^2 + 4(7Z + 3)^2 + 2(7Y + 2)(7Z + 3) = 7n, and hence 7X^2 + 14Y^2 + 14Y + 14YZ + 28Z^2 + 28Z + 8 = n,
which means n ≡ 1 (mod 7). Last, with y = 7Y + 3, z = 7Z + 1 we have
49X^2 + 2(7Y + 3)^2 + 4(7Z + 1)^2 + 2(7Y + 3)(7Z + 1) = 7n, and hence 7X^2 + 14Y^2 + 14Y + 14YZ + 28Z^2 + 14Z + 4 = n,
and n ≡ 4 (mod 7).

Now we suppose m ≢ 1 (mod 8) is squarefree. We will show m is represented by Q_1. We proceed by cases.
In the interest of space, and so as not to belabor the reader with repetition, we make note of when cases become identical to those completed in more detail.
(Case 1) Suppose m ≡ 3 (mod 4). We choose a prime a with a ≡ 1 (mod 4) and a ≡ 3 (mod 49), such that (−a/p) = 1 for all primes p|m. This guarantees
(−7m/a) = (−1/a)(7/a)(m/a) = 1 = (−a/m).
Considering the equation ab − h^2 = 7m, we see that modulo 7, (b, h) ∈ {(0, 0), (3, ±3), (5, ±1), (6, ±2)}.
Switching h with −h as necessary, we can safely assume modulo 7, (b, h) ∈ {(0, 0), (3,4), (5, 1), (6, 2)}. This automatically guarantees there is a solution to
(A + B)^2 + a + b + 2h ≡ 0 (mod 7),
which means that Q will represent multiples of seven. Moreover, for all but the case where (b, h) = (3, 4) there are solutions to each of
(A + B)^2 + a + b + 2h ≡ 0, 7, 14, 21, 28, 35, 42 (mod 49).
This will guarantee that when x ≡ y ≡ 1 (mod 49) and z = 0, Q will represent 7n where n is a quadratic nonresidue modulo 7 (setting the equation to 21, 35, 42 (mod 49) when m is a quadratic residue (mod 7) and to 7, 14, 28 when m is a quadratic nonresidue (mod 7)). In the case where (b, h) = (3, 4) there is a solution to
4A^2 + 4AB + B^2 + 4a + b + 4h ≡ 2^2(A^2 + a) + 2(2AB + 2h) + B^2 + b ≡ 0,
which means when x ≡ 2 (mod 49), y ≡ 1 (mod 49) and z = 0 then Q will represent 7n where n is a quadratic nonresidue modulo 7 (with similar restrictions based on m being a quadratic residue (mod 7) or not). Regardless, in each case A and B are predetermined (mod m), such that
A^2 + a ≡ B^2 + b ≡ 2(AB + h) ≡ 0 (mod m).

(Case 2) Suppose m ≡ 6 (mod 8). Then m = 2m′ where m′ ≡ 3 (mod 4). Choose a ≡ 1 (mod 8) and a ≡ 3 (mod 49) to be prime, where additionally (−a/p) = 1 for all primes p|m′. This guarantees
(−7m/a) = (−1/a)(7/a)(2/a)(m′/a) = 1 = (−a/m).
The rest of this case is identical to (Case 1).
(Case 3) Suppose m ≡ 5 (mod 8). Let a = 2a′ where a′ is a prime satisfying a′ ≡ 26 (mod 49), a′ ≡ 1 (mod 4) and where for all p|m, (−2a′/p) = 1. This guarantees
(−7m/a′) = (−1/a′)(7/a′)(m/a′) = 1 = (−a/m).
Moreover, if a′ ≡ 26 (mod 49), then 2a′ ≡ 3 (mod 49), which means the rest of this case reduces to (Case 1).
Proof of Theorem 2
There are three forms of determinant 11: Q_1 : x^2 + 2y^2 + 2yz + 6z^2, Q_2 : x^2 + y^2 + 11z^2, and Q_3 : x^2 + 3y^2 + 2yz + 4z^2.
Lemma 6. If m ≡ 5 (mod 8), then Q 1 does not represent m.
Proof. This is a simple proof by exhaustion and is left to the reader.

Proof (of Lemma 7). One direction is trivial. Suppose 4m is represented by Q_1. Looking (mod 2) this implies x is even.
Looking then (mod 4), we see that y and z cannot both be odd; however, any one of them even forces the other to be even. Writing x = 2X, y = 2Y, z = 2Z and dividing through by 4 gives the result.
Lemma 8. If m = 4^k(8ℓ + 5) then m is not represented by Q_1.
Proof. This follows immediately from the previous lemmas.
Lemma 9. If m = 11n where n is a quadratic nonresidue modulo 11, then m is not represented by Q 2 .
Proof. Without loss of generality, suppose m is squarefree. Suppose Q_2 represents m = 11n. This immediately forces x ≡ y ≡ 0 (mod 11). Substituting x = 11X and y = 11Y this means 121X^2 + 121Y^2 + 11z^2 = 11n, and hence 11X^2 + 11Y^2 + z^2 = n, which means that n is a quadratic residue modulo 11.
Lemma 10. If m = 11n where n is a quadratic nonresidue modulo 11, then m is not represented by Q 3 .
Proof. For Q_3 to represent any multiple of 11, we must have x ≡ 0 (mod 11). Assuming that m is squarefree, this moreover means, modulo 11, (y, z) ∈ {±(1, 8), ±(2, 5), ±(3, 2), ±(4, 10), ±(5, 7)}. Again, we proceed by cases. Writing first y = 11Y + 1 and z = 11Z + 8 gives
121X^2 + 3(11Y + 1)^2 + 4(11Z + 8)^2 + 2(11Y + 1)(11Z + 8) = 11n, i.e., 11X^2 + 33Y^2 + 44Z^2 + 22Y + 66Z + 22YZ + 25 = n,
which means that n ≡ 3 (mod 11). Next if y = 11Y + 2, z = 11Z + 5 we have
121X^2 + 3(11Y + 2)^2 + 4(11Z + 5)^2 + 2(11Y + 2)(11Z + 5) = 11n, i.e., 11X^2 + 33Y^2 + 44Z^2 + 22Y + 44Z + 22YZ + 12 = n,
which means n ≡ 1 (mod 11). Supposing next y = 11Y + 3 and z = 11Z + 2 we see
121X^2 + 3(11Y + 3)^2 + 4(11Z + 2)^2 + 2(11Y + 3)(11Z + 2) = 11n, i.e., 11X^2 + 33Y^2 + 44Z^2 + 22Y + 22Z + 22YZ + 5 = n,
and again n is a quadratic residue modulo 11. When y = 11Y + 4 and z = 11Z + 10 we have
121X^2 + 3(11Y + 4)^2 + 4(11Z + 10)^2 + 2(11Y + 4)(11Z + 10) = 11n, i.e., 11X^2 + 33Y^2 + 44Z^2 + 44Y + 88Z + 22YZ + 48 = n,
which makes n ≡ 4 (mod 11). Last, if y = 11Y + 5 and z = 11Z + 7:
121X^2 + 3(11Y + 5)^2 + 4(11Z + 7)^2 + 2(11Y + 5)(11Z + 7) = 11n, i.e., 11X^2 + 33Y^2 + 44Z^2 + 44Y + 66Z + 22YZ + 31 = n,
and n ≡ 9 (mod 11).
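The case analysis above is a finite computation; a Python sketch (not from the paper) regenerates it:

```python
# With x ≡ 0 (mod 11), solutions of Q3(0, y, z) ≡ 0 (mod 11) lie on the line
# y ≡ 7z (mod 11): the binary part 3y^2 + 2yz + 4z^2 ≡ 3(y + 4z)^2 (mod 11)
# is degenerate.  Each class yields a quadratic-residue quotient n.
Q3 = lambda x, y, z: x*x + 3*y*y + 2*y*z + 4*z*z

squares_mod_11 = {pow(r, 2, 11) for r in range(1, 11)}  # {1, 3, 4, 5, 9}

for z in range(1, 11):
    for y in range(11):
        if Q3(0, y, z) % 11 == 0:
            assert y == (7 * z) % 11
            n = Q3(0, y, z) // 11
            assert n % 11 in squares_mod_11
```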
Next, suppose that m ≢ 5 (mod 8) is squarefree. This gives the following cases:
Proof of Theorem 3
There are three forms of determinant 14: Q 1 : x 2 + 3y 2 + 2yz + 5z 2 , Q 2 : x 2 + y 2 + 14z 2 , Q 3 : x 2 + 2y 2 + 7z 2 .
Lemma 11. If m ≡ 2 (mod 16) then m is not represented by Q 1 .
Proof. We leave the proof to the reader.
Lemma 12. Let m be even. Then m is represented by Q 1 if and only if 4m is represented by Q 1 .
Proof. For the nontrivial direction, if m is even, then 4m ≡ 0 (mod 8), which forces x, y, z all even.
Lemma 13. If m = 4^k(16ℓ + 2), then m is not represented by Q_1.
Proof. This follows from the previous lemmas.
Lemma 14. If m = 7n where n is a quadratic nonresidue modulo 7, then m is not represented by Q 2 .
Proof. Suppose m = 7n is squarefree and is represented by Q 2 . This forces x ≡ y ≡ 0 (mod 7). Substituting x = 7X and y = 7Y and simplifying gives 7X 2 + 7Y 2 + 2z 2 = n which means that n is a quadratic residue modulo 7.
Lemma 15. If m = 7n where n is a quadratic nonresidue modulo 7, then m is not represented by Q 3 .
Proof. Suppose m = 7n is squarefree and is represented by Q_3. This forces x ≡ y ≡ 0 (mod 7). Substituting x = 7X and y = 7Y and simplifying gives 7X^2 + 14Y^2 + z^2 = n, which means that n is a quadratic residue modulo 7.
We now proceed to show that if m ≢ 2 (mod 16) is squarefree, then m is represented by Q_1. Again, we proceed by cases. The proof follows (Case 1).

(Case 4) Let m ≡ 10 (mod 16). Then m = 2m′ where m′ ≡ 5, 13 (mod 16). We note that the total number of primes p|m which are congruent to either 5 or 7 (mod 8) is odd. With that, we take a = 2a′ where a′ is prime, satisfying a′ ≡ 1 (mod 8), a′ ≡ 26 (mod 49), and (−2a′/p) = 1 for all p|m′. This gives
(−14m/a′) = (−1/a′)(2/a′)^2(7/a′)(m′/a′) = 1.
Moreover, we note that 2a ′ ≡ 3 (mod 49), which means next considering 2a ′ b − h 2 = 14m we are reduced to earlier cases.
Proof of Theorem 4
We begin by noting there are five forms of determinant 23: Q 1 : 2x 2 +2xy+3y 2 +2yz+5z 2 , Q 2 : x 2 +y 2 +23z 2 , Q 3 : x 2 + 2y 2 + 2yz + 12z 2 , Q 4 : x 2 + 3y 2 + 2yz + 8z 2 , and Q 5 :
x^2 + 4y^2 + 2yz + 6z^2.
Lemma 16. If m ≡ 1 (mod 8) then Q 1 does not represent m.
Proof. Left to reader.
Lemma 17. Let m ∈ N be odd. Then Q 1 represents m if and only if Q 1 represents 4m.
Proof. One direction is trivial, so suppose Q_1 represents 4m where m is odd. Then 4m ≡ 4 (mod 8) and a computer search will verify that in this case each of x, y, z must be even in order for Q_1(x, y, z) ≡ 4 (mod 8) to hold. Substituting x = 2X, y = 2Y, z = 2Z and dividing through by 4 shows m is represented by Q_1.
Lemma 18. If m = 23n where n is a quadratic nonresidue modulo 23, then m is not represented by Q 2 .
Proof. Considering x^2 + y^2 + 23z^2 ≡ 0 (mod 23) immediately yields x ≡ y ≡ 0 (mod 23). Substituting x = 23X, y = 23Y gives (23X)^2 + (23Y)^2 + 23z^2 = 23n, and hence 23X^2 + 23Y^2 + z^2 = n, which means n is a quadratic residue (mod 23).
Lemma 19. If m = 23n where n is a quadratic nonresidue modulo 23, then m is not represented by Q 3 .
Proof. Setting Q 3 (x, y, z) ≡ 0 (mod 23) immediately gives x ≡ 0 (mod 23). There are additional constraints on y and z modulo 23. These cases behave similarly to those in previous sections, and so in the interest of space, we simply provide a summary of the data.
(y (mod 23), z (mod 23)) | n (mod 23)
(±1, ±21) | 2
(±2, ±19) | 8
(±3, ±17) | 18
(±4, ±15) | 9
(±5, ±13) | 4
(±6, ±11) | 3
(±7, ±9) | 6
(±8, ±7) | 13
(±9, ±5) | 1
(±10, ±3) | 16
(±11, ±1) | 12
In each case, n is a quadratic residue (mod 23), which completes the proof.
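The table can be regenerated mechanically; a Python sketch (not part of the paper) checks that every solution class yields a quadratic residue:

```python
# With x ≡ 0 (mod 23), the binary part 2y^2 + 2yz + 12z^2 ≡ 2(y + 12z)^2
# (mod 23) is degenerate, so solutions lie on the line y ≡ 11z (mod 23);
# each class gives a quotient n that is a QR mod 23, as in the table.
Q3 = lambda x, y, z: x*x + 2*y*y + 2*y*z + 12*z*z

squares_mod_23 = {pow(r, 2, 23) for r in range(1, 23)}

for z in range(1, 23):
    for y in range(23):
        if Q3(0, y, z) % 23 == 0:
            assert y == (11 * z) % 23
            n = Q3(0, y, z) // 23
            assert n % 23 in squares_mod_23
```

The analogous loops for Q_4 and Q_5 reproduce the two tables below.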
Lemma 20. If m = 23n where n is a quadratic nonresidue modulo 23, then m is not represented by Q 4 .
Proof. Setting Q 4 (x, y, z) ≡ 0 (mod 23) immediately gives x ≡ 0 (mod 23). There are additional constraints on y and z modulo 23. These cases behave similarly to those in previous sections, and so in the interest of space, we simply provide a summary of the data.
(y (mod 23), z (mod 23)) | n (mod 23)
(±1, ±20) | 3
(±2, ±17) | 12
(±3, ±14) | 4
(±4, ±11) | 2
(±5, ±8) | 6
(±6, ±5) | 16
(±7, ±2) | 9
(±8, ±22) | 8
(±9, ±19) | 13
(±10, ±16) | 1
(±11, ±13) | 18
In each case, n is a quadratic residue (mod 23), which completes the proof.
Lemma 21. If m = 23n where n is a quadratic nonresidue modulo 23, then m is not represented by Q 5 .
Proof. Setting Q 5 (x, y, z) ≡ 0 (mod 23) immediately gives x ≡ 0 (mod 23). There are additional constraints on y and z modulo 23. These cases behave similarly to those in previous sections, and so in the interest of space, we simply provide a summary of the data.
(y (mod 23), z (mod 23)) | n (mod 23)
(±1, ±19) | 4
(±2, ±15) | 16
(±3, ±11) | 13
(±4, ±7) | 18
(±5, ±3) | 8
(±6, ±22) | 6
(±7, ±18) | 12
(±8, ±14) | 3
(±9, ±10) | 2
(±10, ±6) | 9
(±11, ±2) | 1
In each case, n is a quadratic residue (mod 23), which completes the proof.

Now suppose m ≢ 1 (mod 8) is squarefree. We will show that m is represented by Q_1 with the following cases. Considering the equation ab − h^2 = 23m and replacing h with −h as necessary we see that modulo 23, (b, h) ∈ {(0, 0), (5, 5), (7, 9), (10, 2), (11, 3), (14, 1), (15, 11), (17, 4), (19, 7), (20, 10), (21, 6), (22, 8)}. This automatically guarantees there is a solution to
(A + B)^2 + a + b + 2h ≡ 0 (mod 23),
which means that Q will represent multiples of 23. Moreover, for each case there are solutions to each of (A + B) 2 + a + b + 2h ≡ 23k for k = 0, 1, 2, ..., 22. This will guarantee that when x ≡ y ≡ 1 (mod 529) and z = 0 that Q will represent 23n where n is a quadratic nonresidue modulo 23 (with different congruence conditions necessary when m is or is not a quadratic residue (mod 23)). But also, noting that a ≡ 2 · 267 ≡ 5 (mod 529) we are able to mimic previous cases at this point. And as the conditions (mod 23) on a are the same as in (Case 1), the rest of the proof follows similarly.
Examples
We end this paper with concrete examples, hopefully helpful if not entertaining to the reader, of choices of A, B, a, b, h as outlined in the proofs of the theorems.
Example. To show that m = 51 = 3 · 17 is represented by 2x 2 + 2xy + 2xz + 2y 2 + 2yz + 3z 2 , we consider (Case 1) of the proof of Theorem 1.
We choose a prime a ≡ 1 (mod 4) and a ≡ 3 (mod 49), and (without loss of generality) a ≡ 2 (mod 3) and a ≡ 1 (mod 17). The smallest prime satisfying all of these conditions is a = 4217. Then solving 4217b − h^2 = 7 · 51 for b and h, we see we can take b = 1613 and h = 2608. Note this is not the "smallest" solution with respect to b > 0; however, here b ≡ 3 (mod 7) and h ≡ 4 (mod 7). With A = 1568 and B = 1048 (see the computation of A and B below), this means Q(x, y, z) = 48291x^2 + 64544xy + 3136xz + 21567y^2 + 2096yz + 51z^2.
Last, we note that Q(2, 1, 0) = 7 · 49117, and since 49117 ≡ 5 (mod 7), Q represents 7n where n is a quadratic nonresidue modulo 7. We conclude that Q is equivalent to 2x^2 + 2xy + 2xz + 2y^2 + 2yz + 3z^2.
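This example can be checked numerically; the following Python sketch (not part of the paper) verifies the congruence conditions and the value Q(2, 1, 0), using the values of A and B computed in the text:

```python
# m = 51 example: m*Q(x, y, z) = (Ax + By + mz)^2 + a*x^2 + 2h*xy + b*y^2.
m, a, b, h, A, B = 51, 4217, 1613, 2608, 1568, 1048

assert a * b - h * h == 7 * m                 # ab - h^2 = 7m
assert (A * A + a) % m == 0                   # A^2 + a ≡ 0 (mod m)
assert (B * B + b) % m == 0                   # B^2 + b ≡ 0 (mod m)
assert (2 * A * B + 2 * h) % m == 0           # 2AB + 2h ≡ 0 (mod m)

def Q(x, y, z):
    # The numerator is divisible by m for all integer x, y, z,
    # by the congruence conditions above.
    return ((A * x + B * y + m * z) ** 2 + a * x * x + 2 * h * x * y + b * y * y) // m

assert Q(2, 1, 0) == 7 * 49117
assert 49117 % 7 == 5                         # 5 is a nonresidue mod 7
```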
Example. To show that m = 67 is represented by x 2 + 2y 2 + 2yz + 6z 2 we refer to (Case 3) of the proof of Theorem 2.
We choose a prime a ≡ 1 (mod 4) and a ≡ 2 (mod 121) and noting ( −2 67 ) = 1, we also take a ≡ 2 (mod 67). The smallest prime satisfying all of these conditions is a = 170249. Then solving 170249b − h 2 = 11 · 67 for b and h, we see we can take b = 4413 and h = −27410. The requirements of A 2 + a ≡ B 2 + b ≡ 2AB + 2h ≡ 0 (mod 67), give A ≡ 20 (mod 67) and B ≡ 64 (mod 67), or A ≡ 47 (mod 67) and B ≡ 3 (mod 67). We choose the former. Solving (A + B) 2 + a + b + 2h ≡ 22 (mod 121) gives, among many options, A ≡ 60 (mod 121) and B ≡ 0 (mod 121). We then take A = 4174 and B = 3146. This then means Q(x, y, z) = 262575x 2 + 391164xy + 8348xz + 147787y 2 + 6292yz + 67z 2 .
We note that Q(1, 1, 0) = 801526 = 2 · 11 · 36433, and (2 · 36433/11) = −1.

Example. To show that m = 26 = 2 · 13 is represented by x^2 + 3y^2 + 2yz + 5z^2, we consider (Case 4) of Theorem 3.
We choose a prime a′ ≡ 1 (mod 8) and a′ ≡ 26 (mod 49) and a′ ≡ 2 (mod 13). The smallest such prime is a′ = 27809. Set a = 2a′. Considering next the equation ab − h^2 = 14 · 26, we get b = 8440 and h = 21666 as a possible solution. Solving A^2 + a ≡ 0 (mod 26) and B^2 + b ≡ 0 (mod 26) gives A ∈ {10, 16} and B ∈ {6, 20}. Taking into account we must have 2AB + 2h ≡ 0 (mod 26) we see the pairs (A, B) are (10, 20) and (16, 6). Because 26 is a quadratic nonresidue mod 7, we next solve for A, B (mod 49) such that (A + B)^2 + a + b + 2h ≡ 7 (mod 49). This gives 98 pairs. One such pair is A ≡ 1 (mod 49) and B ≡ 43 (mod 49). Using the Chinese Remainder Theorem, we take A = 1128 and B = −6. This yields Q(x, y, z) = 51077x^2 + 1146xy + 2256xz + 326y^2 − 12yz + 26z^2.
Last, we note that when x = y = 1 and z = 0, Q(x, y, z) = 7 · 7507, and 7507 ≡ 3 (mod 7).
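The remaining two worked examples can be verified the same way; a Python sketch (not part of the paper), using the construction m·Q(x, y, z) = (Ax + By + mz)^2 + ax^2 + 2hxy + by^2:

```python
# Numerical check of the m = 67 and m = 26 examples.

def build_Q(m, a, b, h, A, B):
    # Verify the congruence conditions, then return the form Q.
    assert (A*A + a) % m == 0 and (B*B + b) % m == 0 and (A*B + h) % m == 0
    return lambda x, y, z: ((A*x + B*y + m*z)**2 + a*x*x + 2*h*x*y + b*y*y) // m

# m = 67 example (Theorem 2): a = 170249, b = 4413, h = -27410.
assert 170249 * 4413 - 27410**2 == 11 * 67
Q67 = build_Q(67, 170249, 4413, -27410, 4174, 3146)
assert Q67(1, 1, 0) == 801526 == 2 * 11 * 36433

# m = 26 example (Theorem 3): a = 2 * 27809, b = 8440, h = 21666.
assert 2 * 27809 * 8440 - 21666**2 == 14 * 26
Q26 = build_Q(26, 2 * 27809, 8440, 21666, 1128, -6)
assert Q26(1, 1, 0) == 7 * 7507 and 7507 % 7 == 3
```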
Lemma 7. A natural number m ∈ N is represented by Q_1 if and only if 4m is.
(Case 1) m ≡ 1 (mod 8). We set a = 2a_1, where a_1 is a prime satisfying a_1 ≡ 1 (mod 4) and a_1 ≡ 1 (mod 121) and (−a/p) = 1 for all primes p|m. Switching h with −h as necessary, we can assume the pairs (b, h) (mod 11) are such that there is a solution to
(A + B)^2 + a + b + 2h ≡ 0 (mod 11),
which means that Q will represent multiples of 11. Moreover, for each pair (b, h) there are solutions to
(A + B)^2 + a + b + 2h ≡ 11k (mod 121)
for k = 0, 1, 2, ..., 10. This will guarantee that when x ≡ y ≡ 1 (mod 121) and z = 0, Q will represent 11k where k is a quadratic nonresidue modulo 11 (setting the equation to 11, 33, 44, 55, 99 when m is a quadratic residue (mod 11) and to 22, 66, 77, 88, 110 when m is a quadratic nonresidue (mod 11)).

(Case 2) m ≡ 2 (mod 8). We start with writing m = 2ℓ, where ℓ ≡ 1 (mod 4). We choose a to be a prime satisfying a ≡ 5 (mod 8), a ≡ 2 (mod 121), and (−a/p) = 1 for all odd p|m. The rest of the proof then follows (Case 1).

(Case 3) m ≡ 3 (mod 4). Here we choose a prime a ≡ 1 (mod 4), a ≡ 2 (mod 121) and (−a/p) = 1 for all primes p|m. The rest of the proof mimics (Case 1).

(Case 4) m ≡ 6 (mod 8). We write m = 2ℓ, where ℓ ≡ 3 (mod 4). Here we choose a prime satisfying a ≡ 1 (mod 8), a ≡ 2 (mod 121), and (−a/p) = 1 for all primes p|m. Then (−11m/a) = (−a/m) = 1 and the rest of the proof follows like the others.
(Case 1) Suppose m ≡ 1 (mod 4). Choose a prime a such that (−a/p) = 1 for all p|m, where additionally a ≡ 5 (mod 8) and a ≡ 3 (mod 49). The equation to consider now is ab − h^2 = 14m; this now behaves identically to the proof of Theorem 1 (Case 1).

(Case 2) Suppose m ≡ 3 (mod 4). Choose a prime a such that (−a/p) = 1 for all p|m, where additionally a ≡ 1 (mod 8) and a ≡ 3 (mod 49).

(Case 3) Let m ≡ 6, 14 (mod 16). Then m = 2m′ where m′ ≡ 3, 7, 11, 15 (mod 16). Let a ≡ 1 (mod 8) and a ≡ 3 (mod 49), and (−a/p) = 1 for all p|m′. This is enough to guarantee that
(Case 1) Let m ≡ 3 (mod 4). Let a be a prime satisfying a ≡ 1 (mod 4), a ≡ 5 (mod 529), and (−a/p) = 1 for all primes p|m. This will ensure
(Case 2) Let m ≡ 6 (mod 8), so m = 2m′ where m′ ≡ 3 (mod 4). Let a be a prime satisfying a ≡ 1 (mod 8), a ≡ 5 (mod 529), and (−a/p) = 1 for all primes p|m′. As the conditions (mod 23) on a are the same as in (Case 1), the rest of the proof follows similarly.

(Case 3) Let m ≡ 5 (mod 8). We note that the total number of primes p|m which are congruent to either 5 or 7 (mod 8) is odd. With that, we write a = 2a′ where a′ is a prime satisfying a′ ≡ 1 (mod 4), a′ ≡ 267 (mod 529) and (−a/p) = 1 for all p|m.
(Case 4) Let m ≡ 2 (mod 8), so m = 2m′ where m′ ≡ 1 (mod 4). Let a be a prime satisfying a ≡ 5 (mod 8), a ≡ 5 (mod 529), and (−a/p) = 1 for all primes p|m′.
(Case 4) Suppose m ≡ 2 (mod 8). Then m = 2m′ where m′ ≡ 1, 5 (mod 8). Let a ≡ 5 (mod 8) and a ≡ 3 (mod 49) be prime with (−a/p) = 1 for all primes p|m′. This guarantees
(−7m/a) = (−1/a)(7/a)(2/a)(m′/a) = 1.
Noting now that A^2 + a ≡ 0 (mod 51) means modulo 51, A ∈ {4, 13, 38, 47}. Similarly with B^2 + b ≡ 0 (mod 51) we see B ∈ {11, 23, 28, 40}. Noting, however, that we must also have 2AB + 2h ≡ 0 (mod 51), we see the possible pairs (A, B) modulo 51 are (A, B) ∈ {(4, 11), (13, 23), (38, 28), (47, 40)}. Accounting for
4A^2 + 4AB + B^2 + 4a + b + 4h ≡ 2^2(A^2 + a) + 2(2AB + 2h) + B^2 + b ≡ 21 (mod 49)
gives 98 choices for (A, B) (mod 49). Among these choices is A ≡ 0 (mod 49) and B ≡ 19 (mod 49). Selecting from our (mod 51) conditions A ≡ 38 (mod 51) and B ≡ 28 (mod 51) and using the Chinese Remainder Theorem gives minimum positive values of A = 1568 and B = 1048. This then yields the form Q(x, y, z) displayed in the m = 51 example above.
Blackwell, S., Durham, G., Thompson, K., and Treece, T., A generalization of a method of Mordell to ternary quadratic forms, International Journal of Number Theory, Vol. 12 No. 8 (2016), pg 2081-2105.
Brandt, H., Intrau, O., Tabellen reduzierter positiver ternärer quadratischer Formen, Abh. Sächs. Akad. Wiss. Math.-Nat. Kl. 45 (1958), no. 4.
Cassels, J.W.S., Rational Quadratic Forms, Dover, 2008.
Cassels, J.W.S., An Introduction to the Geometry of Numbers, Springer-Verlag, 1959.
Cox, D., Primes of the form x^2 + ny^2, John Wiley.
Dickson, L.E., History of the Theory of Numbers, Volume III: Quadratic and Higher Forms, Dover Publications, 2012.
Dickson, L.E., Integers represented by positive ternary quadratic forms, Bull. Amer. Math. Soc. 33 (1927), 63-70.
Gauss, C.F., Besprechung des Buchs von L.A. Seeber: Untersuchungen über die Eigenschaften der positiven ternären quadratischen Formen usw., Göttingsche Gelehrte Anzeigen, 1831, Juli 9. Reprinted in Werke (1876), Vol. II, 188-196.
Gauss, C.F., Disquisitiones Arithmeticae, trans. A.A. Clarke, Springer New York, 1986.
Jones, B., The regularity of a genus of positive ternary quadratic forms, Trans. Amer. Math. Soc. 33 (1931), 111-124.
Kaplansky, I., The first nontrivial genus of positive definite ternary forms, Mathematics of Computation, Volume 64, Number 209, January 1995, pgs. 341-345.
Kelley, J., Kaplansky's ternary quadratic form, International Journal of Mathematics and Mathematical Sciences, Volume 25, Issue 5 (2001), pg. 289-292.
Lam, T.Y., Introduction to Quadratic Forms over Fields, AMS, 2005.
Legendre, A.-M., Essai sur la théorie des nombres, Paris, An VI (1797-1798).
Mordell, L.J., On the representation of a number as a sum of three squares, Rev. Math. Pures Appl. 3 (1958), 25-27.
Ramanujan, S., On the expression of a number in the form ax^2 + by^2 + cz^2 + du^2, Proc. Camb. Phil. Soc. 19 (1916), 11-21.
Rouse, J., Quadratic Forms Representing All Odd Positive Integers, American Journal of Mathematics, Vol. 136 (2011).
SageMath, the Sage Mathematics Software System (Version 9.3), The Sage Developers, 2015, http://www.sagemath.org

B. Rainear, Department of Mathematics, United States Naval Academy, Annapolis, MD, 21402
E-mail address: [email protected]

K. Thompson, Department of Mathematics, United States Naval Academy, Annapolis, MD, 21402
E-mail address: [email protected]
| zyda_arxiv-0706000 |
Bose-Einstein condensation on hyperbolic spaces
14 Feb 2022 February 12, 2022
Marius Lemm
Department of Mathematics
University of Tübingen
Auf der Morgenstelle 1072076TübingenGermany
Oliver Siebert
Department of Mathematics
University of Tübingen
Auf der Morgenstelle 1072076TübingenGermany
A well-known conjecture in mathematical physics asserts that the interacting Bose gas exhibits Bose-Einstein condensation (BEC) in the thermodynamic limit. We consider the Bose gas on certain hyperbolic spaces. In this setting, one obtains a short proof of BEC in the infinite-volume limit from the existence of a volume-independent spectral gap of the Laplacian.

Main result: BEC in the infinite-volume limit on hyperbolic space

In this paper, we study the Bose gas in a hyperbolic geometry. The intriguing features of hyperbolic geometry have long inspired mathematicians and physicists alike. The first uses of hyperbolic geometry in condensed matter theory were, to our knowledge, based on the AdS-CFT correspondence principle [Wit98; Mal99], which has deepened our understanding of quantum entanglement and may hold the key to quantum error correction. More recently, experimentalists have been able to physically construct hyperbolic structures in the laboratory by confining particles to discrete hyperbolic lattices in circuit QED [Hu+19; KFH19; Kol+20; Alt+21; SMR21] and by using topoelectric circuits [Len+21]. These setups can serve, among other things, as tabletop simulators of quantum gravity. These recent experimental advances spawned a new interdisciplinary subfield of theoretical physics that blends condensed-matter physics with quantum information science and general relativity.
Introduction
The Bose gas plays a central role in quantum many-body physics. The key effect it displays is Bose-Einstein condensation (BEC), the macroscopic occupation of a single-particle quantum state. BEC is immensely useful for technological applications because it amplifies microscopic quantum effects to macroscopic scales. The original theoretical prediction of BEC was made for an ideal (i.e., non-interacting) Bose gas by Bose and Einstein [Bos24; Ein24]. A more realistic description of the Bose gas involves interactions between particles, resulting in a full quantum many-body problem.
To describe N bosons confined to a Euclidean torus $T_L = (-L/2, L/2)^3$ and interacting via a repulsive two-particle interaction $V \ge 0$, one uses the Hamiltonian
$$H_N = \sum_{i=1}^{N} (-\Delta_{x_i}) + \sum_{1 \le i < j \le N} V(x_i - x_j). \tag{1.1}$$
(1.1) A famous conjecture, mathematically formulated by Lieb in 1998 [Lie98] but probably several decades older, states that the Hamiltonian H N exhibits Bose-Einstein condensation in its ground state in the thermodynamic limit. More precisely, Lieb's conjecture asserts that there exists a constant c > 0 such that
$$\int_{T_L} \int_{T_L} \gamma(x, y)\,dx\,dy \overset{?}{\ge} c L^3 \tag{1.2}$$
where $\gamma(x, y) = \int_{T_L^{\times(N-1)}} \Psi_0(x, \mathbf{x})\,\Psi_0(y, \mathbf{x})\,d\mathbf{x}$ denotes the 1-particle correlation function of the unique ground state $\Psi_0 \ge 0$ of $H_N$. Indeed, (1.2) captures the macroscopic occupation of a single one-particle state. Note for example that (1.2) is satisfied for the non-interacting Bose gas with $V \equiv 0$.
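The notion of condensation in (1.2) can be illustrated with a minimal toy computation (an illustration of ours, not from the paper): two bosons on a discrete ring, with the lattice Laplacian standing in for $-\Delta$ and an on-site coupling U standing in for V. For U = 0 the constant state is fully occupied, mirroring the remark above; a small repulsive U only depletes it slightly.

```python
import numpy as np

N = 5  # sites of a discrete ring, a toy stand-in for the torus T_L
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0
Lap = 2.0 * np.eye(N) - A  # lattice Laplacian, spectrum >= 0

def condensate_fraction(U):
    """Ground-state occupation <psi0, gamma psi0> for two bosons with on-site coupling U."""
    I = np.eye(N)
    H = np.kron(Lap, I) + np.kron(I, Lap)      # kinetic energy of both particles
    for i in range(N):                         # interaction U * delta_{x1, x2}
        H[i * N + i, i * N + i] += U
    _, v = np.linalg.eigh(H)
    psi = v[:, 0].reshape(N, N)                # ground state psi(x1, x2)
    gamma = psi @ psi.T                        # one-particle density matrix, trace 1
    psi0 = np.ones(N) / np.sqrt(N)             # normalized constant state
    return psi0 @ gamma @ psi0

assert abs(condensate_fraction(0.0) - 1.0) < 1e-9  # complete condensation for V = 0
assert 0.5 < condensate_fraction(1.0) < 1.0        # small depletion for U > 0
```

The exact diagonalization here is of course only feasible for tiny systems; the conjecture concerns the limit of infinitely many particles.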
Despite the central role that the Bose gas plays in mathematical physics, this conjecture remains open. Existing proofs of BEC in the Euclidean thermodynamic limit require reflection positivity [Aiz+04] or other special positivity properties [Kom21]. Nonetheless, spectacular progress has been achieved on the mathematics of the Bose gas in the past twenty years. For instance, the famous Lee-Huang-Yang formula for the subleading energy correction in the dilute limit ρa³ → 0 has been rigorously derived [YY09; FS20; FS21], which resolved a longstanding open problem. Many works have contributed to a precise understanding of BEC and the ground state energy in the Gross-Pitaevskii scaling regime; see e.g. [Dys57; LY98; LSY01a; LS02; LS06; Lie+09; NRS16; Boc+18; Boc+19; Boc+20; DS20; Nam+21; Hai21]. For further background and references, see [Lie+09] and the recent survey [Rou21]. The interdisciplinary developments mentioned above also touch on general relativity [Boe+20; MR21; IAM21; Ste+21; Zha+21]. In particular, it was shown that continuum limits of some of these hyperbolic quantum lattice gases produce suitable hyperbolic continuum models [Boe+20]. Hyperbolic Bose gases were studied, e.g., in [CV93; Kir15; Zhu+21].
To summarize, it can be said that hyperbolic geometry has emerged as a viable, if still exotic, theater of condensed-matter physics in general and Bose gases in particular. Nonetheless, quantum many-body physics in hyperbolic space is mathematically considerably less explored than its Euclidean counterpart.
The main result of this paper can be summarized as follows.
Main result: Interacting Bose gases on two-and three-dimensional hyperbolic manifolds rigorously display Bose-Einstein condensation in the infinite-volume limit.
One can thus say that the hyperbolic analog of the conjecture formulated by Lieb holds true. The change of the geometry from Euclidean to hyperbolic is instrumental to proving our results. The key spectral feature in our appropriately chosen hyperbolic setting is the existence of a volume-independent spectral gap for the Laplacian. It will not come as a surprise to experts that the large spectral gap removes many of the central difficulties present in the Euclidean case and leads to a rather short proof of BEC. Of course, one still needs to account for the change to hyperbolic geometry in the analytical arguments, in particular in the relevant two-particle scattering problem, but altogether this can be handled rather straightforwardly. Apart from the size of the spectral gap, the change in geometry then mostly surfaces in a different notion of scattering length a that is described in the appendix.
We now describe the models of hyperbolic manifolds that are used in this paper and summarize the result on BEC for these models. The precise setup is discussed in more detail in Section 2.
Model 1: Quotients by congruence subgroups. We consider hyperbolic manifolds of the form
$$X_L = \mathbb{H}^d/\Gamma(L), \qquad L \ge 2,$$
where $d \in \{2, 3\}$ and $\Gamma(L)$ is a group of isometries (a congruence subgroup). The $\{X_L\}_{L \ge 2}$ form a family of non-compact hyperbolic manifolds with finite volume increasing to infinity, $\mathrm{vol}(X_L) \to \infty$ as $L \to \infty$.
Since they arise by quotienting the whole space by isometries, the $X_L$ can be regarded as natural hyperbolic analogs of Euclidean tori. For d = 2, the $X_L$ are known as modular surfaces.
Let us now formulate the result on BEC. Let $\rho = \frac{N}{\mathrm{vol}(X_L)}$ be the particle density. We fix a potential $V \ge 0$ with $\mathrm{supp}\, V \subseteq B_{R_0}(0)$ for some finite range $R_0 > 0$. We write a for a hyperbolic analog of the scattering length; see Appendix A for its precise definition. Then we introduce the auxiliary parameter
$$Y = \begin{cases} \rho \ln\dfrac{1}{\tanh(a/2)}, & d = 2,\\[1ex] \rho \tanh a, & d = 3. \end{cases}$$
We prove that
$$\lim_{Y \to 0}\ \lim_{\substack{N, L \to \infty\\ \rho = N/\mathrm{vol}(X_L)}} \langle \psi_0, \gamma\,\psi_0 \rangle = 1, \qquad \text{where } \psi_0 = \frac{\mathbb{1}_{X_L}}{\sqrt{\mathrm{vol}(X_L)}}, \tag{1.3}$$
where the inner product is taken with respect to $L^2(X_L)$.
Comparing with (1.2), we see that (1.3) indeed proves BEC in the infinite-volume limit for any sufficiently small Y . Crucially, "sufficiently small" does not depend on the system size L since the Y -limit is taken after the infinite-volume limit in (1.3). In fact, (1.3) proves that the occupation of the ψ 0 -state converges to 1 as Y → 0. If all the particles belong to a single state up to subleading errors, one speaks of complete condensation.
The precise statements for d = 2 and d = 3 are given in Corollaries 2.7 and 2.11 below. They show that the requirement that Y is "sufficiently small" can be made explicit in terms of the other system parameters. Moreover, the same condition on Y being sufficiently small also applies to finite systems of the same density. The infinite-volume limit in (1.3) is added only for emphasis and the actual result is more general.
Model 2: Random compact hyperbolic surfaces. There is a natural probability measure on compact hyperbolic manifolds of fixed volume called the Weil-Petersson measure $\mathbb{P}^{\mathrm{WP}}_g$. Let $\mathcal{M}_g$ denote the set of compact hyperbolic surfaces with genus g up to isometry. By the Gauss-Bonnet theorem, the volume of any hyperbolic surface $X \in \mathcal{M}_g$ equals $2\pi(2g-2)$. By taking $g \to \infty$, we obtain compact hyperbolic manifolds whose volumes go to infinity.
Let ε > 0. We prove that there exists a family of measurable subsets (events) $A_g \subset \mathcal{M}_g$ such that
$$\lim_{\rho \ln\frac{1}{\tanh a} \to 0}\ \lim_{\substack{N, g \to \infty\\ \rho = N/(2\pi(2g-2))}} \mathbb{P}^{\mathrm{WP}}_g(A_g) = 1 \tag{1.4}$$
and for every hyperbolic manifold X ∈ A g , we have
$$\langle \psi_0, \gamma\,\psi_0 \rangle \ge 1 - \varepsilon, \qquad \text{where } \psi_0 = \frac{\mathbb{1}_{X}}{\sqrt{\mathrm{vol}(X)}}.$$
This proves that BEC occurs for random compact hyperbolic manifolds with probability going to 1 in the infinite-volume limit. For the precise statement, see Corollary 2.15.

These results rely on two main estimates for general hyperbolic Bose gases, Proposition 2.2 and Theorem 2.3. The key common feature of the hyperbolic models described above is that the spectral gap of the corresponding Laplacian (Laplace-Beltrami operator) is bounded from below independently of the volume by deep results of Selberg [Sel65] and Mirzakhani [Mir13]. Quantitative improvements followed in [Sar83; LPS87; EGM90; BS91; Clo03; KS03], respectively [WX21; LW21].
Comparison to the Gross-Pitaevskii regime
As mentioned above, a commonly studied scaling limit in the Euclidean setting is the Gross-Pitaevskii (GP) regime, which is suitable for describing dilute Bose gases with strong short-ranged interaction. It is characterized by linking the length scale L of the torus to the particle density $\rho = N/L^3$ and the scattering length $a \in \mathbb{R}$ (which captures the essential features of the strength and range of the potential V for two-boson scattering) via
$$L = \frac{C}{\sqrt{\rho a}}. \tag{1.5}$$
Combined with the dilute limit ρa³ → 0, this defines the GP scaling limit. This is in contrast to the thermodynamic limit (which is the subject of the open conjecture in the Euclidean setting and which we consider here in the hyperbolic setting), where one can take N, L → ∞ independently for fixed values of ρ and a. The investigation of the ground state energy asymptotics and BEC in the GP regime is a success story of mathematical physics. Landmark works in this direction include Dyson's study [Dys57], the various works of Lieb, Seiringer, and Yngvason [LY98; LSY01a; LS02; LS06] (see also [Lie+09]), and more recent advances [Boc+18; Boc+19; Boc+20; Nam+21].
Recent approaches rigorously implement a heuristic description of excitations above the condensate due to Bogoliubov [Bog47] through localization in Fock space, higher-order Bogoliubov transformations, and other technical innovations. The occurrence of BEC can be pushed to length scales larger than the GP scale (1.5) by further exploiting the close link between energy estimates and BEC on different spatial scales [Lie+09; Fou21; ABS21]. For instance, it was shown in [Fou21] that BEC occurs up to length scales
$$L = C\,\frac{(\rho a^3)^{-\delta}}{\sqrt{\rho a}}, \qquad 0 < \delta < \frac{1}{4},$$
and the upper bound on δ could be further improved somewhat by using methods from [FS20] as described in [Fou21].
Despite these advances, the conjectured occurrence of BEC in the thermodynamic limit (i.e. for L arbitrarily large) has remained open. A key reason for the failure of these "energy methods" beyond certain length scales is the fact that the spectral gap of the Laplacian on the torus $T_L = (-L/2, L/2)^3$ vanishes as $L^{-2}$ for $L \to \infty$.
Our modest observation here is that energy methods are much more powerful in certain hyperbolic spaces because the change in geometry implies that the spectral gap of the Laplacian can be bounded independently of volume. (It is worth pointing out here that there are also plenty of other apparently natural hyperbolic settings where the spectral gap decreases with volume, e.g., balls with Neumann boundary conditions of increasing radii [Cha84, Theorem 5], so Models 1 and 2 considered above have to be chosen carefully.) At any rate, as a consequence of the spectral gap in Models 1 and 2, the proof of the main result is quite short and does not require recent advances on rigorous implementation of Bogoliubov's heuristic. This means that the argument provides no new insight on the Euclidean case.
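The contrast can be made concrete with a two-line numerical comparison (an illustration of ours, not from the paper): the smallest nonzero Laplacian eigenvalue on the torus is $(2\pi/L)^2$, which vanishes as $L^{-2}$, while the hyperbolic bound 975/4096 of Theorem 2.5 below is volume-independent.

```python
import numpy as np

def torus_gap(L):
    """Smallest nonzero eigenvalue of -Delta on the torus (-L/2, L/2)^3."""
    return (2.0 * np.pi / L) ** 2

hyperbolic_gap = 975.0 / 4096.0  # volume-independent bound of Theorem 2.5 (d = 2)

assert torus_gap(1000.0) < 1e-4 < hyperbolic_gap                  # Euclidean gap closes ...
assert abs(torus_gap(100.0) / torus_gap(1000.0) - 100.0) < 1e-9   # ... at rate L^{-2}
```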
The result raises some potentially interesting questions for further study in the hyperbolic setting.
(i) Our result on BEC is proved without identifying even the leading order of the energy asymptotics in the dilute limit. The spatial localization that commonly appears in energetic lower bounds is technically more challenging in the hyperbolic world because the local spectral gap of the Laplacian can be smaller than the global gap depending on the choice of boundary conditions. We leave it as an open problem to identify the leading asymptotics of the ground state energy in the hyperbolic setting.
(ii) A more precise analysis of the energy asymptotics would presumably be linked to a hyperbolic rendition of Bogoliubov theory [Bog47; Boc+18; Boc+19; Boc+20; BS20; FS20; FS21; Hai21]. This should reveal finer information about excitations above the condensate. It is conceivable that, as in the case of BEC considered here, the price for studying a slightly more complicated geometry is made up by its favorable spectral properties.
(iii) Another natural question concerns the fate of the BEC in the infinite-volume limit at positive temperature. The analogous problem has been resolved in the Euclidean setting, see e.g. the recent work [DS20] and references therein. This could be of practical relevance in case one finds a significantly larger critical temperature in the hyperbolic setting, keeping in mind that it is now possible to set up quantum gases in hyperbolic structures in the laboratory [Hu+19; KFH19; Kol+20; Alt+21; SMR21; Len+21].
(iv) Similarly to Point (iii), one could consider a magnetic field in a hyperbolic geometry analogously to [Sei02; LS06; NRS16] and others who proved BEC in the presence of magnetic fields in the Euclidean GP setting.
Structure of the paper
This paper is organized as follows.
• In Section 2, we state the main abstract results, the upper and lower bounds Theorem 2.3 and Proposition 2.2. Afterwards, we apply them to the concrete infinitevolume limits of hyperbolic manifolds described above to derive Corollaries 2.7, 2.11 and 2.15.
• In Section 3, we prove the upper bound, Theorem 2.3. This follows an argument going back to [Dys57] in the modern form of [LY01; Lie+09]. The change in geometry leads to a few changes and, similarly to the Euclidean case, we obtain an upper bound on the effective 2-particle problem by estimating integrals over the fundamental domain by integrals over the universal cover (for us, this is $\mathbb{H}^d$), cf. (3.11).
• In Section 4, we use the spectral gap of the Laplacian and the non-negativity of the potential to prove the lower bound, Proposition 2.2.
• In Appendix A we generalize the scattering length to the hyperbolic setting, defining it via the radius of a hardcore potential, cf. Theorem A.2. As in the Euclidean case, we use integration by parts and the inequality (A.3) to estimate $I(f_R)$, $J(f_R)$ and $K(f_R)$. The results are analogous to the Euclidean setting, and a key role is played by the harmonic function (A.2) of the hyperbolic setting.
Models and Main Results
General facts about hyperbolic Bose gases
For any $d \ge 2$ let $\mathbb{H}^d$ denote the d-dimensional hyperbolic space. In d = 2 we will work in the upper half-plane model
$$\mathbb{H}^2 = \{z_1 + i z_2 : z_1 \in \mathbb{R},\ z_2 > 0\} \subseteq \mathbb{C}$$
equipped with the Riemannian metric $ds^2 = \frac{1}{z_2^2}(dz_1^2 + dz_2^2)$.
For dimensions d ≥ 3 it will be more convenient to work in the hyperboloid model
$$\mathbb{H}^d := \{z \in \mathbb{R}^{d+1} : z_0 > 0 \text{ and } q_d(z) = 1\}, \qquad q_d(z) := z_0^2 - z_1^2 - \dots - z_d^2, \tag{2.1}$$
equipped with the pullback of the standard Lorentzian metric on R d+1 .
Remark 2.1. Throughout this paper we always use x for elements of the hyperbolic manifolds and z for elements of their universal cover H d .
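As a quick numerical illustration of the hyperboloid model (2.1), the following sketch uses two standard facts that are not spelled out in the text: Lorentz boosts preserve $q_d$, and the hyperbolic distance between two points of the hyperboloid is the arccosh of their Lorentzian product.

```python
import numpy as np

def lorentz(z, w):
    """Bilinear form associated with q_3: z0*w0 - z1*w1 - z2*w2 - z3*w3."""
    return z[0] * w[0] - np.dot(z[1:], w[1:])

def dist(z, w):
    """Hyperbolic distance on the hyperboloid model (standard formula)."""
    return np.arccosh(lorentz(z, w))

def boost(t):
    """Point of H^3 obtained by boosting the base point (1, 0, 0, 0) by rapidity t."""
    return np.array([np.cosh(t), np.sinh(t), 0.0, 0.0])

o, z = boost(0.0), boost(1.5)
assert abs(lorentz(z, z) - 1.0) < 1e-12  # z lies on the hyperboloid: q_3(z) = 1, z0 > 0
assert abs(dist(o, z) - 1.5) < 1e-12     # the boost moves o by hyperbolic distance t
```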
Let X be a d-dimensional hyperbolic manifold, that is, a complete Riemannian manifold of constant curvature −1. Equivalently, $X = \mathbb{H}^d/\Gamma$, where Γ is a discrete subgroup of $\mathrm{Iso}(\mathbb{H}^d)$, the group of isometries of $\mathbb{H}^d$. We assume that X has finite volume. Denote by $-\Delta \ge 0$ the standard Laplace-Beltrami operator acting on $L^2(X)$. Furthermore, let $V \in L^\infty(\mathbb{R}_+)$ be a function with compact support and let $R_0 > 0$ be such that $\mathrm{supp}\, V \subseteq [0, R_0]$.
For $N \in \mathbb{N}$, consider the Hilbert space of N bosonic particles
$$\mathcal{H}_N := P_N^+ L^2(X^{\times N}),$$
$P_N^+$ being the symmetrization operator in the N components. In this space we define the Hamiltonian for the Bose gas on X by
$$H_N := -\mu \sum_{i=1}^{N} \Delta_i + \sum_{1 \le i < j \le N} V(d(x_i, x_j)), \tag{2.2}$$
with domain $D(H_N) := P_N^+ D\big(\sum_{i=1}^{N} \Delta_i\big) = P_N^+ H^2(X^{\times N})$, where $\Delta_i$ denotes the operator acting as ∆ on the i-th component, $d : X \times X \to [0, \infty)$ the distance function on X, and $d(x_i, x_j)$ the multiplication operator by the function $X^{\times N} \ni (x_1, \dots, x_N) \mapsto d(x_i, x_j)$.
Note that $H_N$ is self-adjoint on $D(H_N)$, since V is assumed to be essentially bounded. Furthermore, $H_N$ has a unique normalized ground state $\Psi_0 \in \mathcal{H}_N$ with corresponding ground state energy $E_N$. This can be deduced from the strict positivity of the corresponding semigroup (cf. [RS78, Theorem XIII.44]), which in turn follows from $V \ge 0$ and the fact that the semigroup associated to the Laplace-Beltrami operator on connected manifolds is positivity-improving (proven in [Dod83], see also [KLW21, p. 139]).
The one-particle density matrix γ, as a bounded operator on $L^2(X)$, is given by the integral kernel
$$\gamma(x, x') := \int_{X^{\times(N-1)}} \Psi_0(x, \mathbf{x})\,\Psi_0(x', \mathbf{x})\,d\mathbf{x}. \tag{2.3}$$
Furthermore, let $\psi_0 := \mathrm{vol}(X)^{-1/2}\, \mathbb{1}_X$ be the ground state of −∆ on X, in other words, the normalized constant function on X.
In order to establish BEC in the sense of (1.3) we use the following abstract lower bound for manifolds X where the Laplacian has a gap. The proof can be found in Section 4.
Proposition 2.2 (Lower bound)
Let $X = \mathbb{H}^d/\Gamma$ where Γ is a discrete subgroup of $\mathrm{Iso}(\mathbb{H}^d)$ such that $\mathrm{vol}(X) < \infty$. Assume there exists $\Xi > 0$ such that $-\Delta \ge \Xi\,(\mathrm{Id} - |\psi_0\rangle\langle\psi_0|)$. Then we have
$$\langle \psi_0, \gamma\,\psi_0 \rangle \ge 1 - \frac{E_N}{N\,\Xi}.$$
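The mechanism behind Proposition 2.2 is elementary and can be checked in finite dimensions. The following sketch (ours, not from the paper) uses a random density matrix in place of γ and takes the Hamiltonian to be exactly the gap operator $\Xi(\mathrm{Id} - |\psi_0\rangle\langle\psi_0|)$, in which case the bound is saturated.

```python
import numpy as np

rng = np.random.default_rng(0)
n, Xi = 6, 0.7
psi0 = np.eye(n)[:, 0]
H = Xi * (np.eye(n) - np.outer(psi0, psi0))   # spectral-gap operator Xi * (Id - |psi0><psi0|)

M = rng.standard_normal((n, n))
gamma = M @ M.T                               # random positive semidefinite matrix ...
gamma /= np.trace(gamma)                      # ... normalized to a density matrix

occupation = psi0 @ gamma @ psi0
energy = np.trace(H @ gamma)                  # plays the role of E_N / N
assert occupation >= 1.0 - energy / Xi - 1e-12  # the bound of Proposition 2.2
```

The point is simply that any energy above the gap must come from occupation outside $\psi_0$, so a small energy per particle forces a large condensate fraction.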
Hence, in order to obtain a concrete lower bound, we now need an upper bound for $E_N/N$. This will be given in terms of a diluteness parameter defined as
$$Y := \begin{cases} \rho \ln\big((\tanh(a/2))^{-1}\big), & d = 2,\\ \rho \tanh a, & d = 3, \end{cases} \tag{2.4}$$
where a is the 'hyperbolic scattering length' which depends only on the potential V and is defined in (A.2). In fact, we can make E N /N arbitrarily small if Y is small enough. To this end, for any ε > 0 let
Y 0 (ε) := min 3 2ε 3µ +1−1 16π , (8π(R 0 + 1) 2 ) −1 : d = 2, min 3 2ε 3µ +1−1 16πe 2R 0 , (8e 2R 0 (R 0 + 1) 2 ) −1 : d = 3.
(2.5)
Then we obtain the following (see Section 3 for the proof).

Theorem 2.3 (Upper bound)
Suppose that the diluteness parameter Y (defined as in (2.4)) satisfies $Y \le \big(8\pi(R_0+1)^2\big)^{-1}$ in d = 2, or $Y \le \big(8e^{2R_0}(R_0+1)^2\big)^{-1}$ in d = 3. Then
$$\frac{E_N}{N} \le \begin{cases} 16\pi\mu\, Y \left(1 + \frac{8\pi}{3} Y\right), & d = 2,\\[1ex] 16\pi\mu\, e^{2R_0}\, Y \left(1 + \frac{8\pi}{3} e^{2R_0} Y\right), & d = 3. \end{cases} \tag{2.6}$$
In particular, we have $E_N/N \le \varepsilon$ for all $Y \le Y_0(\varepsilon)$.

We now apply the combination of Proposition 2.2 and Theorem 2.3 to two classes of hyperbolic manifolds which are known to have spectral gaps. In order to obtain a thermodynamic limit, our goal is to find a sequence of manifolds of volume tending to infinity with a uniform spectral gap. The first class comprises special non-compact hyperbolic manifolds of finite volume.
Modular surfaces
The case of d = 2 dimensions, where one considers so-called modular surfaces, cf. [Sar03], is the most well-studied and most thoroughly understood one. The projective special linear group $\mathrm{PSL}_2(\mathbb{R}) := \mathrm{SL}_2(\mathbb{R})/\{\pm \mathrm{Id}\}$ acts on $\mathbb{H}^2 \subset \mathbb{C}$ via Möbius transformations
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} z := \frac{az + b}{cz + d},$$
and is isomorphic to the group of orientation-preserving isometries of $\mathbb{H}^2$. The modular surfaces arise by considering the action of a discrete subgroup of $\mathrm{PSL}_2(\mathbb{R})$, the modular group $\mathrm{PSL}_2(\mathbb{Z}) := \mathrm{SL}_2(\mathbb{Z})/\{\pm \mathrm{Id}\}$, and its congruence subgroups. The latter are defined as those subgroups which contain one of the principal congruence subgroups given by
$$\Gamma(L) := \{A \in \mathrm{SL}_2(\mathbb{Z}) : A \equiv \mathrm{Id} \bmod L\}, \qquad L \in \mathbb{N},$$
where mod L is to be understood entrywise. We can then define for each $L \in \mathbb{N}$ a hyperbolic surface by
$$X_L := \mathbb{H}^2/\Gamma(L).$$
In particular, $\mathrm{vol}(X_L) \to \infty$ as $L \to \infty$. Next, we need a uniform spectral gap. First, one can show that there is a gap of $\frac14$ for the continuous spectrum of any hyperbolic surface. It remains to find a similar bound for the lowest non-zero eigenvalue. Selberg conjectured that one actually has the same lower bound $\frac14$ [Sel14]. Although this remains an open problem, there are several proofs of slightly weaker bounds, which are sufficient for our application. To the authors' best knowledge, the best one is given in the following theorem.
Theorem 2.5 ([KS03])
Let Γ be a congruence subgroup of $\mathrm{PSL}_2(\mathbb{Z})$ and $X = \mathbb{H}^2/\Gamma$. For the smallest non-zero eigenvalue $\lambda_1(X)$ of the Laplacian on X one has
$$\lambda_1(X) \ge \frac{1}{4} - \left(\frac{7}{64}\right)^2 = \frac{975}{4096}.$$
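The numerical value in Theorem 2.5 is exact rational arithmetic; it sits strictly between Selberg's older 3/16 bound and the conjectured 1/4, as a quick check confirms:

```python
from fractions import Fraction

gap = Fraction(1, 4) - Fraction(7, 64) ** 2
assert gap == Fraction(975, 4096)
assert Fraction(3, 16) < gap < Fraction(1, 4)
```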
Remark 2.6. Selberg already proved the bound $\lambda_1(X) \ge \frac{3}{16}$ in [Sel65]. For a discussion of further bounds we refer the reader to [Sar03].

Now, combining Proposition 2.2 and Theorem 2.3 with Theorem 2.5 in the setting of modular surfaces yields the following first application.
Corollary 2.7
Let $R_0 > 0$. For all ε > 0, all potentials V with $\mathrm{supp}\, V \subseteq [0, R_0]$ and scattering length a, and all $N, L \in \mathbb{N}$ satisfying
$$\rho \ln\big((\tanh a)^{-1}\big) = \frac{N}{\mathrm{vol}(X_L)} \ln\big((\tanh a)^{-1}\big) < Y_0\left(\tfrac{975}{4096}\,\varepsilon\right),$$
where $Y_0(\cdot)$ is defined in (2.5), we have
$$\langle \psi_0, \gamma\,\psi_0 \rangle \ge 1 - \varepsilon.$$
Remark 2.8. The statement of Corollary 2.7 implies the double limit statement in (1.3). However, it is stronger than (1.3) in two ways: (a) it is quantitative and (b) the occurrence of BEC only requires an assumption on ρ and a, so it also holds for any finite number of particles N and volume vol(X L ) corresponding to the same density. While we focused on the infinite-volume limit in the introduction for emphasis, the result also applies to finite systems.
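The group actions used in this subsection are easy to sanity-check numerically. The following sketch (with an example matrix of ours) verifies membership in a principal congruence subgroup and that Möbius transformations preserve the upper half-plane.

```python
import numpy as np

def mobius(A, z):
    """Action of a real 2x2 matrix on the upper half-plane by z -> (az+b)/(cz+d)."""
    a, b, c, d = A.ravel()
    return (a * z + b) / (c * z + d)

def in_Gamma(A, L):
    """Membership in the principal congruence subgroup Gamma(L) of SL_2(Z)."""
    return round(np.linalg.det(A)) == 1 and (np.mod(A - np.eye(2, dtype=int), L) == 0).all()

A = np.array([[1, 2], [2, 5]])        # integer matrix with det = 5 - 4 = 1
assert in_Gamma(A, 2)                 # A = Id mod 2 entrywise
assert not in_Gamma(A, 3)
w = mobius(A, 0.3 + 0.7j)
assert w.imag > 0                     # SL_2(R) preserves the upper half-plane
```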
Quotients of hyperbolic 3-space by congruence subgroups
The gap result for modular surfaces can be generalized to higher dimensions. Here, it is more convenient to work in the hyperboloid model, see (2.1). For a unital ring R let $\mathrm{SO}_{d,1}(R)$ be the group of R-valued matrices with determinant one which leave $q_d$ invariant. The group of orientation-preserving isometries of $\mathbb{H}^d$ is then given by $\mathrm{SO}^0_{d,1}(\mathbb{R})$, which is defined as the connected component of the identity matrix in $\mathrm{SO}_{d,1}(\mathbb{R})$.
Remark 2.9. In Section 2.2 we used that $\mathrm{SO}^0_{2,1}(\mathbb{R}) \cong \mathrm{PSL}_2(\mathbb{R})$.

Now we can consider principal congruence subgroups as follows, see also [EGM90; BS91]. Let $\mathrm{SO}^0_{d,1}(\mathbb{Z}) := \mathrm{SO}^0_{d,1}(\mathbb{R}) \cap \mathrm{SO}_{d,1}(\mathbb{Z})$. Then the principal congruence subgroups can be defined as
$$\Gamma_d(L) := \{A \in \mathrm{SO}^0_{d,1}(\mathbb{Z}) : A \equiv \mathrm{Id} \bmod L\}, \qquad L \in \mathbb{N}.$$
In particular, note that $\Gamma_d(1) = \mathrm{SO}^0_{d,1}(\mathbb{Z})$. A congruence subgroup is then a subgroup of $\mathrm{SO}^0_{d,1}(\mathbb{Z})$ which contains $\Gamma_d(L)$ for some L.
Analogously to the 2-dimensional case we then define $X_L := \mathbb{H}^d/\Gamma_d(L)$ and obtain hyperbolic manifolds of finite volume. Again,
$$\mathrm{vol}(X_L) = [\mathrm{SO}^0_{d,1}(\mathbb{Z}) : \Gamma_d(L)]\, \mathrm{vol}(X_1),$$
which equally tends to infinity as L → ∞. Finally, we need a variant of Theorem 2.5, i.e., the existence of a gap, for higher dimensions. For d = 3 this was proven by Sarnak [Sar83]. In [EGM90] and [LPS87] it was first generalized to arbitrary dimension. Other versions for more general algebraic groups can be found in [BS91] and [Clo03].
Theorem 2.10
Let $d \ge 3$, Γ be a congruence subgroup of $\mathrm{SO}^0_{d,1}(\mathbb{Z})$ and $X = \mathbb{H}^d/\Gamma$. For the smallest non-zero eigenvalue $\lambda_1(X)$ of the Laplacian on X one has
$$\lambda_1(X) \ge \frac{2d-3}{4}.$$
Then, applying this theorem in combination with Theorem 2.3 and Proposition 2.2 once more for the case d = 3 yields the following.
Corollary 2.11
Let d = 3 and $R_0 > 0$. For all ε > 0, all potentials V with $\mathrm{supp}\, V \subseteq [0, R_0]$ and scattering length a, and all $N, L \in \mathbb{N}$ satisfying
$$\rho \ln\big((\tanh a)^{-1}\big) = \frac{N}{\mathrm{vol}(X_L)} \ln\big((\tanh a)^{-1}\big) < Y_0\left(\tfrac{3}{4}\,\varepsilon\right),$$
where $Y_0(\cdot)$ is defined in (2.5), we have $\langle \psi_0, \gamma\,\psi_0 \rangle \ge 1 - \varepsilon$.
Random compact hyperbolic surfaces
Another possibility is to consider compact hyperbolic manifolds. An analog of Selberg's conjecture in this case is only known in a probabilistic sense and leads to the theory of random compact hyperbolic surfaces as developed by Mirzakhani. Recent surveys of this topic can be found in [Wri20; Mon21]. A compact hyperbolic surface is given by $\mathbb{H}^2/\Gamma$, where $\Gamma \subset \mathrm{PSL}_2(\mathbb{R})$ is a discrete and co-compact subgroup. For $g \in \mathbb{N}$ let
$$\mathcal{M}_g := \{\text{compact hyperbolic surfaces of genus } g\}/\text{isometries},$$
the so-called moduli space, which can also be represented as a quotient of the Teichmüller space by some group action [Mir13, Section 2]. On $\mathcal{M}_g$ one can construct a probability measure $\mathbb{P}^{\mathrm{WP}}_g$ originating from a natural symplectic form on $\mathcal{M}_g$, the so-called Weil-Petersson form [Wri20, 2.8]. By the Gauss-Bonnet theorem the volume of any $X \in \mathcal{M}_g$ equals $2\pi(2g-2)$, and therefore we can consider $g \to \infty$ for an infinite-volume limit.
In this limit an analog of Selberg's 1/4 conjecture was formulated in [Wri20]:
$$\mathbb{P}^{\mathrm{WP}}_g\left(X \in \mathcal{M}_g : \lambda_1(X) \ge \frac14 - \alpha\right) \overset{?}{\underset{g \to \infty}{\longrightarrow}} 1 \quad \text{for all } \alpha > 0. \tag{2.7}$$
As in the deterministic case, this remains an open problem, but several weaker results have been established as well. The currently best one is the following.

Theorem 2.12 ([WX21; LW21])
For every α > 0,
$$\mathbb{P}^{\mathrm{WP}}_g\left(X \in \mathcal{M}_g : \lambda_1(X) \ge \frac{3}{16} - \alpha\right) \to 1, \qquad g \to \infty.$$

Combining this with Proposition 2.2 and Theorem 2.3 yields the following.

Corollary 2.15
Let α > 0 and ξ < 1. Then there exists $g_0 \in \mathbb{N}$ such that for all $g \ge g_0$ there is a measurable set $A_g$ with
$$\mathbb{P}^{\mathrm{WP}}_g(A_g) \ge \xi$$
such that for all $X \in A_g$, $R_0 > 0$, ε > 0, all potentials V with $\mathrm{supp}\, V \subseteq [0, R_0]$ and scattering length a, and $N \in \mathbb{N}$ satisfying
$$\rho \ln\big((\tanh a)^{-1}\big) = \frac{N}{2\pi(2g-2)} \ln\big((\tanh a)^{-1}\big) < Y_0\left(\left(\tfrac{3}{16} - \alpha\right)\varepsilon\right),$$
where $Y_0(\cdot)$ is defined in (2.5), we have $\langle \psi_0, \gamma\,\psi_0 \rangle \ge 1 - \varepsilon$.
Proof. For given α > 0 let
$$A_g := \left\{ X \in \mathcal{M}_g : \lambda_1(X) \ge \frac{3}{16} - \alpha \right\}.$$
Then we use Theorem 2.12 to find, for ξ < 1, a $g_0$ such that $\mathbb{P}^{\mathrm{WP}}_g(A_g) \ge \xi$ for all $g \ge g_0$. Now, for $X \in A_g$ and under the given assumptions we have
$$\lambda_1(X) \ge \frac{3}{16} - \alpha \ge \frac{E_N}{\varepsilon N} \tag{2.8}$$
by Theorem 2.3. Thus, by Proposition 2.2
$$\langle \psi_0, \gamma\,\psi_0 \rangle \ge 1 - \frac{E_N}{N \lambda_1(X)} \overset{(2.8)}{\ge} 1 - \varepsilon.$$
Remark 2.16. The two properties (a) and (b) described in Remark 2.8 also apply to Corollary 2.15.
Upper Bound
In this part we give the proof of Theorem 2.3. First, we show an abstract form of the upper bound, in complete analogy with [Lie+09, Section 2.1], cf. also [LY01, (2.7)]. For a function f on [0, ∞) we define a trial function $\Psi \in L^2(X^{\times N})$ by
$$\Psi(x_1, \dots, x_N) := \prod_{i=2}^{N} F_i(x_1, \dots, x_i), \tag{3.1}$$
where
$$F_i(x_1, \dots, x_i) := f\big(t_i(x_1, \dots, x_i)\big), \qquad t_i(x_1, \dots, x_i) := \min\{d(x_i, x_j) : j = 1, \dots, i-1\}.$$
Proposition 3.1
For any non-decreasing function f on [0, ∞) let Ψ be given by (3.1), and let $\rho := N/\mathrm{vol}(X)$. Then we have
$$\frac{\langle \Psi, H_N \Psi \rangle}{\|\Psi\|^2} \le \frac{N}{(1 - \rho I(f))^2} \left( \rho J(f) + \frac{2}{3}\mu\,(\rho K(f))^2 \right),$$
given that the integrals
$$I(f) := \int_{\mathbb{H}^d} \big(1 - f(d(o,z))^2\big)\,dz,$$
$$J(f) := \int_{\mathbb{H}^d} \Big( \mu f'(d(o,z))^2 + \tfrac{1}{2} V(d(o,z))\,|f(d(o,z))|^2 \Big)\,dz,$$
$$K(f) := \int_{\mathbb{H}^d} f(d(o,z))\, f'(d(o,z))\,dz,$$
for any $o \in \mathbb{H}^d$ chosen as an origin, are finite and $\rho I(f) < 1$.
Proof. The proof is analogous to the Euclidean case [Lie+09], with some modifications for the hyperbolic setting. For a function $\Phi : X^{\times N} \to \mathbb{C}$ let $\nabla_k$ denote the gradient on the manifold X with respect to the k-th component; that is, for each $x = (x_1, \dots, x_N) \in X^{\times N}$ we get an element $\nabla_k \Phi(x) \in T_{x_k} X$ if Φ is smooth enough around x. We write $\langle \cdot, \cdot \rangle_{T_x X} : T_x X \times T_x X \to \mathbb{R}$ for the Riemannian metric of X at the point x, and $\|\cdot\|_{T_x X}$ for the corresponding norm on $T_x X$. For notational convenience we will mostly drop the argument x.
By the chain rule we get, for almost all $x \in X^{\times N}$,
$$\nabla_k \Psi = \sum_{i \ge k} \frac{\Psi}{F_i}\, f'(t_i)\, \nabla_k t_i = \sum_{i \ge k} \frac{\Psi}{F_i}\, f'(t_i)\, \nabla_k d(x_i, x_{i^*}),$$
where $x_{i^*}$ denotes the nearest neighbor of $x_i$ among the points $x_1, \dots, x_{i-1}$. Therefore,
$$\sum_{k=1}^{N} \|\nabla_k \Psi\|^2_{T_{x_k} X} = \sum_{i=1}^{N} \frac{|\Psi|^2}{F_i^2}\, f'(t_i)^2 \sum_{k=1}^{i} \|\nabla_k d(x_i, x_{i^*})\|^2_{T_{x_k} X} + 2 \sum_{k=1}^{N} \sum_{j > i \ge k} \frac{|\Psi|^2}{F_i F_j}\, f'(t_i) f'(t_j)\, \big\langle \nabla_k d(x_i, x_{i^*}),\, \nabla_k d(x_j, x_{j^*}) \big\rangle_{T_{x_k} X},$$
where we use that there is a unique nearest neighbor almost everywhere and that we have to sum over ordered pairs. Since $\|\nabla_x d(x, y)\|_{T_x X} \le 1$ almost everywhere, observe that $\|\nabla_k d(x_i, x_{i^*})\|_{T_{x_k} X} \le \epsilon_{ik}$ and $\sum_k \epsilon_{ik} \le 2$ for almost every x, where
$$\epsilon_{ik} := \begin{cases} 1, & i = k \text{ or } t_i = d(x_i, x_k),\\ 0, & \text{else}. \end{cases}$$
Thus, we arrive at
$$\frac{\langle \Psi, H_N \Psi \rangle}{\|\Psi\|^2} \le 2\mu \sum_{i=1}^{N} \frac{\int |\Psi|^2 F_i^{-2} f'(t_i)^2}{\int |\Psi|^2} + \sum_{i<j} \frac{\int |\Psi|^2\, V(d(x_i, x_j))}{\int |\Psi|^2} \tag{3.2}$$
$$\qquad + 2\mu \sum_{k=1}^{N} \sum_{j > i \ge k} \frac{\int |\epsilon_{ik}\epsilon_{jk}|\, \frac{|\Psi|^2}{F_i F_j}\, f'(t_i) f'(t_j)}{\int |\Psi|^2}. \tag{3.3}$$
Now, for j < i we define $F_{i,j}$ in the same way as $F_i$, with the only difference that we omit the point $x_j$ in the consideration of the nearest neighbors. Likewise, we define $F_{i,jk}$ by omitting $x_j$ and $x_k$. Then $F_{i,j}$ does not depend on $x_j$, and $F_{i,jk}$ does not depend on $x_j$ and $x_k$. Furthermore, since f is monotonically increasing and 0 ≤ f ≤ 1, we have
$$F_{j+1}^2 \cdots F_{i-1}^2\, F_{i+1}^2 \cdots F_N^2 \le F_{j+1,j}^2 \cdots F_{i-1,j}^2\, F_{i+1,ij}^2 \cdots F_{N,ij}^2, \tag{3.4}$$
$$F_j^2 \cdots F_N^2 \ge \left(1 - \sum_{k=1,\,k \ne i,j}^{N} \big(1 - f(d(x_j, x_k))^2\big)\right) F_{j+1,j}^2 \cdots F_{i-1,j}^2 \times \left(1 - \sum_{k=1,\,k \ne i}^{N} \big(1 - f(d(x_i, x_k))^2\big)\right) F_{i+1,ij}^2 \cdots F_{N,ij}^2. \tag{3.5}$$
Furthermore, we trivially find
$$f'(t_i)^2\, \eta_i^2 \le \sum_{j=1}^{i-1} f'(d(x_i, x_j))^2\, [\nabla_i d(x_i, x_j)]^2, \tag{3.6}$$
$$F_i \le f(d(x_i, x_j)) \quad \text{for } j < i. \tag{3.7}$$
Now, the numerator of the right-hand side in (3.2) can be estimated from above using (3.4) together with (3.6) and (3.7):
$$2\mu \sum_{i=1}^{N} \int |\Psi|^2 F_i^{-2} f'(t_i)^2 + \sum_{j<i} \int |\Psi|^2\, V(d(x_i, x_j)) \tag{3.8}$$
$$\le \sum_{j<i} \int F_1^2 \cdots F_{j-1}^2\, F_{j+1,j}^2 \cdots F_{i-1,j}^2\, F_{i+1,ij}^2 \cdots F_{N,ij}^2\, dx_{1,\dots,N,ij} \tag{3.9}$$
$$\qquad \times \int \big( 2\mu f'(d(x_i, x_j))^2 + f(d(x_i, x_j))^2\, V(d(x_i, x_j)) \big)\, dx_i\, dx_j, \tag{3.10}$$
where $dx_{1,\dots,N,ij}$ denotes the integration over all $x_1, \dots, x_N$ except $x_i$ and $x_j$. For the denominator we use (3.5) and obtain similarly
$$\|\Psi\|^2 \ge \int F_1^2 \cdots F_{j-1}^2\, F_{j+1,j}^2 \cdots F_{i-1,j}^2\, F_{i+1,ij}^2 \cdots F_{N,ij}^2 \left( \mathrm{vol}(X) - \sum_{k=1,\,k \ne i,j}^{N} \int_X \big(1 - f(d(x_j, x_k))^2\big)\,dx_j \right) \times \left( \mathrm{vol}(X) - \sum_{k=1,\,k \ne i}^{N} \int_X \big(1 - f(d(x_i, x_k))^2\big)\,dx_i \right) dx_{1,\dots,N,ij}.$$
Now, we use that
$$\int_{X = \mathbb{H}^d/\Gamma} g(d(x, x_0))\,dx \le \int_{\mathbb{H}^d} g(d(o, z))\,dz \tag{3.11}$$
for any positive function g defined on $\mathbb{H}^d$ and all $x_0 \in X$, $o \in \mathbb{H}^d$. This yields
$$(3.10) \le \int_X \int_{\mathbb{H}^d} \big( 2\mu f'(d(o, z))^2 + f(d(o, z))^2\, V(d(o, z)) \big)\,dz\,dx_j = 2\,\mathrm{vol}(X)\, J(f),$$
and for all k,
$$\int_X \big(1 - f(d(x_j, x_k))^2\big)\,dx_j \le \int_{\mathbb{H}^d} \big(1 - f(d(o, z))^2\big)\,dz = I(f).$$
The integral over $dx_{1,\dots,N,ij}$ cancels in the numerator and denominator, and we obtain
$$2\mu \sum_{i=1}^{N} \frac{\int |\Psi|^2 F_i^{-2} f'(t_i)^2}{\int |\Psi|^2} + \sum_{i<j} \frac{\int |\Psi|^2\, V(d(x_i, x_j))}{\int |\Psi|^2} \le \frac{N(N-1)}{2}\, \frac{2\,\mathrm{vol}(X)\, J(f)}{\big(\mathrm{vol}(X) - (N-1) I(f)\big)^2}.$$
Next, we estimate the off-diagonal term (3.3), cf. [LSY01b]. We get the same cancellations in the numerator and denominator, up to the term
$$2\mu \sum_{k=1}^{N} \sum_{j > i \ge k} \int_{X \times X} |\epsilon_{ik}\epsilon_{jk}|\, f(t_i) f(t_j) f'(t_i) f'(t_j)\,dx_i\,dx_j \le 4\mu \sum_{k=1}^{N} \sum_{j > i > k} \int_{X \times X} f(d(x_i, x_k)) f(d(x_j, x_k)) f'(d(x_i, x_k)) f'(d(x_j, x_k))\,dx_i\,dx_j \le \frac{2}{3}\mu N(N-1)(N-2)\, K(f)^2.$$
This shows the desired bounds.
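The combinatorial input $\sum_k \epsilon_{ik} \le 2$ used in the proof (each $x_i$ contributes itself and its almost surely unique nearest neighbor among $x_1, \dots, x_{i-1}$) can be illustrated with random points; the sketch below uses Euclidean points as a stand-in for the manifold.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
x = rng.standard_normal((n, 2))  # generic points: nearest neighbors are a.s. unique
for i in range(1, n):
    dists = np.linalg.norm(x[i] - x[:i], axis=1)
    t_i = dists.min()
    eps = [1 if (k == i or (k < i and np.isclose(dists[k], t_i))) else 0 for k in range(n)]
    assert eps[i] == 1
    assert sum(eps) <= 2  # x_i itself plus at most one nearest neighbor
```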
Our choice for f in the definition (3.1) of the $F_i$ will be $f_R$, R > 0, given as in (A.2). Then we have $J(f_R) = E_R$, which is explicitly computed in Theorem A.2. Therefore, it remains to find explicit bounds for $I(f_R)$ and $K(f_R)$, which is the content of the following two lemmas.
Lemma 3.2
For all $R > R_0$,
$$I(f_R) \le \begin{cases} \dfrac{2\pi}{\ln\frac{\tanh(R/2)}{\tanh(a/2)}}\,(R^2 - a^2), & d = 2,\\[2ex] \dfrac{4\pi \tanh a\, \tanh R}{\tanh R - \tanh a}\,(R^2 - a^2), & d = 3. \end{cases}$$
Proof. Using hyperbolic polar coordinates, we get
$$I(f_R) = \mathrm{vol}(S^{d-1}) \int_0^R \big(1 - f_R(r)^2\big) \sinh^{d-1} r\,dr \le \mathrm{vol}(S^{d-1}) \int_0^a \sinh^{d-1} r\,dr + \mathrm{vol}(S^{d-1}) \int_a^R \left(1 - \frac{f_\infty(r)^2}{f_\infty(R)^2}\right) \sinh^{d-1} r\,dr = \mathrm{vol}(S^{d-1}) \int_0^R \sinh^{d-1} r\,dr - \frac{\mathrm{vol}(S^{d-1})}{f_\infty(R)^2} \int_a^R f_\infty(r)^2 \sinh^{d-1} r\,dr.$$
Let $u(r) := \int_0^r \sinh^{d-1}(r')\,dr'$. With integration by parts the second term can be expressed as
$$\int_a^R f_\infty(r)^2 \sinh^{d-1} r\,dr = \big[f_\infty(r)^2 u(r)\big]_a^R - \int_a^R 2 f_\infty(r) f'_\infty(r) u(r)\,dr = f_\infty(R)^2 \int_0^R \sinh^{d-1} r\,dr - \int_a^R 2 f_\infty(r) f'_\infty(r) u(r)\,dr.$$
Thus, using that $u(r) \le r \sinh^{d-1} r$ and that $f'_\infty(r) \sinh^{d-1} r = C_d(a)$ is independent of r (cf. Remark A.3),
$$I(f_R) \le \frac{2\,\mathrm{vol}(S^{d-1})}{f_\infty(R)^2} \int_a^R f_\infty(r) f'_\infty(r) u(r)\,dr \le \frac{2\, C_d(a)\,\mathrm{vol}(S^{d-1})}{f_\infty(R)^2} \int_a^R \underbrace{f_\infty(r)}_{\le f_\infty(R)}\, r\,dr \le \frac{C_d(a)\,\mathrm{vol}(S^{d-1})}{f_\infty(R)}\,(R^2 - a^2).$$
Note that we have C 2 (a) = 1 and C 3 (a) = tanh a.
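The identity $f'_\infty(r) \sinh^{d-1} r = C_d(a)$ invoked above can be verified numerically from the explicit forms of $f_\infty$ given below in the proof of Lemma 3.3 ($f_\infty(r) = \ln\frac{\tanh(r/2)}{\tanh(a/2)}$ for d = 2 and $f_\infty(r) = 1 - \frac{\tanh a}{\tanh r}$ for d = 3):

```python
import numpy as np

a = 0.3
f2 = lambda r: np.log(np.tanh(r / 2) / np.tanh(a / 2))  # f_inf for d = 2
f3 = lambda r: 1.0 - np.tanh(a) / np.tanh(r)            # f_inf for d = 3

def deriv(f, r, h=1e-6):
    """Central finite-difference approximation of f'(r)."""
    return (f(r + h) - f(r - h)) / (2 * h)

for r in [0.5, 1.0, 2.0]:
    assert abs(deriv(f2, r) * np.sinh(r) - 1.0) < 1e-8               # C_2(a) = 1
    assert abs(deriv(f3, r) * np.sinh(r) ** 2 - np.tanh(a)) < 1e-8   # C_3(a) = tanh a
```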
Lemma 3.3
For all $R > R_0$,
$$K(f_R) \le \begin{cases} \dfrac{2\pi R}{\ln\frac{\tanh(R/2)}{\tanh(a/2)}}, & d = 2,\\[2ex] \dfrac{4\pi \tanh a\; R}{1 - \frac{\tanh a}{\tanh R}}, & d = 3. \end{cases}$$
Proof. Using $f'_R(r) f_R(r) = \frac12 (f_R(r)^2)'$, partial integration and (A.3), we obtain
$$K(f_R) = \frac{\mathrm{vol}(S^{d-1})}{2} \int_0^R (f_R(r)^2)' \sinh^{d-1} r\,dr = \frac{\mathrm{vol}(S^{d-1})}{2}\, f_R(R)^2 \sinh^{d-1}(R) - \frac{\mathrm{vol}(S^{d-1})}{2} \int_0^R f_R(r)^2 (\sinh^{d-1} r)'\,dr \le \frac{\mathrm{vol}(S^{d-1})}{2}\, f_R(R)^2 \sinh^{d-1}(R) - \frac{\mathrm{vol}(S^{d-1})}{2 f_\infty(R)^2} \int_a^R f_\infty(r)^2 (\sinh^{d-1} r)'\,dr = \frac{\mathrm{vol}(S^{d-1})}{f_\infty(R)^2} \int_a^R f_\infty(r) f'_\infty(r) \sinh^{d-1} r\,dr.$$
Using again that $f'_\infty(r) \sinh^{d-1} r = C_d(a)$ is independent of r, we conclude
$$K(f_R) \leq \frac{C_d(a)\,\mathrm{vol}(S^{d-1})}{f_\infty(R)^2} \int_a^R f_\infty(r)\,\mathrm{d}r \leq \frac{C_d(a)\,\mathrm{vol}(S^{d-1})\, R}{f_\infty(R)},$$
since $f_\infty(r) \leq f_\infty(R)$ and $R - a \leq R$.
Plugging in $\mathrm{vol}(S^1) = 2\pi$, $\mathrm{vol}(S^2) = 4\pi$, $C_2(a) = 1$, $C_3(a) = \tanh a$, and $f_\infty(R) = \ln\frac{\tanh(R/2)}{\tanh(a/2)}$ for d = 2, $f_\infty(R) = 1 - \frac{\tanh a}{\tanh R}$ for d = 3 yields the claimed estimates.
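The bound of Lemma 3.3 for d = 3 can likewise be checked numerically in the hardcore case (a sketch; a and R are arbitrary, and the derivative f'_∞(r) = tanh a / sinh²r is used in closed form):

```python
import math

# d = 3, hardcore case: on (a, R), f_R = f_inf/f_inf(R) with
# f_inf(r) = 1 - tanh(a)/tanh(r) and f_inf'(r) = tanh(a)/sinh(r)^2.
a, R = 0.5, 3.0
ta = math.tanh(a)
fiR = 1.0 - ta / math.tanh(R)  # f_inf(R)

def K_fR(n=100000):
    # K(f_R) = vol(S^2) * int_0^R f_R(r) f_R'(r) sinh(r)^2 dr  (zero on [0, a])
    h, s = (R - a) / n, 0.0
    for i in range(n):
        r = a + (i + 0.5) * h
        fR = (1.0 - ta / math.tanh(r)) / fiR
        dfR = ta / (math.sinh(r) ** 2 * fiR)
        s += fR * dfR * math.sinh(r) ** 2 * h
    return 4.0 * math.pi * s

bound = 4.0 * math.pi * ta * R / fiR
assert 0.0 < K_fR() <= bound
```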
Remark 3.4. One can write the estimates from Lemmas 3.2 and 3.3 in a dimension-independent way as
$$I(f_R) \leq \frac{f'_\infty(R)\, \sinh^{d-1} R\;\mathrm{vol}(S^{d-1})}{f_\infty(R)}\,(R^2 - a^2), \qquad K(f_R) \leq \frac{f'_\infty(R)\, \sinh^{d-1} R\;\mathrm{vol}(S^{d-1})}{f_\infty(R)},$$
as can be seen in the proofs.
$$\frac{E_N}{N} \leq \frac{4\pi\rho\mu\, \tanh a \tanh R}{\Big(1 - 4\pi\rho\, \tanh a\, (R^2 - a^2)\, \frac{\tanh R}{\tanh R - \tanh a}\Big)^2\, (\tanh R - \tanh a)} \left(1 + \frac{8}{3}\pi\rho\, \tanh a\, \frac{\tanh R}{\tanh R - \tanh a}\right),$$
provided that $4\pi\rho\, \tanh a\, (R^2 - a^2)\, \frac{\tanh R}{\tanh R - \tanh a} < 1$.
Proof. Plugging the concrete upper bounds of Lemma 3.2 and Lemma 3.3 (in the form of Remark 3.4) and Theorem A.2 into Proposition 3.1 yields
$$\frac{\langle \Psi, H_N \Psi\rangle}{N \|\Psi\|^2} \leq \frac{\rho\, C_d(a)\, \mu\, \mathrm{vol}(S^{d-1})}{\Big(1 - \frac{\rho\, C_d(a)\, \mathrm{vol}(S^{d-1})}{f_\infty(R)}\,(R^2 - a^2)\Big)^2 f_\infty(R)} \left(1 + \frac{2}{3}\, \frac{\rho\, C_d(a)\, \mathrm{vol}(S^{d-1})}{f_\infty(R)}\right).$$
By using the values for C d (a) and f ∞ (R) for d = 2 and d = 3 one obtains the claimed upper bounds.
Proof of Theorem 2.3. Choose R := max{R_0, a + 1}, which is eligible in Proposition 3.5. Then, using a ≤ R_0, we find R² − a² ≤ (R_0 + 1)². Furthermore, for d = 2 we have
$$\frac{\rho}{\ln\frac{\tanh(R/2)}{\tanh(a/2)}} = \frac{\rho}{\ln\big(\tanh(a/2)^{-1}\big)} \cdot \frac{1}{1 - \frac{\ln \tanh(R/2)}{\ln \tanh(a/2)}} \leq \frac{Y}{1 - \frac{\ln \tanh((a+1)/2)}{\ln \tanh(a/2)}} \leq \frac{Y}{1 - e^{-1}} \leq 2Y,$$
and for d = 3,
$$\frac{\tanh R}{\tanh R - \tanh a} \leq \frac{\tanh(a+1)}{\tanh(a+1) - \tanh a} \leq e^{2a} \leq e^{2R_0}.$$
Using these estimates, the upper bounds of Proposition 3.5 simplify as follows:
$$\frac{E_N}{N} \leq \begin{cases} \dfrac{4\pi\mu Y}{\big(1 - 4\pi (R_0+1)^2 Y\big)^2}\Big(1 + \dfrac{8\pi}{3} Y\Big) & d = 2,\\[2ex] \dfrac{4\pi\mu\, e^{2R_0} Y}{\big(1 - 4\pi e^{2R_0} (R_0+1)^2 Y\big)^2}\Big(1 + \dfrac{8\pi}{3}\, e^{2R_0} Y\Big) & d = 3. \end{cases}$$
If we assume 4π(R_0 + 1)² Y ≤ 1/2 and 4π e^{2R_0}(R_0 + 1)² Y ≤ 1/2, respectively, we get (2.6). For the choice of Y_0(ε), note that for Y ≥ 0 the inequality aY(1 + bY) ≤ c has the solution
$$Y \leq \frac{\sqrt{4bc/a + 1} - 1}{2b}.$$
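The claimed root of the quadratic inequality can be verified directly (arbitrary positive sample values for a, b, c):

```python
import math

# a*Y*(1 + b*Y) <= c for Y >= 0 is the quadratic a*b*Y^2 + a*Y - c <= 0,
# whose positive root is Y* = (sqrt(4*b*c/a + 1) - 1)/(2*b).
a, b, c = 2.0, 5.0, 0.3  # arbitrary positive sample values
Ystar = (math.sqrt(4 * b * c / a + 1) - 1) / (2 * b)

assert abs(a * Ystar * (1 + b * Ystar) - c) < 1e-12   # equality at the root
assert a * (Ystar / 2) * (1 + b * Ystar / 2) < c      # strict below the root
```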
Lower Bound
In this section we prove the lower bound (Proposition 2.2). Recall that $\Psi_0 \in L^2(X^{\times N})$ is the ground state of the operator $H_N$ (2.2) and $\psi_0$ the ground state of $-\Delta$ on X. Furthermore, the one-particle density matrix $\gamma$ was defined in (2.3).
Proof of Proposition 2.2. Since V ≥ 0, we have
$$\mathrm{tr}(-\Delta\gamma) = \Big\langle \Psi_0, -\frac{1}{N}\sum_{i=1}^N \Delta_i \Psi_0 \Big\rangle \leq \frac{E_N}{N}.$$
With $P_m$ and the decomposition $\gamma = \sum_n p_n \langle\cdot, \phi_n\rangle \phi_n$ as introduced below, we have
$$\sum_n p_n \langle \phi_n, -\Delta P_m \phi_n\rangle \geq \Xi \sum_n p_n \big\|\,|\psi_0\rangle\langle\psi_0|^{\perp} \phi_n\big\|^2.$$
Thus,
$$\langle \psi_0, \gamma \psi_0\rangle = \sum_n p_n\, |\langle \psi_0, \phi_n\rangle|^2 = \sum_n p_n \Big(1 - \big\|\,|\psi_0\rangle\langle\psi_0|^{\perp} \phi_n\big\|^2\Big) = 1 - \sum_n p_n \big\|\,|\psi_0\rangle\langle\psi_0|^{\perp} \phi_n\big\|^2 \geq 1 - \frac{E_N}{N\,\Xi}.$$
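The final inequality is a pure spectral-gap statement; a finite-dimensional toy check (numpy; all matrices random, dimension and gap chosen arbitrarily) illustrates it:

```python
import numpy as np

rng = np.random.default_rng(0)
n, Xi = 6, 0.7  # toy dimension and spectral gap, chosen arbitrarily

# Toy "Laplacian" H: ground state psi0 with eigenvalue 0, all other
# eigenvalues at least Xi.
evals = np.concatenate(([0.0], Xi + rng.random(n - 1)))
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
H = Q @ np.diag(evals) @ Q.T
psi0 = Q[:, 0]

# Random one-particle density matrix gamma (psd, trace one).
A = rng.standard_normal((n, n))
gamma = A @ A.T
gamma /= np.trace(gamma)

# Spectral-gap bound: <psi0, gamma psi0> >= 1 - tr(H gamma)/Xi.
overlap = psi0 @ gamma @ psi0
assert overlap >= 1 - np.trace(H @ gamma) / Xi - 1e-12
```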
A. Variational principle
In this part we show existence and uniqueness of the ground state for the key two-particle scattering problem in the hyperbolic setting. This will be used in the choice of the N-particle test functions in the upper bound in Section 3. We also define a 'hyperbolic scattering length' a. As in the Euclidean case, it will correspond to the radius of a hardcore potential with the same scattering behavior. The arguments follow closely those in [LY01; Lie+09]. Let V : [0, ∞) → [0, ∞) be a measurable function with compact essential support and let R_0 > 0 be such that supp V ⊆ [0, R_0]. Let o ∈ H^d be fixed. For R > R_0 and φ ∈ H¹(B_R(o)), we define the functional
$$E_R(\phi) := \int_{B_R(o) \subseteq \mathbb{H}^d} \mu\, |\nabla\phi(z)|^2 + \frac{1}{2} V(d(o,z))\, |\phi(z)|^2 \,\mathrm{d}z.$$
Remark A.1. With this functional we can describe two-particle energies on a d-dimensional hyperbolic manifold X, cf. the proof of Proposition 3.1.
The radial profile $f_\infty$ appearing in (A.2) vanishes at some number a > 0. The energy corresponding to $\phi_R$ is given by
$$E_R := E_R(\phi_R) = \mu\, f'_\infty(R)\, \sinh^{d-1} R\; \frac{\mathrm{vol}(S^{d-1})}{f_\infty(R)}.$$
Finally, we have that $f_R$ is non-decreasing and
$$f_R(r) \geq \frac{f_\infty(r)}{f_\infty(R)} \quad \text{for all } r \geq a. \qquad \text{(A.3)}$$
Remark A.3. (a) Notice that we indeed defined a in such a way that $f_\infty(a) = 0$; i.e., if V is a hardcore potential with radius R_0, then a = R_0.
(b) As $f_\infty$ is an indefinite integral of $(\sinh^{d-1})^{-1}$ (up to a constant), the quantity $f'_\infty(R) \sinh^{d-1} R$ depends only on a (or V) but not on R. Therefore, we also write $C_d(a) := f'_\infty(R) \sinh^{d-1} R$.
For $\phi \in H^1(B_R(o))$, let $\tilde\phi$ be the spherically symmetric function given by the square root of the spherical average of $\phi^2$. By the generalized Jensen inequality for probability measures, we obtain $\|\nabla \tilde\phi\|^2 \leq \|\nabla \phi\|^2$ and thus also $E_R(\tilde\phi) \leq E_R(\phi)$, because the potential is assumed to be spherically symmetric as well.
Existence of a minimizer: As $E_R$ is bounded from below, there exists a minimizing sequence of spherically symmetric $(\phi_n)$ in $H^1(B_R(o))$ with $\phi_n(z) = 1$ for a.e. z with d(o, z) = R and all n. Define $\tilde\phi_n \in H^1(\mathbb{H}^d)$ by $\tilde\phi_n(z) := \phi_n(z)$ for $z \in B_R(o)$ and $\tilde\phi_n(z) := h(d(o, z))$ otherwise, for some $h \in C^\infty(\mathbb{R}_+)$ with h(r) = 1 for r < R + 1 and h(r) = 0 for r > 2R + 1. As $\sup_n \|\tilde\phi_n\|_{H^1(\mathbb{H}^d)} < \infty$ (and because $H^1(\mathbb{H}^d)$ is reflexive, cf. [Heb96, Proposition 2.4]), one can find a subsequence $(\tilde\phi_{n_k})$ which converges weakly in $H^1(\mathbb{H}^d)$ to some $\tilde\phi \in H^1(\mathbb{H}^d)$, which is rotationally symmetric. We then have that $(\phi_{n_k})$ also converges weakly to $\phi := \tilde\phi|_{B_R(o)} \in H^1(B_R(o))$. One gets $\phi(z) = 1$ for a.e. z with d(o, z) = R, because the radial part is continuous outside of the origin and $\tilde\phi(z) = 1$ for $d(o, z) \in (R, R+1)$. By the equivalence of lower semicontinuity and weak lower semicontinuity for convex functions, we obtain $\lim_{k\to\infty} E_R(\phi_{n_k}) \geq E_R(\phi)$, and therefore $\phi$ is a minimizer. The Euler-Lagrange equation (A.1) follows by considering $\frac{\mathrm{d}}{\mathrm{d}\delta}\big|_{\delta=0} E_R(\phi + \delta\psi) = 0$ for all infinitely differentiable functions $\psi$ which vanish for all z with d(o, z) ≥ R. Furthermore, (A.1) can be written for the radial part $f_R$ on (0, R), given by $f_R(d(o, z)) := \phi(z)$; this is a linear ODE with boundary values $f_R(R) = 1$, $f'_R(R) = 0$. Thus, it has a unique solution.
For R_0 < d(o, z) < R we infer from (A.1) that $-\Delta\phi = 0$. As the Laplace-Beltrami operator on $\mathbb{H}^d$ is given in hyperbolic polar coordinates by $\Delta = \sinh(r)^{1-d}\, \partial_r \big(\sinh(r)^{d-1} \partial_r\big) + \sinh(r)^{-2} \Delta_\Sigma$, we find that $\partial_r\big(\sinh(r)^{d-1}\, \partial_r f_R(r)\big) = 0$. The corresponding solutions for d = 2 and d = 3 are given by (A.2).
For the energy we use partial integration and $f'_\infty(r) = C_d(a)/\sinh^{d-1} r$ (cf. Remark A.3). Thus we get
$$E_R = \mathrm{vol}(S^{d-1})\, \mu \big[\sinh^{d-1} r\, f_R(r)\, \partial_r f_R(r)\big]_0^R + \mathrm{vol}(S^{d-1}) \int_0^R \Big(-\mu \frac{1}{\sinh^{d-1}(r)}\, \partial_r\big(\sinh^{d-1}(r)\, \partial_r f_R(r)\big) + \frac{1}{2} V(r) f_R(r)\Big) f_R(r)\, \sinh^{d-1}(r)\,\mathrm{d}r$$
$$= \frac{\mu\, \mathrm{vol}(S^{d-1})}{f_\infty(R)^2} \big[\sinh^{d-1} r\, f_\infty(r)\, \partial_r f_\infty(r)\big]_0^R + \mathrm{vol}(S^{d-1}) \int_0^R \underbrace{\Big(-\mu \Delta_r f_R(r) + \frac{1}{2} V(r) f_R(r)\Big)}_{=0} f_R(r)\, \sinh^{d-1}(r)\,\mathrm{d}r = \frac{\mu\, \mathrm{vol}(S^{d-1})}{f_\infty(R)}\, \sinh^{d-1}(R)\, f'_\infty(R).$$
The last statement (A.3) follows in the same way as in [Lie+09, Lemma C.2] from the Hopf maximum principle.
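The closed-form radial profiles in (A.2) can be sanity-checked numerically: a sketch, assuming the explicit forms f_∞(r) = ln(tanh(r/2)/tanh(a/2)) for d = 2 and f_∞(r) = 1 − tanh a/tanh r for d = 3 used above, with an arbitrary a:

```python
import math

# (A.2): f_inf(r) = ln(tanh(r/2)/tanh(a/2)) for d = 2 and
# f_inf(r) = 1 - tanh(a)/tanh(r) for d = 3.  Both vanish at r = a and satisfy
# f_inf'(r) * sinh(r)^(d-1) = C_d(a), i.e. (sinh^(d-1) f')' = 0.
a, h = 0.5, 1e-6

f2 = lambda r: math.log(math.tanh(r / 2) / math.tanh(a / 2))
f3 = lambda r: 1.0 - math.tanh(a) / math.tanh(r)

assert abs(f2(a)) < 1e-12 and abs(f3(a)) < 1e-12
for r in (0.8, 1.5, 2.4):
    d2 = (f2(r + h) - f2(r - h)) / (2 * h)  # central finite difference
    d3 = (f3(r + h) - f3(r - h)) / (2 * h)
    assert abs(d2 * math.sinh(r) - 1.0) < 1e-6                # C_2(a) = 1
    assert abs(d3 * math.sinh(r) ** 2 - math.tanh(a)) < 1e-6  # C_3(a) = tanh a
```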
Theorem 2.3 (Abstract upper bound). Let $X = \mathbb{H}^d/\Gamma$, where $\Gamma$ is a discrete subgroup of $\mathrm{Iso}(\mathbb{H}^d)$ such that vol(X) < ∞. Let V be a potential supported in [0, R_0] with hyperbolic scattering length a, and set $E_N := \inf \sigma(H_N)$.
A fundamental domain for $X_1$ is given by (cf. [DS05, Lemma 2.3.1]) $F_1 = \{z \in \mathbb{H}^2 : |\mathrm{Re}\, z| \leq 1/2,\ |z| \geq 1\}$, and from that one can compute directly that $\mathrm{vol}(X_1) = \pi/3$. Furthermore, as
$$[\mathrm{SL}_2(\mathbb{Z}) : \Gamma(L)] = L^3 \prod_{p\, \mid\, L \text{ prime}} \Big(1 - \frac{1}{p^2}\Big),$$
see [DS05, Exercise 1.2.3(b)], we can infer that
$$\mathrm{vol}(X_L) = [\mathrm{SL}_2(\mathbb{Z}) : \Gamma(L)]\; \mathrm{vol}(X_1) = \frac{\pi}{3}\, L^3 \prod_{p\, \mid\, L \text{ prime}} \Big(1 - \frac{1}{p^2}\Big).$$
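The index formula above can be cross-checked by brute force for small L (a sketch; the naive O(L⁴) count over matrices mod L is fine at this size):

```python
from itertools import product

# [SL2(Z) : Gamma(L)] = |SL2(Z/L)| = L^3 * prod_{p | L prime} (1 - 1/p^2);
# brute-force count of 2x2 matrices mod L with determinant 1 for small L.
def sl2_order(L):
    return sum(1 for a, b, c, d in product(range(L), repeat=4)
               if (a * d - b * c) % L == 1)

def index_formula(L):
    out = float(L ** 3)
    for p in range(2, L + 1):
        if L % p == 0 and all(p % q for q in range(2, p)):  # p prime, p | L
            out *= 1 - 1 / p**2
    return out

for L in (2, 3, 4, 5, 6):
    assert abs(sl2_order(L) - index_formula(L)) < 1e-9
# e.g. |SL2(Z/2)| = 6, so vol(X_2) = (pi/3) * 6 = 2*pi
```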
Proposition 2.4 ([Sel14; Sar03]). Let X be a Riemannian surface, that is, $X = \mathbb{H}^2/\Gamma$, where $\Gamma$ is any discrete subgroup of the group of isometries of $\mathbb{H}^2$. Then the continuous spectrum of the Laplacian on X equals [1/4, ∞).
Proposition 3.5. Let R ≥ R_0 and R > a. In d = 2 we have, for all ρ and a with $\frac{2\pi\rho\,(R^2 - a^2)}{\ln\frac{\tanh(R/2)}{\tanh(a/2)}} < 1$,
$$\frac{E_N}{N} \leq \frac{2\pi\rho\mu}{\Big(1 - \frac{2\pi\rho\,(R^2 - a^2)}{\ln\frac{\tanh(R/2)}{\tanh(a/2)}}\Big)^2 \ln\frac{\tanh(R/2)}{\tanh(a/2)}} \left(1 + \frac{4\pi}{3}\, \frac{\rho}{\ln\frac{\tanh(R/2)}{\tanh(a/2)}}\right),$$
and in d = 3 we have, for all ρ and a with $4\pi\rho\, \tanh a\, (R^2 - a^2)\, \frac{\tanh R}{\tanh R - \tanh a} < 1$,
$$\frac{E_N}{N} \leq \frac{4\pi\rho\mu\, \tanh a \tanh R}{\Big(1 - 4\pi\rho\, \tanh a\, (R^2 - a^2)\, \frac{\tanh R}{\tanh R - \tanh a}\Big)^2\, (\tanh R - \tanh a)} \left(1 + \frac{8}{3}\pi\rho\, \tanh a\, \frac{\tanh R}{\tanh R - \tanh a}\right).$$
Let $P_m := \mathbb{1}_{[0,m]}(-\Delta)$ be the spectral projection of $-\Delta$ onto all spectral values smaller than m, which makes $-\Delta P_m$ bounded. By dominated convergence, using that $\mathrm{tr}(-\Delta P_m \gamma) \leq E_N/N$, we see that $\mathrm{tr}((-\Delta + \Delta P_m)\gamma) \to 0$ as $m \to \infty$. Now, by the spectral theorem we can write $\gamma = \sum_n p_n \langle\cdot, \phi_n\rangle \phi_n$ with $\sum_n p_n = 1$ and $(\phi_n)$ an orthonormal basis of $L^2(X)$.
Theorem A.2. In the class of functions $\phi \in H^1(B_R(o))$ with $\phi(z) = 1$ for a.e. z with d(o, z) = R, there exists a unique minimizer $\phi_R$ of $E_R$. It is spherically symmetric and non-negative, and for $R_0 < d(o, z) < R$ we have $\phi_R(z) = f_R(d(o, z))$ with $f_R(r) := f_\infty(r)/f_\infty(R)$.
Theorem 2.12 ([WX21; LW21]). We have for all α > 0
$$\lim_{g\to\infty} \mathbb{P}^{\mathrm{WP}}_g\Big[X \in \mathcal{M}_g : \lambda_1(X) \geq \frac{3}{16} - \alpha\Big] = 1.$$
Remark 2.13. This improves a famous result by Mirzakhani [Mir13, Theorem 4.8], where she showed the same with the constant $\frac{1}{4}\big(\frac{\ln 2}{2\pi + \ln 2}\big)^2 \approx 0.002$ instead of $\frac{3}{16}$.
Remark 2.14. In other settings of random hyperbolic manifolds, namely for conformally compact infinite-area hyperbolic surfaces [MN21] and for finite-area non-compact hyperbolic surfaces [HM21], the lower bound 3/16 in Theorem 2.12 could actually be improved to 1/4; see also the references therein.
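A quick arithmetic check of the constants in Remark 2.13:

```python
from math import log, pi

# Mirzakhani's constant (1/4) * (ln 2 / (2*pi + ln 2))^2 versus 3/16.
mirzakhani = 0.25 * (log(2) / (2 * pi + log(2))) ** 2
assert 0.002 < mirzakhani < 0.003
assert mirzakhani < 3 / 16
```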
Corollary 2.15
Acknowledgments. The authors thank Christian Brennecke, Matthew de Courcy-Ireland, Andreas Deuchert, Søren Fournais, and Christian Hainzl for useful comments.
Proof. First, we show that we can restrict to non-negative and spherically symmetric functions as minimizers. Let f, g be real-valued functions on R. Then we find (cf. [LL01, …]) … for all points where they are differentiable and f² + g² > 0. Thus, we have a.e. … Furthermore, … dz, is convex. For $\phi \in H^1(B_R(o))$ let $\tilde\phi$ be the spherically symmetric function given by the square root of the
Bose-Einstein condensation beyond the Gross-Pitaevskii regime. A Adhikari, C Brennecke, B Schlein, Ann. Henri Poincaré22A. Adhikari, C. Brennecke, and B. Schlein. "Bose-Einstein condensation beyond the Gross- Pitaevskii regime". In: Ann. Henri Poincaré 22.4 (2021), pp. 1163-1233.
Bose-Einstein quantum phase transition in an optical lattice model. M Aizenman, E H Lieb, R Seiringer, J P Solovej, J Yngvason, Phys. Rev. A. 7023612M. Aizenman, E. H. Lieb, R. Seiringer, J. P. Solovej, and J. Yngvason. "Bose-Einstein quan- tum phase transition in an optical lattice model". In: Phys. Rev. A 70.2 (2004), p. 023612.
Quantum Simulators: Architectures and Opportunities. E Altman, PRX Quantum. 217003E. Altman et al. "Quantum Simulators: Architectures and Opportunities". In: PRX Quan- tum 2 (2021), p. 017003.
Complete Bose-Einstein condensation in the Gross-Pitaevskii regime. C Boccato, C Brennecke, S Cenatiempo, B Schlein, Commun. Math. Phys. 3593C. Boccato, C. Brennecke, S. Cenatiempo, and B. Schlein. "Complete Bose-Einstein con- densation in the Gross-Pitaevskii regime". In: Commun. Math. Phys. 359.3 (2018), pp. 975- 1026.
Bogoliubov theory in the Gross-Pitaevskii limit. C Boccato, C Brennecke, S Cenatiempo, B Schlein, Acta Math. 222C. Boccato, C. Brennecke, S. Cenatiempo, and B. Schlein. "Bogoliubov theory in the Gross-Pitaevskii limit". In: Acta Math. 222.2 (2019), pp. 219-335.
Optimal rate for Bose-Einstein condensation in the Gross-Pitaevskii regime. C Boccato, C Brennecke, S Cenatiempo, B Schlein, Commun. Math. Phys. 376C. Boccato, C. Brennecke, S. Cenatiempo, and B. Schlein. "Optimal rate for Bose-Einstein condensation in the Gross-Pitaevskii regime". In: Commun. Math. Phys. 376.2 (2020), pp. 1311-1395.
Quantum simulation of hyperbolic space with circuit quantum electrodynamics: From graphs to geometry. I Boettcher, P Bienias, R Belyansky, A J Kollár, A V Gorshkov, Phys. Rev. A. 10232208I. Boettcher, P. Bienias, R. Belyansky, A. J. Kollár, and A. V. Gorshkov. "Quantum simulation of hyperbolic space with circuit quantum electrodynamics: From graphs to geometry". In: Phys. Rev. A 102.3 (2020), p. 032208.
N. N. Bogoliubov. "On the theory of superfluidity". In: Izv. Akad. Nauk USSR 11.77 (1947). Engl. translation in J. Phys. (USSR) 11 (1947), 23.
Plancks Gesetz und Lichtquantenhypothese. S N Bose, Z. Phys. S. N. Bose. "Plancks Gesetz und Lichtquantenhypothese". In: Z. Phys. (1924), pp. 178- 181.
The second-order correction to the ground state energy of the dilute Bose gas. B Brietzke, J P Solovej, Ann. Henri Poincaré21B. Brietzke and J. P. Solovej. "The second-order correction to the ground state energy of the dilute Bose gas". In: Ann. Henri Poincaré 21.2 (2020), pp. 571-626.
. M Burger, P Sarnak, Ramanujan duals II". In: Invent. Math. 106M. Burger and P. Sarnak. "Ramanujan duals II". In: Invent. Math. 106.1 (1991), pp. 1-11.
Eigenvalues in Riemannian geometry. I Chavel, Pure and Applied Mathematics. Academic PressI. Chavel. Eigenvalues in Riemannian geometry. Pure and Applied Mathematics. Academic Press, 1984.
Démonstration de la conjecture τ. L Clozel, In: Invent. Math. 151L. Clozel. "Démonstration de la conjecture τ ". In: Invent. Math. 151.2 (2003), pp. 297-328.
Bose-Einstein condensation of scalar fields on hyperbolic manifolds. G Cognola, L Vanzo, Phys. Rev. D. 47104575G. Cognola and L. Vanzo. "Bose-Einstein condensation of scalar fields on hyperbolic man- ifolds". In: Phys. Rev. D 47.10 (1993), p. 4575.
Maximum principle for parabolic inequalities and the heat flow on open manifolds. J Dodziuk, Indiana Univ. Math. J. 325J. Dodziuk. "Maximum principle for parabolic inequalities and the heat flow on open manifolds". In: Indiana Univ. Math. J. 32.5 (1983), pp. 703-716.
A first course in modular forms. F Diamond, J M Shurman, Springer228F. Diamond and J. M. Shurman. A first course in modular forms. Vol. 228. Springer, 2005.
Gross-Pitaevskii limit of a homogeneous Bose gas at positive temperature. A Deuchert, R Seiringer, Arch. Ration. Mech. Anal. 2363A. Deuchert and R. Seiringer. "Gross-Pitaevskii limit of a homogeneous Bose gas at posi- tive temperature". In: Arch. Ration. Mech. Anal. 236.3 (2020), pp. 1217-1271.
Ground-State Energy of a Hard-Sphere Gas. F J Dyson, Phys. Rev. 10620F. J. Dyson. "Ground-State Energy of a Hard-Sphere Gas". In: Phys. Rev. 106.1 (1957), p. 20.
Kloosterman sums for Clifford algebras and a lower bound for the positive eigenvalues of the Laplacian for congruence subgroups acting on hyperbolic spaces. J Elstrodt, F Grunewald, J Mennicke, In: Invent. Math. 101J. Elstrodt, F. Grunewald, and J. Mennicke. "Kloosterman sums for Clifford algebras and a lower bound for the positive eigenvalues of the Laplacian for congruence subgroups acting on hyperbolic spaces". In: Invent. Math. 101.1 (1990), pp. 641-685.
Quantentheorie des einatomigen idealen Gases. A Einstein, Sitzber. Kgl. Preuss. Akad. Wiss. A. Einstein. "Quantentheorie des einatomigen idealen Gases". In: Sitzber. Kgl. Preuss. Akad. Wiss. (1924), pp. 261-267.
Length scales for BEC in the dilute Bose gas. S Fournais, Partial Differential Equations, Spectral Theory, and Mathematical Physics. S. Fournais. "Length scales for BEC in the dilute Bose gas". In: Partial Differential Equa- tions, Spectral Theory, and Mathematical Physics. 2021, pp. 115-133.
The energy of dilute Bose gases. S Fournais, J P Solovej, Ann. Math. 192S. Fournais and J. P. Solovej. "The energy of dilute Bose gases". In: Ann. Math. 192.3 (2020), pp. 893-976.
The energy of dilute Bose gases II: The general case. S Fournais, J P Solovej, arXiv:2108.12022S. Fournais and J. P. Solovej. The energy of dilute Bose gases II: The general case. 2021. arXiv: 2108.12022.
Another proof of BEC in the GP-limit. C Hainzl, J. Math. Phys. 6251901C. Hainzl. "Another proof of BEC in the GP-limit". In: J. Math. Phys. 62.5 (2021), p. 051901.
E Hebey, Sobolev Spaces on Riemannian Manifolds. Lecture Notes in Mathematics Nr. 1635. SpringerE. Hebey. Sobolev Spaces on Riemannian Manifolds. Lecture Notes in Mathematics Nr. 1635. Springer, 1996.
Near optimal spectral gaps for hyperbolic surfaces. W Hide, M Magee, arXiv:2107.05292W. Hide and M. Magee. Near optimal spectral gaps for hyperbolic surfaces. 2021. arXiv: 2107.05292.
Quantum simulation of Unruh radiation. J Hu, L Feng, Z Zhang, C Chin, Nat. Phys. 15J. Hu, L. Feng, Z. Zhang, and C. Chin. "Quantum simulation of Unruh radiation". In: Nat. Phys. 15.8 (2019), pp. 785-789.
Hyperbolic band theory under magnetic field and Dirac cones on a higher genus surface. K Ikeda, S Aoki, Y Matsuki, J. Phys.: Condens. Matter. 33485602K. Ikeda, S. Aoki, and Y. Matsuki. "Hyperbolic band theory under magnetic field and Dirac cones on a higher genus surface". In: J. Phys.: Condens. Matter 33.48 (2021), p. 485602.
Hyperbolic lattices in circuit quantum electrodynamics. A J Kollár, M Fitzpatrick, A A Houck, Nature. 571A. J. Kollár, M. Fitzpatrick, and A. A. Houck. "Hyperbolic lattices in circuit quantum electrodynamics". In: Nature 571.7763 (2019), pp. 45-50.
Hyperbolic Bloch equations: Atom-cluster kinetics of an interacting Bose gas. M Kira, Ann. Phys. (N. Y.). 356M. Kira. "Hyperbolic Bloch equations: Atom-cluster kinetics of an interacting Bose gas". In: Ann. Phys. (N. Y.) 356 (2015), pp. 185-243.
Graphs and Discrete Dirichlet Spaces. Grundlehren der mathematischen Wissenschaften. M Keller, D Lenz, R Wojciechowski, Springer International PublishingM. Keller, D. Lenz, and R. Wojciechowski. Graphs and Discrete Dirichlet Spaces. Grund- lehren der mathematischen Wissenschaften. Springer International Publishing, 2021.
Line-graph lattices: Euclidean and non-Euclidean flat bands, and implementations in circuit quantum electrodynamics. A J Kollár, M Fitzpatrick, P Sarnak, A A Houck, Commun. Math. Phys. 376A. J. Kollár, M. Fitzpatrick, P. Sarnak, and A. A. Houck. "Line-graph lattices: Euclidean and non-Euclidean flat bands, and implementations in circuit quantum electrodynamics". In: Commun. Math. Phys. 376.3 (2020), pp. 1909-1956.
Bose-Einstein Condensation for Lattice Bosons. T Koma, arXiv:2106.00863T. Koma. Bose-Einstein Condensation for Lattice Bosons. 2021. arXiv: 2106.00863.
Appendix 2 to Functoriality for the exterior square of GL4 and the symmetric fourth of GL2. H Kim, P Sarnak, In: J. Am. Math. Soc. 16H. Kim and P. Sarnak. "Appendix 2 to Functoriality for the exterior square of GL4 and the symmetric fourth of GL2." In: J. Am. Math. Soc. 16 (2003), pp. 139-183.
Electric-circuit realization of a hyperbolic drum. P M Lenggenhager, arXiv:2109.01148P. M. Lenggenhager et al. Electric-circuit realization of a hyperbolic drum. 2021. arXiv: 2109.01148.
E Lieb, R Seiringer, J Solovej, J Yngvason, The Mathematics of the Bose Gas and its Condensation. Oberwolfach Seminars. Birkhäuser Basel. E. Lieb, R. Seiringer, J. Solovej, and J. Yngvason. The Mathematics of the Bose Gas and its Condensation. Oberwolfach Seminars. Birkhäuser Basel, 2009.
E. H. Lieb. Bose-Einstein Condensation. 1998. url: http://web.math.princeton.edu/~aizenman/OpenProblems_MathPhys/9801.BoseEinst.tex (visited on 01/18/2022).
E Lieb, M Loss, Analysis. CRM Proceedings & Lecture Notes. American Mathematical SocietyE. Lieb and M. Loss. Analysis. CRM Proceedings & Lecture Notes. American Mathematical Society, 2001.
Poincaré series for SO (n, 1). J.-S Li, I Piatetski-Shapiro, P Sarnak, Proc.: Math. Sci. Math. SciSpringer97J.-S. Li, I. Piatetski-Shapiro, and P. Sarnak. "Poincaré series for SO (n, 1)". In: Proc.: Math. Sci. Vol. 97. 1. Springer. 1987, pp. 231-237.
Proof of Bose-Einstein Condensation for Dilute Trapped Gases. E H Lieb, R Seiringer, Phys. Rev. Lett. 88170409E. H. Lieb and R. Seiringer. "Proof of Bose-Einstein Condensation for Dilute Trapped Gases". In: Phys. Rev. Lett. 88.17 (2002), p. 170409.
Derivation of the Gross-Pitaevskii equation for rotating Bose gases. E H Lieb, R Seiringer, Commun. Math. Phys. 2642E. H. Lieb and R. Seiringer. "Derivation of the Gross-Pitaevskii equation for rotating Bose gases". In: Commun. Math. Phys. 264.2 (2006), pp. 505-537.
Bosons in a trap: A rigorous derivation of the Gross-Pitaevskii energy functional. E H Lieb, R Seiringer, J Yngvason, The Stability of Matter: From Atoms to Stars. SpringerE. H. Lieb, R. Seiringer, and J. Yngvason. "Bosons in a trap: A rigorous derivation of the Gross-Pitaevskii energy functional". In: The Stability of Matter: From Atoms to Stars. Springer, 2001, pp. 685-697.
Towards optimal spectral gaps in large genus. M Lipnowski, A Wright, arXiv:2103.07496M. Lipnowski and A. Wright. Towards optimal spectral gaps in large genus. 2021. arXiv: 2103.07496.
The ground state energy of a dilute two-dimensional Bose gas. E H Lieb, J Yngvason, J. Stat. Phys. 1033E. H. Lieb and J. Yngvason. "The ground state energy of a dilute two-dimensional Bose gas". In: J. Stat. Phys. 103.3 (2001), pp. 509-526.
Ground State Energy of the Low Density Bose Gas. E H Lieb, J Yngvason, Phys. Rev. Lett. 80E. H. Lieb and J. Yngvason. "Ground State Energy of the Low Density Bose Gas". In: Phys. Rev. Lett. 80 (1998), pp. 755-758.
The large N limit of superconformal field theories and supergravity. J Maldacena, Int. J. Theor. Phys. 38J. Maldacena. "The large N limit of superconformal field theories and supergravity". In: Int. J. Theor. Phys. 38.4 (1999), pp. 1113-1133.
Growth of Weil-Petersson volumes and random hyperbolic surface of large genus. M Mirzakhani, J. Differ. Geom. 94M. Mirzakhani. "Growth of Weil-Petersson volumes and random hyperbolic surface of large genus". In: J. Differ. Geom. 94.2 (2013), pp. 267-300.
Extension of Alon's and Friedman's conjectures to Schottky surfaces. M Magee, F Naud, arXiv:2106.02555M. Magee and F. Naud. Extension of Alon's and Friedman's conjectures to Schottky sur- faces. 2021. arXiv: 2106.02555.
Geometry and spectrum of typical hyperbolic surfaces. L Monk, Université de StrasbourgPhD thesisL. Monk. "Geometry and spectrum of typical hyperbolic surfaces". PhD thesis. Université de Strasbourg, 2021.
Hyperbolic band theory. J Maciejko, S Rayan, In: Sci. Adv. 736J. Maciejko and S. Rayan. "Hyperbolic band theory". In: Sci. Adv. 7.36 (2021).
Optimal rate of condensation for trapped bosons in the Gross-Pitaevskii regime. P T Nam, M Napiórkowski, J Ricaud, A Triay, arXiv:2001.04364P. T. Nam, M. Napiórkowski, J. Ricaud, and A. Triay. Optimal rate of condensation for trapped bosons in the Gross-Pitaevskii regime. 2021. arXiv: 2001.04364.
Ground states of large bosonic systems: the Gross-Pitaevskii limit revisited. P T Nam, N Rougerie, R Seiringer, Anal. PDE. 9P. T. Nam, N. Rougerie, and R. Seiringer. "Ground states of large bosonic systems: the Gross-Pitaevskii limit revisited". In: Anal. PDE. 9.2 (2016), pp. 459-485.
Scaling limits of bosonic ground states, from many-body to non-linear Schrödinger. N Rougerie, EMS Surv. Math. Sci. 72N. Rougerie. "Scaling limits of bosonic ground states, from many-body to non-linear Schrödinger". In: EMS Surv. Math. Sci. 7.2 (2021), pp. 253-408.
IV: Analysis of Operators. M Reed, B Simon, Methods of Modern Mathematical Physics. Academic PressM. Reed and B. Simon. IV: Analysis of Operators. Methods of Modern Mathematical Physics. Academic Press, 1978.
Spectra of hyperbolic surfaces. P Sarnak, Bull. Am. Math. Soc. 40P. Sarnak. "Spectra of hyperbolic surfaces". In: Bull. Am. Math. Soc. 40.4 (2003), pp. 441- 478.
The arithmetic and geometry of some hyperbolic three manifolds. P Sarnak, Acta Math. 151P. Sarnak. "The arithmetic and geometry of some hyperbolic three manifolds". In: Acta Math. 151 (1983), pp. 253-295.
Gross-Pitaevskii theory of the rotating Bose gas. R Seiringer, Commun. Math. Phys. 229R. Seiringer. "Gross-Pitaevskii theory of the rotating Bose gas". In: Commun. Math. Phys. 229.3 (2002), pp. 491-509.
Collected Papers I. Springer Collected Works in Mathematics. A Selberg, SpringerBerlin HeidelbergA. Selberg. Collected Papers I. Springer Collected Works in Mathematics. Springer Berlin Heidelberg, 2014.
On the estimation of Fourier coefficients of modular forms. A Selberg, Proc. Sympos. SymposAmer. Math. Soc8A. Selberg. "On the estimation of Fourier coefficients of modular forms". In: Proc. Sympos. Pure Math. Vol. 8. Amer. Math. Soc. 1965, pp. 1-15.
Higher-dimensional Euclidean and non-Euclidean structures in planar circuit quantum electrodynamics. A Saa, E Miranda, F Rouxinol, arXiv:2108.08854A. Saa, E. Miranda, and F. Rouxinol. Higher-dimensional Euclidean and non-Euclidean structures in planar circuit quantum electrodynamics. 2021. arXiv: 2108.08854.
A Stegmaier, L K Upreti, R Thomale, I Boettcher, arXiv:2111.05779Universality of Hofstadter butterflies on hyperbolic lattices. 2021. A. Stegmaier, L. K. Upreti, R. Thomale, and I. Boettcher. Universality of Hofstadter butterflies on hyperbolic lattices. 2021. arXiv: 2111.05779.
Anti-de Sitter space and holography. E Witten, Adv. Theor. Math. Phys. 2E. Witten. "Anti-de Sitter space and holography". In: Adv. Theor. Math. Phys. 2 (1998), pp. 253-291.
A tour through Mirzakhani's work on moduli spaces of Riemann surfaces. A Wright, Bull. Am. Math. Soc. 573A. Wright. "A tour through Mirzakhani's work on moduli spaces of Riemann surfaces". In: Bull. Am. Math. Soc. 57.3 (2020), pp. 359-408.
Random hyperbolic surfaces of large genus have first eigenvalues greater than 3 16 − ǫ. Y Wu, Y Xue, arXiv:2102.05581Y. Wu and Y. Xue. Random hyperbolic surfaces of large genus have first eigenvalues greater than 3 16 − ǫ. 2021. arXiv: 2102.05581.
The second order upper bound for the ground energy of a Bose gas. H.-T Yau, J Yin, J. Stat. Phys. 1363H.-T. Yau and J. Yin. "The second order upper bound for the ground energy of a Bose gas". In: J. Stat. Phys. 136.3 (2009), pp. 453-503.
Efimov-like states and quantum funneling effects on synthetic hyperbolic surfaces. R Zhang, C Lv, Y Yan, Q Zhou, In: Sci. Bull. 66R. Zhang, C. Lv, Y. Yan, and Q. Zhou. "Efimov-like states and quantum funneling effects on synthetic hyperbolic surfaces". In: Sci. Bull. 66.19 (2021), pp. 1967-1972.
Quantum phase transitions of interacting bosons on hyperbolic lattices. X Zhu, J Guo, B Nikolas, H Guo, S Feng, J. Phys. Condens. Matter. X. Zhu, J. Guo, B. Nikolas, H. Guo, and S. Feng. "Quantum phase transitions of interacting bosons on hyperbolic lattices". In: J. Phys. Condens. Matter (2021).
Growth and electronic structure of graphene on semiconducting Ge(110)
Julia Tesch
Fachbereich Physik
Universität Konstanz
78457KonstanzGermany
Elena Voloshina
Institut für Chemie
Humboldt-Universität zu Berlin
10099BerlinGermany
Mikhail Fonin
Fachbereich Physik
Universität Konstanz
78457KonstanzGermany
Yuriy Dedkov
Fachbereich Physik
Universität Konstanz
78457KonstanzGermany
(Dated: May 23, 2017)
The direct growth of graphene on semiconducting or insulating substrates might help to overcome the main drawbacks of metal-based synthesis, like metal-atom contamination of graphene, transfer issues, etc. Here we present the growth of graphene on n-doped semiconducting Ge(110) by using an atomic carbon source and a study of the structural and electronic properties of the obtained interface. We found that graphene interacts weakly with the underlying Ge(110) substrate, which keeps graphene's electronic structure almost intact, promoting this interface for future graphene-semiconductor applications. The effect of dopants in Ge on the electronic properties of graphene is also discussed.
I. INTRODUCTION
Presently, the main methods of the synthesis of graphene (gr), a purely 2D material consisting of carbon atoms, which can be scaled down in order to be used in further applications, are its preparation on semiconducting SiC [1][2][3] or on metallic substrates [4][5][6][7]. However, these methods have natural drawbacks like, e. g., the price of the high-quality SiC wafers and difficulty to control the thickness homogeneity of graphene on SiC. In case of graphene synthesis on metal substrates with the subsequent transfer onto the desired support, it was found that the level of the metal-atom contamination in the obtained graphene is not acceptable for modern microelectronics [8,9]. These as well as other fundamental problems limit the commercialization of graphene [10,11] and stimulate researchers to search for the new ways of graphene synthesis.
One possibility to implement graphene in modern microelectronics processing is to perform its synthesis directly on an insulating substrate. Here one option is to use h-BN, which can be grown on metallic substrates, like Cu, Fe, or Ni, or on semiconductors, like Ge, thus allowing CVD synthesis of graphene, providing a tunnel barrier for carrier injection into graphene, and avoiding metal contamination of graphene [12-14].
Another approach comprises graphene synthesis directly on the semiconducting substrate.
The direct growth of graphene on Si is problematic due to its carbidic phase formation at high temperatures [15-19]. However, the recent progress in graphene synthesis reveals the possibility to grow single- and multilayer graphene on Ge and Ge/Si substrates [20-24]. While the Ge(001) surface is the most technologically relevant one, faceting of the underlying Ge into Ge(107) facets upon graphene growth was found by means of scanning electron and tunneling microscopy (SEM and STM) [23,25,26], which limits further technological processing of this interface. Contrary to the previous case, both graphene and the underlying Ge surface remain flat for the Ge(110) surface, as confirmed by low-energy electron diffraction (LEED) and STM [20,22,23]. Despite a number of intensive studies on the growth of graphene on Ge, little is known about the electronic structure of this interface [27]. In that work, ex situ CVD-grown graphene flakes on undoped Ge/Si(001) were investigated by means of micro- and nano-ARPES (angle-resolved photoelectron spectroscopy), which indicates a free-standing character of graphene, which maintains the linear dispersion of the π states in the vicinity of the Fermi level (E_F) and is p-doped, with the Dirac point at E_D = 0.185 eV above E_F.
Here we present a complete in situ UHV preparation as well as structural and electronic structure study of a nearly full graphene layer epitaxially grown from an atomic carbon source on Ge(110). The presented LEED and STM results confirm the high quality of the prepared system indicating the existence of the reconstructed Ge(110) surface below graphene. Our x-ray photoelectron spectroscopy (XPS), normal-emission ARPES (NE PES), and energy-loss near-edge spectroscopy performed at the carbon K-edge (C K-edge ELNES) reveal the nearly free-standing behaviour of graphene on Ge(110). We also address the plasmon excitations in this system performing electron-energy loss spectroscopy (EELS). Our results were compared and analyzed with the available theoretical spectroscopic data for freestanding graphene and "strongly-interacting" gr/Ni(111) demonstrating good agreement with the former case.
III. RESULTS AND DISCUSSIONS
The growth of graphene on Ge(110) was characterized by means of STM, LEED, and XPS, and these results are compiled in Figs. 1 and 2. The Ge(110) surface shows large-scale ordering, as can be deduced from the STM [Fig. 1(a,b)] and LEED [Fig. 1(f)] images.
According to previous studies, this surface can be described as a faceted surface with {17 15 1} facets and a c(8 × 10) reconstruction on the steps [28-31]. Deposition of carbon on Ge(110) …
Formation of the uniform graphene sp² structure is also confirmed by XPS data (Fig. 2). High-temperature deposition of graphene on Ge(110) leads only to damping of the Ge 2p XPS signal [Fig. 2(a,b)] without any indication of the formation of Ge-C bonds, as can be concluded from the analysis of the Ge-related XPS peaks. Our data reveal a single C 1s peak for gr/Ge(110) with a small shoulder at low binding energies (due to possible bonds between carbon atoms and dopant atoms segregated at the interface), which confirms the homogeneity of the prepared gr/Ge(110) system.
The electronic structure of the grown graphene layer on Ge(110) was investigated by NE PES for the occupied valence band states below E F and by C K-edge ELNES for the unoccupied states above E F and these results are presented in Fig. 3(a,b), respectively.
From the comparison of the PES spectrum of gr/Ge(110) with the one for a graphite single crystal we can conclude that in the former case the graphene-derived π and σ states are shifted to higher binding energies, indicating n-doping of the graphene layer. Although the first ARPES experiments pointed towards possible p-doping of graphene upon Sb adsorption [32], recent theoretical works on Sb intercalation in gr/SiC reveal n-doping of graphene [33], and a similar n-doping of free-standing graphene upon Sb adsorption was also observed in experiment [34].
The unoccupied electronic states of graphene on Ge(110) were probed by the C K-edge ELNES spectroscopy, which can be considered as a simplified version of the near-edge x-ray absorption spectroscopy (NEXAFS). Here we used an electron beam of energy E p = 700 eV and detected the signal originating from the energy losses due to the excitation of electrons from the C 1s core level of carbon atoms in graphene onto unoccupied states above E F .
Similarly to NEXAFS, this method is element-specific, i.e., the intensity of the loss signal is proportional to the atom-projected partial density of unoccupied states of the element in the system whose core level is involved in the process. In our case we observe two structures, which can be assigned to the 1s → π * and 1s → σ * transitions and the respective density of states above E F [35][36][37][38].
The C K-edge ELNES spectrum of gr/Ge(110), collected in the specular-reflected electron-beam geometry, is shown in the lower part of Fig. 3(b) and compared with the theoretical ELNES (middle part) [39] and NEXAFS (upper part) [40] spectra of graphene and the gr/Ni(111) system. [All theoretical spectra were shifted by the same energy value in order to have the first peak, corresponding to the 1s → π * transition in the theoretical ELNES spectra, energetically coincide with the same peak in the experimental spectrum.
The double-peak structure of the 1s → σ * transition in the NEXAFS spectrum is due to excitonic effects.] One can see that there is a very good agreement between experimental ELNES spectrum of gr/Ge(110) and theoretical ELNES spectrum for free-standing graphene (lower and middle parts): (i) both 1s → π * and 1s → σ * transitions exhibit a single peak at the respective threshold, that can be taken as a signature of the weak interaction between graphene and the Ge(110) surface, (ii) the energy splitting between two transitions in the experimental spectrum is almost identical to the one deduced from the theoretically calculated ELNES spectrum. As was shown in Refs. [39][40][41][42] the value of this splitting as well as the modification of the shape of the 1s → π * transition can be taken as an indication for the sp 2 − sp 3 rehybridization of carbon atoms, which can appear due to the graphene contact with substrate or due to the adsorption of different species on top of graphene [7].
Such example of the spectral shape modifications of the ELNES and NEXAFS spectra for the strongly interacting gr/Ni(111) interface is shown in Fig. 3(b). As was shown, besides the strong n-doping of graphene on Ni, there is a strong intermixing of the valence band states of graphene and Ni, leading to the strong modification of the energy distribution of the partial density of states of both elements. All discussed effects are clearly visible in ELNES as well as in the NEXAFS spectra, due to the similarity of the electron excitation processes.
In our EELS experiments on gr/Ge(110) we also address the plasmon excitations in the system. Figure 4 shows the energy-loss spectra for this system measured as a function of the primary electron beam energy (marked for every spectrum) and presented in the energy range around the elastic peak (zero energy loss). These spectra reveal a series of peaks (≈ 17 eV, ≈ 33 eV) which can be clearly assigned to the bulk Ge plasmons, whereas the peak at ≈ 9.5 eV and the low-energy shoulders can be assigned to surface-related transitions of Ge(110) [43][44][45][46].
Variation of the primary beam energy makes it possible to change the surface sensitivity of EELS, as can be seen from Fig. 4. The graphene-related signal in the EELS spectra grows as the energy of the electron beam is decreased, which manifests itself as an increase of the intensity in the energy range of 3.5 − 6.5 eV as well as an increase of the overall background for energies above 15 eV. The first feature is assigned to the so-called π plasmon [47][48][49], whose energy is determined as 6.33 ± 0.25 eV by a curve-fitting procedure. The second observation is connected to the increase of the intensity of the π + σ plasmon as well as to the increase of the background of low-energy inelastically scattered electrons. The exact position of the π + σ plasmon cannot be extracted from these data.
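The π-plasmon energy quoted above (6.33 ± 0.25 eV) comes from a curve-fitting procedure whose details are not given in the text. A minimal sketch of how a peak position can be extracted from a discretized loss spectrum is three-point parabolic interpolation around the maximum; the spectrum below is synthetic (Lorentzian peak plus a small linear background, illustrative parameters only), not the measured data:

```python
import numpy as np

def peak_position(energy, intensity):
    """Estimate a peak position by fitting a parabola through the
    maximum sample and its two neighbours (three-point interpolation)."""
    i = int(np.argmax(intensity))
    y0, y1, y2 = intensity[i - 1], intensity[i], intensity[i + 1]
    # Vertex offset of the interpolating parabola, in units of the grid step.
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    step = energy[1] - energy[0]
    return energy[i] + delta * step

# Synthetic loss spectrum: a Lorentzian "pi plasmon" at 6.33 eV
# on a weak linear background (hypothetical numbers).
E = np.linspace(3.0, 10.0, 701)          # 0.01 eV grid
center, width = 6.33, 0.8
spectrum = 1.0 / (1.0 + ((E - center) / width) ** 2) + 0.01 * E

estimate = peak_position(E, spectrum)
print(estimate)
```

In practice one would fit an explicit line-shape model (Lorentzian or Gaussian plus background) over the full 3.5 − 6.5 eV window rather than interpolate three points, but the interpolation already locates the maximum to well within the grid spacing.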
IV. CONCLUSIONS
In conclusion, we demonstrate the growth of a high-quality graphene layer on Ge(110) by evaporation of atomic carbon on the hot Ge surface. Our STM and LEED data confirm the honeycomb sp 2 structure of the graphene layer. From the analysis of the electronic structure of the graphene layer by means of PES and ELNES we conclude that the graphene is nearly free-standing and n-doped due to the segregation of Sb dopant atoms at the gr/Ge interface during the sample preparation routines. Such substrate-dopant segregation at the graphene-semiconductor interface can be used for controllable doping of graphene, which might influence its electron- and spin-transport properties.
II. EXPERIMENTAL DETAILS

Growth of graphene and all studies were performed in the surface science cluster tool (Omicron NanoTechnology; base pressure 1 × 10⁻¹⁰ mbar). Prior to every experiment a Ge(110) substrate (G-materials (Germany), Sb doped, resistivity 0.35 Ω · cm) was cleaned via cycles of Ar⁺ sputtering (1.5 keV, p(Ar) = 1 × 10⁻⁵ mbar) and annealing (T = 870 °C). Graphene was grown on the hot Ge(110) substrate (T = 860 − 870 °C) from the atomic carbon source (Dr. Eberl MBE-Komponenten GmbH) with a filament current of I = 70 A and a maximal pressure of 2 × 10⁻⁹ mbar during C-deposition. Cleanliness and quality of the samples were controlled by LEED, STM (Omicron VT-SPM), NE PES (non-monochromatized He II line), and XPS (non-monochromatized Al K line) (the Omicron EA 125 energy analyzer was set either in angle-resolved or in angle-integrated mode, respectively) after every preparation step. ELNES and EELS experiments were performed in the specularly-reflected electron beam mode with angular and energy resolution of 1° and ≈ 1 eV, respectively. The primary electron energy is marked for every spectrum. Low-temperature (LT) STM experiments were performed in an Omicron Cryogenic STM on the gr/Ge(110) sample quickly transferred from the growth/characterization facility under N₂ atmosphere. Following the transfer, gr/Ge(110) was annealed in UHV at 700 °C.
Deposition of carbon on Ge(110) at T = 870 °C and subsequent cooling of the sample to room temperature lifts the previously observed reconstruction, however, producing an ordered underlying Ge surface as can be seen from the respective STM and LEED images [Fig. 1(c-e,g)]. The prepared graphene layer forms two types of domains rotated by 30° with respect to each other, as seen from LEED, and demonstrates a clear honeycomb sp 2 structure on the Ge(110) surface [Fig. 1(c-e,h)]. Our results on the observation of two graphene domains are consistent with the previously reported data for CVD-grown graphene on Ge(110) [23]. The observed alignment of the graphene lattices of the two domains differs by ≈ 15° from the one observed for the single-domain graphene growth in Ref. [22]. Similar to the results presented in that work, our growth method rules out the influence of hydrogen on the alignment of graphene on Ge(110); however, further structural studies are required. Our atomically resolved STM images demonstrate clear signatures of quasiparticle scattering in the graphene layer due to imperfections in graphene as well as due to the presence of scattering centres at the interface (segregated dopants, see discussion below). The interference of the scattered carrier waves in graphene leads to the formation of the corresponding (√3 × √3)R30° structure, with respect to the graphene atomic-related structure, in the 2D Fast-Fourier-Transformation (FFT) map. The spots of these structures are marked in the inset of Fig. 1(e) by a white rectangle and circle, respectively. This (√3 × √3)R30° structure in the FFT map is assigned to the so-called intervalley scattering between adjacent cones at the K and K′ points of the graphene-derived Brillouin zone.
The graphene-derived π and σ states of gr/Ge(110) are shifted to higher binding energies by ≈ 1 eV and ≈ 0.5 eV, respectively, compared with graphite. This shift indicates that in the present study the graphene layer is n-doped, which is opposite to the result presented in Ref. [27], where a small p-doping of graphene was observed, with the position of the Dirac point at E D = 0.185 eV above E F. This difference can be assigned to the different types of substrates used in the experiments: n-doped (Sb) Ge(110) in the present study and an undoped Ge epilayer on Si(001) in Ref. [27]. Here we can conclude that the cleaning procedure of Ge(110) (cycles of Ar⁺ sputtering and annealing) as well as the high temperature used during graphene growth can lead to the segregation of Sb atoms at the gr/Ge(110) interface, thus influencing the doping of the formed graphene layer. This is confirmed by our LT STM (T = 24 K) data of gr/Ge(110), which are presented as upper insets of Fig. 3(a), where one can clearly see the characteristic STM signatures of such interface-trapped dopant atoms (one of them is circled).
Acknowledgement

We thank the German Research Foundation (DFG) for financial support within the Priority Programme 1459 "Graphene".
K. V. Emtsev, A. Bostwick, K. Horn, J. Jobst, G. L. Kellogg, L. Ley, J. L. McChesney, T. Ohta, S. A. Reshanov, J. Roehrl, E. Rotenberg, A. K. Schmid, D. Waldmann, H. B. Weber, T. Seyller, Towards wafer-size graphene layers by atmospheric pressure graphitization of silicon carbide, Nat. Mater. 8 (2009) 203-207.
M. Sprinkle, M. Ruan, Y. Hu, J. Hankinson, M. Rubio-Roy, B. Zhang, X. Wu, C. Berger, W. A. De Heer, Scalable templated growth of graphene nanoribbons on SiC, Nat. Nanotech. 5 (2010) 727-731.
X. Li, W. Cai, J. An, S. Kim, J. Nah, D. Yang, R. Piner, A. Velamakanni, I. Jung, E. Tutuc, S. K. Banerjee, L. Colombo, R. S. Ruoff, Large-area synthesis of high-quality and uniform graphene films on copper foils, Science 324 (2009) 1312-1314.
S. Bae, H. Kim, Y. Lee, X. Xu, J.-S. Park, Y. Zheng, J. Balakrishnan, T. Lei, H. R. Kim, Y. I. Song, Y.-J. Kim, K. S. Kim, B. Ozyilmaz, J.-H. Ahn, B. H. Hong, S. Iijima, Roll-to-roll production of 30-inch graphene films for transparent electrodes, Nat. Nanotech. 5 (2010) 574-578.
L. Tao, J. Lee, M. Holt, H. Chou, S. J. McDonnell, D. A. Ferrer, M. G. Babenco, R. M. Wallace, S. K. Banerjee, R. S. Ruoff, Uniform wafer-scale chemical vapor deposition of graphene on evaporated Cu (111) film with quality comparable to exfoliated monolayer, J. Phys. Chem. C 116 (2012) 24068-24074.
Y. Dedkov, E. Voloshina, Graphene growth and properties on metal substrates, J. Phys.: Condens. Matter 27 (2015) 303002.
A. Ambrosi, M. Pumera, The CVD graphene transfer procedure introduces metallic impurities which alter the graphene electrochemical properties, Nanoscale 6 (2014) 472-476.
G. Lupina, J. Kitzmann, I. Costina, M. Lukosius, C. Wenger, A. Wolff, S. Vaziri, M. Östling, I. Pasternak, A. Krajewska, W. Strupinski, S. Kataria, A. Gahoi, M. C. Lemme, G. Ruhl, G. Zoth, O. Luxenhofer, W. Mehr, Residual metallic contamination of transferred chemical vapor deposited graphene, ACS Nano 9 (2015) 4776-4785.
M. Peplow, Graphene booms in factories but lacks a killer app, Nature 522 (2015) 268-269.
S. Park, The puzzle of graphene commercialization, Nat. Rev. Mater. 1 (2016) 16085.
J. H. Meng, X. W. Zhang, H. L. Wang, X. B. Ren, C. H. Jin, Z. G. Yin, X. Liu, H. Liu, Synthesis of in-plane and stacked graphene/hexagonal boron nitride heterostructures by combining with ion beam sputtering deposition and chemical vapor deposition, Nanoscale 7 (2015) 16046-16053.
S. M. Kim, A. Hsu, M.-H. Park, S. H. Chae, S. J. Yun, J. S. Lee, D.-H. Cho, W. Fang, C. Lee, T. Palacios, M. Dresselhaus, K. K. Kim, Y. H. Lee, J. Kong, Synthesis of large-area multilayer hexagonal boron nitride for high material performance, Nat. Commun. 6 (2015) 8662.
J. Yin, X. Liu, W. Lu, J. Li, Y. Cao, Y. Li, Y. Xu, X. Li, J. Zhou, C. Jin, W. Guo, Aligned growth of hexagonal boron nitride monolayer on germanium, Small 11 (2015) 5375-5380.
J. Hackley, D. Ali, J. DiPasquale, J. D. Demaree, C. J. K. Richardson, Graphitic carbon growth on Si(111) using solid source molecular beam epitaxy, Appl. Phys. Lett. 95 (2009) 133114.
M. Suemitsu, H. Fukidome, Epitaxial graphene on silicon substrates, J. Phys. D 43 (2010) 374012.
F. Maeda, H. Hibino, Study of graphene growth by gas-source molecular beam epitaxy using cracked ethanol: Influence of gas flow rate on graphitic material deposition, Jpn. J. Appl. Phys. 50 (2011) 06GE12.
G. Hong, Q.-H. Wu, J. Ren, S.-T. Lee, Mechanism of non-metal catalytic growth of graphene on silicon, Appl. Phys. Lett. 100 (2012) 231604.
P. Thanh Trung, J. Campos-Delgado, F. Joucken, J.-F. Colomer, B. Hackens, J.-P. Raskin, C. N. Santos, S. Robert, Direct growth of graphene on Si(111), J. Appl. Phys. 115 (2014) 223704.
J.-H. Lee, E. K. Lee, W.-J. Joo, Y. Jang, B.-S. Kim, J. Y. Lim, S.-H. Choi, S. J. Ahn, J. R. Ahn, M.-H. Park, C.-W. Yang, B. L. Choi, S. W. Hwang, D. Whang, Wafer-scale growth of single-crystal monolayer graphene on reusable hydrogen-terminated germanium, Science 344 (2014) 286-289.
G. Lippert, J. Dabrowski, T. Schroeder, M. A. Schubert, Y. Yamamoto, F. Herziger, J. Maultzsch, J. Baringhaus, C. Tegenkamp, M. C. Asensio, J. Avila, G. Lupina, Graphene grown on Ge(001) from atomic source, Carbon 75 (2014) 104-112.
P. C. Rogge, M. E. Foster, J. M. Wofford, K. F. McCarty, N. C. Bartelt, O. D. Dubon, On the rotational alignment of graphene domains grown on Ge(110) and Ge(111), MRS Commun. 5 (2015) 539-546.
B. Kiraly, R. M. Jacobberger, A. J. Mannix, G. P. Campbell, M. J. Bedzyk, M. S. Arnold, M. C. Hersam, N. P. Guisinger, Electronic and mechanical properties of graphene-germanium interfaces grown by chemical vapor deposition, Nano Lett. 15 (2015) 7414-7420.
A. M. Scaparro, V. Miseikis, C. Coletti, A. Notargiacomo, M. Pea, M. De Seta, L. Di Gaspare, Investigating the CVD synthesis of graphene on Ge(100): toward layer-by-layer growth, ACS Appl. Mater. Interfaces 8 (2016) 33083-33090.
R. M. Jacobberger, B. Kiraly, M. Fortin-Deschenes, P. L. Lévesque, K. M. McElhinny, G. J. Brady, R. R. Delgado, S. S. Roy, A. Mannix, M. G. Lagally, P. G. Evans, P. Desjardins, R. Martel, M. C. Hersam, N. P. Guisinger, M. S. Arnold, Direct oriented growth of armchair graphene nanoribbons on germanium, Nat. Commun. 6 (2015) 8006.
M. Lukosius, J. Dabrowski, J. Kitzmann, O. Fursenko, F. Akhtar, M. Lisker, G. Lippert, S. Schulze, Y. Yamamoto, M. A. Schubert, H. M. Krause, A. Wolff, A. Mai, T. Schroeder, G. Lupina, Metal-free CVD graphene synthesis on 200 mm Ge/Si(001) substrates, ACS Appl. Mater. Interfaces 8 (2016) 33786-33793.
J. Dabrowski, G. Lippert, J. Avila, J. Baringhaus, I. Colambo, Y. S. Dedkov, F. Herziger, G. Lupina, J. Maultzsch, T. Schaffus, T. Schroeder, M. Kot, C. Tegenkamp, D. Vignaud, M. C. Asensio, Understanding the growth mechanism of graphene on Ge/Si(001) surfaces, Sci. Rep. 6 (2016) 31639.
B. Z. Olshanetsky, S. M. Repinsky, A. A. Shklyaev, LEED investigation of germanium surfaces cleaned by sublimation of sulphide films; structural transitions on clean Ge(110) surface, Surf. Sci. 64 (1977) 224-236.
T. Ichikawa, In situ STM observations of ordering behaviors on Ge(110) surfaces and atomic geometry of the Ge{17 15 1} facet, Surf. Sci. 560 (2004) 213-225.
T. Ichikawa, Atomic geometry of the Ge(110)c(8×10) structure, Surf. Sci. 560 (2004) 205-212.
C. H. Mullet, S. Chiang, Reconstructions and phase transition of clean Ge(110), Surf. Sci. 621 (2014) 184-190.
I. Gierz, C. Riedl, U. Starke, C. Ast, K. Kern, Atomic hole doping of graphene, Nano Lett. 8 (2008) 4603-4607.
C.-H. Hsu, V. Ozolins, F.-C. Chuang, First-principles study of Bi and Sb intercalated graphene on SiC(0001) substrate, Surf. Sci. 616 (2013) 149-154.
H. M. W. Khalil, J. T. Nam, K. S. Kim, H. Noh, Controlled n-doping in chemical vapour deposition grown graphene by antimony, J. Phys. D 48 (2015) 015307.
K. Suenaga, E. Sandré, C. Colliex, C. J. Pickard, H. Kataura, S. Iijima, Electron energy-loss spectroscopy of electron states in isolated carbon nanostructures, Phys. Rev. B 63 (2001) 165408.
J. T. Titantah, D. Lamoen, Energy-loss near-edge structure changes with bond length in carbon systems, Phys. Rev. B 72 (2005) 193104.
K. A. Mkhoyan, A. W. Contryman, J. Silcox, D. A. Stewart, G. Eda, C. Mattevi, S. Miller, M. Chhowalla, Atomic and electronic structure of graphene-oxide, Nano Lett. 9 (2009) 1058-1063.
A. Cupolillo, N. Ligato, S. M. Osman, L. S. Caputi, Carbon K-edge electron-energy-loss near-edge structure in the reflection mode on graphene/Ni(111), Appl. Phys. Lett. 109 (2016) 161603.
G. Bertoni, L. Calmels, A. Altibelli, V. Serin, First-principles calculation of the electronic structure and EELS spectra at the graphene/Ni(111) interface, Phys. Rev. B 71 (2004) 075402.
E. Voloshina, R. Ovcharenko, A. Shulakov, Y. Dedkov, Theoretical description of X-ray absorption spectroscopy of the graphene-metal interfaces, J. Chem. Phys. 138 (2013) 154706.
M. Weser, Y. Rehder, K. Horn, M. Sicot, M. Fonin, A. B. Preobrajenski, E. N. Voloshina, E. Goering, Y. S. Dedkov, Induced magnetism of carbon atoms at the graphene/Ni(111) interface, Appl. Phys. Lett. 96 (2010) 012504.
J. Rusz, A. B. Preobrajenski, M. L. Ng, N. A. Vinogradov, N. Mårtensson, O. Wessely, B. Sanyal, O. Eriksson, Dynamical effects in x-ray absorption spectra of graphene and monolayered h-BN on Ni(111), Phys. Rev. B 81 (2010) 073402.
R. Ludeke, L. Esaki, Electron energy-loss spectroscopy of GaAs and Ge surfaces, Phys. Rev. Lett. 33 (1974) 653-656.
R. Ludeke, A. Koma, Low-energy-electron-loss spectroscopy of Ge surfaces, Phys. Rev. B 13 (1976) 739-749.
X. J. Zhang, G. Xue, A. Agarwal, R. Tsu, M. A. Hasan, J. E. Greene, A. Rockett, Thermal desorption of ultraviolet-ozone oxidized Ge(001) for substrate cleaning, J. Vac. Sci. Techn. A 11 (1993) 2553-2561.
L. Pasquali, S. D'Addato, L. Tagliavini, A. M. Prandini, S. Nannarone, Surface phase transitions of Ge(111)c(2 × 8) studied by electron energy loss spectroscopy, Surf. Sci. 377-379 (1997) 534-538.
J. Lu, K. Loh, H. Huang, W. Chen, A. Wee, Plasmon dispersion on epitaxial graphene studied using high-resolution electron energy-loss spectroscopy, Phys. Rev. B 80 (2009) 113410.
A. Politano, G. Chiarello, Plasmon modes in graphene: status and prospect, Nanoscale 6 (2014) 10927-10940.
A. Politano, I. Radović, D. Borka, Z. L. Mišković, G. Chiarello, Interband plasmons in supported graphene on metal substrates: Theory and experiments, Carbon 96 (2016) 91-97.
FIG. 1: STM and LEED characterization of Ge(110) (a,b,f) and gr/Ge(110) (c-e,g,h). The spots marked in the inset of (e) by a white rectangle and circle originate from graphene's atomic lattice and from the intervalley scattering in graphene, respectively. STM data were acquired at room temperature. Imaging parameters: (a) 500 × 500 nm 2, U T = +2.5 V, I T = 1 nA; (b) 80 × 80 nm 2, U T = +2.5 V, I T = 0.3 nA; (c) 400 × 400 nm 2, U T = +0.5 V, I T = 5 nA; (d) 150 × 150 nm 2, U T = +0.5 V, I T = 6 nA; (e) 30 × 30 nm 2, U T = +1.5 V, I T = 0.8 nA (inset: 7 × 7 nm 2, U T = +0.02 V, I T = 8 nA). Electron beam energy is 38 eV for (f,g) and 73 eV for (h), respectively.
FIG. 3: (a) NE PES spectra of Ge(110) (intensity is scaled down by a factor of 5) and gr/Ge(110). The spectrum of a graphite crystal is shown as a shaded area for comparison. Inset shows LT STM images of gr/Ge(110), where scattering features due to dopant atoms at the interface are clearly resolved. Imaging parameters: (left) 20 × 20 nm 2, U T = +1.0 V, I T = 0.2 nA; (right) 10 × 10 nm 2, U T = +0.5 V, I T = 0.9 nA. (b) Experimental and theoretical C K-edge ELNES and NEXAFS spectra of gr/Ge(110), graphene, and gr/Ni(111).
FIG. 4: EELS spectra of gr/Ge(110) obtained with different primary beam energies. The energy of the electron beam is marked for every spectrum. Lower inset presents the geometry used in the EELS/ELNES experiments.
Scalar and fermion contributions to the vacuum energy
3 Mar 2013
Dimitrios Metaxas [email protected]
Department of Physics
National Technical University of Athens
Zografou Campus, 15780 Athens, Greece
I consider a theory of a real scalar and a fermion field, with a Yukawa interaction and a potential term that admits two degenerate minima at the tree level. I calculate the quantum vacuum energy difference between these two vacua and I find a finite, non-zero result, with scalar and fermion contributions whose origin and physical significance I discuss.
I will start by reviewing the problem of the vacuum energy for renormalizable quantum field theories, in four-dimensional flat spacetime, that contain a scalar field, φ, which is endowed, at tree level, with a standard kinetic term and a general potential term, U(φ), which is bounded below.
(A) If the potential term at hand has a single minimum (vacuum) at φ = φ_min, then quantization can be performed around it after expanding U(φ) = U(φ_min) + (1/2) U″(φ_min)(φ − φ_min)² + · · ·, discarding the constant term, using the quadratic term to describe a scalar excitation of mass m around the minimum, with m² = U″(φ_min), and treating the higher order terms in perturbation theory as interactions with the respective coupling constants.
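In display form, the expansion invoked above reads (the notation λ_n for the higher-order couplings is introduced here for illustration):

```latex
U(\phi) = U(\phi_{\min})
        + \tfrac{1}{2}\,U''(\phi_{\min})\,(\phi-\phi_{\min})^2
        + \sum_{n\ge 3} \frac{\lambda_n}{n!}\,(\phi-\phi_{\min})^n ,
\qquad
m^2 \equiv U''(\phi_{\min}) , \quad
\lambda_n \equiv U^{(n)}(\phi_{\min}) .
```

The linear term is absent because φ_min is a minimum of U, and the constant U(φ_min) is the vacuum energy term discussed below.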
The constant term, also called the vacuum energy term, along with the mass and the coefficients of the higher order interactions have no meaning at this point; they are called bare terms and get regularized by (infinite) multiplications or subtractions, along with a similar treatment of the kinetic term in the usual process of renormalization.
Associated with this procedure are two parameters, both with dimensions of mass: Λ, which is used to cut off divergent expressions, and µ, which sets the scale at which the physical parameters of the theory, masses and coupling constants, are defined or measured. Then one proceeds by calculating, order by order in the perturbation expansion, the various Green's functions of the theory as well as the related functional expressions of the effective action with the corresponding effective potential [1].
The cut-off, Λ, was just a mathematical convention and should be absent from any final result of these calculations. The theory is defined by specifying the values of the masses and the coupling constants at a scale µ; although the Green's functions and the effective action depend on the scale, any physical result derived from them should be µ-independent. For example: one may measure and define the masses and coupling constants of the theory using scattering experiments at a "reference" scale µ ref = 1GeV. Then one may predict and measure the outcomes of experiments at any other scale, say, µ exp = 10GeV. The result should be the same with what one would have obtained after having used a different µ ref to start with. This is embodied in the renormalization group formalism and is expressed mathematically by the fact that the total derivative of any physical quantity with respect to µ, given by the sum of the various partial derivatives, must vanish.
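Schematically, this µ-independence can be written as a renormalization-group equation for any physical quantity P; the form below, with a single coupling λ and mass m, is a generic illustration rather than an equation taken from the references:

```latex
\mu \frac{d P}{d \mu}
  = \left[ \mu \frac{\partial}{\partial \mu}
         + \beta(\lambda) \frac{\partial}{\partial \lambda}
         + \gamma_m\, m \frac{\partial}{\partial m} \right]
    P\bigl(\mu; \lambda(\mu), m(\mu)\bigr) = 0 ,
```

where β and γ_m denote the beta function and the mass anomalous dimension; the partial µ-dependence of P is compensated by the running of the couplings.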
We see immediately the reason why the constant, vacuum energy, term was discarded: there is no physical process or experiment that depends on it; it can be set to zero, or any other value if one is not worried about the semiclassical expansion around an infinite constant. Once this is done (here I will consider it set to zero) there is no prediction for a different value, nor can there be any process to verify such a prediction. If one wants to use the renormalization group formalism consistently, however, one must take care of the constant term too, that is, in our case, subtract its value at the minimum at any level in the perturbation expansion of the effective potential [2].
When the theory under consideration is coupled to gravity, whether the latter is considered at the classical level or quantized, the value of the vacuum energy becomes a physical observable that can be measured in the cosmological expansion rate and contributes to the cosmological constant [3]. The quantum theory of gravity is not renormalizable; it can be viewed as an effective quantum field theory [4], with a limited range of predictability, as all effective quantum field theories, and its implications will not be considered here. As far as renormalizable quantum field theories are concerned, there can be no prediction for the vacuum energy defined as the value of the renormalized effective potential at its minimum.
It is sometimes argued that the sum of the zero-point energies of the field modes at the minimum contributes
$$\frac{1}{4\pi^2}\int_0^{\Lambda} dk\; k^2\sqrt{k^2+m^2} \;\approx\; \frac{\Lambda^4}{16\pi^2} \qquad (1)$$
when a momentum cut-off regularization scheme is employed, or
$$\frac{\mu^{4-d}}{(2\pi)^{d-1}}\,\frac{1}{2}\int d^{d-1}k\;\sqrt{k^2+m^2} \;\approx\; \frac{m^4}{64\pi^2}\,\ln\frac{m^2}{\mu^2} \qquad (2)$$
when dimensional regularization and minimal subtraction prescription are performed. In (2), a fermion field would have given a contribution with the opposite sign, involving, of course, the fermion mass at the minimum. The cut-off, Λ, is usually considered to be related to the Planck or a Grand Unified Theory (GUT) scale, and the scale µ to the radiation associated with the supernova observations or the Cosmic Microwave Background [5]. Although these expressions are suggestive of contributions to the vacuum energy that drive it away from a zero value when non-renormalizable interactions such as gravity are considered, they can hardly be considered as a prediction of a renormalizable quantum field theory. Higher energy scales, such as the GUT scale, may or may not leave an imprint on processes at the electroweak scale depending on the details of the decoupling procedure, none of the contributions, however, may depend explicitly on the cut-off in a way implied by (1). As far as the expression in (2) is concerned, one also sees that it cannot, by itself, correspond to a well-defined prediction; it is rather a one-loop result that should be subtracted if the perturbation expansion around the vacuum is to be done consistently.
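The leading cut-off behaviour quoted in (1) is easy to confirm numerically; the following standalone sketch (not part of the original calculation) integrates the zero-point expression with a finite cut-off and compares it with Λ⁴/16π²:

```python
# Numerically verify that (1/4pi^2) * Int_0^Lambda dk k^2 sqrt(k^2 + m^2)
# approaches Lambda^4 / (16 pi^2) for Lambda >> m, as stated in Eq. (1).
import numpy as np
from scipy.integrate import quad

def zero_point(Lambda, m):
    val, _ = quad(lambda k: k**2 * np.sqrt(k**2 + m**2), 0.0, Lambda)
    return val / (4 * np.pi**2)

m = 1.0
for Lambda in (10.0, 100.0, 1000.0):
    exact = zero_point(Lambda, m)
    leading = Lambda**4 / (16 * np.pi**2)
    print(Lambda, exact / leading)   # ratio approaches 1 as Lambda/m grows
```

The subleading corrections are of relative order m²/Λ², so the ratio converges to unity quadratically in m/Λ.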
(B) Let us now consider the case where the potential energy term, U(φ), has, besides the global minimum at φ min , a second, local minimum at φ met , such that U(φ met ) > U(φ min ). This local minimum corresponds to a "false", metastable vacuum, and the energy difference between the two vacua is a physical observable that can, in principle, be measured if an appropriate metastable state is prepared. The perturbation expansion of the effective potential must account for this fact; the renormalization group equation [2] will ensure that the vacuum energy difference can be consistently defined, the value of the "true" vacuum energy, however, is still undetermined and can be set to zero. Only the energy difference between the two vacua is a meaningful, physical quantity. This vacuum energy difference is also an input of the theory, much like the various masses and coupling constants; it is not a prediction of the quantum field theory. Similar considerations apply when the global minimum of the potential was not present at tree level but was induced by radiative, quantum effects [1]. The dimensionful parameter that defines the location of the absolute minimum and its related energy difference with respect to the metastable one is again an input of the theory, although "camouflaged" at the tree level. As an additional, important note for these cases, one should mention that the energy of a metastable state has an imaginary part that is related to the rate of its decay [6]; this is a non-perturbative effect, however, and will not show up at any level of the perturbation expansion.
(C) One may also consider a theory where the potential energy term at tree level has a discrete or continuous family of degenerate minima that are related by a symmetry. Two simple examples that one can have in mind involve a complex scalar field with a "Mexican-hat" potential, or a real scalar field with a "reflection" symmetry of φ → −φ. Quantization can again be performed picking one of these minima and following the same procedure as above. The value of the potential at the minimum is again undefined and can be consistently set to zero. Once this is done, by symmetry considerations, the value of the renormalized potential at any other minimum will be zero as well.
(D) Finally, coming to the case that is relevant to the present work, one can imagine the case of a potential term with a set of degenerate minima that have the same value of the energy at tree level but are not otherwise related by any symmetry. A simple example would be a potential with two minima at φ_1 and φ_2, such that U(φ_1) = U(φ_2) but U''(φ_1) ≠ U''(φ_2). Then the elementary excitations around each minimum would have different masses. If one were to pick one minimum, say φ_1, to quantize the theory, all the subtractions described before would have to be performed at this point, and the difference of terms such as (2) around the two minima should give a finite, possibly non-zero result for φ_2. This would be a definite prediction for the energy of the second vacuum, similar to well-known phenomena like the Casimir effect [7]. Obviously, it is not possible to have a renormalizable quantum field theory in four dimensions with such a potential term at tree level (it is interesting, however, that the effective potential in the Standard Model allows for the possibility of a second minimum, other than the one in the electroweak scale, close to the Planck scale and degenerate in energy [8]). Even so, there are other examples where asymmetries between classically degenerate vacua can be seen and this is investigated further below.
In order to examine a simple case of the aforementioned asymmetries, I will consider here a theory with a real scalar and a fermion field with a Yukawa interaction and the Lagrangian:
$$\mathcal{L} = \frac{1}{2}(\partial\varphi)^2 - U(\varphi) + i\bar{\psi}\,\partial\!\!\!/\,\psi - g\varphi\bar{\psi}\psi, \qquad (3)$$
where the potential term,
$$U(\varphi) = \frac{\lambda}{4!}\,\varphi^2(\varphi-\varphi_0)^2, \qquad (4)$$
has two degenerate minima at φ = 0 and φ = φ₀. There are two sources of asymmetry in this case: first, as is obvious, the fermion acquires a mass, m_f = gφ₀, around the second minimum, while it is massless around the first. Second, the scalar potential is, in fact, asymmetric in field space. The masses of the scalar excitations are the same around the two vacua,
$$U''(0) = U''(\varphi_0) = \frac{\lambda}{12}\varphi_0^2 \equiv M^2, \qquad (5)$$
since renormalization involves a scale, µ, however, there is a resulting asymmetry between the zero and the non-zero vacuum, depending on where the renormalization conditions are imposed. As a final result, we will find, therefore, a difference in the renormalized vacuum energies of these two vacua, although they are degenerate at tree level.
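The degeneracy of the two minima and the equality of the curvatures in (5) can be checked symbolically; a minimal sketch with SymPy:

```python
# Verify that U = (lambda/4!) phi^2 (phi - phi0)^2 has degenerate, stationary
# minima at phi = 0 and phi = phi0, with equal curvature lambda*phi0^2/12 = M^2.
import sympy as sp

phi, phi0, lam = sp.symbols('phi phi0 lam', positive=True)
U = lam / sp.factorial(4) * phi**2 * (phi - phi0)**2   # tree potential, Eq. (4)

assert U.subs(phi, 0) == 0 and U.subs(phi, phi0) == 0          # degenerate minima
assert sp.diff(U, phi).subs(phi, 0) == 0                       # stationary at 0
assert sp.simplify(sp.diff(U, phi).subs(phi, phi0)) == 0       # stationary at phi0

Upp = sp.diff(U, phi, 2)
M2 = lam * phi0**2 / 12
assert sp.simplify(Upp.subs(phi, 0) - M2) == 0                 # Eq. (5) at phi = 0
assert sp.simplify(Upp.subs(phi, phi0) - M2) == 0              # Eq. (5) at phi = phi0
print("tree-level checks passed")
```
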
The effective potential at one loop, after dimensional regularization, is given by the well-known expression
$$U_{\rm eff}(\varphi) = U(\varphi) + \frac{1}{64\pi^2}\left[(U'')^2\left(\ln\frac{U''}{\mu^2}-\frac{1}{2}\right) - 4g^4\varphi^4\left(\ln\frac{g^2\varphi^2}{\mu^2}-\frac{1}{2}\right)\right] + c_0 + c_1\varphi + c_2\frac{\varphi^2}{2} + c_3\frac{\varphi^3}{3!} + c_4\frac{\varphi^4}{4!}. \qquad (6)$$
I have included the four counterterms, proportional to c 4 , c 3 , c 2 , c 1 , in order to impose the four renormalization conditions
$$U''''_{\rm eff}(\varphi_0) = \lambda, \qquad (7)$$
$$U'''_{\rm eff}(\varphi_0) = \frac{\lambda\varphi_0}{2}, \qquad (8)$$
$$U''_{\rm eff}(\varphi_0) = M^2, \qquad (9)$$
and
$$U'_{\rm eff}(\varphi_0) = 0, \qquad (10)$$
and the constant, c₀, counterterm, to account for the vacuum energy. One is only allowed a single counterterm to adjust that, and once a condition is imposed at one vacuum there is a definite, possibly non-zero prediction for the value at the other vacuum. All the other counterterms, from linear to quartic, are allowed since there is no symmetry, like reflection with respect to the origin (evenness of the potential), which is usually imposed for simplicity. The linear counterterm is not necessary since it corresponds merely to a shift of the field; it has been included, however, for clarity. Using it, I have imposed the conditions that keep φ₀ as one of the minima. Then the second minimum will be slightly displaced from φ = 0. This effect can also be calculated from the linear term in the potential for small enough values of the couplings; however, as will be seen shortly, this will be a subleading effect.
I should mention at this point that I consider values of the couplings that do not destroy the vacuum structure of the theory; that is, I take the Yukawa coupling small enough, g² < λ/4, as is required for stability.
The four renormalization conditions stated above can be solved to give the four coefficients, c 4 , c 3 , c 2 and c 1 , and the final result for the effective potential at one loop, without including the c 0 term, is:
$$\begin{aligned}
U_{\rm eff}(\varphi) = U(\varphi) &+ \frac{1}{64\pi^2}\left[(U'')^2\left(\ln\frac{U''}{\mu^2}-\frac{1}{2}\right) - 4g^4\varphi^4\left(\ln\frac{g^2\varphi^2}{g^2\varphi_0^2}-\frac{1}{2}\right)\right] \\
&+ \frac{1}{64\pi^2}\left[\left(\frac{1}{12}\lambda^2\varphi_0^3\ln\frac{M^2}{\mu^2} + \frac{3}{2}\lambda^2\varphi_0^3 - \frac{32}{3}g^4\varphi_0^3\right)\varphi\right. \\
&\qquad - \left(\frac{1}{3}\lambda^2\varphi_0^2\ln\frac{M^2}{\mu^2} + \frac{13}{4}\lambda^2\varphi_0^2 - 24g^4\varphi_0^2\right)\varphi^2 \\
&\qquad + \left(\frac{1}{2}\lambda^2\varphi_0\ln\frac{M^2}{\mu^2} + 3\lambda^2\varphi_0 - 32g^4\varphi_0\right)\varphi^3 \\
&\qquad \left. - \left(\frac{1}{4}\lambda^2\ln\frac{M^2}{\mu^2} + \lambda^2 - \frac{44}{3}g^4\right)\varphi^4\right]. \qquad (11)
\end{aligned}$$
Now we have a definite expression for the value of the potential at φ 0 :
$$U_{\rm eff}(\varphi_0) = \frac{1}{64\pi^2}\left[M^4\left(\ln\frac{M^2}{\mu^2}-\frac{1}{2}\right) + \frac{1}{4}\lambda^2\varphi_0^4 - 2g^4\varphi_0^4\right]. \qquad (12)$$
The second minimum, as mentioned before, is not located exactly at φ = 0, its position, however, can be calculated in the small coupling expansion, and the effect of its displacement on the vacuum energy can be seen to be subleading compared to
$$U_{\rm eff}(0) = \frac{1}{64\pi^2}\,M^4\left(\ln\frac{M^2}{\mu^2}-\frac{1}{2}\right). \qquad (13)$$
The displacement of the second minimum from zero can be shown to be of order λφ 0 , and the resulting change in the vacuum energy of order λ 3 φ 4 0 . One has, therefore, a definite prediction for the vacuum energy difference between the two vacua,
$$\delta U = U_{\rm eff}(\varphi_0) - U_{\rm eff}(0) = \frac{1}{64\pi^2}\left(\frac{1}{4}\lambda^2\varphi_0^4 - 2g^4\varphi_0^4\right), \qquad (14)$$
regardless of the choice of c₀ (which can be chosen so as to cancel the term of (13) for consistency [2]). This is a quantum result that was absent at tree level, where one would have to put in by hand the value of the vacuum energy, or even any vacuum energy difference between two or more vacua. Before embarking on the discussion of the result, I should mention that, as is well known, there is a region in field space where the final expression for the one-loop effective potential has an imaginary part [9]. It is the region where U''(φ) < 0, and one has to be more careful when deriving physical results associated with this part of the field space. Our areas of interest, however, near φ = 0 and φ = φ₀, have no overlap with the problematic region in this case.

Now we can proceed to investigate the origins and implications of the final result. As far as the second, fermion contribution to (14) is concerned, one might have expected the result qualitatively, as well as its sign. It is also interesting, however, that one has a non-zero scalar contribution to the vacuum energy difference. This arises from the fact that the potential is asymmetric with respect to the renormalization conditions imposed. This is true even without the fermion field; a fermion without a mass term at tree level was considered here merely in order to get a simple and suggestive quantitative result.
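The algebra leading from the renormalized one-loop potential (11) to the difference (14) can be verified symbolically. The SymPy sketch below transcribes (11) as quoted in the text (with the signs grouped per power of φ, and with the fermion logarithm taken to vanish at φ = φ₀, as (12) requires); this transcription is an assumption of the sketch:

```python
# Symbolic check that the renormalized one-loop potential of Eq. (11)
# reproduces the vacuum-energy difference of Eq. (14).
import sympy as sp

phi, phi0, lam, g, mu = sp.symbols('phi phi0 lam g mu', positive=True)
U   = lam / sp.factorial(4) * phi**2 * (phi - phi0)**2     # tree potential (4)
Upp = sp.diff(U, phi, 2)
M2  = lam * phi0**2 / 12                                   # Eq. (5)
L   = sp.log(M2 / mu**2)

scal = Upp**2 * (sp.log(Upp / mu**2) - sp.Rational(1, 2))
ferm = -4 * g**4 * phi**4 * (sp.log(phi**2 / phi0**2) - sp.Rational(1, 2))
poly = ( (lam**2*phi0**3/12*L + sp.Rational(3,2)*lam**2*phi0**3
          - sp.Rational(32,3)*g**4*phi0**3) * phi
       - (lam**2*phi0**2/3*L + sp.Rational(13,4)*lam**2*phi0**2
          - 24*g**4*phi0**2) * phi**2
       + (lam**2*phi0/2*L + 3*lam**2*phi0 - 32*g**4*phi0) * phi**3
       - (lam**2/4*L + lam**2 - sp.Rational(44,3)*g**4) * phi**4 )

Ueff = U + (scal + ferm + poly) / (64 * sp.pi**2)

U_at_phi0 = sp.simplify(Ueff.subs(phi, phi0))
# The fermion piece ~ phi^4 ln(phi^2) vanishes at phi -> 0, so drop it there:
U_at_zero = (U + (scal + poly) / (64 * sp.pi**2)).subs(phi, 0)

dU     = sp.simplify(U_at_phi0 - U_at_zero)
target = (lam**2 * phi0**4 / 4 - 2 * g**4 * phi0**4) / (64 * sp.pi**2)  # Eq. (14)
assert sp.simplify(dU - target) == 0
# Eq. (13): value at the phi = 0 minimum
assert sp.simplify(U_at_zero - M2**2 * (L - sp.Rational(1, 2)) / (64 * sp.pi**2)) == 0
print("delta U matches Eq. (14)")
```

The µ-dependent logarithms cancel in the difference, as they must for δU to be a physical prediction.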
Without the fermion one can equally well impose the previous renormalization conditions at φ = 0; then the second minimum near a non-zero φ 0 would show the same effect, that is an energy difference equal to the first factor in (14). This calculation is easy to do and will not be reproduced here. The final result for just the scalar field with the potential term in (4) is that the vacuum at φ 0 in the quantum theory has higher energy than the one at 0 by the amount given by the first term in (14), regardless of where the renormalization conditions are imposed.
With the fermion term used here, and the condition g 2 < λ/4 that has to be fulfilled for stability, one sees that the energy at φ 0 is always higher than that at φ = 0. It should be kept in mind, however, that the model considered is quite simple and that even slightly more elaborate models, with more fermion species or fermion mass terms will give a more general expression with a greater range of final values.
It is important that the final result in (14) is independent of any cut-off or renormalization scale, and is given, as expected, by the parameters that define the theory, couplings and masses. Any "running" from renormalization group effects appears at higher orders as it should. One may also view the expression derived here as a definite finite result that comes from considering finite factors included in (2) and then taking the difference of two such terms. In any case, it is a prediction of a renormalizable quantum field theory and a purely quantum effect.
The fact that the two, classically degenerate, vacua are energetically inequivalent because of quantum corrections, gives this simple model a structure that is richer than expected. The vacuum with higher energy, φ 0 in this case, becomes metastable, although it was classically stable. One can accordingly calculate its rate of decay; the appropriate formalism is related to the results of [10] although the physical situation here is different. Since the vacuum energy difference is a quantum effect, the result for this vacuum decay rate is extremely small; it is proportional to the exponential of minus the "bounce" action, which, in our case, turns out to be of order 1/λ 2 . It would be interesting, as a problem for further research, to study the evolution of the vacua and the effective potential in a finite temperature and cosmological setting in this or related problems where the breaking or the lack of symmetry play an important role [11].
As a final note I should discuss the possibility of a "landscape" of vacua, a large number of which are degenerate, with zero energy at the classical level or even after some quantum corrections have been taken into account. Unless they are all related by the same symmetries, it does not seem possible to have zero energy in all of them when higher order quantum effects are considered, and the energy difference between two adjacent vacua, if one literally translates the results obtained here, would be proportional to powers of coupling constants times their distance in field space. It is an attractive scenario which states that if the value of the vacuum energy of a particular minimum is fixed by some reason to be zero, the value of the vacuum energy for any nearby minimum will be a highly suppressed and calculable number.
One frequently encounters the problem, however, that some of the interactions that are involved, in this or other physically important situations, are non-renormalizable, the most important example being the gravitational interaction; when these are regarded as effective quantum field theories [4], instead of a coupling constant expansion that was the basic tool of renormalizable theories, one now has an expansion in powers of the energy, and it is possible that well-defined results for the vacuum energy or energy difference exist in these situations as well. It would be interesting, therefore, as a subject of future work, to consider the results of similar considerations in effective quantum field theories.
Acknowledgements

This work was completed while visiting the National Technical University of Athens. I would like to thank the people of the Physics Department for their hospitality.
S. Coleman, Aspects of Symmetry, Cambridge Univ. Press (1985).
S. Coleman and E. J. Weinberg, Phys. Rev. D7, 1888 (1973).
E. J. Weinberg, hep-th/0507214.
C. Ford, D. R. T. Jones, P. W. Stephenson and M. B. Einhorn, Nucl. Phys. B395, 17 (1993).
M. B. Einhorn and D. R. T. Jones, JHEP 0704, 051 (2007).
S. Weinberg, Rev. Mod. Phys. 61, 1 (1989).
T. Padmanabhan, Phys. Rep. 380, 235 (2003).
J. F. Donoghue, Phys. Rev. D50, 3874 (1994).
M. M. Anber, J. F. Donoghue and M. El-Houssieny, Phys. Rev. D83, 124003 (2011).
J. F. Donoghue, arXiv:1209.3511 [gr-qc].
J. Martin, Comptes Rendus Physique 13, 566 (2012).
I. L. Shapiro and J. Sola, arXiv:0808.0315 [hep-th].
S. Coleman, Phys. Rev. D15, 2929 (1977).
A. D. Linde, Nucl. Phys. B216, 421 (1983).
G. Plunien, B. Muller and W. Greiner, Phys. Rept. 134, 87 (1986).
K. A. Milton, S. A. Fulling, P. Parashar, A. Romeo, K. V. Shajesh and J. A. Wagner, J. Phys. A41, 164052 (2008).
M. Sher, Phys. Rept. 179, 273 (1989).
C. D. Froggatt and H. B. Nielsen, Phys. Lett. B368, 96 (1996).
D. L. Bennett and H. B. Nielsen, Int. J. Mod. Phys. A9, 5155 (1994).
J. Elias-Miro, J. R. Espinosa, G. F. Giudice, G. Isidori, A. Riotto and A. Strumia, Phys. Lett. B709, 222 (2012).
F. Bezrukov, M. Yu. Kalmykov, B. A. Kniehl and M. Shaposhnikov, JHEP 1210, 140 (2012).
I. Masina, arXiv:1209.0393 [hep-ph].
F. Bezrukov, G. K. Karananas, J. Rubio and M. Shaposhnikov, arXiv:1212.4148 [hep-ph].
R. Armillis, A. Monin and M. Shaposhnikov, arXiv:1302.5619 [hep-th].
R. Jackiw, Phys. Rev. D9, 1686 (1974).
E. J. Weinberg and A. Wu, Phys. Rev. D36, 2474 (1987).
E. J. Weinberg, Phys. Rev. D47, 4614 (1993).
D. Metaxas and E. J. Weinberg, Phys. Rev. D53, 836 (1996).
D. Metaxas, Phys. Rev. D63, 083507 (2001).
D. Metaxas, Phys. Rev. D75, 047701 (2007).
J. Alexandre, Int. J. Mod. Phys. A26, 4523 (2011).
J. Alexandre and A. Tsapalis, Phys. Rev. D87, 025028 (2013).
K. Farakos, Int. J. Mod. Phys. A27, 1250168 (2012).
K. Farakos and D. Metaxas, Phys. Lett. B711, 76 (2012).
Measuring the halo mass function in loose groups
6 Dec 2010
D J Pisano
D G Barnes
B K Gibson
L Staveley-Smith
K C Freeman
V A Kilborn
Centre for Astrophysics and Supercomputing, Swinburne University, Hawthorn, VIC 3122, Australia
Dept. of Physics, West Virginia University, P.O. Box 6315, Morgantown, WV 26506, USA
Centre for Astrophysics, University of Central Lancashire, Preston PR1, UK
L. Staveley-Smith: School of Physics, M013, University of Western Australia, Crawley, WA 6009, Australia
K. C. Freeman: RSAA, Mount Stromlo Observatory, Cotter Road, Weston, ACT 2611, Australia
Using data from our Parkes & ATCA HI survey of six groups analogous to the Local Group, we find that the HI mass function and velocity distribution function for loose groups are the same as those for the Local Group. Both mass functions confirm that the "missing satellite" problem exists in other galaxy groups.
Project Overview
Cold dark matter (CDM) models of galaxy formation predict that the Local Group should contain about 300 dark matter halos, but an order of magnitude fewer galaxies are observed [4,5]. While the "missing satellite" problem can be mitigated by the inclusion of baryon physics in CDM models or alternate forms of dark matter, it is important to establish how this problem is manifest beyond the Local Group.
We have conducted a HI survey of six loose groups of galaxies that are analogous to the Local Group. The six groups are composed of only spiral and irregular galaxies that have mean separations of ∼550 kpc. The groups have average diameters of 1.6 Mpc and have M_virial ∼ 10^(11.7-13.6) M_⊙; they are similar to the Local Group in all these ways. Details on our observations, data reduction, and our search for HI clouds in the groups can be found in [6]. The survey identified a total of 63 group galaxies, with all of the new detections having properties consistent with being typical dwarf irregular galaxies.

Fig. 1 Left: The HIMF for loose groups as compared to that for the Local Group galaxies with HI detections, and Local Group galaxies with HI detections and upper limits. Also shown is the HIMF from HIPASS [7] and a flat HIMF. Right: The VDF for the loose groups compared to all Local Group galaxies, only those detected in HI, the HIPASS VDF from [7], field galaxies from [3], cluster galaxies from [1], and the theoretical predictions from Via Lactea II [2]. Aside from the loose and Local Group data, all other functions have been arbitrarily renormalized.
Halo Mass Functions
Using the survey completeness from [6] and our catalog of group galaxies, we constructed both a HI mass function (HIMF) and a circular velocity distribution function (VDF) for the six loose groups, as shown in Figure 1. The figure shows that both the HIMF and VDF for the Local Group are not atypical, but are consistent with those for the six loose groups. The HIMF for low density regions, such as loose groups (and including the Local Group), is consistent with being flatter than the HIMF in the field, as was found by [7]. The VDF for loose groups has a low-mass slope consistent with that of field galaxies and HIPASS galaxies [3,8], but is much shallower than is predicted by simulations [2] or observed in galaxy clusters [1]. For a more complete discussion of these results, see Pisano et al. (2011, in preparation).
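The completeness-weighted binning underlying such a mass function can be sketched as follows; the completeness curve, galaxy masses, and bin choices below are placeholders for illustration, not the survey's actual values:

```python
# Schematic completeness-corrected HI mass function: bin detections in
# log10(M_HI) and weight each one by 1/C(M_HI), where C is the (here
# entirely illustrative) survey completeness.
import numpy as np

def himf(log_masses, completeness, edges):
    """Completeness-corrected number of galaxies per dex, per bin."""
    weights = 1.0 / completeness(log_masses)
    counts, _ = np.histogram(log_masses, bins=edges, weights=weights)
    return counts / np.diff(edges)

# Placeholder completeness: ramps from 0.2 at log M = 7 up to 1.0 at log M = 9.
completeness = lambda lm: np.clip(0.2 + 0.4 * (lm - 7.0), 0.2, 1.0)

rng = np.random.default_rng(0)
log_m = rng.uniform(7.0, 10.5, size=63)    # 63 group galaxies (masses made up)
edges = np.arange(7.0, 11.0, 0.5)
phi = himf(log_m, completeness, edges)
print(edges[:-1], phi)                     # corrected counts per dex in each bin
```

Since the weights are at least unity, the corrected counts always exceed the raw detection counts at the incomplete (low-mass) end.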
Desai, V., Dalcanton, J. J., Mayer, L., Reed, D., Quinn, T., & Governato, F. 2004, MNRAS, 351, 265
Diemand, J., Kuhlen, M., Madau, P., Zemp, M., Moore, B., Potter, D., & Stadel, J. 2008, Nature, 454, 735
Gonzalez, A. H., Williams, K. A., Bullock, J. S., Kolatt, T. S., & Primack, J. R. 2000, ApJ, 528, 145
Klypin, A., Kravtsov, A. V., Valenzuela, O., & Prada, F. 1999, ApJ, 522, 82
Moore, B., Ghigna, S., Governato, F., Lake, G., Quinn, T., Stadel, J., & Tozzi, P. 1999, ApJ, 524, L19
Pisano, D. J., Barnes, D. G., Gibson, B. K., Staveley-Smith, L., Freeman, K. C., & Kilborn, V. A. 2007, ApJ, 662, 959
Zwaan, M. A., Meyer, M. J., Staveley-Smith, L., & Webster, R. L. 2005, MNRAS, 359, L30
Zwaan, M. A., Meyer, M. J., & Staveley-Smith, L. 2010, MNRAS, 403, 1969
| zyda_arxiv-0813000 |
Semi-analytical technique for the design of disordered coatings with tailored optical properties
Rishi Bhrigu Mishra
Nithin Jo Varghese
Karthik Sasihithlu* (*[email protected])
Department of Energy Science and Engineering, Indian Institute of Technology Bombay
Disordered media coatings are finding increasing use in applications such as day-time radiative cooling paints and solar thermal absorber plate coatings, which require tailored optical properties over a broad spectrum ranging from visible to far-IR wavelengths. Both monodisperse and polydisperse configurations with coating thicknesses up to 500 µm are currently being explored for use in these applications. In such cases it becomes increasingly important to explore the utility of analytical and semi-analytical methods for the design of such coatings, to help reduce the computational cost and time for design. While well-known analytical methods such as Kubelka-Munk and four-flux theory have previously been used for the analysis of disordered coatings, analysis of their utility has so far in the literature been restricted to either the solar spectrum or the IR, but not simultaneously over the combined spectrum as required for the above applications. In this work, we have analysed the applicability of these two analytical methods for such coatings over the entire wavelength range from visible to IR, and based on the observed deviation from exact numerical simulation we propose a semi-analytical technique to aid in the design of these coatings with significant computational cost savings.
Introduction
Disordered coatings, which consist of dielectric/metal nanoparticles dispersed randomly in a matrix, find their application in several fields such as solar thermal absorber coatings [1], solar reflecting coatings [2], color paints [3], translucent paints [4], tissue optics [5], daytime passive radiative cooling coatings [6][7][8], and many more. The main advantages that such disordered media offer, which make them an attractive proposition for use in these applications, are their cost-effective means of fabrication and the tunability of the desired optical properties of the coating, since the spectral position of the Mie (plasmon) resonance of the embedded dielectric (metal) particles strongly depends on the size of the particles. The main challenging task in the design of such disordered media is the modelling of their optical properties. Techniques based on homogenization of the composite structures that predict the effective permittivity and permeability of the disordered media, such as Maxwell-Garnett theory [9] and Bruggeman's model [10], are valid only when the particle sizes are much smaller than the incident wavelength [11]. Doyle et al. [12] showed that the use of Mie coefficients in this effective medium theory provides good accuracy in the calculation of effective optical properties of metal spheres suspended in a polymer. However, the theory predicts absorption for a non-absorbing particle [11] and thus cannot be used to predict the effective refractive index of disordered media for solar reflecting paints/coatings where non-absorbing particles are utilized. In the literature, other analytical techniques developed for this objective include those which consider diffusion of photons [13], and those which solve the radiative transfer equation under N-flux (2 ≤ N ≤ 4) approximations [14,15]. Of these methods, Kubelka-Munk (KM) theory [16] (for which N = 2) and the four-flux (FF) method [17] are commonly used.
In addition, simulation techniques such as the Monte Carlo method [18] and exact electromagnetic solvers are employed to model the optical/radiative properties of disordered coatings. However, these simulation techniques do not present a clear picture linking the microscopic properties of the particles, such as the scattering and absorption coefficients, to the macroscopic optical properties of the coating. Moreover, exact electromagnetic solvers, which solve Maxwell's equations numerically to obtain the radiative properties of the coating, put a premium on computational resources and the time for design when the thickness of the random media is of the order of tens/hundreds of microns - as is currently being deployed in these applications. Particularly when several parameters of the configuration are in play, as encountered in disordered media, analytical techniques such as KM and FF theories provide an important means to arrive at an optimum combination of the parameters with minimal computational resources, while also explicitly linking the properties of the micro-constituents to the observed optical properties of the coating.
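To illustrate how lightweight the two-flux evaluation is compared with an exact solver, the sketch below implements one commonly quoted form of the Kubelka-Munk solution for a layer of thickness h with scattering and absorption coefficients S and K over a substrate of diffuse reflectance Rg; the function names are mine, and the exact expressions used in this work should be taken from the text that follows:

```python
# Standard Kubelka-Munk (two-flux) reflectance/transmittance of a layer with
# scattering coefficient S, absorption coefficient K, and thickness h,
# optionally backed by a substrate of diffuse reflectance Rg.
import numpy as np

def km_reflectance(S, K, h, Rg=0.0):
    a = 1.0 + K / S
    b = np.sqrt(a * a - 1.0)
    coth = 1.0 / np.tanh(b * S * h)
    return (1.0 - Rg * (a - b * coth)) / (a - Rg + b * coth)

def km_transmittance(S, K, h):
    a = 1.0 + K / S
    b = np.sqrt(a * a - 1.0)
    return b / (a * np.sinh(b * S * h) + b * np.cosh(b * S * h))

# An optically thick layer approaches the semi-infinite value R_inf = a - b:
S, K = 100.0, 1.0            # illustrative per-length coefficients
a = 1.0 + K / S
R_inf = a - np.sqrt(a * a - 1.0)
print(km_reflectance(S, K, h=1.0), R_inf)
```

An entire spectral sweep of such expressions costs microseconds, which is what makes these theories attractive for multi-parameter coating design.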
KM and FF theories have so far in the literature been used in applications where the spectrum of interest has been limited to either the visible spectrum or the IR separately. For example, KM theory has been used extensively in paints and coatings [1,3], the paper industry [19], tissue optics [5], among others. Similarly, the FF method has been used extensively by researchers to model, predict and optimize the optical properties of light-scattering coatings [2,[20][21][22]. However, the applicability of these theories over a broad spectrum covering both the visible and IR spectrum simultaneously has not been a subject of attention. This becomes important when designing coatings for applications such as day-time passive radiative cooling and solar thermal absorber plate coatings, where tailored optical properties over a broad spectrum covering both the solar spectrum as well as the far-infrared are crucial. For example, coatings for day-time passive radiative cooling [23] require high reflectivity in the solar spectrum (0.3-3 µm wavelength range) and high emissivity in the infra-red spectrum (5-15 µm wavelength range). It is not obvious that the analytical techniques retain their accuracy over such a broad spectrum since, with increasing wavelength, there is a possibility that the nature of scattering transitions from the independent scattering regime (where the scattering cross-sections of particles can be superposed) to the dependent scattering regime (where near-field effects, and interference between far-field scattered radiation, become important). The previously reported relation [24] demarcating the two regimes has been obtained from experimental observations carried out in the visible spectrum only. Thus there is a need to explore the applicability of these analytical techniques over a broad spectrum in greater depth.
In regimes where the predictions from these analytical techniques are not satisfactory, other methods of design, combining the accuracy of exact electromagnetic solvers with the minimal computational requirements of analytical methods, are needed by researchers interested in designing such coatings.
With this in mind, the manuscript has been arranged as follows. In Section 2 we compare the reflectivity and emissivity predictions of the KM and FF techniques with results from exact numerical solvers for different degrees of absorption in the particles (imaginary index of particle κ_p = 0.0, 10⁻², and 10⁻¹) and in the matrix (imaginary index of matrix κ_h = 0.0, 10⁻⁴, and 10⁻²) for different thicknesses of the coating (10 μm and 50 μm) in the wavelength range 0.3-15 μm. We show that these techniques are accurate over the entire spectrum when the particles are in the limit of independent scattering and under low absorption conditions, but fail when the volume fraction of particles is high enough that interaction among particles is no longer negligible, or when absorption in the matrix/particles is high. For such conditions, where the analytical techniques fail to predict the optical properties accurately, we propose an alternative technique which combines the use of an exact numerical solver and KM theory, and which we show can predict the optical properties accurately and with minimal computational requirements. This 'semi-analytical' technique is detailed in Section 3. In the end, as an example to showcase the applicability of this semi-analytical technique, we predict the properties of a disordered coating suitable for the application of passive radiative cooling and compare these with experimental measurements previously reported in the literature.
2. Analytical techniques - Kubelka-Munk (KM) and the Four-Flux (FF) methods
We start with the expressions for the reflectivity and transmissivity of the coating as obtained from the KM and FF theories, which we use in this work to analyze the optical properties of the disordered coating. Detailed derivations of these expressions can be found in several references [17,25,26]. The optical coating considered in this work is a plane-parallel slab of particulate composite on a substrate, as shown in Fig. 1. The composite is considered to be of finite thickness and infinite extension in the lateral direction. The randomly distributed spherical particles embedded within the host medium (also called the matrix) act as inhomogeneities to the propagating EM wave, thereby causing its scattering (and absorption, in case the particle is lossy). The objective is to predict the optical properties of this coating, including the total reflectance, transmittance and absorption. The expressions for the reflectivity (R_KM) and transmissivity (T_KM) from KM theory are given by [3,25]:
R_KM = (1 − β)(1 + β)[exp(αd) − exp(−αd)] / [(1 + β)² exp(αd) − (1 − β)² exp(−αd)]    (1)

T_KM = 4β / [(1 + β)² exp(αd) − (1 − β)² exp(−αd)]    (2)
where d is the thickness of the layer, and the coefficients β and α are given by [3,27]:
β = √(K/(K + 2S));  α = √(K(K + 2S))    (3)

with

S = (3/4) s (1 − g) − (1/4) k;  K = k;    (4)
and the factors s and k obtained using Mie theory [1]:
s = 3 f Q_sca / (4a);  k = 3 f Q_abs / (4a),    (5)
where f is the volume fraction, a is the radius of the sphere, Q_sca (Q_abs) is the Mie scattering (absorption) efficiency of a single particle embedded in a host medium of index n_h, and g is the asymmetry parameter. Expressions for Q_sca, Q_abs, and g in terms of standard Mie coefficients can be found in Ref. [28]. It should be pointed out that the relations between the coating properties S and K and the particle properties s and k given in Eqs. 3 and 4 are not unique; several other relations [4,5,29-33] have been proposed over the years. The expressions in Eq. 3 and Eq. 4, taken from Refs. [3,27], are representative and have been chosen for demonstrative reasons. As we will see in Sec. 3, the semi-analytical method proposed in this work does not depend on such relations, and hence this choice does not affect the central results of this work.
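To make the chain from single-particle Mie quantities to the layer reflectivity concrete, the following sketch implements Eqs. (1)-(5) in Python. It is a minimal illustration, not the authors' code: the Mie efficiencies Q_sca, Q_abs and the asymmetry parameter g are taken as inputs (in practice they would come from a Mie code such as that of Ref. [28]), and the numerical values used in the usage note are purely illustrative.

```python
import math

def km_coefficients(f, a, Q_sca, Q_abs, g):
    """Coating coefficients S, K from particle properties, Eqs. (4)-(5).

    f: particle volume fraction, a: particle radius,
    Q_sca, Q_abs: Mie efficiencies, g: asymmetry parameter.
    """
    s = 3.0 * f * Q_sca / (4.0 * a)          # Eq. (5)
    k = 3.0 * f * Q_abs / (4.0 * a)          # Eq. (5)
    S = 0.75 * s * (1.0 - g) - 0.25 * k      # Eq. (4)
    K = k                                    # K reduces to k for a non-absorbing matrix
    return S, K

def km_reflectance_transmittance(S, K, d):
    """Diffuse reflectance/transmittance of a layer of thickness d, Eqs. (1)-(3)."""
    if K == 0.0:                             # low-absorption limit, Eq. (6)
        R = S * d / (1.0 + S * d)
        return R, 1.0 - R
    beta = math.sqrt(K / (K + 2.0 * S))      # Eq. (3)
    alpha = math.sqrt(K * (K + 2.0 * S))     # Eq. (3)
    ep, em = math.exp(alpha * d), math.exp(-alpha * d)
    D = (1.0 + beta) ** 2 * ep - (1.0 - beta) ** 2 * em
    R = (1.0 - beta) * (1.0 + beta) * (ep - em) / D   # Eq. (1)
    T = 4.0 * beta / D                                # Eq. (2)
    return R, T
```

As a consistency check, letting K → 0 in `km_reflectance_transmittance` recovers R = Sd/(1 + Sd), the low-absorption limit quoted in Eq. (6) below.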
In the limit of low absorption, K → 0, the reflectivity in Eq. (1) can be shown to reduce to [25]:
R_KM = Sd / (Sd + 1).    (6)
It must be noted that Eq. (1) and Eq. (2) do not take into account surface reflection of incident radiation at interface (1). The modified reflectance R₀ and transmittance T₀, which take the surface reflection correction into account, are calculated using [34]:
R₀ = R_c + (1 − R_c)(1 − R_i) R_KM / (1 − R_i R_KM);  T₀ = (1 − R_c) T_KM / (1 − R_i R_KM)    (7)
where R_c is the specular reflectance of incident light obtained from Fresnel reflection, which for normal incidence from a medium of index n_surr reads:
R_c = [(n − 1)/(n + 1)]²    (8)
with n = n_h/n_surr, and R_i is the diffuse reflectance of internal radiation at interface (1), marked in Fig. 1, which is calculated using:
R_i = 2 ∫_0^{π/2} ρ(θ) sin θ cos θ dθ,    (9)
where, from Fresnel's coefficients:
ρ(θ) = (1/2) { [(√(n² − sin²θ) − cos θ) / (√(n² − sin²θ) + cos θ)]² + [(n² cos θ − √(n² − sin²θ)) / (n² cos θ + √(n² − sin²θ))]² }.    (10)
The expression for R_i from Eq. 9 can be used even in the limit of low diffuse scattering, since the contribution from the product R_i R_KM will be negligible in this regime. Many configurations developed for the radiative cooling application [7,35,36] and for solar absorber plates [37,38] involve the use of a substrate. In the presence of a substrate, the net reflectance and transmittance from Eq. 7 have to be further modified as [39]:
R = R₀ + T₀² R_g / (1 − R₀ R_g);  T = (1 − R_g) T₀ / (1 − R₀ R_g)    (11)
Here R_g is the diffuse reflectance at interface (2), obtained from Eq. 9 with n = n_h/n_g. The substrate index is taken to be 1.5 in this work. The derivation of the reflection and transmission coefficients from KM theory assumes that the incident light is diffuse. When the incident radiation is collimated, alternate methods such as the four-flux theory, which takes into account the propagation of both collimated and diffuse radiation across the interfaces in two directions, are expected to be more accurate. This careful consideration of both collimated and diffuse components leads to expressions for the optical properties that are far more complicated than in KM theory. The net reflection and transmission coefficients when the incident radiation is fully collimated can be expressed in terms of a summation over the collimated-collimated reflectivity (R_cc), collimated-diffuse reflectivity (R_cd), collimated-collimated transmissivity (T_cc), and collimated-diffuse transmissivity (T_cd) as:
R = R_cc + R_cd + R_dd;  T = T_cc + T_cd + T_dd    (12)
Expressions for R_cc, R_cd, T_cc and T_cd are quite elaborate and have been included in the supplementary document (Section S1) for reference. In Sections 2.1, 2.2, and 2.3, we use the expressions for R and T given in Eq. 11 for KM theory and Eq. 12 for FF to predict the optical properties of disordered coatings and compare these with the results obtained from the Lumerical FDTD solver [40]. We analyze situations where both the particles and the host medium are absorbing as well as non-absorbing, and also consider the effect of different thicknesses of the coating. The degrees of absorption in the particles considered in this work are relevant for dielectric inclusions typically included in coatings for use in radiative cooling and solar thermal applications. In addition, to facilitate the parametric study, we assume a non-dispersive refractive index for both the particles and the host matrix. We first confine our analysis to the independent scattering regime in Sections 2.1 and 2.2, and extend the analysis to the dependent scattering regime in Sec. 2.3. The FDTD simulations were set up in ANSYS Lumerical. Periodic boundary conditions were applied in the lateral x and y directions, and the coating is illuminated with a plane wave source from the z direction. A mesh size of 30 nm was used, which we find is sufficient for convergence (a mesh convergence study is shown in supplementary Fig. S3).
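Once R_KM and T_KM are known, the surface and substrate corrections of Eqs. (7)-(11) are simple arithmetic, and the Fresnel factors R_c and R_i of Eqs. (8)-(10) need only the index ratio n. The sketch below (function names are ours, and the quadrature resolution is arbitrary) evaluates the hemispherical integral of Eq. (9) by midpoint quadrature; it assumes n = n_h/n_surr > 1, so the square root in Eq. (10) stays real.

```python
import math

def fresnel_rho(theta, n):
    """Unpolarized Fresnel reflectance at incidence angle theta, Eq. (10)."""
    root = math.sqrt(n * n - math.sin(theta) ** 2)
    cos_t = math.cos(theta)
    rs = (root - cos_t) / (root + cos_t)                  # s-polarized amplitude
    rp = (n * n * cos_t - root) / (n * n * cos_t + root)  # p-polarized amplitude
    return 0.5 * (rs * rs + rp * rp)

def specular_reflectance(n):
    """Normal-incidence specular reflectance R_c, Eq. (8)."""
    return ((n - 1.0) / (n + 1.0)) ** 2

def diffuse_reflectance(n, num=2000):
    """Hemispherically averaged reflectance R_i (or R_g), Eq. (9)."""
    h = 0.5 * math.pi / num
    total = 0.0
    for i in range(num):
        theta = (i + 0.5) * h  # midpoint rule
        total += fresnel_rho(theta, n) * math.sin(theta) * math.cos(theta)
    return 2.0 * total * h

def surface_corrected(R_km, T_km, R_c, R_i):
    """Add front-surface reflection to the bare layer, Eq. (7)."""
    denom = 1.0 - R_i * R_km
    R0 = R_c + (1.0 - R_c) * (1.0 - R_i) * R_km / denom
    T0 = (1.0 - R_c) * T_km / denom
    return R0, T0

def with_substrate(R0, T0, R_g):
    """Combine the coated layer with a diffusely reflecting substrate, Eq. (11)."""
    denom = 1.0 - R0 * R_g
    return R0 + T0 ** 2 * R_g / denom, (1.0 - R_g) * T0 / denom
```

For n = 1.5, `specular_reflectance` gives the familiar 4% normal-incidence value, and setting R_c = R_i = R_g = 0 collapses the corrections back to the bare R_KM, T_KM, which is a convenient sanity check.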
2.1. Comparison of predictions from KM, FF theories and FDTD solver in the independent scattering regime for monodisperse inclusions with and without absorption in particles and in host medium

Keeping the other parameters a = 0.25 μm, f = 0.05, n_h = 1.5 fixed, it is observed that, particularly for smaller thicknesses of the coating and in the absence of absorption, the predictions of the FF method deviate significantly from the FDTD simulations as compared to the KM method, in both the visible and the IR spectrum. However, for larger thicknesses of the coating and in the presence of absorption in the particles, FF is relatively more accurate than the KM method across the spectrum, more so at higher wavelengths.
In the presence of an absorbing host medium, the expressions for β and α in Eqs. 1 and 2 need to be modified to account for absorption in the matrix [20,22] as:
β = √(K/(K + 2S)), and α = √(K(K + 2S)), where now K = k + (1 − f)(4π κ_h / λ).
Here κ_h is the imaginary part of the refractive index of the matrix, and λ is the wavelength in vacuum. In addition, the expressions for Q_sca and Q_abs in Eq. 5 need to be modified as shown by Mishchenko et al. [41]. Figure 3a and 3b show the comparison between KM, FF, and FDTD results for the case when the host medium is weakly absorbing with κ_h = 10⁻⁴, and Fig. 3c and 3d show the corresponding comparison when it is more strongly absorbing with κ_h = 10⁻². In the presence of a weakly absorbing matrix and for smaller thicknesses of the coating, FF is again observed to deviate significantly from the FDTD simulations. As absorption increases, we observe significant deviation from the FDTD results in both the FF and KM theories, particularly at the higher wavelengths.
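The matrix-absorption correction above amounts to a one-line change to the effective absorption coefficient before β and α are recomputed; a minimal sketch (the function name is ours, and the Mishchenko-style correction to Q_sca and Q_abs is not included):

```python
import math

def absorbing_matrix_K(k, f, kappa_h, lam):
    """Effective K = k + (1 - f) * 4*pi*kappa_h/lam for a host matrix with
    imaginary index kappa_h at vacuum wavelength lam (same length unit as k^-1)."""
    return k + (1.0 - f) * 4.0 * math.pi * kappa_h / lam
```

The corrected K is then fed back into Eq. (3) in place of k to obtain the modified β and α.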
2.2. Comparison of predictions from KM, FF theories and FDTD solver in the independent scattering regime for polydisperse inclusions with and without absorption in particles
In this section we explore the predictive capability of the KM and FF theories for a polydisperse medium, which consists of randomly positioned particles of different radii. The study is motivated by the observation that synthesis of nanoparticles via various methods such as sol-gel [42], microemulsion [43], and hydrothermal [44] routes results in a polydisperse size distribution of particles. Moreover, some recent studies [45,46] have also deliberately adopted coatings with different size distributions of particles, making use of the size-dependent scattering of the particles to obtain wavelength-selective coatings. Such a particulate medium can be analyzed by considering the particles to be distributed about a mean radius a with standard deviation σ, with the coefficients s and k of Eq. 5 obtained by summing the respective coefficients over the individual particle volume fractions [22] as:
s = Σ_{i=1}^{N} s_i;  k = Σ_{i=1}^{N} k_i    (13)
where s_i and k_i are the Mie scattering and absorption coefficients, respectively, of the particles with fill fraction f_i. Equation (13) can also be used to calculate s and k when there are two or more types of particles present in the matrix (with different refractive indices). For demonstration we consider a Gaussian distribution of spherical particles about mean radius a = 0.25 μm with standard deviation σ = 0.016 μm, with and without absorption in the particles. The particle size distribution curve has been shown in Fig. S2. Figure 4a and 4b show the comparison between KM, FF, and FDTD results for the case when the particles are non-absorbing, and Fig. 4c and 4d show the corresponding comparison when the particles are absorbing with κ_p = 10⁻². Other parameter values are retained from the case of the monodisperse particulate coating. The observations follow the trend seen for the monodisperse coating, with significant deviations observed in the predictions of the FF method for smaller coating thicknesses and when the particles are non-absorbing. For larger thicknesses of the coating and in the presence of absorption, both FF and KM are observed to predict the optical properties with reasonable accuracy across the spectrum.
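The summation in Eq. (13) is a straightforward accumulation over size bins; the sketch below distributes a total fill fraction over a discretized Gaussian and sums the per-bin contributions of Eq. (5). The per-bin Mie efficiencies would come from a Mie code and are simply passed in here; the function names and the binning scheme are ours.

```python
import math

def gaussian_weights(radii, mean, sigma):
    """Unnormalized Gaussian size-distribution weights over the bin radii."""
    return [math.exp(-0.5 * ((a - mean) / sigma) ** 2) for a in radii]

def polydisperse_coefficients(f_total, radii, weights, Q_sca, Q_abs):
    """Total s and k of a polydisperse coating, Eqs. (5) and (13).

    radii: bin radii a_i; weights: relative bin weights (need not be normalized);
    Q_sca, Q_abs: per-bin Mie efficiencies. Bin i carries fill fraction
    f_i = f_total * weights[i] / sum(weights).
    """
    wsum = sum(weights)
    s = k = 0.0
    for a_i, w_i, qs, qa in zip(radii, weights, Q_sca, Q_abs):
        f_i = f_total * w_i / wsum
        s += 3.0 * f_i * qs / (4.0 * a_i)   # s_i, Eq. (5)
        k += 3.0 * f_i * qa / (4.0 * a_i)   # k_i, Eq. (5)
    return s, k
```

With a single bin this reduces exactly to the monodisperse expressions of Eq. (5), which is a useful check of the bookkeeping.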
2.3. Comparison of predictions from KM, FF and FDTD solvers in the dependent scattering regime
So far we have analyzed situations where the fill fraction of particles in the composite is small enough that the particles can be assumed to scatter independently of one another. However, as the fill fraction of particles increases, there is a transition to the dependent scattering regime, where both the near-field interaction between the particles and the far-field interference between the scattered fields of individual particles have a significant impact on the overall properties of the coating. Hottel [24] empirically determined this transition to occur when f > 0.27 and c/λ > 0.3, where c is the mean inter-particle spacing and λ is the wavelength. Several coatings reported in the literature [7,46-49] have fill fractions in the range 0.1-0.6, where such effects cannot be neglected. We thus explore here the predictive capability of the FF and KM theories for such coatings by considering a monodisperse distribution of particles with an increased fill fraction f = 0.3, while retaining the other parameter values used for Fig. 2b. This comparison is shown in Fig. 5, where we observe that the predictions from both the FF and KM theories deviate significantly from the FDTD simulations across the spectrum, and thus cannot be relied on for predicting the optical properties of such coatings.
A comparison between the weighted averages of the optical properties across the spectrum, as predicted by the KM and FF theories for the different cases considered so far, is tabulated in Table 1. The weighted averages are calculated as R_solar = ∫ I_AM1.5(λ) R(λ) dλ / ∫ I_AM1.5(λ) dλ and ε_IR = ∫ I_BB(λ) ε(λ) dλ / ∫ I_BB(λ) dλ, where I_AM1.5(λ) is the spectral solar irradiance [50] and I_BB(λ) is the black body irradiance. For the relevant applications in consideration for this study, i.e. coatings suitable for the radiative cooling application and for use in solar thermal absorber plates, the reflection over the solar spectrum (wavelength range 0.3-3 μm) and the emissivity over the infra-red spectrum (wavelength range 5-15 μm) are of primary importance, and the weighted averages over these spectral ranges are reported in Table 1, along with the deviation from the FDTD simulations expressed as % error in brackets.
3. Semi-analytical method
The comparison with FDTD simulations shown in Section 2 demonstrates the failure of the KM and FF analytical methods in configurations where dependent scattering is not negligible and when the matrix/particles are absorbing. This failure can be attributed to the actual scattering and absorption coefficients of these coatings diverging from the values calculated using the Mie scattering coefficients of the individual particles. At present no single analytical technique exists that can correctly predict the optical properties of particulate media in the presence of dependent scattering effects while also correctly accounting for the absorption in the matrix/particles. One can then resort to exact numerical solvers to accurately estimate the optical properties of the coating in such cases. However, as Fig. 6 shows, the computational time required to simulate such structures increases exponentially with the thickness of the coating. For coatings of thickness in the range 100-500 microns, which are currently being adopted in the literature for the radiative cooling application [6,8,36,46,49], the design time is clearly prohibitive. In such cases it becomes imperative to develop alternate techniques which combine the accuracy of exact FDTD solvers with the simplicity and minimal computational requirements of the analytical techniques. Particularly when multiple parameters are involved in the design, as is the case for disordered media, such a method will prove useful in reducing the time needed to find the optimum combination of parameters for the required optical properties of the coating. In order to obtain a better estimate of the absorption and scattering coefficients of media where dependent scattering effects are non-negligible, researchers have previously [27,51-53] relied on experimental measurements of the optical properties of a fabricated coating, using the KM theory results from Section 2 to extract the required coefficients.
Instead of relying on experimental measurements, which are not always feasible, especially at the initial stage of design, we modify this technique and propose the following two-step semi-analytical method to estimate the optical properties of a random medium of thickness D when the use of exact numerical solvers to simulate the properties of such a thick coating is prohibitive.
• Step 1: Use a numerical solver to obtain the optical properties R and T of a similar coating but with a much smaller thickness d₀, and extract the α and β parameters by inverting Eqs. 1 and 2. Care must be taken at this step to ensure that the configuration set up in the solver considers the incident light to be in the same medium as the matrix, i.e., n_surr = n_h, in order to ensure that reflections from surfaces and substrates are not included in this step. In case the host matrix is absorbing, only the real part is considered, i.e., n_surr = Re(n_h). Care must also be taken to ensure that, when the scattering efficiency of the particles is high, the value of d₀ is chosen such that d₀ > l_s, where l_s ≈ 1/(ρ σ_s) is the scattering mean-free path, with ρ the particle number density and σ_s the scattering cross section. At the other limit, when the scattering efficiency is low, the optical properties of the coating are primarily determined by surface reflection and transmission, which are accounted for in step 2. Thus the choice of d₀ is determined from the scattering mean-free path calculated in the high-scattering regime.
• Step 2: From the α and β parameters extracted in step 1, use the analytical expressions from KM theory, i.e. Eqs. 1, 2, 7 and 11, to predict the optical properties of the coating of the required thickness D. Specular reflection at the surfaces as well as at the substrate is accounted for here.

A more elaborate procedure, along with details of a supporting convergence test which may need to be incorporated in some cases to arrive at the value of the thickness d₀, is included in Section S4 of the supplementary. We now apply this technique to the cases considered in Section 2 where the predictions of the analytical methods deviated significantly from those of the FDTD solver, such as the dependent scattering regime, as well as when the absorption in the particles/host matrix is significant. Fig. 7 (Fig. 8) shows the comparison between the predictions of the semi-analytical technique and FDTD simulations when the absorption in the particles (host matrix) is varied. In both these cases the semi-analytical technique uses the results of exact FDTD simulations of a 10 μm thick coating to predict the optical properties of a larger 50 μm thick coating. A volume fill fraction f = 0.3, for which dependent scattering effects are known to be dominant, is maintained in both cases, while the other parameter values are kept the same as those analysed for the monodisperse case of Sec. 2.1. For these cases we observe a close match between the predictions of the semi-analytical method and the FDTD results over the entire spectrum, with only a slight deviation observed at the higher wavelengths when absorption is high. The weighted-average reflectivity of the coating over the solar spectrum and the emissivity over the infra-red spectrum for the cases considered in Figs. 7 and 8 are listed in Table 2.
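The two steps amount to inverting Eqs. (1)-(2) for the effective S and K of the thin reference slab, then re-applying the forward formulas at the target thickness. A convenient closed-form inversion uses the equivalent hyperbolic form of the KM solution with a = (S + K)/S and b = √(a² − 1); the sketch below is ours and, for illustration, generates the thin-slab R, T synthetically rather than from an FDTD run.

```python
import math

def km_forward(S, K, d):
    """Eqs. (1)-(3): diffuse reflectance and transmittance of a slab of thickness d."""
    beta = math.sqrt(K / (K + 2.0 * S))
    alpha = math.sqrt(K * (K + 2.0 * S))
    ep, em = math.exp(alpha * d), math.exp(-alpha * d)
    D = (1.0 + beta) ** 2 * ep - (1.0 - beta) ** 2 * em
    return (1.0 - beta ** 2) * (ep - em) / D, 4.0 * beta / D

def km_invert(R, T, d0):
    """Step 1: effective S and K of the medium from the R, T of a slab of
    thickness d0, via the standard hyperbolic-form KM inversion."""
    a = (1.0 + R * R - T * T) / (2.0 * R)   # a = (S + K)/S
    b = math.sqrt(a * a - 1.0)
    # b*S*d0 = arccoth((1 - a*R)/(b*R)), and arccoth(x) = atanh(1/x)
    S = math.atanh(b * R / (1.0 - a * R)) / (b * d0)
    return S, S * (a - 1.0)                 # K = S * (a - 1)

def semi_analytical(R0, T0, d0, D):
    """Step 2: predict R, T of the full coating of thickness D from thin-slab data."""
    S, K = km_invert(R0, T0, d0)
    return km_forward(S, K, D)
```

A useful self-consistency check is the round trip: generate R, T for a thin slab with known S and K, invert, and verify that the prediction at a larger thickness matches the direct forward evaluation.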
3.1. Comparison with experimental data
We now apply the semi-analytical technique described in Section 3 to predict the optical properties of fabricated coatings reported in the literature which have been designed for the radiative cooling application. We choose two such disordered coatings where dependent scattering is expected to be dominant, so that the analytical techniques are not applicable, and where the thickness of the coating prohibits the use of exact electromagnetic solvers to predict the optical properties to good accuracy. In Ref. [48], a hierarchically porous polymer (P(VdF-HFP)) coating of thickness 300 μm, containing air voids with sizes ranging from 0.05-5 μm in a P(VdF-HFP) matrix, has been fabricated and experimentally characterized to have a solar reflectivity of 0.96 and an emissivity in the 8-13 μm wavelength range of 0.97. In order to apply the semi-analytical technique to predict the properties of this coating, we set up a simulation in the FDTD solver with a smaller coating thickness d₀ = 50 μm (determined using the convergence test explained in Section S4 of the supplementary). This thickness is chosen to ensure a sufficient number of the larger sized air voids (≈ 2.5 μm) in the P(VdF-HFP) matrix. The size distribution of nano-micro air voids used in the simulation is given in the supplementary (Fig. S4). Refractive index data of P(VDF-HFP) is extracted from Ref. [48]. The reflectivity in the wavelength range 0.3-16 μm, predicted using the semi-analytical method for D = 300 μm thickness, is compared with that reported in Ref. [48] in Fig. 9a. While an appreciable match is noticed across the spectrum, the small deviation observed in the reflectivity values can be attributed to our inability to incorporate, in ANSYS Lumerical, the exact size distribution of both micro and nano voids present in the fabricated structure.
In Ref. [46] an ultrawhite BaSO₄ film of thickness 400 μm has been developed with a 60% volume fraction of BaSO₄ nanoparticles, and has been characterized to have a reflectivity of 0.976 in the solar spectrum and an emissivity of 0.96 in the 8-13 μm wavelength range. In order to apply the semi-analytical technique to predict the properties of this coating, we set up a simulation in the FDTD solver with a structure thickness d₀ = 15 μm and BaSO₄ spherical particles randomly distributed with a volume fraction of 60%. The particles are taken to have a uniform size distribution with diameters spread over the range 398 ± 130 nm, to match that reported in Ref. [46]. The matrix is considered to be air for the BaSO₄ film. Refractive index data of BaSO₄ is extracted from Ref. [54]. The emissivity in the wavelength range 0.3-16 μm, predicted using the semi-analytical method for D = 400 μm thickness, is compared with that reported in Ref. [46] in Fig. 9b. While we again observe an appreciable match across the spectrum, some deviation, observed particularly around a wavelength of 2 μm, is suspected to be due to the difference between the refractive index of the fabricated film and that calculated from first principles in Ref. [54].
4. Conclusion
In this study we have analyzed the applicability of the well-known analytical KM and FF theories for predicting the optical properties of a disordered metamaterial coating over a broad spectrum ranging from 300 nm to 15 μm in wavelength. Recent advancements in the use of disordered coatings in applications such as radiative cooling and solar thermal absorber plates, which require tailored optical properties over this wavelength range, necessitate such a study. Based on the deviations observed between the predictions of these analytical techniques and an exact FDTD solver in the dependent scattering regime, a two-step semi-analytical technique has been proposed which can predict the optical properties of such coatings with good accuracy and minimal computational resources. Such a method is expected to be valuable for designing coatings with specific optical properties, where several parameter combinations need to be investigated to arrive at an optimal combination. The small deviations observed when absorption in the host matrix is high warrant further research to improve this technique.
Fig. 1. Schematic of the coating considered in this work with incident plane wave source.
Fig. 2. Reflectivity and transmissivity spectra for n_h = 1.5, a = 0.25 μm, f = 0.05, and (a) n_p = 2.5, d = 10 μm; (b) n_p = 2.5, d = 50 μm. Reflectivity and absorptivity spectra for n_h = 1.5, a = 0.25 μm, f = 0.05, and (c) n_p = 2.5 + 0.1i, d = 10 μm; (d) n_p = 2.5 + 0.1i, d = 50 μm. Here, FF stands for four-flux, KM for Kubelka-Munk, and LM for Lumerical.

Figure 2a and 2b show the comparison between KM, FF, and FDTD results for the case when the particles are non-absorbing, and Fig. 2c and 2d show the corresponding comparison when the particles are absorbing with imaginary index κ_p = 10⁻¹. We compare the predictions for the coating thicknesses of 10 and 50 μm.
Fig. 3. Reflectivity and absorptivity spectra for n_p = 2.5, a = 0.25 μm, f = 0.05, and (a) n_h = 1.5 + 10⁻⁴i, d = 10 μm; (b) n_h = 1.5 + 10⁻⁴i, d = 50 μm; (c) n_h = 1.5 + 10⁻²i, d = 10 μm; (d) n_h = 1.5 + 10⁻²i, d = 50 μm. Here, FF stands for four-flux, KM for Kubelka-Munk, and LM for Lumerical.
Fig. 4. Reflectivity and transmissivity for n_h = 1.5, a = 0.25 μm, σ = 0.016 μm, f = 0.05 and (a) n_p = 2.5, d = 10 μm; (b) n_p = 2.5, d = 50 μm. Reflectivity and absorptivity for n_h = 1.5, a = 0.25 μm, σ = 0.016 μm, f = 0.05 and (c) n_p = 2.5 + 0.01i, d = 10 μm; (d) n_p = 2.5 + 0.01i, d = 50 μm. Here, FF stands for four-flux, KM for Kubelka-Munk, and LM for Lumerical.
Fig. 5. Reflectivity and transmissivity for n_p = 2.5, n_h = 1.5, a = 0.25 μm, f = 0.3, and d = 50 μm. Here, FF stands for four-flux, KM for Kubelka-Munk, and LM for Lumerical.
Fig. 6. Comparison of computational time as a function of thickness of the disordered media coating. Simulations are carried out in ANSYS Lumerical using an eight-core Intel Xeon workstation for the configuration: n_p = 2.5, n_h = 1.5, a = 0.25 μm, f = 0.05, with mesh size 30 nm. The auto shutoff level (simulation termination criterion) is set at 10⁻³.
Fig. 7. (a) Reflectivity and transmissivity for n_p = 2.5, n_h = 1.5, a = 0.25 μm, f = 0.3, and D = 50 μm; (b) Reflectivity and absorptivity for n_p = 2.5 + 0.1i, n_h = 1.5, a = 0.25 μm, f = 0.3, and D = 50 μm. Here, SM stands for semi-analytical method and LM for Lumerical.
Fig. 8. Reflectivity and absorptivity for n_p = 2.5, a = 0.25 μm, f = 0.3, D = 50 μm, and (a) n_h = 1.5 + 10⁻²i; (b) n_h = 1.5 + 10⁻¹i. Here, SM stands for semi-analytical method and LM for Lumerical.
Fig. 9. (a) Reflectivity of the hierarchically porous P(VDF-HFP) coating calculated using the semi-analytical technique, compared with the experimental result given by Mandal et al. [48]; (b) Absorptivity/emissivity of the BaSO₄ film calculated using the semi-analytical technique, compared with the experimental result given by Li et al. [46].
Table 1. Weighted-average reflectivity in the solar spectrum R_solar,KM (R_solar,FF) and emissivity in the IR spectrum ε_IR,KM (ε_IR,FF) calculated using KM (FF) theory. Values in brackets denote the deviation of the prediction from FDTD results.

Sr. no. | Fig. no. | R_solar,KM    | ε_IR,KM       | R_solar,FF    | ε_IR,FF
1       | 2a       | 0.391 (21.4%) | -             | 0.509 (58.1%) | -
2       | 2b       | 0.745 (0.67%) | -             | 0.747 (0.4%)  | -
3       | 2c       | 0.076 (13.4%) | 0.08 (95%)    | 0.081 (20.9%) | 0.043 (4.87%)
4       | 2d       | 0.078 (18.2%) | 0.331 (70.6%) | 0.079 (19.7%) | 0.197 (1.55%)
5       | 3a       | 0.376 (18.6%) | -             | 0.483 (52.4%) | -
6       | 3b       | 0.619 (8.22%) | 0.012 (100%)  | 0.612 (6.99%) | 0.005 (16.67%)
7       | 3c       | 0.112 (30.2%) | 0.205 (83.0%) | 0.110 (27.9%) | 0.086 (23.2%)
8       | 3d       | 0.112 (36.6%) | 0.661 (51.3%) | 0.108 (31.7%) | 0.357 (18.3%)
9       | 4a       | 0.392 (19.1%) | -             | 0.510 (55.0%) | -
10      | 4b       | 0.746 (4.63%) | -             | 0.755 (5.89%) | -
11      | 4c       | 0.260 (25.0%) | 0.008 (60%)   | 0.279 (34.1%) | 0.004 (20%)
12      | 4d       | 0.304 (16.5%) | 0.041 (78.3%) | 0.268 (2.68%) | 0.023 (0.01%)
13      | 5        | 0.944 (8.13%) | -             | 0.935 (6.98%) | -
The weighted averages are listed along with the deviation from FDTD simulations expressed as % error in brackets. Particularly illustrative of the effectiveness of the semi-analytical technique is the reduction in error (1.03% in Sr. no. 1 of Table 2) as compared to the errors obtained from the analytical techniques and reported in Table 1 (8.13% using KM theory and 6.98% using FF theory in Sr. no. 13) for the configuration n_p = 2.5, n_h = 1.5, a = 0.25 μm, f = 0.3, and D = 50 μm, where dependent scattering is expected to be dominant.
Table 2. Weighted-average reflectivity in the solar spectrum R_solar,SM and emissivity in the IR spectrum ε_IR,SM calculated using the semi-analytical technique. Values in brackets denote the deviation of the prediction from FDTD results.

Sr. no. | Fig. no. | R_solar,SM    | ε_IR,SM
1       | 7a       | 0.864 (1.03%) | -
2       | 7b       | 0.059 (0.01%) | 0.710 (8.73%)
3       | 8a       | 0.178 (2.73%) | 0.391 (6.25%)
4       | 8b       | 0.062 (19.2%) | 0.935 (5.17%)
Disclosures

The authors declare no conflicts of interest.

Supplementary information

See Supplement 1 for supporting content.
References

[1] M. Gunde and Z. Orel, "Absorption and scattering of light by pigment particles in solar-absorbing paints," Appl. Opt. 39, 622-628 (2000).
[2] T. Nilsson and G. Niklasson, "Radiative cooling during the day: simulations and experiments on pigmented polyethylene cover foils," Sol. Energy Mater. Sol. Cells 37, 93-118 (1995).
[3] M. Quinten, "The color of finely dispersed nanoparticles," Appl. Phys. B 73, 317-326 (2001).
[4] M. Bandpay, F. Ameri, K. Ansari, and S. Moradian, "Mathematical and empirical evaluation of accuracy of the Kubelka-Munk model for color match prediction of opaque and translucent surface coatings," J. Coat. Technol. Res. 15, 1117-1131 (2018).
[5] A. Roy, R. Ramasubramaniam, and H. Gaonkar, "Empirical relationship between Kubelka-Munk and radiative transfer coefficients for extracting optical parameters of tissues in diffusive and nondiffusive regimes," J. Biomed. Opt. 17, 7 (2012).
[6] Z. Huang and X. Ruan, "Nanoparticle embedded double-layer coating for daytime radiative cooling," Int. J. Heat Mass Transf. 104, 890-896 (2017).
[7] H. Bao, C. Yan, B. Wang, X. Fang, C. Zhao, and X. Ruan, "Double-layer nanoparticle-based coatings for efficient terrestrial radiative cooling," Sol. Energy Mater. Sol. Cells 168, 78-84 (2017).
[8] B. Mishra, S. Sundaram, N. Varghese, and K. Sasihithlu, "Disordered metamaterial coating for daytime passive radiative cooling," AIP Adv. 11, 105218 (2021).
[9] J. Garnett, "Colours in metal glasses and in metallic films," Phil. Trans. R. Soc. Lond. A 203, 385-420 (1904).
[10] D. Bruggeman, "Berechnung verschiedener physikalischer Konstanten von heterogenen Substanzen. I. Dielektrizitätskonstanten und Leitfähigkeiten der Mischkörper aus isotropen Substanzen," Ann. Phys. 24, 636 (1935).
[11] C. Bohren, "Applicability of effective-medium theories to problems of scattering and absorption by nonhomogeneous atmospheric particles," J. Atmos. Sci. 43, 468-475 (1986).
[12] W. Doyle, "Optical properties of a suspension of metal spheres," Phys. Rev. B 39, 9852-9858 (1989).
[13] L. Gate, "The determination of light absorption in diffusing materials by a photon diffusion model," J. Phys. D: Appl. Phys. 4, 1049-1056 (1971).
[14] A. Ishimaru, Wave Propagation and Scattering in Random Media, vol. 2 (Academic Press, New York, 1978).
[15] J. Caron, C. Andraud, and J. Lafait, "Radiative transfer calculations in multilayer systems with smooth or rough interfaces," J. Mod. Opt. 51, 575-595 (2004).
[16] P. Kubelka and F. Munk, "Ein Beitrag zur Optik der Farbanstriche," Z. Tech. Phys. (Leipzig) 12, 593-601 (1931).
[17] B. Maheu, J. Letoulouzan, and G. Gouesbet, "Four-flux models to solve the scattering transfer equation in terms of Lorenz-Mie parameters," Appl. Opt. 23, 3353-3362 (1984).
[18] L. Wang, S. Jacques, and L. Zheng, "MCML - Monte Carlo modelling of light transport in multi-layered tissues," Comput. Methods Programs Biomed. 47, 131-146 (1995).
[19] V. Dzimbeg-Malcic, Z. Barbaric-Mikocevic, and K. Itric, "Kubelka-Munk theory in describing optical properties of paper (I)," Tech. Gaz. 18, 117-124 (2011).
[20] N. Etherden, T. Tesfamichael, G. Niklasson, and Wäckelgård, "A theoretical feasibility study of pigments for thickness-sensitive spectrally selective paints," J. Phys. D: Appl. Phys. 37, 1115-1122 (2004).
[21] A. Genty-Vincent, T. Song, C. Andraud, and M. Menu, "Four-flux model of the light scattering in porous varnish and paint layers: towards understanding the visual appearance of altered blanched easel oil paintings," Appl. Phys. A 123, 473 (2017).
Four-flux model of the light scattering in porous varnish and paint layers: towards understanding the visual appearance of altered blanched easel oil paintings. A Genty-Vincent, T Song, C Andraud, M Menu, Appl. Phys. A. 123473A. Genty-Vincent, T. Song, C. Andraud, and M. Menu, "Four-flux model of the light scattering in porous varnish and paint layers: towards understanding the visual appearance of altered blanched easel oil paintings," Appl. Phys. A 123, 473 (2017).
Extending the applicability of four-flux radiative transfer method. M Gali, A Gentle, M Arnold, G Smith, Appl. Opt. 56M. Gali, A. Gentle, M. Arnold, and G. Smith, "Extending the applicability of four-flux radiative transfer method," Appl. Opt. 56, 8699-8709 (2017).
Passive radiative cooling below ambient air temperature under direct sunlight. A Raman, M Anoma, L Zhu, E Rephaeli, S Fan, Nature. 515A. Raman, M. Anoma, L. Zhu, E. Rephaeli, and S. Fan, "Passive radiative cooling below ambient air temperature under direct sunlight," Nature 515, 540-544 (2014).
Optical properties of coatings. effect of pigment concentration. H Hottel, A Sarofim, AIAA J. 9H. Hottel and A. Sarofim, "Optical properties of coatings. effect of pigment concentration," AIAA J. 9, 1895-1898 (1971).
New contributions to the optics of intensely light scattering materials. Part I. P Kubelka, J. Opt. Soc. Am. 38P. Kubelka, "New contributions to the optics of intensely light scattering materials. Part I," J. Opt. Soc. Am. 38, 448-457 (1948).
Four-flux models to solve the scattering transfer equation: Special cases. B Maheu, G Gouesbet, Appl. Opt. 25B. Maheu and G. Gouesbet, "Four-flux models to solve the scattering transfer equation: Special cases." Appl. Opt. 25, 1122-1128 (1986).
Determination of kubelka-munk scattering and absorption coefficients by diffuse illumination. R Molenaar, J T Bosch, J Zijp, Appl. Opt. 38R. Molenaar, J. t. Bosch, and J. Zijp, "Determination of kubelka-munk scattering and absorption coefficients by diffuse illumination." Appl. Opt. 38, 2068-2077 (1999).
Absorption and scattering of light by small particles. C F Bohren, D R Huffman, John Wiley & SonsC. F. Bohren and D. R. Huffman, Absorption and scattering of light by small particles (John Wiley & Sons, 2008).
Forward-scattering ratios and average pathlength parameter in radiative transfer models. W Vargas, G Niklasson, Appl. Opt. 36W. Vargas and G. Niklasson, "Forward-scattering ratios and average pathlength parameter in radiative transfer models." Appl. Opt. 36, 3735-3738 (1997).
Inversion methods from kubelka-munk analysis. W Vargas, J. Opt. A: Pure Appl. Opt. 4W. Vargas, "Inversion methods from kubelka-munk analysis," J. Opt. A: Pure Appl. Opt. 4, 452-456 (2002).
Revised kubelka-munk theory. i. theory and application. L Yang, B Kruse, J. Opt. Soc. Am. A. 21L. Yang and B. Kruse, "Revised kubelka-munk theory. i. theory and application," J. Opt. Soc. Am. A 21, 1933-1941 (2004).
Modified kubelka-munk model for calculation of the reflectance of coatings with optically-rough surfaces. A Murphy, J. Phys. D: Appl. Phys. 39A. Murphy, "Modified kubelka-munk model for calculation of the reflectance of coatings with optically-rough surfaces," J. Phys. D: Appl. Phys. 39, 3571-3581 (2006).
Relationship between the kubelka-munk scattering and radiative transfer coefficients. S , J. Opt. Soc. Amer. A. 25S. Thennadil, "Relationship between the kubelka-munk scattering and radiative transfer coefficients," J. Opt. Soc. Amer. A. 25, 1480-1485 (2008).
Calculation of the color of pigmented plastics. J Saunderson, J. Opt. Soc. Am. 32J. Saunderson, "Calculation of the color of pigmented plastics." J. Opt. Soc. Am. 32, 727-736 (1942).
Use of hollow silica and titanium dioxide microparticles in solar reflective paints for daytime radiative cooling applications in a tropical region. S Atiganyanun, J. Photonics for Energy. 1122103S. Atiganyanun, "Use of hollow silica and titanium dioxide microparticles in solar reflective paints for daytime radiative cooling applications in a tropical region," J. Photonics for Energy 11, 022103 (2021).
Effective radiative cooling with ZrO 2 /PDMS reflective coating. Y Zhang, X Tan, G Qi, X Yang, D Hu, P Fyffe, X Chen, Sol. Energy Mater. & Sol. Cells. 229111129Y. Zhang, X. Tan, G. Qi, X. Yang, D. Hu, P. Fyffe, and X. Chen, "Effective radiative cooling with ZrO 2 /PDMS reflective coating." Sol. Energy Mater. & Sol. Cells 229, 111129 (2021).
A review of cermet-based spectrally selective solar absorbers. F Cao, K Mcenaney, G Chen, Z Ren, Energy & Environ. Sci. 7F. Cao, K. McEnaney, G. Chen, and Z. Ren, "A review of cermet-based spectrally selective solar absorbers," Energy & Environ. Sci. 7, 1615-1627 (2014).
Design and optimization of nanoparticle-pigmented solar selective absorber coatings for high-temperature concentrating solar thermal systems. X Wang, X Yu, S Fu, E Lee, K Kekalo, J Liu, J. Appl. Phys. 12333104X. Wang, X. Yu, S. Fu, E. Lee, K. Kekalo, and J. Liu, "Design and optimization of nanoparticle-pigmented solar selective absorber coatings for high-temperature concentrating solar thermal systems," J. Appl. Phys. 123, 033104 (2018).
G Kortüm, Reflectance Spectroscopy: Principles, Methods, Applications. SpringerG. Kortüm, Reflectance Spectroscopy: Principles, Methods, Applications (Springer, 1969).
Lumerical Inc, FDTD: 3D Electromagnetic Simulator. Lumerical Inc., FDTD: 3D Electromagnetic Simulator (2021).
Far-field lorenz-mie scattering in an absorbing host medium: Theoretical formalism and fortran program. M Mishchenko, P Yang, J. Quant. Spectrosc. & Radiat. Transf. 205M. Mishchenko and P. Yang, "Far-field lorenz-mie scattering in an absorbing host medium: Theoretical formalism and fortran program." J. Quant. Spectrosc. & Radiat. Transf. 205, 241-252 (2018).
Nanomaterial by sol-gel method: synthesis and application. D Bokov, A Jalil, S Chupradit, W Suksatan, M Javed Ansari, I H Shewael, G H Valiev, E Kianfar, Adv. Mater. Sci. Eng. 2021D. Bokov, A. Turki Jalil, S. Chupradit, W. Suksatan, M. Javed Ansari, I. H. Shewael, G. H. Valiev, and E. Kianfar, "Nanomaterial by sol-gel method: synthesis and application," Adv. Mater. Sci. Eng. 2021 (2021).
Microemulsion method: A novel route to synthesize organic and inorganic nanomaterials: 1st nano update. M A Malik, M Y Wani, M A Hashim, Arab. journal Chem. 5M. A. Malik, M. Y. Wani, and M. A. Hashim, "Microemulsion method: A novel route to synthesize organic and inorganic nanomaterials: 1st nano update," Arab. journal Chem. 5, 397-417 (2012).
Hydrothermal synthesis of nanomaterials. Y X Gan, A H Jayatissa, Z Yu, X Chen, M Li, J. Nanomater. 2020Y. X. Gan, A. H. Jayatissa, Z. Yu, X. Chen, and M. Li, "Hydrothermal synthesis of nanomaterials," J. Nanomater. 2020 (2020).
A strategy of hierarchical particle sizes in nanoparticle composite for enhancing solar reflection. J Peoples, X Li, Y Lv, J Qiu, Z Huang, X Ruan, Int. J. Heat Mass Transf. 131J. Peoples, X. Li, Y. Lv, J. Qiu, Z. Huang, and X. Ruan, "A strategy of hierarchical particle sizes in nanoparticle composite for enhancing solar reflection," Int. J. Heat Mass Transf. 131, 487-494 (2019).
Ultrawhite baso 4 paints and films for remarkable daytime subambient radiative cooling. X Li, J Peoples, P Yao, X Ruan, ACS Appl. Mater. Interfaces. 13X. Li, J. Peoples, P. Yao, and X. Ruan, "Ultrawhite baso 4 paints and films for remarkable daytime subambient radiative cooling," ACS Appl. Mater. Interfaces 13, 21733-21739 (2021).
Effective radiative cooling by paint-format microsphere-based photonic random media. S Atiganyanun, J Plumley, S Han, K Hsu, J Cytrynbaum, T Peng, S Han, S Han, ACS Photonics. 5S. Atiganyanun, J. Plumley, S. Han, K. Hsu, J. Cytrynbaum, T. Peng, S. Han, and S. Han, "Effective radiative cooling by paint-format microsphere-based photonic random media," ACS Photonics 5, 1181-1187 (2018).
Hierarchically porous polymer coatings for highly efficient passive daytime radiative cooling. J Mandal, Y Fu, A Overvig, M Jia, K Sun, N Shi, H Zhou, X Xiao, N Yu, Y Yang, Science. 362J. Mandal, Y. Fu, A. Overvig, M. Jia, K. Sun, N. Shi, H. Zhou, X. Xiao, N. Yu, and Y. Yang, "Hierarchically porous polymer coatings for highly efficient passive daytime radiative cooling," Science 362, 315-319 (2018).
Full daytime sub-ambient radiative cooling in commercial-like paints with high figure of merit. X Li, J Peoples, Z Huang, Z Zhao, J Qiu, X Ruan, Cell Reports Phys. Sci. 1100221X. Li, J. Peoples, Z. Huang, Z. Zhao, J. Qiu, and X. Ruan, "Full daytime sub-ambient radiative cooling in commercial-like paints with high figure of merit," Cell Reports Phys. Sci. 1, 100221 (2020).
ASTM International. ASTM G173-03, Standard Tables for Reference Solar Spectral Irradiances: Direct Normal and Hemispherical on 37°Tilted Surface. ASTM International. ASTM G173-03, Standard Tables for Reference Solar Spectral Irradiances: Direct Normal and Hemispherical on 37°Tilted Surface (2012).
Solar spectral optical properties of pigments -part i: model for deriving scattering and absorption coefficients from transmittance and reflectance measurements. R Levinson, P Berdahl, H Akbari, Sol. Energy Mater. & Sol. Cells. 89R. Levinson, P. Berdahl, and H. Akbari, "Solar spectral optical properties of pigments -part i: model for deriving scattering and absorption coefficients from transmittance and reflectance measurements." Sol. Energy Mater. & Sol. Cells 89, 319-349 (2005).
Toward a quantitative model for suspended particle devices: Optical scattering and absorption coefficients. D Barrios, R Vergaz, J Sánchez-Pena, C Granqvist, G Niklasson, Sol. Enegy Mater. & Sol. Cells. 111D. Barrios, R. Vergaz, J. Sánchez-Pena, C. Granqvist, and G. Niklasson, "Toward a quantitative model for suspended particle devices: Optical scattering and absorption coefficients." Sol. Enegy Mater. & Sol. Cells 111, 115-122 (2013).
General method for determining light scattering and absorption of nanoparticle composites. J Wang, C Xu, A Nilsson, D Fernandes, M Strömberg, J Wang, G Niklasson, Adv. Opt. Mater. 1801315J. Wang, C. Xu, A. Nilsson, D. Fernandes, M. Strömberg, J. Wang, and G. Niklasson, "General method for determining light scattering and absorption of nanoparticle composites." Adv. Opt. Mater. p. 1801315 (2018).
Atomistic metrics of BaSO 4 as an ultra-efficient radiative cooling material: a first-principles prediction. Z Tong, J Peoples, X Li, X Yang, H Bao, X Ruan, arXiv:2101.05053arXiv preprintZ. Tong, J. Peoples, X. Li, X. Yang, H. Bao, and X. Ruan, "Atomistic metrics of BaSO 4 as an ultra-efficient radiative cooling material: a first-principles prediction," arXiv preprint arXiv:2101.05053 (2021).
| zyda_arxiv-0835000 |
Machine learning & artificial intelligence in the quantum domain
Vedran Dunjko [email protected]
Hans J Briegel [email protected]
Institute for Theoretical Physics, University of Innsbruck, 6020 Innsbruck, Austria
Max Planck Institute of Quantum Optics, 85748 Garching, Germany
Department of Philosophy, University of Innsbruck, 6020 Innsbruck, Austria
University of Konstanz, 78457 Konstanz, Germany
Quantum information technologies, on the one side, and intelligent learning systems, on the other, are both emergent technologies that will likely have a transforming impact on our society in the future. The respective underlying fields of basic research, quantum information (QI) versus machine learning and artificial intelligence (AI), have their own specific questions and challenges, which have hitherto been investigated largely independently. However, in a growing body of recent work, researchers have been probing the question to what extent these fields can indeed learn and benefit from each other. Quantum machine learning (QML) explores the interaction between quantum computing and machine learning, investigating how results and techniques from one field can be used to solve the problems of the other. In recent times, we have witnessed significant breakthroughs in both directions of influence. For instance, quantum computing is finding a vital application in providing speed-ups for machine learning problems, critical in our "big data" world. Conversely, machine learning already permeates many cutting-edge technologies, and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical machine learning optimization used in quantum experiments, quantum enhancements have also been (theoretically) demonstrated for interactive learning tasks, highlighting the potential of quantum-enhanced learning agents. Finally, works exploring the use of artificial intelligence for the very design of quantum experiments, and for performing parts of genuine research autonomously, have reported their first successes. Beyond the topics of mutual enhancement (exploring what ML/AI can do for quantum physics, and vice versa), researchers have also broached the fundamental issue of quantum generalizations of learning and AI concepts. This deals with questions of the very meaning of learning and intelligence in a world that is fully described by quantum mechanics.
In this review, we describe the main ideas, recent developments, and progress in a broad spectrum of research investigating machine learning and artificial intelligence in the quantum domain.
I. INTRODUCTION
Quantum theory has influenced most branches of physical sciences. This influence ranges from minor corrections, to profound overhauls, particularly in fields dealing with sufficiently small scales. In the second half of the last century, it became apparent that genuine quantum effects can also be exploited in engineering-type tasks, where such effects enable features which are superior to those achievable using purely classical systems. The first wave of such engineering gave us, for example, the laser, transistors, and nuclear magnetic resonance devices. The second wave, which gained momentum in the '80s, constitutes a broad-scale, albeit not fully systematic, investigation of the potential of utilizing quantum effects for various types of tasks which, at the base of it, deal with the processing of information. This includes the research areas of cryptography, computing, sensing and metrology, all of which now share the common language of quantum information science. Often, the research into such interdisciplinary programs was exceptionally fruitful. For instance, quantum computation, communication, cryptography and metrology are now mature, well-established and impactful research fields which have, arguably, revolutionized the way we think about information and its processing. In recent years, it has become apparent that the exchange of ideas between quantum information processing and the fields of artificial intelligence and machine learning has its own genuine questions and promises. Although such lines of research are only now receiving a broader recognition, the very first ideas were present already at the early days of QC, and we have made an effort to fairly acknowledge such visionary works.
In this review we aim to capture research at the interplay between machine learning, artificial intelligence and quantum mechanics in its broad scope, with a reader with a physics background in mind. To this end, we dedicate a comparatively large amount of space to classical machine learning and artificial intelligence topics, which are often sacrificed in physics-oriented literature, while keeping the quantum information aspects concise. The structure of the paper is as follows. In the remainder of this introductory section I, we give quick overviews of the relevant basic concepts of the fields of quantum information processing, and of machine learning and artificial intelligence. We finish off the introduction with a glossary of useful terms, a list of abbreviations, and comments on notation. Subsequently, in section II we delve deeper into chosen methods, technical details, and the theoretical background of the classical theories. The selection of topics here is not necessarily balanced, from a classical perspective. We place emphasis on elements which either appear in subsequent quantum proposals, which can sometimes be somewhat exotic, or on aspects which can help put the relevance of the quantum results into proper context. Section III briefly summarizes the topics covered in the quantum part of the review. Sections IV-VII cover the four main topics we survey, and constitute the central body of the paper. We finish with an outlook in section VIII. Remark: The overall objective of this survey is to give a broad, "bird's-eye" account of the topics which contribute to the development of various aspects of the interplay between quantum information sciences, and machine learning and artificial intelligence. Consequently, this survey does not necessarily present all the developments in a fully balanced fashion.
Certain topics, which are in their very early stages of investigation, yet important for the nascent research area, were given a perhaps disproportionate level of attention, compared to more developed themes. This is, for instance, particularly evident in section VII, which aims to address the topics of quantum artificial intelligence, beyond mainstream data analysis applications of machine learning. This topic is relevant for a broad perspective on the emerging field; however, it has so far been broached in only a few works, including those of the authors of this review and collaborators. The more extensively explored topics of, e.g., quantum algorithms for machine learning and data mining, quantum computational learning theory, or quantum neural networks, have been addressed in more focused recent reviews (Wittek, 2014a; Schuld et al., 2014a; Biamonte et al., 2016; Arunachalam and de Wolf, 2017; Ciliberto et al., 2017).
A. Quantum mechanics, computation and information processing
Executive summary: Quantum theory leads to many counterintuitive and fascinating phenomena, including the results of the field of quantum information processing, and in particular, quantum computation. This field studies the intricacies of quantum information, its communication, processing and use. Quantum information admits a plethora of phenomena which do not occur in classical physics. For instance, quantum information cannot be cloned -this restricts the types of processing that is possible for general quantum information.
Other aspects lead to advantages, as has been shown for various communication and computation tasks: for solving algebraic problems, reduction of sample complexity in black-box settings, sampling problems and optimization. Even restricted models of quantum computing, amenable for near-term implementations, can solve interesting tasks. Machine learning and artificial intelligence tasks can, as components, rely on the solving of such problems, leading to an advantage.
Quantum mechanics, as commonly presented in quantum information, is based on a few simple postulates: 1) the pure state of a quantum system is given by a unit vector |ψ⟩ in a complex Hilbert space, 2) closed-system pure state evolution is generated by a Hamiltonian H, as specified by the linear Schrödinger equation H|ψ⟩ = iℏ (∂/∂t)|ψ⟩, 3) the structure of composite systems is given by the tensor product, and 4) projective measurements (observables) are specified by, ideally, non-degenerate Hermitian operators, and the measurement process changes the description of the observed system from the state |ψ⟩ to an eigenstate |φ⟩, with probability given by the Born rule p(φ) = |⟨φ|ψ⟩|² (Nielsen and Chuang, 2011). While the full theory still requires the handling of subsystems and classical ignorance 1, already the few mathematical axioms of pure state, closed-system theory give rise to many quintessentially quantum phenomena, like superpositions, no-cloning, entanglement, and others, most of which stem from just the linearity of the theory. Many of these properties re-define how researchers in quantum information perceive what information is, but also have a critical functional role in, say, quantum-enhanced cryptography, communication, sensing and other applications. Some of the most fascinating consequences of quantum theory are, arguably, captured by the field of quantum information processing (QIP), and in particular quantum computing (QC), which is most relevant for our purposes. QC has revolutionized the theories and implementations of computation. This field originated from the observations by Manin (Manin, 1980) and Feynman (Feynman, 1982) that the calculation of certain properties of quantum systems, as they evolve in time, may be intractable, while the quantum systems themselves, in a manner of speaking, do perform that hard computation by merely evolving.
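The postulates above can be sketched numerically for a single qubit. The following minimal illustration (not part of the original text) picks, as an assumption, the Pauli-X operator as Hamiltonian and units where ℏ = 1: it evolves |0⟩ under the Schrödinger propagator exp(-iHt) and applies the Born rule.

```python
import numpy as np

# Postulate 1: a pure qubit state is a unit vector in C^2; here |0>.
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

# An (assumed, illustrative) Hamiltonian: the Pauli-X operator, with hbar = 1.
H = np.array([[0, 1], [1, 0]], dtype=complex)

def evolve(psi, H, t):
    """Postulate 2: closed-system evolution |psi(t)> = exp(-i H t) |psi(0)>,
    built here via the eigendecomposition of the Hermitian H."""
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return U @ psi

def born_probability(phi, psi):
    """Postulate 4 (Born rule): p(phi) = |<phi|psi>|^2."""
    return abs(np.vdot(phi, psi)) ** 2

# Evolving |0> for t = pi/2 under X rotates it (up to phase) into |1>.
psi_t = evolve(ket0, H, np.pi / 2)
p1 = born_probability(ket1, psi_t)
```

Since exp(-iXt) = cos(t) I - i sin(t) X, at t = π/2 the state |0⟩ is mapped to -i|1⟩, so the Born probability of outcome |1⟩ is 1; linearity and norm preservation are exactly the features the text attributes the quantum phenomena to.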
Since these early ideas, QC has proliferated, and indeed the existence of quantum advantages offered by scalable universal quantum computers has been demonstrated in many settings. Perhaps most famously, quantum computers have been shown to have the capacity to efficiently solve algebraic computational problems which are believed to be intractable for classical computers. This includes the famous problems of factoring large integers and computing discrete logarithms (Shor, 1997), but also many others, such as solving Pell's equation, some non-Abelian hidden subgroup problems, and others; see e.g. (Childs and van Dam, 2010; Montanaro, 2016) for a review. Related to this, nowadays we also have access to a growing collection of quantum algorithms 2 for various linear algebra tasks, as given in e.g. (Harrow et al., 2009; Rebentrost et al., 2016a), which may offer speed-ups. [Query complexity: a (quantum) algorithm solves a problem by intermittently calling a black-box subroutine (an oracle), defined only via its input-output relations. The query complexity of an algorithm is the number of calls to the oracle that the algorithm performs.] Quantum computers can also be used to solve other types of problems. For instance, in statistical physics, the capacity to sample from Gibbs distributions is often the key tool to compute properties of the partition function. A broad class of quantum approaches to sampling problems focuses on quantum enhancements of Markov chain Monte Carlo methods (Temme et al., 2011; Yung and Aspuru-Guzik, 2012). Sampling tasks have been receiving an ever increasing amount of attention in the QIP community, as we will comment on shortly. Quantum computers are typically formalized in one of a few standard models of computation, many of which are, computationally speaking, equally powerful 4. Even if the models are computationally equivalent, they are conceptually different. Consequently, some are better suited, or more natural, for a given class of applications.
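The notion of query complexity can be made concrete with the textbook example of Deutsch's algorithm, which decides whether a one-bit function f is constant or balanced using a single oracle query, whereas any classical algorithm must query both f(0) and f(1). The following numpy sketch (illustrative, not from the original text) simulates the phase-oracle version: the oracle is applied once as the diagonal unitary |x⟩ → (-1)^f(x)|x⟩.

```python
import numpy as np

# Hadamard gate.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def deutsch(f):
    """Decide whether f: {0,1} -> {0,1} is constant, with ONE oracle query.

    The phase oracle acts as |x> -> (-1)^f(x) |x>; classically, two
    queries (f(0) and f(1)) are needed to decide the same question.
    """
    psi = H @ np.array([1, 0], dtype=complex)        # prepare |+> = H|0>
    oracle = np.diag([(-1) ** f(0), (-1) ** f(1)]).astype(complex)
    psi = oracle @ psi                               # single oracle application
    psi = H @ psi                                    # interfere
    p0 = abs(psi[0]) ** 2                            # Born probability of outcome 0
    return "constant" if p0 > 0.5 else "balanced"
```

A constant f leaves |+⟩ unchanged up to a global phase (outcome 0 with certainty), while a balanced f flips it to |-⟩ (outcome 1 with certainty), so one quantum query suffices: a toy instance of the black-box speed-ups discussed above.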
Historically, the first formal model, the quantum Turing machine (Deutsch, 1985), was preferred for theoretical and computability-related considerations. The quantum circuit model (Nielsen and Chuang, 2011) is standard for algebraic problems. The measurement-based quantum computing (MBQC) model (Raussendorf and Briegel, 2001;Briegel et al., 2009) is, arguably, best-suited for graph-related problems (Zhao et al., 2016), multi-party tasks and distributed computation (Kashefi and Pappa, 2016) and blind quantum computation (Broadbent et al., 2009). Topological quantum computation (Freedman et al., 2002) was an inspiration for certain knot-theoretic algorithms (Aharonov et al., 2006), and is closely related to algorithms for topological error-correction and fault tolerance. The adiabatic quantum computation model (Farhi et al., 2000) is constructed with the task of ground-state preparation in mind, and is thus well-suited for optimization problems (Heim et al., 2017).
FIG. 2 Computational models
Research into QIP has also produced examples of interesting restricted models of computation: models which are, in all likelihood, not universal for efficient QC, but which can still solve tasks that seem hard for classical machines.
Recently, there has been an increasing interest in such models, specifically the linear optics model, the so-called low-depth random circuits model and the commuting quantum circuits model 5. In (Aaronson and Arkhipov, 2011) it was shown that the linear optics model can efficiently produce samples from a distribution specified by the permanents of certain matrices, and it was proven (barring certain plausible mathematical conjectures) that classical computers cannot reproduce the samples from the same distribution in polynomial time. Similar claims have been made for low-depth random circuits (Boixo et al., 2016; Bravyi et al., 2017) and commuting quantum circuits, which comprise only commuting gates (Shepherd and Bremner, 2009). Critically, these restricted models can be realized to sufficient size as to allow for a demonstration of computations which the most powerful classical computers currently available cannot achieve, with near-term technologies. This milestone, referred to as quantum supremacy (Preskill, 2012; Lund et al., 2017), has been getting a significant amount of attention in recent times. Another highly active field in QIP concentrates on (analogue) quantum simulations, with applications in quantum optics, condensed matter systems, and quantum many-body physics (Georgescu et al., 2014). Many, if not most, of the above mentioned aspects of quantum computation are finding a role in quantum machine learning applications. Next, we briefly review basic concepts from the classical theories of artificial intelligence and machine learning.

4 Various notions of "equally powerful" are usually expressed in terms of algorithmic reductions. In QIP, typically, the computational model B is said to be at least as powerful as the computational model A, if any algorithm of complexity O(f(n)) (where f(n) is some scaling function, e.g. "polynomial" or "exponential"), defined for model A, can be efficiently (usually this means in polynomial time) translated to an algorithm for B, which solves the same problem, and whose computational complexity is O(poly(f(n))). Two models are then equivalent if A is as powerful as B and B is as powerful as A. Which specific reduction complexity we care about (polynomial, linear, etc.) depends on the setting: e.g. for factoring, polynomial reductions suffice, since there seems to be an exponential separation between classical and quantum computation. In contrast, for search, the reductions need to be sub-quadratic to maintain a quantum speed-up, since only a quadratic improvement is achievable.

5 Other restricted models exist, such as the one clean qubit model (DQC1), where the input comprises only one qubit in a pure state, while the others are maximally mixed. This model can be used to compute a function (the normalized trace of a unitary specified by a quantum circuit) which seems to be hard for classical devices.
B. Artificial intelligence and machine learning
Executive summary: The field of artificial intelligence incorporates various methods, which are predominantly focused on solving problems which are hard for computers, yet seemingly easy for humans. Perhaps the most important class of such tasks pertains to learning problems. Various algorithmic aspects of learning problems are tackled by the field of machine learning, which evolved from the study of pattern recognition in the context of AI. Modern machine learning addresses a variety of learning scenarios, dealing with learning from data, e.g. supervised (data classification) and unsupervised (data clustering) learning, or from interaction, e.g. reinforcement learning. Modern AI states, as its ultimate goal, the design of an intelligent agent which learns and thrives in unknown environments. Artificial agents that are intelligent in a general, human sense must have the capacity to tackle all the individual problems addressed by machine learning and other more specialized branches of AI. They will consequently require a complex combination of techniques.
In its broadest scope, the modern field of artificial intelligence (AI) encompasses a wide variety of sub-fields. Most of these sub-fields deal with the understanding and abstracting of aspects of various human capacities which we would describe as intelligent, and attempt to realize the same capacities in machines. The term "AI" was coined at the Dartmouth College conferences in 1956 (Russell and Norvig, 2009), which were organized to develop ideas about machines that can think, and the conferences are often cited as the birthplace of the field. The conferences aimed to "find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves" 6. The history of the field has been turbulent, with strong opinions on how AI should be achieved. For instance, over the course of its first 30 years, the field crystalized into two main competing and opposite viewpoints (Eliasmith and Bechtel, 2006) on how AI may be realized: computationalism, holding that the mind functions by performing purely formal operations on symbols, in the manner of a Turing machine (see e.g. Newell and Simon, 1976), and connectionism, which models mental and behavioral phenomena as the emergent processes of interconnected networks of simple units, mimicking the biological brain (see e.g. Medler, 1998). Aspects of these two viewpoints still influence approaches to AI. Irrespective of the underlying philosophy, for the larger part of the history of AI, the realization of "genuine AI" was, purportedly, perpetually "a few years away", a feature often attributed also to quantum computers by critics of the field. In the case of AI, such runaway optimism had a calamitous effect on the field, in multiple instances, especially in the context of funding (leading to periods now dubbed "winters of AI").
By the late 90s, the reputation of the field was low, and, even in hindsight, there is no consensus on the reasons why AI failed to produce human-level intelligence. Such factors played a vital role in the fragmentation of the field into various sub-fields which focused on specialized tasks, often appearing under different names. A particularly influential perspective on AI, often called nouvelle or embodied AI, was advocated by Brooks, who posited that intelligence emerges from (simple) embodied systems which learn through interaction with their environments (Brooks, 1990). In contrast to the standard approaches of the time, nouvelle AI insists on learning, rather than having properties pre-programmed, and on the embodiment of AI entities, as opposed to abstract entities like chess playing programs.
To a physicist, this perspective that intelligence is embodied is reminiscent of the viewpoint that information is physical, which had been "the rallying cry of quantum information theory" (Steane, 1998). Such embodied approaches are particularly relevant in robotics, where the key issues involve perception (the capacity of the machine to interpret the external world using its sensors, which includes computer vision, machine hearing and touch) and motion and navigation (critical in e.g. automated cars). Related to human-computer interfaces, AI also incorporates the field of natural language processing, which includes language understanding - the capacity of the machine to derive meaning from natural language - and language generation - the ability of the machine to convey information in a natural language. Other general aspects of AI pertain to a few well-studied capacities of intelligent entities (Russell and Norvig, 2009). For instance, automated planning is related to decision theory 7 and, broadly speaking, addresses the task of identifying strategies (i.e. sequences of actions) which need to be performed in order to achieve a goal, while minimizing (a specified) cost. Already the simple class of so-called off-line planning tasks, where the task, cost function, and the set of possible actions are known beforehand, contains genuinely hard problems, e.g. it includes, as a special case, the NP-complete 8 travelling salesman problem (TSP); for illustration see Fig. 3 9 . In modern times, TSP itself would no longer be considered a genuine AI problem, but it serves to illustrate how already very specialized, simple sub-sub-tasks of AI may be hard. More general planning problems also include on-line variants, where not everything is known beforehand (e.g. TSP where the "map" may fail to include all the available roads, and one simply has to actually travel to find good strategies). On-line planning overlaps with reinforcement learning, discussed later in this section.
Closely related to planning is the capacity of intelligent entities for problem solving. In technical literature, problem solving is distinguished from planning by a lack of the additional structure in the problem usually assumed in planning - in other words, problem solving is more general and typically more broadly defined than planning. The lack of structure in general problem solving establishes a clear connection to (also unstructured) searching and optimization: in the setting of no additional information or structure, problem solving is the search for the solution to a precisely specified problem. While general problem solving can, theoretically, be achieved by a general search algorithm (which can still be subdivided into classes such as depth-first, breadth-first, depth-limited search etc.), more often there is structure to the problem, in which case informed search strategies - often called heuristic search strategies - will be more efficient (Russell and Norvig, 2009). Human intelligence, to no small extent, relies on our knowledge. We can accumulate knowledge, reason over it, and use it to come to the best decisions, for instance in the context of problem solving and planning. An aspect of AI tries to formalize such logical reasoning, knowledge accumulation and knowledge representation, often relying on formal logic, most often first order logic.
A particularly important class of problems central to AI, and related to knowledge acquisition, involves the capacity of the machine to learn through experience. This feature was emphasized already in the early days of AI, and the derived field of machine learning (ML) now stands as arguably the most successful aspect (or spin-off) of AI, which we will address in more detail.
1. Learning from data: machine learning

Stemming from the traditions of pattern recognition, such as recognizing handwritten text, and statistical learning theory (which places ML ideas in a rigorous mathematical framework), ML, broadly speaking, explores the construction of algorithms that can learn from, and make predictions about, data. Traditionally, ML deals with two main learning settings: supervised and unsupervised learning, which are closely related to data analysis and data mining-type tasks (Shalev-Shwartz and Ben-David, 2014). A broader perspective (Alpaydin, 2010) on the field also includes reinforcement learning (Sutton and Barto, 1998), which is closely related to learning as it is realized by biological intelligent entities. We shall discuss reinforcement learning separately. In broad terms, supervised learning deals with learning-by-example: given a certain number of labeled points (the so-called training set) {(x_i, y_i)}_i, where the x_i denote data points, e.g. N-dimensional vectors, and the y_i denote labels (e.g. binary variables, or real values), the task is to infer a "labeling rule" x_i → y_i which allows us to guess the labels of previously unseen data, that is, beyond the training set. Formally speaking, we deal with the task of inferring the conditional probability distribution P(Y = y|X = x) (more specifically, generating a labeling function which, perhaps probabilistically, assigns labels to points) based on a certain number of samples from the joint distribution P(X, Y). For example, we could be inferring whether a particular DNA sequence belongs to an individual who is likely to develop diabetes. Such an inference can be based on datasets of patients whose DNA sequences had been recorded, along with the information on whether they actually developed diabetes. In this example, the variable Y (diabetes status) is binary, and the assignment of labels is not deterministic, as diabetes also depends on environmental factors.
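As a concrete, toy illustration of inferring a labeling rule from samples, the sketch below implements a 1-nearest-neighbour classifier in pure Python; the training data and the choice of 1-NN are illustrative assumptions, not a method singled out by the text.

```python
def nearest_neighbour_label(training_set, x):
    """Label a new point by the label of its closest training point (1-NN).

    training_set: list of (point, label) pairs, points being tuples of floats.
    A minimal stand-in for inferring P(Y|X) from samples of P(X, Y).
    """
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    _, label = min(training_set, key=lambda pair: sq_dist(pair[0], x))
    return label

# Hypothetical training set: two clusters with labels -1 and +1.
train = [((0.0, 0.0), -1), ((0.2, 0.1), -1),
         ((1.0, 1.0), +1), ((0.9, 1.2), +1)]
```

A query point near the origin inherits label −1; one near (1, 1) inherits +1, mimicking the generalization beyond the training set described above.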
Another example could include two real variables, where x is the height from which an object is dropped, and y the duration of the fall. In this example, both variables are real-valued, and (in vacuum) the labeling relation will be essentially deterministic. In unsupervised learning, the algorithm is provided just with the data points, without labels. Broadly speaking, the goal here is to identify the underlying distribution or structure, and other informative features of the dataset. In other words, the task is to infer properties of the distribution P(X = x), based on a certain number of samples, relative to a user-specified guideline or rule. Standard examples of unsupervised learning are clustering tasks, where data-points are supposed to be grouped in a manner which minimizes the within-group mean distance, while maximizing the distance between the groups. Note that group membership can be thought of as a label, thus this also corresponds to a labeling task, but lacks "supervision": examples of correct labelings.
In basic examples of such tasks, the number of expected clusters is given by the user, but this too can be automatically optimized.
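The canonical clustering example of this kind is k-means (Lloyd's algorithm, mentioned later among the common ML methods); the following is a minimal sketch, assuming Euclidean distance, a user-supplied k, and a toy two-blob dataset.

```python
import random

def k_means(points, k, steps=20, seed=0):
    """Lloyd's algorithm: alternately assign each point to its nearest
    centroid, then recompute each centroid as the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(steps):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        # Keep the old centroid if a cluster happens to be empty.
        centroids = [tuple(sum(cs) / len(cs) for cs in zip(*cl)) if cl
                     else centroids[j] for j, cl in enumerate(clusters)]
    return centroids, clusters

# Two well-separated blobs (illustrative data).
blobs = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
cents, cls = k_means(blobs, k=2)
```

On this data the algorithm recovers the two blobs regardless of which pair of points seeds the centroids.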
Other types of unsupervised problems include feature extraction and dimensionality reduction, critical in combatting the so-called curse of dimensionality. The curse of dimensionality refers to problems which stem from the fact that the raw representations of real-life data often occupy very high dimensional spaces. For instance, a standard-resolution one-second video-clip at a standard refresh frequency, capturing events which are extended in time, maps to a vector in a ∼ 10^8 dimensional space 10 , even though the relevant information it carries (say, the licence-plate number of a speeding car that was filmed) may be significantly smaller. More generally, it is intuitively clear that, since geometric volume scales exponentially with the dimension of the space it is in, the number of points needed to capture (or learn) general features of an n-dimensional object will also scale exponentially.
In other words, learning in high dimensional spaces is exponentially difficult. Hence, a means of dimensionality reduction, from raw representation space (e.g. moving car clips), to the relevant feature space (e.g. licence-plate numbers) is a necessity in any real-life scenario.
These approaches map the data-points to a space of significantly reduced dimension, while attempting to maintain the main features - the relevant information - of the structure of the data. A typical example of a dimensionality reduction technique is principal component analysis. In practice, such algorithms also constitute an important step in data pre-processing for other types of learning and analysis. Furthermore, this setting also includes generative models (related to density estimation), where new samples from an unknown distribution are generated, based on a few exact samples. As humanity is amassing data at an exponential rate (insideBIGDATA, 2017), it becomes ever more relevant to extract genuinely useful information in an automated fashion. In the modern world, ubiquitous big data analysis and data mining are the central applications of supervised and unsupervised learning.
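The core of principal component analysis, mentioned above, can be sketched without any libraries via power iteration on the covariance matrix; the toy rank-one dataset below is an illustrative assumption.

```python
def principal_component(data, iters=200):
    """Find the leading principal direction of a dataset by power iteration
    on its covariance matrix (the core computation of PCA).

    Assumes the starting vector is not orthogonal to the leading direction.
    """
    n, d = len(data), len(data[0])
    mean = [sum(col) / n for col in zip(*data)]
    X = [[x - m for x, m in zip(row, mean)] for row in data]   # centre the data
    # Covariance matrix C = X^T X / n.
    C = [[sum(X[s][i] * X[s][j] for s in range(n)) / n
          for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Illustrative data lying exactly along the direction (1, 2, 2)/3 in 3D:
# the "relevant information" is one-dimensional despite the 3D representation.
v = principal_component([[t, 2 * t, 2 * t] for t in range(5)])
```

Projecting onto the recovered direction compresses this 3D data to 1D with no loss, which is the idealized version of what dimensionality reduction aims for on real data.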
Learning from interaction: reinforcement learning
Reinforcement learning (RL) (Russell and Norvig, 2009;Sutton and Barto, 1998) is, traditionally, the third canonical category of ML. Partially owing to the relatively recent prevalence of (un)supervised methods in the context of the pervasive data mining and big data analysis topics, many modern textbooks on ML focus on these methods, while RL strategies have mostly remained reserved for the robotics and AI communities. Lately, however, the surge of interest in adaptive and autonomous devices, robotics, and AI has increased the prominence of RL methods. One recent celebrated result which relies on the extensive use of standard ML and RL techniques in conjunction is that of AlphaGo (Silver et al., 2016), a learning system which mastered the game of Go, and achieved, arguably, superhuman performance, easily defeating the best human players. This result is notable for multiple reasons, including the fact that it illustrates the potential of learning machines over special-purpose solvers in the context of AI problems: while specialized devices which relied on programming over learning (such as Deep Blue) could surpass human performance in chess, they failed to do the same for the more complicated game of Go, which has a notably larger space of strategies. The learning system AlphaGo achieved this many years ahead of typical predictions. The distinction between RL and the data-driven ML methods is particularly relevant from a quantum information perspective, which will be addressed in more detail in section VII.B. RL constitutes a broad learning setting, formulated within the general agent-environment paradigm (AE paradigm) of AI (Russell and Norvig, 2009). Here, we do not deal with a static database, but rather with an interactive task environment. The learning agent (or, a learning algorithm) learns through the interaction with the task environment.
FIG. 5 An agent interacts with an environment by exchanging percepts and actions. In RL, rewards can be issued. Basic environments are formalized by Markov Decision Processes (inset in Environment). Environments are reminiscent of oracles, see 1, in that the agent only has access to the input-output relations. Further, figures of merit for learning often count the number of interaction steps, which is analogous to the concept of query complexity.
As an illustration, one can imagine a robot acting on its environment and perceiving it via its sensors - the percepts being, say, snapshots made by its visual system, and the actions being, say, movements of the robot - as depicted in Fig. 5. The AE formalism is, however, more general and abstract. It is also unrestrictive, as it can also express supervised and unsupervised settings. In RL, it is typically assumed that the goal of the process is manifest in a reward function, which, roughly speaking, rewards the agent whenever the agent's behavior was correct (in which case we are dealing with positive reinforcement, but other variants of operant conditioning are also used 11 ). This model of learning seems to cover quite well how most biological agents (i.e. animals) learn: one can illustrate this through the process of training a dog to do a trick by giving it treats whenever it performs well. As mentioned earlier, RL is all about learning how to perform the "correct" sequence of actions, given the received percepts, which is an aspect of planning, in a setting which is fully on-line: the only way to learn about the environment is by interacting with it.
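The agent-environment loop with rewards can be made concrete with a minimal tabular Q-learning sketch on a toy chain-shaped MDP; the environment, learning rate, discount and reward scheme here are illustrative assumptions, not taken from any specific reference.

```python
import random

def q_learning(n_states=4, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning on a chain MDP.

    States 0..n-1; action 0 moves left, action 1 moves right; reaching the
    rightmost (terminal) state yields reward 1. The agent learns purely from
    interaction: act eps-greedily, observe the reward, update Q.
    """
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if rng.random() < eps:                      # explore
                a = rng.randrange(2)
            else:                                       # exploit (ties go right)
                a = 0 if Q[s][0] > Q[s][1] else 1
            s_next = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

Q = q_learning()
```

After training, the greedy policy at every non-terminal state is "move right", i.e. the agent has learned the correct sequence of actions from rewards alone.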
Intermediary learning settings
While supervised, unsupervised and reinforcement learning constitute the three broad categories of learning, there are many variations and intermediary settings. For instance, semi-supervised learning interpolates between unsupervised and supervised settings, where the number of labeled instances is very small compared to the total available training set. Nonetheless, even a small number of labeled examples have been shown to improve the bare unsupervised performance (Chapelle et al., 2010), or, from an opposite perspective, unlabeled data can help with classification when facing a small quantity of labeled examples. In active supervised learning, the learning algorithm can further query the human user, or supervisor, for the labels of particular points which would improve the algorithm's performance. This setting can only be realized when it is operatively possible for the user to correctly label all the points, and may yield advantages when this exact labeling process is expensive. Further, in supervised settings, one can consider so-called inductive learning algorithms which output a classifier function, based on the training data, which can be used to label all possible points. A classifier is simply a function which assigns labels to the points in the domain of the data. In contrast, in transductive learning (Chapelle et al., 2010) settings, the points that need to be labeled later are known beforehand -in other words, the classifier function is only required to be defined on a-priori known points. Next, a supervised algorithm can perform lazy learning, meaning that the whole labeled dataset is kept in memory in order to label unknown points (which can then be added), or eager learning, in which case, the (total) classifier function is output (and the training set is no longer explicitly required) (Alpaydin, 2010). 
Typical examples of eager learning are linear classifiers, such as basic support vector machines, described in the next section, whereas lazy learning is exemplified by e.g. nearest-neighbour methods 12 . Our last example, online learning (Alpaydin, 2010), can be understood as either an extension of eager supervised learning, or a special case of RL. Online learning generalizes standard supervised learning, in the sense that the training data is provided sequentially to the learner, and used to, incrementally, update the classifying function. In some variants, the algorithm is asked to classify each point, and is given the correct response afterward, and the performance is based on the guesses. The match/mismatch of the guess and the actual label can also be understood as a reward, in which case online learning becomes a restricted case of RL.
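The online learning setting just described can be sketched with the classic mistake-driven perceptron update rule; the one-dimensional data stream and learning rate below are illustrative assumptions.

```python
def online_perceptron(stream, n_features, lr=1.0):
    """Perceptron trained online: for each arriving point, predict a label,
    receive the true label, and update the weights only on mistakes."""
    w, b, mistakes = [0.0] * n_features, 0.0, 0
    for x, y in stream:                                  # y in {-1, +1}
        guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
        if guess != y:                                   # mistake-driven update
            mistakes += 1
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]
            b += lr * y
    return w, b, mistakes

# A separable 1D stream (x > 0.5 maps to +1), presented sequentially.
stream = [((1.0,), 1), ((0.0,), -1), ((0.9,), 1), ((0.1,), -1)] * 5
w, b, mistakes = online_perceptron(stream, n_features=1)
```

The guess/feedback structure is exactly the restricted-RL view mentioned above: a correct guess corresponds to an (implicit) reward, and the classifier improves incrementally as the stream arrives.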
Putting it all together: the agent-environment paradigm
The aforementioned specialized learning scenarios can be phrased in a unifying language, which also enables us to discuss how specialized tasks fit into the objective of realizing true AI. In the modern take on AI (Russell and Norvig, 2009), the central concept of the theory is that of an agent. An agent is an entity which is defined relative to its environment, and which has the capacity to act, that is, to do something.
In computer science terminology the requirements for something to be an agent (or for something to act) are minimal, and essentially everything can be considered an agent -for instance, all non-trivial computer programs are also agents.
While we, unsurprisingly, do not specify more precisely what intelligent behaviour entails, this simple perspective on AI already has non-trivial consequences. The first is that intelligence can be ascertained from the interaction history between the agent and its environment alone. Such a viewpoint on AI is also closely related to behavior-based AI and the ideas behind the Turing test (Turing, 1950); it is in line with an embodied viewpoint on AI (see embodied AI in section I.B) and it has influenced certain approaches towards quantum AI, touched upon in section VII.C. The second is that the development of better ML and other types of relevant algorithms does constitute genuine progress towards AI, conditioned only on the fact that such algorithms can be coherently combined into a whole agent. It is, however, important to note that actually achieving this integration may be far from trivial.
In contrast to such strictly behavioral and operational points of view, an alternative approach towards whole agents (or complete intelligent agents) focuses on agent architectures and cognitive architectures (Russell and Norvig, 2009). In this approach to AI the emphasis is equally placed not only on intelligent behaviour, but also on forming a theory about the structure of the (human) mind. One of the main goals of a cognitive architecture is to design a comprehensive computational model which encapsulates various results stemming from research in cognitive psychology. The aspects which are predominantly focused on understanding human cognition are, however, not central for our take on AI.
We discuss this further in section VII.C.

b. Notation

Throughout this review paper, we have strived to use the notation specified in the reviewed works. To avoid notational chaos, we do, however, keep the notation consistent within subsections - this means that, within one subsection, we adhere to the notation used in the majority of works if inconsistencies arise.
II. CLASSICAL BACKGROUND
The main purpose of this section is to provide the background regarding classical ML and AI techniques and concepts which are either addressed in the quantum proposals we discuss in the following sections, or important for the proper positioning of a quantum proposal in the broader learning context. The concepts and models of this section include common models found in the classical literature, but also certain more exotic models which have been addressed in the modern quantum ML literature. While this section contains most of the classical background needed to understand the basic ideas of the quantum ML literature, to tame the length of this section, certain very specialized classical ML ideas are presented on-the-fly during the upcoming reviews. We first provide the basic concepts related to common ML models, emphasizing neural networks in II.A.1 and support vector machines in II.A.2. Following this, in II.A.3, we also briefly describe a larger collection of algorithmic methods and ideas arising in the context of ML, including regression models, k-means/medians, decision trees, but also more general optimization and linear algebra methods which are now commonplace in ML. Beyond the more pragmatic aspects of model design for learning problems, in subsection II.B we provide the main ideas of the mathematical foundations of learning: learnability - i.e. the conditions under which learning is possible at all - and computational learning theory and the theory of Vapnik and Chervonenkis, which rigorously investigate the bounds on learning efficiency for various supervised settings. Subsection II.C covers the basic concepts and methods of RL.
A. Methods of machine learning
Executive summary: Two particularly famous models in machine learning are artificial neural networks - inspired by biological brains - and support vector machines - arguably the best understood supervised learning model. Neural networks come in many flavours, all of which model the parallel information processing of a network of simple computational units, neurons. Feed-forward networks (without loops) are typically used for supervised learning. Most of the popular deep learning approaches fit in this paradigm. Recurrent networks have loops - this allows e.g. feeding information from the outputs of a (sub-)network back to its own input. Examples include Hopfield networks, which can be used as content-addressable memories, and Boltzmann machines, typically used for unsupervised learning. These networks are related to Ising-type models, at zero or finite temperatures, respectively - this sets the grounds for some of the proposals for quantization. Support vector machines classify data in a Euclidean space by identifying best separating hyperplanes, which allows for a comparatively simple theory. The linearity of this model is a feature making it amenable to quantum processing. The power of hyperplane classification can be improved by using kernels which, intuitively, map the data to higher dimensional spaces in a non-linear way. ML naturally goes beyond these two models, and includes regression (data fitting) methods and many other specialized algorithms.
Since the early days of the fields of AI and ML, there have been many proposals on how to achieve the flavours of learning we described above. In what follows we will describe two popular models for ML, specifically artificial neural networks and support vector machines. We highlight that many other models exist, and indeed, in many fields other learning methods (e.g. regression methods) are more commonly used. A selection of such other models is briefly mentioned thereafter, along with examples of techniques which overlap with ML topics in a broader sense, such as matrix decomposition techniques, and which can be used for e.g. unsupervised learning. Our choice of emphasis is, in part, again motivated by later quantum approaches, and by features of the models which are particularly well-suited for cross-overs with quantum computing.
Artificial neural networks and deep learning
Artificial neural networks (artificial NNs, or just NNs) are a biologically inspired approach to tackling learning problems. Originating in 1943 (McCulloch and Pitts, 1943), the basic component of NNs is the artificial neuron (AN), which is, abstractly speaking, a real-valued function AN : R^k → R parametrized by a vector of real weights (w_i)_i = w ∈ R^k and an activation function φ : R → R, given by
AN(x) = φ( ∑_i x_i w_i ), with x = (x_i)_i ∈ R^k. (1)
For the particular choice where the activation function is the threshold function, φ_θ(x) = 1 if x > θ ∈ R^+ and φ_θ(x) = 0 otherwise, the AN is called a perceptron (Rosenblatt, 1957), and has been studied extensively. Already such simple perceptrons perform classification into the half-spaces specified by the hyperplane with normal vector w and off-set θ (c.f. support vector machines later in this section). Note that, in ML terminology, a distinction should be made between artificial neurons (ANs) and perceptrons: perceptrons are special cases of ANs, with a fixed activation function - the step function - and a specified update or training rule. ANs in modern times use various activation functions (often the differentiable sigmoid functions) and can use different learning rules. For our purposes, this distinction will not matter. The training of such a classifier/AN for supervised learning purposes consists in optimizing the parameters w and θ so as to correctly label the training set - there are various figures of merit particular approaches care about, and various algorithms that perform such an optimization, which are not relevant at this point. By combining ANs in a network we obtain NNs (if the ANs are perceptrons, we usually talk about multi-layered perceptrons). While single perceptrons, or single-layered perceptrons, can realize only linear classification, already a three-layered network suffices to approximate any continuous real-valued function (with precision depending on the number of neurons in the inner, so-called hidden, layer). Cybenko (Cybenko, 1989) was the first to prove this for sigmoid activation functions, and Hornik soon thereafter generalized this to show that the same holds for all non-constant, monotonically increasing and bounded activation functions (Hornik, 1991). This shows that if sufficiently many neurons are available, a three-layered ANN can be trained to learn any dataset, in principle 17 . Although this result seems very positive, it comes at the price of a large model complexity, which we discuss in section II.B.2 18 .
In recent times, it has become apparent that using multiple, sequential, hidden feed-forward layers (instead of one large one), i.e. deep neural networks (deep NNs), may have additional benefits. First, they may reduce the number of parameters (Poggio et al., 2017). Second, the sequential nature of the processing of information from layer to layer can be understood as a feature abstraction mechanism (each layer processes the input a bit, highlighting relevant features which are processed further). This increases the interpretability of the model (intuitively, the capacity for high level explanations of the model's performance) (Lipton, 2016), which is perhaps best illustrated in so-called convolutional (deep) NNs, whose structure is inspired by the visual cortex. One of the main practical disadvantages of such deep networks is the computational cost of, and computational instabilities in, training (cf.
the vanishing gradient problem (Hochreiter et al., 2001)), and also the size of the dataset, which has to be large (Larochelle et al., 2009). With modern technology and datasets, both obstacles are becoming less prohibitive, which has led to a minor revolution in the field of ML. Not all ANNs are feed-forward: recurrent neural networks (recurrent NNs) allow signals to be fed back, i.e. they contain loops. Particular examples of such networks are the so-called Hopfield networks (HNs) and Boltzmann machines (BMs), which are often used for different purposes than feed-forward networks. In HNs, we deal with one layer, where the outputs of all the neurons serve as inputs to the same layer. The network is initialized by assigning binary values (traditionally, −1 and 1 are used, for reasons of convenience) to the neurons (more precisely, some neurons are set to fire, and some not), which are then processed by the network, leading to a new configuration. This update can be synchronous (the output values are "frozen" and all the second-round values are computed simultaneously) or asynchronous (the update is done one neuron at a time, in a random order). The connections in the network are represented by a matrix of weights (w_ij)_ij, specifying the connection strength between the i-th and the j-th neuron. The neurons are perceptrons with a threshold activation function, given by the local threshold vector (θ_i)_i. Such a dynamical system, under a few mild assumptions (Hopfield, 1982), converges to a configuration (i.e. a bit-string) which (locally) minimizes the energy functional
E(s) = −(1/2) ∑_ij w_ij s_i s_j + ∑_i θ_i s_i, (2)
with s = (s_i)_i, s_i ∈ {−1, 1}; that is, the Ising model. In general, this model has many local minima, which depend on the weights w_ij and the thresholds, the latter often set to zero. Hopfield provided a simple algorithm (called Hebbian learning, after D. Hebb, for historic reasons (Hopfield, 1982)) which enables one to "program" the minima - in other words, given a set of bitstrings S (more precisely, strings of signs +1/−1), one can find the matrix w_ij such that exactly the strings in S are local minima of the resulting functional E. Such programmed minima are then called stored patterns. Furthermore, Hopfield's algorithm achieves this in a manner which is local (the weights w_ij depend only on the i-th and j-th bits of the targeted strings, allowing parallelizability), incremental (one can modify the matrix w_ij to add a new string without having to keep the old strings in memory), and immediate. Immediateness means that the computation of the weight matrix is a finite, non-limiting process. Violating e.g. incrementality would lead to a lazy algorithm (see section I.B.3), which can be sub-optimal in terms of memory requirements, but often also in computational complexity 19 . It was shown that the minima of such a trained network are also attractive fixed-points, with a finite basin of attraction. This means that if a trained network is fed a new string and let run, it will (eventually) converge to the stored pattern which is closest to it (the distance measure that is used depends on the learning rule, but typically it is the Hamming distance, i.e. the number of entries where the strings disagree). Such a system then forms an associative memory, also called a content-addressable memory (CAM). CAMs can be used for supervised learning (the "labels" are the stored patterns), and conversely, supervised learning machinery can be used for CAM 20 . An important feature of HNs is their capacity: how many distinct patterns they can reliably store 21 .
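Hebbian storage and content-addressable recall can be sketched in a few lines of library-free Python; the pair of orthogonal patterns below is an illustrative assumption.

```python
def hebbian_weights(patterns):
    """Hebbian rule: w_ij = sum over stored patterns of s_i * s_j, zero diagonal.
    The rule is local and incremental: each pattern's contribution is added
    independently of the others."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, sweeps=5):
    """Deterministic asynchronous updates: each neuron takes the sign of its
    local field, driving the state toward a nearby energy minimum of Eq. (2)."""
    s = list(state)
    for _ in range(sweeps):
        for i in range(len(s)):
            field = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if field >= 0 else -1
    return s

# Two illustrative orthogonal patterns of 8 neurons each.
p1 = [1, 1, 1, 1, -1, -1, -1, -1]
p2 = [1, -1, 1, -1, 1, -1, 1, -1]
W = hebbian_weights([p1, p2])
```

Feeding the network a corrupted copy of a stored pattern (one flipped bit) recovers the original, which is exactly the CAM behaviour described above.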
For the Hebbian update rule this number scales as O(n/log(n)), where n is the number of neurons, which Storkey (Storkey, 1997) improved to O(n/√(log(n))). In the meantime, more efficient learning algorithms have been invented (Hillar and Tran, 2014).

[19] The lazy algorithm may have to process all the patterns/data-points, the number of which may be large and/or growing.
[20] For this, one simply needs to add a look-up table connecting labels to fixed patterns.
[21] Reliable storage entails that previously stored patterns will be recovered without change (i.e. they are energetic local minima of Eq. (2)), but also that there is a basin of attraction - a ball around the stored patterns with respect to a distance measure (most commonly the Hamming distance) for which the dynamical process of the network converges to the stored pattern. An issue with capacities is the occurrence of spurious patterns: local minima with a non-trivial basin of attraction which were not stored.

Aside from applications as CAMs, due to the representation in terms of the energy functional in Eq.
(2), and the fact that the running of HNs minimizes it, they have also been considered for tasks of optimization early on (Hopfield and Tank, 1985). The operative isomorphism between Hopfield networks and the Ising model, technically, holds only in the case of a zero-temperature system. Boltzmann machines generalize this. Here, the value of the i-th neuron is set to −1 or 1 (called "off" and "on" in the literature, respectively) with probability
p(s_i = −1) = (1 + exp(−β ∆E_i))^(−1), with ∆E_i = ∑_j w_ij s_j + θ_i, (3)
where ∆E_i is the energy difference between the configurations with the i-th neuron on or off, assuming the connections w are symmetric, and β is the inverse temperature of the system. In the limit of infinite running time, the network's configuration is given by the (input-state invariant) Boltzmann distribution over the configurations, which depends on the weights w, the local thresholds (weights) θ, and the temperature. BMs are typically used in a generative fashion, to model, and sample from, (conditional) probability distributions. In the simplest variant, the training of the network attempts to ensure that the limiting distribution of the network matches the observed frequencies in the dataset. This is achieved by the tuning of the parameters w and θ. The structure of the network dictates how complicated a distribution can be represented. To capture more complicated distributions over, say, k-dimensional data, the BMs have N > k neurons: k of them are denoted as visible units, and the remainder are called hidden units; the latter capture latent, not directly observable, variables of the system which generated the dataset, and which we are in fact modelling. Training such networks consists in a gradient ascent of the log-likelihood of observing the training data, in the parameter space. While this seems conceptually simple, it is computationally intractable, in part because it requires accurate estimates of probabilities of equilibrium distributions, which are hard to obtain. In practice, this is somewhat mitigated by using restricted BMs, where the hidden and visible units form the two parts of a bi-partite graph (so only connections between hidden and visible units exist).
(Restricted) BMs have a large spectrum of uses, including providing generative models (producing new samples from the estimated distribution), classifiers (via conditioned generation), feature extractors (a form of unsupervised clustering), and building blocks of deep architectures (Larochelle et al., 2009). However, their utility is mostly limited by the cost of training: for instance, the cost of obtaining equilibrium Gibbs distributions, or the errors stemming from heuristic training methods such as contrastive divergence (Larochelle et al., 2009; Bengio and Delalleau, 2009; Wiebe et al., 2014a).
Support Vector Machines
Support Vector Machines (SVMs) form a family of perhaps the best understood approaches for solving classification problems. The basic idea behind SVMs is that a natural way to classify points based on a dataset {x_i, y_i}_i, for binary labels y_i ∈ {−1, 1}, is to generate a hyperplane separating the negative instances from the positive ones. Such observations are not new, and indeed, perceptrons, briefly discussed in the previous section, perform the same function. Such a hyperplane can then be used to classify all points. Naturally, not all sets of points allow this (those that do are called linearly separable), but SVMs are further generalized to deal with sets which are not linearly separable, both by using so-called kernels (kernels, effectively, realize non-linear mappings of the original dataset to higher dimensions, where it may become separable, depending on a few technical conditions 22), and by allowing a certain degree of misclassification, which leads to so-called "soft-margin" SVMs. Even in the case the dataset is linearly separable, there will still be many hyperplanes doing the job. This leads to various variants of SVMs, but the basic variant identifies a hyperplane which: a) correctly splits the training points, and b) maximizes the so-called margin: the distance of the hyperplane to the nearest point (see Fig. 7). The distance of choice is most often the geometric Euclidean distance, which leads to so-called maximum margin classifiers.
In high-dimensional spaces, in general the maximization of the margin ends in a situation where there are multiple +1 and −1 instances of training data points which are equally far from the hyperplane. These points are called support vectors. The finding of a maximum margin classifier corresponds to finding a normal vector w and offset b of the separating hyperplane, which corresponds to the optimization problem
w* = argmin_{w,b} (1/2) ‖w‖²,   (4)
such that y_i (w · x_i + b) ≥ 1.   (5)
The formulation above is actually derived from the basic problem by noting that we may arbitrarily and simultaneously rescale the pair (w, b) without changing the hyperplane. Therefore, we may always choose a scaling such that the realized margin is 1, in which case the margin corresponds to ‖w‖^{−1}; this simply maps the maximization problem to a minimization problem, as above. The square ensures the problem is stated as a standard quadratic programming problem. This problem is often expressed in its Lagrange dual form, which reduces to
(α*_1, . . . , α*_N) = argmax_{α_1,...,α_N} Σ_i α_i − (1/2) Σ_{i,j} α_i α_j y_i y_j (x_i · x_j),   (6)
such that α_i ≥ 0 and Σ_i α_i y_i = 0,   (7)
where the solution of the original problem is given by
w* = Σ_i y_i α_i x_i.   (8)
In other words, we have expressed w* in the basis of the data-vectors, and the data-vectors x_i for which the corresponding coefficient α_i is non-zero are precisely the support vectors. The offset b* is easily computed given access to one support vector with, say, label +1, denoted x_+, by solving w* · x_+ + b* = 1. The class of a new point z can also be computed directly using the support vectors, via the following expression:
z ↦ sign(Σ_i y_i α_i (x_i · z) + b*).   (9)
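To make Eq. (9) concrete, here is a small hand-worked sketch (the two-point dataset, its α values and the offset are computed by hand for this toy case, not by a solver):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def classify(z, svs, b):
    """Eq. (9): sign(sum_i y_i * alpha_i * (x_i . z) + b), using only the
    support vectors. svs is a list of (alpha_i, y_i, x_i) triples."""
    s = sum(a * y * dot(x, z) for a, y, x in svs) + b
    return 1 if s >= 0 else -1

# Toy separable set: x+ = (1, 0) with label +1 and x- = (-1, 0) with label -1.
# The maximum-margin hyperplane is w* = (1, 0), b* = 0; the dual conditions
# w* = sum_i y_i alpha_i x_i and sum_i alpha_i y_i = 0 give alpha = 0.5 each.
svs = [(0.5, +1, (1.0, 0.0)), (0.5, -1, (-1.0, 0.0))]

print(classify((2.0, 3.0), svs, 0.0))   # -> 1  (right of the hyperplane)
print(classify((-0.5, 7.0), svs, 0.0))  # -> -1 (left of the hyperplane)
```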
The dual representation of the optimization problem is convenient when dealing with kernels. As mentioned, a way of dealing with data which is not linearly separable is to first map all the points into a higher-dimensional space via a non-linear function φ : R^m → R^n, where m < n and m is the dimensionality of the data-points. As we can see, in the dual formulation the data-points only appear in terms of inner products x_i · x_j. This leads to the notion of the kernel function k which, intuitively, measures the similarity of the points in the larger space, and is typically defined by k(x_i, x_j) = φ(x_i)^T φ(x_j).
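For a concrete (illustrative) instance of such a kernel, the degree-2 polynomial kernel k(x, y) = (x·y + 1)² on R² corresponds to an explicit 6-dimensional feature map φ, which a short check confirms numerically:

```python
import math

def k_poly(x, y):
    """Degree-2 polynomial kernel: k(x, y) = (x.y + 1)^2 for 2D inputs."""
    return (x[0] * y[0] + x[1] * y[1] + 1.0) ** 2

def phi(x):
    """Explicit feature map satisfying phi(x).phi(y) = k_poly(x, y)."""
    r2 = math.sqrt(2.0)
    return (x[0] ** 2, x[1] ** 2, r2 * x[0] * x[1], r2 * x[0], r2 * x[1], 1.0)

x, y = (0.3, -1.2), (2.0, 0.5)
lhs = k_poly(x, y)                                # kernel evaluated directly
rhs = sum(a * b for a, b in zip(phi(x), phi(y)))  # inner product in feature space
assert abs(lhs - rhs) < 1e-9
```

Evaluating k costs one inner product in R², while the explicit map lives in R⁶; this gap grows quickly with the degree and input dimension, which is precisely the point of working with the kernel instead of φ.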
In other words, to train the SVM according to a non-trivial kernel k, induced by the non-linear mapping φ, the objective in Eq. (6) is replaced with

argmax_{α_1,...,α_N} Σ_i α_i − (1/2) Σ_{i,j} α_i α_j y_i y_j k(x_i, x_j).
The offset is computed analogously, using just one application of φ. The evaluation of a new point is given in the same way, with z ↦ sign(Σ_i y_i α_i k(x_i, z) + b*). In other words, the data-points need not be explicitly mapped via φ, as long as the map-inducing inner product k(·, ·) can be computed efficiently. The choice of the kernel is critical for the performance of the classifier, and finding good kernels is non-trivial and often done by trial and error. While increasing the dimension of the extended space (the co-domain of φ) may make the data-points more nearly linearly separable (i.e. produce fewer mismatches for the optimal classifier), in practice they will not be fully separable (and, furthermore, increasing the kernel dimension comes with a cost, which we elaborate on later). To resolve this, SVMs allow for misclassification, with various options for measuring the "amount" of misclassification, inducing a penalty function. A typical approach is to introduce so-called "slack variables" ξ_i ≥ 0 into the original optimization task, so:
w* = argmin_{w,b} (1/2) ‖w‖² + C Σ_i ξ_i,   (10)
such that y_i (w · x_i + b) ≥ 1 − ξ_i.   (11)
If the value ξ_i of the optimal solution is between 0 and 1, the point i is correctly classified but lies within the margin; ξ_i > 1 denotes a misclassification. The (hyper)parameter C controls the relative importance placed on maximizing the margin versus the importance placed on avoiding misclassification. Interestingly, the dual formulation of the above problem is near-identical to the hard-margin setting discussed thus far, with the small difference that the parameters α_i are now additionally constrained by α_i ≤ C in Eq. (7). SVMs, as described above, have been extensively studied from the perspective of computational learning theory, and have been connected to other learning models. In particular, their generalization performance, which, roughly speaking, characterizes how well a trained model 23 will perform beyond the training set, can be analyzed. This is the most important feature of a classifying algorithm. We will briefly discuss generalization performance in section II.B.2. We end this short review of SVMs by considering a non-standard variant, which is interesting for our purposes as it has been beneficially quantized. SVMs as described are trained by finding the maximal margin hyperplane. Another model, called least-squares SVM (LS-SVM), takes a regression (i.e. data-fitting) approach to the problem, and finds a hyperplane which, essentially, minimizes the least-squares distance between the vector of labels and the vector of distances from the hyperplane, where the i-th entry of the latter is given by (w · x_i + b). This is effected by a small modification of the soft-margin formulation:
w*_LS = argmin_{w,b} (1/2) ‖w‖² + C Σ_i ξ_i²,   (12)
such that y_i (w · x_i + b) = 1 − ξ_i,   (13)
where the only two differences are that the constraints are now equalities, and the slack variables are squared in the optimization expression. This seemingly innocuous change causes differences in performance, but also in the training. The dual formulation of the latter optimization problem reduces to a linear system of equations:
[ 0      1^T        ] [ b ]   [ 0 ]
[ 1   Ω + γ^{−1} I  ] [ α ] = [ Y ],   (14)
where 1 is an "all ones" vector, Y is the vector of labels y_i, b is the offset, and γ is a parameter depending on C. α is the vector of the Lagrange multipliers yielding the solution. This vector again stems from the dual problem, which we omit due to space constraints and which can be found in (Suykens and Vandewalle, 1999). Finally, Ω is the matrix collecting the (mapped) "inner products" of the training vectors, so Ω_{ij} = k(x_i, x_j), where k is a kernel function, in the simplest case just the inner product. The training of LS-SVMs is thus simpler (and particularly convenient from a quantum algorithms perspective), but the theoretical understanding of the model, and its relationship to the well-understood SVMs, is still a matter of study, with few known results (see e.g. (Ye and Xiong, 2007)).
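A minimal sketch of Eq. (14) in action, on a hypothetical two-point training set with the plain inner-product kernel (the elimination routine is generic and only meant for such tiny systems):

```python
def solve(A, rhs):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Two scalar training points with the linear kernel k(x, x') = x * x'.
xs, ys, gamma = [1.0, -1.0], [1.0, -1.0], 10.0
Omega = [[xi * xj for xj in xs] for xi in xs]
n = len(xs)
# Assemble the block system of Eq. (14): [[0, 1^T], [1, Omega + I/gamma]].
A = [[0.0] + [1.0] * n] + [
    [1.0] + [Omega[i][j] + (1.0 / gamma if i == j else 0.0) for j in range(n)]
    for i in range(n)]
b, *alpha = solve(A, [0.0] + ys)

# With this formulation, the predictor is f(z) = sum_i alpha_i k(x_i, z) + b.
predict = lambda z: sum(a * xi * z for a, xi in zip(alpha, xs)) + b
```

Note how training is a single linear solve, in contrast to the quadratic program of the standard SVM; this is the property that makes LS-SVMs convenient for quantum linear-algebra subroutines.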
Other models
While NNs and SVMs constitute two popular approaches for ML tasks (in particular, supervised learning), many other models exist, suitable for a variety of ML problems. Here we very briefly list and describe some of the models which have also appeared in the context of quantum ML. While classification typically assigns discrete labels to points, in the case when the labeling function has a continuous range (say, the segment [0, 1]) we are dealing with function approximation tasks, often handled by regression techniques. Typical examples here include linear regression, which approximates the relationship between points and labels with a linear function, most often by minimizing the least-squares error. More broadly, such techniques are closely related to data-fitting, that is, fitting the parameters of a parametrized function so as to best fit the observed (training) data. The k-nearest neighbour algorithm is an intuitive classification algorithm which, given a new point, considers the k nearest training points (with respect to a metric of choice), and assigns the label by majority vote (if used for classification) or by averaging (in the case of regression, i.e. continuous label values). The mutually related k-means and k-medians algorithms are typically used for clustering: the k specifies the number of clusters, and the algorithm defines them in a manner which minimizes the within-cluster distance to the mean (or median) point.
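A bare-bones version of the k-nearest-neighbour vote described above, with squared Euclidean distance as the metric of choice (the data and query points are made up for illustration):

```python
def knn_classify(z, data, k=3):
    """Classify z by majority vote among the k nearest labeled points.
    data: list of (point, label) pairs; metric: squared Euclidean distance."""
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    nearest = sorted(data, key=lambda pl: dist(pl[0], z))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

data = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
        ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_classify((0.2, 0.2), data))  # -> a
print(knn_classify((5.2, 5.2), data))  # -> b
```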
Another method for classification and regression optimizes decision trees, where each dimension or entry (or, more generally, a feature 24) of the new data point influences a move down a decision tree. The depth of the tree is the length of the vector (or the number of features), and the degree of each node depends on the number of possible distinct features/levels per entry 25. The vertices of the tree specify an arbitrary feature of interest, which can influence the classification result, but most often they consider overlaps with geometrical regions of the data-point space. Decision trees are in principle maximally expressive (they can represent any labeling function), but very difficult to train without constraints. More generally, classification tasks can be treated as the problem of finding a hypothesis h : Data → Labels (in ML, the term hypothesis is essentially synonymous with the term classifier, also called a learner) from some family H, which minimizes error (or loss) under some loss function. For instance, the hypotheses realized by SVMs are given by the hyperplanes (in the kernel space), and in neural nets they are parametrized by the parameters of the nets: geometry, thresholds, activation functions, etc. In addition to loss terms, the minimization of which is called empirical risk minimization, ML applications benefit from adding a further component to the objective function: the regularization term, the purpose of which is to penalize complex functions, which could otherwise lead to poor generalization performance; see section II.B.2. The choices of loss functions, regularization terms, and classes of hypotheses lead to different particular models, and training corresponds to the optimization problem given by the choice of the loss function and the hypothesis (function) family. Furthermore, it has been shown that essentially any learning algorithm which requires only convex optimization for training leads to poor performance under noise.
Thus non-convex optimization is necessary for optimal learning (see e.g. (Long and Servedio, 2010; Manwani and Sastry, 2011)). An important class of meta-algorithms for classification problems are boosting algorithms. The basic idea behind boosting algorithms is the highly non-trivial observation, first proven via the seminal AdaBoost algorithm (Freund and Schapire, 1997), that multiple weak classifiers, which perform better than random on distinct parts of the input space, can be combined into an overall better classifier. More precisely, given a set of (weak) hypotheses/classifiers {h_j}, h_j : R^n → {−1, 1}, under certain technical conditions there exists a set of weights {w_i}, w_i ∈ R, such that the composite classifier of the form hc_w(x) = sign(Σ_i w_i h_i(x)) performs better. Interestingly, a single (weak) learning model can be used to generate the weak hypotheses needed for the construction of a better composite classifier, one which, in principle, can achieve arbitrarily high success probabilities, i.e. a strong learner. The first step of this process is achieved by altering the frequencies at which the labeled training data-points appear: this effectively alters the distribution over the data (in a black-box setting, such altered distributions can be obtained by e.g. rejection sampling methods). Training one and the same model on such differently distributed datasets can generate distinct weak learners, which emphasize distinct parts of the input space. Once such distinct hypotheses are generated, the weights w_i of the composite model are optimized. In other words, weak learning models can be boosted 26.
24 Features, however, have a more generic meaning in the context of ML. A data vector is a vector of features, where what a feature is depends on the context. For instance, features can simply be values at particular positions, or more global properties: e.g. a feature of data vectors depicting an image may be "contains a circle", and all vectors corresponding to pictures with circles have it. Even more generically, features pertain to observable properties of the objects the data-points represent ("observable" here simply means that the property can be manifested in the data vector).
25 For instance, we can classify humans, parrots, bats and turtles by the binary features "can fly" and "is mammal". E.g. choosing the root "can fly" leads to the branch can fly = no with two leaves decided by "is mammal": is mammal = yes pinpoints the human, whereas is mammal = no specifies the turtle. Parrots and bats are distinguished by the same feature in the can fly = yes subtree.
26 It should be mentioned that the above description only serves to illustrate the intuition behind boosting ideas. In practice, various boosting methods have distinct steps, e.g. they may perform the required optimizations in differing orders, use training phases in parallel, etc., which is beyond the needs of this review.
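Returning to the main text: the composite classifier hc_w(x) = sign(Σ_i w_i h_i(x)) can be illustrated with a fabricated five-point example, in which three weak classifiers, each wrong on a different point, combine into a perfect one:

```python
def composite(ws, hs):
    """hc_w(x) = sign(sum_i w_i h_i(x)): weighted vote over weak classifiers."""
    return lambda x: 1 if sum(w * h(x) for w, h in zip(ws, hs)) >= 0 else -1

xs = [0, 1, 2, 3, 4]
truth = {0: 1, 1: 1, 2: -1, 3: -1, 4: -1}
# Three weak classifiers (as lookup tables), each misclassifying one point.
h1 = {0: 1, 1: 1, 2: -1, 3: -1, 4: 1}.get    # wrong on x = 4
h2 = {0: 1, 1: -1, 2: -1, 3: -1, 4: -1}.get  # wrong on x = 1
h3 = {0: -1, 1: 1, 2: -1, 3: -1, 4: -1}.get  # wrong on x = 0

accuracy = lambda h: sum(h(x) == truth[x] for x in xs) / len(xs)
hc = composite([1.0, 1.0, 1.0], [h1, h2, h3])
print(accuracy(h1), accuracy(h2), accuracy(h3))  # each 0.8
print(accuracy(hc))                              # 1.0
```

In AdaBoost proper the weights w_i are not uniform but derived from each weak learner's error rate; a uniform majority vote suffices for this toy case.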
Aside from the broad classes of approaches to solving various ML tasks, ML is also often conflated with the specific computational tools used to solve them. A prominent example of this is the development of algorithms for optimization problems, especially those arising in the training of standard learning models. This includes e.g. particle swarm optimization, genetic and evolutionary algorithms, and even variants of stochastic gradient descent. ML also relies on other methods, including linear algebra tools, e.g. matrix decomposition methods such as singular value decomposition, QR, LU and other decompositions, derived methods such as principal component analysis, and various techniques from the field of signal analysis (Fourier, wavelet, cosine and other transforms). The latter set of techniques serves to reduce the effective dimension of the data set, and helps combat the curse of dimensionality. The optimization, linear algebra and signal processing techniques, and their interplay with quantum information, form an independent body of research with enough material to deserve a separate review, and we will only reflect on these methods when needed.
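As one self-contained instance of the linear algebra tools just mentioned, the leading principal component can be estimated by power iteration on the mean-centred covariance matrix; this is only a sketch (real PCA implementations use a full eigendecomposition or the SVD):

```python
def top_principal_component(data, iters=200):
    """Power iteration for the leading eigenvector of the covariance matrix."""
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - mean[j] for j in range(d)] for row in data]
    # Sample covariance matrix C = X^T X / n of the centred data.
    C = [[sum(X[k][i] * X[k][j] for k in range(n)) / n for j in range(d)]
         for i in range(d)]
    v = [1.0] * d  # must not be orthogonal to the principal direction
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Perfectly collinear 2D data: the principal direction is (1, 1)/sqrt(2).
v = top_principal_component([(1, 1), (2, 2), (3, 3), (-1, -1)])
```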
B. Mathematical theories of supervised and inductive learning
Executive summary: Aside from proposing learning models, such as NNs or SVMs, learning theory also provides formal tools to identify the limits of learnability. No free lunch theorems provide sobering arguments that naïve notions of "optimal" learning models cannot be obtained, and that all learning must rely on some prior assumptions. Computational learning theory relies on ideas from computational complexity theory to formalize many settings of supervised learning, such as the task of approximating or identifying an unknown (boolean) function, a concept, which is just the binary labeling function. The main question of the theory is the quantification of the number of invocations of the black-box, i.e. of the function (or of the oracle providing examples of the function's values on selected inputs), needed to reliably approximate the (partially) unknown concept to the desired accuracy. In other words, computational learning theory considers the sample complexity bounds for various learning settings, specifying the concept families and type of access. The theory of Vapnik and Chervonenkis, or simply VC theory, stems from the tradition of statistical learning. One of the key goals of the theory is to provide theoretical guarantees on generalization performance. This is captured by the following question: given a learning machine trained on a dataset of size N, stemming from some process, with a measured empirical risk (error on the training set) of some value R, what can be said about its future performance on other data-points which may stem from the same process? One of the key results of VC theory is that this can be answered with the help of a third parameter: the model complexity of the learning machine. Model complexity, intuitively, captures how complicated the functions the learner can learn are: the more complicated the model, the higher the chance of "overfitting", and consequently, the weaker the guarantees on performance beyond the training set.
Good learning models can control their model complexity, leading to a learning principle of structural risk minimization. The art of ML is a juggling act, balancing sample complexity, model complexity, and the computational complexity of the learning algorithm 27 .
Although the modern increase of interest in ML and AI is mostly due to applications, aspects of ML and AI do have strong theoretical backgrounds. Here we focus on foundational results which clarify what learning is, and which investigate what the limits of learning are. We will very briefly sketch some of the basic ideas. The first collection of results, called No Free Lunch theorems, places seemingly pessimistic bounds on the conditions under which learning is at all possible (Wolpert, 1996). No Free Lunch theorems are, essentially, a mathematical formalization of Hume's famous problem of induction (Hume, 1739; Vickers, 2016), which deals with the justification of inductive reasoning. One example of inductive reasoning occurs during generalization. Hume points out that, without a-priori assumptions, concluding any property concerning a class of objects based on any number of observations 28 is not justified.
In a similar vein, learning based on experience cannot be justified without further assumptions: expecting that a sequence of events leads to the same outcome as it did in the past is only justified if we assume a uniformity of nature. The problems of generalization and of uniformity can be formulated in the context of supervised learning and RL, with (not uncontroversial) consequences (c.f. (NFL)). For instance, one of the implications is that the expected performance of any two learning algorithms beyond the training set must be equal if one uniformly averages over all possible labeling functions, and analogous statements hold for RL settings. In other words, without assumptions on environments/datasets, the expected performance of any two learning models will be essentially the same, and two learning models cannot be meaningfully compared in terms of performance without making statements about the task environments in question. In practice, however, we always have some assumptions about the dataset and environment: for instance, the principle of parsimony (i.e. Occam's razor), asserting that simpler explanations tend to be correct, prevalent in science, suffices to break the symmetries required for NFLs to hold in their strongest form (Lattimore and Hutter, 2011; Hutter, 2010; Ben-David et al., 2011). No review of the theoretical foundations of learning should circumvent the works of Valiant and the general computational learning theory (CLT), which stems from a computer science tradition initiated by Valiant (Valiant, 1984), and the related VC theory of Vapnik and Chervonenkis, developed from a statistical viewpoint (Vapnik, 1995). We present the basic ideas of these theories in no particular order.
Computational learning theory
CLT can be understood as a rigorous formalization of supervised learning, and which stems from a computational complexity theory tradition. The most famous model in CLT is that of probably approximately correct (PAC) learning. We will explain the basic notions of PAC learning on a simple example: optical character recognition. Consider the task of training an algorithm to decide whether a given image (given as a black and white bitmap) of a letter corresponds to the letter "A", by supplying a set of examples and counterexamples: a collection of images. Each image x can be encoded as a binary vector in {0, 1} n (where n =height×width of the image).
Assuming that there exists a univocally correct assignment of the labels 0 (not "A") and 1 to each image implies that there exists a characteristic function f : {0, 1}^n → {0, 1} which discerns the letter A from other images. Such an underlying characteristic function (or, equivalently, the subset of bitstrings for which it attains the value 1) is, in computational learning theory, called a concept. Any (supervised) learning algorithm will first be supplied with a collection of N examples (x_i, f(x_i))_i. In some variants of PAC learning, it is assumed that the data-points x are drawn from some distribution D attaining values in {0, 1}^n. Intuitively, this distribution can model the fact that, in practice, the examples given to the learner stem from its interaction with the world, which specifies what kinds of "A"s we are more likely to see 29. PAC learning typically assumes inductive settings, meaning that the learning algorithm, given a sample set S_N (comprising N independently, identically distributed samples from D), outputs a hypothesis h : {0, 1}^n → {0, 1} which is, intuitively, the algorithm's "best guess" for the actual concept f. The quality of the guess is measured by the total error (also known as loss, or regret),
err_D(h_{S_N}) = Σ_x P(D = x) |h_{S_N}(x) − f(x)|,   (15)
averaged according to the same (training) distribution D, where h_{S_N} is the hypothesis the (deterministic) learning algorithm outputs given the training set S_N. Intuitively, the larger the training set (N), the smaller the error will be, but this also depends on the actual examples (and thus on S_N and D). PAC theory concerns itself with probably (δ), approximately (ε) correct learning, i.e. with the following expression:
P_{S_N ∼ D^N} [err_D(h_{S_N}) ≤ ε] ≥ 1 − δ,   (16)
where S ∼ D means S was drawn according to the distribution D. The above expression is a statement certifying that the learning algorithm, having been trained on the dataset sampled from D, will, except with probability δ, have a total error below ε. We say a concept f is (ε, δ)-learnable under distribution D if there exists a learning algorithm, and an N, such that Eq. (16) holds, and simply learnable if it is (ε, δ)-learnable for all choices of (ε, δ). The functional dependence of N on (ε, δ) (and on the concept and the distribution D) is called the sample complexity. In PAC learning, we are predominantly concerned with identifying tractable problems, so a concept/distribution pair (f, D) is PAC-learnable if there exists an algorithm for which the sample complexity is polynomial in ε^{−1} and δ^{−1}. These basic ideas are generalized in many ways. First, in case the algorithm cannot output all possible hypotheses, but only a restricted set H (e.g. the hypothesis space is smaller than the total concept space), we can look for the best-case solution by substituting the actual concept f with the optimal choice h* ∈ H which minimizes the error in Eq. (15), in all the expressions above. Second, we are typically not interested in just distinguishing the letter "A" from all other letters, but rather in recognizing all letters. In this sense, we typically deal with a concept class (e.g. "letters"), which is a set of concepts, and it is (PAC) learnable if there exists an algorithm under which each of the concepts in the class is (PAC) learnable. If, furthermore, the same algorithm also learns for all distributions D, then the class is said to be (distribution-free) learnable. CLT contains other models, generalizing PAC. For instance, concepts may be noisy or stochastic. In the agnostic learning model, the labeled examples (x, y) are sampled from a distribution D over {0, 1}^n × {0, 1}, which also models probabilistic concepts 30.
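The meaning of Eq. (16) can be checked empirically for a simple concept class. The sketch below (with made-up numbers) learns 1D threshold concepts f_t(x) = 1 iff x ≥ t under the uniform distribution on [0, 1]; for this class, N = ⌈ln(1/δ)/ε⌉ samples suffice, since the learner only fails when no sample lands in the interval [t, t + ε], which happens with probability (1 − ε)^N ≤ δ:

```python
import math
import random

def pac_failure_rate(eps=0.1, delta=0.1, trials=200, seed=0):
    """Fraction of runs where the learned threshold errs by more than eps."""
    rng = random.Random(seed)
    t_true = 0.37                               # hidden concept (arbitrary)
    N = math.ceil(math.log(1.0 / delta) / eps)  # sample-size bound for this class
    failures = 0
    for _ in range(trials):
        xs = [rng.random() for _ in range(N)]   # i.i.d. samples from D
        positives = [x for x in xs if x >= t_true]
        t_hat = min(positives) if positives else 1.0  # tightest consistent h
        if t_hat - t_true > eps:                # err_D(h) under uniform D
            failures += 1
    return failures / trials

# The observed failure rate is comparable to (typically below) delta.
rate = pac_failure_rate()
```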
Further, in agnostic learning, we define a set of concepts C ⊆ {c | c : {0, 1}^n → {0, 1}}, and given D, we can identify the best deterministic approximation of D in the set C, given by opt_C = min_{c∈C} err_D(c). The goal of learning is to produce a hypothesis h ∈ C which performs not much worse than the best approximation opt_C, in the PAC sense: the algorithm is an (ε, δ)-agnostic learner for D and C if, given access to samples from D, it outputs a hypothesis h ∈ C such that err_D(h) ≤ ε + opt_C, except with probability δ. Another common model in CLT is the exact learning from membership queries model (Angluin, 1988), which is, intuitively, related to active supervised learning (see section I.B.3). Here, we have access to an oracle, a black-box, which outputs the concept value f(x) when queried with an example x. The basic setting is exact, meaning we are required to output a hypothesis which makes no errors whatsoever, however with a bounded probability (say 3/4). In other words, this is PAC learning where ε = 0, but we get to choose which examples we are given, adaptively, and δ is bounded away from 1/2. The figure of merit usually considered in this setting is query complexity, which denotes the number of calls to the oracle the learning algorithm uses, and is for most intents and purposes synonymous with sample complexity 31. This, in spirit, corresponds to an active supervised learning setting. Much of PAC learning deals with identifying examples of interesting concept classes which are learnable (or proving that relevant classes are not), but other, more general results exist within this learning framework. For instance, we can ask whether we can achieve a finite-sampling universal learning algorithm: that is, an algorithm that can learn any concept, under any distribution, using some fixed number of samples N.
The No Free Lunch theorems we mentioned previously imply that this is not possible: for each learning algorithm (and each ε, δ) and any N, there is a setting (concept/distribution) which requires more than N samples to achieve (ε, δ)-learning. Typically, the criterion for a problem to be learnable assumes that there exists a classifier whose performance is essentially arbitrarily good, that is, it assumes the classifier is strong. The boosting result in ML, already touched upon in section II.A.3, shows that settling for weak classifiers, which perform only slightly better than random classification, does not generate a different notion of learnability (Schapire, 1990).
Classical CLT theory has also been generalized to deal with concepts with continuous ranges. In particular, so called p-concepts have range in [0, 1] (Kearns and Schapire, 1994). The generalization of the entire CLT to deal with such continuous-valued concepts is not without problems, but nonetheless, some of the central results, for instance quantities which are analogs of the VC-dimension, and analogous theorems relating this to generalization performance, can still be provided (see (Aaronson, 2007) for an overview given in the context of the learning of quantum states discussed in section V.A.1). Computational learning theory is closely related to the statistical learning theory of Vapnik and Chervonenkis (VC theory) which we discuss next.
VC theory
The statistical learning formalism of Vapnik and Chervonenkis was developed over the course of more than 30 years, and in this review we are forced to present just a chosen aspect of the total 31 When the oracle allows non-trivial inputs, one typically talks about query complexity. Sample complexity deals with the question of "how many samples" which suggest the setting where the oracle only produces outputs, without taking inputs. The distinction is not relevant for our purposes and is more often a matter of convention of the research line.
theory, which deals with generalization performance guarantees. In the previous paragraph on PAC learning, we have introduced the concept of total error, which we will refer to as (total) risk. It is defined as the average over all the data points, which is, for a hypothesis h, given with
R(h) = error(h) = Σ_x P(D = x) |h(x) − f(x)|

(we are switching notation to maintain consistency with the literature of differing communities). However, this quantity cannot be evaluated in practice, as we only have access to the training data. This leads us to the notion of the empirical risk, given by

R̂(h) = (1/N) Σ_{x ∈ S_N} |h(x) − f(x)|,   (17)
where S N is the training set drawn independently from the underlying distribution D.
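Eq. (17) is directly computable from the training set alone; a tiny (fabricated) example with {0, 1}-valued concepts:

```python
def empirical_risk(h, samples):
    """Eq. (17): average 0/1 loss of hypothesis h over the training set."""
    return sum(abs(h(x) - y) for x, y in samples) / len(samples)

# Training set generated by the XOR concept on 2-bit inputs.
S = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
h_xor = lambda x: x[0] ^ x[1]   # fits the sample perfectly
h_zero = lambda x: 0            # constant hypothesis

print(empirical_risk(h_xor, S))   # 0.0
print(empirical_risk(h_zero, S))  # 0.5
```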
The quantity \hat{R}(h) is intuitive and directly measurable. However, the problem of finding learning models which optimize the empirical risk alone is not in itself interesting, as it is trivially resolved with a look-up table. From a learning perspective, the more interesting and relevant quantity is the performance beyond the training set, which is contained in the unmeasurable R(h); indeed, the task of inductive supervised learning is identifying the h which minimizes R(h), given only the finite training set S_N. Intuitively, the hypothesis h which minimizes the empirical risk should also be our best bet for the hypothesis which minimizes R(h), but this can only make sense if our hypothesis family is somehow constrained, at least to a family of total functions: again, a look-up table has zero empirical risk, yet says nothing about what to do beyond the training set. One of the key contributions of VC theory is to establish a rigorous relationship between the observable quantity \hat{R}(h) (the empirical risk), the quantity we actually wish to bound, R(h) (the total risk), and the family of hypotheses our learning algorithm can realize. Intuitively, if the function family is too flexible (as is the case with look-up tables), a perfect fit on the examples says little. In contrast, a very restrictive set of hypotheses, say just one (chosen independently from the dataset/concept and the generating distribution), suggests that the empirical risk is a fair estimate of the total risk (however bad it may be), as nothing has been tailored to the training set. This brings us to the notion of the model complexity of the learning model, which has a few formalizations; here we focus on the Vapnik-Chervonenkis dimension of the model (VC dimension) 32. The VC dimension is an integer assigned to a set of hypotheses H ⊆ {h | h : S → {0, 1}} (e.g. the possible classification functions our learning algorithm can even in principle be trained to realize), where S can be, for instance, the set of bitstrings {0, 1}^n or, more generally, the set of real vectors in R^n. In the context of basic SVMs, the set of hypotheses is "all hyperplanes" 33. Consider now a subset C_k of k points in R^n in general position 34. These points can be assigned binary labels in 2^k different ways. The hypothesis family H is said to shatter the set C_k if, for any labeling of C_k, there exists a hypothesis h ∈ H which labels C_k according to that labeling. In other words, using functions from H we can learn any labeling function on the set C_k of k points in general position perfectly. The VC dimension of H is then the largest k_max such that there exists a set C_{k_max} of points in general position which is shattered (perfectly "labelable" for any labeling) by H.

32 Another popular measure of model complexity is e.g. Rademacher complexity (Bartlett and Mendelson, 2003). 33 Naturally, a non-trivial kernel function enriches the set of hypotheses realized by SVMs. 34 General position implies that no subset of points is co-planar beyond what is necessary, i.e. points in S ⊆ R^n are in general position if no hyperplane in R^n contains more than n points in S.

For instance, for n = 2, "rays" shatter three points but not four (imagine the vertices of a square where diagonally opposite vertices share the same label), and in n = N, "hyperplanes"
shatter N + 1 points. While it is beguiling to think that the VC dimension corresponds to the number of free parameters specifying the hypothesis family, this is not the case 35. The VC theorem (in one of its variants) (Devroye et al., 1996) then states that the empirical risk matches the total risk up to a deviation which decays in the number of samples but grows with the VC dimension of the model; more formally:
\[ P\left( R(h_{S_N}) - \hat{R}(h_{S_N}) \le \epsilon \right) = 1 - \delta, \qquad (18) \]
\[ \epsilon = \sqrt{ \frac{ d\left( \log(2N/d) + 1 \right) - \log(\delta/4) }{ N } }, \qquad (19) \]
where d is the VC dimension of the model, N the number of samples, and h_{S_N} the hypothesis output by the model given the training set S_N, which is sampled from the underlying distribution D.
The underlying distribution D implicitly appears also in the total risk R. Note that the chosen acceptable probability of incorrectly bounding the true error, the probability δ, contributes only logarithmically to the misestimation bound ε, whereas the VC dimension and the number of samples contribute (mutually inversely) linearly to the square of ε.
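The behaviour of the deviation bound of Eq. (19) is easy to probe numerically. A short sketch (the parameter values are illustrative) showing that the bound loosens with VC dimension d and tightens with sample size N:

```python
import math

def vc_epsilon(d, N, delta):
    """Deviation epsilon from Eq. (19): with probability at least 1 - delta,
    the empirical risk matches the total risk up to epsilon."""
    return math.sqrt((d * (math.log(2 * N / d) + 1) - math.log(delta / 4)) / N)

# The bound grows with model complexity and shrinks with more data:
eps_small_model = vc_epsilon(d=10, N=10_000, delta=0.05)
eps_big_model   = vc_epsilon(d=100, N=10_000, delta=0.05)
eps_more_data   = vc_epsilon(d=10, N=100_000, delta=0.05)
print(eps_small_model, eps_big_model, eps_more_data)
```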
The VC theorem suggests that the ideal learning algorithm would have a low VC dimension (allowing a good estimate of the relationship of the empirical and total risk), while at the same time, performing well on the training set. This leads to a learning principle called structural risk minimization.
Consider a learning model parametrized by an integer l ∈ N, such that each l induces a hypothesis family H_l, each more expressive than the previous, so H_l ⊆ H_{l+1}. Structural risk minimization (contrasted with empirical risk minimization, which minimizes the empirical risk alone) takes into account that, in order to have (a guarantee on) good generalization performance, we need both good observed performance (i.e. low empirical risk) and low model complexity. High model complexity induces the risk stemming from the structure of the problem, manifested in common issues such as overfitting. In practice, this is achieved by considering (meta-)parametrized models, like {H_l}, where we minimize a combination of l (which influences the VC dimension) and the empirical risk associated to H_l. Concretely, this is realized by adding a regularization term to the training optimization, so the (unregularized) learning process argmin_{h ∈ H} \hat{R}(h) is updated to argmin_{h_l ∈ H_l} [ \hat{R}(h_l) + reg(l) ], where reg(·) penalizes the complexity of the hypothesis family, or just of the given hypothesis. The VC dimension is also a vital concept in PAC learning, connecting the two frameworks. Note first that a concept class C, being a set of concepts, is also a legitimate set of hypotheses, and thus has a well-defined VC dimension d_C. The sample complexity of (ε, δ)-PAC-learning of C is then given by O( ε^{-1} (d_C + ln(1/δ)) ). Many of the above results can also be applied in the context of unsupervised learning; however, the theory of unsupervised (or structure) learning is mostly concerned with the understanding of particular methodologies, a topic which is beyond the scope of this review paper.

35 The canonical counterexample is the family specified by the partition of the real plane, halved by the graph of the two-parameter function h_{α,β}(x) = α sin(βx), which can be proven to shatter any finite number of points in n = 2. The fact that the number of parameters of a function does not fully capture its complexity should not be surprising, as any (continuous) function over k + n variables (parameters + dimension) can be encoded as a function over 1 + n variables.
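The shattering notion underlying the VC dimension can be checked computationally. A small sketch, using the perceptron rule (which finds a separating hyperplane whenever one exists) to verify that hyperplanes in R^2 shatter three points in general position, while the XOR labeling of a square is not realizable; the point coordinates and iteration cap are illustrative choices:

```python
from itertools import product

def perceptron_separable(points, labels, max_epochs=200):
    """Try to find a hyperplane (w, b) with sign(w.x + b) matching labels in
    {0, 1}; the perceptron rule converges iff the labeling is separable."""
    dim = len(points[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in zip(points, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:
                sign = 1 if y == 1 else -1
                w = [wi + sign * xi for wi, xi in zip(w, x)]
                b += sign
                mistakes += 1
        if mistakes == 0:
            return True
    return False

# Three points in general position: all 2^3 labelings are realizable, so
# hyperplanes in R^2 shatter them (VC dimension >= 3).
triangle = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
shattered = all(perceptron_separable(triangle, lab)
                for lab in product([0, 1], repeat=3))

# The XOR labeling of a square (diagonal vertices share a label) is not
# linearly separable, so four such points are not shattered.
square = [(0.0, 0.0), (1.0, 1.0), (1.0, 0.0), (0.0, 1.0)]
xor_separable = perceptron_separable(square, [1, 1, 0, 0])
print(shattered, xor_separable)
```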
C. Basic methods and theory of reinforcement learning
Executive summary: While RL, in all generality, studies learning in and from interactive task environments, perhaps the best understood models consider more restricted settings. Environments can often be characterized by Markov Decision Processes, i.e. they have states, which can be observed by the agent. The agent can cause transitions between states by its actions, but the rules of the transitions are not known beforehand. Some of the transitions are rewarded. The agent learns which actions to perform, given that the environment is in some state, such that it receives the highest value of rewards (expected return), either in a fixed time frame (finite horizon) or over (asymptotically) long time periods where future rewards are geometrically depreciated (infinite horizon). Such models can be solved by estimating action-value functions, which assign an expected return to actions given states, for which the agent must explore the space of strategies; other methods exist as well. In more general models, the state of the environment need not be fully observable, and such settings are significantly harder to solve. RL settings can also be tackled by models from the so-called Projective Simulation framework for the design of learning agents, inspired by physical stochastic processes. While comparatively new, this model is of particular interest as it was designed with the possibilities of beneficial quantization in mind. Interactive learning methods include models beyond textbook RL, including partially observable settings, which require generalization and more. Such extensions, e.g. generalization, typically require techniques from non-interactive learning scenarios, but also lead to agents with an ever increasing level of autonomy. In this sense, RL forms a bridge between ML and general AI models.
Broadly speaking, RL deals with the problem of learning how to optimally behave in unknown environments. In the basic textbook formalism, we deal with a task environment which is specified by a Markov decision process (MDP). MDPs are labeled, directed graphs with additional structure, comprising discrete and finite sets of states S = {s_i} and actions A = {a_i}, which denote the possible states of the environment and the actions the learning agent can perform on it, respectively. The actions chosen by the agent change the state of the environment, in a manner which is specific to the environment (MDP) and which may be probabilistic. This is captured by a transition rule P(s | s', a), denoting the probability of the environment ending up in the state s if the action a had been performed in the state s'. Technically, this can be viewed as a collection of action-specific Markov transition matrices {P_a}_{a ∈ A} that the learner can apply on the environment by performing an action.
These describe the dynamics of the environment conditioned on the actions of the agent. The final component specifying the environment is a reward function R : S × A × S → Λ, where Λ is a set of rewards, often binary. In other words, the environment rewards certain transitions 36. At each time instance, the action of the learner is specified by a policy: a conditional probability distribution π(a|s), specifying the probability of the agent outputting the action a provided it is in the state s. Given an MDP, intuitively the goal is finding good policies, i.e. those which yield high rewards. This can be formalized in many non-equivalent ways. Given a policy π and some initial state s we can e.g. define the finite-horizon expected total reward after N interaction steps as R^s_N(π) = \sum_{i=1}^{N} r_i, where r_i is the expected reward under policy π at time-step i, in the given environment, assuming we started from the state s. If the environment is finite and strongly connected 37, the finite-horizon rewards diverge as the horizon N grows. However, by adding a geometrically depreciating factor (rate γ) we obtain an always bounded expression R_γ(π) = \sum_{i=1}^{\infty} γ^i r_i, called the infinite-horizon expected reward (parametrized by γ), which is more commonly studied in the literature. The expected rewards in finite or infinite horizons form the typical figures of merit in solving MDP problems, which come in two flavors. First, in decision theory, or planning (in the context of AI), the typical goal is finding the policy π_opt which optimizes the (in)finite-horizon reward in a given MDP; formally: given the (full or partial) specification of the MDP M, solve π_opt = argmax_π R_{N/γ}(π), where R is the expected reward in the finite-horizon (for N steps) or infinite-horizon (for a given depreciation γ) setting, respectively. Such problems can be solved by dynamic and linear programming.
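The dynamic-programming route to the planning problem can be sketched on a toy MDP. The following is a minimal value iteration example; the two-state environment, its rewards, and the discount rate are all illustrative choices, and the transitions are taken deterministic for brevity:

```python
# Value iteration for a hypothetical 2-state MDP (states 0, 1; actions
# "stay", "go"). transition[s][a] gives the next state, reward[s][a] the
# immediate reward; GAMMA is the geometric depreciation rate.
GAMMA = 0.9
STATES = [0, 1]
ACTIONS = ["stay", "go"]
transition = {0: {"stay": 0, "go": 1}, 1: {"stay": 1, "go": 0}}
reward = {0: {"stay": 0.0, "go": 1.0}, 1: {"stay": 2.0, "go": 0.0}}

V = {s: 0.0 for s in STATES}
for _ in range(500):  # iterate the Bellman optimality operator to a fixed point
    V = {s: max(reward[s][a] + GAMMA * V[transition[s][a]] for a in ACTIONS)
         for s in STATES}

# The optimal (deterministic) policy is greedy with respect to V.
policy = {s: max(ACTIONS, key=lambda a: reward[s][a] + GAMMA * V[transition[s][a]])
          for s in STATES}
print(V, policy)
```

Here the optimal behaviour is to move to state 1 and stay there, collecting the reward of 2 forever, so V(1) = 2/(1 - γ) = 20 and V(0) = 1 + γ·20 = 19.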
In RL (Sutton and Barto, 1998), the specification of the environment (the MDP), in contrast, is not given, but rather can be explored by interacting with it dynamically. The agent can perform an action, and receive the subsequent state (and perhaps a reward). The ultimate goal here comes in two related (but conceptually different) flavours. One is to design an agent which will over time learn the optimal policy π_opt, meaning the policy can be read out from the memory of the agent/program. Slightly differently, we may wish for an agent which will, over time, gradually alter its behaviour (policy) so as to act according to the optimal policy. While in theory these two are closely related, in practice, e.g. in robotics, they are quite different, as the reward rate before convergence (perfect learning) also matters 38. First of all, we point out that RL problems as given above can be solved reliably whenever the MDP is finite and strongly connected: a trivial solution is to stick to a random policy until a reliable tomography of the environment can be done, after which the problem is resolved via dynamic programming 39. Often, environments actually have additional structure, so-called initial and terminal states: if the agent reaches the terminal state, it is "teleported" to the fixed initial state. Such structure is called episodic, and can be used as a means of ensuring the strong connectivity of the MDP.
One way of obtaining solutions is by tracking so-called value functions V^π : S → R, which assign to each state s the expected reward under policy π assuming we start from state s; this is done recursively: the value of the current state is the current reward plus the averaged value of the subsequent state (averaged under the stochastic transition rule of the environment P(s|a, s')). Optimal policies optimize these functions, and this too is achieved sequentially, by modifying the policy so as to maximize the value functions. This, however, assumes knowledge of the transition rule P(s|a, s').

36 Rewards can also be probabilistic. This can be modelled by explicitly allowing stochastic reward functions, or by extending the state space to include rewarding and non-rewarding instances of states (note, the reward depends on the current state, the action and the reached state), in which case the probability of the reward is encoded in the transition probabilities. 37 In this context this means that the underlying MDP has finite return times for all states, that is, there is a finite probability of going back to the initial state from any state for some sequence of actions. 38 These two flavours are closely related to the notions of on-policy and off-policy learning. These labels typically pertain to how the estimates of the optimal policy are internally updated, which may be in accordance with the actual current policy and actions of the agent, or independently from the executed action, respectively. For more details see e.g. (Sutton and Barto, 1998). 39 If the environment is not strongly connected, this is not possible: for instance, the first move of the learner may lead to "good" or "bad" regions from which there is no way out, in which case optimal behaviour cannot be obtained with certainty.

In further development of the theory, it was shown that tracking action-value functions Q^π(s, a), given by
\[ Q^\pi(s, a) = \sum_{s'} P(s'|a, s) \left( \Lambda(s, a, s') + \gamma V^\pi(s') \right), \qquad (20) \]
assigning a value not only to the state but to the subsequent action as well, can be modified into an online learning algorithm 40. In particular, the Q-values can be continuously estimated by a weighted averaging of the current reward (at timestep t) for an action-value pair with the estimate of the highest possible Q-value of the subsequent state-action pair:
\[ Q_{t+1}(s_t, a_t) = \underbrace{Q_t(s_t, a_t)}_{\text{old value}} + \underbrace{\alpha_t}_{\text{learning rate}} \cdot \Big( \overbrace{\underbrace{r_{t+1}}_{\text{reward}} + \underbrace{\gamma}_{\text{discount}} \cdot \underbrace{\max_a Q_t(s_{t+1}, a)}_{\text{estimate of optimal future value}}}^{\text{learned value}} - \underbrace{Q_t(s_t, a_t)}_{\text{old value}} \Big). \qquad (21) \]
Note that having access to the optimal Q-values suffices to find the optimal policy: given a state, simply pick an action with the highest Q-value; the algorithm above, however, says nothing about which policy the agent should employ while learning. In (Watkins and Dayan, 1992) it was shown that the algorithm specified by the update rule of Eq. 21, called Q-learning, indeed converges to the optimal Q-values as long as the agent employs any fixed policy which has non-zero probabilities for all actions given any state (the parameter α_t, which is a function of time, has to satisfy certain conditions, and γ should be the γ of the targeted figure of merit R_γ) 41. In essence, this result suffices for solving the first flavour of RL, where the optimal policy is "learned" by the agent in the limit but, in principle, never actually used. The convergence of the Q-learning update to the optimal Q-values, and consequently to the optimal behaviour, has been proven for all learning agents using greedy-in-the-limit, infinite-exploration (GLIE) policies. As the name suggests, such policies in the asymptotic limit perform the actions with the highest estimated value 42. At the same time, infinite exploration means that in the limit all state-action combinations will be tried out infinitely many times, ensuring that the true optimal action values are found and that local minima are avoided. In general, the optimal trade-off between these two competing properties, the exploration of the learning space and the exploitation of obtained knowledge, is quintessential for RL. There are many other RL algorithms which are based on state-value or action-value optimizations, such as SARSA 43, various value iteration methods, temporal difference methods etc. (Sutton and Barto, 1998).
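The update rule of Eq. (21) can be sketched in a few lines of tabular Q-learning. The two-state environment below is a hypothetical toy (same style as a deterministic MDP with states 0, 1 and actions "stay"/"go"); the agent follows a uniformly random exploratory policy, which assigns non-zero probability to every action, as the convergence result requires:

```python
import random

# Tabular Q-learning (Eq. 21) on an illustrative two-state deterministic MDP.
random.seed(1)
GAMMA, ALPHA = 0.9, 0.1
transition = {0: {"stay": 0, "go": 1}, 1: {"stay": 1, "go": 0}}
reward = {0: {"stay": 0.0, "go": 1.0}, 1: {"stay": 2.0, "go": 0.0}}

Q = {(s, a): 0.0 for s in (0, 1) for a in ("stay", "go")}
s = 0
for _ in range(100_000):
    a = random.choice(("stay", "go"))           # fixed exploratory policy
    s_next, r = transition[s][a], reward[s][a]
    best_next = max(Q[(s_next, b)] for b in ("stay", "go"))
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])  # Eq. (21)
    s = s_next

# The greedy policy read off the learned Q-values: go to state 1 and stay.
greedy = {s: max(("stay", "go"), key=lambda a: Q[(s, a)]) for s in (0, 1)}
print(greedy)
```

Note the off-policy flavour: the agent acts randomly throughout, yet the Q-table converges to the optimal action values, from which the optimal policy is read out at the end.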
In more recent times, progress has been achieved by using parametrized approximations of state/action-value functions, a cross-breed between function approximation and reinforcement learning, which reduces the search space of available Q-functions. Here, results which combine deep learning for value function approximation with RL have been particularly successful (Mnih et al., 2015), and the same approach also underpins the AlphaGo (Silver et al., 2016) system. This brings us to a different class of methods which do not optimize state- or action-value functions, but rather learn complete policies, often by performing an estimate of gradient descent, or other means of direct optimization, in policy space. This is feasible whenever the policies are specified indirectly, by a comparably small number of parameters, and can in some cases be faster (Peshkin, 2001). The methods we discussed thus far consider special cases of environments, where the environment is Markovian or, related to this, fully observable. The most common generalization of this are the so-called partially observable MDPs (POMDPs), where the underlying MDP structure is extended to include a set of observations O and a stochastic observation function given by the conditional probability distribution P_POMDP(o | s, a), with o ∈ O, s ∈ S, a ∈ A. The states of the environment are no longer directly accessible to the agent; rather, the agent perceives observations from the set O, which indirectly and, in general, stochastically depend on the actual unobservable environmental state, as given by the distribution P_POMDP, and on the action the agent took last. POMDPs are expressive enough to capture many real-world problems, and are thus a common world model in AI, but are significantly more difficult to deal with compared to MDPs 44. As mentioned, the setting of POMDPs moves us one step closer to arbitrary environment settings, which is the domain of artificial (general) intelligence 45.
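The observation structure of a POMDP forces the agent to reason over a distribution ("belief") over hidden states. A minimal sketch of the standard Bayesian belief update b'(s') ∝ P(o|s', a) Σ_s P(s'|s, a) b(s); the two-state environment and its 0.8-reliable observation channel are hypothetical choices:

```python
# Belief-state update for a toy POMDP: the agent never sees the state s, only
# an observation o drawn from P(o|s, a); it maintains a belief over states and
# updates it by Bayes' rule after each action/observation pair.

def belief_update(belief, action, obs, P_trans, P_obs):
    new_belief = {}
    for s2 in belief:
        predicted = sum(P_trans[(s, action)].get(s2, 0.0) * belief[s]
                        for s in belief)
        new_belief[s2] = P_obs[(s2, action)].get(obs, 0.0) * predicted
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

# Hypothetical two-state POMDP: action "probe" leaves the state alone, and the
# observation reveals the true state with probability 0.8.
P_trans = {("A", "probe"): {"A": 1.0}, ("B", "probe"): {"B": 1.0}}
P_obs = {("A", "probe"): {"see_A": 0.8, "see_B": 0.2},
         ("B", "probe"): {"see_A": 0.2, "see_B": 0.8}}

b = {"A": 0.5, "B": 0.5}
b = belief_update(b, "probe", "see_A", P_trans, P_obs)
print(b)  # belief shifts towards state A
```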
The context of AGI is often closely related to the modern view of robotics, where the structure of what can be observed, and what actions are possible, stems not only from the nature of the environment, but also from (bodily) constraints of the agent: e.g. a robot is equipped with sensors, specifying and limiting what the robot can observe or perceive, and actuators, constraining the possible actions. In such an agent-centric viewpoint, we typically talk about the set of percepts (signals that the agent can perceive, which may correspond to full states or partial observations, depending on the agent-environment setting) and the set of actions 46. This latter viewpoint, that the percept/action structure stems from the physical constitution of the agent and the environment, which we will refer to as an embodied perspective, was one of the starting points of the development of the projective simulation (PS) model for AI. PS is a physics-inspired model for AI which can be used for solving RL tasks. The centerpiece of the model is the so-called Episodic and Compositional Memory (ECM), which is a stochastic network of clips, see Fig. 9.
Clips are representations of short autobiographical episodes, i.e. memories of the agent. Using the compositional aspects of the memory, which allow for a rudimentary notion of creativity, the agent can also combine actual memories to generate fictitious, conceivable clips which need not have actually occurred. More formally, clips can be defined recursively as either memorized percepts or actions, or otherwise structures (e.g. sequences) of clips. Given a current percept, the PS agent calls its ECM network to perform a stochastic random walk over its clip space (the structure of which depends on the history of the agent), projecting itself into conceivable situations, before committing to an action. Aspects of this model have been beneficially quantized, and also used both in quantum experiments and in robotics, and we will focus more on this model in section VII.A.
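To make the random-walk idea concrete, here is a deliberately reduced toy sketch, not the full PS model: a two-layer clip network in which percept clips connect to action clips with non-negative edge weights (in the PS literature these are commonly called h-values), the walk hops to an action with probability proportional to the weight, and rewarded edges are strengthened. The percepts, actions, reward rule, and update scheme are all illustrative assumptions:

```python
import random

# Minimal two-layer projective-simulation-style sketch (an assumption-laden
# toy): percept clips connect to action clips with weights ("h-values"); the
# walk hops from the current percept to an action with probability
# proportional to the weight, and rewarded edges are strengthened.
random.seed(2)
percepts, actions = ["red", "green"], ["stop", "go"]
h = {(p, a): 1.0 for p in percepts for a in actions}

def walk(percept):
    weights = [h[(percept, a)] for a in actions]
    return random.choices(actions, weights=weights)[0]

def reward_rule(percept, action):  # the target behaviour the agent should find
    return 1.0 if (percept, action) in (("red", "stop"), ("green", "go")) else 0.0

for _ in range(2_000):
    p = random.choice(percepts)
    a = walk(p)
    h[(p, a)] += reward_rule(p, a)  # strengthen rewarded percept-action edges

learned = {p: max(actions, key=lambda a: h[(p, a)]) for p in percepts}
print(learned)
```

The full model additionally includes damping of h-values, glow mechanisms, and multi-hop walks over composed clips, all omitted here.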
a. Learning efficiency and learnability for RL

As mentioned in the introduction to this section, No Free Lunch theorems also apply to RL, and any statement about learning requires us to restrict the space of possible environments. For instance, "finite-space, time-independent MDPs" is a restriction which allows perfect learning relative to some of the standard figures of merit, as was first proven for the Q-learning algorithm. Beyond learnability, in more recent times notions of sample complexity for RL tasks have also been explored, addressing the problem from different perspectives. The theory of sample complexity for RL settings is significantly more involved than for supervised learning, although the very basic desiderata remain the same: how many interaction steps are needed before the agent learns. Learning can naturally mean many things, but most often what is meant is that the agent learns the optimal policy. Unlike supervised learning, RL has an additional temporal dimension in the definitions of optimality (e.g. finite or infinite horizons), leading to an even broader space of options one can explore. Further details on this important field of research are beyond the scope of this review, and we refer the interested reader to e.g. the thesis of Kakade (Kakade, 2003), which also does a good job of reviewing some of the early works and finds sample complexity bounds for RL in many basic settings, or to e.g. (Lattimore et al., 2013; Dann and Brunskill, 2015) for some of the newer results.
III. QUANTUM MECHANICS, LEARNING, AND AI
Quantum mechanics has already had a profound effect on the fields of computation and information processing. However, its impact on AI and learning has, up until very recently, been modest. Although the fields of ML and AI have a strong connection to the theory of computation, these fields are still different, and not all progress in (quantum) computation implies qualitative progress in AI. For instance, although it has been more than 20 years, arguably the most celebrated result in QC is still Shor's factoring algorithm (Shor, 1997), which, on the face of it, has no impact on AI 47. Nonetheless, other, less famous results may have applications to various aspects of AI and learning. The field of QIP has thus, from its early stages, had a careful and tentative interplay with various aspects of AI, although it is only recently that this line of research has received broader attention. Roughly speaking, we can identify four main directions covering the interplay between ML/AI and QIP, summarized in Fig. 10.
[Fig. 10: overview of the four main directions of the ML/QIP interplay, including applications of ML in quantum physics, e.g. estimation and metrology.]

Historically speaking, the first contacts between aspects of QIP and learning theory occurred in terms of the direct application of statistics and statistical learning in light of quantum theory, which forms the first line: classical machine learning applied in quantum theory and experiment, reviewed in section IV. In this first topic, ML techniques are applied to data stemming from quantum experiments. The second topic, in contrast, is machine learning over genuinely quantum data: quantum generalizations of machine learning-type tasks, discussed in section V. This brings us to the topic which has been receiving substantial interest in recent times: can quantum computers genuinely help in machine learning problems? This is addressed in section VI. The final topic we will investigate considers aspects of QIP which extend beyond machine learning (taken in a narrow sense), such as generalizations of RL, and which can be understood as stepping stones towards quantum AI. This is reflected upon in section VII.C. It is worthwhile to note that there are many possible natural classifications of the comprehensive field we discuss in this review. Our chosen classification is motivated by two subtly differing perspectives on the classification of quantum ML, discussed further in section VII.B.1.
IV. MACHINE LEARNING APPLIED TO (QUANTUM) PHYSICS
In this section we review works and ideas where ML methods have been either directly utilized, or have otherwise been instrumental, for QIP results. To do so, we are, however, faced with the thankless task of specifying the boundaries of what is considered a ML method. In recent times, partially due to its successes, ML has become a desirable keyword, and consequently an umbrella term for a broad spectrum of techniques. This includes algorithms for solving genuine learning problems, but also methods and techniques designed for indirectly related problems. From such an all-encompassing viewpoint, ML also includes aspects of (parametric) statistical learning, the solving of black-box (or derivative-free) optimization problems, and also the solving of hard optimization problems in general 48. As we do not presume to establish hard boundaries, we adopt a more inclusive perspective. The collection of all works which utilize such methods, which could conceivably fit in broad-scope ML, for QIP applications cannot be covered in one review. Consequently, we place emphasis on pioneering works, and on works where the authors themselves advertise the ML flavour of the used methodologies, thereby emphasizing the potential of such ML/QIP interdisciplinary endeavors. The use of ML in the context of QIP, understood as above, has been considerable, with an effective explosion of related works in the last few years. ML has been shown to be effective in a great variety of QIP-related problems: in quantum signal processing, quantum metrology, Hamiltonian estimation, and in problems of quantum control. In recent times the scope of applications has been significantly extended: ML and related techniques have also been applied to combatting noise in the process of performing quantum computations, to problems in condensed-matter and many-body physics, and to the design of novel quantum optical experiments.
Such results suggest that advanced ML/AI techniques will play an integral role in quantum labs of the future, and in particular, in the construction of advanced quantum devices and, eventually, quantum computers. In a complementary direction, QIP applications have also engaged many of the methods of ML, showing that QIP may also become a promising proving ground for cutting edge ML research. Contacts between statistical learning theory (as a part of the theoretical foundations of ML) and quantum theory come naturally due to the statistical foundations of quantum theory. Already the very early theories of quantum signal processing (Helstrom, 1969), probabilistic aspects of quantum theory and quantum state estimation (Holevo, 1982), and early works (Braunstein and Caves, 1994) which would lead to modern quantum metrology (Giovannetti et al., 2011) included statistical analyses which establish tentative grounds for more advanced ML/QIP interplay. Related early works further emphasize the applicability of statistical methods, in particular maximum likelihood estimation, to quantum tomographic scenarios, such as the tasks of state estimation (Hradil, 1997), the estimation of quantum processes (Fiurášek and Hradil, 2001) and measurements (Fiurášek, 2001) and the reconstruction of quantum processes from incomplete tomographic data (Ziman et al., 2005) 49 . The works of this type generically focus on physical scenarios where clean analytic theory can be applied. However, in particular in experimental, or noisy (thus, realistic) settings, many of the assumptions, which are crucial for the pure analytic treatment, fail. This leads to the first category of ML applications to QIP we consider.
A. Hamiltonian estimation and metrology
Executive summary: Metrological scenarios can involve complex measurement strategies where, e.g., the measurements which need to be performed may depend on previous outcomes. Further, the physical system under analysis may be controlled with the help of additional parameters, so-called controls, which can be sequentially modified, leading to a more complicated space of possibilities. ML techniques can help us find optima in such a complex space of strategies, under various, often pragmatically and experimentally motivated, constraints.
The identification of the properties of physical systems, be it dynamic properties of evolutions (e.g. process tomography) or properties of the states of given systems (e.g. state tomography), is a fundamental task. Such tasks are resolved by various (classical) metrological theories and methods, which can identify optimal strategies, characterize error bounds, and which have also been quite generally exported to the quantum realm. For instance, quantum metrology studies the estimation of the parameters of quantum systems and, generally, identifies optimal measurement strategies for their estimation. Further, quantum metrology places particular emphasis on scenarios where genuine quantum phenomena (a category of phenomena associated to, and sometimes even defined by, the need for complex and difficult-to-implement quantum devices for their realization) yield an advantage over simpler, classical strategies. The specification of optimal strategies constitutes, in general, a problem of planning 50, for which various ML techniques can be employed. The first examples of ML applications for finding measurement strategies originate from the problem of phase estimation, a special case of Hamiltonian estimation. Interestingly, already this simple case provides a fruitful playground for ML techniques: analytically optimal measurement strategies are relatively easy to find, but are experimentally unfeasible. In turn, if we limit ourselves to a set of "simple measurements", near-optimal results are possible, but they require difficult-to-optimize adaptive strategies, exactly the type of problem ML is good for. Hamiltonian estimation problems have also been tackled in more general settings, invoking more complex machinery. We first briefly describe the basic Hamiltonian estimation settings and metrological concepts. Then we delve deeper into the results combining ML with metrology problems.
Hamiltonian estimation
The generic scenario of Hamiltonian estimation, a common instance of metrology in the quantum domain, considers a quantum system governed by a (partially unknown) Hamiltonian within a specified family H(θ), where θ = (θ_1, . . . , θ_n) is a vector of parameters. Roughly speaking, Hamiltonian estimation deals with the task of identifying the optimal methods (and the performance thereof) for estimating the Hamiltonian parameters. This amounts to optimizing the choice of initial states (probe states), which will evolve under the Hamiltonian, and the choice of the subsequent measurements, which uncover the effect the Hamiltonian had and thus, indirectly, the parameter values 51.

50 More specifically, most metrology settings constitute instances of off-line planning, and thus not RL, as the "environment specification" is fully specified; in other words, there is no need to actually run an experiment, and the optimal strategies can be found off-line. See section I.B for more detail. 51 Technically, the estimation also involves the use of a suitable estimator function, but these details will not matter.

This prolific research area considers many restrictions, variations and generalizations of this task. For instance, one may assume settings in which we either have control over the Hamiltonian evolution time t, or it is fixed so that t = t_0; these are typically referred to as frequency and phase estimation, respectively. Further, the efficiency of the process can be measured in multiple ways. In the frequentist approach, one is predominantly interested in estimation strategies which, roughly speaking, allow for the best scaling of the precision of the estimate as a function of the number of measurements. The quantity of interest is the so-called quantum Fisher information, which bounds and quantifies this scaling. Intuitively, in this setting, also called the local regime, many repetitions of measurements are typically assumed.
Alternatively, in the Bayesian, or single-shot, regime, the prior information, given as a distribution over the parameter to be estimated, and its update to the posterior distribution given a measurement strategy and outcome, are the central objects (Jarzyna and Demkowicz-Dobrzański, 2015). The objective here is the identification of preparation/measurement strategies which optimally reduce the average variance of the posterior distribution, which is computed via Bayes' theorem. One of the key interests in this problem is that the utilization of, arguably, genuine quantum features, such as entanglement, squeezing etc., in the structure of the probe states and measurements may lead to provably more efficient estimation than is possible by so-called classical strategies for many natural estimation problems. Such quantum enhancements are potentially of immense practical relevance (Giovannetti et al., 2011). The identification of optimal scenarios has been achieved in certain "clean" theoretical scenarios, which are, however, often unrealistic or impractical. It is in this context that ML-flavoured optimization and other ML approaches can help.
Phase estimation settings
Interesting estimation problems, from a ML perspective, can already be found in the simple example of a phase shift in an optical interferometer, where one of the arms of an otherwise balanced interferometer contains a phase shift of θ. Early on, it was shown that given an optimal probe state, with mean photon number N, and an optimal (so-called canonical) measurement, the asymptotic phase uncertainty can decay as N^{-1} (Sanders and Milburn, 1995) 52 , known as the Heisenberg limit. In contrast, the restriction to "simple measurement strategies" (as characterized by the authors), involving only photon number measurements in the two output arms, achieves a quadratically weaker scaling of N^{-1/2}, referred to as the standard quantum limit. This was proven in more general terms: the optimal measurements cannot be achieved by classical post-processing of photon number measurements of the output arms, but constitute an involved, experimentally unfeasible POVM (Berry and Wiseman, 2000). However, in (Berry and Wiseman, 2000) it was also shown how this can be circumvented by using "simple measurements", provided they can be altered at run-time. Each measurement consists of a photon number measurement of the output arms, and is parametrized by an additional, controllable phase shift φ in the free arm; equivalently, the unknown phase can be tweaked by a chosen φ. The optimal measurement process is an adaptive strategy: an entangled N-photon state is prepared (see e.g. (Berry et al., 2001)), the photons are sequentially injected into the interferometer, and photon numbers are measured. At each step, the measurement performed is modified by choosing a different phase shift φ, which depends on the previous measurement outcomes. In (Berry and Wiseman, 2000; Berry et al., 2001), an explicit strategy was given which achieves the Heisenberg scaling of the optimal order O(1/N). However, for N > 4 this strategy was shown not to be strictly optimal.
This type of planning is hard as it reduces to the solving of non-convex optimization problems 53 . The field of ML deals with such planning problems as well, and many optimization techniques have been developed for this purpose. The application of such ML techniques, specifically particle swarm optimization, was first suggested in pioneering works (Hentschel and Sanders, 2010, 2011), and later in (Sergeevich and Bartlett, 2012). In subsequent work, the perhaps more well-known methods of differential evolution were demonstrated to be superior and more computationally efficient (Lovett et al., 2013).
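Particle swarm optimization itself is easy to sketch. The following minimal global-best implementation (standard inertia/cognitive/social update; all hyperparameters illustrative) minimizes a non-convex toy cost, standing in for the adaptive-policy cost optimized in the cited works.

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=1):
    """Minimal particle swarm optimizer with a global-best topology."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))  # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(cost, 1, x)
    g = pbest[pbest_val.argmin()].copy()              # global best
    w, c1, c2 = 0.7, 1.5, 1.5                         # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(cost, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

def rastrigin(x):
    """Non-convex toy cost (global minimum 0 at the origin)."""
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

best_x, best_val = pso(rastrigin, dim=3)
```

The swarm reliably escapes many of the local minima that defeat greedy, gradient-style searches on such landscapes, which is the property the cited adaptive-metrology works exploit.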
Generalized Hamiltonian estimation settings
ML techniques can also be employed in significantly more general settings of quantum process estimation. More general Hamiltonian estimation settings consider a partially controlled evolution given by H_C(θ), where C is a collection of control parameters of the system. This is a reasonable setting in e.g. the production of quantum devices, which have controls (C), but whose actual performance (dependent on θ) needs to be confirmed. Further, since produced devices are seldom identical, it is beneficial to generalize this setting even further, by allowing the unknown parameters θ to be only probabilistically characterized. More precisely, they are probabilistically dependent on another set of hyperparameters ζ = (ζ_1, . . . , ζ_k), such that the parameters θ are distributed according to a known conditional probability distribution P(θ|ζ). This generalized task of estimating the hyperparameters ζ thus allows the treatment of systems with inherent stochastic noise, whenever the influence of the noise is understood (given by P(θ|ζ)). Such very general scenarios are addressed in (Granade et al., 2012), relying on the classical learning techniques of Bayesian experimental design (BED) (Loredo, 2004), combined with Monte Carlo methods. The details of this method are beyond the scope of this review, but, roughly speaking, BED assumes a Bayesian perspective on experiments of the type described above. The estimation methods for the general problem (ignoring the hyperparameters and noise for simplicity, although the same techniques apply) realize a conditional probability distribution P(d|θ; C), where d corresponds to experimental data, i.e. measurement outcomes collected in the experiment. Assuming some prior distribution P(θ) over the hidden parameters, the posterior distribution, given experimental outcomes, is given via Bayes' theorem by
P(θ|d; C) = P(d|θ; C) P(θ) / P(d|C).    (22)
The evaluation of the above is already non-trivial, predominantly because the normalization factor P(d|C) includes an integration over the parameter space. Further, of particular interest are scenarios where an experiment is iterated many times. In this case, analogously to the adaptive setting for metrology discussed above, it is beneficial to tune the control parameters C depending on the outcomes. BED (Loredo, 2004) tackles such adaptive settings by selecting the subsequent control parameters C so as to maximize a utility function 54 at each update step. The Bayes updates consist of computing P(θ|d_1, . . . , d_k) ∝ P(d_k|θ) P(θ|d_1, . . . , d_{k-1}) at each step. In (Granade et al., 2012) the required integrations are tackled via numerical integration techniques, namely sequential Monte Carlo, yielding a novel technique for robust Hamiltonian estimation.

53 The non-convexity stems from the fact that the effective input state at each stage depends on the previous measurements performed. As the entire interferometer set-up can be viewed as a one-subsystem measurement, the conditional states also depend on the unknown parameters, and these are used in the subsequent stages of the protocol (Hentschel and Sanders, 2010).
54 The utility function is an object stemming from decision theory; in the case of BED it measures how well the experiment improves our inferences. It is typically defined by the prior-posterior gain of information as measured by the Shannon entropy, although there are other possibilities.
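The sequential Monte Carlo step amounts to a particle filter. A minimal sketch, assuming a single-parameter toy model with likelihood P(d = 0|ω; t) = cos²(ωt/2), as in single-qubit frequency estimation; the particle count, measurement schedule and resampling jitter are illustrative choices, not those of (Granade et al., 2012).

```python
import numpy as np

rng = np.random.default_rng(2)
omega_true = 0.5                      # the parameter to be estimated

# Particle approximation of the prior P(omega): uniform on [0, 1].
particles = rng.uniform(0.0, 1.0, size=2000)
weights = np.ones_like(particles) / particles.size

def p0(omega, t):
    """Likelihood of outcome d = 0 after evolution time t."""
    return np.cos(omega * t / 2.0) ** 2

for step in range(1, 101):
    t = 0.1 * step                                    # simple non-adaptive schedule
    d = 0 if rng.random() < p0(omega_true, t) else 1  # simulated experiment
    lik = p0(particles, t) if d == 0 else 1.0 - p0(particles, t)
    weights *= lik
    weights /= weights.sum()                          # Bayes update on the particles
    # Resample (with small jitter) when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < particles.size / 2:
        idx = rng.choice(particles.size, size=particles.size, p=weights)
        particles = particles[idx] + rng.normal(0.0, 0.01, particles.size)
        weights = np.ones_like(weights) / weights.size

omega_est = np.sum(weights * particles)
```

The weighted particle cloud plays the role of the posterior P(θ|d_1, . . . , d_k); the normalization integral is replaced by a finite sum, which is what makes the update tractable.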
The robust Hamiltonian estimation method was subsequently expanded to use access to trusted quantum simulators, yielding a more powerful and efficient estimation scheme (Wiebe et al., 2014b) 55 , which was also shown to be robust to moderate noise and imperfections in the trusted simulators (Wiebe et al., 2014c). A restricted version of the method of estimation with simulators was experimentally realized in (Wang et al., 2017). More recently, and connected to the methods of robust Hamiltonian estimation, Bayesian and sequential Monte Carlo based estimation have been further combined with particle swarm optimization techniques (Stenberg et al., 2016). There the goal was reliable coupling-strength and frequency estimation in simple decohering systems, corresponding to realistic physical models. More specifically, the studied problem is the estimation of the field-atom coupling terms and the mode frequency term in the Jaynes-Cummings model; the controlled parameters are the local qubit field strengths, and measurements are done via swap spectroscopy. Aside from using ML to perform partial process tomography of controlled quantum systems, ML can also help with genuine problems of quantum control, specifically the design of target quantum gates. This forms the subsequent topic.
B. Design of target evolutions
Executive summary: One of the main tasks of quantum information is the design of target quantum evolutions, including quantum gate design. This task can be tackled by quantum control, which studies controlled physical systems where certain parameters can be adjusted during the system evolution, or by using extended systems and unmodulated dynamics. Here, the underlying problem is an optimization problem: finding the optimal control functions, or extended system parameters, of a system which is otherwise fully specified. Under realistic constraints these optimization tasks are often non-convex, thus hard for conventional optimizers, yet amenable to advanced ML technologies. Target evolution design problems can also be tackled by using feedback from the actual experimental system, leading to the use of on-line optimization methods and RL.
From a QIP perspective, one of the most important tasks is the design of elementary quantum gates, needed for quantum computation. The paradigmatic approach to this is via quantum control, which aims to identify how control fields of physical systems need to be adapted in time to achieve desired evolutions. The design of target evolutions can also be achieved in other settings, e.g. by using larger systems and unmodulated dynamics. In both cases, ML optimization techniques can be used to design optimal strategies off-line. However, target evolutions can also be achieved in run-time, by interacting with a tunable physical system, and without the need for a complete description of the system. We first consider off-line settings, and briefly comment on on-line settings thereafter.
Off-line design
The paradigmatic setting in quantum control considers a Hamiltonian with a controllable (c) and a drift part (dr), e.g.
H(C(t)) = H_dr + C(t) H_c. The controllable part is modulated via a (real-valued) control field C(t). The resulting time-integrated operator

U = U[C(t)] ∝ exp(−i ∫_0^T dt H(C(t))),

over some finite time T, is a functional of the chosen field C(t). The typical goal is to specify the control field C(t) which maximizes the transition probability from some initial state |0⟩ to a final state |φ⟩, i.e. to find argmax_C |⟨φ| U[C(t)] |0⟩| 56 . Generically, the mappings C(t) → U[C(t)] are highly involved, but nonetheless it was shown empirically that greedy optimization approaches provide optimal solutions (which is the reason why greedy approaches dominate in practice). This empirical observation was later elucidated theoretically (Rabitz et al., 2004), suggesting that in generic systems local minima do not exist, which leads to easy optimization (see also (Russell and Rabitz, 2017) for a more up-to-date account). This is good news for experiments, but it also suggests that quantum control has no need for advanced ML techniques. However, as is often the case with claims of such generality, the underlying subtle assumptions are fragile and can often be broken. In particular, greedy algorithms for optimizing the control problem above can fail, even in the low-dimensional case, if we simply place rather reasonable constraints on the control function and parameters. Already for 3-level and 2-qubit systems with constraints on the allowed evolution time t, and on the precision of the linearization of the time-dependent control parameters 57 , it is possible to construct examples where greedy approaches fail, yet global (derivative-free) approaches, in particular differential evolution, succeed (Zahedinejad et al., 2014). Another example of hard off-line control concerns the design of high-fidelity single-shot three-qubit gates 58 , which is addressed in (Zahedinejad et al., 2015) using a specialized novel optimization algorithm the authors called subspace-selective self-adaptive differential evolution (SuSSADE). An interesting alternative approach to gate design is to utilize larger systems.
Specifically designed larger systems can naturally implement desired evolutions on a subsystem, without the need for time-dependent control (c.f. QC with always-on interaction (Benjamin and Bose, 2003)). In other words, local gates are realized despite the fact that the global dynamics is unmodulated. The non-trivial task of constructing such global dynamics, for the Toffoli gate, is tackled in (Banchi et al., 2016) by a method which relies on stochastic gradient descent and draws from supervised learning techniques.
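A toy version of the derivative-free pulse search discussed above: a piecewise-constant control C(t) on a single qubit with H(C) = ½σ_z + Cσ_x, optimized by a bare-bones DE/rand/1/bin loop. The model, bounds and population sizes are illustrative, not those of the cited works.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_2x2(H, dt):
    """exp(-i H dt) in closed form for a traceless Hermitian 2x2 H = v . sigma."""
    vnorm = np.sqrt(np.abs(np.trace(H @ H).real) / 2.0)  # |v|, since H^2 = |v|^2 I
    if vnorm < 1e-12:
        return np.eye(2, dtype=complex)
    return np.cos(vnorm * dt) * np.eye(2) - 1j * np.sin(vnorm * dt) * H / vnorm

def fidelity(c_segments, dt=1.0):
    """Transfer probability |<1| U[C] |0>|^2 for a piecewise-constant control."""
    U = np.eye(2, dtype=complex)
    for c in c_segments:
        U = expm_2x2(0.5 * SZ + c * SX, dt) @ U
    return np.abs(U[1, 0]) ** 2

def differential_evolution(n_dim=4, n_pop=20, gens=150, F=0.7, CR=0.9, seed=3):
    """Bare-bones DE/rand/1/bin maximizing the transfer fidelity."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-2.0, 2.0, size=(n_pop, n_dim))
    fit = np.array([fidelity(p) for p in pop])
    for _ in range(gens):
        for i in range(n_pop):
            a, b, c = pop[rng.choice(n_pop, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), -2.0, 2.0)
            cross = rng.random(n_dim) < CR
            trial = np.where(cross, mutant, pop[i])
            f = fidelity(trial)
            if f > fit[i]:          # greedy per-individual selection
                pop[i], fit[i] = trial, f
    return pop[fit.argmax()], fit.max()

best_pulse, best_fid = differential_evolution()
```

Even this crude global search finds high-fidelity pulse trains for the toy drift, illustrating why derivative-free methods remain competitive once constraints make the landscape non-convex.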
On-line design
Complementary to off-line methods, here we assume access to an actual quantum experiment, and the identification of optimal strategies relies on on-line feedback. In these cases, the quantum experiment need not be fully specified beforehand. Further, the required methodologies lean towards on-line planning and RL, rather than optimization. In cases where optimization is required, the parameters of the optimization differ due to experimental constraints; see (Shir et al., 2012) for an extensive treatment of the topic. On-line methods which use feedback from experiments to "steer" systems towards desired evolutions were connected to ML in early works (Bang et al., 2008; Gammelmark and Mølmer, 2009). These exploratory works deal with generic control problems via experimental feedback, and have, especially at the time, remained mostly unnoticed by the community. In more recent times, feedback-based learning and optimization has received more attention. For instance, in (Chen et al., 2014) the authors explored the applicability of a modified Q-learning algorithm for RL (see section II.C) on canonical control problems. Further, the potential of RL methods has been discussed in the context of optimal parameter estimation, but also in typical optimal control scenarios, in (Palittapongarnpim et al., 2016). In the latter work, the authors also provide a concise yet extensive overview of related topics, and outline a perspective which unifies various aspects of ML and RL in an approach to resolve hard quantum measurement and control problems. In (Clausen and Briegel, 2016), RL based on PS updates was analyzed in the context of general control-and-feedback problems. Finally, ideas of unified computational platforms for quantum control, albeit without explicit emphasis on ML techniques, had previously been provided in (Machnes et al., 2011).

56 An example of such additional fields would be controlled laser fields in ion trap experiments, where the field function C specifies how the laser field strengths are modulated over time.
57 It is assumed that the field function C(t), describing parameter values as functions of time, is step-wise constant, split into K segments. The larger the value of K, the better the approximation of a smooth function, which would arguably be better suited for greedy approaches.
58 This includes the Toffoli (and Fredkin) gate, which is of particular interest as it forms a universal gate set together with the simple single-qubit Hadamard transform (Shi, 2002) (if ancilla qubits are used).
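Tabular Q-learning of the kind referred to above (though not the modified variant of (Chen et al., 2014)) can be sketched on a toy control problem: a two-state system where action 1 flips the state and the goal is to reach, and hold, state 1. The dynamics and hyperparameters are entirely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

def step(s, a):
    """Toy dynamics: action 1 flips the state, action 0 leaves it;
    a unit reward is given for ending up in the target state 1."""
    s_next = 1 - s if a == 1 else s
    reward = 1.0 if s_next == 1 else 0.0
    return s_next, reward

s = 0
for _ in range(5000):
    # epsilon-greedy action choice based only on observed experience
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
    s_next, r = step(s, a)
    # Q-learning update: move Q(s,a) towards r + gamma * max_a' Q(s',a')
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

policy = Q.argmax(axis=1)
```

Note that the learned policy (flip in state 0, stay in state 1) is obtained without any model of the dynamics, only from rewarded interaction, which is the feature that makes such on-line methods attractive when the experiment is not fully specified.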
In the next section, we further coarse-grain our perspective, and consider scenarios where ML techniques control various gates, and more complex processes, and even help us learn how to do interesting experiments.
C. Controlling quantum experiments, and machine-assisted research
Executive summary: ML and RL techniques can help us control complex quantum systems, devices, and even quantum laboratories. Furthermore, almost as a by-product, they may also help us to learn more about the physical systems and processes studied in an experiment. Examples include adaptive control systems (agents) which learn how to control quantum devices, e.g. how to preserve the memory of a quantum computer, combat noise processes, generate entangled quantum states, and target evolutions of interest.
In the process of learning such optimal behaviours, even simple artificial agents also learn, in an implicit, embodied sense, about the underlying physics, which can be used by us to obtain novel insights. In other words, artificial learning agents can genuinely help us do research.
The prospects of utilizing ML and AI in quantum experiments have been investigated also for "higher-level" experimental design problems. Here one considers automated machines that control complex processes which e.g. specify the execution of longer sequences of simple gates, or the execution of quantum computations. Moreover, it has been suggested that learning machines can be used for, and integrated into, the very design of quantum experiments, thereby helping us in conducting genuine research. We first present two results where ML and RL methods have been utilized to control more complex processes (e.g. generate sequences of quantum gates to preserve memory), and consider the perspectives of machines genuinely helping in research thereafter.
Controlling complex processes
The simplest example of involved ML machinery used to generate control of slightly more complex systems is the problem of dynamical decoupling for quantum memories.
In this scenario, a quantum memory is modelled as a system coupled to a bath (with local Hamiltonians H_S for the system and H_B for the bath), and decoherence is realized by a coupling term H_SB; local unitary errors are captured by H_S. The evolution under the total Hamiltonian H_noise = H_S + H_B + H_SB would destroy the contents of the memory, but this can be mitigated by adding a controllable local term H_C acting on the system alone 59 . Certain optimal choices of the control Hamiltonian H_C are known. For instance, we can consider the scenario where H_C is modulated such that it implements instantaneous 60 Pauli-X and Pauli-Y unitary operations, sequentially, at intervals ∆t. As this interval, which is also the duration of the decoherence-causing free evolution, approaches zero, ∆t → 0, this process is known to ensure perfect memory. However, the moment the setting is made more realistic, allowing finite ∆t, the space of optimal sequences becomes complicated. In particular, optimal sequences start to depend on ∆t, the form of the noise Hamiltonian, and the total evolution time. To identify optimal sequences, in (August and Ni, 2017) the authors employ recurrent NNs, which are trained as a generative model, meaning they are trained to generate sequences which minimize the final noise. The sequences of pulses (Pauli gates) which the networks generated were shown to outperform well-known sequences.
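The finite-∆t sequences discussed above are easy to simulate in a toy setting: a single system qubit with a static stray field (no bath, for brevity), comparing free evolution with interleaved instantaneous X/Y pulses. All parameter values are illustrative.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(H, dt):
    """exp(-i H dt) in closed form for a traceless Hermitian 2x2 H."""
    vnorm = np.sqrt(np.abs(np.trace(H @ H).real) / 2.0)
    return np.cos(vnorm * dt) * np.eye(2) - 1j * np.sin(vnorm * dt) * H / vnorm

# Toy "noise": an unknown static field acting on the memory qubit.
H_noise = 0.3 * SX + 0.2 * SZ
psi0 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)   # memory state |+>

def memory_fidelity(n_steps, dt, pulses):
    """|<psi0|psi(T)>|^2 after n_steps free-evolution intervals of length dt,
    optionally interleaved with instantaneous pulses (X, Y, X, Y, ...)."""
    psi = psi0.copy()
    U_free = evolve(H_noise, dt)
    for k in range(n_steps):
        psi = U_free @ psi
        if pulses:
            psi = (SX if k % 2 == 0 else SY) @ psi
    return np.abs(np.vdot(psi0, psi)) ** 2

dt, T = 0.05, 4.0
n = int(T / dt)
fid_free = memory_fidelity(n, dt, pulses=False)
fid_dd = memory_fidelity(n, dt, pulses=True)
```

At this ∆t the alternating pulses refocus the stray field almost completely, while the freely evolving memory degrades; shrinking ∆t further pushes the decoupled fidelity towards one, in line with the ∆t → 0 limit quoted above.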
In a substantially different setting, where interaction necessarily arises, it has been studied how AI/ML techniques can be used to make quantum protocols themselves adaptive. Specifically, RL methods based on PS (Briegel and De las Cuevas, 2012) (see section VII.A) were applied to the task of protecting quantum computation from local stray fields. In MBQC (Raussendorf and Briegel, 2001; Briegel et al., 2009), the computation is driven by performing adaptive single-qubit projective measurements on a large entangled resource state, such as the cluster state (Raussendorf and Briegel, 2001). In a scenario where the resource state is exposed to a stray field, each qubit undergoes a local rotation. To mitigate this, the authors introduce a learning agent which "plays" with a local probe qubit, initialized in, say, the +1 eigenstate of σ_x, denoted |+⟩, learning how to compensate for the unknown field. In essence, given a measurement, the agent chooses a different measurement, obtaining a reward whenever a +1 outcome is observed. The agent is thus trained to compensate for the unknown field, and serves as an "interpreter" between the desired measurements and the measurements which should actually be performed in the given setting (i.e. in the given field, with a given frequency of measurements ∆t), see Fig. 11. The problem of mitigating such fixed stray fields could naturally be solved with non-adaptive methods, where we use knowledge about the system to solve the problem, e.g. by measuring the field and adapting accordingly, or by using fault-tolerant constructions. From a learning perspective, such direct methods have a few shortcomings which may be worth presenting for didactic purposes. Fault-tolerant methods are clearly wasteful, as they fail to utilize any knowledge about the noise processes. In contrast, field estimation methods learn too much, and assume a model of the world.
To clarify the latter: to compensate for the measured field, we need to use quantum mechanics, specifically the Born rule. In contrast, the RL approach is model-free: the Born rule plays no part, and "correct behavior" is learned and established exclusively on the basis of experience. This is conceptually different, but also operatively critical, as model-free approaches allow for more autonomy and flexibility (i.e. the same machinery can be used in more settings without intervention) 61 . Regarding learning too much, one of the basic principles of statistical learning posits that "when solving a problem of interest, one should not solve a more general problem as an intermediate step" (Vapnik, 1995), which is intuitive. The problem in the presented setting is "how to adapt the measurement settings", not "how to characterize the stray fields". While in the present context the information-theoretic content of the two questions may be the same, it should be easy to imagine that if more complex fields are considered, full process characterization contains much more information than is needed to optimally adapt the local measurements. These approaches can further be generalized to utilize information from stabilizer measurements (Orsucci et al., 2016), or similarly from outcomes of syndrome measurements when codes are utilized (Combes et al., 2014), instead of probe states, to similar ends. Addressing somewhat related problems, but using supervised learning methods, the authors in (Mavadia et al., 2017) have shown how to compensate for qubit decoherence (stochastic evolution) also in experiments.
Learning how to experiment
One of the first examples of applications of RL in QIP appears in the context of experimental photonics, where one of the current challenges lies in the generation of highly entangled, high-dimensional, multi-party states. Such states are generated on optical tables, the configuration of which, in order to generate complex quantum states, can be counter-intuitive and unsystematic. The search for interesting configurations can be mapped to an RL problem, where a learning agent is rewarded whenever it generates an interesting state (in a simulation). In a precursor work (Krenn et al., 2016), the authors used a feedback-assisted search algorithm to identify previously unknown configurations which generate novel highly entangled states. This demonstrated that the design of novel quantum experiments can be automatized, which can significantly aid research. This idea, given in the context of optical tables, has subsequently been combined with earlier proposals to employ AI agents in quantum information protocols and as "lab robots" in future quantum laboratories (Briegel, 2013). This led to the application of more advanced RL techniques, based on the PS framework, to the tasks of understanding the Hilbert space accessible with optical tables, and the autonomous machine-discovery of useful optical gadgets (Melnikov et al., 2017). Related to the topic of learning new insights from experimenting machines, in (Bukov et al., 2017) the authors consider the problem of preparing target states by means of chosen pulses implementing (a restricted set of) rotations. This is a standard control task, and the authors show that RL achieves respectable and sometimes near-optimal results. However, for our purposes, the most relevant aspect of this work is that the authors also illustrate how ML/RL techniques can be used to obtain new insights in quantum experiments and non-equilibrium physics, by circumventing human intuition, which can be flawed.
Interestingly, the authors also demonstrate the reverse, i.e. how physics insights can help elucidate learning problems 62 .
D. Machine learning in condensed-matter and many-body physics

Executive summary: One of the quintessential problems of many-body physics is the identification of phases of matter. A popular overlap between ML and this branch of physics demonstrates that supervised and unsupervised systems can be trained to classify different phases. More interestingly, unsupervised learning can be used to detect phases, and even discover order parameters, possibly genuinely leading to novel physical insights. Another important overlap considers the representational power of (generalized) neural networks to characterize interesting families of quantum systems. Both suggest a deeper link between certain learning models, on the one side, and physical systems, on the other, the scope of which is currently an important research topic.
ML techniques have, over the course of the last 20 years, become an indispensable toolset of many natural sciences which deal with highly complex systems. These include biology (specifically genetics, genomics, proteomics, and the general field of computational biology) (Libbrecht and Noble, 2015), medicine (e.g. in epidemiology, disease development, etc.) (Cleophas and Zwinderman, 2015), chemistry (Cartwright, 2007), and high energy and particle physics (Castelvecchi, 2015). Unsurprisingly, they have also permeated various aspects of condensed matter and many-body physics. Early examples were proposed in the context of quantum chemistry and density functional theory (Curtarolo et al., 2003; Snyder et al., 2012; Li et al., 2015a), or for the approximation of the Green's function of the single-site Anderson impurity model (Arsenault et al., 2014). The interest in connections between NNs and many-body and condensed matter physics has undergone immense growth since. Some of the results which we cover next deviate from the primary topic of this review, the overlaps of QIP and ML. However, since QIP, condensed matter, and many-body physics share significant overlaps, we feel it is important to at least briefly flesh out the basic ideas. One of the basic lines of research in this area deals with the learning of phases of matter, and the detection of phase transitions in physical systems. A canonical example is the discrimination of samples of configurations stemming from different phases of matter, e.g. Ising model configurations of thermal states below or above the critical temperature. This problem has been tackled using principal component analysis and nearest-neighbour unsupervised learning techniques (Wang, 2016) (see also (Hu et al., 2017)). Such methods also have the potential to go beyond just detecting phases and actually identify order parameters (Wang, 2016); in the above case, magnetization. More complicated discrimination problems, e.g.
discriminating Coulomb phases, have been resolved using basic feed-forward networks, and convolutional NNs have been trained to detect topological phases, but also phases in fermionic systems on cubic lattices (Ch'ng et al., 2016). Neural networks have also been combined with quantum Monte Carlo methods (Broecker et al., 2016), and with unsupervised methods (van Nieuwenburg et al., 2017) (applied also in (Wang, 2016)), in both cases to improve classification performance in various systems. It is notable that all these methods prove quite successful in "learning" phases without any information about the system Hamiltonian. While the focus in this field has mostly been on neural network architectures, other supervised methods, specifically kernel methods (e.g. SVMs), have been used for the same purpose (Ponte and Melko, 2017). Kernel methods may in some cases be advantageous as they can have higher interpretability: it is often easier to understand the reason behind the optimal model in the case of kernel methods than for NNs, which also means that learning about the underlying physics may be easier with kernel methods. Note that this will most likely be challenged by deep NN approaches in the years to come. A partial explanation of the success of neuronal approaches for classifying phases of matter may lie in their form. Specifically, they may have the capacity to encode important properties of physical systems, both in the classical and in the quantum case. This motivates the second line of research we mention in this context. BMs, even in their restricted variant, are known to have the capacity to encode complicated distributions. In the same sense, restricted BMs extended to accept complex weights (i.e. the weights w_ij in Eqs. (2) and (3)) encode quantum states, with the hidden layer capturing correlations, both classical and quantum (entanglement).
It has been shown that this approach describes equilibrium and dynamical properties of many prototypical systems accurately: that is, restricted BMs form a useful ansatz for interesting quantum states (called neural-network quantum states (NQS)), where the number of neurons in the hidden layer controls the size of the representable subset of the Hilbert space. This is analogous to how, for instance, the bond dimension controls the scope of the matrix product state ansatz (Verstraete et al., 2008). This property can also be exploited to achieve efficient quantum state tomography 63 (Torlai et al., 2017). In subsequent works, the authors have also analyzed the entanglement structure of NQS states (Deng et al., 2017), and have provided analytic proofs of the representational power of deep restricted BMs, proving that they can e.g. represent ground states of any k-local Hamiltonians with polynomial-size gaps (Gao and Duan, 2017). It is worthwhile to note that the representational power of standard variational representations (e.g. that of the variational renormalization group) had previously been contrasted with that of deep NNs (Mehta and Schwab, 2014), with the goal of elucidating the success of deep networks. Related to this, the Tensor Network formalism (Östlund and Rommer, 1995; Verstraete and Cirac, 2004) has been used for the efficient description of deep convolutional arithmetic circuits, establishing a formal connection between quantum many-body states and deep learning (Levine et al., 2017). Very recently, the intersection between ML and many-body quantum physics has also inspired research into ML-motivated entanglement witnesses and classifiers (Ma and Yung, 2017), and into furthering the connections between ML and many-body physics, specifically entanglement theory. These recent results have positioned NNs as one of the most exciting new techniques to be applied in the context of both condensed-matter and many-body physics.
Additionally, they also show the potential of the converse direction of influence -the application of mathematical formalism of many-body physics for the deepening of our understanding of complex learning models.
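For concreteness, the restricted-BM ansatz discussed above has a closed form once the hidden units are summed out: ψ(s) ∝ exp(Σᵢ aᵢsᵢ) Πⱼ 2 cosh(bⱼ + Σᵢ Wⱼᵢsᵢ). A minimal sketch with small random complex weights (all sizes illustrative):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
n_vis, n_hid = 4, 3

# Complex RBM parameters: visible biases a, hidden biases b, couplings W.
a = 0.1 * (rng.standard_normal(n_vis) + 1j * rng.standard_normal(n_vis))
b = 0.1 * (rng.standard_normal(n_hid) + 1j * rng.standard_normal(n_hid))
W = 0.1 * (rng.standard_normal((n_hid, n_vis)) + 1j * rng.standard_normal((n_hid, n_vis)))

def amplitude(s):
    """Unnormalized NQS amplitude psi(s) for a spin configuration s in {-1,+1}^n."""
    s = np.asarray(s, dtype=float)
    theta = b + W @ s                       # effective hidden-unit fields
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

# Enumerate the full (small) Hilbert space and normalize the Born probabilities.
configs = list(product([-1, 1], repeat=n_vis))
psi = np.array([amplitude(s) for s in configs])
probs = np.abs(psi) ** 2
probs /= probs.sum()
```

The number of hidden units n_hid here plays the role described above: enlarging it enlarges the representable subset of the Hilbert space, much as the bond dimension does for matrix product states. (For more than a handful of spins the explicit enumeration would of course be replaced by Monte Carlo sampling.)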
V. QUANTUM GENERALIZATIONS OF MACHINE LEARNING CONCEPTS
The onset of quantum theory necessitated a change in how we describe physical systems, but also a change in our understanding of what information is 64 . Quantum information is a more general concept, and QIP exploits genuine quantum features for more efficient processing (using quantum computers) and more efficient communication. Quintessential quantum properties, such as the fact that even pure states cannot be perfectly copied (Wootters and Zurek, 1982), are often argued to be at the heart of many quantum applications, such as cryptography. Similarly, quintessential information processing operations are more general in the quantum world: closed quantum systems can undergo arbitrary unitary evolutions, whereas the corresponding classical closed-system evolutions correspond to the (finite) group of permutations 65 . The majority of ML literature deals with learning from, and about, data, that is, classical information. This section examines the question of what ML looks like when the data (and perhaps its processing) is fundamentally quantum. We will first explore quantum generalizations of supervised learning, where the "data-points" are now genuine quantum states. This generates a plethora of scenarios which are indistinguishable in the classical case (e.g. having one or two copies of the same example is not the same!). Next, we will consider another quantum generalization of learning, where quantum states are used to represent the generalizations of unknown concepts in CLT; thus we talk about the learning of quantum states. Following this, we will present some results on quantum generalizations of POMDPs, which could lead to quantum-generalized reinforcement learning (although this actually just generalizes the mathematical structure).
A. Quantum generalizations: machine learning of quantum data
Executive summary: A significant fraction of the field of ML deals with data analysis, classification, clustering, etc. QIP generalizes standard notions of data, to include quantum states. The processing of quantum information comes with restrictions (e.g. no-cloning or no-deleting), but also new processing options. This section addresses the question of how conventional ML concepts can be extended to the quantum domain, mostly focusing on aspects of supervised learning and learnability of quantum systems, but also concepts underlying RL.
One of the basic problems of ML is that of supervised learning, where a training set D = {(x i , y i )} i is used to infer a labeling rule mapping data points to labels x i rule → y i (see section I.B for more details). More generally, supervised learning deals with the classification of classical data. In the tradition of QIP, data can also be quantum - that is, all quantum states carry, or rather represent, (quantum) information. What can be done with datasets of the type {(ρ i , y i )} i , where ρ i is a quantum state? Colloquially it is often said that one of the critical distinctions between classical and quantum data is that quantum data cannot be copied. In other words, having one instance of an example, by abuse of notation denoted (ρ i ⊗ y i ), is not generally as useful as having two copies (ρ i ⊗ y i ) ⊗2 . In contrast, in the case of classification with functional (deterministic) labeling rules, one copy of an example is as good as two. The closest classical analog of dealing with quantum data is the case where labelings are not deterministic, or equivalently, where the conditional distribution P (label|datapoint) is not extremal (Dirac). This is the case of classification (or learning) of random variables, or probabilistic concepts, where the task is to produce the best-guess label, specifying the random process which "most likely" produced the datapoint 66 . In this case, having access to two examples in the training phase which are independently sampled from the same distribution is not the same as having two copies of one and the same individual sample - the latter are perfectly correlated and carry no new information 67 . To obtain full information about a distribution, or random variable, one in principle needs infinitely many samples. Similarly, in the quantum case, having infinitely many copies of the same quantum state ρ is operatively equivalent to having a classical description of the given state. Despite these similarities, quantum information is still different from mere stochastic data.
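The distinction between independent samples and copies of a single sample has a simple classical illustration, which we can sketch in a few lines (the setup and all names below are ours, purely for illustration): estimating the bias of a coin from n independent tosses works, while n perfect copies of one toss carry no more information than that single toss.

```python
import random

random.seed(0)
p = 0.3  # unknown bias of a coin (the "probabilistic concept" to be learned)

def estimate_from_independent_samples(n):
    # n independent draws each carry fresh information about p
    return sum(random.random() < p for _ in range(n)) / n

def estimate_from_copies(n):
    # n perfect copies of ONE draw are fully correlated:
    # the resulting estimate can only ever be 0.0 or 1.0
    single = random.random() < p
    return sum(single for _ in range(n)) / n

trials = 2000
err_indep = sum(abs(estimate_from_independent_samples(100) - p)
                for _ in range(trials)) / trials
err_copies = sum(abs(estimate_from_copies(100) - p)
                 for _ in range(trials)) / trials
print(err_indep < err_copies)  # independent samples estimate p far better
```

The average error of the "copies" estimator does not decrease with n, mirroring the statement that copies of a single (quantum or probabilistic) sample carry no new information.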
The precursors of ML-type classification tasks can be identified in the theory of quantum state discrimination, which we briefly comment on first. We then review some early works dealing with "quantum pattern matching", which span various generalizations of supervised settings, and the first works which explicitly propose the study of quantum-generalized machine learning. After that, we discuss more general results characterizing inductive learning in quantum settings. Finally, we present a CLT perspective on learning with quantum data, which addresses the learnability of quantum states.
1. State discrimination, state classification, and machine learning of quantum data
a. State discrimination The entry point to this topic can again be traced to the seminal works of Helstrom and Holevo (Helstrom, 1969; Holevo, 1982), as problems of state discrimination can be rephrased as variants of supervised learning problems. In typical state discrimination settings, the task is to identify a given quantum state (given as an instance of a quantum system prepared in that state), under the promise that it belongs to a (typically finite) set {ρ i } i , where the set is fully classically specified. Recall that state estimation, in contrast, typically assumes continuously parametrized families, and the task is the estimation of the parameter. In this sense, discrimination is a discretized estimation problem 68 , and the problems of identifying optimal measurements (under various figures of merit) and success bounds have been considered extensively and continuously throughout the history of QIP (Helstrom, 1969; Croke et al., 2008; Slussarenko et al., 2017). Remark: Traditional quantum state discrimination can be rephrased as a degenerate supervised learning setting for quantum states. Here, the space of "data-points" is restricted to a finite (or parametrized) family {ρ i } i , and the training set contains an effectively infinite number of examples D = {(ρ i , i) ⊗∞ }; naturally, this notation is just a short-hand for having the complete classical description of the quantum states 69 . In what follows we will sometimes write ρ ⊗∞ to denote a quantum system containing the classical description of the density matrix ρ.
66 Note that in this setting we do not have the descriptions of the stochastic processes given a priori - they are to be inferred from the training examples.
67 In this sense, the no-cloning theorem also applies to classical information: an unknown random variable cannot be cloned.
In QIP language this simply means that the no-cloning theorem applies even to diagonal density matrices, i.e. the map ρ → ρ ⊗ ρ is impossible even when ρ is promised to be diagonal.
68 Intuitively, estimation is to discrimination what regression is to classification in the ML world.
69 From an operative, and information content, perspective, having infinitely many copies is equivalent to having a full classical description: infinitely many copies are sufficient and necessary for perfect tomography - yielding the exact classical description - whereas having an exact classical description is sufficient and necessary for generating an unbounded number of copies.
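For minimum-error discrimination of two known states, the optimal success probability is given by the Helstrom bound, p = (1 + ||q 0 ρ 0 − q 1 ρ 1 || 1 )/2 (Helstrom, 1969). The following numpy sketch (the function name is ours) reproduces the textbook value for the pure states |0⟩ and |+⟩:

```python
import numpy as np

def helstrom_success(rho0, rho1, q0=0.5, q1=0.5):
    # optimal minimum-error success probability for discriminating two
    # known density matrices with priors q0, q1 (Helstrom, 1969):
    # p = 1/2 (1 + || q0 rho0 - q1 rho1 ||_1)
    gamma = q0 * rho0 - q1 * rho1
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()  # Hermitian spectrum
    return 0.5 * (1.0 + trace_norm)

# two pure qubit states |0> and |+>
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
p_succ = helstrom_success(np.outer(ket0, ket0), np.outer(ketp, ketp))
# analytic value for equal-prior pure states: (1 + sqrt(1 - |<0|+>|^2)) / 2
print(round(p_succ, 6))  # 0.853553
```

Here the overlap is |⟨0|+⟩|² = 1/2, so the bound evaluates to (1 + 1/√2)/2 ≈ 0.854, i.e. even the optimal measurement errs roughly 15% of the time.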
b. Quantum template matching - classical templates A variant of discrimination, or class assignment task, which is among the first works establishing explicit connections between ML and discrimination-type problems, is "template matching" (Sasaki et al., 2001). In this pioneering work, the authors consider discrimination problems where the input states ψ may not correspond to the (known) template states {ρ i } i , and the correct matching label is determined by the largest Uhlmann fidelity. More precisely, the task is defined as follows: given a classically specified family of template states {ρ i } i and M copies of a quantum input ψ ⊗M , output the label i corr defined by
i corr = argmax i ( Tr[ (√ψ ρ i √ψ) 1/2 ] ) 2 ,
i.e. the index of the template with the largest Uhlmann fidelity with respect to the input.
In this original work, the authors focused on two-class cases with pure state inputs, and identified fully quantum and semi-classical strategies for this problem. "Fully quantum" strategies identify the optimal POVM. Semi-classical strategies restrict the measurement strategies to separable measurements, or perform state estimation on the input - a type of "quantum feature extraction".
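The fidelity-based labeling rule above can be evaluated numerically; the following is a minimal sketch (helper names are ours), with the matrix square root computed by eigendecomposition under the assumption of positive semidefinite inputs:

```python
import numpy as np

def sqrtm_psd(m):
    # matrix square root of a positive semidefinite Hermitian matrix,
    # via eigendecomposition (clipping tiny negative eigenvalues)
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.conj().T

def uhlmann_fidelity(rho, sigma):
    # F(rho, sigma) = ( Tr sqrt( sqrt(rho) sigma sqrt(rho) ) )^2
    s = sqrtm_psd(rho)
    return np.real(np.trace(sqrtm_psd(s @ sigma @ s))) ** 2

def match_template(input_state, templates):
    # label of the template closest to the input, in Uhlmann fidelity
    return max(range(len(templates)),
               key=lambda i: uhlmann_fidelity(input_state, templates[i]))

# hypothetical templates: |0><0| and the maximally mixed qubit state
t0 = np.diag([1.0, 0.0])
t1 = np.eye(2) / 2
psi = np.diag([0.9, 0.1])  # a mixed input closer to |0><0|
print(match_template(psi, [t0, t1]))  # -> 0
```

For this input, F(ψ, ρ 0 ) = 0.9 and F(ψ, ρ 1 ) = 0.8, so the rule assigns label 0. Of course, this presumes classical descriptions of all states; the quantum versions of the problem concern exactly the settings where such descriptions are unavailable.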
c. Quantum template matching - quantum templates. In a generalization of the work in (Sasaki et al., 2001), the authors in (Sasaki and Carlini, 2002) consider the case where, instead of having access to the classical descriptions of the template states {ρ i } i , we are given access to a certain number K of copies. In other words, we are given access to a quantum system in the state ⊗ i ρ i ⊗K . Setting K → ∞ recovers the case with classical templates. This generalized setting introduces many complications which do not exist in the "more classical" case with classical templates. For instance, classifying measurements now must "use up" copies of the template states, as they too cannot be cloned. The authors identify various flavors of semi-classical strategies for this problem. For instance, if the template states are first estimated, we are facing the scenario of classical templates (albeit with error). The classical template setting itself allows semi-classical strategies, where all systems are first estimated, as well as coherent strategies. The authors find optimal solutions for K = 1, and show that there exists a fully quantum procedure that is strictly superior to straightforward semi-classical extensions. Remark: Quantum template matching problems can be understood as quantum-generalized supervised learning, where the training set is of the form {(ρ i ⊗K , i)} i , data beyond the training set comes from the family ψ ⊗M (the number of copies is known), and the classes are defined via minimal distance, as measured by the Uhlmann fidelity. The case K → ∞ recovers the special case of classical templates. Restricting the states ψ to the set of template states (restricted template matching) and setting M = 1 recovers standard state discrimination.
d. Other known optimality results for (restricted) template matching For the restricted matching case, where the input is promised to be from the template set, optimal solutions for the two-class setting, minimum-error figure of merit, and uniform priors have been found for the qubit case in (Bergou and Hillery, 2005; Hayashi et al., 2005). In (Hayashi et al., 2006) the authors found optimal solutions for the unambiguous discrimination case 70 . An asymptotically optimal strategy for restricted matching with finitely many template copies K < ∞, arbitrary priors, and mixed qubit states was later found in (Guţă and Kotłowski, 2010). This work also provides a solid introduction to the topic and a review of quantum analogies for statistical learning, and emphasizes connections to ML methodologies and concepts. Later, in (Sentís et al., 2012), the authors introduced and compared all three strategies - classical estimate-and-discriminate, classical optimal, and fully quantum - for the restricted template matching case with finite templates. Recall, the adjective "classical" here denotes that the training states are fully measured out as the first step: the quantum training set is converted to classical information, meaning that no quantum memory is further required, and that the learning can be truly inductive.
70 In unambiguous discrimination, the device is allowed to output an ambiguous "I do not know" outcome, but is not allowed to err when it does output an outcome. The goal is to minimize the probability of the ambiguous outcome.
A surprising result is that the intuitive estimate-and-discriminate strategy, which reduces supervised classification to optimal estimation coupled with a (standard) quantum state discrimination problem, is not optimal for learning. Another measurement not only provides better performance, but matches the optimal quantum strategy exactly (as opposed to asymptotically). Interestingly, the results of (Guţă and Kotłowski, 2010) and (Sentís et al., 2012) make opposite claims for essentially the same setting: no separation, vs. separation, between coherent (fully quantum) and semi-classical strategies, respectively. This discrepancy is caused by differences in the chosen figures of merit and a different definition of asymptotic optimality, and serves as an effective reminder of the subtle nature of quantum learning. Optimal strategies have subsequently been explored in other settings as well, e.g. when the data-set comprises coherent states (Sentís et al., 2015), or when an error margin is allowed in an otherwise unambiguous setting (Sentís et al., 2013).
e. Quantum generalizations of (un)supervised learning The works of the previous paragraphs consider particular families of generalizations of supervised learning problems. The first attempt to classify and characterize what ML could look like in a quantum world from a more general perspective was, however, made explicitly in (Aïmeur et al., 2006). There, the basic object introduced is the database of labeled quantum or classical objects, i.e. D K n = {(|ψ i ⟩ ⊗K , y i )} n i=1 71 , which may come in copies. Such a database can in general then be processed to solve various types of tasks, using classical or quantum processing. The authors propose to characterize quantum learning scenarios in terms of classes, denoted L context goal . Here, the context may denote whether we are dealing with classical or quantum data, and whether the learning algorithm relies on quantum capabilities or not. The goal specifies the learning task (perhaps in very broad terms). Examples include L c c , which corresponds to standard classical ML, and L q c , which could mean we use a quantum computer to analyze classical data. The example of template matching with classical templates (K = ∞) (Sasaki et al., 2001) considered earlier in this section would be denoted L c q , and the generalization with finite template numbers K < ∞ would fit in L ⊗K q . While the formalism above suggests a focus on supervised settings, the authors also suggest that such datasets could be inputs for (unsupervised) clustering. The authors further study quantum algorithms for determining the closeness of quantum states 72 , which could be the basic building block of quantum clustering algorithms, and also compute certain error bounds for special cases of classification (state discrimination) using the well-known results of Helstrom (Helstrom, 1969). Similar ideas were used in (Lu and Braunstein, 2014) to define a quantum decision tree algorithm for data classification in the quantum regime.
The strong connection between the quantum-generalized learning theory sketched out in (Aïmeur et al., 2006) and the classical 73 theory of Helstrom (Helstrom, 1969) was more deeply explored in (Gambs, 2008). There, the author computed lower bounds on the sample complexity - in this case the minimal number of copies K - needed to solve a few types of classification problems. For this purpose the author introduced a few techniques which reduce ML-type classification problems to settings where the theory of (Helstrom, 1969) can be directly applied. These types of results contribute to establishing a deeper connection between the problems of ML and the techniques of QIP.
71 Such a dataset can be stored in, or instantiated by, a 2n-partite quantum system, prepared in the state ⊗ n i=1 |ψ i ⟩ ⊗K |y i ⟩.
72 These are based on the SWAP-test (see section VI.C.2), in terms of the Uhlmann fidelity.
73 Here we mean classical in the sense of "being a classic", rather than pertaining to classical systems.
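The SWAP-test mentioned above (see section VI.C.2) outputs |0⟩ on an ancilla with probability 1/2 + |⟨ψ|φ⟩|²/2, so the overlap of two pure states can be estimated from repeated runs without ever learning the states themselves. A minimal simulation of the measurement statistics (without the full circuit; names are ours) might look as follows:

```python
import numpy as np

def swap_test_p0(psi, phi):
    # probability that the SWAP-test ancilla is measured as |0>:
    # p0 = 1/2 + 1/2 |<psi|phi>|^2, so repeated runs yield the
    # overlap estimator |<psi|phi>|^2 = 2 p0 - 1
    overlap = np.vdot(psi, phi)
    return 0.5 + 0.5 * np.abs(overlap) ** 2

rng = np.random.default_rng(1)
psi = np.array([1.0, 0.0])                 # |0>
phi = np.array([1.0, 1.0]) / np.sqrt(2)    # |+>

p0 = swap_test_p0(psi, phi)                # exact: 0.75 for these states
shots = rng.random(10000) < p0             # simulate repeated ancilla measurements
overlap_est = 2 * shots.mean() - 1         # estimator of |<psi|phi>|^2
print(overlap_est)  # close to the true overlap 1/2
```

The statistical error of the estimator scales as 1/√(number of shots), which is one reason such subroutines appear as building blocks in quantum clustering and classification proposals.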
f. Quantum inductive learning Recall that inductive, eager learning produces a best-guess classifier which can be applied to the entire domain of data-points, based on the training set. But already the results of (Sasaki and Carlini, 2002), discussed in the paragraph on template matching with quantum templates, point to problems with this concept in the quantum realm - the optimal classifier may require a copy of the quantum data-points to perform classification, which seemingly prohibits unlimited use. The perspectives of such quantum generalizations of supervised learning in its inductive form were recently addressed from a broad perspective in (Monràs et al., 2017). Recall that inductive learning algorithms, intuitively, use only the training set to specify a hypothesis (the estimate of the true labeling function). In contrast, in transductive learning, the learner is also given the data points whose labels are unknown. These unlabeled points may correspond to the cross-validation test set, or to the actual target data. Even though the labels are unknown, the points carry additional information about the complete dataset which can be helpful in identifying the correct labeling rule 74 . Another distinction is that transductive algorithms need only label the given points, whereas inductive algorithms need to specify a classifier, i.e., a labeling function defined on the entire space of possible points. In (Monràs et al., 2017), the authors notice that the property of an algorithm being inductive corresponds to a no-signalling property 75 , using which they prove that "being inductive" (i.e. being no-signalling) is equivalent to having an algorithm which outputs a classifier h based on the training set alone, which is then applied to every test instance. A third equivalent characterization of inductive learning is that training and testing cleanly separate as phases.
While these observations are quite intuitive in the classical case, they are in fact problematic in the quantum world. Specifically, if the training examples are quantum objects, quantum no-cloning, in general, prohibits applying a hypothesis function (candidate labeling function) h arbitrarily many times. This is easy to see, since each instance of h must depend on the quantum data in some non-trivial way if we are dealing with a learning algorithm. Multiple copies of h would then require multiple copies of (at least parts of) the quantum data. A possible implication of this would be that, in the quantum realm, inductive learning cannot be cleanly separated into training and testing. Nonetheless, the authors show that the no-signalling criterion, for certain symmetric measures of performance, implies that a separation is, asymptotically, possible. Specifically, the authors show that for any quantum inductive no-signalling algorithm A there exists another, perhaps different, algorithm A′ which does separate into a training and a testing phase, and which asymptotically attains the same performance (Monràs et al., 2017). Such a protocol A′, essentially, utilizes a semi-classical strategy. In other words, for inductive settings, classical intuition survives, despite no-cloning theorems.
74 For instance, a transductive algorithm may use unsupervised clustering techniques to assign labels, as the whole set is given in advance.
75 The outcome of the entire learning and evaluation process can be viewed as a probability distribution P (y) = P (y 1 . . . y k |x 1 . . . x k ; A), where A is the training set, x 1 , . . . , x k are the points of the test set and y 1 . . . y k the respective labels the algorithm assigns with probability P (y). No-signalling implies that the marginal distribution for the k-th test element P (y k ) depends only on x k and the training set, but not on the other test points {x l } l≠k .
2. Computational learning perspectives: quantum states as concepts
The previous subsections addressed the classification of quantum states, based on quantum database examples. The overall theory, however, relies on the assumption that there exists a labeling rule which generates such examples, and what is learned is this labeling rule. Such a rule is also known as a concept in CLT (e.g. PAC learning, see section II.B.1 for details). What would "the learning of quantum states" mean from this perspective? What does it mean to "know a quantum state"? A natural criterion is that one "knows" a quantum state if one can predict the outcome probabilities of any given measurement; being able to predict the outcome probabilities of any two-outcome measurement on the state is a reasonable sufficient criterion, as this already suffices for a full tomographic reconstruction. In (Aaronson, 2007), the author addressed the question of the learnability of quantum states in this sense, where the role of a concept is played by a given quantum state, and "knowing" the concept equates to the possibility of predicting the outcome probability of a given measurement. One immediate distinction from conventional CLT, discussed in II.B.1, is that the concept range is no longer binary. However, as we clarified, classical CLT has generalizations with continuous ranges. In particular, so-called p-concepts have range in [0, 1] (Kearns and Schapire, 1994), and quantities which are analogs of the VC-dimension, and analogous theorems relating these to generalization performance, exist for the p-concept case as well (see (Aaronson, 2007)). Explicitly, the basic elements of such a generalized theory are: a domain X, a sample x ∈ X, and the p-concept f : X → [0, 1].
These abstract objects are mapped to central objects of quantum information theory (Aaronson, 2007) as follows: the domain is the set of two-outcome quantum measurements, and a sample is a POVM element Π 76 (in short: x ↔ Π); the p-concept to be learned is a quantum state ψ, and the evaluation of the concept/hypothesis on the sample corresponds to the probability Tr[Πψ] ∈ [0, 1] of observing the measurement outcome associated with Π when the state ψ is measured.
To connect the data-classification perspective of supervised learning to the CLT perspective above, note that in this framework the quantum concept - a quantum state - "classifies" quantum POVM elements (the effects) according to the probability of observing that effect. The training set elements for this model are of the form (Π, Tr(ρΠ)), with 0 ≤ Π ≤ 1.
In the spirit of CLT, the concept class "quantum states" is said to be learnable under some distribution D over two-outcome generalized measurement elements Π if, for every concept - quantum state ρ - there exists an algorithm with access to examples of the form (Π, Tr(ρΠ)), where Π is drawn according to D, which outputs a hypothesis h that (approximately) correctly predicts the label Tr(ρΠ′) with high probability when Π′ is drawn from D. Note that the role of a hypothesis here can simply be played by a "best guess" classical description of the quantum state ρ. The key result of (Aaronson, 2007) is that quantum states are learnable with sample complexity scaling only linearly in the number of qubits 77 , that is, logarithmically in the dimension of the density matrix. In operative terms, if Alice wishes to send an n-qubit quantum state to Bob, who will perform on it a two-outcome measurement (and Alice does not know which), she can achieve near-ideal performance by sending O(n) classical bits 78 , which has clear practical but also theoretical importance. In some sense, these results can also be thought of as a generalized variant of the Holevo bound theorems (Holevo, 1982), limiting how much information can be stored in, and retrieved from, quantum systems.
76 More precisely, Π is a positive-semidefinite operator such that 1 − Π is positive-semidefinite as well.
77 The dependencies on the inverse allowed error and inverse allowed failure probability are polynomial and polylogarithmic, respectively.
78 Here we assume Alice can locally generate her states at will. A classical strategy (using classical channels) is thus always possible, by having Alice send the outcomes of full state tomography (or, equivalently, the classical description of the state), but this requires using O(2 n ) bits already for pure states.
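For intuition about the data format (Π, Tr(ρΠ)), note that in the noiseless case the problem is just linear algebra: Tr(ρΠ) is linear in the Pauli overlaps of Π, so a handful of examples determines a qubit state exactly by least squares. The sketch below is our own construction for illustration only; Aaronson's result concerns the much harder statistical setting with approximately correct predictions under a distribution over measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.diag([1., -1.])
paulis = [I2, X, Y, Z]

# hidden "concept": a qubit state rho with Bloch vector (0.3, -0.2, 0.5)
rho = 0.5 * (I2 + 0.3 * X - 0.2 * Y + 0.5 * Z)

def random_effect():
    # a random two-outcome POVM element with 0 <= Pi <= 1
    a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    h = a.conj().T @ a
    return h / (np.linalg.eigvalsh(h)[-1] + 1e-9)

# training examples (Pi, Tr(rho Pi)); the labels are probabilities
effects = [random_effect() for _ in range(50)]
labels = np.array([np.real(np.trace(rho @ e)) for e in effects])

# Tr(rho Pi) is linear in the overlaps Tr(Pi P_k) with the Pauli basis,
# so least squares over those overlaps recovers rho exactly
A = np.array([[np.real(np.trace(e @ p)) for p in paulis] for e in effects])
coef, *_ = np.linalg.lstsq(A, labels, rcond=None)
rho_hat = sum(c * p for c, p in zip(coef, paulis))
print(np.allclose(rho_hat, rho, atol=1e-8))  # -> True
```

With noisy labels and exponentially large systems this exact reconstruction is unavailable, which is precisely where the statistical learning guarantees of (Aaronson, 2007) become relevant.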
This latter result has thus far been more influential in the context of tomography than in quantum machine learning, despite being quite a fundamental result in quantum learning theory. For fully practical purposes, however, the results above come with a caveat. The learning of quantum states is efficient in sample complexity (e.g. the number of measurements one needs to perform); the computational complexity of reconstructing the hypothesis, however, is likely exponential in the qubit number. Very recently, the efficiency of the reconstruction algorithms as well was shown for the learning of stabilizer states in (Rocchetto, 2017).
B. (Quantum) learning and quantum processes
Executive summary: The notion of quantum learning has been used in the literature to refer to the study of various aspects of "learning about" quantum systems. Beyond the learning of quantum states, one can also consider the learning of quantum evolutions. Here "knowing" is operatively defined as having the capacity to implement the given unitary at a later point - this is similar to how "knowing" a concept in computational learning theory implies we can apply the concept function at a later point. Finally, as learning can also pertain to learning in interactive environments - RL - one can consider quantum generalizations of such settings. One of the first results in this direction formulates a quantum generalization of POMDPs. Note that as POMDPs form the mathematical basis of RL, the quantum-generalized mathematical object - the quantum POMDP - may form a basis of quantum-generalized RL.
a. Learning of quantum processes The concept of learning is quite diffuse, and "quantum learning" has been used in the literature quite often; not every instance corresponds to a generalization of "classical learning" in a machine or statistical learning sense. Nonetheless, some such works further illustrate the distinctions between the approaches one can employ with access to classical (or quantum) tools while learning about classical or quantum objects.
Learning unitaries For instance, "quantum learning of unitary operations" has been used to refer to the task of optimal storing and retrieval of unknown unitary operations, which is a two-stage process. In the storing phase, one is given access to a few uses of some unitary U . In the retrieval phase, one is asked to approximate the state U |ψ⟩ , given one or a few instances of a (previously fully unknown) state |ψ⟩ . As in the case of quantum template states (see section V.A.1), we can distinguish semi-classical prepare-and-measure strategies (where U is estimated and represented as classical information) from quantum strategies, where the unitaries are applied to some resource state, which is used together with the input state |ψ⟩ in the retrieval stage. There is no simple universal answer to the question of optimal strategies. In (Bisio et al., 2010), the authors have shown, under reasonable assumptions, the surprising result that optimal strategies are semi-classical. In contrast, in (Bisio et al., 2011) the same question was asked for generalized measurements, and the opposite was shown: optimal strategies require quantum memory. See e.g. (Sedlák et al., 2017) for some recent results on probabilistic unitary storage and retrieval, which can be understood as genuinely quantum learning 79 of quantum operations.
Learning measurements The problem of identifying which measurement apparatus one is facing has so far been addressed in comparatively fewer works; see e.g. (Sedlák and Ziman, 2014) for a more recent example. Related to this, we encounter a more learning-theoretic perspective on the topic of learning measurements. In the comprehensive paper (Cheng et al., 2016) (which can serve as a review of parts of quantum ML in its own right), the authors explore the question of the learnability of quantum measurements. This can be thought of as the dual of the task of learning quantum states discussed previously in this section. Here, the examples are of the form (ρ, Tr(ρE)), and it is the measurement that is fixed. In this work, the authors compute a number of complexity measures, closely related to the VC dimension (see section II.B.1), for which sample complexity bounds are known. From such complexity bounds one can, for instance, rigorously answer various relevant operative questions, such as how many random quantum probe states we need to prepare, on average, to accurately estimate a quantum measurement. Complementing standard estimation problems, here we do not compute the optimal strategy, but effectively gauge the information gain of a randomized strategy. These measures are computed for the family of hypotheses/concepts which can be obtained by either fixing the POVM element (thus learning the quantum measurement) or by fixing the state (which is the setting of (Aaronson, 2007)), and they clearly illustrate the power of ML theory when applied in a QIP context.
b. Foundations of quantum-generalized RL The majority of quantum generalizations of machine learning concepts fit neatly into the domain of supervised learning, with a few notable exceptions. In particular, in , the authors introduce a quantum generalization of partially observable Markov decision processes (POMDPs), discussed in section II.C. For the convenience of the reader we give a brief recap of these objects. A fully observable MDP is a formalization of task environments: the environment can be in any of a set of states S, which the agent can observe. An action a ∈ A of the agent triggers a transition of the state of the environment - the transition can be stochastic, and is specified by a Markov transition matrix P a . 80 Additionally, beyond the dynamics, each MDP comes with a reward function R : S × A × S → Λ, which rewards certain state-action-state transitions. In a POMDP, the agent does not see the actual state of the environment, but rather just observations o ∈ O, which are (stochastic) functions of the environmental state 81 . Although the exact state of the environment is not directly accessible to the agent, given the full specification of the system the agent can still assign a probability distribution over the state space, given an interaction history. This is called a belief state, and it can be represented as a mixed state (mixing the "classical" actual environmental states) which is diagonal in the POMDP state basis. The quantum generalization promotes the environment belief state to an arbitrary quantum state defined on the Hilbert space spanned by the orthonormal basis {|s⟩ | s ∈ S}; observations are generated by the quantum operations associated with the actions, each outcome occurring with the corresponding probability of observing that outcome. Finally, rewards are defined via the expected values of action-specific positive operators R a , so Tr[R a ρ], given the state ρ. In , the authors have studied this model from the computational perspective of the hardness of identifying the best strategies for the agent, contrasting this setting with classical settings, and proving separations.
In particular, the complexity of deciding policy existence for finite horizons 82 is the same for the quantum and classical cases 83 . However, a separation can be found with respect to the goal-reachability problem, which asks whether there exists a policy (of any length) which, with probability 1, reaches some target state. This separation is maximal: the problem is decidable in the classical case, yet undecidable in the quantum case. While this particular separation may not have immediate consequences for quantum learning, it suggests that there may be other (dramatic) separations with more immediate relevance.
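A single step of such a quantum POMDP can be sketched as follows, assuming (our notation, in the spirit of the model above) that each action is given by a set of Kraus operators {K o }, one per observation o, with Σ o K o † K o = 1: the belief ρ earns expected reward Tr[R a ρ] and, on outcome o, updates to K o ρ K o † / p(o).

```python
import numpy as np

def qpomdp_step(rho, kraus, R_a, rng):
    # expected reward for taking this action in belief state rho
    reward = np.real(np.trace(R_a @ rho))
    # probability of each observation o: Tr[K_o rho K_o^dag]
    probs = np.array([np.real(np.trace(K @ rho @ K.conj().T)) for K in kraus])
    o = rng.choice(len(kraus), p=probs / probs.sum())
    K = kraus[o]
    rho_new = K @ rho @ K.conj().T / probs[o]  # belief update on outcome o
    return rho_new, o, reward

rng = np.random.default_rng(0)
rho = np.diag([0.5, 0.5])                        # uniform (here classical) belief
p = 0.8                                          # hypothetical sensor accuracy
kraus = [np.diag([np.sqrt(p), np.sqrt(1 - p)]),  # Kraus operator for "observe 0"
         np.diag([np.sqrt(1 - p), np.sqrt(p)])]  # Kraus operator for "observe 1"
R = np.diag([1.0, 0.0])                          # reward operator: prefer state |0>
rho2, obs, r = qpomdp_step(rho, kraus, R, rng)
print(r)  # 0.5 under the uniform belief
```

With diagonal states and diagonal Kraus operators, as here, this reduces to the classical POMDP belief update; the quantum generality (and the source of the separations discussed above) enters once beliefs and operations carry off-diagonal coherences.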
VI. QUANTUM ENHANCEMENTS FOR MACHINE LEARNING
One of the most advertised aspects of quantum ML deals with the question of whether quantum effects can help us solve classical learning tasks more efficiently, ideally mirroring the successes of quantum computation. The very first attempts to apply quantum information techniques to ML problems were made even before the seminal works of Shor and Grover (Shor, 1997; Grover, 1996). Notable examples include the pioneering research into quantum neural networks and quantum perceptrons (Lewenstein, 1994; Kak, 1995), and also into the potential of quantum computational learning theory (Bshouty and Jackson, 1998). The topic of quantum neural networks (quantum NNs) has seen sustained growth and development since these early days, exploring various types of questions regarding the interplay of quantum mechanics and neural networks. Most of the research in this area is not directly targeted at algorithmic improvements, and hence will only be briefly mentioned here. A fraction of the research into quantum NNs, disproportionately more active in the early days, considered the speculative topic of the function of quantum effects in neural networks, both artificial and biological (Kak, 1995; Penrose, 1989). Parts of this research line have focused on concrete models, such as the effect of transverse fields in HNs (Nishimori and Nonomura, 1996), and decoherence in models of biological nets (Tegmark, 2000), which, it is argued, would destroy any potential quantum effect. A second topic which permeates the research into quantum NNs concerns the fundamental question of a meaningful quantization of standard feed-forward neural networks. The key questions here are how best to reconcile the linear nature of quantum theory with the necessity for non-linearities in the activation function of a neural network (see section II.A.1), and how to identify suitable physical systems to implement such a scheme.
Early ideas here included giving up on non-linearities per se, and considering networks of unitaries which substitute layers of neurons (Lewenstein, 1994). Another approach exploits non-linearities which stem from measurements and post-selection (arguably first suggested in (Kak, 1995)). The same issue is addressed by Behrman et al. (Behrman et al., 1996) by using a continuous mechanical system where the non-linearity is achieved by coupling the system with an environment 84 , in the model system of quantum dots. The purely foundational research into implementations of such networks, and the analysis of their quantum mechanical features, has been and continues to be an active field of research (see e.g. (Altaisky et al., 2017)). For more information on this topic we refer the reader to more specialized reviews (Schuld et al., 2014b;Garman, 2011). Unlike the research into quantum NNs, which has a foundational flavor, the majority of works studying quantum effects for classical ML problems are specifically focused on identifying improvements. The first examples of quantum advantages in this context were provided in the context of quantum computational learning theory, which is the topic of the first subsection below. In the second subsection we will survey research suggesting the possibility of improving the capacity of associative memories. The last subsection deals with proposals which address computational run-time improvements of classical learning algorithms, the first of which came out already in the early 2000s. 82 That is, given a full specification of the setting, decide whether there exists a policy for the agent which achieves a cumulative reward above some value, in a certain number of states. 83 This decision problem is undecidable in the infinite horizon case, already for the classical problem, and thus trivially undecidable in the quantum case as well. 84 Similar ideas were also discussed by Peruš in (Peruš, 2000).
Here we will differentiate approaches which focus on quantum improvements in the training phase of a classifier by means of quantum optimization (mostly focused on exploiting near-term technologies and restricted devices), and approaches which build algorithms based on, roughly speaking, quantum parallelism and "quantum linear algebra", which typically assume universal quantum computers, and often a "pre-filled" database. It should be noted that the majority of research in quantum ML is focused precisely on this last aspect, and the results here are already quite numerous. We can thus afford to present only a chosen selection of results.
A. Learning efficiency improvements: sample complexity
Executive summary: The first results showing a separation between quantum and classical computers were obtained in the context of oracles, and for sample complexity: even the famous Grover's search algorithm constitutes such a result. Similarly, CLT deals with learning, i.e., the identification or approximation of concepts, which are also nothing but oracles. Thus, quantum oracular computation settings and learning theory share the same underlying framework, which is investigated and exploited in this formal topic. To talk about quantum CLT, and improvements, or bounds, on sample complexity, the classical concept oracles are thus upgraded to quantum concept oracles, which output quantum states and/or allow access in superposition.
As elaborated in section II.B.1, CLT deals with the problem of learning concepts, typically abstracted as boolean functions of bit-strings of length n, so c : {0, 1}^n → {0, 1}, from input-output relations alone. For intuitive purposes it is helpful to think of the task of optical character recognition (OCR), where we are given a bitmap image (black-and-white scan) of some size n = N × M, and a concept may be, say, "everything which represents the letter A"; more precisely, the concept specifies which bitmaps correspond to the letter "A". Further, we are most often interested in the learning performance for a set of concepts: a concept class C = {c | c : {0, 1}^n → {0, 1}}. In the context of the running example of OCR, we care about algorithms which are capable of recognising all letters, and not just "A". The three typical settings studied in the literature are the PAC model, exact learning from membership queries, and the agnostic model, see section II.B.1. These models differ in the type of access to the concept oracle which is allowed. In the PAC model, the oracle outputs labeled examples according to some specified distribution, analogous to basic supervised learning. In the membership queries model, the learner gets to choose the examples, and this is similar to active supervised learning. In the agnostic model, the concept is "noisy", i.e. forms a stochastic function, which is natural in supervised settings (the joint datapoint-label distribution P(x, y) need not be functional); for details we refer the reader to section II.B.1. All three models have been treated from a quantum perspective, and whether or not quantum advantages are obtainable greatly depends on the details of the settings. Here we give a very succinct overview of the main results, partially following the structure of the recent survey on the topic by Arunachalam and de Wolf (Arunachalam and de Wolf, 2017).
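For concreteness, the objects above are easy to mimic classically. The following sketch (the helper names `make_junta` and `pac_examples` are ours, not from the survey) builds a simple concept, a k-junta, and draws PAC-style labeled examples under the uniform distribution:

```python
import random

def make_junta(relevant, f):
    """A k-junta on n bits: a boolean concept depending only on the bits in `relevant`."""
    return lambda x: f(tuple(x[i] for i in relevant))

def pac_examples(concept, n, m, rng):
    """Draw m labeled examples (x, c(x)) with x uniform over {0,1}^n (the distribution D)."""
    return [(x, concept(x))
            for x in (tuple(rng.randint(0, 1) for _ in range(n)) for _ in range(m))]

# A 2-junta on 8 bits: the XOR of bits 0 and 3.
c = make_junta([0, 3], lambda b: b[0] ^ b[1])
data = pac_examples(c, n=8, m=5, rng=random.Random(0))
```

A PAC learner sees only `data` and must output a hypothesis close to `c` with high probability.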
Quantum PAC learning
The first quantum generalization of PAC learning was presented in (Bshouty and Jackson, 1998), where the quantum example oracle was defined to output coherent superpositions
∑_x √p_D(x) |x, c(x)⟩ ,   (23)
for a given distribution D over the data points x, for a concept c. Recall, classical PAC oracles output a sample pair (x, c(x)), where x is drawn from D, which can be understood as copies of the mixed state ∑_x p_D(x) |x, c(x)⟩⟨x, c(x)| , with p_D(x) = P(D = x). The quantum oracle reduces to the standard oracle if the quantum example is measured in the standard (computational) basis. This first pioneering work showed that quantum algorithms with access to such a quantum-generalized oracle can provide more efficient learning of certain concept classes. The authors considered the concept class of DNF formulas under the uniform distribution: here the concepts are s-term formulae in disjunctive normal form. In other words, each concept c is of the form c(x) = ⋁_I ⋀_j (x_I)_j, where x_I is a substring of x associated to I, which is a subset of the indices of cardinality at most s, and (x_I)_j is a variable or its negation (a literal). An example of a DNF is of the form (x_1 ∧ x_3 ∧ ¬x_6) ∨ (x_4 ∧ ¬x_8 ∧ x_1) ∨ · · · , where parentheses (terms) only contain variables or their negations in conjunction (ANDs, ∧), whereas all the parentheses are in disjunction (ORs, ∨). The uniform DNF learning problem (for n variables, and poly(n) terms) is not known to be efficiently PAC learnable, but in (Bshouty and Jackson, 1998) it was proven to be efficiently quantum PAC learnable. The choice of this learning problem was not accidental: DNF learning is known to be learnable in the membership query model, which is described in detail in the next section. The corresponding classical algorithm which learns DNF in the membership query model directly inspired the quantum variant in the PAC case 85 . If the underlying distribution over the concept domain is uniform, other concept classes can be learned with a quantum speed-up as well, specifically so-called k-juntas: n-bit binary functions which depend only on k < n bits. In (Atıcı and Servedio, 2007), Atıcı and Servedio have shown that there exists a quantum algorithm for learning k-juntas using O(k log(k)/ε) uniform quantum examples, O(2^k) uniform classical examples, and O(nk log(k)/ε + 2^k log(1/ε)) time.
Note that the improvement in this case is not in query complexity, but rather in the classical processing, which, for the best known classical algorithm, has complexity at least O(n^{2k/3}) (see (Arunachalam and de Wolf, 2017;Atıcı and Servedio, 2007) for further details). Diverging from perfect PAC settings, in (Cross et al., 2015) the authors considered the learning of linear boolean functions 86 under the uniform distribution over the examples. The twist in this work is the assumption of noise 87 , which allows for evidence of a classical-quantum learnability separation.
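The quantum example oracle of Eq. (23) is easy to emulate numerically for small n. The sketch below (the helper `quantum_example_state` is our own, not from the cited works) builds the state vector for a parity concept under the uniform distribution, and checks that a computational-basis measurement reproduces classical PAC sampling:

```python
import numpy as np

def quantum_example_state(concept, p):
    """Amplitude vector of sum_x sqrt(p_D(x)) |x, c(x)>  (cf. Eq. 23).
    The label qubit is appended, so basis index = 2*x + c(x)."""
    psi = np.zeros(2 * len(p))
    for x, px in enumerate(p):
        psi[2 * x + concept(x)] = np.sqrt(px)
    return psi

# Parity concept on 2 bits, uniform distribution D over the four inputs.
concept = lambda x: bin(x).count("1") % 2
psi = quantum_example_state(concept, np.ones(4) / 4)
# Measuring in the computational basis yields (x, c(x)) with probability p_D(x):
probs = psi ** 2
```

Only the correctly labeled basis states carry amplitude, which is exactly the sense in which measuring the quantum example reduces it to the classical oracle.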
a. Distribution-free PAC While the assumption of the uniform distribution D constitutes a convenient theoretical setting, in reality we most often have few guarantees on the underlying distribution of the examples. For this reason PAC learning often refers to distribution-free learning, meaning learning under the worst-case distribution D. Perhaps surprisingly, it was recently shown that the quantum PAC learning model offers no advantages, in terms of sample complexity, over the classical model. Specifically, in (Arunachalam and de Wolf, 2016) the authors show that if C is a concept class of VC dimension d + 1, then for every (non-negative) δ ≤ 1/2 and ε ≤ 1/20, every (ε, δ)-quantum PAC learner requires Ω(d/ε + log(δ⁻¹)/ε) samples. The same number of samples, however, is also known to suffice for a classical PAC learner (for any ε and δ).
A similar result, showing no separation between quantum and classical agnostic learning, was also proven in (Arunachalam and de Wolf, 2016) 88 .
b. Quantum predictive PAC learning Standard PAC learning settings do not allow exponential separations between classical and quantum sample complexity of learning, and consequently the notion of learnable concepts is the same in the classical and the quantum case. This changes if we consider weaker learning settings, or rather, a weaker meaning of what it means to learn. The PAC learning setting assumes that the learning algorithm outputs a hypothesis h with a low error with high confidence. In the classical case, there is no distinction between expecting that the hypothesis h can be applied once, or any arbitrary number of times. However, in the quantum case, where the examples from the oracle may be quantum states, this changes, and inductive learning in general may not be possible in all settings, see section V. In (Gavinsky, 2012), the author considers a quantum PAC setting where only one (or polynomially few) evaluations of the hypothesis are required, called the Predictive Quantum (PQ) model 89 . In this setting the author identifies a relational concept class (i.e. each data point may have many correct labels) which is not (polynomially) learnable in the classical case, but is PQ learnable under a standard quantum oracle, under the uniform distribution. The basic idea is to use quantum states, obtained by processing quantum examples, for each of the testing instances; in other words, the "implementation" of the hypothesis contains a quantum state obtained from the oracle. This quantum state cannot be efficiently estimated, but can be efficiently obtained using the PQ oracle. The concept class and the labeling process are inspired by a distributed computation problem for which an exponential classical-quantum separation had been identified earlier in (Bar-Yossef et al., 2008). This work provides another noteworthy example of the intimate connection between various aspects of QIP, in this case quantum communication complexity theory, and quantum learning.
Learning from membership queries
In the model of exact learning from membership queries, the learner can choose the elements from the concept domain it wishes labeled (similar to active learning); the task, however, is to identify the concept exactly (no error), except with probability δ < 1/3 90 . Learning from membership queries has, in the quantum domain, usually been called oracle identification. While quantum improvements in this context are possible, in (Servedio and Gortler, 2004) the authors show that they are at most low-degree polynomial improvements in the most general cases. More precisely, if a concept class C over n-bit strings has classical and quantum membership query complexities D(C) and Q(C), respectively, then D(C) = O(n Q(C)^3) 91 ; in other words, improvements in sample complexity can be at most polynomial. Polynomial relationships have also been established for worst-case exact learning sample complexities (so-called (N, M)-query complexity), see (Kothari, 2013) and (Arunachalam and de Wolf, 2017). The above result is similar in spirit to earlier results in (Beals et al., 2001), where it was shown that quantum query complexity cannot provide a better than polynomial improvement over classical results, unless structural promises on the oracle are imposed. 88 The notions of efficiency and sample complexity in the agnostic model are analogous to those in the PAC model, as is the quantum oracle, which provides the coherent samples ∑_{x,y} √p_D(x, y) |x, y⟩. See section II.B.1 for more details. 89 In a manner of speaking, to learn a concept in the PAC sense implies we can apply what we have learned arbitrarily many times. In PQ it suffices that the learner be capable of applying what it has learned just once to be considered successful. It follows, however, that if the number of examples is polynomial, PQ learnability also implies that the verification of learning can be successfully executed polynomially many times as well. 90 As usual, a success probability which is polynomially bounded away from 1/2 would also do.
The results considered so far are standard, comparatively simple generalizations of classical learning settings, leading to somewhat restricted improvements in sample complexity. More dramatic improvements are possible if computational (time) complexity is taken into account, or if slightly non-standard generalizations of the learning model are considered. Note that we are not explicitly bringing computational complexity separations into the picture. Rather, under the assumption that certain computational problems are hard for the learner, we obtain a sample complexity separation.
In particular, already in (Kearns and Valiant, 1994) the authors constructed several classes of Boolean functions in the distribution-free model whose efficient learning (in the sample complexity sense) implies the capacity to factor so-called Blum integers, a task not known to be solvable classically, but solvable on a quantum computer 92 . Using these observations, Servedio and Gortler have demonstrated classes which are efficiently quantum PAC learnable, and classes which are efficiently learnable in the quantum membership query model, but which are not efficiently learnable in the corresponding classical models, unless Blum integers 93 can be efficiently factored on a classical computer (Servedio and Gortler, 2004).
91 This simple formulation of the claim of (Servedio and Gortler, 2004) was presented in (Arunachalam and de Wolf, 2017). 92 These ideas exploit the connections between asymmetric cryptography and learning. In asymmetric cryptography, a message can be encrypted easily using a public key, but the decryption is computationally hard, unless one has a private key. To exemplify, the public key can be a Blum integer, whereas the private key is one of its factors. The data points are essentially the encryptions E(k, N) of integers k, for a public key N. The concept is defined by the least significant bit of k, which, provably, is not easier to obtain with bounded error than the decryption itself, which is computationally hard. A successful efficient learner of such a concept could factor Blum integers. The full proposal has further details we omit for simplicity. 93 The integer n is a Blum integer if it is a product of two distinct prime numbers p and q which are congruent to 3 mod 4 (i.e. both can be written in the form 4t + 3, for a non-negative integer t).
B. Improvements in learning capacity
Executive summary: The observation that a complete description of quantum systems typically requires the specification of exponentially many complex-valued amplitudes has led to the idea that those same amplitudes could be used to store data using only logarithmically few systems. While this idea fails for most applications, it has inspired some of the first proposals to use quantum systems for the dramatic improvement of the capacities of associative, or content-addressable, memories. More likely quantum upgrades of CAM memories, however, may come from a substantially different direction, which explores methods of extracting information from HNs used as CAM memories, and which is inspired by quantum adiabatic computing to realize a recall process which is similar to, yet different from, standard recall methods. The quantum methods may yield advantages by outputting superpositions of data, and it has been suggested that they also utilize the memory more efficiently, leading to increased capacities.
The pioneering investigations in the areas between CLT, NNs and QIP challenged the classical sample complexity bounds. Soon thereafter (and likely independently), the first proposals suggesting quantum improvements in the context of space complexity emerged, specifically for the efficiency of associative memories. Recall, an associative, or content-addressable, memory (abbreviated CAM) is a storage device which can be loaded with patterns, typically a subset of n-bit strings P = {x_i}_i, x_i ∈ {0, 1}^n, which are then, unlike in the case of standard RAM-type memories, not recovered by address but by content similarity: given an input string y ∈ {0, 1}^n, the memory should return y if it is one of the stored patterns (i.e. y ∈ P), or a stored pattern which is "closest" to y with respect to some distance, typically the Hamming distance. Deterministic perfect storage of any set of patterns clearly requires O(n × 2^n) bits (there are in total 2^n distinct patterns, each requiring n bits), and the interesting aspects of CAMs begin when the requirements are somewhat relaxed. We can identify roughly two basic groups of ideas which were suggested to lead to improved capacities. The first group, sketched next, relies directly on the structure of the Hilbert space, whereas the second group of ideas stems from the quantization of a well-understood architecture for a CAM memory system: the Hopfield network.
Capacity from amplitude encoding
In some of the first works (Ventura and Martinez, 2000;Trugenberger, 2001) it was suggested that the proverbial "exponential-sized" Hilbert space describing systems of qubits may allow exponential improvements: intuitively, even exponentially numerous pattern sets P can be "stored" in a quantum state of only n qubits: |ψ_P⟩ = |P|^{−1/2} ∑_{x∈P} |x⟩. These early works suggested creative ideas on how such a memory could be used to recover patterns (e.g. via modified amplitude amplification), albeit often suffering from a lack of scalability and other quite fundamental issues to yield complete proposals 94 , and thus we will not dig into details. We will, however, point out that these works may be interpreted as proposing some of the first examples of "amplitude encoding" of classical data, which is heavily used in modern approaches to quantum ML. In particular, the stored memory of a CAM can always be represented as a single bit-string (b_(0···0), b_(0···1), . . . , b_(1···1)) of length 2^n (each bit in the bit-string is indexed by a pattern, and its value encodes whether that pattern is stored or not). This data-vector (in this case binary, but this is not critical) is thus encoded into the amplitudes of a quantum state of an exponentially smaller number of qubits: b = (b_(0···0), b_(0···1), . . . , b_(1···1)) → ∑_{x∈{0,1}^n} b_x |x⟩ (up to normalization).
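The amplitude-encoded pattern state can be written down directly as a vector of 2^n amplitudes. A minimal sketch (the helper `encode_patterns` is our own naming):

```python
import numpy as np

def encode_patterns(patterns, n):
    """Amplitude-encode a pattern set: |psi_P> = |P|^(-1/2) * sum_{x in P} |x>."""
    psi = np.zeros(2 ** n)
    for x in patterns:      # each pattern is an n-bit string given as an integer
        psi[x] = 1.0
    return psi / np.sqrt(len(patterns))

# Three 4-bit patterns stored in the amplitudes of only n = 4 qubits.
psi = encode_patterns({0b0011, 0b0101, 0b1110}, n=4)
```

The classical simulation of course still needs the full 2^n-dimensional vector; the claimed gain is only in the number of quantum systems, and the difficulty lies in reading the information back out.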
Capacity via quantized Hopfield networks
A different approach to increasing the capacities of CAM memories arises from the "quantization" of different aspects of classical HNs, which constitute well-understood classical CAM systems. a. Hopfield networks as a content-addressable memory Recall, a HN is a recurrent NN characterized by a set of n neurons, whose connectivity is given by a (typically symmetric) real matrix of weights W = (w_ij)_ij and a vector of (real) local thresholds {θ_i}_{i=1}^n. In the context of CAM memories, the matrix W encodes the stored patterns, which are in this setting best represented as sequences of signs, so x ∈ {1, −1}^n. The retrieval, given an input pattern y ∈ {1, −1}^n, is realized by setting the k-th neuron s_k to the k-th value of the input pattern y_k, followed by the "running of the network" according to standard perceptron rules: each neuron k computes its subsequent value by checking if its inbound weighted sum is above the local threshold: s_k ← sign(∑_l w_kl s_l − θ_k) (assuming sign(0) = +1) 95 . As discussed previously, under moderate assumptions the described dynamical system converges to local attractive points, which also correspond to the energy minima of the Ising functional
E(s) = −(1/2) ∑_ij w_ij s_i s_j + ∑_i θ_i s_i .   (24)
Such a system still allows significant freedom in the rule specifying the matrix W, given a set of patterns to be stored: intuitively, we need to "program" the minima of E (choosing the appropriate W will suffice, as the local thresholds can be set to zero) to be the target patterns, ideally without storing too many unwanted, so-called spurious, patterns. This, and other properties of a useful storing rule, that is, a rule which specifies W given the patterns, are given as follows (Storkey, 1997): a) locality: an update of a particular connection should depend only on the information available to the neurons on either side of the connection 96 ; b) incrementality: the rule should allow the updating of the matrix W to store an additional pattern based only on the new pattern and W itself 97 ; c) immediateness: the rule should not require a limiting computational process for the evaluation of the weight matrix (rather, it should be a simple computation of few steps). The most critical property of a useful rule is that it d) results in a CAM with a non-trivial capacity: it should be capable of storing and retrieving some number of patterns, with controllable error (which includes few spurious patterns, for instance). 95 The updates can be synchronous, meaning all neurons update their values at the same time, or asynchronous, in which case usually a random order is assigned. In most analyses, and here, asynchronous updates are assumed. 96 Locality matters as the lack of it prohibits parallelizable architectures. 97 In particular, it should not be necessary to have external memory storing e.g. all stored patterns, which would render HN-based CAM memories undesirably non-adaptive and inflexible.
The historically first rule, the Hebbian rule, satisfies all the conditions above and is given by a simple recurrence relation: for the set of patterns {x^k}_k the weight matrix is given by w_ij = ∑_k x_i^k x_j^k / M (where x_j^k is the j-th sign of the k-th pattern, and M is the number of patterns). The capacity of HNs under standard recall and Hebbian updates has been investigated from various perspectives, and in the context of absolute capacity (the asymptotic ratio of the number of patterns that can be stored without error to the number of neurons, as the network size tends to infinity), it is known to scale as O(n/(2 ln(n))). A well-known result in the field improves on this to a capacity of O(n/√(2 ln(n))), achieved by a different rule introduced by Storkey (Storkey, 1997), while maintaining all the desired properties. Here we should emphasize that, in broad terms, the capacity is typically (sub)linear in n. Better results can be achieved in the classical setting if some of the assumptions a)-c) are dropped, but this is undesirable.
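The Hebbian rule and the asynchronous recall dynamics described above fit in a few lines. A toy sketch (function names are ours; not an efficient implementation):

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian rule: w_ij = sum_k x_i^k x_j^k / M, with zero diagonal."""
    X = np.array(patterns)            # shape (M, n), entries in {+1, -1}
    W = X.T @ X / len(patterns)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, y, sweeps=5):
    """Asynchronous recall with zero thresholds and sign(0) = +1."""
    s = np.array(y, dtype=float)
    for _ in range(sweeps):
        for k in range(len(s)):       # update neurons one at a time
            s[k] = 1.0 if W[k] @ s >= 0 else -1.0
    return s.astype(int)

x1 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
x2 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = hebbian_weights([x1, x2])
noisy = x1.copy()
noisy[0] = -1                          # corrupt one sign of the stored pattern
```

Running `recall(W, noisy)` drives the corrupted input back into the attractor `x1`, which is the content-addressable retrieval the text describes.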
b. Quantization of Hopfield-based CAMs In early works (Rigatos and Tzafestas, 2006, 2007), the authors have considered fuzzy and probabilistic learning rules, and have broadly argued that a) such probabilistic rules correspond to a quantum deliberation process and that b) the resulting CAMs can have significantly larger capacities. However, more rigorous (and fully worked out) results were shown more recently, by combining HNs with ideas from adiabatic QC. The first idea, presented in (Neigovzen et al., 2009), connects HNs and quantum annealing. Recall that the HN can be characterized by the Ising functional E(s) = −(1/2) ∑_ij w_ij s_i s_j (see Eq. 24), where the stored patterns correspond to local minima, and where we have, without loss of generality, assumed that the local thresholds are zero. The classical recall corresponds to the problem of finding the local minima closest to the input pattern y. However, an alternative system, with similar features, is obtained if the input pattern is added in place of the local thresholds:
E(s, y) = −(1/2) ∑_ij w_ij s_i s_j − Γ ∑_i y_i s_i .
Intuitively, this lowers the energy landscape of the system specifically around the input pattern configuration. But then, the stored pattern (previously a local minimum) which is closest to the input pattern is the most likely candidate for a global minimum. Further, the problem of finding such configurations can now be tackled via quantum annealing: we define the quantum "memory Hamiltonian" naturally as H_mem = −(1/2) ∑_ij w_ij σ^z_i σ^z_j, and the HN Hamiltonian, given input y, as H_p = H_mem + Γ H_inp, where the input Hamiltonian is given by
H_inp = − ∑_i y_i σ^z_i .
The quantum recall is obtained by adiabatic evolution via the Hamiltonian trajectory H(t) = Λ(t) H_init + H_p, where Λ(0) is large enough that H_init dominates, and Λ(1) = 0. The system is initialized in the ground state of the (arbitrary and simple) Hamiltonian H_init, and if the evolution in t is slow enough to satisfy the criteria of the adiabatic theorem, the system ends in the ground state of H_p. This proposal exchanged local optimization (classical retrieval) for global optimization. While this is generally a bad idea 98 , what is gained is a quantum formulation of the problem which can be run on adiabatic architectures, and also the fact that this system can return quantum superpositions of recalled patterns if multiple stored patterns are approximately equally close to the input, which can be an advantage (Neigovzen et al., 2009). However, the system above does not behave exactly the same as the classical recall network, which was further investigated in subsequent work (Seddiqi and Humble, 2014) analysing the sensitivity of the quantum recall under various classical learning rules. Further, in (Santra et al., 2016) the authors have provided an extensive analysis of the capacity of the Hebb-based HN, but under the quantum annealing recall proposed in (Neigovzen et al., 2009), showing, surprisingly, that this model yields exponential storage capacity under the assumption of random memories. This result stands in apparent stark contrast to the standard classical capacities reported in textbooks 99 . Regarding near-term implementability, in (Santra et al., 2016) the authors have investigated the suitability of the Chimera graph-based architectures of the D-Wave programmable quantum annealing device for quantum recall HN tasks, showing potential for demonstrable quantum improvements in near-term devices.
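The effect of adding the input pattern as a bias term can be checked classically by brute force on a toy instance: the global minimum of E(s, y) recovers the stored pattern closest to a corrupted input. This sketch (our own helpers; the annealer would perform the same global search physically) reuses Hebbian weights for two orthogonal patterns:

```python
import itertools
import numpy as np

def biased_energy(W, s, y, gamma):
    """E(s, y) = -1/2 * sum_ij w_ij s_i s_j - Gamma * sum_i y_i s_i."""
    return -0.5 * (s @ W @ s) - gamma * (y @ s)

def global_recall(W, y, gamma=0.5):
    """Brute-force global minimum of the input-biased energy over {-1,+1}^n."""
    return min((np.array(s) for s in itertools.product([-1, 1], repeat=len(y))),
               key=lambda s: biased_energy(W, s, y, gamma))

# Hebbian weights for two orthogonal stored patterns (toy instance).
x1 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
x2 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
X = np.stack([x1, x2])
W = X.T @ X / 2.0
np.fill_diagonal(W, 0.0)

noisy = x1.copy()
noisy[0] = -1                          # corrupt one sign of x1
```

Here `global_recall(W, noisy)` returns `x1`: the bias term tilts the landscape so that the nearest stored pattern becomes the global, not merely a local, minimum.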
C. Run-time improvements: computational complexity
Executive summary: The theory of quantum algorithms has provided examples of computational speed-ups for decision problems, various functional problems, oracular problems, sampling tasks, and optimization problems. This section presents quantum algorithms which provide speed-ups for learning-type problems. The two main classes of approaches differ in the underlying computational architecture: a large class of algorithms relies on quantum annealers, which may not be universal for QC, but may natively solve certain sub-tasks important in the context of ML. These approaches then have an increased likelihood of being realizable with near-term devices. In contrast, the second class of approaches assumes universal quantum computers, and often data prepared and accessible in a quantum database, but offers up to exponential improvements. Here we distinguish between quantum amplitude amplification and amplitude encoding approaches, which, with very few exceptions, cover all quantum algorithms for supervised and unsupervised learning.
The most prolific research area within quantum ML in the last few years has focused on identifying ML algorithms, or their computationally intensive subroutines, which may be sped up using quantum computers. While there are multiple natural ways to classify the performed research, an appealing first-order delineation follows the types of quantum computational architectures assumed 100 . Here we can identify research which is focused on using quantum annealing architectures, which are experimentally well justified and even commercially available in recent times (mostly in terms of the D-Wave system set-ups). In most of such research, the annealing architecture will be utilized to perform a classically hard optimization problem usually emerging in the training phases of many classical algorithms. An involved part of such approaches will often be a meaningful rephrasing of such ML optimization to a form which an annealing architecture can (likely) handle. While the overall supervised task comprises multiple computational elements, it is only the optimization that will be treated by a quantum system in these proposals. The second approach to speeding up ML algorithms assumes universal quantum computation capabilities. Here, the obtained algorithms are typically expressed in terms of quantum circuits.
99 At this point it should be mentioned that exponential capacities of HNs have recently been proposed for fully classical systems, by considering different learning rules (Hillar and Tran, 2014;Karbasi et al., 2014), which also tolerate moderate noise. The relationship to, and potential advantages of, the quantum proposals remain to be elucidated. 100 Other classification criteria could be according to tasks, i.e. supervised vs. unsupervised vs. generative models etc., or according to the underlying quantum algorithms used, e.g. amplitude amplification, or equation solving.
For most proposals in this research line, additional assumptions are needed to guarantee actual speed-ups. For instance, most proposals can only guarantee improvements if the data which is to be analyzed is already present in a type of quantum oracle or quantum memory, and, more generally, if certain quantum states, which depend on the data, can be prepared efficiently. The overhead of initializing such a memory in the first place is not counted, but this may not be unreasonable as, in practice, the same database is most often used for a great number of analyses.
Other assumptions may also be placed on the structure of the dataset itself, such as low condition numbers of certain matrices containing the data (Aaronson, 2015).
Speed-up via adiabatic optimization
Quantum optimization techniques play an increasingly important role in quantum ML. Here, we can roughly distinguish two flavours of approaches, which differ in which computationally difficult aspect of training a classical model is tackled by adiabatic methods. In the (historically) first approach, we deal with clear-cut optimization in the context of binary classifiers, and more specifically, boosting (see II.A.3). Since then, it has been shown that annealers can also help by generating samples from hard-to-simulate distributions. We will mostly focus on the historically first approaches, and only briefly mention the other, more recent, results.
a. Optimization for boosting The representative line of research, which also initiated the development of this topic of quantum-enhanced ML based on adiabatic quantum computation, focuses on a particular family of optimization problems called quadratic unconstrained binary optimization (QUBO) problems of the form
x^* = (x^*_1, . . . , x^*_n) = argmin_{(x_1,...,x_n)} ∑_{i<j} J_ij x_i x_j ,   x_k ∈ {0, 1} ,   (25)
specified by a real matrix J. QUBO problems are equivalent to the problem of identifying lowest-energy states of the Ising functional 101 E(s) = −(1/2) ∑_ij J_ij s_i s_j + ∑_i θ_i s_i, provided we make no assumptions on the underlying lattice. Modern annealing architectures provide means for tackling the problem of finding such ground states using adiabatic quantum computation. Typically we are dealing with systems which can implement a tunable Hamiltonian of the form
H(t) = \underbrace{-A(t) \sum_i \sigma_i^x}_{H_{\mathrm{initial}}} + \underbrace{B(t) \sum_{ij} J_{ij}\, \sigma_i^z \sigma_j^z}_{H_{\mathrm{target}}}, \tag{26}
where A, B are smooth positive functions such that A(0) ≫ B(0) and B(1) ≫ A(1); that is, by tuning t sufficiently slowly, we can perform adiabatic preparation of the ground state of the Ising Hamiltonian $H_{\mathrm{target}}$, thereby solving the optimization problem. In practice, the parameters $J_{ij}$ cannot be chosen fully freely (e.g. the connectivity is restricted to the so-called Chimera graph (Hen et al., 2015) in D-Wave architectures), and the realized interaction-strength values have limited precision and accuracy (Neven et al., 2009a; Bian et al., 2010), but we will ignore this for the moment. In general, finding ground states of the Ising model is functional NP-hard [102], which is likely beyond the reach of quantum computers. However, annealing architectures may still have many advantages: it is believed they may provide speed-ups for many, or at least for average, instances, and/or that they may provide good heuristic methods and near-optimal solutions [103]. In other words, any aspect of optimization occurring in ML algorithms which has an efficient mapping to (non-trivial) instances of QUBO problems, specifically those which can be realized by experimental set-ups, is a valid candidate for quantum improvements. Such optimization problems have been identified in a number of contexts, mostly dealing with training binary classifiers, and thus belong to the class of supervised learning problems. The first setting considers the problem of building optimal classifiers from linear combinations of simple hypothesis functions, which minimize empirical error while controlling the model complexity through a so-called regularization term. This is the common optimization setting of boosting (see II.A.3), and, with appropriate mathematical gymnastics and a few assumptions, it can be reduced to a QUBO problem. The overarching setting of this line of works can be expressed in the context of training a binary classifier by combining weaker hypotheses.
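As a concrete (classical) illustration of the QUBO problem of Eq. (25), the sketch below minimizes the quadratic form by brute force; the function name and the example couplings are ours, and an annealer would of course replace this exhaustive search with (approximate) adiabatic evolution.

```python
import itertools
import numpy as np

def solve_qubo_bruteforce(J):
    """Exhaustively minimize sum_{i<j} J[i,j]*x_i*x_j over x in {0,1}^n.

    A classical stand-in for what an annealer approximates; only feasible
    for small n, since the search space contains 2^n candidate solutions.
    """
    n = J.shape[0]
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = sum(J[i, j] * x[i] * x[j]
                for i in range(n) for j in range(i + 1, n))
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Positive couplings on a triangle: any two simultaneously active
# variables are penalized, so optimal solutions activate at most one.
J = np.array([[0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
x, e = solve_qubo_bruteforce(J)
```

The exponential cost of this loop is exactly what motivates handing such instances to annealing hardware.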
For this setting, consider a dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^M$, $x_i \in \mathbb{R}^n$, $y_i \in \{-1, 1\}$, and a set of hypotheses $\{h_j\}_{j=1}^K$, $h_j : \mathbb{R}^n \to \{-1, 1\}$. For a given weight vector $w \in \mathbb{R}^K$ we define the composite classifier $h^c_w(x) = \operatorname{sign}\left(\sum_k w_k h_k(x)\right)$.
The training of the composite classifier is achieved by optimizing the vector w so as to minimize misclassification on the training set and to decrease the risk of overtraining. The misclassification cost is specified via a loss function L, which depends on the dataset and, in the boosting context, on the hypothesis set. The overtraining risk, which tames the complexity of the model, is controlled by a so-called regularization term R. Formally, we are solving $\operatorname{argmin}_w\, L(w; \mathcal{D}) + R(w)$.
This constitutes the standard boosting framework exactly, but it is also closely related to the training of certain SVMs, i.e., hyperplane classifiers [104]. In other words, quantum optimization techniques which work in the boosting setting can also help with hyperplane classification.
There are a few well-justified choices for L and R, leading to classifiers with different properties. Often, the best choices (the definition of which depends on the context) lead to hard optimization (Long and Servedio, 2010), and some of those can be reduced to QUBOs, though not straightforwardly.
In the pioneering paper on the topic (Neven et al., 2008), Neven and co-authors consider the boosting setting. The regularization term is chosen to be proportional to the 0-norm, which counts the number of non-zero entries, that is, $R(w, \lambda) = \lambda \|w\|_0$. The parameter λ controls the relative importance of regularization in the overall optimization task. A common choice for the loss function would be the 0-1 loss function $L_{0\text{-}1}$, optimal in some settings, given by $L_{0\text{-}1}(w) = \sum_{j=1}^M \Theta\left(-y_j \sum_k w_k h_k(x_j)\right)$ (where Θ is the step function), which simply counts the number of misclassifications. This choice is reasonably well motivated in terms of performance, and is likely to be computationally hard.

[102] Finding ground states is not a decision problem, so, technically, it is not correct to state it is NP-hard. The class functional NP (FNP) is the extension of the NP class to functional (relational) problems.
[103] Indeed, one of the features of adiabatic models in general is that they provide an elegant means for (generically) producing approximate solutions, by simply performing the annealing process faster than prescribed by the adiabatic theorem.
[104] If we allow the hypotheses $h_j$ to attain continuous real values, then by setting $h_j$ to be the projection onto the j-th component of the input vector, so $h_j(x) = x_j$, the combined classifier attains the inner-product-threshold form $h^c_w(x) = \operatorname{sign}(w^T x)$, which contains hyperplane classifiers; the only component missing is the hyperplane offset b, which can be incorporated into the weight vector by increasing the dimension by 1.
With an appropriate discretization of the weights w, which the authors argue likely does not hurt performance, the above forms a solid candidate for a general adiabatic approach. However, it does not fit the QUBO structure (which has only quadratic terms), and hence cannot be tackled using existing architectures. To achieve the desired QUBO structure, the authors impose two modifications: they opt for a quadratic loss function $L_2(w) = \sum_{j=1}^M |y_j - \sum_k w_k h_k(x_j)|^2$, and restrict the weights to binary values (although this can be circumvented to an extent). Such a system is also tested using numerical experiments. In a follow-up paper (Neven et al., 2009a), the same team generalized the initial proposal to accommodate another practical issue: problem size. Available architectures allow optimization over a few thousand variables, whereas in practice the number of hypotheses one optimizes over (K) may be significantly larger. To resolve this, the authors show how to break a large optimization problem into more manageable chunks while maintaining (experimentally verified) good performance. These ideas were also tested in an actual physical architecture (Neven et al., 2009b), and later combined and refined into a more general, iterative algorithm, again tested using actual quantum architectures. While $L_{0\text{-}1}$ loss functions were known to be good choices, they were not the norm in practice, as they lead to non-convex optimization, so convex functions were preferred. However, around 2010 it became increasingly clear that convex loss functions are provably bad choices: in the seminal paper (Long and Servedio, 2010), Long and Servedio [105] showed that boosting with convex optimization completely fails in noisy settings. Motivated by this, the authors of follow-up work re-investigated D-Wave-type architectures and identified a reduction which allows a non-convex optimization.
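The quadratic-loss-plus-binary-weights reduction can be made concrete: expanding $L_2(w) + \lambda\|w\|_0$ and using $w_k^2 = w_k$ for binary weights yields exactly the pairwise couplings and linear biases of a QUBO, up to the constant $\sum_j y_j^2$. A minimal sketch (function names ours):

```python
import numpy as np

def boosting_qubo(H, y, lam):
    """Map quadratic-loss boosting with binary weights to QUBO coefficients.

    H[j, k] = h_k(x_j) is the prediction of weak hypothesis k on sample j.
    Expanding L2(w) = sum_j (y_j - sum_k w_k H[j, k])^2 + lam * ||w||_0
    and using w_k^2 = w_k for w_k in {0, 1} gives an energy
    E(w) = sum_{k<l} Q[k, l] w_k w_l + c . w  (up to the constant sum_j y_j^2).
    """
    G = H.T @ H                            # Gram matrix of the hypotheses
    Q = 2.0 * G                            # pairwise couplings (k < l ordering)
    c = np.diag(G) - 2.0 * H.T @ y + lam   # linear (on-site) terms
    np.fill_diagonal(Q, 0.0)
    return Q, c

def qubo_energy(Q, c, w):
    """Evaluate the QUBO energy for a binary weight vector w."""
    K = len(w)
    quad = sum(Q[k, l] * w[k] * w[l]
               for k in range(K) for l in range(k + 1, K))
    return quad + float(c @ w)
```

One can check directly that `qubo_energy(Q, c, w)` plus the constant $\sum_j y_j^2$ reproduces the regularized quadratic loss for any binary w, which is what makes the problem annealer-compatible.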
Expressed in the hyperplane-classification setting (which, as explained, is structurally equivalent to the boosting setting), they identify a reduction which (indirectly) implements the non-convex function $l_q(x) = \min\{(1-q)^2, (\max(0, 1-x))^2\}$. This function is called the q-loss function, where q is a real parameter. The implementation of the q-loss function allows for the realization of optimization relative to the total loss $L_q(w, b; \mathcal{D}) = \sum_j l_q(y_j(w^T x_j + b))$. The resulting regularization term is in this case proportional to the 2-norm of w, instead of the 0-norm as in the previous examples, which may be sub-optimal. Nonetheless, the above forms a prime example where quantum architectures lead to ML settings which would not have been explored in the classical case (the loss $L_q$ is unlikely to appear naturally in many settings) yet are well motivated, as a) the function is non-convex and thus has the potential to circumvent all the no-go results for convex functions, and b) the optimization process can be realized in a physical system. The authors perform a number of numerical experiments demonstrating the advantages of this choice of a non-convex loss function when analysing noisy data, which is certainly promising. In later work (Denchev et al., 2015), it was also suggested that loss-regularization combinations which are realizable in quantum architectures can be used for so-called totally corrective boosting with cardinality penalization, which is believed to be classically intractable. The details go beyond the scope of this review, but we can at least provide a flavour of the problem. In corrective boosting, the algorithm updates the weights w essentially one step at a time. In totally corrective boosting, at the t-th step of the boosting algorithm optimization, t entries of w are updated simultaneously. This is known to lead to better-regularized solutions, but the optimization is harder.
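A few lines suffice to see why the q-loss evades the convexity no-go results: it agrees with the squared hinge loss near the decision boundary but saturates at $(1-q)^2$ for badly misclassified points, so a single outlier contributes only a bounded penalty. A small sketch (variable names ours):

```python
import numpy as np

def q_loss(x, q):
    """q-loss of Denchev et al.: squared hinge near the margin, but
    saturating at (1 - q)^2, so an outlier's penalty is bounded."""
    return np.minimum((1.0 - q) ** 2, np.maximum(0.0, 1.0 - x) ** 2)

margins = np.array([-10.0, 0.0, 1.0, 2.0])   # y_j * (w.x_j + b) values
losses = q_loss(margins, q=-1.0)             # -> [4., 1., 0., 0.]
```

The saturation makes the function non-convex: the chord between the values at margins -10 and 0 lies below the curve at the midpoint, violating convexity.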
Cardinality penalization pertains to explicitly using the 0-norm for the regularization (discussed earlier), rather than the more common 1-norm. This, too, leads to harder optimization, which may be treated using an annealing architecture. In (Babbush et al., 2014), the authors significantly generalized the scope of loss functions which can be embedded into quantum architectures by observing that any polynomial unconstrained binary optimization problem can, with small overhead, be mapped onto a (slightly larger) QUBO problem. This, in particular, opens up the possibility of implementing odd-degree polynomials, which are non-convex and can approximate the 0-1 loss function. This approach introduced new classes of unusual yet promising loss functions.
b. Applications of quantum boosting Building on the "quantum boosting" architecture described above, in (Pudenz and Lidar, 2013) the authors explore the possibility of realizing (aside from boosting) anomaly detection, specifically envisioned for the computationally challenging problem of software verification and validation [106]. In the proposed learning step, the authors use quantum optimization (boosting) to learn the characteristics of the program being tested. In the novel testing step, the authors modify the target Hamiltonian so as to lower the energy of the states which encode input-output pairs where the real and the ideal software differ. These can then be prepared in superposition (i.e., one can prepare a state which is a superposition over the inputs on which the program P will produce an erroneous output), similarly to the previously mentioned proposals in the context of adiabatic recall of superpositions in HNs (Neigovzen et al., 2009). c. Beyond boosting Beyond the problems of boosting, annealers have been shown to be useful for so-called Bayesian network structure learning problems (O'Gorman et al., 2015), as their training can also be reduced to QUBOs. Further, annealing architectures can also be used for the training of deep neural networks, relying on sampling rather than optimization. A notable approach to this is based on the fact that the training of deep networks usually relies on the use of so-called generative deep belief networks, which are, essentially, restricted BMs with multiple layers [107]. The training of deep belief networks, in turn, is the computational bottleneck, as it requires sampling from hard-to-generate distributions, which may be more efficiently prepared using annealing architectures; see e.g. (Adachi and Henderson, 2015). Further, novel ideas introducing fully quantum BM-like models have also been proposed (Amin et al., 2016).
Further, in recent work (Sieberer and Lechner, 2017), which builds on the flexible construction in (Lechner et al., 2015), the authors have shown how to achieve programmable adiabatic architectures, which allow running algorithms where the weights themselves are in superposition. This possibility is also sure to inspire novel QML ideas. Moving on from BMs, in recent work (Wittek and Gogolin, 2017), the authors have also shown how suitable annealing architectures may be useful to speed up probabilistic inference in so-called Markov logic networks [108]. This task involves the estimation of partition functions arising from statistical models, concretely Markov random fields, which include the Ising model as a special case. Quantum annealing may speed up this sub-task. More generally, the ideas that restricted, even simple, quantum systems which may be realizable with current technologies could implement information-processing elements useful for supervised learning are beginning to be explored in settings beyond annealers. For instance, in (Schuld et al., 2017), a simple interferometric circuit is used for the efficient evaluation of distances between data vectors, useful for classification and clustering. A more complete account of these recent ideas is beyond the scope of this review.

[106] A software is represented as a map P from input to output spaces, here specified as a subset of the space of pairs (x_input, x_output). An implemented map (software) P is differentiated from the ideal software P̃ by the mismatches in the defining pairs.
[107] In other words, they are slightly less restricted BMs, with multiple layers and no within-layer connectivity.
[108] Markov logic networks (Richardson and Domingos, 2006) combine first-order logic, as used for knowledge representation and reasoning, with statistical modelling: essentially, the world is described via first-order sentences (a knowledge base), which gives rise to a graphical statistical model (a Markov random field), where correlations stem from the relations in the knowledge base.
Speed-ups in circuit architectures
One of the most important applications of ML in recent times has been in the context of data mining and the analysis of so-called big data. The most impressive improvements in this context have been achieved by proposing specialized quantum algorithms which solve particular ML problems. Such algorithms assume the availability of full-blown quantum computers, and have been tentatively probed since the early 2000s. In recent times, however, we have witnessed a large influx of ideas. Unlike the situation in the context of quantum annealing, where an optimization subroutine alone was run on a quantum system, in most of the approaches of this section the entire algorithm, and even the dataset, may be quantized. The ideas for quantum enhancements for ML can roughly be classified into two groups: a) approaches which rely on Grover's search and amplitude amplification to obtain up-to-quadratic speed-ups, and b) approaches which encode relevant information into quantum amplitudes and have the potential for even exponential improvements. The second group forms perhaps the most developed research line in quantum ML, and collects a plethora of quantum tools, most notably quantum linear algebra, utilized in quantum ML proposals.
a. Speed-ups by amplitude amplification In (Anguita et al., 2003), it was noticed that the training of support vector machines may be a hard optimization task, with no obviously better approach than brute-force search. For such cases of optimization with no structure, QIP offers at least quadratic relief, in the guise of variants of Grover's search algorithm (Grover, 1996) or its application to minimum finding (Durr and Hoyer, 1999). This idea predates, and is in spirit similar to, some of the early adiabatic-based proposals of the previous subsection, but the methodology is substantially different. The potential of quadratic improvements stemming from Grover-like search mechanisms was explored more extensively in (Aïmeur et al., 2013), in the context of unsupervised learning tasks. There the authors assume access to a black-box oracle which computes a distance measure between any two data points. Using this, combined with amplitude amplification techniques (e.g. minimum finding (Durr and Hoyer, 1999)), the authors achieve up-to-quadratic improvements in key subroutines used in clustering (unsupervised learning) tasks. Specifically, improvements are obtained in algorithms performing minimum spanning tree clustering, divisive clustering and k-medians clustering [109]. Additionally, the authors show that quantum effects allow for a better parallelization of clustering tasks, by constructing a distributed version of Grover's search. This construction may be particularly relevant, as large databases can often be distributed. More recently, in (Wiebe et al., 2014a), the authors consider the problem of training deep (more than two-layered) BMs. As we mentioned earlier, one of the bottlenecks of exactly training BMs stems from the fact that it requires the estimation of probabilities of certain equilibrium distributions.
Computing this analytically is typically not possible (it is as hard as computing partition functions), and sampling approaches are costly, as they require attaining the equilibrium distribution and many iterations to reliably estimate small values. This is often circumvented by using proxy solutions (e.g. relying on contrastive divergence) to train approximately, but it is known that these methods are inferior to exact training. In (Wiebe et al., 2014a), a quantum algorithm is devised which prepares coherent encodings of the target distributions, relying on quantum amplitude amplification, often attaining quadratic improvements in the number of training points, and even exponential improvements in the number of neurons in some regimes. Quadratic improvements have also been obtained in pure data-mining contexts, specifically in association rules mining (Yu et al., 2016), which, roughly speaking, identifies correlations between objects in large databases [110]. As our final example in the class of quantum algorithms relying on amplitude amplification, we mention the algorithm for training perceptrons. Here, quantum amplitude amplification was used to quadratically speed up training, but, interestingly, also to quadratically reduce the error probability. Since perceptrons constitute special cases of SVMs, this result is similar in motivation to the much older proposal (Anguita et al., 2003), but relies on more modern and involved techniques.
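The quadratic nature of these speed-ups is easy to quantify: with t marked items among N, each Grover iteration rotates the state by $2\theta$, where $\theta = \arcsin\sqrt{t/N}$, so roughly $\frac{\pi}{4}\sqrt{N/t}$ iterations suffice where classical unstructured search needs O(N/t) queries. A minimal sketch of this standard bookkeeping (function names ours):

```python
import math

def grover_success(N, t, k):
    """Success probability after k Grover iterations with t marked items
    among N: sin^2((2k + 1) * theta), where theta = arcsin(sqrt(t / N))."""
    theta = math.asin(math.sqrt(t / N))
    return math.sin((2 * k + 1) * theta) ** 2

def optimal_iterations(N, t=1):
    """Iteration count bringing the rotation angle closest to pi/2."""
    theta = math.asin(math.sqrt(t / N))
    return round(math.pi / (4 * theta) - 0.5)

N = 1 << 10                    # 1024-element unstructured search space
k = optimal_iterations(N)      # ~ (pi/4) * sqrt(N), i.e. 25 iterations
p = grover_success(N, 1, k)    # near-certain success
```

Overshooting k is as harmful as undershooting, which is why algorithms built on amplitude amplification must control the iteration count carefully.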
b. Precursors of amplitude encoding In an early pioneering, and often overlooked, work (Schützhold, 2003), Schützhold proposed an interesting application of QC to pattern recognition problems, which addresses many ideas that have only been investigated, and re-invented, by the community relatively recently. The author considers the problem of identifying "patterns" in bitmap images, specified by a binary-valued function f over pixel coordinates, with f(x, y) = 1 whenever the pixel at (x, y) is set. The function f is given as a quantum oracle $|x\rangle|y\rangle|b\rangle \xrightarrow{U_f} |x\rangle|y\rangle|b \oplus f(x, y)\rangle$. The oracle is used in quantum parallel (applied to a superposition of all coordinates) and conditioned on the bit-value function being 1 (this process succeeds with constant probability whenever the density of set points is constant), leading to the state $|\psi\rangle = \mathcal{N} \sum_{x, y\,:\,f(x,y)=1} |x\rangle|y\rangle$, where $\mathcal{N}$ is a normalization factor. Note that this state is proportional to the vectorized bitmap image itself, written in the computational basis. Next, the author points out that "patterns", i.e. repeating macroscopic features, can often be detected by applying the discrete Fourier transform to the image vector, which has classical complexity O(NM log(NM)). However, the quantum Fourier transform (QFT) can be applied to the state $|\psi\rangle$ using exponentially fewer gates. The author proceeds to show that measurements of the QFT-transformed state may yield useful information, such as pattern localization. This work is innovative in a few aspects. First, the author utilized the encoding of data points (here strings of binary values) into amplitudes by using a quantum memory, in a manner related to the applications in the context of content-addressable memories discussed in VI.B.1. It should be pointed out, however, that in the present application of amplitude encoding, non-binary amplitudes also have a clear meaning (in, say, grayscale images), although this is not explicitly discussed by the author.
Second, in contrast to all previous proposals, the author shows the potential for a quantifiable exponential computational-complexity improvement for a family of tasks. However, this is all contingent on having access to the pre-filled database ($U_f$), the loading of which would nullify any advantage. Aside from the fact that this may be considered a one-off overhead, Schützhold discusses physical means of loading data from optical images in a quantum-parallel fashion, which may be effectively efficient.
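The Fourier step itself is easy to reproduce classically on small examples. Below, a 1D binary "image" with a stripe every 8 pixels is amplitude-encoded and Fourier transformed (NumPy's FFT standing in for the QFT); the spectrum peaks at the stripe frequency, which is the kind of pattern information measurements of the transformed state would reveal. All names are ours.

```python
import numpy as np

# A 1D binary "image" with a set pixel every 8 positions; its Fourier
# spectrum peaks at the stripe frequency, revealing the pattern period.
n = 64
image = np.zeros(n)
image[::8] = 1.0                      # f(x) = 1 on the pattern
psi = image / np.linalg.norm(image)   # amplitude-encoded state |psi>
spectrum = np.abs(np.fft.fft(psi)) ** 2
peak = int(np.argmax(spectrum[1:])) + 1   # skip the DC component
```

Classically this transform costs O(n log n) on the full vector, while the QFT acts on only log2(n) qubits; the catch, as discussed above, is preparing |psi> in the first place.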
c. Amplitude encoding: linear algebra tools The very basic idea of amplitude encoding is to treat the states of N-level quantum systems as data vectors themselves. More precisely, given a data vector $x \in \mathbb{R}^N$, the amplitude encoding constitutes the normalized quantum state $|x\rangle = \sum_i x_i |i\rangle / \|x\|$, where it is often also assumed that the norm of the vector x can be accessed separately. Note that N-dimensional data points are encoded into the amplitudes of $n \in O(\log(N))$ qubits. Any polynomial-sized circuit applied to the n-qubit register encoding the data thus constitutes only a polylogarithmic computation relative to the data-vector size, and this is at the basis of all exponential improvements (also in the case of (Schützhold, 2003), discussed in the previous section) [111]. These ideas have led to a research area which could be called "quantum linear algebra" (QLA), that is, a collection of algorithms which solve certain linear algebra problems by directly encoding numerical vectors into state vectors. These quantum subroutines have then been used to speed up numerous ML algorithms, some of which we describe later in this section. QLA includes algorithms for matrix inversion and principal component analysis (Harrow et al., 2009), and many others. For didactic purposes, we will first give the simplest example, which performs the estimation of inner products in logarithmic time.
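Before the tools themselves, the encoding step can be made concrete. The sketch below (names ours) shows how a real data vector's entries become the amplitudes of $\lceil \log_2 N \rceil$ qubits, with the norm kept aside as classical side information, as assumed above.

```python
import numpy as np

def amplitude_encode(x):
    """Pad a real data vector to the next power-of-two length and
    normalize it, giving the 2^n amplitudes of an n-qubit state |x>."""
    n_qubits = max(1, int(np.ceil(np.log2(len(x)))))
    amplitudes = np.zeros(2 ** n_qubits)
    amplitudes[: len(x)] = x
    return amplitudes / np.linalg.norm(amplitudes), n_qubits

state, n_qubits = amplitude_encode(np.array([3.0, 0.0, 4.0]))
# 3 entries fit into 2 qubits; the norm (here 5) is kept separately.
```

The compression is the whole point: a million-dimensional vector occupies 20 qubits, so any polynomial-depth circuit on the register is polylogarithmic in the data size.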
Tool 1: inner product evaluation Given access to boxes which prepare the quantum states $|\psi\rangle$ and $|\varphi\rangle$, the overlap $|\langle\varphi|\psi\rangle|^2$ can be estimated to precision $\epsilon$ using $O(1/\epsilon^2)$ copies via the so-called swap test. The swap test applies a controlled-SWAP gate onto the state $|\psi\rangle|\varphi\rangle$, where the control qubit is set to the uniform superposition $|+\rangle$. The probability of "succeeding", i.e. observing $|+\rangle$ on the control after the circuit, is given by $(1+|\langle\varphi|\psi\rangle|^2)/2$, and this can be estimated by iteration (a more efficient option using quantum phase estimation is also possible). If the states $|\psi\rangle$ and $|\varphi\rangle$ encode unit-length data vectors, the success value encodes their inner product up to sign. Norms and phases can also be estimated by minor tweaks to this basic idea; in particular, the actual norms of the amplitude-encoded states will be accessible via a separate oracle, and used in the algorithms. The sample complexity of this process depends only on the precision, whereas the gate complexity is proportional to $O(\log(N))$, as that many qubits need to be control-swapped and measured. The swap test also works as expected if the reduced states are mixed and the overall state is product. This method of computing inner products offers, relative to classical vector multiplication, an exponential improvement with respect to N (if calls to the devices which generate $|\psi\rangle$ and $|\varphi\rangle$ take O(1)), at the cost of significantly worse scaling with respect to the error, as classical algorithms typically have error scaling with the logarithm of the inverse error, $O(\log(1/\epsilon))$. However, in the context of ML problems, this can constitute an excellent compromise.
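The swap-test circuit is small enough to simulate exactly for single-qubit data registers. The sketch below (names ours) builds the three-qubit statevector, applies Hadamard, controlled-SWAP, Hadamard, and reads off the success probability $(1+|\langle\varphi|\psi\rangle|^2)/2$.

```python
import numpy as np

def swap_test_prob(psi, phi):
    """Statevector simulation of the swap test on two single-qubit states.

    Returns the probability of finding the control in |0> after the
    H, controlled-SWAP, H sequence: (1 + |<phi|psi>|^2) / 2.
    """
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    cswap = np.eye(8, dtype=complex)
    cswap[[5, 6]] = cswap[[6, 5]]       # swap |1,0,1> <-> |1,1,0>
    state = np.kron(H @ np.array([1.0, 0.0]), np.kron(psi, phi))
    state = np.kron(H, np.eye(4)) @ (cswap @ state)
    return float(np.sum(np.abs(state[:4]) ** 2))
```

Repeating the measurement and counting successes gives the $O(1/\epsilon^2)$ sampling estimate; only the two data registers grow with the problem size, hence the O(log N) gate count.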
Tool 2: quantum linear system solving Perhaps the most influential technique for quantum-enhanced algorithms for ML is based on one of the quintessential problems of linear algebra: solving systems of equations. In their seminal paper (Harrow et al., 2009), the authors proposed the first algorithm for "quantum linear system" (QLS) solving, which performs the following. Consider an N × N linear system Ax = b, where κ and d are the condition number [112] and the sparsity of the Hermitian system matrix A [113]. Given (quantum) oracles providing the positions and values of the non-zero elements of A (that is, standard oracles for A as encountered in Hamiltonian simulation, cf. (Berry et al., 2015)), and an oracle which prepares the quantum state $|b\rangle$, the amplitude encoding of b (up to norm), the algorithm in (Harrow et al., 2009) prepares a quantum state $|x\rangle$ which is $\epsilon$-close to the amplitude encoding of the solution vector x. The run-time of this first algorithm is $O(\kappa^2 d^2 \log(N)/\epsilon)$. Note that the complexity scales proportionally to the logarithm of the system size, whereas any classical algorithm must scale at least with N; this offers room for exponential improvements. The original proposal in (Harrow et al., 2009) relies on Hamiltonian simulation (implementing $\exp(iAt)$), upon which phase estimation is applied. Once the phases are estimated, inversely proportional amplitudes, that is, the inverses of the eigenvalues of A, are imprinted via a measurement. It has also been noted that certain standard matrix pre-conditioning techniques are applicable in the QLS scheme (Clader et al., 2013). The linear scaling in the error in these proposals stems from the phase estimation subroutine. In more recent work, the authors also rely on the best Hamiltonian-simulation techniques, but forego the expensive phase estimation. Roughly speaking, they (probabilistically) implement a linear combination of unitaries of the form $\sum_k \alpha_k \exp(ikAt)$ applied to the input state.
This constitutes a polynomial in the unitaries which can be made to approximate the inverse operator $A^{-1}$ (in a measurement-accessible subspace) more efficiently. This, combined with numerous other optimizations, yields a final algorithm with complexity $\tilde{O}(\kappa\, d\, \mathrm{polylog}(N/\epsilon))$, which is essentially optimal. It is important to note that the apparently exponentially more efficient schemes above do not trivially imply provable computational improvements, even if we assume free access to all oracles. For instance, one issue is that the quantum algorithm outputs a quantum state, from which classical values can only be accessed by sampling; reconstructing the complete output vector this way would kill any improvement. On the other hand, certain functions of the amplitudes can be computed efficiently whose computation may still require O(N) steps classically, yielding the desired exponential improvement. Thus this algorithm is most useful as a subroutine, an intermediary step of bigger algorithms, such as those for quantum machine learning.
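The spectral logic behind these algorithms can be mirrored classically in a few lines: decompose |b⟩ in the eigenbasis of the Hermitian matrix A and re-weight each component by the inverse eigenvalue, which is what phase estimation plus the conditioned rotation accomplish coherently. A sketch (names ours), returning only the normalized direction, since that is all the quantum state |x⟩ encodes:

```python
import numpy as np

def qls_spectral_sketch(A, b):
    """Classical mirror of the QLS logic for Hermitian A: expand b in
    the eigenbasis of A and imprint inverse eigenvalues, as phase
    estimation plus the conditioned rotation do coherently. Returns
    the normalized direction of the solution."""
    evals, evecs = np.linalg.eigh(A)
    coeffs = evecs.conj().T @ b          # beta_j = <u_j|b>
    x = evecs @ (coeffs / evals)         # sum_j (beta_j / lambda_j) |u_j>
    return x / np.linalg.norm(x)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 1.0])
x_state = qls_spectral_sketch(A, b)
```

The condition number enters exactly here: a small eigenvalue inflates 1/lambda, forcing higher precision in the (quantum) eigenvalue estimates.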
Tool 3: density matrix exponentiation Density matrix exponentiation (DME) is a remarkably simple idea, with a few subtleties and, arguably, profound consequences. Consider an N-dimensional density matrix ρ. From a mathematical perspective, ρ is nothing but a positive semidefinite matrix, although it is also commonly used to denote the quantum state of a quantum system; these two are subtly different concepts. In the first reading, where ρ is a matrix (we will denote it [ρ] to avoid confusion), [ρ] is also a valid description of a physical Hamiltonian, with time-integrated unitary evolution exp(−i[ρ]t). Could one approximate exp(−i[ρ]t), having access to quantum systems prepared in the state ρ? Given sufficiently many copies ($\rho^{\otimes n}$), the obvious answer is yes: one could use full state tomography to reconstruct [ρ] to arbitrary precision, and then execute the unitary using, say, Hamiltonian simulation (efficiency notwithstanding). A significantly simpler method was identified: given any input state σ and one copy of ρ, the quantum state
\sigma' = \mathrm{Tr}_B\!\left[\exp(-i \Delta t\, S)\, (\sigma_A \otimes \rho_B)\, \exp(i \Delta t\, S)\right], \tag{28}
where S is the Hermitian operator corresponding to the quantum SWAP gate, approximates the desired time evolution to first order for small ∆t: $\sigma' = \sigma - i\Delta t\,[\rho, \sigma] + O(\Delta t^2)$. If this process is iterated using fresh copies of ρ, the target state $\sigma_\rho = \exp(-i\rho t)\,\sigma \exp(i\rho t)$ can be approximated to precision $\epsilon$ by setting ∆t to $O(\epsilon/t)$ and using $O(t^2/\epsilon)$ copies of the state ρ. DME is, in some sense, a generalization of the process of using swap tests between two quantum states to simulate aspects of a measurement specified by one of the quantum states. One immediate consequence of this result concerns Hamiltonian simulation, which can now be efficiently realized (with no dependency on the sparsity of the Hamiltonian) whenever one can prepare quantum systems in a state which is represented by the matrix of the Hamiltonian. In particular, this can be realized using qRAM-stored descriptions of the Hamiltonian whenever the Hamiltonian itself is of low rank. More generally, this also implies, e.g., that QLS algorithms can be efficiently executed when the system matrix is not sparse but is dominated by a few principal components, i.e. close to a low-rank matrix [114].
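A single DME step is easy to verify numerically for qubit states. The sketch below (names ours) exploits the fact that the SWAP operator S satisfies $S^2 = I$, so $\exp(-i\Delta t\,S) = \cos(\Delta t)\,I - i\sin(\Delta t)\,S$ exactly; the partial trace of the evolved joint state then reproduces $\sigma - i\Delta t\,[\rho,\sigma]$ up to $O(\Delta t^2)$.

```python
import numpy as np

def dme_step(sigma, rho, dt):
    """One DME step (Eq. 28) for qubit states. Since the SWAP operator S
    satisfies S^2 = I, exp(-i*dt*S) = cos(dt)*I - i*sin(dt)*S exactly."""
    S = np.eye(4, dtype=complex)[[0, 2, 1, 3]]         # two-qubit SWAP
    U = np.cos(dt) * np.eye(4) - 1j * np.sin(dt) * S
    joint = U @ np.kron(sigma, rho) @ U.conj().T
    # Partial trace over the second (rho-carrying) subsystem:
    return joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
```

Each iteration consumes one fresh copy of rho, which is where the O(t^2/eps) copy count in the complexity statement comes from.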
Remark: Algorithms for QLS, inner product evaluation, quantum PCA and, consequently, almost all quantum algorithms listed in the remainder of this section also assume "pre-loaded databases", which allow accessing information in quantum parallel, and/or the efficient preparation of amplitude-encoded states. The problem of parallel access, or even the storing of quantum states, has been addressed and mostly resolved using so-called quantum random access memory (qRAM) architectures (Giovannetti et al., 2008) [115]. The same qRAM structures can also be used to realize the oracles utilized in the approaches based on quantum search. However, having access to quantum databases pre-filled with classical data does not a priori imply that quantum amplitude-encoded states can also be generated efficiently, which is, at least implicitly, assumed in most works below. For a separate discussion of the cost of some of these assumptions, we refer the reader to (Aaronson, 2015).
d. Amplitude encoding: algorithms With all the quantum tools in place, we can now present a selection of quantum algorithms for various supervised and unsupervised learning tasks, grouped according to the class of problems they solve. The majority of the proposals of this section follow a clear paradigm: the authors investigate established ML approaches and identify those where the computationally intensive parts can be reduced to linear algebra problems, most often diagonalization and/or equation solving. In this sense, further improvements in quantum linear algebra approaches are likely to lead to new results in quantum ML. As a final comment, all the algorithms below pertain to discrete-system implementations. Recently, in (Lau et al., 2017), the authors have also considered continuous-variable variants of qRAM, QLS and DME, which immediately lead to continuous-variable implementations of all the quantum tools and most of the quantum-enhanced ML algorithms listed below.
Regression algorithms One of the first proposals for quantum enhancements tackled linear regression problems, specifically least-squares fitting, and relied on QLS. In least-squares fitting, we are given N M-dimensional real data points paired with real labels, so $(x_i, y_i)_{i=1}^N$, $x_i = (x_i^j)_j \in \mathbb{R}^M$, $y = (y_i)_i \in \mathbb{R}^N$.

[114] Since a density operator is normalized, the eigenvalues of data matrices are rescaled by the dimension of the system. If the eigenvalues are close to uniform, they are rendered exponentially small in the qubit number. This then requires exponential precision in DME, which would offset any speed-ups. However, if the spectrum is dominated by a constant number of terms, the precision required, and the overall complexity, is again independent of the dimension, allowing overall efficient algorithms.
[115] qRAM realizes the following mapping: $|addr\rangle|b\rangle \xrightarrow{qRAM} |addr\rangle|b \oplus d_{addr}\rangle$, where $d_{addr}$ represents the data stored at the address addr (⊕ represents modulo addition, as usual); this is the reversible variant of conventional RAM memories. In (Giovannetti et al., 2008), it was shown that a qRAM can be constructed such that its internal processing scales logarithmically in the number of memory cells.
In regression, y is called the response variable (also regressand or dependent variable), whereas the datapoints x_i are called predictors (or regressors, or explanatory variables). The goal of least-squares linear regression is to establish the best linear model, that is, the vector β = (β_j)_j ∈ \mathbb{R}^M given by argmin_β ‖Xβ − y‖²,
where the data matrix X collects the datapoints x_i as rows. In other words, linear regression assumes a linear relationship between the predictors and the response variables. It is well established that the solution to the above least-squares problem is given by β = X⁺y, where X⁺ is the Moore-Penrose pseudoinverse of the data matrix, which, in the case that X†X is invertible, is given by X⁺ = (X†X)⁻¹X†. The basic idea in this proposal is to apply X† onto the initial vector |y⟩ which amplitude-encodes the response variables, obtaining a state proportional to X†|y⟩. This can be done, e.g., by modifying the original QLS algorithm (Harrow et al., 2009) to imprint not the inverses of the eigenvalues but the eigenvalues themselves. Following this, the task of applying (X†X)⁻¹ (onto the generated state proportional to X†|y⟩) is interpreted as an equation-solving problem for the system (X†X)β = X†y.
The end result is a quantum state |β⟩ proportional to the solution vector β, in time O(κ⁴d³ log(N)/ε), where κ, d and ε are the condition number, the sparsity of the "symmetrized" data matrix X†X, and the error, respectively. Again, we have in general few guarantees on the behaviour of κ, and an obvious restriction on the sparsity d of the data matrix. However, whenever both are O(polylog(N)), we have a potential 116 for exponential improvements. This algorithm is not obviously useful for actually finding the solution vector β, as it is encoded in a quantum state. Nonetheless, it is useful for estimating the quality of the fit: essentially, by applying X onto |β⟩ we obtain the resulting prediction of y, which can be compared to the actual response-variable vector via a swap test, efficiently 117 . These basic ideas for quantum linear regression have since been extended in a few works. In an extensive and complementary work (Wang, 2014), the authors rely on the powerful technique of "qubitization" (Low and Chuang, 2016), and optimize the goal of actually producing the best-fit parameters β. By necessity, the complexity of their algorithm is proportional to the number of data-points M, but it is logarithmic in the data dimension N, and quite efficient in the other relevant parameters. In (Schuld et al., 2016), the authors follow the ideas of the original proposal more closely, and achieve the same results as the original work also when the data matrix is not sparse but rather low-rank. Further, they improve on the complexities by using other state-of-the-art methods. This latter work critically relies on the technique of DME.
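The classical computation that this quantum routine encodes, the pseudoinverse solution of the least-squares problem together with the cosine-overlap quality-of-fit check that the swap test estimates, can be sketched in a few lines. The data and variable names below are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 3                       # N datapoints, M features
X = rng.normal(size=(N, M))       # data matrix, rows are the datapoints x_i
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true                 # noiseless labels, so the fit is exact

# Least-squares solution via the Moore-Penrose pseudoinverse X+ = (X^T X)^-1 X^T
beta = np.linalg.pinv(X) @ y

# Quality of fit: the quantum algorithm compares |X beta> with |y> via a swap
# test; classically this overlap is the normalized inner product (cosine).
y_hat = X @ beta
fit = (y_hat @ y) / (np.linalg.norm(y_hat) * np.linalg.norm(y))
print(beta, fit)                  # beta recovers beta_true; fit is ~1
```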
Clustering algorithms In (Lloyd et al., 2013), amplitude encoding and inner product estimation are used to estimate the distance ‖u − v̄‖ between a given data vector u and the average of a collection of data points (the centroid) v̄ = (1/M) Σ_i v_i for M datapoints {v_i}_i, in time which is logarithmic in both
116 In this section we often talk about the "potential" for exponential speed-ups because some of the algorithms, as given, do not solve classical computational problems for which classical lower bounds are known. Consider the conditions which have to be satisfied for the QLS algorithm to offer exponential speed-ups. First, we need to be dealing with problems where the preparation of the initial state and the qRAM memory can be done in O(polylog(N)). Next, the problem condition number must be O(polylog(N)) as well. Assuming all this is satisfied, we are still not done: the algorithm generates a quantum state. As classical algorithms do not output quantum states, we cannot yet talk about quantum speed-ups. The quantum state can be measured, outputting at most O(polylog(N)) bits which are functions of the quantum state (more would kill the exponential speed-ups due to the printout alone). However, the hardness of computing these output bits, given all the initial assumptions, is not obvious, and needs to be proven. 117 In the paper, the authors take care to appropriately symmetrize all the matrices in the manner discussed in a previous footnote, but for clarity we ignore this technical step.
the vector length N and the number of points M. Using this as a building block, the authors also show an algorithm for k-means classification/clustering (where the computation of the distances to the centroids is the main cost), achieving an overall complexity of O(M log(MN)/ε), which may be further improved in some cases. Here, it is assumed that the amplitude-encoded state vectors, and their normalization values, are accessible via an oracle, or that they can be efficiently implemented from a qRAM storing all the values. Similar techniques, combined with coherent quantum phase estimation and Grover-based optimization, have also been used for k-nearest-neighbour algorithms for supervised and unsupervised learning.
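The core quantity the quantum subroutine estimates, the distance between a vector and a centroid, and the k-means assignment step built on it, can be sketched classically as follows. The toy data and the two-cluster split are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 50, 4                        # M datapoints of dimension N (toy data)
V = rng.normal(size=(M, N))         # the collection {v_i}
u = rng.normal(size=N)              # query vector

centroid = V.mean(axis=0)           # v_bar = (1/M) sum_i v_i
dist = np.linalg.norm(u - centroid) # the quantity the quantum routine estimates

# In k-means, the same distance is evaluated against every cluster centroid
# and the point is assigned to the nearest one (two arbitrary clusters here).
centroids = np.stack([V[:25].mean(axis=0), V[25:].mean(axis=0)])
label = int(np.argmin(np.linalg.norm(centroids - u, axis=1)))
print(dist, label)
```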
Quantum Principal Component Analysis The ideas of DME were, in the same paper, immediately applied to a quantum version of principal component analysis (PCA). PCA constitutes one of the most standard unsupervised learning techniques, useful for dimensionality reduction, but it naturally has a large scope of applications beyond ML. In quantum PCA, for a quantum state ρ one applies quantum phase estimation of the unitary exp(−i[ρ]) using DME, applied onto the state ρ itself. In the ideal case of absolute precision, given the spectral decomposition ρ = Σ_i λ_i |λ_i⟩⟨λ_i|, this process generates the state Σ_i λ_i |λ_i⟩⟨λ_i| ⊗ |λ̃_i⟩⟨λ̃_i|, where λ̃_i denotes the numerical estimate of the eigenvalue λ_i corresponding to the eigenvector |λ_i⟩. Sampling from this state recovers both the (larger) eigenvalues and the corresponding quantum states, which amplitude-encode the eigenvectors and may be used in further quantum algorithms. The recovery of the large eigenvalues and their eigenvectors constitutes the essence of classical PCA as well.
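Classically, the essence of the procedure, diagonalizing a positive semidefinite, unit-trace matrix and keeping the dominant eigenpairs, can be sketched as follows. The data and the choice of k are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy data with a clearly dominated spectrum (two strong directions).
X = rng.normal(size=(100, 5)) @ np.diag([3.0, 1.0, 0.3, 0.1, 0.05])
X -= X.mean(axis=0)

# Density-matrix analogue of the covariance: positive semidefinite, unit trace.
rho = X.T @ X
rho /= np.trace(rho)

# Spectral decomposition rho = sum_i lambda_i |lambda_i><lambda_i|.
evals, evecs = np.linalg.eigh(rho)           # ascending order
evals, evecs = evals[::-1], evecs[:, ::-1]   # sort descending

# Classical PCA keeps the top-k eigenpairs; quantum PCA samples the large
# eigenvalues, with eigenvectors amplitude-encoded, via phase estimation.
k = 2
top_vals, top_vecs = evals[:k], evecs[:, :k]
print(top_vals, evals.sum())                 # eigenvalues sum to 1 (unit trace)
```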
Quantum Support Vector Machines One of the most influential papers in quantum-enhanced ML relies on QLS and DME for the task of quantizing support vector machine algorithms. For the basic ideas behind SVMs, see section II.A.2. We focus our attention on the problem of training SVMs, as given by the optimization task in its dual form in Eq. (6), repeated here for convenience:
(\alpha_1^*, \ldots, \alpha_N^*) = \mathrm{argmax}_{\alpha_1 \ldots \alpha_N} \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \, x_i \cdot x_j, \quad \text{such that } \alpha_i \geq 0 \text{ and } \sum_i \alpha_i y_i = 0.
The solution of the desired SVM is then easily computed as w* = Σ_i y_i α_i x_i. As a warm-up result, the authors point out that using quantum evaluation of the inner products appearing in Eq. (30) can already lead to exponential speed-ups with respect to the data-vector dimension N. The quantum algorithm complexity is, however, still polynomial in the number of datapoints M, and the error dependence is now linear (as the error of the inner product estimation is linear). The authors proceed to show that full exponential improvements can be possible (with respect to both N and M), however, for the special case of least-squares SVMs. Given the background discussions of DME and QLS above, the basic idea is easy to explain. Recall that the problem of training least-squares SVMs reduces to a linear program, specifically a least-squares minimization. As we have seen previously, such a minimization reduces to equation solving, which was given by the system in Eq. (14), which we repeat here:
\begin{pmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{pmatrix} \begin{pmatrix} b \\ \alpha \end{pmatrix} = \begin{pmatrix} 0 \\ Y \end{pmatrix}. (30)
Here, 1 is an "all ones" vector, Y is the vector of labels y_i, α is the vector of the Lagrange multipliers yielding the solution, b is the offset, γ is a parameter depending on the hyperparameter C, and Ω is the matrix collecting the (mapped) inner products of the training vectors, so Ω_ij = x_i.x_j. The key technical aspects of the proposal demonstrate how the system above is realized in a manner suitable for QLS. To give a flavour of the approach, we will simply point out that the system sub-matrix Ω is proportional to the reduced density matrix of the quantum state Σ_i |x_i| |i⟩_1 |x_i⟩_2, obtained after tracing out subsystem 2. This state can, under some constraints, be efficiently realized with access to a qRAM encoding the data-points. Following this, DME enables the application of QLS where the system matrix has a block proportional to Ω, up to technical details we omit for brevity. The overall quantum algorithm generates the quantum state |ψ_out⟩ ∝ b |0⟩ + Σ_{i=1}^M α_i |i⟩, encoding the offset and the multipliers. The multipliers need not be extracted from this state by sampling. Instead, any new point can be classified by (1) generating an amplitude-encoded state of the input, and (2) estimating the inner product between this state and |ψ̃_out⟩ ∝ b |0⟩|0⟩ + Σ_{i=1}^M α_i |x_i| |i⟩ |x_i⟩, which is obtained by calling the quantum data oracle using |ψ_out⟩. This process has an overall complexity of O(κ_eff³ ε⁻³ log(MN)), where κ_eff depends on the eigenstructure of the data matrix. Whenever this term is polylogarithmic in the data size, we have a potential for exponential improvements.
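The linear system of Eq. (30), whose solution state the quantum algorithm prepares, can be assembled and solved classically on a toy dataset. This is a sketch of the least-squares SVM training and classification steps; the data, hyperparameter value, and test point are illustrative assumptions.

```python
import numpy as np

# Toy training set: two labelled clusters in the plane (illustrative data).
X = np.array([[1.0, 1.0], [1.2, 0.8], [-1.0, -1.0], [-0.8, -1.2]])
y = np.array([1.0, 1.0, -1.0, -1.0])
M = len(y)
gamma = 10.0                          # regularization hyperparameter

Omega = X @ X.T                       # Omega_ij = x_i . x_j (linear kernel)
# Assemble the (M+1) x (M+1) system of Eq. (30): [[0, 1^T], [1, Omega + I/gamma]]
A = np.zeros((M + 1, M + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = Omega + np.eye(M) / gamma
rhs = np.concatenate(([0.0], y))

sol = np.linalg.solve(A, rhs)         # quantumly: a QLS call preparing |b, alpha>
b, alpha = sol[0], sol[1:]

# Classify a new point via sign(sum_i alpha_i x_i.x + b), the step the
# quantum algorithm performs through an inner-product estimation.
x_new = np.array([0.9, 1.1])
pred = np.sign(alpha @ (X @ x_new) + b)
print(pred)
```

Note that the first row of the system enforces the constraint Σ_i α_i = 0 exactly.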
Gaussian process regression In (Zhao et al., 2015) the authors demonstrate how QLS can be used to dramatically improve Gaussian process regression (GPR), a powerful supervised learning method. GPR can be thought of as a stochastic generalization of standard regression: given a training set {x_i, y_i}, it models the latent function (which assigns labels y to data-points), assuming Gaussian noise on the labels: f(x) = y + ε, where ε encodes independent and identically distributed noise. More precisely, GPR is a process in which an initial distribution over possible latent functions is refined by taking into account the training set points, using Bayesian inference. Consequently, the output of GPR is, roughly speaking, a distribution over models f which are consistent with the observed data (the training set). While the description of such a distribution may be large, in computational terms, to predict the value of a new point x*, in GPR one needs to compute two numbers: a linear predictor (also referred to as the predictive mean, or simply the mean), and the variance of the predictor, which are specific to x*. These numbers characterize the distribution of the predicted value y* under the GPR model consistent with the training data. Further, it turns out both values can be computed using modified QLS algorithms. The fact that this final output size is independent of the dataset size, combined with QLS, provides possibilities for exponential speed-ups in terms of data size. This naturally holds provided the data is available in qRAM, as is the case in most algorithms of this section. It should be mentioned that the authors take meticulous care to list all the "hidden costs" (and to work out the intermediary algorithms) in the final tally of the computational complexity.
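The two numbers GPR must produce for a new point, the predictive mean and variance, indeed reduce to linear solves with the kernel matrix, which is the step QLS targets. A minimal sketch follows; the RBF kernel, noise level and toy data are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential covariance between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(20, 1))              # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=20)   # noisy labels
sigma2 = 0.1 ** 2                                 # assumed noise variance

K = rbf(X, X) + sigma2 * np.eye(len(y))
x_star = np.array([[0.5]])
k_star = rbf(X, x_star)[:, 0]

# Both outputs reduce to linear solves with K, the step QLS accelerates.
mean = k_star @ np.linalg.solve(K, y)
var = rbf(x_star, x_star)[0, 0] - k_star @ np.linalg.solve(K, k_star)
print(mean, var)    # mean close to sin(0.5); var small and positive
```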
Geometric and topological data analysis All the algorithms we have presented in this subsection thus far critically depend on having access to "pre-loaded" databases - the loading itself would introduce a linear dependence on the database size, whereas the inner-product, QLS and DME algorithms provide the potential for just logarithmic dependence. However, this can be circumvented in cases where the data-points in the quantum database can be efficiently computed individually. This is reminiscent of the fact that most applications of Grover's algorithm have a step in which the Grover oracle is efficiently computed. In ML applications, this can occur if the classical algorithm requires, as a computational step, a combinatorial exploration of the (comparatively small) dataset. Then, the quantum algorithm can generate the combinatorially larger space in quantum parallel, thereby efficiently computing the effective quantum database. The first example where this was achieved was presented in the context of topological and geometric data analysis.
These techniques are very promising in the context of ML, as topological features of data do not depend on the metric of choice, and thus capture the truly robust features of the data. The notion of topological features (in the ML world of discrete data points) is given by those properties which persist when the data is observed at different spatial resolutions. Such persistent features are thus robust and less likely to be artefacts of noise, or of the choice of parameters, and are mathematically formalized through so-called persistent homology. A particular family of features of interest are the numbers of connected components, holes, and voids (or cavities). These numbers, which are defined for simplicial complexes (roughly, closed sets of simplices), are called Betti numbers. To extract such features from data, one must thus construct nested families of simplicial complexes from the data, and compute the corresponding features captured by the Betti numbers. However, there are combinatorially many simplices one should consider and analyze, and one can roughly think of each possible simplex as a data-point which needs further analysis. Nonetheless, they are efficiently generated from a small set - essentially the collection of the pair-wise distances between datapoints. The authors show how to generate quantum states which encode the simplices in logarithmically few qubits, and further show that from this representation the Betti numbers can be efficiently estimated. Iterating this at various resolutions allows the identification of persistent features. As usual, full exponential improvements happen under some assumptions on the data, and here they are manifest in the capacity to efficiently construct the simplicial states - in particular, having the total number of simplices in the complex be exponentially large would suffice, although it is not clear when this is the case, see (Aaronson, 2015).
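For intuition, the Betti numbers of a small simplicial complex can be computed from the ranks of boundary matrices, which is the linear-algebraic core that the quantum algorithm estimates at scale. The hollow/filled triangle below is an illustrative toy example.

```python
import numpy as np

# "Hollow" triangle: vertices {0,1,2} and the three edges, no filled face.
# Boundary matrix d1 maps edges to vertices: the column of edge (a, b)
# carries -1 at row a and +1 at row b.
edges = [(0, 1), (1, 2), (0, 2)]
d1 = np.zeros((3, len(edges)))
for col, (a, b) in enumerate(edges):
    d1[a, col], d1[b, col] = -1.0, 1.0

r1 = np.linalg.matrix_rank(d1)
betti0 = 3 - r1               # dim C0 - rank d1: connected components
betti1 = len(edges) - r1 - 0  # dim C1 - rank d1 - rank d2 (no 2-simplices)
print(betti0, betti1)         # 1 component, 1 hole

# Filling in the 2-simplex (0,1,2) kills the hole: its boundary is
# e01 + e12 - e02 in the edge ordering above.
d2 = np.array([[1.0], [1.0], [-1.0]])
betti1_filled = len(edges) - r1 - np.linalg.matrix_rank(d2)
print(betti1_filled)          # 0 holes
```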
This proposal provides evidence that quantum ML methods based on amplitude encoding may, at least in some cases, yield exponential speed-ups even if the data is not pre-stored in a qRAM or an analogous system. As mentioned, a large component of modern approaches to quantum-enhanced ML relies on quantum linear algebra techniques, and any progress in this area may lead to new quantum ML algorithms. Promising recent examples of this were given in terms of algorithms for quantum gradient descent (Rebentrost et al., 2016b; Kerenidis and Prakash, 2017), which could, e.g., lead to novel quantum methods for training neural networks.
VII. QUANTUM LEARNING AGENTS, AND ELEMENTS OF QUANTUM AI
The topics discussed thus far in this review, with few exceptions, deal with the relationship between physics, mostly QIP, and traditional ML techniques which allow us to better understand data, or the process which generates it. In this section, we go one step beyond data analysis and optimization techniques and address the relationship between QIP and more general learning scenarios, or even between QIP and AI. As mentioned, in more general learning or AI discussions, we typically talk about agents interacting with their environments, which may be, or more often fail to be, intelligent. In our view, by far the most important aspect of any intelligent agent is its capacity to learn from its interactions with its environment. However, general intelligent agents learn in environments which are complex and changeable. Further, the environments are susceptible to being changed by the agent itself, which is the crux of, e.g., learning by experiments. All this delineates general learning frameworks, which begin with RL, from the more restricted settings of data-driven ML. In this section, we will consider physics-oriented approaches to learning via interaction, specifically the PS model, and then focus on quantum enhancements in the context of RL 118 . Following this, we will discuss an approach for considering the most general learning scenarios, where the agent, the environment and their interaction are all treated quantum-mechanically: this constitutes a quantum generalization of the broad AE framework underlying modern AI. We will finish by briefly discussing other results from QIP which do not directly deal with learning, but which may still play a role in the future of QAI.
A. Quantum learning via interaction
Executive summary: The first proposal which addressed the specification of learning agents, designed with the possibility of quantum processing of episodic memory in mind, was the model of Projective Simulation (PS). The results on quantum improvements of agents which learn by interacting with classical environments have mostly been given within this framework. The PS agent deliberates by effectively projecting itself into conceivable situations, using its memory, which organizes its episodic experiences in a stochastic network. Such an agent can solve basic RL problems, meta-learn, and solve problems with aspects of generalization. The deliberation is a stochastic diffusion process, allowing for a few routes to quantization. Using quantum random walks, quadratic speed-ups can be obtained.
The applications of QIP to reinforcement and other interactive learning problems have been comparatively less studied than quantum enhancements for supervised and unsupervised problems. One of the first proposals which provides a coherent view on learning agents from a physics perspective was that of Projective Simulation (abbrv. PS) (Briegel and De las Cuevas, 2012). We first provide a detailed description of the PS model, and review the few other works related to this topic at the end of the section. PS is a flexible framework for the design of learning agents, motivated both from psychology and physics, and influenced by modern views on robotics. One of the principal reasons why we focus on this model is that it provides a natural route to quantization, which will be discussed presently. However, already the classical features of the model reveal an underlying physical perspective which may be of interest for the reader, and which we briefly expose first. The PS viewpoint on (quantum) agents is conceived around a few basic principles. First, in the PS view, the agent is a physical, or rather, an embodied entity, existing relative to its environment, rather than a mathematical abstraction 119 . Note, this does not prohibit computer programs from being agents: while the print-out of the code is not an agent, the executed instantiation of the code, the running program, so to speak, has its own well-defined virtual interfaces, which delineate it from, and allow interaction with, other programs in its virtual world - in this sense, that program too is embodied. Second, the interfaces of the agent are given by its sensors, collecting the environmental input, and the actuators, enabling the agent to act on the environment. Third, the learning is learning from experience, and the interfaces of the agent constrain the elementary experiences of the agent to be collections from the sets of percepts S = {s_i}_i which the agent can perceive and actions A = {a_i}_i .
118 It is not fully general - for instance, learning in real environments always involves supervised and other learning paradigms to control the size of the exploration space, but also various other techniques which occur when we try to model settings in a continuous, or otherwise not turn-based, fashion.

At this point we remark that the basic model assumes discretized time, and sensory
space, which is consistent with actual realizations, although this could be generalized. Fourth, a (good) learning agent's behaviour - that is, the choice of actions, given certain percepts - is based on its cumulative experience, accumulated in the agent's memory, which is structured. This brings us to the central concept of the PS framework, which is the memory of the agent: the episodic and compositional memory (ECM). The ECM is a structured network of units of experience which are called clips or episodes. A clip, denoted c_i, can represent 120 an individual percept or action, so c_i ∈ S ∪ A - and indeed there is no other external type appearing in the PS framework. However, experiences may be more complex (such as an autobiographical episodic memory, similar to short video-clips, where we remember a temporally extended sequence of actions and percepts that we experienced). This brings us to the following recursive definition: a clip is either a percept, an action, or a structure over clips.
FIG. 12 a) The agent learns to associate symbols to one of the two movements. b) the internal PS network requires only action and percept clips, arranged in two layers, with connections only from actions to percepts. The "smiling" edges are rewarded. Adapted from (Briegel and De las Cuevas, 2012).
Typical examples of structured clips are percept-action sequences (s_1, a_1, . . . , s_k, a_k) describing what happened, i.e. a k-length history of the interaction between the agent and environment. Another example are simple sets of percepts (s_1 or s_2 . . .), which will later be used to generalize knowledge. The overall ECM is a network of clips (that is, a labeled directed graph, where the vertices are the clips), whose edges organize the agent's previous experiences; it has a functional purpose explained momentarily. Fifth, a learning agent must act: that is, there has to be a defined deliberation mechanism by which, given the current percept and the state of memory, i.e. the current ECM network, the agent probabilistically decides on (or rather "falls into") the next action and performs it. Finally, sixth, a learning agent must learn; that is, the ECM network must change under experiences, and this occurs in two modes: by (1) changing the weights of the edges, and (2) changing the topology of the network, through the addition or deletion of clips. The above six principles describe the basic blueprint behind PS agents. The construction of a particular agent requires us to further specify certain components, which we will exemplify using the simplest example: a reinforcement learning PS agent, capable of solving the so-called invasion game. In the invasion game, the agent (Fig. 12) is facing an attacker, who must be blocked by appropriately moving to the left or right. These two options form the actions of the agent. The attacker presents a symbol, say a left- or right-pointing arrow, to signal what its next move will be. Initially, the percepts have no meaning for the agent, and indeed the attacker can alter the meaning in time. The basic scenario here is, in RL terms, a contextual two-armed bandit problem (Langford and Zhang, 2008), where the agent gets rewarded when it correctly couples the two percepts to the two actions.
The basic PS agent that can solve this is specified as follows. The action and percept spaces are the two moves and the two signals, so A = {−, +} (left and right move) and S = {←, →}, respectively. The clip set is just the union of the two sets. The connections are directed edges from percepts to actions, weighted with real values, called h-values, h_ij ≥ 1, which form the h-matrix. The deliberation is realized by a random walk in the memory space, governed proportionally to the h-matrix: that is, the probability of a transition from percept s to action a is given by p(a|s) = h_{s,a} / Σ_{a'} h_{s,a'}. In other words, the column-wise normalized h-matrix specifies the stochastic transition matrix of the PS model, in the Markov chain sense. Finally, the learning is manifest in the tuning of the h-values via an update rule, which in its most basic form is given by:
h^{t+1}(c_j, c_i) = h^t(c_j, c_i) + \delta_{c_j, c_i} \lambda, (31)
where t, t + 1 denote consecutive time steps, λ denotes the reward received in the last step, and δ_{c_j,c_i} is 1 if and only if the c_i to c_j transition occurred in the previous step. Simply stated, used edges get rewarded. The h-value h^t(c_i, c_j) is associated to the edge connecting clips c_i, c_j; when the time step t is clear from context we will simply write h_ij. One can easily see that the above rule constitutes a simple RL mechanism, and that it will indeed over time lead to a winning strategy in the invasion game: since only the correctly paired transitions get rewards, they are taken more and more frequently. However, the h-values in this simple process diverge, which also makes re-learning, in the eventuality that the rules of the game change, more difficult with time. To manage this, one typically introduces a decay, or dissipation, parameter γ, leading to the rule:
h^{t+1}(c_j, c_i) = h^t(c_j, c_i) - \gamma (h^t(c_j, c_i) - 1) + \delta_{c_j, c_i} \lambda. (32)
The dissipation is applied at each time step. Note that the dissipating term diminishes the value of h^t(c_j, c_i) by an amount proportional to the deviation of this value from 1, which is the initial value. The above rule leads to the unit value h = 1 when there are no rewards, and to a limiting upper value of 1 + λ/γ when every move is rewarded. This limits the maximal efficiency to 1 − (2 + λ/γ)^{−1}, but, as a trade-off, leads to much faster re-learning. This is illustrated in Fig. 13. The update rules can get a bit more involved in the setting of delayed rewards. For instance, in a maze, or in so-called grid-world settings, illustrated in Fig. 14, it is a sequence of actions that leads to a reward. In other words, the final reward must "propagate" to all relevant percept-action edges which were involved in the winning move sequence.
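Before moving on to delayed rewards, the basic two-layer agent and the damped update of Eq. (32) can be sketched directly. The parameter values and episode count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
# Two percepts (the attacker's symbols) and two actions (left/right move);
# h[s, a] is the h-matrix of the two-layered ECM, initialized to 1.
h = np.ones((2, 2))
lam, gamma = 1.0, 0.01     # reward and dissipation parameters

def step(meaning):
    """One round of the invasion game; meaning[s] is the rewarded action."""
    global h
    s = int(rng.integers(2))           # attacker shows a random symbol
    p = h[s] / h[s].sum()              # deliberation: p(a|s) from the h-matrix
    a = int(rng.choice(2, p=p))
    reward = lam if a == meaning[s] else 0.0
    h = h - gamma * (h - 1.0)          # dissipation towards 1, Eq. (32)
    h[s, a] += reward                  # the used edge collects the reward
    return reward

wins = [step({0: 0, 1: 1}) for _ in range(2000)]
late_success = float(np.mean(wins[-200:]))
print(late_success)   # approaches the efficiency bound 1 - (2 + lam/gamma)^-1
```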
In the basic PS model, this is done via a so-called glow mechanism: to each edge in the ECM, a glow value g_ij is assigned in addition to the h_ij-value. It is set to 1 whenever the edge is used, and decays at the rate η ∈ [0, 1], that is, g^t_ij = (1 − η) g^{t−1}_ij. The h-value update rule is amended to reward all "glowing" edges, in proportion to their glow value, whenever a reward is issued:
h^{t+1}(c_j, c_i) = h^t(c_j, c_i) - \gamma (h^t(c_j, c_i) - 1) + g^t(c_j, c_i) \lambda. (33)
In other words, all the edges which contributed to the final reward get a fraction of it, in proportion to how recently they were used. This parallels the intuition that the more recent actions, relative to the rewarded move, played a larger role in getting rewarded. The expression in Eq. (33) has functional similarities to the Q-learning action-value update rule in Eq. (21). However, the learning dynamics is different, and the expressions are conceptually different - Q-learning updates estimate bounded Q-values, whereas PS is not a state-value estimation method, but rather a purely reward-driven system. The PS framework allows other constructions as well. In (Briegel and De las Cuevas, 2012), the authors also introduced emoticons - edge-specific flags which capture aspects of intuition. These can be used to speed up re-learning via a reflection mechanism, where a random walk can be iterated multiple times until a desired - flagged - set of actions is hit; see (Briegel and De las Cuevas, 2012) for more detail. Further in this direction, the deliberation of the agent can be based not on a hitting process - where the agent performs the first action it hits - but rather on a mixing process. In the latter case, the ECM is a collection of Markov chains, and the correct action is sampled from the stationary distribution over the ECM. This model is referred to as the reflective PS (rPS) model, see Fig. 15. Common to all models, however, is that the deliberation process is governed by a stochastic walk, specified by the ECM.
FIG. 14 The environment is essentially a grid, where each site has an individual percept; the moves dictate the movements of the agent (say up, down, left, right), and certain sites are blocked off as walls. The agent explores this world looking for the rewarded site. When the exit is found, a reward is given and the agent is reset to the same initial position. Adapted from (Melnikov et al., 2014).
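The effect of the glow mechanism of Eq. (33) can be illustrated on a minimal deterministic episode, where three edges are used in sequence and only the last step is rewarded. The parameter values are illustrative.

```python
import numpy as np

# Three percept-action edges are used in temporal order; only the final
# step yields a reward. Glow values g decay at rate eta, so the reward is
# shared among all used edges, weighted by how recently each was used.
h = np.ones(3)
g = np.zeros(3)
eta, gamma, lam = 0.2, 0.0, 1.0     # glow decay; dissipation off for clarity

for t, edge in enumerate([0, 1, 2]):
    g = (1.0 - eta) * g             # all glow values decay each time step
    g[edge] = 1.0                   # the edge used now starts glowing fully
    reward = lam if t == 2 else 0.0
    h = h - gamma * (h - 1.0) + g * reward   # Eq. (33)

print(h)   # [1.64, 1.8, 2.0]: more recent edges receive a larger share
```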
Regarding performance, the basic PS structure, with a two-layered network encoding percepts and actions - which matches standard tabular RL approaches - was extensively analysed and benchmarked against other models (Melnikov et al., 2014; Mautner et al., 2015). However, the questions that are emphasized in the PS literature diverge from questions of performance in RL tasks, in two directions. First, the authors are interested in the capacities of the PS model beyond textbook RL. For instance, in (Mautner et al., 2015) it was shown that the action-composition aspects of the ECM allow the agent to perform better in some benchmarking scenarios, which had natural applications, for example, in the context of protecting MBQC from unitary noise, and in the context of finding novel quantum experiments (Melnikov et al., 2017), elaborated on in section IV.C. Further, by utilizing the capacity of the ECM to encode larger and multiple networks, we can also address problems which require generalization - inferring correct behaviour from percept similarity - but also design agents which autonomously optimize their own meta-parameters, such as γ and η in the PS model. That is, the agents can meta-learn (Makmal et al., 2016). These problems go beyond the basic RL framework, and the PS framework is flexible enough to also allow the incorporation of other learning models - e.g. neural networks could be used to perform dimensionality reduction (which could allow for broader generalization capabilities), or even to directly optimize the ECM itself. The PS model has been combined with such additional learning machinery in an application to robotics and haptic skill learning (Hangl et al., 2016). However, there is an advantage in keeping the underlying PS dynamics homogeneous, that is, essentially solely based on random walks over the PS network, in that it offers a few natural routes to quantization. This is the second direction of foundational research in PS.
For instance, in (Briegel and De las Cuevas, 2012) the authors expressed the entire classical PS deliberation dynamics as an incoherent part of a Liouvillian dynamics (a master equation for the quantum density operator), which also included a coherent part (Hamiltonian-driven unitary dynamics). This approach may yield advantages both in deliberation time, and it also expands the space of internal policies the agent can realize. Another perspective on the quantization of the PS model was developed in the framework of discrete-time quantum walks. In (Paparo et al., 2014), the authors exploited the paradigm of Szegedy-style quantum walks to quadratically improve the deliberation times of rPS agents. The Szegedy (Szegedy, 2004) approach to random walks can be used to specify a unitary random walk operator U_P for a given transition matrix P 121 , whose spectral properties are intimately related to those of P itself. We refer the reader to the original references for the exact specification of U_P, and just point out that U_P can be efficiently constructed via a simple circuit depending on P, or given black-box access to the entries of P. Assume P corresponds to an irreducible and aperiodic (guaranteeing a unique stationary distribution), and also time-reversible (meaning it satisfies detailed balance conditions) Markov chain. Let π = (π_i)_i be the unique stationary distribution of P, δ the spectral gap of P 122 , and |π⟩ = Σ_i √π_i |i⟩ the coherent encoding of the distribution π. Then we have that a) U_P |π⟩ = |π⟩, and b) the eigenvalues {λ_i} of P and the eigenphases θ_i of U_P are related by λ_i = cos(θ_i) 123 . This is important as the spectral properties, specifically the spectral gap δ, more-or-less tightly fix the mixing time - that is, the number of applications of P needed to obtain the stationary distribution - to Õ(1/δ), by the famous Aldous bounds (Aldous, 1982). This quantity will later bound the complexity of classical agents.
In contrast, for U_P, we have that its non-zero eigenphases θ are not smaller than Ω(√δ). This quadratic relationship between the spectral gap in the classical case and the eigenphase gap in the quantum case is at the crux of all the speed-ups. In (Magniez et al., 2011), it was shown how the above properties of U_P can be used to construct a quantum operator R(π) ≈ 1 − 2|π⟩⟨π|, which exponentially efficiently approximates the reflection over the encoding of the stationary distribution |π⟩. The basic idea in the construction of R(π) is to apply phase estimation onto U_P with precision high enough to detect non-zero phases, impose a global phase on all states with a non-zero detected phase, and undo the process. Due to the quadratic relationship between the spectral gap and the smallest eigenphase, this can be achieved in time Õ(1/√δ). That is, we can reflect over the (coherent encoding of the) stationary distribution, whereas obtaining it by classical mixing takes Õ(1/δ) applications of the classical walk operator. In (Paparo et al., 2014) this was used to obtain quadratically accelerated deliberation times for the rPS agent. In the rPS model, the ECM network has a special structure, enforced by the update rules. In particular, for each percept s we can consider the subnetwork ECM_s, which collects all the clips one can reach starting from s. By construction, it contains all the action clips, but also other, intermediary clips. The corresponding Markov chain P_s, governing the dynamics of ECM_s, is, by construction, irreducible, aperiodic and time-reversible. In the deliberation process, given percept s, the agent mixes the corresponding Markov chain P_s and outputs the reached clip, provided it is an action clip, repeating the process otherwise.
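The relation λ = cos(θ) and the resulting quadratic gap amplification can be checked numerically on a toy chain; the weight matrix below is illustrative, not taken from any of the cited works:

```python
import numpy as np

# Toy reversible Markov chain: a random walk on a weighted graph of four
# "clips" (the symmetric weight matrix W is an illustrative assumption).
W = np.array([[1.0, 2.0, 0.5, 0.0],
              [2.0, 1.0, 1.0, 0.5],
              [0.5, 1.0, 1.0, 2.0],
              [0.0, 0.5, 2.0, 1.0]])
P = W / W.sum(axis=1, keepdims=True)      # row-stochastic transition matrix

# Stationary distribution pi: the left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

# Spectral gap delta = 1 - (second largest eigenvalue magnitude of P).
lams = np.sort(np.abs(np.real(np.linalg.eigvals(P))))[::-1]
delta = 1.0 - lams[1]

# Szegedy's walk maps each eigenvalue lam to an eigenphase arccos(lam), so
# the smallest non-zero phase is at least sqrt(2*delta): quadratically
# larger than the classical gap delta for small delta.
theta = np.arccos(lams[1])
print(delta, theta, np.sqrt(2 * delta))
```

Symmetric weights guarantee detailed balance, so the spectrum of P is real and the cos relation applies directly.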
Computationally speaking, we are facing the problem of outputting a single sample, a clip c, drawn according to the conditional probability distribution p(c) = π_c/ε if c ∈ A, and p(c) = 0 otherwise. Here ε = Σ_{c∈A} π_c is the total weight of all action clips in π. The classical computational complexity of this task is given by the product of Õ(1/δ) - the mixing cost - and O(1/ε) - the average number of repetitions needed to actually hit an action clip. Using the Szegedy quantum walk techniques, based on constructing the reflector R(π), followed by an amplitude amplification algorithm to "project" onto the action space, we obtain a quadratically better complexity of Õ(1/√δ) × O(1/√ε). In full detail, this is achievable if we can generate one copy of the coherent encoding of the stationary distribution efficiently at each step, and in the context of the rPS this can be done in many cases, as was shown in (Paparo et al., 2014) and further generalized in subsequent work. The proposal in (Paparo et al., 2014) was the first example of a provable quantum speed-up in the context of RL, and was followed up by a proposal for an experimental demonstration, which identified a possibility of a modular implementation based on coherent controlization - the process of adding control to almost unknown unitaries. It is worthwhile to note that further progress in algorithms for quantum walks and quantum Markov chain theory has the potential to lead to quantum improvements of the PS model. This to an extent mirrors the situation in quantum machine learning, where new algorithms for quantum linear algebra may lead to quantum speed-ups of other supervised and unsupervised algorithms. Computational speed-ups of deliberation processes in learning scenarios are certainly important, but in the strict RL paradigm such internal processing does not matter, and the learning efficiency depends only on the number of interaction steps needed to achieve high-quality performance. Since the rPS and its quantum analog, the so-called quantum rPS agent, are by definition behaviorally equivalent (i.e. they perform the same action with the same probability, given identical histories), their learning efficiency is the same. 
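Writing ε for the total weight of the action clips as above, the classical O(1/ε) factor is just the expected number of rejection-sampling rounds; a minimal simulation with a made-up stationary distribution (all weights and the choice of which clips are actions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stationary distribution over 8 clips; the last two are action
# clips (all weights and labels are illustrative, not taken from the paper).
pi = np.array([0.25, 0.20, 0.20, 0.15, 0.10, 0.06, 0.03, 0.01])
is_action = np.array([False] * 6 + [True] * 2)
eps = pi[is_action].sum()                 # total weight of action clips, 0.04

# Classical deliberation: mix to pi, sample a clip, and repeat until an
# action clip is hit; the number of rounds is geometric with mean 1/eps.
def rounds_until_action():
    n = 0
    while True:
        n += 1
        if is_action[rng.choice(8, p=pi)]:
            return n

mean_rounds = np.mean([rounds_until_action() for _ in range(5000)])
print(eps, 1 / eps, mean_rounds)          # mean_rounds concentrates near 25
```

Each simulated round stands for a full Õ(1/δ) mixing phase, which is why the total classical cost multiplies the two factors.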
This behavioral equivalence, and thus the restriction to computational rather than sample improvements, also holds in the context of all the supervised learning algorithms we discussed in previous sections, where the speed-ups were in the context of computational complexity. In contrast, quantum CLT learning results did demonstrate improvements in sample complexity, as discussed in section VI.A. While formally distinct, computational and sample complexity can become more closely related the moment the learning settings are made more realistic. For instance, if the training of a given SVM requires the solution of a BQP-complete problem, classical machines will most likely only be able to run classification instances which are uselessly small. In contrast, a quantum computer could run such a quantum-enhanced learner. The same observation motivates most of the research into quantum annealers for ML, see section VI.C.1.
In (Paparo et al., 2014), similar ideas were more precisely formalized in the context of active reinforcement learning, where the interaction is occurring relative to some external real time. This is critical, for instance, in settings where the environment changes relative to this real time, which is always the case in reality. If the deliberation time is slow relative to this change, the agent perceives a "blurred", time-averaged environment where one cannot learn. In contrast, a faster agent will have time to learn before the environment changes -and this makes a qualitative difference between the two agents. In the next section we will show how actual learning efficiency, in the rigid metronomic turn-based setting can also be improved, under stronger assumptions.
As mentioned, works which directly apply quantum techniques to RL, or other interactive modes of learning, are comparatively few in number, despite the ever-growing importance of RL. These results still constitute quite isolated approaches, and we briefly review two recent papers. In (Crawford et al., 2016) the authors design an RL algorithm based on a deep Boltzmann machine, and combine this with quantum annealing methods for training such machines to achieve a possible speed-up. This work combines multiple interesting ideas, and may be particularly relevant in the light of recent advances in quantum annealing architectures. In (Lamata, 2017), the authors demonstrated certain building blocks of larger quantum RL agents in systems of superconducting qubits.
B. Quantum agent-environment paradigm for reinforcement learning
Executive summary: To characterize the ultimate scope and limits of learning agents in quantum environments, one must first establish a framework for quantum agents, quantum environments and their interaction: a quantum AE paradigm. Such a paradigm should maintain the correct classical limit, and preserve the critical conceptual components -in particular the history of the agent-environment interaction, which is non-trivial in the quantum case. With such a paradigm in place the potential of quantum enhancements of classical agents is explored, and it is shown that quantum effects, under certain assumptions, can help near-generically improve the learning efficiency of agents. A by-product of the quantum AE paradigm is a classification of learning settings, which is different and complementary to the classification stemming from a supervised learning perspective.
The topics of learning agents acting in quantum environments, and the more general question of how agent-environment interactions should be defined, have to this day only been broached in a few works by the authors of this review and other co-authors. As these topics may form the general principles underlying the upcoming field of quantum AI, we take the liberty to present them in substantial detail.
Motivated by the pragmatic question of the potential of quantum enhancements in general learning settings, it was suggested that the first step should be the identification of a quantum generalization of the AE paradigm, which underlies both RL and AI. This is comparatively easy to do in finite-sized, discrete-space settings.
a. Quantum agent-environment paradigm The (abstract) AE paradigm, roughly illustrated in Fig. 6, can be understood as a two-party communication scenario, the quantum descriptions of which are well understood in QIP. In particular, the two players - here the agent and the environment - are modelled as (infinite) sequences of unitary maps {E^i_A}_i and {E^i_E}_i, respectively. They both have private memory registers R_A and R_E, with matching Hilbert spaces H_A and H_E, and to enable a precise specification of how they communicate (and to cleanly delineate the two players), the register of the communication channel, R_C, is introduced; it is the only register accessible to both players - that is, the maps of the agent act on H_A ⊗ H_C and those of the environment on H_E ⊗ H_C. The two players then interact by sequentially applying their respective maps in turn (see Fig. 16). To further tailor this fully general setting to the purposes of the AE paradigm, the percept and action sets are promoted to sets of orthonormal vectors {|s⟩ | s ∈ S} and {|a⟩ | a ∈ A}, which are also mutually orthogonal. These are referred to as classical states. The Hilbert space of the channel is spanned by these two sets, so H_C = span{|x⟩ | x ∈ S ∪ A}. This also captures the notion that the agent/environment only performs one action, or issues one percept, per turn. Without loss of generality, we can also assume that the state-spaces of the agent's and environment's registers are spanned by sequences of percepts and actions, and that the reward status is encoded in the percept space.
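In the classical limit, this turn-based structure reduces to an alternation of maps acting on a shared communication register plus private memories; the following sketch uses a made-up toy environment (all class names and the maze-like dynamics are illustrative assumptions, not from the paper):

```python
import random

# Classical limit of the AE paradigm: agent and environment alternately
# write symbols to a shared communication register, each carrying a
# private memory register of its own.

class Environment:
    def __init__(self):
        self.state = 0                    # private register R_E

    def step(self, action):
        self.state = (self.state + action) % 4
        reward = 1 if self.state == 3 else 0
        return (self.state, reward)       # percept placed on R_C

class Agent:
    def __init__(self):
        self.memory = []                  # private register R_A

    def act(self, percept):
        self.memory.append(percept)
        return random.choice([0, 1])      # action placed on R_C

env, agent = Environment(), Agent()
history = []                              # (a_1, s_1, a_2, s_2, ...)
percept = (0, 0)
for _ in range(10):
    action = agent.act(percept)
    percept = env.step(action)
    history.extend([action, percept])
print(history)
```

The `history` list is the classical analogue of the interaction history discussed below: every figure of merit is a function of exactly this record.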
FIG. 16 RL: Tested agent-environment interaction suitable for RL. In general, each map of the tester U T k acts on a fresh subsystem of the register RT , which is not under the control of the agent, nor of the environment. The crossed wires represent multiple systems. DL: The simpler setting of standard quantum machine learning, where the environmental map is without internal memory, presented in the same framework.
It should be mentioned that the quantum AE paradigm also includes all other quantum ML settings as special cases. For instance, most quantum-enhanced ML algorithms assume access to a quantum database, a quantum memory, and this setting is illustrated in Fig. 16, part DL. Since the quantum database is without loss of generality a unitary map, it requires no additional memory of its own, nor does it change over interaction steps. At this point, the classical AE paradigm can be recovered when the maps of the agent and environment are restricted to "classical maps", which, roughly speaking, do not generate superpositions of classical states, nor entanglement, when applied to classical states. Further, we now obtain a natural classification of generalized AE settings: CC, CQ, QC and QQ, depending on whether the agent or the environment are classical (C) or quantum (Q). We will come back to this classification in section VII.B.1. The performance of a learning agent, beyond internal processing time, is a function of the history of the interaction, which is a distribution over the percept-action sequences (of a given finite length) which can occur between a given agent and environment. Any genuine learning-related figure of merit, for instance the probability of a reward at a given time-step (efficiency), or the number of steps needed before the efficiency is above a threshold (learning speed), is a function of the interaction history. In the classical case, the history can simply be read out by a classical-basis measurement of the register R_C, as the local state of the communication register is diagonal in this basis, and not entangled to the other systems - meaning the measurement does not perturb, i.e. commutes with, the interaction. In the quantum case this no longer holds in general. 
To recover a robust notion of a history (needed for the gauging of learning), a more detailed description of measurement is used, which captures weaker measurements as well: an additional system, a tester, is added, which interchangeably couples to the R_C register, and can copy full or partial information to a separate register. Formally, this is a sequence of controlled maps, relative to the classical basis, controlled by the states on H_C and acting on a separate register, as illustrated in Fig. 16. The tester can copy the full information, when the maps are generalized controlled-NOT gates - in which case it is called a classical tester - or even do nothing, in which case the interaction is untested. The restriction of the tester to maps which are controlled with respect to the classical basis guarantees that a classical interaction will never be perturbed by its presence. With this basic framework in place, the authors show a couple of basic theorems characterizing when any quantum separations in learning-related figures of merit can be expected at all. The notion of quantum separation here is the same as in the context of oracular computation, or quantum PAC theory: a separation means no classical agent could achieve the same performance. The authors prove the basic expected theorems: quantum improvements (separations) require a genuine quantum interaction, and, further, full classical testing prohibits this. Further, they show that for any specification of a classical environment, there exists a "quantum implementation" - a sequence of maps {E^i_E}_i - which is consistent with the classical specification, and prohibits any quantum improvements.
FIG. 17 The interactions for the classical agent (A) and the quantum-enhanced classical agent (A_q). In Steps 1 and 2, A_q uses quantum access to an oracularized environment E^oracle_q to obtain a rewarding sequence h_r. In Step 3, A_q simulates the agent A, and 'trains' the simulation to produce the rewarding sequence. In Step 4, A_q uses the pre-trained agent for the remainder of the now classically tested interaction, with the classical environment E. Adapted from .
b. Provable quantum improvements in RL However, if the above no-go scenarios are relaxed, much can be achieved. The authors provide a structure of task environments (roughly speaking, maze-type problems), a specification of quantum-accessible realizations of these environments, and a sporadic tester (which leaves a part of the interaction untested), for which classical learning agents can often be quantum-enhanced. The idea has a few steps, which we only very briefly sketch out. As a first step, the environments considered are deterministic and strictly episodic - this means the task is reset after some M steps. Since the environments are deterministic, whether or not rewards are given depends only on the sequence of actions, as the interlacing percepts are uniquely specified. Since everything is reset after M steps, there are no correlations in the memory of the environment between the blocks, i.e. episodes. This allows for the specification of a quantum version of the same environment, which can be accessed in superposition and which takes blocks of actions and returns the same sequence plus a reward status - moreover, it can be realized such that it is self-inverse. With access to such an object, a quantum agent can actually Grover-search for an example of a winning sequence. To convert this exploration advantage into a learning advantage, the set of agents and environments is restricted to pairs which are "luck-favoring", i.e. those where better performance in the past implies improved performance in the future, relative to a desired figure of merit. Under these conditions, any learning agent which is luck-favoring relative to a given environment can be quantum-enhanced by first using quantum access to find, quadratically faster, the first winning instance, which is then used to "pre-train" the agent in question. The overall quantum-enhanced agent provably outperforms the basic classical agent. The construction is illustrated in Fig. 17. 
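The Grover-search step can be sketched by tracking just two amplitudes (the winning sequence's and that of the rest), which is exact when there is a single rewarding sequence; the episode length n and the single-winner setup are illustrative assumptions:

```python
import numpy as np

# Grover search for the single rewarding sequence among N = 2^n episodes
# of n binary actions: a toy stand-in for the oracularized deterministic,
# strictly episodic environment described in the text.
n = 10
N = 2 ** n

# By symmetry it suffices to track two amplitudes: the winning sequence's,
# and that shared by each of the N-1 non-winning sequences.
a_win = 1 / np.sqrt(N)
a_rest = 1 / np.sqrt(N)

steps = int(round((np.pi / 4) * np.sqrt(N)))  # ~sqrt(N) oracle calls
for _ in range(steps):
    a_win = -a_win                            # oracle flips the winner's phase
    mean = (a_win + (N - 1) * a_rest) / N
    a_win, a_rest = 2 * mean - a_win, 2 * mean - a_rest  # inversion about mean

p_success = a_win ** 2
print(steps, p_success)   # 25 oracle calls vs. ~N/2 = 512 classical trials
```

One iteration here is one call to the self-inverse episodic oracle, so the quadratic saving translates directly into fewer (untested) interaction episodes before the first winning instance is found.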
These results can be generalized to a broader class of environments. Although these results form the first examples of quantum improvements in learning figures of merit in RL contexts, the assumption of having access to "quantized" environments of the type used - in essence, the amount of quantum control the agent is assumed to have - is quite restrictive from a practical perspective. The questions of minimal requirements, and of the scope of improvements possible, are still unresolved.
1. AE-based classification of quantum ML
The AE paradigm is typically encountered in the contexts of RL, robotics, and more general AI settings, while it is less common in ML communities. Nonetheless, conventional ML scenarios can naturally be embedded in this paradigm, since it is, ultimately, mostly unrestrictive. For instance, supervised learning can be thought of as an interaction with an environment which is, for a certain number of steps, an effective database (or the underlying process generating the data), providing training examples. After a certain number of steps, the environment starts providing unlabeled data-points, and the agent responds with the labels. If we further assume that the environment additionally responds with the correct label to whatever the agent sent, whenever the data-point/percept was from the training set, we can straightforwardly read out the empirical risk (training set error) from the history. Since the quantization of the AE paradigm naturally leads to four settings - CC, CQ, QC and QQ - depending on whether the agent, or environment, or both are fully quantum systems, we can classify all of the results in quantum ML into one of the four groups. Such a coarse-grained division places standard ML in CC, results on using ML to control quantum systems in CQ, quantum speed-ups in ML algorithms (without a quantum database, as is the case in annealing approaches) in QC, and quantum ML/RL where the environments, databases or oracles are quantum-accessible in QQ. This classification is closely related to the classification introduced in (Aïmeur et al., 2006), which uses the L^context_goal notation, where "context" may denote whether we are dealing with classical or quantum data and/or learners, and "goal" specifies the learning task (see section V.A.1 for more details). 
The QAE-based separation is not, however, identical to it: for instance, classical learning tasks may require quantum or classical access - this distinguishes the examples of quantum speed-ups in internal processing in ML which require a quantum database from those which do not. In operative terms, this separation makes sense, as the database must be pre-filled at some point, and if this is included we obtain a QC setting (which now may fail to be efficient in terms of communication complexity). On the other hand, the L^context_goal systematics does a nice job separating classical ML from quantum generalizations of the same, discussed in section V. This mismatch also illustrates the difficulties one encounters if a sufficiently coarse-grained classification of the quantum ML field is required. The classification criteria of this field, and also aspects of QAI in this review, have been inspired by both the AE-induced criteria (perhaps natural from a physics perspective) and the L^context_goal classification (which is more objective-driven, and natural from a computer science perspective).
C. Towards quantum artificial intelligence
Executive summary: Can quantum computers help us build (quantum) artificial intelligence?
The answer to this question cannot be simpler than the answer to the deep, and largely open, question of what intelligence is in the first place. Nonetheless, at least for very pragmatic readings of AI, early research directions into what QAI may be in the future can be identified. We have seen that quantum machine learning enhancements and generalizations cover data analysis and pattern matching aspects. Quantum reinforcement learning demonstrates how interactive learning can be quantum-enhanced. General QC can help with various planning, reasoning, and similar symbol manipulation tasks intelligent agents seem to be good at. Finally, the quantum AE paradigm provides a framework for the design and evaluation of whole quantum agents, built also from quantum-enhanced subroutines. These conceptual components form a basis for a behaviour-based theory of quantum-enhanced intelligent agents.
AI is quite a loaded concept, in a manner in which ML is not. The question of how genuine AI can be realized is likely to be as difficult as the more basic question of what intelligence is at all, which has been puzzling philosophers and scientists for centuries. Starting a broad discussion of when quantum AI will be reached, and what it will be like, is thus clearly ill-advised. We can nonetheless provide a few less controversial observations. The first observation is the fact that the overall concept of quantum AI might have multiple meanings. First, it may pertain to a generalization of the very notion of intelligence, in the sense in which section V discusses how classical learning concepts generalize to include genuinely quantum extensions. A second, and perhaps more pragmatic, reading of quantum AI may ask whether quantum effects can be utilized to generate more intelligent agents, where the notion of intelligence itself is not generalized: quantum-enhanced artificial intelligence. We will focus on this latter reading for the remainder of this review, as the quantum generalization of basic learning concepts on its own, just as the notion of intelligence on its own, seems complicated enough. To comment on the question of quantum-enhanced AI, we first remind the reader that the conceptual debates in AI often have two perspectives. The ultimately pragmatic perspective is concerned only with behavior in relevant situations. This is perhaps best captured by Alan Turing, who suggested that it may be irrelevant what intelligence is, if it can be recognized by virtue of similarity to a "prototype" of intelligence - a human (Turing, 1950). Another perspective tends to try to capture cognitive architectures, such as SOAR, developed by John Laird, Allen Newell, and Paul Rosenbloom (Laird, 2012). Cognitive architectures try to identify the components needed to build intelligent agents capable of many tasks, and thus also care about how the intelligence is implemented. 
They often also serve as models of human cognition, and are both theories of what cognition is and of how to implement it. A third perspective comes from the practitioners of AI, who often believe that AI will be a complicated combination of various methods and techniques, including learning and specialized algorithms, but are also sympathetic to the Turing test as the definitional method. A simple reading of this third perspective is particularly appealing, as it allows us to all but equate computation, ML and AI. Consequently, all quantum machine learning algorithms, and, even more broadly, most quantum algorithms, already constitute progress in quantum AI. Aspects of such a reading can be found in a few works on the topic (Sgarbas, 2007; Wichert, 2014; Moret-Bonillo, 2015). The current status of the broad field of quantum ML and related research shows signs of activity with respect to all three of the aspects mentioned. The substantial activity in the context of ML improvements, in all aspects presented, is certainly filling the toolbox of methods which one day may play a role in the complicated designs of quantum AI practitioners. In this category, a relevant role may also be played by various algorithms which may help in planning, pruning, reasoning via symbol manipulation, and other tasks AI practice and theory encounter. Many possible quantum algorithms which may be relevant come to mind. Examples include the algorithm for performing Bayesian inference (Low et al., 2014), and algorithms for quadratic and super-polynomial improvements in NAND- and boolean-tree evaluations, which are important in the evaluation of optimal strategies in two-player games (Childs et al., 2009; Zhan et al., 2012; Farhi et al., 2008). Further, even more exotic ideas, such as quantum game theory (Eisert et al., 1999), may be relevant. 
Regarding approaches to quantum artificial general intelligence and, relatedly, to quantum cognitive architectures: while no proposals explicitly address this possibility, the framework of PS offers sufficient flexibility and structure that it may be considered a good starting point. Further, this framework is intended to keep a homogeneous structure, which may lead to a more straightforward global quantization, in comparison to models which are built out of inhomogeneous blocks - already in classical systems, a system combined out of inhomogeneous units may exhibit difficult-to-control behaviour, and it stands to reason that quantum devices may have a more difficult time being synchronized. It should be mentioned that recently there have been works providing a broad framework describing how composite large quantum systems can be precisely treated (Portmann et al., 2017). Finally, from the ultimately pragmatic perspective, the quantum AE paradigm presented can offer a starting point for a quantum-generalized Turing test for QAI, as the Turing test itself fits in the paradigm: the environment is the administrator of the test, and the agent is the machine trying to convince the environment it is intelligent. Although, at present, the only suitable referees for such a test are classical devices - humans - it is conceivable that they, too, may find quantum gadgets useful to better ascertain the nature of the candidate. However, at this point it is prudent to remind ourselves and the reader that all the above considerations are still highly speculative, and that research into genuine AI has barely broken ground.
VIII. OUTLOOK
In this review, we have presented overviews of various lines of research that connect the fields of quantum information and quantum computation, on the one side, and machine learning and artificial intelligence, on the other. Most of the work in this new area of research is still largely theoretical and conceptual, and there are, for example, hardly any dedicated experiments demonstrating how quantum mechanics can be exploited for ML and AI. However, there are a number of theoretical proposals (Lamata, 2017; Friis et al., 2015) and also first experimental works showing how these ideas can be implemented in the laboratory (Neigovzen et al., 2009; Li et al., 2015b; Cai et al., 2015; Ristè et al., 2017). At the same time it is clear that certain quantum technologies, which have been developed in the context of QIP and QC, can be readily applied to quantum learning, to the extent that learning agents or algorithms employ elements of quantum information processing in their very design. Similarly, it is clear, and there are by now several examples, that techniques from classical machine learning can be fruitfully employed in data analysis and in the design of experiments in quantum many-body physics (see section IV.D). One may ask about the long-term impact of the exchange of concepts and techniques between QM and ML/AI. What implications will this exchange have for the development of the individual fields, and what is the broader perspective of these individual activities leading towards a new field of research, with its own questions and promises? Indeed, returning the focus back to the topics of this review, we can highlight one overarching question encapsulating the collective effort of the presented research:
⇒ What are the potential and the limitations of the interaction between quantum physics, ML, and AI?
From a purely theoretical perspective, we can learn from analogies with the fields of communication, computation, or sensing. QIP has shown that to understand the limits of such information processing disciplines, both in a pragmatic and a conceptual sense, one must consider the full extent of quantum theory. Consequently, we should expect that the limits of learning, and of intelligence, can also only be fully answered in this broader context. In this sense, the topics discussed in section V already point to the rich and complex theory describing what learning may be when even information itself is a quantum object, and aspects of section VII.C point to how a general theory of quantum learning may be phrased [133]. The motivation for phrasing such a general theory may be fundamental, but it may also have more pragmatic consequences. In fact, arguments can be made that the field of quantum machine learning, and the future field of quantum AI, may constitute one of the most important research fields to emerge in recent times. Part of the reason behind such a bold claim stems from the obvious potential of both directions of influence between the two constituent sides of quantum learning (and quantum AI). For instance, the potential of quantum enhancements for ML is profound. In a society where data is generated at a geometric rate [134], and where its understanding may help us combat global problems, the potential of faster, better analyses cannot be overstated. In turn, ML and AI technologies are becoming indispensable tools in all high technologies, and they are also showing potential to help us do research in a novel, better way. A more subtle reason supporting optimism lies in the positive feedback loops between ML, AI and QIP which are becoming apparent, and which are, moreover, specific to these two disciplines. To begin with, we can claim on general grounds that QC, once realized, will play an integral part in future AI systems.
This can be deduced from even a cursory overview of the history of AI, which reveals that qualitative improvements in computing and information technologies result in progress in AI tasks; this is also intuitive. In simple terms, the state of the art in AI will always rely on the state of the art in computing.
The perfect match between ML, AI and QIP, however, may have deeper foundations. In particular, → advancements in ML/AI may help with critical steps in the building of quantum computers.
In recent times, it has become ever more apparent that learning methods may make the difference between a given technology being realizable or being effectively impossible. Beyond obvious examples, direct computational approaches to building human-level Go-playing software had failed, whereas AlphaGo (Silver et al., 2016), a fundamentally learning AI technology, achieved this complex goal. QC may in fact end up being such a technology, where exquisitely fast and adaptive control - realized, perhaps, by an autonomous smart laboratory - helps mitigate the hurdles towards quantum computers. However, the cutting-edge research discussed in sections IV.C and IV.D suggests that ML and AI techniques could help at an even deeper level, by helping us discover novel physics which may be the missing link for full-blown quantum technologies. Thus ML and AI may be what we need to build quantum computers. Another observation, which is hinted at with increasing frequency in the community, and which fully entwines ML, AI and QIP, is that → AI/ML applications may be the best reasons to build quantum computers.
Quantum computers have been proven to dramatically outperform their classical counterparts on only a handful of (often obscure) problems. Perhaps the best applications of quantum computers that had enticed investors until recently were quantum simulation and quantum cryptanalysis (i.e., using QC to break encryption), which may simply have been insufficient to stimulate broad-scale public investment. In contrast, ML- and AI-type tasks may be regarded as the "killer applications" QC has been waiting for. Not only are ML and AI applications well motivated - in recent times, arguments have been put forward that ML-type applications may be uniquely suited to be tackled by quantum technologies. For instance, ML-type applications deal with massive parallel processing of high-dimensional data, which quantum computers seem to be good at. Further, while most simulation and numerics tasks require data stability, which is incompatible with the noise that present-day quantum devices undergo, ML applications always work with noisy data. This means that such an analysis makes sense only if it is robust to noise to start with, which is the often unspoken fact of ML: the important features are the robust features. Under such a laxer set of constraints on the desired information processing, various current-day technologies, such as quantum annealing methods, may become a possible solution. The two main flavours, or directions of influence, in quantum ML thus have a natural synergistic effect, further motivating that, despite their quite fundamental differences, they should be investigated in close collaboration. Naturally, at the moment, each individual sub-field of quantum ML comes with its own set of open problems - key issues which need to be resolved before any credible verdict on the future of quantum ML can be made.
Most fit into one of the two quintessential categories of research into quantum enhancements: a) what are the limits, and how much of an edge over the best classical solutions can be achieved, and b) can the proposals be implemented in practice in any reasonable term? For most of the topics discussed, both questions remain wide open. For instance, regarding quantum enhancements using universal computation, only a few models have been beneficially quantized, and the exact problem they solve, even in theory, does not match the best established methods used in practice. Regarding the second facet, the most impressive improvements (barring isolated exceptions) can be achieved only under a significant number of assumptions, such as quantum databases and certain suitable properties of the structure of the data sets [135]. Beyond the particular issues occasionally pointed out in various parts of this review, we will forego providing an extensive list of specific open questions for each of the research lines, and refer the interested reader to the more specialized reviews for more detail (Wittek, 2014a; Schuld et al., 2014a; Biamonte et al., 2016; Arunachalam and de Wolf, 2017; Ciliberto et al., 2017).
This leads us to the final topic of speculation of this outlook section: whether QC will truly be instrumental in the construction of genuine artificial (general) intelligence. On the one hand, there is no doubt that quantum computers could help with the heavily computational problems one typically encounters in, e.g., ML. Insofar as AI reduces to sets of ML tasks, quantum computing may help. But AI is more than a sum of such specific-task-solving parts. Moreover, human brains are (usually) taken as the reference for systems capable of generating intelligent behaviour, yet there is little, and no uncontroversial, reason to believe that genuine quantum effects play any critical part in their performance (rather, there are ample reasons to dismiss the relevance of quantum effects).
In other words, quantum computers may not be necessary for general AI. The extent to which quantum mechanics has something to say about general AI will be the subject of research in the years to come. Nonetheless, already now, we can set aside any doubt that quantum computers and AI can help each other, to an extent which should not be disregarded.
FIG. 1 Oracular computation and
FIG. 3 TSP example: finding the shortest route visiting the largest cities in Germany.
FIG. 4 Supervised (in this case, best linear classifier) and unsupervised learning (here clustering into two most likely groups and outliers) illustrated.
FIG. 8 A three-state, two-action MDP.
FIG. 9 Illustration of the structure of the episodic and compositional memory in PS, comprising clips (episodes) and probabilistic transitions. The actuator of the agent performs the action. Adapted from (Briegel and De las Cuevas, 2012).
N × M black-and-white bitmaps, characterized by a function f : {1, . . . , N} × {1, . . . , M} → {0, 1} (which technically coincides with a concept in CLT, see II.B.1), specifying the color-value f(x, y) ∈ {0, 1} of a pixel at coordinate (x, y).
FIG. 13 Basic learning curves for PS with non-zero γ in the invasion game with a rules switch at time step 250. Adapted from (Briegel and De las Cuevas, 2012).
FIG. 15 QrPS representation of a network, and its steady state over non-action (red) and action (blue) clips.
CONTENTS

II. Classical background
  A. Methods of machine learning
    1. Artificial neural networks and deep learning
    2. Support Vector Machines
    3. Other models
  B. Mathematical theories of supervised and inductive learning
    1. Computational learning theory
    2. VC theory
  C. Basic methods and theory of reinforcement learning
III. Quantum mechanics, learning, and AI
IV. Machine learning applied to (quantum) physics
  A. Hamiltonian estimation and metrology
    1. Hamiltonian estimation
    2. Phase estimation settings
    3. Generalized Hamiltonian estimation settings
  B. Design of target evolutions
    1. Off-line design
    2. On-line design
  C. Controlling quantum experiments, and machine-assisted research
    1. Controlling complex processes
    2. Learning how to experiment
  D. Machine learning in condensed-matter and many-body physics
V. Quantum generalizations of machine learning concepts
  A. Quantum generalizations: machine learning of quantum data
    1. State discrimination, state classification, and machine learning of quantum data
    2. Computational learning perspectives: quantum states as concepts
  B. (Quantum) learning and quantum processes
VI. Quantum enhancements for machine learning
  A. Learning efficiency improvements: sample complexity
    1. Quantum PAC learning
    2. Learning from membership queries
  B. Improvements in learning capacity
    1. Capacity from amplitude encoding
    2. Capacity via quantized Hopfield networks
  C. Run-time improvements: computational complexity
    1. Speed-up via adiabatic optimization
    2. Speed-ups in circuit architectures
VII. Quantum learning agents, and elements of quantum AI
  A. Quantum learning via interaction
  B. Quantum agent-environment paradigm for reinforcement learning
    1. AE-based classification of quantum ML
  C. Towards quantum artificial intelligence
VIII. Outlook
FIG. 10 Table of topics investigating the overlaps between quantum physics, machine learning, and AI.
Machine learning applied to (quantum) physics:
(1) Hamiltonian estimation
(2) Quantum control and gate design
(3) Controlling quantum experiments, and machine-assisted research
(4) Condensed matter and many-body physics
Quantum enhancements for ML:
(1) Quantum perceptrons and neural networks
(2) Quantum computational learning theory
(3) Quantum enhancement of learning capacity
(4) Quantum computational algorithmic speed-ups for learning
Quantum generalizations of ML-type tasks:
(1) Quantum generalizations: machine learning of quantum data
(2) (Quantum) learning of quantum processes
Quantum learning agents and elements of quantum AI:
(1) Quantum-enhanced learning through interaction
(2) Quantum agent-environment paradigm
(3) Towards quantum AI
79 Quantum in that that which is learned is encoded in a quantum state.
80 In other words, for any environment state s, producing an action a causes a transition to some state s′ with probability s′^τ P^a s, where states are represented as canonical vectors and P^a is the transition matrix associated with action a. In general, the observation output can also depend on the previous action of the agent. The dynamics of the quantum POMDP are defined by actions which correspond to quantum instruments (superoperators) the agent can apply: to each action a, we associate a set of Kraus operators {K^a_o}_{o ∈ O}, which satisfy Σ_o (K^a_o)† K^a_o = 1. If the agent performs the action a and observes the observation o, the state of the environment is mapped as ρ → K^a_o ρ (K^a_o)† / Tr[K^a_o ρ (K^a_o)†], where Tr[K^a_o ρ (K^a_o)†] is the probability of observing o.
81 This requires the more general and richer formalism of density operators, and leads to generalized measurements, completely positive evolutions, etc.
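The instrument update in footnote 80 can be sketched numerically. The following is a minimal illustration, not taken from the review: the two-level environment, the damping parameter gamma, and the specific amplitude-damping-style Kraus pair are our own toy choices.

```python
import numpy as np

# Hypothetical two-level environment and one action with two possible
# observations o in {0, 1}; the Kraus pair satisfies sum_o K_o^dag K_o = I.
gamma = 0.3
K = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]]),   # observe o = 0
     np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])]       # observe o = 1

# Completeness check: sum_o K_o^dag K_o equals the identity.
assert np.allclose(sum(k.conj().T @ k for k in K), np.eye(2))

def apply_instrument(rho, o):
    """Return (probability of o, conditional post-observation state)."""
    unnorm = K[o] @ rho @ K[o].conj().T
    p = np.real(np.trace(unnorm))          # p(o) = Tr[K_o rho K_o^dag]
    return p, unnorm / p

rho = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+| environment state
probs = []
for o in (0, 1):
    p, rho_post = apply_instrument(rho, o)
    probs.append(p)
    assert np.isclose(np.trace(rho_post).real, 1.0)  # conditional states stay normalized
assert np.isclose(sum(probs), 1.0)                   # observation probabilities sum to 1
print(probs)   # here p(1) = gamma / 2 = 0.15
```

The post-observation state is renormalized by the observation probability, exactly as in the footnote's mapping ρ → K^a_o ρ (K^a_o)† / Tr[·].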
Paraphrased from (McCarthy et al., 1955).
Each frame is ca. 10^6-dimensional, as each pixel constitutes one dimension, multiplied by the 30 frames required for the one-second clip.
More generally, we can distinguish four modes of such operant conditioning: positive reinforcement (reward when correct), negative reinforcement (removal of negative reward when correct), positive punishment (negative reward when incorrect) and negative punishment (removal of reward when incorrect).
For example, in k−nearest neighbour classification, the training set is split into disjoint subsets specified by the shared labels. Given a new point which is to be classified, the algorithm identifies k nearest neighbour points from the data set to the new point. The label of the new point is decided by the majority label of these neighbours. The labeling process thus needs to refer to the entire training set.
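The labeling step described in this footnote can be made concrete; the toy training points and labels below are invented for illustration, and the classifier scans the entire training set for each query, as noted.

```python
from collections import Counter

def knn_classify(train, point, k=3):
    """Classify `point` by the majority label among its k nearest training points.

    `train` is a list of ((x, y), label) pairs; the labeling step must refer
    to the whole training set to find the nearest neighbours.
    """
    dist = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    neighbours = sorted(train, key=lambda item: dist(item[0], point))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# Toy data: two well-separated clusters labeled "A" and "B".
train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]
print(knn_classify(train, (0.5, 0.5)))   # -> "A" (near the first cluster)
print(knn_classify(train, (5.5, 5.5)))   # -> "B" (near the second cluster)
```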
More specifically, there exists a set of weights doing the job, even though standard training algorithms may fail to converge to that point.
18 Roughly speaking, models with high model complexity are more likely to "overfit", and it is more difficult to provide guarantees that they will generalize well, i.e., perform well beyond the training set.
Indeed, this can be supported by hard theory, see Cover's Theorem(Cover, 1965).
In ML, the term model is often overloaded. Most often it refers to a classification system which has been trained on a dataset, and in that sense it "models" the actual labeling function. Often, however, it will also refer to a class of learning algorithms (e.g. the SVM learning model).
While the dichotomies between sample complexity and computational complexity are often considered in the literature, the authors first heard of the trichotomic setting, including model complexity, from (Wittek, 2014b). Examples of such balancing, and its failures, can be observed in sections V.A.2 and VI.A.1.
An exception to this would be the uninteresting case when the class was finite and all instances had been observed.
For instance, in modern devices, the devices are (mostly) trained for the handwriting of the owner, which will most of the time be distinct from other persons' handwriting, although the device should in principle handle any (reasonable) handwriting.
30 Note that we recover the standard PAC setting once the conditional probability distribution P_D(y|x), where the values of the first n bits (data points) are fixed, is a Kronecker delta, i.e., the label is deterministic.
This rule is inspired by the Bellman optimality equation, Q*(s, a) := E[R(s, a)] + γ E[max_{a′} Q*(s′, a′)], where the expected values are taken over the randomness of the MDP transition rule and the reward function, and which has as its solution - the fixed point - the optimal Q-value function. This equation can be used when the specification of the environment is fully known. Note that the optimal Q-values can be found without actually explicitly identifying an optimal policy.
41 Q-learning is an example of an off-policy algorithm, as the estimate of the future value in Eq. 21 is not evaluated relative to the actual policy of the agent (indeed, it is not necessarily even defined), but rather relative to the so-called "greedy policy", which takes the action with the maximal value estimate (note the estimate appears with a maximization term).
42 To avoid any confusion, we have introduced the concept of a policy to refer to the conditional probability distributions specifying what the agent will do given a state. However, the same term is often overloaded to also refer to the specification of the effective policy an agent will use given some state/time-step. For instance, "ε-greedy policies" refer to behaviour in which, given a state, the agent outputs the action with the highest corresponding Q-value - i.e., acts greedily - with probability 1 − ε, and produces a random action otherwise. Clearly, this rule specifies a policy at any given time step, given the current Q-value table of the agent. One can also think of time-dependent policies, meaning that the policy also explicitly depends on the time-step. An example of such a time-dependent and (slowly converging) GLIE policy is an ε-greedy policy where ε = ε(t) = 1/t is a function of the time-step, converging to zero.
43 SARSA is the acronym for state-action-reward-state-action.
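The off-policy update and the ε-greedy rule discussed in these footnotes can be sketched in tabular form. The two-state chain environment, learning rate, and step count below are our own toy choices, not from the review.

```python
import random

# Minimal tabular Q-learning sketch for an invented two-state chain MDP:
# taking "right" in state 0 yields reward 1 and moves to state 1;
# every other (state, action) pair resets to state 0 with no reward.
random.seed(0)
states, actions = [0, 1], ["left", "right"]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(s, a):
    if s == 0 and a == "right":
        return 1, 1.0
    return 0, 0.0

def eps_greedy(s):
    if random.random() < eps:                      # explore with probability eps
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])   # otherwise act greedily

s = 0
for _ in range(500):
    a = eps_greedy(s)
    s_next, r = step(s, a)
    # Off-policy update: the future value uses the greedy estimate max_a'.
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, a2)] for a2 in actions) - Q[(s, a)])
    s = s_next

assert Q[(0, "right")] > Q[(0, "left")]   # the rewarding action is learned
```

Note that the bootstrapped target r + γ max_{a′} Q(s′, a′) is evaluated relative to the greedy policy regardless of which action the ε-greedy behaviour actually produced, which is precisely what makes the algorithm off-policy.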
For instance, the problem of finding optimal infinite-horizon policies, which was solvable via dynamic programming in the fully observable (MDP) case, becomes, in general, uncomputable.
45 To comment a bit on how RL methods and tasks may be generalized towards general AI, one can consider learning scenarios where one has to combine standard data-learning ML, to handle the realistic percept space (which is effectively infinite), with RL techniques. An example of this was done, e.g., in the famous AlphaGo system (Silver et al., 2016). Further, one could also consider more general types of interaction, beyond the strict turn-based metronomic model. For instance, in active reinforcement learning, the interaction occurs relative to an external clock, which intertwines the computational complexity and learning efficiency of the agent (see section VII.A). Further, the interaction may occur in fully continuous time. This setting is also not typically studied in the basic theory of AI, but occurs in the closely related problem of control theory (Wiseman and Milburn, 2010), which may be more familiar to physicists. Such generalizations are at the cutting edge of research, also in the classical realm, and also beyond the scope of this paper.
46 In this sense, a particular agent/robot may perceive the full state of the environment in some environments (making the percepts identical to states), whereas in other environments the sensors fail to observe everything, in which case the percepts correspond to observations.
In fact, this is not entirely true - certain proofs of separation between PAC learnability in the quantum and classical model assume hardness of factoring of certain integers (see section VI.A.2).
Certain optimization problems, such as online optimization problems where information is revealed incrementally, and decisions are made before all information is available, are more clearly related to "quintessential" ML problems such as supervised, unsupervised, or reinforcement learning.
49 Interestingly, such techniques allow for the identification of optimal approximations of unphysical processes, which can be used to shed light on the properties of quantum operations.
This is often also expressed in terms of the variance (∆θ)^2, which then scales as N^{-2}, rather than the standard deviation.
This addition partially circumvents the computation of the likelihood function P (d|θ; C) which requires the simulation of the quantum system, and is in fact, in general intractable.
For the sake of intuition, a frequent application of X gates, referred to as bang-bang control, on a system which is freely evolving with respect to σ_z effectively flips the direction of rotation generated by the system Hamiltonian, undoing its action.
60 By instantaneous we mean that it is assumed that the implementation requires no evolution time, e.g. by using infinite field strengths.
Indeed, the authors also show that correct behavior can be established when additional unknown parameters are introduced, like time-and-space dependent fields (see for results), where hand-crafted methods would fail.
For instance, the authors investigate the strategies explored by the learning agent, and identify spin-glass like phase transition in the space of protocols as a function of the protocol duration. This highlights the difficulty of the learning problem.
This method can be thought of as effectively assigning a prior stating that the analyzed state is well approximated by a NQS.
Arguably, in the light of the physicalistic viewpoint on the nature of information, which posits that "information is [ultimately] physical".
65 Classical evolutions are guaranteed to transform computational basis states (the "classical states") to computational basis states, and the closed-system setting implies the dynamics must be reversible, leaving only permutations.
To provide a minimal amount of intuition: the best classical algorithm for the membership query model heavily depends on Fourier transforms (DFT) of certain sets; the authors then use the fact that the FT can be efficiently implemented on the amplitudes of the states generated by the quantum oracle using quantum computers. We refer the reader to (Bshouty and Jackson, 1998) for further details.
86 The learning of such functions is in QIP circles also known as the (non-recursive) Bernstein-Vazirani problem, defined first in (Bernstein and Vazirani, 1997).
87 However, the meaning of noise is not exactly the same in the classical and quantum case.
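The Bernstein-Vazirani problem mentioned in footnote 86 has a simple classical membership-query baseline, which the sketch below illustrates (the hidden string is randomly generated for the example). Classically, n queries on the basis vectors e_i recover the hidden string; the quantum algorithm needs a single quantum query.

```python
import numpy as np

# Learn the hidden string a in f(x) = a . x mod 2 via membership queries.
rng = np.random.default_rng(7)
n = 8
a = rng.integers(0, 2, size=n)                 # hidden bit string
f = lambda x: int(np.dot(a, x) % 2)            # the membership oracle

# Query f on each basis vector e_i: f(e_i) = a_i, so n queries suffice.
recovered = np.array([f(np.eye(n, dtype=int)[i]) for i in range(n)])
assert np.array_equal(recovered, a)
print("recovered hidden string with", n, "classical queries")
```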
For a discussion on some of the shortcomings see e.g.(Brun et al., 2003;Trugenberger, 2003), and we also refer the reader to more recent reviews(Schuld et al., 2014b,c) for further details and analysis of the potential application of such memories to pattern recognition problems.
Generically, local optimization is easier than global, and in the context of the Ising system, global optimization is known to be NP-hard.
More precisely, an efficient algorithm which solves general QUBO problems can also efficiently solve arbitrary Ising ground state problems. One direction is trivial, as QUBO optimization is a special case of ground state finding where the local fields are zero. Conversely, given an Ising ground state problem over n variables, we can construct a QUBO over n + 1 variables, which can be used to encode the local terms.
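The QUBO-Ising equivalence can be checked numerically. The sketch below uses the textbook change of variables s_i = 2x_i − 1 rather than the footnote's (n+1)-variable ancilla construction; the random couplings and fields are invented for illustration.

```python
import itertools
import numpy as np

# Convert a small Ising energy H(s) = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i,
# spins s_i in {-1, +1}, into an equivalent QUBO over bits x_i in {0, 1}.
rng = np.random.default_rng(1)
n = 4
J = np.triu(rng.normal(size=(n, n)), k=1)   # couplings (strictly upper triangular)
h = rng.normal(size=n)                      # local fields

def ising_energy(s):
    return s @ J @ s + h @ s

# Expanding H(2x - 1): quadratic terms 4 J_ij x_i x_j, linear terms collect
# -2 (row + column sums of J) + 2 h, and a constant offset sum(J) - sum(h).
Q = 4 * J
lin = -2 * (J.sum(axis=1) + J.sum(axis=0)) + 2 * h
offset = J.sum() - h.sum()

def qubo_energy(x):
    return x @ Q @ x + lin @ x + offset

# Brute-force check: energies agree on every configuration.
for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits, dtype=float)
    assert np.isclose(ising_energy(2 * x - 1), qubo_energy(x))
print("Ising and QUBO energies agree on all", 2 ** n, "configurations")
```

Since the two energy landscapes differ only by the relabeling of variables, their minimizers coincide, which is the sense in which the two optimization problems are equivalent.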
Servedio also, incidentally, provided some of the earliest results in quantum computational learning theory, discussed in previous sections.
In minimum tree clustering, the data is represented as a weighted graph (the weight being the distance), and a minimum-weight spanning tree is found; k clusters are identified by simply removing the (k − 1) highest-weight edges. Divisive clustering is an iterative method which splits sets into two subsets according to a chosen criterion, and this process is iterated. k-median clustering identifies clusters which minimize the cumulative within-cluster distances to the median point of the cluster.
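The minimum tree clustering recipe can be sketched directly; the point set below is invented for illustration, and the MST is built with Kruskal's algorithm.

```python
import itertools
import math

def mst_clusters(points, k):
    """Cluster by building a minimum spanning tree (Kruskal), then dropping
    its k-1 heaviest edges so the remaining forest has k components."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i, j in itertools.combinations(range(n), 2))
    mst = []
    for w, i, j in edges:                   # add lightest non-cycle edges
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((w, i, j))
    # Keep only the n-k lightest MST edges; the k-1 heaviest are the cuts.
    parent = list(range(n))
    for w, i, j in sorted(mst)[: n - k]:
        parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (30, 0)]
labels = mst_clusters(points, 3)
# Points 0-2 form one cluster, 3-4 another, and point 5 is isolated.
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4]
assert len(set(labels)) == 3
```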
To exemplify the logic behind association rules mining, in the typical context of shopping, if shopping item (list element) B occurs in nearly every shopping list in which shopping item A occurs as well, one concludes that the person buying A is also likely to buy B. This is captured by the rule denoted B ⇒ A.
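The shopping-list logic of the footnote can be made concrete with a confidence computation over toy baskets (the baskets and item names below are invented for illustration).

```python
# Association-rule confidence: how often the consequent item appears among
# the baskets that contain the antecedent item.
baskets = [{"bread", "butter"}, {"bread", "butter", "milk"},
           {"bread", "butter", "jam"}, {"milk"}, {"bread"}]

def confidence(antecedent, consequent):
    with_antecedent = [b for b in baskets if antecedent in b]
    return sum(consequent in b for b in with_antecedent) / len(with_antecedent)

print(confidence("butter", "bread"))   # -> 1.0: every butter basket also has bread
print(confidence("bread", "milk"))     # -> 0.25: milk rarely accompanies bread here
```

A high confidence for the pair (butter, bread) is what would justify the rule that a person buying butter is also likely to buy bread.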
In a related work (Wiebe and Granade, 2015), the authors investigate the learning capacity of "small" quantum systems, and identify certain limitations in the context of Bayesian learning, based on Grover optimality bounds. Here, "small" pertains to systems of logarithmic size, encoding information in amplitudes. This work thus probes the potential of space complexity improvements for quantum-enhanced learning, related to early ideas discussed in VI.B.
Here, the condition number of the matrix A is given by the quotient of the largest and smallest singular values of A.
113 The assumption that A is Hermitian is non-restrictive, as an oracle for any sparse matrix A can be modified to yield an oracle for the symmetrized matrix Ã = |0⟩⟨1| ⊗ A† + |1⟩⟨0| ⊗ A.
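The symmetrization in footnote 113 is easy to verify numerically; the random non-Hermitian matrix A below is an invented example.

```python
import numpy as np

# Embed a generic non-Hermitian A into A_tilde = |0><1| (x) A^dag + |1><0| (x) A,
# which is Hermitian, so Hermiticity of the linear-system input is no restriction.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

ket0bra1 = np.array([[0, 1], [0, 0]])
ket1bra0 = np.array([[0, 0], [1, 0]])
A_tilde = np.kron(ket0bra1, A.conj().T) + np.kron(ket1bra0, A)

# A_tilde equals its own conjugate transpose, i.e. it is Hermitian.
assert np.allclose(A_tilde, A_tilde.conj().T)

# Solving the embedded system with right-hand side (0, b) recovers A^{-1} b
# in the first block, with a vanishing second block.
b = rng.normal(size=3)
z = np.linalg.solve(A_tilde, np.concatenate([np.zeros(3), b]))
assert np.allclose(A @ z[:3], b)
assert np.allclose(z[3:], 0)
```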
Although RL is a particularly mathematically clean model for learning by interaction, it is worthwhile to note
For instance, the Q-learning algorithm (see section II.C) is typically defined without an embodied agent-environment context. Naturally, we can easily promote this particular abstract model to an agent, by defining an agent which internally runs the Q-learning algorithm.
Representation means that we, strictly speaking, distinguish actual percepts, from the memorized percepts, and the same for actions. This distinction is however not crucial for the purposes of this exposition.
By transition matrix, we mean an entry-wise non-negative matrix, with columns adding to unity.
122 The spectral gap is defined as δ = 1 − |λ_2|, where λ_2 is, in norm, the second largest eigenvalue.
123 In full detail, these relations hold whenever the MC is lazy (all states transition back to themselves with probability at least 1/2), ensuring that all the eigenvalues are non-negative; this can be ensured by adding the identity transition with probability 1/2, which slows down mixing and hitting processes by an irrelevant factor of 2.
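The spectral gap of footnote 122, and its connection to mixing, can be sketched as follows; the particular symmetric 3-state chain is an invented example whose stationary distribution is uniform.

```python
import numpy as np

# A column-stochastic transition matrix P; delta = 1 - |lambda_2|, where
# lambda_2 is the second-largest eigenvalue in absolute value.
P = np.array([[0.6, 0.2, 0.2],
              [0.2, 0.6, 0.2],
              [0.2, 0.2, 0.6]])
assert np.allclose(P.sum(axis=0), 1.0)        # columns sum to unity

eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
delta = 1.0 - eigvals[1]                      # spectral gap (here 0.6)

# Iterating P mixes any start distribution toward the stationary one at a
# rate governed by delta; by symmetry the stationary distribution is uniform.
pi = np.full(3, 1 / 3)
p = np.array([1.0, 0.0, 0.0])
for _ in range(50):
    p = P @ p
assert np.allclose(p, pi, atol=1e-8)
print("spectral gap:", delta)
```

A larger gap means the contribution of the non-leading eigenvectors decays faster, which is why mixing (and hitting) times scale inversely with δ.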
We point out that the first ideas suggesting that quantum effects could be useful had previously been put forward in (Dong et al., 2005).
125 BQP stands for bounded-error quantum polynomial time, and collects the decision problems which can be solved with bounded error using a quantum computer. Complete problems of a given class are, in a sense, the hardest problems in that class, as all others are reducible to the complete instances using weaker reductions. In particular, it is not believed that BQP-complete problems are solvable on a classical computer, whereas all decision problems solvable by classical computers do belong to the class BQP.
Other delineations are possible, where the agent and environment have individually defined interfaces - a part of E accessible to A and a part of A accessible to E - leading to a four-partite system, but we will not be considering this here.
This realization is possible under a couple of technical assumptions, for details see.
Interestingly, the Turing test assumes that humans are good supervised learners of the concept of "intelligent agents", all the while being incapable of specifying the classifier -the definition of intelligence -explicitly.
It should be mentioned that some of the early discussions on quantum AI also consider the possibility that human brains utilize some form of quantum processing, which may be at the crux of human intelligence. Such claims are still highly hypothetical, and are not reviewed in this work.
130 See http://www.scottaaronson.com/blog/?p=207 for a simple explanation.
131 This is reminiscent of the problem of quantum verification, where "quantum Turing test" is a term used for a test which efficiently decides whether the agent is a genuine quantum device/computer (Kashefi, 2013).
These complement the experimental work based on superconducting quantum annealers (Neven et al., 2009b; Adachi and Henderson, 2015), which is closely related to one of the approaches to QML.
133 The question of whether information may be quantum, and whether we can talk about "quantum knowledge" as an outside observer, broaches the completely fundamental questions of the interpretation of quantum mechanics: for instance, a Quantum Bayesianist would likely reject such a third-person perspective on learning.
134 https://insidebigdata.com/2017/02/16/the-exponential-growth-of-data/ (accessed July 2017)
In many proposals, the condition number of a matrix depending on the dataset explicitly appears in run-time, see section VI.C.2
P. Wittek. Quantum Machine Learning: What Quantum Computing Means to Data Mining. Elsevier Insights. Elsevier, 2014a. ISBN 9780128009536. URL https://books.google.de/books?id=PwUongEACAAJ.
Maria Schuld, Ilya Sinayskiy, and Francesco Petruccione. The quest for a quantum neural network. Quantum Information Processing, 13(11):2567-2586, Nov 2014a. URL http://dx.doi.org/10.1007/s11128-014-0809-8.
Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and Seth Lloyd. Quantum machine learning, 2016, arXiv:1611.09347.
Srinivasan Arunachalam and Ronald de Wolf. A survey of quantum learning theory. CoRR, abs/1701.06806, 2017. URL http://arxiv.org/abs/1701.06806.
Carlo Ciliberto, Mark Herbster, Alessandro Davide Ialongo, Massimiliano Pontil, Andrea Rocchetto, Simone Severini, and Leonard Wossnig. Quantum machine learning: a classical perspective, 2017, arXiv:1707.08561.
Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press, New York, NY, USA, 10th edition, 2011. ISBN 9781107002173.
Yuri Manin. Computable and Uncomputable. Sovetskoye Radio, 1980.
Richard Feynman. Simulating physics with computers. International Journal of Theoretical Physics, 21(6-7):467-488, June 1982. URL http://dx.doi.org/10.1007/bf02650179.
Peter W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Journal on Computing, 26(5):1484-1509, Oct 1997. URL https://doi.org/10.1137/s0097539795293172.
Andrew M. Childs and Wim van Dam. Quantum algorithms for algebraic problems. Rev. Mod. Phys., 82:1-52, Jan 2010. URL https://link.aps.org/doi/10.1103/RevModPhys.82.1.
Ashley Montanaro. Quantum algorithms: an overview. npj Quantum Information, 2:15023, Jan 2016. URL http://dx.doi.org/10.1038/npjqi.2015.23.
Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. Quantum algorithm for linear systems of equations. Phys. Rev. Lett., 103:150502, Oct 2009. URL https://link.aps.org/doi/10.1103/PhysRevLett.103.150502.
Andrew M. Childs, Robin Kothari, and Rolando D. Somma. Quantum linear systems algorithm with exponentially improved dependence on precision, 2015, arXiv:1511.02306.
Patrick Rebentrost, Adrian Steffens, and Seth Lloyd. Quantum singular value decomposition of non-sparse low-rank matrices, 2016a, arXiv:1607.05404.
David Poulin and Pawel Wocjan. Sampling from the thermal quantum Gibbs state and evaluating partition functions with a quantum computer. Phys. Rev. Lett., 103:220502, Nov 2009. URL https://link.aps.org/doi/10.1103/PhysRevLett.103.220502.
E. Crosson and A. W. Harrow. Simulated quantum annealing can be exponentially faster than classical simulated annealing. In 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), pages 714-723, Oct 2016.
Fernando G. S. L. Brandao and Krysta Svore. Quantum speed-ups for semidefinite programming, 2016, arXiv:1609.05537.
Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. A quantum approximate optimization algorithm, 2014, arXiv:1411.4028.
I. M. Georgescu, S. Ashhab, and Franco Nori. Quantum simulation. Rev. Mod. Phys., 86:153-185, Mar 2014. URL https://link.aps.org/doi/10.1103/RevModPhys.86.153.
Lov K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-eighth Annual ACM Symposium on Theory of Computing, STOC '96, pages 212-219, New York, NY, USA, 1996. ACM. ISBN 0-89791-785-5. URL http://doi.acm.org/10.1145/237814.237866.
A fast quantum mechanical algorithm for database search. K Lov, Grover, http:/doi.acm.org/10.1145/237814.2378661996. ACM. ISBN 0-89791-785-5Proceedings of the Twenty-eighth Annual ACM Symposium on Theory of Computing, STOC '96. the Twenty-eighth Annual ACM Symposium on Theory of Computing, STOC '96New York, NY, USALov K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-eighth Annual ACM Symposium on Theory of Computing, STOC '96, pages 212-219, New York, NY, USA, 1996. ACM. ISBN 0-89791-785-5. URL http://doi.acm.org/10.1145/237814.237866.
Spatial search by quantum walk. M Andrew, Jeffrey Childs, Goldstone, https:/link.aps.org/doi/10.1103/PhysRevA.70.022314Phys. Rev. A. 7022314Andrew M. Childs and Jeffrey Goldstone. Spatial search by quantum walk. Phys. Rev. A, 70:022314, Aug 2004. URL https://link.aps.org/doi/10.1103/PhysRevA.70.022314.
Quantum random walks: An introductory overview. J Kempe, 10.1080/00107151031000110776Contemporary Physics. 444J Kempe. Quantum random walks: An introductory overview. Contemporary Physics, 44(4):307- 327, 2003, http://dx.doi.org/10.1080/00107151031000110776. URL http://dx.doi.org/10.1080/ 00107151031000110776.
Exponential algorithmic speedup by a quantum walk. Andrew M Childs, Richard Cleve, Enrico Deotto, Edward Farhi, Sam Gutmann, Daniel A Spielman, http:/doi.acm.org/10.1145/780542.7805521-58113-674-9Proceedings of the Thirty-fifth Annual ACM Symposium on Theory of Computing, STOC '03. the Thirty-fifth Annual ACM Symposium on Theory of Computing, STOC '03New York, NY, USAACMAndrew M. Childs, Richard Cleve, Enrico Deotto, Edward Farhi, Sam Gutmann, and Daniel A. Spielman. Exponential algorithmic speedup by a quantum walk. In Proceedings of the Thirty-fifth Annual ACM Symposium on Theory of Computing, STOC '03, pages 59-68, New York, NY, USA, 2003. ACM. ISBN 1-58113-674-9. URL http://doi.acm.org/10.1145/780542.780552.
Quantum walks. Daniel Reitzner, Daniel Nagaj, Vladimir Buzek, ACTA PHYSICA SLOVACA. 616Daniel Reitzner, Daniel Nagaj, and Vladimir Buzek. Quantum walks. ACTA PHYSICA SLOVACA, 61(6): 603-725, 2012.
Discrete-query quantum algorithm for NAND trees. Andrew M Childs, Richard Cleve, Stephen P Jordan, David L Yonge-Mallo, 10.4086/toc.2009.v005a005Theory of Computing. 51Andrew M. Childs, Richard Cleve, Stephen P. Jordan, and David L. Yonge-Mallo. Discrete-query quantum algorithm for NAND trees. Theory of Computing, 5(1):119-123, 2009. URL https://doi.org/10.4086/ toc.2009.v005a005.
Super-polynomial quantum speed-ups for boolean evaluation trees with hidden structure. Bohua Zhan, Shelby Kimmel, Avinatan Hassidim, http:/doi.acm.org/10.1145/2090236.2090258In Innovations in Theoretical Computer Science. Bohua Zhan, Shelby Kimmel, and Avinatan Hassidim. Super-polynomial quantum speed-ups for boolean evaluation trees with hidden structure. In Innovations in Theoretical Computer Science 2012, Cambridge, MA, USA, January 8-10, 2012, pages 249-265, 2012. URL http://doi.acm.org/10.1145/2090236. 2090258.
Separations in query complexity using cheat sheets. Shalev Scott Aaronson, Robin Ben-David, Kothari, http:/doi.acm.org/10.1145/2897518.2897644978-1-4503-4132-5Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing, STOC '16. the Forty-eighth Annual ACM Symposium on Theory of Computing, STOC '16New York, NY, USAACMScott Aaronson, Shalev Ben-David, and Robin Kothari. Separations in query complexity using cheat sheets. In Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing, STOC '16, pages 863-876, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4132-5. URL http://doi.acm.org/10. 1145/2897518.2897644.
Quantum communication and complexity. Wolf Ronald De, 0304-3975Theoretical Computer Science. 2871Ronald de Wolf. Quantum communication and complexity. Theoretical Computer Science, 287(1): 337 -353, 2002. ISSN 0304-3975. URL http://www.sciencedirect.com/science/article/pii/ S0304397502003778. Natural Computing.
Man-Hong Yung and Alán Aspuru-Guzik. A quantum-quantum metropolis algorithm. K Temme, T J Osborne, K G Vollbrecht, D Poulin, F Verstraete, 10.1038/nature097700028-0836Proceedings of the National Academy of Sciences. 4717336NatureK. Temme, T. J. Osborne, K. G. Vollbrecht, D. Poulin, and F. Verstraete. Quantum metropolis sampling. Nature, 471(7336):87-90, Mar 2011. ISSN 0028-0836. URL http://dx.doi.org/10.1038/nature09770. Man-Hong Yung and Alán Aspuru-Guzik. A quantum-quantum metropolis algorithm. Proceedings of the National Academy of Sciences, 109(3):754-759, 2012, http://www.pnas.org/content/109/3/754.full.pdf. URL http://www.pnas.org/content/109/3/754.abstract.
Quantum theory, the church-turing principle and the universal quantum computer. D Deutsch, 0080-4630Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. 400D. Deutsch. Quantum theory, the church-turing principle and the universal quantum computer. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 400(1818):97- 117, 1985, http://rspa.royalsocietypublishing.org/content/400/1818/97.full.pdf. ISSN 0080-4630. URL http://rspa.royalsocietypublishing.org/content/400/1818/97.
A one-way quantum computer. Robert Raussendorf, Hans J Briegel, https:/link.aps.org/doi/10.1103/PhysRevLett.86.5188Phys. Rev. Lett. 86Robert Raussendorf and Hans J. Briegel. A one-way quantum computer. Phys. Rev. Lett., 86:5188-5191, May 2001. URL https://link.aps.org/doi/10.1103/PhysRevLett.86.5188.
Van den Nest. Measurement-based quantum computation. H J Briegel, D E Browne, W Dur, R Raussendorf, M , 10.1038/nphys11571745-2473Nat Phys. H. J. Briegel, D. E. Browne, W. Dur, R. Raussendorf, and M. Van den Nest. Measurement-based quantum computation. Nat Phys, pages 19-26, Jan 2009. ISSN 1745-2473. URL http://dx.doi.org/10.1038/ nphys1157.
Fast graph operations in quantum computation. Liming Zhao, Carlos A Pérez-Delgado, Joseph F Fitzsimons, https:/link.aps.org/doi/10.1103/PhysRevA.93.032314Phys. Rev. A. 9332314Liming Zhao, Carlos A. Pérez-Delgado, and Joseph F. Fitzsimons. Fast graph operations in quantum computation. Phys. Rev. A, 93:032314, Mar 2016. URL https://link.aps.org/doi/10.1103/PhysRevA. 93.032314.
Multiparty delegated quantum computing. Elham Kashefi, Anna Pappa, arXiv:1606.09200Elham Kashefi and Anna Pappa. Multiparty delegated quantum computing, 2016, arXiv:1606.09200.
Universal blind quantum computation. A Broadbent, J Fitzsimons, E Kashefi, 50th Annual IEEE Symposium on Foundations of Computer Science. A. Broadbent, J. Fitzsimons, and E. Kashefi. Universal blind quantum computation. In 2009 50th Annual IEEE Symposium on Foundations of Computer Science, pages 517-526, Oct 2009.
Topological quantum computation. H Michael, Alexei Freedman, Michael J Kitaev, Zhenghan Larsen, Wang, 10.1090/s0273-0979-02-00964-3Bulletin of the American Mathematical Society. 4001Michael H. Freedman, Alexei Kitaev, Michael J. Larsen, and Zhenghan Wang. Topological quantum computation. Bulletin of the American Mathematical Society, 40(01):31-39, oct 2002. URL https: //doi.org/10.1090/s0273-0979-02-00964-3.
A polynomial quantum algorithm for approximating the jones polynomial. Dorit Aharonov, Vaughan Jones, Zeph Landau, http:/doi.acm.org/10.1145/1132516.11325791-59593-134-1Proceedings of the Thirty-eighth Annual ACM Symposium on Theory of Computing, STOC '06. the Thirty-eighth Annual ACM Symposium on Theory of Computing, STOC '06New York, NY, USAACMDorit Aharonov, Vaughan Jones, and Zeph Landau. A polynomial quantum algorithm for approximating the jones polynomial. In Proceedings of the Thirty-eighth Annual ACM Symposium on Theory of Computing, STOC '06, pages 427-436, New York, NY, USA, 2006. ACM. ISBN 1-59593-134-1. URL http://doi.acm.org/10.1145/1132516.1132579.
Quantum computation by adiabatic evolution. Edward Farhi, Jeffrey Goldstone, Sam Gutmann, Michael Sipser, arXiv:quant-ph/0001106Edward Farhi, Jeffrey Goldstone, Sam Gutmann, and Michael Sipser. Quantum computation by adiabatic evolution, 2000, arXiv:quant-ph/0001106.
Designing adiabatic quantum optimization: A case study for the traveling salesman problem. Bettina Heim, Ethan W Brown, Dave Wecker, Matthias Troyer, arXiv:1702.06248Bettina Heim, Ethan W. Brown, Dave Wecker, and Matthias Troyer. Designing adiabatic quantum optimization: A case study for the traveling salesman problem, 2017, arXiv:1702.06248.
The computational complexity of linear optics. Scott Aaronson, Alex Arkhipov, http:/doi.acm.org/10.1145/1993636.1993682978-1-4503-0691-1Proceedings of the Forty-third Annual ACM Symposium on Theory of Computing, STOC '11. the Forty-third Annual ACM Symposium on Theory of Computing, STOC '11New York, NY, USAACMScott Aaronson and Alex Arkhipov. The computational complexity of linear optics. In Proceedings of the Forty-third Annual ACM Symposium on Theory of Computing, STOC '11, pages 333-342, New York, NY, USA, 2011. ACM. ISBN 978-1-4503-0691-1. URL http://doi.acm.org/10.1145/1993636.1993682.
Sergio Boixo, Sergei V Isakov, N Vadim, Ryan Smelyanskiy, Nan Babbush, Zhang Ding, Michael J Jiang, Bremner, arXiv:1608.00263John M. Martinis, and Hartmut Neven. Characterizing quantum supremacy in near-term devices. Sergio Boixo, Sergei V. Isakov, Vadim N. Smelyanskiy, Ryan Babbush, Nan Ding, Zhang Jiang, Michael J. Bremner, John M. Martinis, and Hartmut Neven. Characterizing quantum supremacy in near-term devices, 2016, arXiv:1608.00263.
Quantum advantage with shallow circuits. Sergey Bravyi, David Gosset, Robert Koenig, arXiv:1704.00690Sergey Bravyi, David Gosset, and Robert Koenig. Quantum advantage with shallow circuits, 2017, arXiv:1704.00690.
Temporally unstructured quantum computation. Dan Shepherd, Michael J Bremner, 1364-5021Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. 465Dan Shepherd and Michael J. Bremner. Temporally unstructured quantum computation. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 465(2105):1413-1439, 2009, http://rspa.royalsocietypublishing.org/content/465/2105/1413.full.pdf. ISSN 1364-5021. URL http://rspa.royalsocietypublishing.org/content/465/2105/1413.
Achieving quantum supremacy with sparse and noisy commuting quantum computations. Quantum, 1:8. Michael J Bremner, Ashley Montanaro, Dan J Shepherd, 10.22331/q-2017-04-25-82521-327XMichael J. Bremner, Ashley Montanaro, and Dan J. Shepherd. Achieving quantum supremacy with sparse and noisy commuting quantum computations. Quantum, 1:8, April 2017. ISSN 2521-327X. URL https://doi.org/10.22331/q-2017-04-25-8.
. J , 25th Solvay ConfJ. Preskill, 2012. 25th Solvay Conf.
Quantum sampling problems, bosonsampling and quantum supremacy. A P Lund, Michael J Bremner, T C Ralph, 10.1038/s41534-017-0018-22056-6387npj Quantum Information. 315A. P. Lund, Michael J. Bremner, and T. C. Ralph. Quantum sampling problems, bosonsampling and quantum supremacy. npj Quantum Information, 3(1):15, 2017. ISSN 2056-6387. URL http://dx.doi. org/10.1038/s41534-017-0018-2.
Artificial Intelligence: A Modern Approach. Stuart Russell, Peter Norvig, 0136042597Prentice Hall Press9780136042594Upper Saddle River, NJ, USA3rd editionStuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall Press, Upper Saddle River, NJ, USA, 3rd edition, 2009. ISBN 0136042597, 9780136042594.
. M L Mccarthy, N Minsky, C E Rochester, Shannon, Proposal For The Dart-Mouth Summer, Research, On, Intelligence, McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon. A PROPOSAL FOR THE DART- MOUTH SUMMER RESEARCH PROJECT ON ARTIFICIAL INTELLIGENCE. http://www- formal.stanford.edu/jmc/history/dartmouth/dartmouth.html, 1955. URL http://www-formal.stanford. edu/jmc/history/dartmouth/dartmouth.html.
Symbolic versus Subsymbolic. Chris Eliasmith, William Bechtel, 10.1002/0470018860.s00022Ltd. John Wiley & SonsChris Eliasmith and William Bechtel. Symbolic versus Subsymbolic. John Wiley & Sons, Ltd, 2006. ISBN 9780470018866. URL http://dx.doi.org/10.1002/0470018860.s00022.
Computer science as empirical inquiry: Symbols and search. Allen Newell, Herbert A Simon, http:/doi.acm.org/10.1145/360018.3600220001-0782Commun. ACM. 193Allen Newell and Herbert A. Simon. Computer science as empirical inquiry: Symbols and search. Commun. ACM, 19(3):113-126, March 1976. ISSN 0001-0782. URL http://doi.acm.org/10.1145/360018.360022.
A brief history of connectionism. David A Medler, Neural Computing Surveys. 1David A. Medler. A brief history of connectionism. Neural Computing Surveys, 1:61-101, 1998.
Elephants don't play chess. Rodney A Brooks, 0921-8890Robotics and Autonomous Systems. 61Rodney A. Brooks. Elephants don't play chess. Robotics and Autonomous Systems, 6(1):3 -15, 1990. ISSN 0921-8890. URL http://www.sciencedirect.com/science/article/pii/S0921889005800259. Design- ing Autonomous Agents.
. Andrew Steane. Quantum computing. Reports on Progress in Physics. 612117Andrew Steane. Quantum computing. Reports on Progress in Physics, 61(2):117, 1998. URL http: //stacks.iop.org/0034-4885/61/i=2/a=002.
Understanding Machine Learning: From Theory to Algorithms. Shai Shalev, - Shwartz, Shai Ben-David, Cambridge University Press11070571329781107057135New York, NY, USAShai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, New York, NY, USA, 2014. ISBN 1107057132, 9781107057135.
Introduction to Machine Learning. Ethem Alpaydin, The MIT Press2nd edition, 2010. ISBN 026201243X, 9780262012430Ethem Alpaydin. Introduction to Machine Learning. The MIT Press, 2nd edition, 2010. ISBN 026201243X, 9780262012430.
ISBN 0262193981. insideBIGDATA. The exponential growth of data. Richard S Sutton, Andrew G Barto, MIT PressCambridge, MA, USAIntroduction to Reinforcement LearningRichard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition, 1998. ISBN 0262193981. insideBIGDATA. The exponential growth of data. https://insidebigdata.com/2017/02/16/ the-exponential-growth-of-data/, 2017.
Mastering the game of go with deep neural networks and tree search. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, 10.1038/nature169610028-0836Thore Graepel, and Demis Hassabis. 529David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, Jan 2016. ISSN 0028-0836. URL http://dx.doi.org/10.1038/nature16961. Article.
Semi-Supervised Learning. Olivier Chapelle, Bernhard Schölkopf, Alexander Zien, The MIT Press97802625141251st edition, 2010. ISBN 0262514125Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien. Semi-Supervised Learning. The MIT Press, 1st edition, 2010. ISBN 0262514125, 9780262514125.
Universal Artificial Intellegence. Marcus Hutter, 10.1007/b138233SpringerBerlin HeidelbergMarcus Hutter. Universal Artificial Intellegence. Springer Berlin Heidelberg, 2005. URL https://doi.org/ 10.1007/b138233.
Computing machinery and intelligence. A M Turing, One of the most influential papers in the history of the cognitive sciences. A. M. Turing. Computing machinery and intelligence, 1950. URL http://cogprints. org/499/. One of the most influential papers in the history of the cognitive sciences: http://cogsci.umn.edu/millennium/final.html.
A logical calculus of the ideas immanent in nervous activity. S Warren, Walter Mcculloch, Pitts, 10.1007/BF024782591522-9602The bulletin of mathematical biophysics. 54Warren S. McCulloch and Walter Pitts. A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5(4):115-133, Dec 1943. ISSN 1522-9602. URL http://dx.doi.org/ 10.1007/BF02478259.
The Perceptron, a Perceiving and Recognizing Automaton (Project Para). F Rosenblatt, Cornell Aeronautical Laboratory. Cornell Aeronautical LaboratoryReportF. Rosenblatt. The Perceptron, a Perceiving and Recognizing Automaton (Project Para). Report: Cornell Aeronautical Laboratory. Cornell Aeronautical Laboratory, 1957.
Approximation by superpositions of a sigmoidal function. G Cybenko, 10.1007/bf02551274Mathematics of Control, Signals, and Systems. 24G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2(4):303-314, dec 1989. URL https://doi.org/10.1007/bf02551274.
Approximation capabilities of multilayer feedforward networks. Kurt Hornik, 10.1016/0893-6080(91)90009-tNeural Networks. 42Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251-257, jan 1991. URL https://doi.org/10.1016/0893-6080(91)90009-t.
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, Qianli Liao, 10.1007/s11633-017-1054-21751-8520International Journal of Automation and Computing. Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, and Qianli Liao. Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing, Mar 2017. ISSN 1751-8520. URL https://doi.org/10.1007/ s11633-017-1054-2.
The mythos of model interpretability. Z C Lipton, abs/1606.03490CoRRZ. C. Lipton. The mythos of model interpretability. CoRR, abs/1606.03490, 2016. URL http://arxiv.org/ abs/1606.03490.
Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. S Hochreiter, Y Bengio, P Frasconi, J Schmidhuber, A Field Guide to Dynamical Recurrent Neural Networks. S. C. Kremer and J. F. KolenIEEE PressS. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In S. C. Kremer and J. F. Kolen, editors, A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press, 2001.
Exploring strategies for training deep neural networks. Hugo Larochelle, Yoshua Bengio, Jérôme Louradour, Pascal Lamblin, 1532-4435J. Mach. Learn. Res. 10Hugo Larochelle, Yoshua Bengio, Jérôme Louradour, and Pascal Lamblin. Exploring strategies for training deep neural networks. J. Mach. Learn. Res., 10:1-40, June 2009. ISSN 1532-4435. URL http://dl.acm. org/citation.cfm?id=1577069.1577070.
Neural networks and physical systems with emergent collective computational abilities. J J Hopfield, 0027-8424Proc Natl Acad Sci U S A. 798pmidJ. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A, 79(8):2554-2558, Apr 1982. ISSN 0027-8424. URL http://www.ncbi.nlm.nih. gov/pmc/articles/PMC346238/. 6953413[pmid].
Increasing the capacity of a hopfield network without sacrificing functionality. Amos Storkey, 3-540-63631-5Proceedings of the 7th International Conference on Artificial Neural Networks, ICANN '97. the 7th International Conference on Artificial Neural Networks, ICANN '97London, UK, UKSpringer-VerlagAmos Storkey. Increasing the capacity of a hopfield network without sacrificing functionality. In Proceedings of the 7th International Conference on Artificial Neural Networks, ICANN '97, pages 451-456, London, UK, UK, 1997. Springer-Verlag. ISBN 3-540-63631-5. URL http://dl.acm.org/citation.cfm?id= 646257.685557.
Robust exponential memory in hopfield networks. Christopher Hillar, Ngoc M Tran, arXiv:1411.4625Christopher Hillar and Ngoc M. Tran. Robust exponential memory in hopfield networks, 2014, arXiv:1411.4625.
neural" computation of decisions in optimization problems. J J Hopfield, D W Tank, 10.1007/BF003399431432-0770Biological Cybernetics. 523J. J. Hopfield and D. W. Tank. "neural" computation of decisions in optimization problems. Biological Cybernetics, 52(3):141-152, Jul 1985. ISSN 1432-0770. URL http://dx.doi.org/10.1007/BF00339943.
Justifying and generalizing contrastive divergence. Yoshua Bengio, Olivier Delalleau, 10.1162/neco.2008.11-07-6470899-7667Neural Comput. 216Yoshua Bengio and Olivier Delalleau. Justifying and generalizing contrastive divergence. Neural Comput., 21 (6):1601-1621, June 2009. ISSN 0899-7667. URL http://dx.doi.org/10.1162/neco.2008.11-07-647.
Nathan Wiebe, Ashish Kapoor, Krysta M Svore, arXiv:1412.3489Quantum deep learning. Nathan Wiebe, Ashish Kapoor, and Krysta M. Svore. Quantum deep learning, 2014a, arXiv:1412.3489.
Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. M Cover, 0367-7508IEEE Transactions on Electronic Computers, EC. 143M. Cover. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Transactions on Electronic Computers, EC-14(3):326-334, June 1965. ISSN 0367-7508.
Least squares support vector machine classifiers. J A K Suykens, J Vandewalle, 10.1023/A:10186286097421573-773XNeural Processing Letters. 93J.A.K. Suykens and J. Vandewalle. Least squares support vector machine classifiers. Neural Processing Letters, 9(3):293-300, Jun 1999. ISSN 1573-773X. URL https://doi.org/10.1023/A:1018628609742.
Svm versus least squares svm. Jieping Ye, Tao Xiong, Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics. Marina Meila and Xiaotong Shenthe Eleventh International Conference on Artificial Intelligence and StatisticsSan Juan, Puerto Rico2of Proceedings of Machine Learning ResearchJieping Ye and Tao Xiong. Svm versus least squares svm. In Marina Meila and Xiaotong Shen, editors, Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, volume 2 of Proceedings of Machine Learning Research, pages 644-651, San Juan, Puerto Rico, 21-24 Mar 2007. PMLR. URL http://proceedings.mlr.press/v2/ye07a.html.
Random classification noise defeats all convex potential boosters. M Philip, Rocco A Long, Servedio, 10.1007/s10994-009-5165-z1573-0565Machine Learning. 78Philip M. Long and Rocco A. Servedio. Random classification noise defeats all convex potential boost- ers. Machine Learning, 78(3):287-304, 2010. ISSN 1573-0565. URL http://dx.doi.org/10.1007/ s10994-009-5165-z.
Noise tolerance under risk minimization. Naresh Manwani, P S Sastry, arXiv:1109.5231Naresh Manwani and P. S. Sastry. Noise tolerance under risk minimization, 2011, arXiv:1109.5231.
A decision-theoretic generalization of on-line learning and an application to boosting. Yoav Freund, Robert E Schapire, 0022-0000Journal of Computer and System Sciences. 551Yoav Freund and Robert E Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119 -139, 1997. ISSN 0022-0000. URL http://www.sciencedirect.com/science/article/pii/S002200009791504X.
The lack of a priori distinctions between learning algorithms. H David, Wolpert, 10.1162/neco.1996.8.7.1341Neural Computation. 87David H. Wolpert. The lack of a priori distinctions between learning algorithms. Neural Computation, 8 (7):1341-1390, 1996, http://dx.doi.org/10.1162/neco.1996.8.7.1341. URL http://dx.doi.org/10.1162/ neco.1996.8.7.1341.
Treatise on Human Nature. David Hume, Oxford University Press1739David Hume. Treatise on Human Nature. Oxford University Press, 1739.
The problem of induction. John Vickers, No free lunch theorems -discussions and links. Edward N. ZaltaThe Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford Universityspring 2016 editionJohn Vickers. The problem of induction. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, spring 2016 edition, 2016. NFL. No free lunch theorems -discussions and links. http://www.no-free-lunch.org/.
No free lunch versus occam's razor in supervised learning. Tor Lattimore, Marcus Hutter, 10.1007/978-3-642-44958-1_17Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence -Papers from the Ray Solomonoff 85th Memorial Conference. Melbourne, VIC, AustraliaTor Lattimore and Marcus Hutter. No free lunch versus occam's razor in supervised learning. In Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence -Papers from the Ray Solomonoff 85th Memorial Conference, Melbourne, VIC, Australia, November 30 -December 2, 2011, pages 223-235, 2011. URL https://doi.org/10.1007/978-3-642-44958-1_17.
Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Marcus Hutter, Springer-Verlag9783642060526Berlin, HeidelbergMarcus Hutter. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Springer-Verlag, Berlin, Heidelberg, 2010. ISBN 3642060528, 9783642060526.
Universal learning vs. no free lunch results. Shai Ben-David, Nathan Srebro, Ruth Urner, Workshop at NIPS 2011. Shai Ben-David, Nathan Srebro, and Ruth Urner. Universal learning vs. no free lunch results. In Workshop at NIPS 2011, 2011.
A theory of the learnable. G Valiant, http:/doi.acm.org/10.1145/1968.19720001-0782Commun. ACM. 2711G. Valiant. A theory of the learnable. Commun. ACM, 27(11):1134-1142, November 1984. ISSN 0001-0782. URL http://doi.acm.org/10.1145/1968.1972.
The Nature of Statistical Learning Theory. Vladimir N Vapnik, ISBN 0-387-94559-8Springer-Verlag New York, IncNew York, NY, USAVladimir N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag New York, Inc., New York, NY, USA, 1995. ISBN 0-387-94559-8.
Queries and concept learning. Dana Angluin, Machine learning. 24Dana Angluin. Queries and concept learning. Machine learning, 2(4):319-342, 1988.
The strength of weak learnability. Robert E Schapire, 10.1023/A:10226488007600885-6125Mach. Learn. 52Robert E. Schapire. The strength of weak learnability. Mach. Learn., 5(2):197-227, July 1990. ISSN 0885-6125. URL http://dx.doi.org/10.1023/A:1022648800760.
Efficient distribution-free learning of probabilistic concepts. J Michael, Robert E Kearns, Schapire, 10.1016/s0022-0000(05)80062-5Journal of Computer and System Sciences. 483Michael J. Kearns and Robert E. Schapire. Efficient distribution-free learning of probabilistic concepts. Journal of Computer and System Sciences, 48(3):464-497, jun 1994. URL https://doi.org/10.1016/ s0022-0000(05)80062-5.
The learnability of quantum states. Scott Aaronson, 1364-5021Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. 463Scott Aaronson. The learnability of quantum states. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 463(2088):3089-3114, 2007, http://rspa.royalsocietypublishing.org/content/463/2088/3089.full.pdf. ISSN 1364-5021. URL http: //rspa.royalsocietypublishing.org/content/463/2088/3089.
Rademacher and gaussian complexities: Risk bounds and structural results. L Peter, Shahar Bartlett, Mendelson, 1532-4435J. Mach. Learn. Res. 3Peter L. Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. J. Mach. Learn. Res., 3:463-482, March 2003. ISSN 1532-4435. URL http://dl.acm. org/citation.cfm?id=944919.944944.
A Probabilistic Theory of Pattern Recognition. L Devroye, L Györfi, G Lugosi, SpringerL. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1996.
Technical note: Q-learning. J C H Christopher, Peter Watkins, Dayan, 10.1023/A:10226767223151573-0565Machine Learning. 8Christopher J.C.H. Watkins and Peter Dayan. Technical note: Q-learning. Machine Learning, 8(3):279-292, May 1992. ISSN 1573-0565. URL https://doi.org/10.1023/A:1022676722315.
Optimal single-shot strategies for discrimination of quantum measurements. Michal Sedlák, Mário Ziman, https:/link.aps.org/doi/10.1103/PhysRevA.90.052312Phys. Rev. A. 9052312Michal Sedlák and Mário Ziman. Optimal single-shot strategies for discrimination of quantum measurements. Phys. Rev. A, 90:052312, Nov 2014. URL https://link.aps.org/doi/10.1103/PhysRevA.90.052312.
The learnability of unknown quantum measurements. Hao-Chung Cheng, Min-Hsiu Hsieh, Ping-Cheng Yeh, Quantum Information & Computation. 16Hao-Chung Cheng, Min-Hsiu Hsieh, and Ping-Cheng Yeh. The learnability of unknown quantum measure- ments. Quantum Information & Computation, 16(7&8):615-656, 2016. URL http://www.rintonpress. com/xxqic16/qic-16-78/0615-0656.pdf.
Quantum partially observable markov decision processes. Jennifer Barry, Daniel T Barry, Scott Aaronson, https:/link.aps.org/doi/10.1103/PhysRevA.90.032311Phys. Rev. A. 9032311Jennifer Barry, Daniel T. Barry, and Scott Aaronson. Quantum partially observable markov decision processes. Phys. Rev. A, 90:032311, Sep 2014. URL https://link.aps.org/doi/10.1103/PhysRevA.90.032311.
Quantum perceptrons. M Lewenstein, 10.1080/09500349414552331Journal of Modern Optics. 4112M. Lewenstein. Quantum perceptrons. Journal of Modern Optics, 41(12):2491-2501, dec 1994. URL https://doi.org/10.1080/09500349414552331.
On quantum neural computing. Subhash Kak, 0020-0255Information Sciences. 833Subhash Kak. On quantum neural computing. Information Sciences, 83(3):143 -160, 1995. ISSN 0020-0255. URL http://www.sciencedirect.com/science/article/pii/002002559400095S.
Learning DNF over the uniform distribution using a quantum example oracle. H Nader, Jeffrey C Bshouty, Jackson, 10.1137/s0097539795293123SIAM Journal on Computing. 283Appeared in n Computational learning theory (COLT) conference proceedings in 1995Nader H. Bshouty and Jeffrey C. Jackson. Learning DNF over the uniform distribution using a quantum example oracle. SIAM Journal on Computing, 28(3):1136-1153, jan 1998. URL https://doi.org/10. 1137/s0097539795293123. Appeared in n Computational learning theory (COLT) conference proceedings in 1995.
The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Roger Penrose, ISBN 0-19-851973-7Oxford University Press, IncNew York, NY, USARoger Penrose. The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford University Press, Inc., New York, NY, USA, 1989. ISBN 0-19-851973-7.
Quantum effects in neural networks. Hidetoshi Nishimori, Yoshihiko Nonomura, 10.1143/JPSJ.65.3780Journal of the Physical Society of Japan. 6512Hidetoshi Nishimori and Yoshihiko Nonomura. Quantum effects in neural networks. Journal of the Physical Society of Japan, 65(12):3780-3796, 1996, http://dx.doi.org/10.1143/JPSJ.65.3780. URL http: //dx.doi.org/10.1143/JPSJ.65.3780.
Importance of quantum decoherence in brain processes. Max Tegmark, https:/link.aps.org/doi/10.1103/PhysRevE.61.4194Phys. Rev. E. 61Max Tegmark. Importance of quantum decoherence in brain processes. Phys. Rev. E, 61:4194-4206, Apr 2000. URL https://link.aps.org/doi/10.1103/PhysRevE.61.4194.
A quantum dot neural network, 1996. Mitja Peruš. Neural networks as a basis for quantum associative networks. E C Behrman, J Niemel, J E Steck, S R Skinner, Neural Netw. World. 106E.C. Behrman, J. Niemel, J.E. Steck, and S.R. Skinner. A quantum dot neural network, 1996. Mitja Peruš. Neural networks as a basis for quantum associative networks. Neural Netw. World, 10(6): 1001-1013, 2000.
Entanglement in a quantum neural network based on quantum dots. M V Altaisky, N N Zolnikova, N E Kaputkina, V A Krylov, Yu E Lozovik, N S Dattani, 1569-4410Photonics and Nanostructures -Fundamentals and Applications. 24M.V. Altaisky, N.N. Zolnikova, N.E. Kaputkina, V.A. Krylov, Yu E. Lozovik, and N.S. Dattani. Entanglement in a quantum neural network based on quantum dots. Photonics and Nanostructures -Fundamentals and Applications, 24:24 -28, 2017. ISSN 1569-4410. URL http://www.sciencedirect.com/science/ article/pii/S1569441017300317.
The quest for a quantum neural network. Maria Schuld, Ilya Sinayskiy, Francesco Petruccione, 10.1007/s11128-014-0809-8Quantum Information Processing. 13Maria Schuld, Ilya Sinayskiy, and Francesco Petruccione. The quest for a quantum neural network. Quantum Information Processing, 13(11):2567-2586, aug 2014b. URL https://doi.org/10.1007/ s11128-014-0809-8.
A heuristic review of quantum neural networks. Jesse A Garman, Imperial College London, Department of Physics, United KingdomMaster's thesisJesse A. Garman. A heuristic review of quantum neural networks. Master's thesis, Imperial College London, Department of Physics, United Kingdom, 2011.
Quantum algorithms for learning and testing juntas. Alp Atıcı, Rocco A Servedio, 10.1007/s11128-007-0061-66Quantum Information ProcessingAlp Atıcı and Rocco A. Servedio. Quantum algorithms for learning and testing juntas. Quantum Information Processing, 6(5):323-348, sep 2007. URL https://doi.org/10.1007/s11128-007-0061-6.
Quantum learning robust against noise. Andrew W Cross, Graeme Smith, John A Smolin, https:/link.aps.org/doi/10.1103/PhysRevA.92.012327Phys. Rev. A. 9212327Andrew W. Cross, Graeme Smith, and John A. Smolin. Quantum learning robust against noise. Phys. Rev. A, 92:012327, Jul 2015. URL https://link.aps.org/doi/10.1103/PhysRevA.92.012327.
Quantum complexity theory. Ethan Bernstein, Umesh Vazirani, 10.1137/S0097539796300921SIAM Journal on Computing. 265Ethan Bernstein and Umesh Vazirani. Quantum complexity theory. SIAM Journal on Computing, 26 (5):1411-1473, 1997, https://doi.org/10.1137/S0097539796300921. URL https://doi.org/10.1137/ S0097539796300921.
Optimal quantum sample complexity of learning algorithms. Srinivasan Arunachalam, Ronald De, Wolf , arXiv:1607.00932Srinivasan Arunachalam and Ronald de Wolf. Optimal quantum sample complexity of learning algorithms, 2016, arXiv:1607.00932.
Quantum predictive learning and communication complexity with single input. Dmitry Gavinsky, 1533-7146Quantum Info. Comput. 127-8Dmitry Gavinsky. Quantum predictive learning and communication complexity with single input. Quantum Info. Comput., 12(7-8):575-588, July 2012. ISSN 1533-7146. URL http://dl.acm.org/citation.cfm? id=2231016.2231019.
Exponential separation of quantum and classical one-way communication complexity. Ziv Bar-Yossef, T S Jayram, Iordanis Kerenidis, 10.1137/060651835SIAM Journal on Computing. 381Ziv Bar-Yossef, T. S. Jayram, and Iordanis Kerenidis. Exponential separation of quantum and classical one-way communication complexity. SIAM Journal on Computing, 38(1):366-384, jan 2008. URL https://doi.org/10.1137/060651835.
Equivalences and separations between quantum and classical learnability. Rocco A Servedio, Steven J Gortler, 10.1137/s0097539704412910SIAM Journal on Computing. 335Rocco A. Servedio and Steven J. Gortler. Equivalences and separations between quantum and classical learnability. SIAM Journal on Computing, 33(5):1067-1092, jan 2004. URL https://doi.org/10.1137/ s0097539704412910.
An optimal quantum algorithm for the oracle identification problem. Robin Kothari, abs/1311.7685Robin Kothari. An optimal quantum algorithm for the oracle identification problem. CoRR, abs/1311.7685, 2013. URL http://arxiv.org/abs/1311.7685.
Quantum lower bounds by polynomials. Robert Beals, Harry Buhrman, Richard Cleve, Michele Mosca, Ronald De Wolf, http:/doi.acm.org/10.1145/502090.5020970004-5411J. ACM. 484Robert Beals, Harry Buhrman, Richard Cleve, Michele Mosca, and Ronald de Wolf. Quantum lower bounds by polynomials. J. ACM, 48(4):778-797, July 2001. ISSN 0004-5411. URL http://doi.acm.org/10. 1145/502090.502097.
Cryptographic limitations on learning boolean formulae and finite automata. Michael Kearns, Leslie Valiant, http:/doi.acm.org/10.1145/174644.1746470004-5411J. ACM. 411Michael Kearns and Leslie Valiant. Cryptographic limitations on learning boolean formulae and finite automata. J. ACM, 41(1):67-95, January 1994. ISSN 0004-5411. URL http://doi.acm.org/10.1145/ 174644.174647.
Quantum associative memory. Dan Ventura, Tony Martinez, 0020-0255Information Sciences. 1241?4Dan Ventura and Tony Martinez. Quantum associative memory. Information Sciences, 124(1?4): 273 -296, 2000. ISSN 0020-0255. URL http://www.sciencedirect.com/science/article/pii/ S0020025599001012.
Probabilistic quantum memories. C A Trugenberger, 10.1103/physrevlett.87.067901Physical Review Letters. 876C. A. Trugenberger. Probabilistic quantum memories. Physical Review Letters, 87(6), jul 2001. URL https://doi.org/10.1103/physrevlett.87.067901.
Comment on "probabilistic quantum memories. T Brun, H Klauck, A Nayak, M Rötteler, Ch Zalka, 10.1103/physrevlett.91.209801Physical Review Letters. 9120T. Brun, H. Klauck, A. Nayak, M. Rötteler, and Ch. Zalka. Comment on "probabilistic quantum memories". Physical Review Letters, 91(20), nov 2003. URL https://doi.org/10.1103/physrevlett.91.209801.
Trugenberger replies. Carlo A Trugenberger, 10.1103/physrevlett.91.209802Physical Review Letters. 9120Carlo A. Trugenberger. Trugenberger replies:. Physical Review Letters, 91(20), nov 2003. URL https: //doi.org/10.1103/physrevlett.91.209802.
Quantum computing for pattern classification. Maria Schuld, Ilya Sinayskiy, Francesco Petruccione, arXiv:1412.3646Trends in Artificial Intelligence. 8862SpringerLNAIMaria Schuld, Ilya Sinayskiy, and Francesco Petruccione. Quantum computing for pattern classification. Trends in Artificial Intelligence, LNAI 8862, Springer, pages 208-220, 2014c, arXiv:1412.3646.
Quantum learning for neural associative memories. Fuzzy Sets and Systems. G G Rigatos, S G Tzafestas, 0165-0114157G.G. Rigatos and S.G. Tzafestas. Quantum learning for neural associative memories. Fuzzy Sets and Systems, 157(13):1797 -1813, 2006. ISSN 0165-0114. URL http://www.sciencedirect.com/science/ article/pii/S0165011406000923.
Neurodynamics and attractors in quantum associative memories. G G Rigatos, S G Tzafestas, 1069-2509Integr. Comput.-Aided Eng. 143G. G. Rigatos and S. G. Tzafestas. Neurodynamics and attractors in quantum associative memories. Integr. Comput.-Aided Eng., 14(3):225-242, August 2007. ISSN 1069-2509. URL http://dl.acm.org/citation. cfm?id=1367089.1367091.
Quantum pattern recognition with liquid-state nuclear magnetic resonance. Rodion Neigovzen, Jorge L Neves, Rudolf Sollacher, Steffen J Glaser, https:/link.aps.org/doi/10.1103/PhysRevA.79.042321Phys. Rev. A. 7942321Rodion Neigovzen, Jorge L. Neves, Rudolf Sollacher, and Steffen J. Glaser. Quantum pattern recognition with liquid-state nuclear magnetic resonance. Phys. Rev. A, 79:042321, Apr 2009. URL https://link. aps.org/doi/10.1103/PhysRevA.79.042321.
Adiabatic quantum optimization for associative memory recall. Hadayat Seddiqi, Travis S Humble, arXiv:1407.1904Front. Phys. 279Hadayat Seddiqi and Travis S. Humble. Adiabatic quantum optimization for associative memory recall. Front. Phys. 2:79, 2014, arXiv:1407.1904.
Exponential capacity of associative memories under quantum annealing recall. Siddhartha Santra, Omar Shehab, Radhakrishnan Balu, arXiv:1602Siddhartha Santra, Omar Shehab, and Radhakrishnan Balu. Exponential capacity of associative memories under quantum annealing recall, 2016, arXiv:1602.
Noise facilitation in associative memories of exponential capacity. Amin Karbasi, Amir Hesam, Amin Salavati, Lav R Shokrollahi, Varshney, arXiv:1403.3305Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, and Lav R. Varshney. Noise facilitation in associative memories of exponential capacity, 2014, arXiv:1403.3305.
Read the fine print. Scott Aaronson, 10.1038/nphys32721745-2473Nat Phys. 114Scott Aaronson. Read the fine print. Nat Phys, 11(4):291-293, Apr 2015. ISSN 1745-2473. URL http://dx.doi.org/10.1038/nphys3272. Commentary.
Probing for quantum speedup in spin-glass problems with planted solutions. Itay Hen, Joshua Job, Tameem Albash, F Troels, Matthias Rønnow, Daniel A Troyer, Lidar, https:/link.aps.org/doi/10.1103/PhysRevA.92.042325Phys. Rev. A. 9242325Itay Hen, Joshua Job, Tameem Albash, Troels F. Rønnow, Matthias Troyer, and Daniel A. Lidar. Probing for quantum speedup in spin-glass problems with planted solutions. Phys. Rev. A, 92:042325, Oct 2015. URL https://link.aps.org/doi/10.1103/PhysRevA.92.042325.
Training a large scale classifier with the quantum adiabatic algorithm. Hartmut Neven, S Vasil, Geordie Denchev, William G Rose, Macready, arXiv:0912.0779Hartmut Neven, Vasil S. Denchev, Geordie Rose, and William G. Macready. Training a large scale classifier with the quantum adiabatic algorithm, 2009a, arXiv:0912.0779.
The ising model: teaching an old problem new tricks. Zhengbing Bian, Fabian Chudak, William G Macready, Geordie Rose, Zhengbing Bian, Fabian Chudak, William G. Macready, and Geordie Rose. The ising model: teaching an old problem new tricks, 2010.
Training a binary classifier with the quantum adiabatic algorithm. Hartmut Neven, S Vasil, Geordie Denchev, William G Rose, Macready, arXiv:0811.0416Hartmut Neven, Vasil S. Denchev, Geordie Rose, and William G. Macready. Training a binary classifier with the quantum adiabatic algorithm, 2008, arXiv:0811.0416.
Nips 2009 demonstration: Binary classification using hardware implementation of quantum annealing. Harmut Neven, S Vasil, Marshall Denchev, Jiayong Drew-Brook, Zhang, G William, Geordie Macready, Rose, NIPS 2009 demonstration. Harmut Neven, Vasil S Denchev, Marshall Drew-Brook, Jiayong Zhang, William G Macready, and Geordie Rose. Nips 2009 demonstration: Binary classification using hardware implementation of quantum annealing. In NIPS 2009 demonstration, 2009b.
Qboost: Large scale classifier training with adiabatic quantum optimization. H Neven, V S Denchev, G Rose, W G Macready, Proceedings of the Asian Conference on Machine Learning. Steven C. H. Hoi and Wray Buntinethe Asian Conference on Machine LearningSingapore25Singapore Management UniversityH. Neven, V.S. Denchev, G. Rose, and W.G. Macready. Qboost: Large scale classifier training with adiabatic quantum optimization. In Steven C. H. Hoi and Wray Buntine, editors, Proceedings of the Asian Conference on Machine Learning, volume 25 of Proceedings of Machine Learning Research, pages 333-348, Singapore Management University, Singapore, 04-06 Nov 2012. PMLR. URL http: //proceedings.mlr.press/v25/neven12.html.
Robust classification with adiabatic quantum optimization. S Vasil, Nan Denchev, S V N Ding, Hartmut Vishwanathan, Neven, arXiv:1205.1148Vasil S. Denchev, Nan Ding, S. V. N. Vishwanathan, and Hartmut Neven. Robust classification with adiabatic quantum optimization, 2012, arXiv:1205.1148.
S Vasil, Nan Denchev, Shin Ding, S V N Matsushima, Hartmut Vishwanathan, Neven, arXiv:1504.01446Totally corrective boosting with cardinality penalization. Vasil S. Denchev, Nan Ding, Shin Matsushima, S. V. N. Vishwanathan, and Hartmut Neven. Totally corrective boosting with cardinality penalization, 2015, arXiv:1504.01446.
Construction of non-convex polynomial loss functions for training a binary classifier with quantum annealing. Ryan Babbush, Nan Vasil Denchev, Sergei Ding, Hartmut Isakov, Neven, 10.1007/s11128-012-0506-4arXiv:1406.42031573-1332Quantum Information Processing. Kristen L. Pudenz and Daniel A. Lidar12Quantum adiabatic machine learningRyan Babbush, Vasil Denchev, Nan Ding, Sergei Isakov, and Hartmut Neven. Construction of non-convex polynomial loss functions for training a binary classifier with quantum annealing, 2014, arXiv:1406.4203. Kristen L. Pudenz and Daniel A. Lidar. Quantum adiabatic machine learning. Quantum Information Processing, 12(5):2027-2070, 2013. ISSN 1573-1332. URL http://dx.doi.org/10.1007/ s11128-012-0506-4.
Bayesian network structure learning using quantum annealing. B O'gorman, R Babbush, A Perdomo-Ortiz, A Aspuru-Guzik, V Smelyanskiy, 10.1140/epjst/e2015-02349-91951-6401The European Physical Journal Special Topics. 2241B. O'Gorman, R. Babbush, A. Perdomo-Ortiz, A. Aspuru-Guzik, and V. Smelyanskiy. Bayesian network structure learning using quantum annealing. The European Physical Journal Special Topics, 224(1): 163-188, 2015. ISSN 1951-6401. URL http://dx.doi.org/10.1140/epjst/e2015-02349-9.
Application of quantum annealing to training of deep neural networks. H Steven, Maxwell P Adachi, Henderson, arXiv:1510.06356Steven H. Adachi and Maxwell P. Henderson. Application of quantum annealing to training of deep neural networks, 2015, arXiv:1510.06356.
Quantum boltzmann machine. H Mohammad, Evgeny Amin, Jason Andriyash, Bohdan Rolfe, Roger Kulchytskyy, Melko, arXiv:1601.02036Mohammad H. Amin, Evgeny Andriyash, Jason Rolfe, Bohdan Kulchytskyy, and Roger Melko. Quantum boltzmann machine, 2016, arXiv:1601.02036.
. M Lukas, Wolfgang Sieberer, Lechner, arXiv:1708.02533Programmable superpositions of ising configurations. Lukas M. Sieberer and Wolfgang Lechner. Programmable superpositions of ising configurations, 2017, arXiv:1708.02533.
A quantum annealing architecture with all-to-all connectivity from local interactions. Wolfgang Lechner, Philipp Hauke, Peter Zoller, Science Advances. 19Wolfgang Lechner, Philipp Hauke, and Peter Zoller. A quantum annealing architec- ture with all-to-all connectivity from local interactions. Science Advances, 1(9), 2015, http://advances.sciencemag.org/content/1/9/e1500838.full.pdf. URL http://advances.sciencemag. org/content/1/9/e1500838.
Quantum enhanced inference in markov logic networks. Peter Wittek, Christian Gogolin, 10.1038/srep45672Scientific Reports. 745672Peter Wittek and Christian Gogolin. Quantum enhanced inference in markov logic networks. Scientific Reports, 7:45672, apr 2017. URL https://doi.org/10.1038/srep45672.
Markov logic networks. Matthew Richardson, Pedro Domingos, 10.1007/s10994-006-5833-1Machine Learning. 62Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning, 62(1-2):107-136, jan 2006. URL https://doi.org/10.1007/s10994-006-5833-1.
Quantum machine learning with small-scale devices: Implementing a distance-based classifier with a quantum interference circuit. Maria Schuld, Mark Fingerhuth, Francesco Petruccione, arXiv:1703.10793Maria Schuld, Mark Fingerhuth, and Francesco Petruccione. Quantum machine learning with small-scale de- vices: Implementing a distance-based classifier with a quantum interference circuit, 2017, arXiv:1703.10793.
Quantum optimization for training support vector machines. Davide Anguita, Sandro Ridella, Fabio Rivieccio, Rodolfo Zunino, 0893-6080Advances in Neural Networks Research: {IJCNN} '03. 16Davide Anguita, Sandro Ridella, Fabio Rivieccio, and Rodolfo Zunino. Quantum optimization for training support vector machines. Neural Networks, 16(5?6):763 -770, 2003. ISSN 0893-6080. URL http: //www.sciencedirect.com/science/article/pii/S089360800300087X. Advances in Neural Networks Research: {IJCNN} '03.
A Quantum Algorithm for Finding the Minimum. Christoph Durr, Peter Hoyer, quant- ph/9607014Christoph Durr and Peter Hoyer. A Quantum Algorithm for Finding the Minimum, January 1999, quant- ph/9607014. URL http://arxiv.org/abs/quant-ph/9607014.
Quantum speed-up for unsupervised learning. Esma Aïmeur, Gilles Brassard, Sébastien Gambs, 10.1007/s10994-012-5316-51573-0565Machine Learning. 90Esma Aïmeur, Gilles Brassard, and Sébastien Gambs. Quantum speed-up for unsupervised learning. Machine Learning, 90(2):261-287, 2013. ISSN 1573-0565. URL http://dx.doi.org/10.1007/s10994-012-5316-5.
Quantum algorithm for association rules mining. Chao-Hua Yu, Fei Gao, Qing-Le Wang, Qiao-Yan Wen, https:/link.aps.org/doi/10.1103/PhysRevA.94.042311Phys. Rev. A. 9442311Chao-Hua Yu, Fei Gao, Qing-Le Wang, and Qiao-Yan Wen. Quantum algorithm for association rules mining. Phys. Rev. A, 94:042311, Oct 2016. URL https://link.aps.org/doi/10.1103/PhysRevA.94.042311.
Nathan Wiebe, Ashish Kapoor, Krysta M Svore, arXiv:1602.04799Quantum perceptron models. Nathan Wiebe, Ashish Kapoor, and Krysta M Svore. Quantum perceptron models, 2016, arXiv:1602.04799.
Pattern recognition on a quantum computer. Ralf Schützhold, https:/link.aps.org/doi/10.1103/PhysRevA.67.062311Phys. Rev. A. 6762311Ralf Schützhold. Pattern recognition on a quantum computer. Phys. Rev. A, 67:062311, Jun 2003. URL https://link.aps.org/doi/10.1103/PhysRevA.67.062311.
Quantum principal component analysis. Nathan Wiebe, Christopher Granade, 10.1038/nphys3029arXiv:1512.031451745-2473Nat Phys. Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost109Can small quantum systems learn?Nathan Wiebe and Christopher Granade. Can small quantum systems learn?, 2015, arXiv:1512.03145. Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum principal component analysis. Nat Phys, 10(9):631-633, Sep 2014. ISSN 1745-2473. URL http://dx.doi.org/10.1038/nphys3029. Letter.
Quantum fingerprinting. Harry Buhrman, Richard Cleve, John Watrous, Ronald De Wolf, https:/link.aps.org/doi/10.1103/PhysRevLett.87.167902Phys. Rev. Lett. 87167902Harry Buhrman, Richard Cleve, John Watrous, and Ronald de Wolf. Quantum fingerprinting. Phys. Rev. Lett., 87:167902, Sep 2001. URL https://link.aps.org/doi/10.1103/PhysRevLett.87.167902.
Hamiltonian simulation with nearly optimal dependence on all parameters. D W Berry, A M Childs, R Kothari, IEEE 56th Annual Symposium on Foundations of Computer Science. D. W. Berry, A. M. Childs, and R. Kothari. Hamiltonian simulation with nearly optimal dependence on all parameters. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pages 792-809, Oct 2015.
Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. Quantum random access memory. B D Clader, B C Jacobs, C R Sprouse, https:/link.aps.org/doi/10.1103/PhysRevLett.100.160501Phys. Rev. Lett. 110160501Phys. Rev. Lett.B. D. Clader, B. C. Jacobs, and C. R. Sprouse. Preconditioned quantum linear system algorithm. Phys. Rev. Lett., 110:250504, Jun 2013. URL https://link.aps.org/doi/10.1103/PhysRevLett.110.250504. Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. Quantum random access memory. Phys. Rev. Lett., 100:160501, Apr 2008. URL https://link.aps.org/doi/10.1103/PhysRevLett.100.160501.
Quantum machine learning over infinite dimensions. Hoi-Kwan, Raphael Lau, George Pooser, Christian Siopsis, Weedbrook, https:/link.aps.org/doi/10.1103/PhysRevLett.118.080501Phys. Rev. Lett. 11880501Hoi-Kwan Lau, Raphael Pooser, George Siopsis, and Christian Weedbrook. Quantum machine learning over infinite dimensions. Phys. Rev. Lett., 118:080501, Feb 2017. URL https://link.aps.org/doi/10.1103/ PhysRevLett.118.080501.
Quantum algorithm for data fitting. Nathan Wiebe, Daniel Braun, Seth Lloyd, https:/link.aps.org/doi/10.1103/PhysRevLett.109.050505Phys. Rev. Lett. 10950505Nathan Wiebe, Daniel Braun, and Seth Lloyd. Quantum algorithm for data fitting. Phys. Rev. Lett., 109: 050505, Aug 2012. URL https://link.aps.org/doi/10.1103/PhysRevLett.109.050505.
New quantum algorithm for linear regression. Guoming Wang, arXiv:1402.0660Guoming Wang. New quantum algorithm for linear regression, 2014, arXiv:1402.0660.
Hamiltonian simulation by qubitization. Hao Guang, Isaac L Low, Chuang, arXiv:1610.06546Guang Hao Low and Isaac L. Chuang. Hamiltonian simulation by qubitization, 2016, arXiv:1610.06546.
Prediction by linear regression on a quantum computer. Maria Schuld, Ilya Sinayskiy, Francesco Petruccione, https:/link.aps.org/doi/10.1103/PhysRevA.94.022342Phys. Rev. A. 9422342Maria Schuld, Ilya Sinayskiy, and Francesco Petruccione. Prediction by linear regression on a quantum computer. Phys. Rev. A, 94:022342, Aug 2016. URL https://link.aps.org/doi/10.1103/PhysRevA. 94.022342.
Quantum algorithms for supervised and unsupervised machine learning. Seth Lloyd, Masoud Mohseni, Patrick Rebentrost, ; Rebentrost, arXiv:411Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum algorithms for supervised and unsupervised machine learning, 2013, arXiv:(Rebentrost et al., 2014)1307.0411.
Quantum algorithms for nearest-neighbor methods for supervised and unsupervised learning. Nathan Wiebe, Ashish Kapoor, Krysta M Svore, 1533-7146Quantum Info. Comput. 153-4Nathan Wiebe, Ashish Kapoor, and Krysta M. Svore. Quantum algorithms for nearest-neighbor methods for supervised and unsupervised learning. Quantum Info. Comput., 15(3-4):316-356, March 2015. ISSN 1533-7146. URL http://dl.acm.org/citation.cfm?id=2871393.2871400.
Quantum support vector machine for big data classification. Patrick Rebentrost, Masoud Mohseni, Seth Lloyd, https:/link.aps.org/doi/10.1103/PhysRevLett.113.130503Phys. Rev. Lett. 113130503Patrick Rebentrost, Masoud Mohseni, and Seth Lloyd. Quantum support vector machine for big data classification. Phys. Rev. Lett., 113:130503, Sep 2014. URL https://link.aps.org/doi/10.1103/ PhysRevLett.113.130503.
Quantum assisted gaussian process regression. Zhikuan Zhao, Jack K Fitzsimons, Joseph F Fitzsimons, arXiv:1512.03929Zhikuan Zhao, Jack K. Fitzsimons, and Joseph F. Fitzsimons. Quantum assisted gaussian process regression, 2015, arXiv:1512.03929.
Quantum algorithms for topological and geometric analysis of data. Seth Lloyd, Silvano Garnerone, Paolo Zanardi, 10.1038/ncomms10138Nature Communications. 710138Seth Lloyd, Silvano Garnerone, and Paolo Zanardi. Quantum algorithms for topological and geometric analysis of data. Nature Communications, 7:10138, jan 2016. URL https://doi.org/10.1038/ncomms10138.
Quantum gradient descent and newton's method for constrained polynomial optimization. Patrick Rebentrost, Maria Schuld, Leonard Wossnig, Francesco Petruccione, Seth Lloyd, arXiv:1612.01789Patrick Rebentrost, Maria Schuld, Leonard Wossnig, Francesco Petruccione, and Seth Lloyd. Quantum gradient descent and newton's method for constrained polynomial optimization, 2016b, arXiv:1612.01789.
Quantum gradient descent for linear systems and least squares. Iordanis Kerenidis, Anupam Prakash, arXiv:1704.04992Iordanis Kerenidis and Anupam Prakash. Quantum gradient descent for linear systems and least squares, 2017, arXiv:1704.04992.
The epoch-greedy algorithm for multi-armed bandits with side information. John Langford, Tong Zhang, Advances in Neural Information Processing Systems. J. C. Platt, D. Koller, Y. Singer, and S. T. RoweisCurran Associates, Inc20John Langford and Tong Zhang. The epoch-greedy algorithm for multi-armed bandits with side informa- tion. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 817-824. Curran Associates, Inc., 2008. URL http://papers.nips.cc/ paper/3178-the-epoch-greedy-algorithm-for-multi-armed-bandits-with-side-information.pdf.
Projective simulation applied to the grid-world and the mountain-car problem. Alexey A Melnikov, Adi Makmal, Hans J Briegel, arXiv:1405.5459Alexey A. Melnikov, Adi Makmal, and Hans J. Briegel. Projective simulation applied to the grid-world and the mountain-car problem, 2014, arXiv:1405.5459.
Projective simulation for classical learning agents: A comprehensive investigation. Julian Mautner, Adi Makmal, Daniel Manzano, Markus Tiersch, Hans J Briegel, 10.1007/s00354-015-0102-01882-7055New Generation Computing. 331Julian Mautner, Adi Makmal, Daniel Manzano, Markus Tiersch, and Hans J. Briegel. Projective simulation for classical learning agents: A comprehensive investigation. New Generation Computing, 33(1):69-114, Jan 2015. ISSN 1882-7055. URL http://dx.doi.org/10.1007/s00354-015-0102-0.
Projective simulation with generalization. CoRR, abs/1504.02247. Alexey A Melnikov, Adi Makmal, Vedran Dunjko, Hans-J Briegel, Alexey A. Melnikov, Adi Makmal, Vedran Dunjko, and Hans-J. Briegel. Projective simulation with generalization. CoRR, abs/1504.02247, 2015. URL http://arxiv.org/abs/1504.02247.
arXiv:physics/0011058v1 [physics.chem-ph] 23 Nov 2000

Generalized Heitler-London Theory for H 3 : A Comparison of the Surface Integral Method with Perturbation Theory

Tanja I. Sachse and Ulrich Kleinekathöfer

Max-Planck-Institut für Strömungsforschung, Bunsenstr. 10, D-37073 Göttingen, Germany
Institut für Physik, Technische Universität Chemnitz, D-09107 Chemnitz, Germany
The generalized Heitler-London (GHL) theory provides a straightforward way to express the potential energy surface of H 3 in terms of Coulomb and exchange energies which can be calculated either by perturbation theory or using the surface integral method (SIM). By applying the Rayleigh-Schrödinger perturbation theory, GHL theory for the quartet spin state of H 3 is shown to yield results equivalent to the symmetrized Rayleigh-Schrödinger version of symmetry adapted perturbation theory (SAPT). This equivalence allows a comparison with the corresponding results obtained by the surface integral method. The surface integral result calculated with a product of atomic wave functions is found to have certain advantages over the perturbation approach.
I. INTRODUCTION
The generalized Heitler-London (GHL) theory provides a useful framework to calculate the potential energy surfaces for polyatomic systems [1][2][3][4]. Since the potential energy is expressed in terms of Coulomb and exchange energies it is possible to systematically separate out many-body effects in every single term contributing to the potential energy. In this paper some aspects of the three-body exchange effects occurring in H 3 are examined in more detail.
Axilrod, Teller and Muto [5] were the first to suggest a formula describing the leading long range three-body dispersion term for three spherically symmetric atoms. Since then the non-additive effects have been intensively studied and several review articles have been published [6][7][8]. In the GHL approach the potentials can be decomposed into Coulomb and exchange energies, whereas in symmetry adapted perturbation theory (SAPT) these interactions are expressed in terms of Coulomb and exchange integrals in the manner first introduced by Heitler and London. Recently, SAPT was formulated for the interactions of trimers [9] and has been applied in numerical calculations up to third order for the quartet spin state of H 3 [10] and for the helium trimer [11]. Other three-body calculations for H 3 are based on Heitler-London type calculations [12] and on perturbation calculations making use of Unsöld approximations [13]. In the former, the splitting into Coulomb and exchange parts is, as pointed out by the author himself, not completely rigorous.
In a previous paper [3] analytical results were reported for the doublet as well as for the quartet spin state for the H 3 system based on the GHL theory. Two kinds of exchange energies appear: cyclic exchange energies, where all three electrons are involved, and twobody exchange energies in the presence of the respective third atom. The cyclic exchange energy of three hydrogen and three helium atoms [14] was calculated using the surface integral method (SIM) which was previously applied to two atoms [1,2,4,[15][16][17]. In a forthcoming paper [18] it will be demonstrated that all exchange energies occurring in the H 3 -system can be calculated either by the surface integral method or by using perturbation theory, and the corresponding results for the implicit three-body effect on the two-body exchange energies will be derived and compared.
For H 2 it was previously shown that SAPT and GHL are equivalent [19]. The purpose of this paper is to compare the surface integral method calculations of the three-body effects in the exchange energies based on an atomic product wave function with the results of first to third order of SAPT which are only available for the quartet spin state of H 3 [10]. In order to perform this comparison it is necessary to first prove that the SAPT and GHL theory expressions for the energy of the quartet state are equivalent. The results reveal that with the zeroth order wave function the surface integral result contains parts of the second order SAPT result and is therefore more efficient.
In Sections II and III the basic ideas of the GHL theory and polarization approximation are described. In Section IV the equivalence of the GHL and the symmetrized Rayleigh-Schrödinger (SRS) theories is demonstrated order by order. The latter is designated a weak symmetry forcing SAPT. Section V reviews the surface integral method (SIM). Thereafter in Section VI the advantages of SIM over the perturbation approach will be demonstrated by comparing the numerical results of perturbation theory and SIM.
II. GENERALIZED HEITLER-LONDON THEORY FOR H 3
The application of generalized Heitler-London theory to H 3 was previously discussed in Ref. [3]. The generalized Heitler-London equation is given by
$$\hat{H}F = \sum_g \epsilon_g \, \hat{T}(g) F, \qquad (1)$$
where F is the localized, i.e. non-symmetrized, wave function, $\hat{T}(g)$ designates a permutation operator for the electron coordinates, and $\epsilon_g$ stands for the Coulomb ($g = I$) and exchange ($g \neq I$) energies. Applying results from the theory of the symmetric group, the energy eigenvalues of the Hamiltonian can be derived. For the H 3 system, the result for the two doublet states is
$$^{1/2}E_{\rm GHL} = \epsilon_I - \epsilon_{123} \pm \Big\{ \tfrac{1}{2} \big[ (\epsilon_{12} - \epsilon_{23})^2 + (\epsilon_{23} - \epsilon_{13})^2 + (\epsilon_{13} - \epsilon_{12})^2 \big] \Big\}^{1/2} \qquad (2)$$
and for the quartet state
$$^{3/2}E_{\rm GHL} = \epsilon_I - \epsilon_{12} - \epsilon_{23} - \epsilon_{13} + 2\epsilon_{123}. \qquad (3)$$
The remainder of this paper will be concerned only with the quartet state.
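Eqs. (2) and (3) are plain algebraic combinations of the Coulomb and exchange energies, which a short numerical sketch makes concrete (all energy values below are hypothetical placeholders, not results from the paper); note that for an equilateral geometry, where the three pair exchange energies coincide, the square root in Eq. (2) vanishes and the two doublet surfaces touch:

```python
import math

def doublet_energies(eI, e12, e23, e13, e123):
    """GHL doublet energies, Eq. (2)."""
    spread = math.sqrt(0.5 * ((e12 - e23) ** 2
                              + (e23 - e13) ** 2
                              + (e13 - e12) ** 2))
    return eI - e123 + spread, eI - e123 - spread

def quartet_energy(eI, e12, e23, e13, e123):
    """GHL quartet energy, Eq. (3), with epsilon_123 = epsilon_132."""
    return eI - e12 - e23 - e13 + 2.0 * e123

# Hypothetical energies (hartree) for an equilateral geometry:
# all pair exchange energies are equal by symmetry.
eI, e_pair, e_cyc = -1.5, -0.02, -0.001
d_up, d_down = doublet_energies(eI, e_pair, e_pair, e_pair, e_cyc)
q = quartet_energy(eI, e_pair, e_pair, e_pair, e_cyc)
```

The degeneracy of the two doublet roots at equilateral geometries is exactly the conical intersection of the two lowest H 3 surfaces.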
III. POLARIZATION APPROXIMATION AND GENERALIZED HEITLER-LONDON (GHL) THEORY
The Born-Oppenheimer non-relativistic Hamiltonian of the three-body system is given by
$$\hat{H} = \hat{H}_0 + \hat{V} \qquad (4)$$
with
$$\hat{H}_0 = \hat{H}^0_A + \hat{H}^0_B + \hat{H}^0_C \qquad (5)$$
$$\hat{V} = \hat{V}_{AB} + \hat{V}_{BC} + \hat{V}_{AC}, \qquad (6)$$
where $\hat{H}^0_A$, $\hat{H}^0_B$ and $\hat{H}^0_C$ are the Hamiltonians of three free hydrogen atoms and $\hat{V}_{AB}$, $\hat{V}_{BC}$ and $\hat{V}_{AC}$ describe the interactions between atoms A and B, B and C, as well as A and C, respectively. The polarization approximation [20] is based on the equation
$$\hat{H}F = E_p F, \qquad (7)$$
where the polarization wave function F and the polarization energy E p can be written as perturbation series
$$F = \sum_n \phi_n, \qquad (8)$$
$$E_p = \sum_n \epsilon_n. \qquad (9)$$
The zeroth order polarization wave function $\phi_0$ is the eigenfunction of the free Hamiltonian $\hat{H}_0$ and thus is a product of three free hydrogen wave functions. Starting from the GHL equation with F chosen as the polarization wave function, Eq. (1) together with the Hamiltonian Eq. (4) can be written as
$$(\hat{H}_0 + \hat{V}) \sum_{n=0}^{N} \phi_n = \sum_g \epsilon_g \, \hat{T}(g) \sum_{n=0}^{N} \phi_n. \qquad (10)$$
Forming scalar products with $\hat{T}(g)\phi_0$ for each group element g,
$$\Big( \hat{T}(g)\phi_0,\ (\hat{H}_0 + \hat{V}) \sum_{n=0} \phi_n \Big) = \sum_{g'} \epsilon_{g'} \Big( \hat{T}(g)\phi_0,\ \sum_{n=0} \hat{T}(g')\phi_n \Big), \qquad (11)$$
a system of linear equations can be derived for the Coulomb energy $\epsilon_I$ as well as for the exchange energies $\epsilon_g$ ($g \neq I$) in terms of Coulomb integrals J, exchange integrals $K_g$, and overlap integrals $S_g$:
$$E_0 + J \approx \epsilon_I + \sum_{g' \neq g} \epsilon_{g'} S_{g'^{-1}}, \qquad g = I$$
$$E_0 S_g + K_g \approx \epsilon_g + \sum_{g' \neq g} \epsilon_{g'} S_{g'^{-1} g}, \qquad g \neq I. \qquad (12)$$
The following notation for the nth order overlap, Coulomb and exchange integrals was used:
$$S_g := \sum_{n=0}^{M} S^n_g \qquad (13)$$
$$J := \sum_{n=0}^{M} J^n \qquad (14)$$
$$K_g := \sum_{n=0}^{M} K^n_g = \sum_{n=1}^{M} K^n_g, \qquad (15)$$
where
$$S^n_g := (\hat{T}(g)\phi_0, \phi_n) \qquad (16)$$
$$J^n := (\phi_0, \hat{V}\phi_{n-1}) \qquad (17)$$
$$J^0 = E_0 \qquad (18)$$
$$K^n_g := (\phi_0, \hat{V}\hat{T}(g^{-1})\phi_{n-1}). \qquad (19)$$
The equalities $S^n_{g^{-1}} = S^n_g$ and $K^n_{g^{-1}} = K^n_g$ hold. In Ref. [18] it will be shown how the Coulomb and exchange energies can be expressed in terms of Coulomb, exchange and overlap integrals and how the order-by-order contributions to the Coulomb and exchange energies can be found.
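The structure of the linear system Eq. (12) can be illustrated with a small sketch: index the unknown energies by the six elements of the symmetric group S3, build the coefficient matrix of overlaps $S_{g'^{-1}g}$ (equal to one on the diagonal, since $S_I = 1$), and solve. All integral values below are made-up placeholders, and using only zeroth order overlaps is a simplification:

```python
import numpy as np

# S3 as permutations of (1, 2, 3); p[i] is the image of i+1.
I, t12, t23, t13 = (1, 2, 3), (2, 1, 3), (1, 3, 2), (3, 2, 1)
c123, c132 = (2, 3, 1), (3, 1, 2)
G = [I, t12, t23, t13, c123, c132]

def mul(p, q):
    """Composition (p o q)(i) = p(q(i))."""
    return tuple(p[q[i] - 1] for i in range(3))

def inv(p):
    out = [0, 0, 0]
    for i in range(3):
        out[p[i] - 1] = i + 1
    return tuple(out)

# Hypothetical zeroth order integrals (placeholders, not computed
# from wave functions).  S_{123} = S_{132} for real wave functions.
E0, J = -1.5, -0.08
S = {I: 1.0, t12: 0.10, t23: 0.12, t13: 0.09, c123: 0.02, c132: 0.02}
K = {t12: -0.030, t23: -0.035, t13: -0.028, c123: -0.004, c132: -0.004}

# Eq. (12): row g has matrix elements S_{g'^-1 g} and right-hand side
# E0 + J for g = I, and E0*S_g + K_g otherwise.
A = np.array([[S[mul(inv(gp), g)] for gp in G] for g in G])
b = np.array([E0 + J] + [E0 * S[g] + K[g] for g in G[1:]])
eps = np.linalg.solve(A, b)  # eps[0] = Coulomb, rest = exchange energies
```

The same six energies can then be combined into the doublet and quartet surfaces via Eqs. (2) and (3).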
The convergence properties of the polarization theory have been extensively discussed for the case of two hydrogen atoms [21]. For low orders it was shown that the perturbation series rapidly converges to the Coulomb energy [19,[21][22][23] though this is not the limit for the infinite order expansion. It is assumed that the behavior of this perturbation theory for a system of two atoms also roughly holds in the case of three atoms [9,10]. Since here we are only interested in low orders, especially the first, this expected behavior justifies approximating the localized wave function via the polarization approximation for three hydrogen atoms as well.
IV. EQUIVALENCE OF THE GHL AND SRS THEORY FOR QUARTET H 3
In this section the order-by-order equivalence of the complete energy expressions obtained by using either the GHL or the SRS theory will be demonstrated. Both the GHL and SRS theories start with the Hamiltonian Eq. (4) and a zeroth order wave function which is a product of three free hydrogen atom wave functions. To demonstrate the equivalence of the first order expressions the first order SRS term will be expressed in terms of Coulomb and exchange energies. In Eq. (12) of Ref. [10] this term is given by
$$^{3/2}E^1_{\rm SRS} = N_0 \left\langle \psi_0 \left| \hat{V} \big( 1 - \hat{T}(12) - \hat{T}(23) - \hat{T}(13) + \hat{T}(123) + \hat{T}(132) \big) \right| \psi_0 \right\rangle, \qquad (20)$$
which can be expressed with Eqs. (16) to (19) as
$$^{3/2}E^1_{\rm SRS} = N_0 \big( J^1 - K^1_{12} - K^1_{23} - K^1_{13} + K^1_{123} + K^1_{132} \big), \qquad (21)$$
where
$$N_0 = \big( 1 - S^0_{12} - S^0_{23} - S^0_{13} + S^0_{123} + S^0_{132} \big)^{-1}. \qquad (22)$$
With Eq. (12) it is possible to express the first order contributions as
$$J^1 = \epsilon^1_I + \epsilon^1_{12} S^0_{12} + \epsilon^1_{23} S^0_{23} + \epsilon^1_{13} S^0_{13} + \epsilon^1_{123} S^0_{123} + \epsilon^1_{132} S^0_{123} \qquad (23)$$
$$K^1_{12} = \epsilon^1_{12} + \epsilon^1_I S^0_{12} + \epsilon^1_{23} S^0_{123} + \epsilon^1_{13} S^0_{123} + \epsilon^1_{123} S^0_{23} + \epsilon^1_{132} S^0_{13} \qquad (24)$$
$$K^1_{23} = \epsilon^1_{23} + \epsilon^1_I S^0_{23} + \epsilon^1_{12} S^0_{123} + \epsilon^1_{13} S^0_{123} + \epsilon^1_{123} S^0_{13} + \epsilon^1_{132} S^0_{12} \qquad (25)$$
$$K^1_{13} = \epsilon^1_{13} + \epsilon^1_I S^0_{13} + \epsilon^1_{12} S^0_{123} + \epsilon^1_{23} S^0_{123} + \epsilon^1_{123} S^0_{12} + \epsilon^1_{132} S^0_{23} \qquad (26)$$
$$K^1_{123} = \epsilon^1_{123} + \epsilon^1_I S^0_{123} + \epsilon^1_{12} S^0_{23} + \epsilon^1_{23} S^0_{13} + \epsilon^1_{13} S^0_{12} + \epsilon^1_{132} S^0_{123} \qquad (27)$$
$$K^1_{132} = \epsilon^1_{132} + \epsilon^1_I S^0_{123} + \epsilon^1_{12} S^0_{13} + \epsilon^1_{23} S^0_{12} + \epsilon^1_{13} S^0_{23} + \epsilon^1_{123} S^0_{123} \qquad (28)$$
On inserting into Eq. (21) many terms cancel and Eq. (21) is equivalent to the first order contribution to Eq. (3)
$$^{3/2}E^1_{\rm SRS} = N_0 \big( J^1 - K^1_{12} - K^1_{23} - K^1_{13} + K^1_{123} + K^1_{132} \big) = \epsilon^1_I - \epsilon^1_{12} - \epsilon^1_{23} - \epsilon^1_{13} + \epsilon^1_{123} + \epsilon^1_{132} = {}^{3/2}E^1_{\rm GHL}. \qquad (29)$$
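Since the cancellation that leads from Eq. (21) to Eq. (29) is purely algebraic, it holds for arbitrary values of the zeroth order overlaps and first order energies; a short sketch with random numbers (taking $S^0_{123} = S^0_{132}$, as in Eqs. (23)-(28)) confirms it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random zeroth order overlaps and first order Coulomb/exchange energies.
s12, s23, s13, s123 = rng.uniform(0.0, 0.2, size=4)
eI, e12, e23, e13, e123, e132 = rng.normal(size=6)

# First order integrals built from the energies, Eqs. (23)-(28).
J1   = eI + e12 * s12 + e23 * s23 + e13 * s13 + e123 * s123 + e132 * s123
K12  = e12 + eI * s12 + e23 * s123 + e13 * s123 + e123 * s23 + e132 * s13
K23  = e23 + eI * s23 + e12 * s123 + e13 * s123 + e123 * s13 + e132 * s12
K13  = e13 + eI * s13 + e12 * s123 + e23 * s123 + e123 * s12 + e132 * s23
K123 = e123 + eI * s123 + e12 * s23 + e23 * s13 + e13 * s12 + e132 * s123
K132 = e132 + eI * s123 + e12 * s13 + e23 * s12 + e13 * s23 + e123 * s123

N0 = 1.0 / (1.0 - s12 - s23 - s13 + 2.0 * s123)     # Eq. (22)
E_srs = N0 * (J1 - K12 - K23 - K13 + K123 + K132)   # Eq. (21)
E_ghl = eI - e12 - e23 - e13 + e123 + e132          # Eq. (29)
```

Collecting the coefficient of each energy shows that the bracket in Eq. (21) is exactly $N_0^{-1}$ times the GHL combination, so the agreement is exact, not merely numerical.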
The rest of the proof will be done by complete induction. The claim of the induction is the equivalence of the GHL and SRS energy expressions up to nth order. From Eq. (12) of [10] the general nth-order expression for the interaction energy in SRS theory is found to be
$$^{3/2}E^n_{\rm SRS} = N_0 \Big[ \big\langle \psi_0 \big| \hat{V} \big( 1 - \hat{T}(12) - \hat{T}(23) - \hat{T}(13) + \hat{T}(123) + \hat{T}(132) \big) \big| \psi^{(n-1)}_{\rm pol} \big\rangle - \sum_{k=1}^{n-1} {}^{3/2}E^k_{\rm SRS} \big\langle \psi_0 \big| \big( 1 - \hat{T}(12) - \hat{T}(23) - \hat{T}(13) + \hat{T}(123) + \hat{T}(132) \big) \big| \psi^{(n-k)}_{\rm pol} \big\rangle \Big]$$
$$= N_0 \Big[ J^n - K^n_{12} - K^n_{23} - K^n_{13} + K^n_{123} + K^n_{132} - \sum_{k=1}^{n-1} {}^{3/2}E^k_{\rm SRS} \big( -S^{n-k}_{12} - S^{n-k}_{23} - S^{n-k}_{13} + S^{n-k}_{123} + S^{n-k}_{132} \big) \Big], \qquad (30)$$
where N 0 is given by Eq. (22). Thus it is necessary to prove that
$$^{3/2}E^n_{\rm GHL} = \epsilon^n_I - \epsilon^n_{12} - \epsilon^n_{23} - \epsilon^n_{13} + \epsilon^n_{123} + \epsilon^n_{132} \qquad (31)$$
$$= {}^{3/2}E^n_{\rm SRS}. \qquad (32)$$
To perform a proof by induction it is necessary to show that the (n+1)st order terms of both theories are also equal. To do so, the (n+1)st order of GHL theory is expressed in terms of the quantities occurring in SRS theory. This can be achieved by inserting the solutions of the set of linear equations Eq. (12) into the complete GHL energy for the H 3 quartet state [24]
$$^{3/2}E_{\rm GHL} = \epsilon_I - \epsilon_{12} - \epsilon_{23} - \epsilon_{13} + \epsilon_{123} + \epsilon_{132} \qquad (33)$$
$$\approx \sum_{n=0}^{M} {}^{3/2}E^n_{\rm GHL} = \sum_{n=0}^{M} \big( \epsilon^n_I - \epsilon^n_{12} - \epsilon^n_{23} - \epsilon^n_{13} + \epsilon^n_{123} + \epsilon^n_{132} \big) = \big( E_0 + J - K_{12} - K_{23} - K_{13} + K_{123} + K_{132} \big) \big( 1 - S_{12} - S_{23} - S_{13} + S_{123} + S_{132} \big)^{-1}, \qquad (34)$$
where J, K g , and S g have been defined in Eqs. (13) to (15). To find the expression for the (n + 1)st order contribution to the energy of the quartet state, the left hand side is first multiplied by the denominator
$$\sum_{n=0}^{M} {}^{3/2}E^n_{\rm GHL} \Big[ 1 - \sum_{n=0}^{M} \big( S^n_{12} + S^n_{23} + S^n_{13} \big) + \sum_{n=0}^{M} \big( S^n_{123} + S^n_{132} \big) \Big] = E_0 \Big[ 1 - \sum_{n=0}^{M} \big( S^n_{12} + S^n_{23} + S^n_{13} \big) + \sum_{n=0}^{M} \big( S^n_{123} + S^n_{132} \big) \Big] + \sum_{n=0}^{M} \big[ J^n - K^n_{12} - K^n_{23} - K^n_{13} + K^n_{123} + K^n_{132} \big]. \qquad (35)$$
Collecting terms of (n + 1)st order leads to
$$^{3/2}E^{n+1}_{\rm GHL} \big( 1 - S^0_{12} - S^0_{23} - S^0_{13} + S^0_{123} + S^0_{132} \big) = J^{n+1} - K^{n+1}_{12} - K^{n+1}_{23} - K^{n+1}_{13} + K^{n+1}_{123} + K^{n+1}_{132} + E_0 \big( -S^{n+1}_{12} - S^{n+1}_{23} - S^{n+1}_{13} + S^{n+1}_{123} + S^{n+1}_{132} \big) - \sum_{k=0}^{n} {}^{3/2}E^k_{\rm GHL} \big( -S^{n+1-k}_{12} - S^{n+1-k}_{23} - S^{n+1-k}_{13} + S^{n+1-k}_{123} + S^{n+1-k}_{132} \big), \qquad (36)$$
with the result that
$$^{3/2}E^{n+1}_{\rm GHL} = N_0 \Big[ J^{n+1} - K^{n+1}_{12} - K^{n+1}_{23} - K^{n+1}_{13} + K^{n+1}_{123} + K^{n+1}_{132} - \sum_{k=1}^{n} {}^{3/2}E^k_{\rm GHL} \big( -S^{n+1-k}_{12} - S^{n+1-k}_{23} - S^{n+1-k}_{13} + S^{n+1-k}_{123} + S^{n+1-k}_{132} \big) \Big]. \qquad (37)$$
Using the claim of the induction, which states that the GHL and SRS terms are equal for all orders $1, \ldots, n$, the term $^{3/2}E^k_{\rm GHL}$ in the last line can be replaced by $^{3/2}E^k_{\rm SRS}$. Thus Eq. (37) can be transformed into
$$^{3/2}E^{n+1}_{\rm GHL} = N_0 \Big[ J^{n+1} - K^{n+1}_{12} - K^{n+1}_{23} - K^{n+1}_{13} + K^{n+1}_{123} + K^{n+1}_{132} - \sum_{k=1}^{n} {}^{3/2}E^k_{\rm SRS} \big( -S^{n+1-k}_{12} - S^{n+1-k}_{23} - S^{n+1-k}_{13} + S^{n+1-k}_{123} + S^{n+1-k}_{132} \big) \Big] \qquad (38)$$
$$= {}^{3/2}E^{n+1}_{\rm SRS} \qquad (39)$$
and the equality also holds for the (n + 1)st order. Thus the contributions to the energy of the H 3 -quartet state in the SRS and GHL theories are equal order by order.
One advantage of the GHL theory is that it permits the calculation of the exchange energies by other methods, such as the surface integral method. In Ref. [10], the nonadditive energy terms of the quartet spin state of H 3 have been calculated up to third order. The first order terms can be split into a polarization and an exchange part. Since the first order polarization energy is pairwise additive, the only non-additive term in first order is contained in the exchange term which in Eqs. (23) and (55) of Ref. [9] is given by
$$E^1_{\rm exch}(3,3) = \big\langle \psi_0 \big| \hat{V}_{AB} \big[ \hat{T}(23) + \hat{T}(13) + \hat{T}(123) + \hat{T}(132) - S^0_{23} - S^0_{13} - S^0_{123} - S^0_{132} \big] \big| \psi_0 \big\rangle + \big\langle \psi_0 \big| \hat{V}_{BC} \big[ \hat{T}(12) + \hat{T}(13) + \hat{T}(123) + \hat{T}(132) - S^0_{12} - S^0_{13} - S^0_{123} - S^0_{132} \big] \big| \psi_0 \big\rangle + \big\langle \psi_0 \big| \hat{V}_{AC} \big[ \hat{T}(12) + \hat{T}(23) + \hat{T}(123) + \hat{T}(132) - S^0_{12} - S^0_{23} - S^0_{123} - S^0_{132} \big] \big| \psi_0 \big\rangle, \qquad (40)$$
which can be expressed in terms of exchange energies as
$$E^1_{\rm exch}(3,3) = \epsilon^1_{123} \big( 1 - S^0_{123} \big) - \big( \epsilon^1_{12} - \epsilon^{H_2,1}_{12} \big) \big( 1 + S^0_{12} \big) - \big( \epsilon^1_{23} - \epsilon^{H_2,1}_{23} \big) \big( 1 + S^0_{23} \big) - \big( \epsilon^1_{13} - \epsilon^{H_2,1}_{13} \big) \big( 1 + S^0_{13} \big). \qquad (41)$$
This term is also obtained if the pure two-body contributions are subtracted from Eq. (29).
V. SURFACE INTEGRAL METHOD (SIM) FOR THE CALCULATION OF EXCHANGE ENERGIES
As shown in Refs. [14] and [18], all exchange energies occurring in the GHL description of the H 3 system, i.e. the two-body as well as the cyclic exchange energies, can be calculated by the surface integral method (SIM). The exchange energy $\varepsilon_{g_0}$ associated with an arbitrary group element $g_0 \neq I$ is given accordingly by
$$\varepsilon_{g_0} = \Big[ \int_V dv\, \big( F^2 - (\hat{T}(g_0)F)^2 \big) \Big]^{-1} \Big\{ \frac{1}{2} \oint_\Sigma \big[ F \nabla_9 \hat{T}(g_0)F - \hat{T}(g_0)F \nabla_9 F \big] \cdot d\vec{s} - \sum_{g \neq I, g_0} \varepsilon_g \int_V dv\, \big[ F \, (\hat{T}(g_0 g)F) - (\hat{T}(g_0)F)(\hat{T}(g)F) \big] \Big\}. \qquad (42)$$
In order to compare numerical results for three-body exchange effects with the published SAPT results for H 3 [10], an expression for the non-additive exchange energy has to be obtained using SIM. The non-additive exchange energy basically contains the cyclic exchange energy and the implicit three-body effects on the two-body exchange energies. As already pointed out in Ref. [14] it can be shown that for a choice of the partial volume V such that F is localized inside, all quantities occurring in the sum of Eq. (42) go to zero with at least a factor of e −R faster than the surface integral itself if all internuclear distances are larger or equal to R. This holds for all exchange energies. In a different paper [18] it will be shown how to find the implicit three-body effect from the complete surface integral expression for the two-body exchange energies. For product wave functions as used here the pure two-body part is given by the first line of formula Eq. (42), i.e. surface integral (SI) over denominator. The implicit three-body effect is contained in the second line of Eq.
(42), i.e. the products of partial overlap integrals with exchange energies. Following the same scheme used in the Appendix of Ref. [14], these terms can be shown to asymptotically go to zero as e −5R which is faster by a factor of e −3R than the surface integral (SI) itself. Using these results a GHL non-additive exchange energy for the quartet state of H 3 can be defined by simply subtracting the pure two-body contribution from the two-body exchange energies in the GHL result for the quartet state Eq. (3)
$$\big( {}^{3/2}E_{\rm GHL} \big)_{\rm exch} = 2\epsilon_{123} - \big( \epsilon_{12} - \epsilon^{H_2}_{12} \big) - \big( \epsilon_{23} - \epsilon^{H_2}_{23} \big) - \big( \epsilon_{13} - \epsilon^{H_2}_{13} \big), \qquad (43)$$
which can be calculated either by SIM or by perturbation theory. The first order contribution to this non-additive term,
$$\big( {}^{3/2}E^1_{\rm GHL} \big)_{\rm exch} = 2\epsilon^1_{123} - \big( \epsilon^1_{12} - \epsilon^{H_2,1}_{12} \big) - \big( \epsilon^1_{23} - \epsilon^{H_2,1}_{23} \big) - \big( \epsilon^1_{13} - \epsilon^{H_2,1}_{13} \big), \qquad (44)$$
is compared with the corresponding SRS quantity in Tables I and II and will be discussed in the next Section.
In summary, the complete three-body exchange effect in H 3 , which consists of the cyclic exchange energy and the effect of the presence of the third atom on the two-body exchange energies, can asymptotically be approximated by the surface integral for the cyclic exchange energy.
VI. RESULTS
In Tables I and II as well as Figures 1 and 2 the numerical results for the first order non-additive exchange energy of SRS theory are compared with three different SIM terms: (i) the non-additive exchange energy of GHL theory, Eq. (43); (ii) the cyclic exchange energy (complete SIM expression Eq. (42) with overlaps); (iii) the surface integral (SI) of the cyclic exchange energy only (without overlaps). All these quantities have been calculated using the zeroth order localized wave function $F = \pi^{-3/2} \exp(-r_{1A} - r_{2B} - r_{3C})$. Since the exchange energies calculated by SIM cannot be assigned a definite perturbative order (because only part of the complete space is used in the calculation), the quantity (i) is not expected to yield the same numerical results as the first order non-additive exchange energy of SRS theory. But since the same zeroth order product wave function was used to calculate both terms, both quantities are expected to exhibit a similar overall behavior in the range of parameters studied.
In Table I results for equilateral triangular geometry of the nuclei ranging between R = 4 and R = 10 atomic units are listed. Generally, all terms calculated by SIM have smaller absolute values than the first order perturbative ones. At R = 4 a.u., the absolute value of the complete SIM term Eq. (43) is 27 % below the SRS result Eq. (41), the cyclic exchange energy is 38 % smaller, and only the surface integral of the cyclic exchange energy is 25 % greater in absolute value. At R = 10 a.u., however, all three quantities calculated by SIM are no longer distinguishable and are only 6 % below the SRS result.
In Table II the results for isosceles triangles with equal sides of length of 6 a.u. and with angles γ B varying between 30 • and 180 • are shown. All quantities except for the surface integral without overlaps exhibit a change of sign in the region around 120 • and 150 • . At 30 • , (i) the absolute value of the SIM term Eq. (43) is 31 % smaller than the SRS result, (ii) the cyclic exchange energy is 41 % smaller, and again (iii) the surface integral of the cyclic exchange energy only is 13 % greater in absolute value. At 180 • on the other hand, only the value for the surface integral has the wrong sign, while both the other terms have become indistinguishable and are now 35 % greater in absolute value than the SRS term. The differences between the numerical results for the quantities compared in Tables I and II are, as already pointed out, not due to numerical problems but due to the fact that the quantities are different by definition.
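The percentages quoted above can be spot-checked directly from the Table II entries; the following sketch recomputes them (values in hartree, copied from Table II):

```python
# Table II entries at gamma_B = 30 and 180 degrees.
srs1_30, ghl_30, cyc_30, si_30 = -3.75e-4, -2.60e-4, -2.23e-4, -4.25e-4
srs1_180, ghl_180 = 1.10e-6, 1.48e-6

def pct(x, ref):
    """Signed percentage by which |x| deviates from |ref|."""
    return 100.0 * (abs(x) - abs(ref)) / abs(ref)

p_ghl = pct(ghl_30, srs1_30)    # about -31: GHL is 31 % smaller in magnitude
p_cyc = pct(cyc_30, srs1_30)    # about -41
p_si  = pct(si_30, srs1_30)     # about +13
p_180 = pct(ghl_180, srs1_180)  # about +35
```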
From the Tables it appears that for triangular geometries of the nuclei and internuclear distances R ≥ 4 a.u. the first order non-additive exchange energy for the quartet state of H 3 can be quite well approximated by the surface integral of the cyclic exchange energy. This was stated in Ref. [14] and has now been explained by the fact that all the SIM approximations (see section V and in Ref. [14]) hold in this region.
In Tables III and IV as well as Figures 1 and 2 higher orders of SRS theory are also taken into account and compared with the complete GHL non-additive exchange energy Eq. (43) in order to show that SIM goes beyond the first order of SRS theory. For equilateral triangular geometries of the nuclei and internuclear distances larger than 6 a.u. the results of GHL theory lie between the first order SRS term and the sum of the first and second order terms, approaching the first order term for increasing distances. At 6 a.u. GHL is very close to the first plus second order of SRS, and even at 4 a.u. GHL is only 17 % below the total sum up to third order of SRS theory.
For isosceles structures of the nuclei with equal internuclear distances of 6 a.u. the advantage of SIM over the first order SRS theory is even more apparent. Starting at 60 • , the GHL result is closer to the first plus second order than to the first order SRS term. The change of sign occurs for the first order between 120 • and 150 • whereas for all other terms already between 90 • and 120 • . The differences of the GHL to the first plus second order SRS term range from 0.4% at 60 • to 33% at 120 • and 10% at 180 • . At 30 • the GHL result is again only 16% smaller than the SRS term with the third order term included.
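The deviations quoted in this paragraph likewise follow from the Table IV entries; the sketch below recomputes them (the 10 % figure at 180 degrees comes out as roughly 9 % with the SRS sum as reference, so rounding conventions differ slightly):

```python
# Table IV entries (hartree): GHL (SIM) versus SRS sums truncated at
# second (srs2) and third (srs3) order, isosceles geometry, R = 6 a.u.
ghl  = {30: -2.60e-4, 60: -5.19e-5, 120: 2.61e-7, 180: 1.48e-6}
srs2 = {60: -5.21e-5, 120: 3.88e-7, 180: 1.63e-6}
srs3_30 = -3.08e-4

def rel(a, b):
    """Percentage deviation of a from the reference value b."""
    return 100.0 * abs(a - b) / abs(b)

d60  = rel(ghl[60], srs2[60])     # about 0.4 %
d120 = rel(ghl[120], srs2[120])   # about 33 %
d180 = rel(ghl[180], srs2[180])   # about 9 %, quoted as 10 % in the text
d30  = rel(ghl[30], srs3_30)      # about 16 %
```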
The advantage of SIM over the perturbative approach is that the surface integral SI is easily calculated numerically, and including the partial overlap terms provides part of the second order SRS contributions.
VII. CONCLUSIONS
This paper demonstrates how the perturbation series consisting of Coulomb, exchange and overlap integrals can be used to express the Coulomb and exchange energies occurring in GHL theory. Combining the perturbation series with the GHL theory yields an energy expression for the quartet spin state equivalent to that of symmetrized Rayleigh-Schrödinger perturbation theory given in [10].
It is possible to evaluate the exchange energies using the surface integral method (SIM). The SIM has the advantage that it derives from a clear physical picture for the exchange process in terms of the electrons continuously trading places. For the cyclic exchange energies this method has already been described in detail in Ref. [14], and for the implicit three-body effect on the two-body exchange energies it will be shown in Ref. [18].
The long range behavior of the three-body terms entering the two-body exchange energies and of the partial overlap integrals -multiplied by two-body exchange energies in the expression for the cyclic exchange energy in Eq. (42) -indicate that for large internuclear separations the surface integral for the cyclic exchange energy is sufficient to describe the non-additive contribution to the exchange part of the quartet spin state. The numerical results in Tables I and II confirm this conclusion.
VIII. ACKNOWLEDGEMENTS
We thank K. T. Tang and J. P. Toennies for helpful discussions. U. K. gratefully acknowledges financial support from the DFG.

the respective SRS-term Eq. (41) only by overlap integrals that are negligible compared to one. A comparison of the numerical results of the first order non-additive exchange energy Eq. (41) of SRS theory and the GHL term [Eq. (44)] calculated by SIM using the zeroth order product wave function F = 1/π^{3/2} exp(−r_{1A} − r_{2B} − r_{3C}) is given in

FIG. 1. Comparison of different orders of the non-additive exchange energy in SRS theory with the GHL result (filled triangles) calculated with SIM from Eq. (43) for equilateral triangles. The first order SRS contribution is denoted by circles, and with all terms up to second order by open triangles. The stars show twice the surface integral of the cyclic exchange energy.

FIG. 2. Comparison of different orders of the non-additive exchange energy in SRS theory with the GHL result (filled triangles) calculated with SIM from Eq. (43) for isosceles triangles with R_AB = R_BC = 6 a.u. as a function of the included angle γ_B. The first order SRS contribution is denoted by circles, and with all terms up to second order by open triangles. The stars show twice the surface integral of the cyclic exchange energy only. Note the change in the energy axis from linear to logarithmic scale.

TABLE I. Comparison of the numerical results for the first order non-additive exchange energy of SRS-theory (SRS_1, Eq. (41)) with a similar but still different quantity derived from GHL theory, Eq. (43), with the cyclic exchange calculated by SIM (2ǫ_{123}(SIM)) including overlaps, and with the surface integral SI of the cyclic exchange energy without overlaps (2 SI). The nuclei form equilateral triangles with sides of lengths R.

TABLE II. Comparison of the numerical results of SRS-theory with the same quantities as in Table I. The nuclei form isosceles triangles with two sides of lengths R_AB = R_BC = 6 a.u.; γ_B is the included angle.

E^1_exch [E_h], R_AB = R_BC = 6 a.u.
γ_B [degrees]   SRS_1 Eq. (41)   GHL Eq. (43)   2ǫ_{123}(SIM)   2 SI
 30             −3.75·10^−4      −2.60·10^−4    −2.23·10^−4     −4.25·10^−4
 60             −5.90·10^−5      −5.19·10^−5    −5.15·10^−5     −5.70·10^−5
 90             −7.40·10^−6      −6.05·10^−6    −6.03·10^−6     −7.95·10^−6
120             −3.42·10^−7       2.61·10^−7     2.60·10^−7     −1.62·10^−6
150              8.84·10^−7       1.31·10^−6     1.30·10^−6     −5.83·10^−7
180              1.10·10^−6       1.48·10^−6     1.48·10^−6     −4.10·10^−7

TABLE III. Comparison of the numerical results for the non-additive exchange energy in GHL theory (GHL, Eq. (43)) with the first order non-additive exchange energy of SRS-theory (SRS_1, Eq. (41)), with the SRS non-additive exchange energy up to second order (SRS_2) [10], and with up to third order (SRS_3) [10]. The nuclei form equilateral triangles with sides of lengths R.

E_exch [E_h]
R [a_0]   SRS_1 Eq. (41)   SRS_2         SRS_3         GHL Eq. (43)
 4        −3.83·10^−3      −3.60·10^−3   −3.34·10^−3   −2.79·10^−3
 6        −5.90·10^−5      −5.21·10^−5   −5.03·10^−5   −5.19·10^−5
 7        −5.88·10^−6      −4.77·10^−6   −4.62·10^−6   −5.32·10^−6
 8        −5.33·10^−7      −3.71·10^−7   −3.57·10^−7   −4.89·10^−7
10        −3.6·10^−9       −0.7·10^−9    −0.7·10^−9    −3.4·10^−9

TABLE IV. Comparison of the numerical results of GHL-theory with the same quantities as in Table III. The nuclei form isosceles triangles with two sides of lengths R_AB = R_BC = 6 a.u.; γ_B is the included angle.

E_exch [E_h], R_AB = R_BC = 6 a.u.
γ_B [degrees]   SRS_1 Eq. (41)   SRS_2         SRS_3         GHL Eq. (43)
 30             −3.75·10^−4      −3.33·10^−4   −3.08·10^−4   −2.60·10^−4
 60             −5.90·10^−5      −5.21·10^−5   −5.03·10^−5   −5.19·10^−5
 90             −7.40·10^−6      −5.67·10^−6   −4.98·10^−6   −6.05·10^−6
120             −3.42·10^−7       3.88·10^−7    9.02·10^−7    2.61·10^−7
150              8.84·10^−7       1.43·10^−6    1.88·10^−6    1.31·10^−6
180              1.10·10^−6       1.63·10^−6    2.07·10^−6    1.48·10^−6
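The convergence of the SRS orders toward the GHL value in Table III can be checked mechanically. A minimal Python sketch with the Table III values transcribed by hand (nothing beyond the tabulated numbers is assumed):

```python
# Non-additive exchange energies E_exch [E_h] from Table III
# (equilateral triangles, side length R in a.u.).
# Column order: SRS_1, SRS_2, SRS_3, GHL.
table_iii = {
    4:  (-3.83e-3, -3.60e-3, -3.34e-3, -2.79e-3),
    6:  (-5.90e-5, -5.21e-5, -5.03e-5, -5.19e-5),
    7:  (-5.88e-6, -4.77e-6, -4.62e-6, -5.32e-6),
    8:  (-5.33e-7, -3.71e-7, -3.57e-7, -4.89e-7),
    10: (-3.6e-9,  -0.7e-9,  -0.7e-9,  -3.4e-9),
}

def rel_dev_srs3_vs_ghl(r):
    """Relative deviation of the third-order SRS value from the GHL value."""
    srs1, srs2, srs3, ghl = table_iii[r]
    return abs(srs3 - ghl) / abs(ghl)

for r in sorted(table_iii):
    print(f"R = {r:2d} a.u.: |SRS_3 - GHL|/|GHL| = {rel_dev_srs3_vs_ghl(r):.2f}")
```

At R = 6 a.u. the third-order SRS result sits within a few percent of GHL, while at short range (R = 4 a.u.) the two still differ at the 20% level, consistent with the trend visible in Fig. 1.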
[1] K. T. Tang, J. P. Toennies, and C. L. Yiu, Int. Rev. Phys. Chem. 17, 363 (1998).
[2] S. H. Patil and K. T. Tang, Asymptotic Methods in Quantum Mechanics: Applications to Atoms, Molecules and Nuclei (Springer, Berlin, 2000).
[3] U. Kleinekathöfer, K. T. Tang, J. P. Toennies, and C. L. Yiu, J. Chem. Phys. 111, 3377 (1999).
[4] U. Kleinekathöfer, Chem. Phys. Lett. 324, 403 (2000).
[5] B. M. Axilrod and E. Teller, J. Chem. Phys. 11, 299 (1943); Y. Muto, Proc. Phys. Soc. Jpn. 17, 629 (1943).
[6] M. J. Elrod and R. J. Saykally, Chem. Rev. 94, 1975 (1994).
[7] W. J. Meath and M. Koulis, J. Mol. Struct. (Theochem.) 226, 1 (1991).
[8] W. J. Meath and R. A. Aziz, Mol. Phys. 52, 225 (1984).
[9] R. Moszynski, P. E. S. Wormer, B. Jeziorski, and A. van der Avoird, J. Chem. Phys. 103, 8058 (1995).
[10] T. Korona, R. Moszynski, and B. Jeziorski, J. Chem. Phys. 105, 8178 (1996).
[11] V. F. Lotrich and K. Szalewicz, J. Chem. Phys. 112, 112 (2000).
[12] R. J. Wheatley, Mol. Phys. 84, 899 (1995).
[13] Z. C. Zhang, A. R. Allnatt, J. D. Talman, and W. J. Meath, Mol. Phys. 81, 1425 (1994).
[14] U. Kleinekathöfer, T. I. Sachse, K. T. Tang, J. P. Toennies, and C. L. Yiu, J. Chem. Phys. 113, 948 (2000).
[15] K. T. Tang, J. P. Toennies, and C. L. Yiu, J. Chem. Phys. 94, 7266 (1991).
[16] K. T. Tang, J. P. Toennies, and C. L. Yiu, J. Chem. Phys. 99, 377 (1993).
[17] U. Kleinekathöfer, K. T. Tang, J. P. Toennies, and C. L. Yiu, J. Chem. Phys. 107, 9502 (1997).
[18] T. I. Sachse, K. T. Tang, and J. P. Toennies, in preparation.
[19] T. Cwiok, B. Jeziorski, W. Kołos, R. Moszynski, J. Rychlewski, and K. Szalewicz, Chem. Phys. Lett. 195, 67 (1992).
[20] J. O. Hirschfelder, Chem. Phys. Lett. 1, 325 (1967).
[21] B. Jeziorski, R. Moszynski, and K. Szalewicz, Chem. Rev. 94, 1887 (1994).
[22] G. Chalasinski, B. Jeziorski, and K. Szalewicz, Int. J. Quantum Chem. 11, 247 (1977).
[23] K. T. Tang, J. P. Toennies, and C. L. Yiu, Chem. Phys. Lett. 162, 170 (1989).

The explicit expressions will be given in a forthcoming paper [18].
Backreaction effects of dissipation in neutrino decoupling
7 Oct 2000 (October 29, 2018)
Roy Maartens
School of Computer Science and Mathematics
Portsmouth University
Portsmouth PO1 2EG, England
Josep Triginer
Department of Physics
Autonomous University of Barcelona
08193 Bellaterra, Spain
Dissipative effects during neutrino decoupling in the early universe create a small backreaction on the Hubble rate, and lead to a small rise in temperature and entropy. We use a simplified thermo-hydrodynamic model, which provides a causal approximation to kinetic theory, in order to estimate the backreaction effects and the entropy production.

I. INTRODUCTION

Non-equilibrium processes in the early universe are typically associated with dynamical transitions or particle decouplings. In the case of neutrino decoupling, the standard approach is to treat the process as adiabatic (see e.g. [1]). The small non-equilibrium effects are thus usually neglected, which provides a reasonable approximation. However, given the increasing accuracy of cosmological observations and theoretical modeling, it is worthwhile revisiting the standard equilibrium models of processes such as neutrino decoupling, in order to see whether non-equilibrium corrections can lead to observable consequences. Recently, non-equilibrium corrections in neutrino decoupling have been calculated in a number of papers, using complicated kinetic theory and numerical computations (see [2] for a short review). The corrections are very small, as expected. For example, in [3-5] it was found that non-equilibrium effects lead to a small change in the decoupling temperature for neutrinos. Spectral distortions have also been analyzed [6], showing the remarkable fact that they amount to as much as 1% or more for the higher-energy side of the spectrum. Although these corrections in the spectrum, energy density and temperature of the neutrino component have hardly any effect on primordial helium synthesis, yielding a change in the mass fraction of ∼ 10^−4, they can lead to other effects that may be observable.
Thus it is shown that the non-equilibrium increase in neutrino temperature, which leads to an extra injection of energy into the photon spectrum, leads to a shift of the equilibrium epoch between matter and radiation which, in turn, modifies the angular spectrum of fluctuations of the cosmic microwave background radiation [7,8]. Despite the accuracy of these models in obtaining corrections to the decoupling temperature and distribution function due to non-equilibrium effects, they still make use of the standard Friedmann equations for a perfect (i.e. non-dissipative) fluid. This leads to the physically inconsistent situation in which, say, the energy density and expansion evolve in time like a radiative fluid in equilibrium. One expects that small distortions in the particle equilibrium distribution function should be reflected in the macroscopic (i.e. fluid) description, as given by the stress-energy tensor, by adding a bulk viscous pressure to the equilibrium one. Here we consider an alternative thermo-hydrodynamic model of dissipative effects in neutrino decoupling, simple enough to produce analytic solutions for the backreaction effects on the universal scale factor, and estimates for the entropy production due to dissipation. As explained above, these effects are not the focus of recent papers, which use sophisticated kinetic theory models focusing on the neutrino temperature. Our simplified approach cannot compete with these models for accuracy and completeness, but it has the advantage of simplicity, allowing for a qualitative understanding of effects not previously investigated in detail. A similar approach has previously been applied in [9] to the reheating era that follows inflation.

The thermo-hydrodynamic model is based on an approximation to kinetic theory which respects relativistic causality.
This approximation is the Grad moment method, leading to the causal thermodynamics of Israel and Stewart [10] in the hydrodynamic regime (see also [11] for an alternative but equivalent approach). This causal theory is a generalization of the more commonly used relativistic Navier-Stokes-Fourier theory. The latter, due to Eckart [12], may be derived via the Chapman-Enskog approximation in kinetic theory. The resulting theory is quasi-stationary and noncausal, and suffers from the pathologies of infinite wavefront speeds and instability of all equilibrium states [13]. The main new ingredient in the causal transport equations is a transient term which contains the relaxation time. Our simple model is based on a one-component fluid. In [14], relaxation time processes are incorporated in a two-fluid model. In this setting, electrons and positrons on the one side and neutrinos and antineutrinos on the other side, are found to be in two different equilibrium states with slightly different temperatures. The system evolves towards a state of thermal equilibrium in a characteristic relaxation time.
Dissipative effects in the decoupling of a given species of particles arise from the growing mean free path of the decoupling particles in their weakening interaction with the cosmic fluid. Eventually the mean collision time exceeds the gravitational expansion time, and decoupling is complete. A hydrodynamic model may be used to cover the early stages of the decoupling process, but it will eventually break down when the mean collision time becomes large enough [15].
In the conditions prevailing at the time of neutrino decoupling, it is reasonable to neglect sub-horizon metric fluctuations and treat the spacetime as a Friedmann model. (The incorporation of perturbations in our model would use the covariant formalism for dissipative fluids developed in [16].) The dynamical effects of spatial curvature and any surviving vacuum energy will be negligible, so that we can reasonably assume a spatially flat geometry. Furthermore, we assume that the average 4-velocities of the neutrinos (regarded as massless) and of the photon-electron-positron gas are the same. With all these assumptions, only scalar dissipation is possible. Dissipation during neutrino decoupling arises because the falling temperature lowers the interaction rate with leptons, as the lepton mass can no longer be ignored relative to the thermal energy. Thus dissipation is directly reflected in a deviation of the equation of state from the thermalized radiation form p = ρ/3. Within a hydrodynamic one-fluid model, such dissipation is described via bulk viscosity, which vanishes in the p = ρ/3 limit, but is nonzero otherwise. We will use the full (i.e. non-truncated) version of the causal transport equation for bulk stress.
II. CAUSAL TRANSPORT EQUATION FOR BULK STRESS
The particle number 4-current and the energy-momentum tensor are
$$N^a = n u^a , \qquad T^{ab} = \rho u^a u^b + (p + \Pi) h^{ab} ,$$
where ρ is the energy density, p is the equilibrium (hydrostatic) pressure, n is the particle number density, Π is the bulk viscous pressure, and $h^{ab} = g^{ab} + u^a u^b$ is the projector into the comoving instantaneous rest space. Particle and energy-momentum conservation, $\nabla_a N^a = 0$ and $\nabla_b T^{ab} = 0$, lead to the equations
$$\dot{n} + 3Hn = 0 , \qquad (1)$$
$$\dot{\rho} + 3H(\rho + p + \Pi) = 0 , \qquad (2)$$
where H is the Hubble expansion rate. The specific entropy s and the temperature T are related via the Gibbs equation
$$nT\,ds = d\rho - \frac{\rho + p}{n}\,dn . \qquad (3)$$
Then it follows that
$$nT\dot{s} = -3H\Pi , \qquad (4)$$
where Π is always non-positive. The Grad moment approximation in kinetic theory (or phenomenological arguments) leads to the full causal transport equation [10] for Π:
$$\tau\dot{\Pi} + \Pi = -3\zeta H - \frac{1}{2}\tau\Pi\left[3H + \frac{\dot{\tau}}{\tau} - \frac{\dot{\zeta}}{\zeta} - \frac{\dot{T}}{T}\right] , \qquad (5)$$
where τ is the relaxation time scale, which allows for causal propagation of viscous signals, and ζ ≥ 0 is the bulk viscous coefficient as given below. Quasi-stationary, noncausal theories have τ = 0, which reduces the evolution equation (5) to the algebraic relation Π = −3ζH. This leads to instantaneous propagation of viscous signals. Note also that the causal relaxational effects lead to a small increase in the sound speed over its adiabatic value [17]:
$$c_s^2 \to c_s^2 + c_b^2 , \qquad c_b^2 = \frac{\zeta}{(\rho + p)\tau} . \qquad (6)$$
This result, which is not well known, is derived in the appendix. The approximation used in deriving the transport equation (also in the quasi-stationary case) requires that |Π| ≪ ρ, which is reasonable for most dissipative processes (see [18] for a nonlinear generalization of the causal transport equation.)
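Because the radiative-transfer coefficient quoted in Sec. III gives ζ ∝ Γ²t_c and τ = t_c, the collision time cancels in c_b² and the dissipative contribution to the sound speed is tiny for Γ ≪ 1. A minimal numerical sketch (assuming, beyond the text, a radiation-dominated fluid with ρ ≈ rT⁴ and p = ρ/3, so that c_b² = 4Γ²/(4/3) = 3Γ²) checks that the causality bound v² ≤ 1 is comfortably respected:

```python
# Dissipative sound-speed contribution c_b^2 = zeta / ((rho + p) tau)  [Eq. (6)]
# with zeta = 4 r T^4 Gamma^2 t_c [Eq. (16)] and tau = t_c, so t_c cancels.
# Assumption (not from the paper): rho ~= r T^4 and p = rho/3, which gives
# c_b^2 = 4 Gamma^2 / (4/3) = 3 Gamma^2.

def cb_squared(gamma_dev):
    """c_b^2 under the rho = r T^4, p = rho/3 assumption."""
    return 3.0 * gamma_dev**2

cs_squared = 1.0 / 3.0  # adiabatic sound speed squared of thermalized radiation

for g in (1e-3, 1e-2, 1e-1):
    v2 = cs_squared + cb_squared(g)
    print(f"Gamma = {g:g}: c_b^2 = {cb_squared(g):.2e}, v^2 = {v2:.6f}")
    assert v2 <= 1.0  # causality bound v^2 <= 1 (see appendix)
```

For the values Γ ≪ 1 relevant to neutrino decoupling, the correction to c_s² is at most of order Γ², far inside the causal regime.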
Equation (5) as it stands is known as the full or non-truncated transport equation for bulk viscous pressure [19-21]. When the term containing the square bracket on the right is neglected, we get the truncated equation which is usually used. Under many conditions, truncation leads to a reasonable approximation. We will use the full equation.

Taking n and ρ as independent variables, the Gibbs equation (3) leads to the integrability condition
$$n\left(\frac{\partial T}{\partial n}\right)_{\rho} + (\rho + p)\left(\frac{\partial T}{\partial \rho}\right)_{n} = T\left(\frac{\partial p}{\partial \rho}\right)_{n} , \qquad (7)$$
and together with the energy conservation equation (2) this gives the temperature evolution equation
$$\frac{\dot{T}}{T} = -3H\left(\frac{\partial p}{\partial \rho}\right)_{n} - \frac{1}{T}\left(\frac{\partial T}{\partial \rho}\right)_{n} 3H\Pi . \qquad (8)$$
The first term on the right accounts for adiabatic cooling due to expansion, whereas in the second term, viscosity contributes to heating of the fluid (note that Π is always non-positive).
Using equations (1) and (2), the Gibbs equation takes the form
$$n^2 T\,ds = \frac{3nH\Pi}{3H(\rho + p) + 3H\Pi}\,d\rho + (\rho + p)\left(\frac{\partial n}{\partial p}\right)_{\rho}\left[\frac{\dot{p}}{\dot{\rho}}\,d\rho - dp\right] . \qquad (9)$$
As expected, we learn from the last equation that when the fluid is perfect (Π = 0), the specific entropy is conserved along the flow lines ($\dot{s} = 0$). Furthermore, if a barotropic equation of state for n holds, i.e. n = n(ρ), then ds = 0, so that s is a universal constant, the same on all flow-lines, and the fluid is called isentropic.¹ Yet, as Eq. (9) shows, this is no longer true in the presence of dissipation, i.e. a barotropic particle number density no longer forces ds to vanish. For simplicity, we assume the linear barotropic equation of state
$$p = (\gamma - 1)\rho , \qquad (10)$$
where γ is constant and we are interested in the case γ ≈ 4/3. The adiabatic speed of sound c_s is given by
$$c_s^2 = \left(\frac{\partial p}{\partial \rho}\right)_{s} ,$$
which for a perfect fluid (either barotropic or not) becomes
$$c_s^2 = \frac{\dot{p}}{\dot{\rho}} .$$
When Eq. (10) holds, c_s = √(γ − 1). Using Eq. (10) and the integrability condition (7), we find
$$T = \rho^{(\gamma-1)/\gamma}\,F\!\left(\frac{\rho^{1/\gamma}}{n}\right) , \qquad (11)$$
where F is an arbitrary function which satisfies $\dot{F} = 0$. If T is barotropic, then F is constant and we have a power-law form with fixed exponent for the temperature [17,22]:
$$T \propto \rho^{(\gamma-1)/\gamma} . \qquad (12)$$
In the non-dissipative case, these barotropic equations for p and T are compatible with the ideal gas law
$$p = nT , \qquad (13)$$
but in the presence of dissipation this is no longer true. In effect, equations (10), (12) and (13) imply n ∝ ρ^{1/γ}, i.e. $\dot{n}/n = (1/\gamma)\,\dot{\rho}/\rho$, which implies, by using Eq. (2), that Π = 0. In the sequel we shall therefore drop the barotropic equation of state for the temperature in favour of the more physically appealing equation of state (13), together with the γ-law (10).
III. DISSIPATION IN NEUTRINO DECOUPLING
A hydrodynamic approach in the expanding universe requires a particle collision time t_c short enough to adjust to the falling temperature. As the natural time-scale for the expanding universe is H^{-1}, we have t_c < H^{-1}. If t_c ≪ H^{-1}, then an equilibrium state can in principle be attained. Dissipative phenomena could play a prominent role for t_c ∼ H^{-1}.

We learn from kinetic theory that t_c is determined by
$$t_c = \frac{1}{n\sigma v} , \qquad (14)$$
where n is the number density of the target particles with which the given species is interacting, σ the cross-section and v the mean relative speed of interacting particles. For the decoupling of massless neutrinos in the early universe, v = 1, the target number density is that of electrons, and [23]
$$\sigma \approx G_F T^2 ,$$
where G_F is the Fermi coupling constant. At the neutrino decoupling temperature T_d we have m_e/T_d ≈ 1/2, so that the rest mass energy m_e of the electrons starts to become important. Since the electron number density in the radiation dominated era evolves as n_e ∝ a^{-3}, where a is the scale factor, we have from Eq. (14) that
$$t_c \propto a^3 T^{-2} . \qquad (15)$$
Dissipation due to massless particles with long mean free path in a hydrodynamic fluid is described by the radiative transfer model. The bulk viscous coefficient takes the form [24]
$$\zeta = 4 r T^4 \Gamma^2 t_c , \qquad (16)$$
where r is 7/8 times the radiation constant and Γ measures the deviation of p/ρ from its pure-radiation value:
$$\Gamma = \frac{1}{3} - \left(\frac{\partial p}{\partial \rho}\right)_{n} , \qquad (17)$$
where p and ρ refer to the pressure and energy density of the radiation/matter mixture as a whole. Since we assume the linear equation of state (10), it follows that Γ is a perturbative constant parameter in our simple model:
$$\Gamma = \frac{4}{3} - \gamma \ll 1 .$$
The assumption that Γ is constant relies on the assumption that decoupling takes place rapidly. Since standard adiabatic treatments of decoupling [1] assume instantaneous decoupling, this assumption should be a reasonable first approximation. We may neglect the −3ζH term on the right of the transport equation (5), since it is O(Γ²). Note that our simple model would thus break down in the quasi-stationary Eckart theory, since it would immediately lead to Π = O(Γ²).
The relaxation timescale τ in causal radiative transfer [25] is given by τ = t_c. The term $\dot{\zeta}/\zeta$ on the right of Eq. (5) becomes
$$\frac{\dot{\zeta}}{\zeta} = H + O(\Gamma) ,$$
on using equations (8) and (15). Since, to lowest order, $\dot{\tau}/\tau = 5H$ and $\dot{T}/T = -H$, the square bracket in Eq. (5) evaluates to 8H, and the full transport equation becomes, to lowest order,
$$\tau\dot{\Pi} + \Pi = -4\tau H\Pi . \qquad (18)$$
(We can think of the right hand side as an effective source term relative to the truncated transport equation.) We can rewrite this in the standard truncated form as
$$\tau_*\dot{\Pi} + \Pi = 0 , \qquad (19)$$
where the effective relaxation time acquires an expansion correction:
$$\tau_* = \frac{\tau}{1 + 4\tau H} . \qquad (20)$$
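Equation (20) is easy to explore numerically. A short sketch (plain Python, no assumptions beyond Eq. (20) itself) tabulates the reduction factor τ*/τ for several values of the dimensionless product τH:

```python
# Effective relaxation time tau* = tau / (1 + 4 tau H)  [Eq. (20)]:
# the correction is negligible for tau H << 1 but becomes significant
# as tau H approaches the hydrodynamic limit tau H ~ 1.
def tau_star_over_tau(tau_H):
    return 1.0 / (1.0 + 4.0 * tau_H)

for tau_H in (0.01, 0.1, 0.5, 1.0):
    print(f"tau H = {tau_H:4.2f}: tau*/tau = {tau_star_over_tau(tau_H):.3f}")
```

At τH = 0.01 the effective relaxation time is within 4% of τ, while at the limit of validity τH = 1 it is reduced to τ/5.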
The amount of reduction depends on the size of τ = t_c relative to the expansion time H^{-1}. The hydrodynamical description requires τH < 1. If τH ≪ 1, then τ_* ≈ τ. But if τH is close to 1, the reduction could be significant. The Friedmann equation
$$\rho = 3H^2 , \qquad (21)$$
together with Eq. (2), leads to
$$\Pi = -2\dot{H} - (4 - 3\Gamma)H^2 . \qquad (22)$$
On using equation (22), we get from (18) the evolution equation for H:
$$\ddot{H} + H\dot{H}\left(8 - 3\Gamma + N\right) + H^3\left(2 - \tfrac{3}{2}\Gamma\right)\left(N + 4\right) = 0 , \qquad (23)$$
where
$$N = (\tau H)^{-1} , \qquad (24)$$
which is of the order of the number of interactions in an expansion time. Now, from equations (10), (13), (15) and (24) we have
$$N = \left(\frac{Ha}{H_d a_d}\right)^{3} , \qquad (25)$$
where the expression n ∝ a^{-3} has been used, and a_d and H_d = H(a_d) are the values at which N = 1, so that a_d is determined by the equation
$$t_c(a_d)\,H(a_d) = 1 . \qquad (26)$$
Changing the independent variable to the scale factor a, expanding equation (23) and collecting the previous results yields
$$a^2 H H'' + a^2 H'^2 + aHH'\left[9 - 3\Gamma + \left(\frac{Ha}{H_d a_d}\right)^{3}\right] + \left(2 - \tfrac{3}{2}\Gamma\right)H^2\left[4 + \left(\frac{Ha}{H_d a_d}\right)^{3}\right] = 0 , \qquad (27)$$
where a prime denotes d/da. We expand H as
$$H = \bar{H} + \delta H , \qquad \delta H = \Gamma h + O(\Gamma^2) . \qquad (28)$$
The equilibrium Hubble rate $\bar{H}$ corresponds to the thermalized radiation state p = ρ/3, so that Γ = 0, and Eq. (27) becomes
$$a^2\bar{H}\bar{H}'' + a^2\bar{H}'^2 + 9a\bar{H}\bar{H}' + 8\bar{H}^2 + \left[a\bar{H}\bar{H}' + 2\bar{H}^2\right]\left(\frac{\bar{H}a}{H_d a_d}\right)^{3} = 0 .$$
The unique power-law solution is the well-known perfect radiative solution
$$\bar{H} = H_0\left(\frac{a_0}{a}\right)^{2} = \frac{1}{2t} , \qquad (29)$$
where a_0 marks the start of the dissipative decoupling process, so that $H = \bar{H}$ for a < a_0. Substituting Eq. (28) into (27) and using the fact that
$$\frac{H_0 a_0}{H_d a_d} = \frac{a_d}{a_0} + O(\Gamma) ,$$
we find that to O(Γ):
$$a^2 h'' + a\left[5 + \left(\frac{a_d}{a}\right)^{3}\right] h' + \left[4 + 2\left(\frac{a_d}{a}\right)^{3}\right] h = \frac{3}{2}H_0\left(\frac{a_0}{a_d}\right)^{2}\left(\frac{a_d}{a}\right)^{5} . \qquad (30)$$
Defining α = a/a_d (and α_0 = a_0/a_d), we can rewrite this as
$$\frac{d^2 h}{d\alpha^2} + \left[\frac{5}{\alpha} + \frac{1}{\alpha^4}\right]\frac{dh}{d\alpha} + \left[\frac{4}{\alpha^2} + \frac{2}{\alpha^5}\right] h = \frac{3}{2}\,\frac{H_0\,\alpha_0^2}{\alpha^7} . \qquad (31)$$
Now we use the following general result [26]: if ϕ is a solution of
$$y'' + f(x)\,y' + g(x)\,y = k(x)$$
when k = 0, then the general solution is
$$y = C_1\,\varphi + C_2\,\varphi\!\int\!\frac{dx}{\varphi^2 E} + \varphi\!\int\!\frac{1}{\varphi^2 E}\left[\int \varphi E k\,dx\right] dx ,$$
where E = exp∫f dx. By inspection, a solution of the homogeneous equation (31) is 1/α². It follows that the general solution is
$$h(a) = H_0\left(\frac{a_0}{a}\right)^{2}\left\{c_1 + c_2\,\mathrm{Ei}\!\left[\frac{1}{3}\left(\frac{a_d}{a}\right)^{3}\right] + \frac{3}{2}\ln\frac{a}{a_d}\right\} , \qquad (32)$$
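The claim that 1/α² solves the homogeneous part of Eq. (31) can be verified directly: with h = α⁻², h' = −2α⁻³ and h'' = 6α⁻⁴ the five terms cancel identically. A small numerical check (Python, exact derivatives inserted by hand):

```python
# Check that h = 1/alpha^2 solves the homogeneous part of Eq. (31):
# h'' + (5/a + 1/a^4) h' + (4/a^2 + 2/a^5) h = 0,
# using the exact derivatives h' = -2/a^3, h'' = 6/a^4.
def homogeneous_residual(alpha):
    h = alpha**-2
    hp = -2.0 * alpha**-3
    hpp = 6.0 * alpha**-4
    return hpp + (5.0 / alpha + alpha**-4) * hp + (4.0 / alpha**2 + 2.0 * alpha**-5) * h

for alpha in (0.5, 1.0, 2.0, 5.0):
    print(f"alpha = {alpha}: residual = {homogeneous_residual(alpha):.1e}")
```

The residual vanishes to machine precision for any α > 0, confirming the homogeneous solution used to build Eq. (32).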
where c_1 and c_2 are arbitrary integration constants and Ei is the exponential-integral function [27]
$$\mathrm{Ei}(x) \equiv \int_{-\infty}^{x}\frac{e^{v}}{v}\,dv = C + \ln x + \sum_{k=1}^{\infty}\frac{x^{k}}{k!\,k} ,$$
with C denoting Euler's constant.
with C denoting Euler's constant. By equations (22) and (32), the bulk stress to first order is
Π = (3H 2 − 4Hh − 2h ′ Ha)Γ,(33)
This expression holds for a > a 0 , where a 0 marks the onset of dissipative evolution. Thereafter, the bulk stress decays according to the causal law (19). In order to relate the constants c 1 and c 2 , we require, according to standard matching conditions, that H is continuous. Thus h(a 0 ) = 0, which fixes c 1 :
c 1 = −c 2 Ei 1 3 a d a 0 3 − 3 2 ln a 0 a d .(34)
Thus, using Eq. (32), we see that the backreaction of the dissipative decoupling process on the expansion of the universe is given by
$$\delta H = \bar{H}\left\{c_2\left[\mathrm{Ei}\!\left(\frac{1}{3}\left(\frac{a_d}{a}\right)^{3}\right) - \mathrm{Ei}\!\left(\frac{1}{3}\left(\frac{a_d}{a_0}\right)^{3}\right)\right] + \frac{3}{2}\ln\frac{a}{a_0}\right\}\Gamma + O(\Gamma^2) . \qquad (35)$$
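Equation (35) can be evaluated with nothing more than the series for Ei quoted below Eq. (32). The sketch below implements that series and evaluates δH/H̄ at a = a_d for illustrative parameters that are not taken from the paper (a_0 = 1, a_d = 1.5, Γ = 0.05, with c_2Γ fixed by Eq. (39) of Sec. IV using ΔT/T ∼ 10⁻³):

```python
import math

def ei(x, terms=60):
    """Exponential-integral Ei(x) for x > 0, via the series quoted in the text:
    Ei(x) = C + ln x + sum_{k>=1} x^k / (k! k), with C = Euler's constant."""
    C = 0.57721566490153286
    s = C + math.log(x)
    term = 1.0
    for k in range(1, terms + 1):
        term *= x / k          # term is now x^k / k!
        s += term / k
    return s

def delta_H_over_H(a, a0, ad, c2_gamma, gamma_dev):
    """Fractional backreaction delta H / H_bar from Eq. (35), to O(Gamma)."""
    ei_a = ei((ad / a) ** 3 / 3.0)
    ei_a0 = ei((ad / a0) ** 3 / 3.0)
    return c2_gamma * (ei_a - ei_a0) + 1.5 * gamma_dev * math.log(a / a0)

# Illustrative (hypothetical) parameters, not fixed by the paper:
a0, ad, gamma_dev = 1.0, 1.5, 0.05
# c2 * Gamma from Eq. (39) with Delta T / T ~ 1e-3:
c2_gamma = -0.5 * 1e-3 * math.exp(-(ad / a0) ** 3 / 3.0) / (ad / a0 - 1.0)
print(f"delta H / H at a = a_d: {delta_H_over_H(ad, a0, ad, c2_gamma, gamma_dev):.4f}")
```

For these parameters the backreaction is a small positive correction to the expansion rate, dominated by the logarithmic term, consistent with the smallness of the dissipative effects assumed throughout.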
Substituting Eq. (34) into Eq. (33), we find that the bulk stress becomes
$$\Pi = 2c_2\,\bar{\rho}\,\exp\!\left[\frac{1}{3}\left(\frac{a_d}{a}\right)^{3}\right]\Gamma + O(\Gamma^2) , \qquad (36)$$
where $\bar{\rho} = 3\bar{H}^2$ is the equilibrium energy density. Since Π < 0, we require c_2 < 0. Below we find a prescription for c_2 in terms of physical parameters.
IV. CONCLUSION
In order to complete the model, we need to determine the remaining arbitrary constant c_2 in terms of physical parameters. A rough estimate, which is consistent with the simplicity of the model, arises as follows. We estimate the duration of the dissipative process as
$$\Delta a \approx a_d - a_0 , \qquad (37)$$
i.e. we assume that the process ends at a_d. Then by Eqs. (8) and (13), the fractional viscous rise in temperature due to decoupling is approximately
$$\frac{\Delta T}{T} \approx -\frac{\Pi(a_0)}{\rho(a_0)}\,\frac{\Delta a}{a_0} . \qquad (38)$$
We can consider the fractional temperature increase as an input from previous kinetic-theory investigations (as described in the introduction), which typically predict it to be O(10^{-3}).² Then equations (36)-(38) and (33) allow us to estimate the constant c_2 in terms of the physical parameters a_d/a_0, ΔT/T and Γ as:
$$c_2\Gamma \approx -\frac{1}{2}\,\frac{\Delta T}{T}\;\frac{\exp\!\left[-\frac{1}{3}\left(\frac{a_d}{a_0}\right)^{3}\right]}{\left(a_d/a_0\right) - 1} . \qquad (39)$$
Finally, we can also estimate the entropy production due to decoupling. By Eqs. (4) and (38), the viscous increase in entropy per particle is approximately
$$\Delta s \approx 3\,\frac{\Delta T}{T} . \qquad (40)$$
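As a closure check on Eqs. (36)-(40): if c_2Γ is chosen according to Eq. (39), then inserting Π(a_0) from Eq. (36) back into Eq. (38) must return the input ΔT/T. The sketch below verifies this for hypothetical values ΔT/T = 10⁻³ and a_d/a_0 = 1.5 (neither is fixed by the text):

```python
import math

# Closure check of Eqs. (36)-(39): with c2*Gamma fixed via Eq. (39), the
# viscous temperature rise -[Pi(a0)/rho(a0)] * (Delta a / a0) of Eq. (38)
# should reproduce the input Delta T / T.
dT_over_T = 1e-3   # hypothetical input, O(1e-3) as quoted in the text
x = 1.5            # hypothetical ratio a_d / a_0

c2_gamma = -0.5 * dT_over_T * math.exp(-x**3 / 3.0) / (x - 1.0)   # Eq. (39)
pi_over_rho_at_a0 = 2.0 * c2_gamma * math.exp(x**3 / 3.0)          # Eq. (36)
recovered = -pi_over_rho_at_a0 * (x - 1.0)                         # Eq. (38)

delta_s = 3.0 * dT_over_T                                          # Eq. (40)
print(f"recovered Delta T/T = {recovered:.3e}, Delta s ~ {delta_s:.1e}")
```

The recovered value matches the input exactly, and the corresponding entropy production per particle is Δs ∼ 3 × 10⁻³.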
Our model describes the response of the cosmic fluid to a bulk stress, which is a very simple thermo-hydrodynamic approximation to more realistic kinetic theory models of neutrino decoupling, but which nevertheless accommodates the dissipative effects and respects relativistic causality. The simplicity of our model allows us to derive analytic forms for the dynamical quantities and the backreaction effects, but it does not incorporate a mechanism for bringing the dissipative process to an end.
¹ The same reasoning applies when the temperature is barotropic.

² Note that this small temperature increase is due to dissipative heating, and is not to be confused with the larger temperature increase arising from electron-positron annihilation, which occurs after neutrino decoupling. Our model does not consider the annihilation process.
Acknowledgements: This work was partially supported by a European Science Exchange Programme grant.

APPENDIX A: CHARACTERISTIC VELOCITIES FOR BULK VISCOUS PERTURBATIONS

Following [17], we derive equation (6) for the dissipative contribution to the sound speed. The full analysis of the causality and stability of the Israel-Stewart theory was performed in a series of papers by Hiscock and Lindblom [13,28]. They showed that both issues are closely related and obtained general expressions for the characteristic velocities for dissipative perturbations. Here we extract from their general expressions specific results for the case in which only bulk viscosity is present. The purely bulk viscous case stems from the general expressions of [13] by setting all the coefficients coupled to heat flux and shear viscosity to zero. This yields the speed of propagating transverse modes, which is what one expects for scalar sound-wave perturbations. Equation (128) of [13], governing the speed v = v_L of propagating longitudinal modes, becomes, on dividing by β_0β_2 and setting α_0 = α_1 = 0, an equation which, on dividing by β_1 and taking β_1 → ∞, gives v². The first term on the right is the adiabatic contribution c_s² to v², and the second term is the dissipative contribution c_b², which, requiring v² ≤ 1, leads to the inequality (A2). We also learn from [13] that causality and stability require a further condition for all λ such that 0 ≤ λ ≤ 1. This condition is seen to hold on account of the inequality (A2). The expression for c_b refines and corrects the statement in [29] (the first paper to apply causal bulk viscosity in cosmology) that ζ/ρτ = 1 is required by causality.
[1] S. Weinberg, Gravitation and Cosmology (Wiley, New York, 1972).
[2] A. D. Dolgov, astro-ph/9807134.
[3] M. A. Herrera and S. Hacyan, Astrophys. J. 336, 539 (1989).
[4] N. C. Raha and B. Mitra, Phys. Rev. D 44, 393 (1991).
[5] N. Fornengo, C. W. Kim, and J. Song, Phys. Rev. D 56, 5213 (1997).
[6] A. D. Dolgov and M. Fukugita, Phys. Rev. D 46, 5378 (1992).
[7] N. Y. Gnedin and O. Y. Gnedin, Astrophys. J. 509, 11 (1998).
[8] A. D. Dolgov, S. H. Hansen, and D. V. Semikoz, Nucl. Phys. B 503, 426 (1997).
[9] W. Zimdahl, D. Pavon, and R. Maartens, Phys. Rev. D 55, 4681 (1997).
[10] W. Israel and J. M. Stewart, Ann. Phys. (NY) 118, 341 (1979).
[11] D. Pavon, D. Jou, and J. Casas-Vázquez, Ann. Inst. H. Poincaré 36, 79 (1982).
[12] C. Eckart, Phys. Rev. 15, 919 (1940).
[13] W. A. Hiscock and L. Lindblom, Ann. Phys. (NY) 151, 466 (1983).
[14] M. A. Herrera and S. Hacyan, Phys. Fluids 28, 3253 (1985).
[15] R. Maartens and J. Triginer, Phys. Rev. D 58, 123507 (1998).
[16] R. Maartens and J. Triginer, Phys. Rev. D 56, 4640 (1997).
[17] R. Maartens, in Hanno Rund Conference on Relativity and Thermodynamics, ed. S. D. Maharaj (University of Natal, South Africa, 1996), astro-ph/9609119.
[18] R. Maartens and V. Méndez, Phys. Rev. D 55, 1937 (1997).
[19] W. A. Hiscock and J. Salmonson, Phys. Rev. D 43, 3249 (1991).
[20] R. Maartens, Class. Quantum Grav. 12, 1455 (1995).
[21] W. Zimdahl, Phys. Rev. D 53, 5483 (1996).
[22] V. Méndez and J. Triginer, J. Math. Phys. 37, 2906 (1996).
[23] T. Padmanabhan, Structure Formation in the Universe (Cambridge University Press, Cambridge, 1993).
[24] S. Weinberg, Astrophys. J. 168, 175 (1971).
[25] N. Udey and W. Israel, Mon. Not. R. Astron. Soc. 199, 1137 (1982).
[26] E. Kamke, Differentialgleichungen: Lösungsmethoden und Lösungen I (Teubner, Stuttgart, 1983), p. 117.
[27] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products (Academic, London, 1980), p. 925.
[28] W. A. Hiscock and L. Lindblom, in Contemporary Mathematics: Mathematics and General Relativity, ed. J. Isenberg (American Math. Society, 1988).
[29] V. A. Belinskii, E. S. Nikomarov, and I. M. Khalatnikov, Sov. Phys. JETP 50, 213 (1979).
The electron-phonon processes of the nitrogen-vacancy center in diamond
13 Mar 2015
Taras Plakhotnik
School of Mathematics and Physics
The University of Queensland
St Lucia, QLD 4072, Australia
Marcus W Doherty
Laser Physics Centre
Research School of Physics and Engineering
Australian National University
ACT 2601, Australia
Neil B Manson
Laser Physics Centre
Research School of Physics and Engineering
Australian National University
ACT 2601, Australia
(Dated: March 16, 2015)
Applications of the negatively charged nitrogen-vacancy center in diamond exploit the center's unique optical and spin properties, which at ambient temperature are predominately governed by electron-phonon interactions. Here, we investigate these interactions at ambient and elevated temperatures by observing the motional narrowing of the center's excited state spin resonances. We determine that the center's Jahn-Teller dynamics are much slower than currently believed and identify the vital role of symmetric phonon modes. Our results have pronounced implications for the center's diverse applications (including quantum technology) and for understanding its fundamental properties.
The negatively charged nitrogen-vacancy (NV) center is a point defect in diamond [1] that has found diverse applications in quantum technology. The center is employed as a highly sensitive nanoscale sensor of electromagnetic fields [2-7], temperature [8-13] and pressure [14] that can operate in ambient and extreme conditions. Recent NV metrology proposals include gyroscopy [15-18] and the development of hybrid [19] and multi-mode [13] sensors. In quantum information science, the NV center is used to realize spin registers [20-22] at room temperature and spin-photon entanglement [23,24] at cryogenic temperatures. A new direction in NV quantum information science seeks to exploit spin-phonon coupling to enhance NV spin registers and develop novel quantum devices [25-28].
The applications of the NV center are based upon its remarkable optical and spin properties. The center's room temperature applications primarily rely upon its bright optical fluorescence, long-lived ground state spin coherence and methods of optical spin polarization and readout. The latter enable the optical detection of the center's magnetic resonances (ODMR) and are the consequence of spin-dependent phonon-mediated intersystem crossings (ISCs) [29,30]. The center's cryogenic applications also employ the coherence of the center's visible zero-phonon line (ZPL). The necessity of cryogenics arises from the temperature dependent electron-phonon induced dephasing and depolarization of the ZPL [36,37]. Electron-phonon coupling is also responsible for the motional narrowing of the center's excited state spin resonances, which determines their utility as an additional quantum resource for sensing and information processing [13,31]. Thus, a thorough understanding of the NV center's electron-phonon interactions is important to the continued advancement of its applications and may be generalized to similar defects with emerging quantum applications, such as the silicon-vacancy center in diamond [32,33] and centers in silicon carbide [34,35]. Here, we show that there exist several issues in the current understanding and identify possible resolutions.
The electronic structure of the NV center is depicted in Fig. 1. The optical transitions of the visible ZPL occur between the ground 3A2 and excited 3E spin triplet levels. The temperature dependent broadening of the ZPL was initially described [36] using the widely applicable model of quadratic electron-phonon interactions with A_1 phonon modes [38]. However, subsequent single center cryogenic measurements revealed that the broadening was more consistent with the characteristic ∝ T^5 temperature dependence of linear electron-phonon (Jahn-Teller) interactions with E phonon modes [37]. These interactions induce population transfer between the quasidegenerate orbital states (|X⟩, |Y⟩) of the 3E level (see Fig. 1), which dephases the optical transitions and leads to the depolarization of the ZPL fluorescence [37]. Applying their Jahn-Teller model, Fu et al [37] identified a factor of ∼ 2 inconsistency between the population transfer rates that describe the ZPL broadening and depolarization at low temperatures. By introducing a phonon cutoff energy, Abtew et al [39] attempted to extend Fu et al's model to describe the ZPL broadening up to room temperature. In doing so, they obtained a cutoff at 50 meV for E phonons, which is much lower than the diamond Debye energy ω_D ≈ 168 meV [40] and the features of the NV electron-phonon spectral density extracted from the visible phonon sideband [41].
At temperatures ≲ 30 K, the complicated six level 3E fine structure (see Fig. 1) is observed via high resolution optical spectroscopy [29]. Above ≈ 150 K [42], the population transfer between the 3E orbital states is sufficiently fast to dynamically average the 3E fine structure so that ODMR resembles the simpler three level structure of the ground state [43]. The dynamically averaged fine structure is temperature dependent and is described by the spin Hamiltonian [13]
H = D (S_z^2 − 2/3) − D_⊥ R(T) (S_x^2 − S_y^2),   (1)
where D = 1.42 GHz and D_⊥ = 0.775 GHz are the 3E spin-spin parameters, R(T) = (e^{hξ_⊥/k_B T} − 1)/(e^{hξ_⊥/k_B T} + 1) is the temperature reduction factor, h and k_B are the Planck and Boltzmann constants, respectively, and ξ_⊥ is the 3E strain splitting. Note that the negligible contribution of the λ_⊥ spin-orbit term (see Ref. 13) to the fine structure is ignored here. The dynamical averaging is also expected to motionally narrow the 3E ODMR, since the rapid population transfer decouples the orbital and spin degrees of freedom. Fuchs et al have measured the 3E spin dephasing rate at room temperature [31]. They attributed the observed dephasing to the dynamical averaging process and, using a motional narrowing model, suggested that elevated temperatures or strain may decrease its rate. However, their proposal is yet to be tested by a systematic study of the motional narrowing effect. Furthermore, there is an inconsistency between Fuchs et al's observations and the current ZPL broadening model. If the population transfer rate (∼ 10 THz) at room temperature is inferred from the ZPL width [31], then the spin dephasing rate predicted by the motional narrowing model (∼ 1.2 MHz) is almost two orders of magnitude smaller than measured (∼ 92 MHz). The anomalously low cutoff of Abtew et al, the discrepancy identified by Fu et al, the conspicuous absence of interactions with A_1 modes, and the orders-of-magnitude larger than expected 3E spin dephasing rate all indicate problems in the current ZPL broadening model. The optical polarization and readout of the spin triplet levels is a result of spin-selective ISCs with the intermediate 1A1 and 1E spin-singlet levels (see Fig. 1
) [1].
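As a quick illustration (not from the paper), the temperature reduction factor R(T) of Eq. (1) can be evaluated directly. The inputs, the strain splitting hξ_⊥ ∼ 4.7 meV and D_⊥ = 775 MHz, are the values quoted in the text; the snippet itself is only a numerical sketch.

```python
import math

# Evaluate the temperature reduction factor of Eq. (1),
# R(T) = (e^{h xi/kB T} - 1)/(e^{h xi/kB T} + 1) = tanh(h xi / 2 kB T),
# for the nanodiamond strain splitting h*xi_perp ~ 4.7 meV quoted in the text.
KB_MEV_PER_K = 8.617e-2   # Boltzmann constant (meV / K)
D_PERP_MHZ = 775.0        # 3E spin-spin parameter D_perp (MHz)

def reduction_factor(h_xi_mev: float, temp_k: float) -> float:
    x = h_xi_mev / (KB_MEV_PER_K * temp_k)
    return (math.exp(x) - 1.0) / (math.exp(x) + 1.0)

for T in (295.0, 400.0, 550.0):
    R = reduction_factor(4.7, T)
    print(f"T = {T:3.0f} K: R(T) = {R:.3f}, D_perp * R(T) = {D_PERP_MHZ * R:.0f} MHz")
```

The orbital part of the averaged fine structure is suppressed by roughly an order of magnitude already at room temperature and keeps shrinking as T rises, consistent with the reduced ODMR splitting reported below.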
Goldman et al [29,30] have developed a detailed model of the electron-phonon mechanisms that govern the ISC from 3 E to 1 A 1 . However, in order to validate Goldman et al's model and extend it to room temperature, more detailed and quantitative knowledge of linear E phonon interactions is required. This knowledge can be improved by extending the current measurements of the population transfer rates at cryogenic temperatures to room temperature and beyond.
In this paper, we report observations of the 3 E ODMR of NV centers in nanodiamond over the temperature range 295-550 K. We show that the ODMR is well described by a motional narrowing model and extract the population transfer rates. We establish that the rates are much slower than currently believed and do not account for the observed ZPL broadening at room temperature. We propose that quadratic A 1 phonon interactions contribute significantly to the ZPL already above 30 K. Finally, we back up and rectify the proposals of Fuchs et al, resolve the inconsistencies of the ZPL broadening model and provide valuable insight into electron-phonon coupling above cryogenic temperatures.
Our continuous wave ODMR experiments were performed using 532 nm laser excitation and fluorescence collection via an epifluorescence design. The nanodiamonds were spin coated on a silica substrate. The NV spin resonances were driven by a radio-frequency (RF) magnetic field created by a gold wire deposited onto the substrate. The excitation laser spot overlapped with the wire and the optical heating of the wire was used to control the temperature of a chosen nanodiamond. See Ref. 13 for further experimental details. On average, the nanodiamonds had a diameter of ∼ 30 nm and contained ∼ 15 NV centers. We performed ODMR measurements on a total of 10 nanodiamonds. The results from one nanodiamond are presented here and are consistent with the rest of the sample and, as will be explained, with measurements in bulk diamond. Previous optical spectroscopy has measured the 3E strain splitting of the nanodiamond to be hξ_⊥ ∼ 4.7 meV [13]. This large strain splitting permits a simple 3E fine structure (see Fig. 1) and application of the motional narrowing model. Note that the strain splitting in previous reports [31,37] has been much smaller.
Examples of ODMR spectra are shown in Fig. 2. Averaging over the unresolved 3 E hyperfine structure, the observed ODMR splitting is [13]
∆_ODMR = (2/3) D_⊥ R(T) + (4/3) [A^2 + D_⊥^2 R^2(T)]^{1/2},   (2)
where A ≈ 40 MHz is the isotropic hyperfine parameter. RF-power broadening is evident in Fig. 2. Similar to the analysis of the 3 A 2 ODMR in Ref. 44, a five-level model of the optical and spin dynamics yields the following expressions for the ODMR linewidth Γ ODMR and contrast C ODMR
Γ_ODMR = Γ_ODMR^(inh) + Γ_ODMR^(h) [1 + 4πκP_RF/(Γ_ODMR^(h) γ_1)]^{1/2},
C_ODMR = C_ODMR^(max) · 4πκP_RF/(4πκP_RF + γ_1 Γ_ODMR^(h)),   (3)
where Γ_ODMR^(h) and Γ_ODMR^(inh) are the homogeneous and inhomogeneous linewidths in the absence of power broadening, P_RF is the RF power, κ is a proportionality factor such that κP_RF is the spin Rabi frequency, and γ_1 is the effective spin relaxation rate. The essential difference to Ref. 44 is a much weaker, but more complicated, dependence of γ_1 on the laser power. At low laser powers, γ_1 ≈ k k_ISC/(k + 0.5 k_ISC) ≈ 22 MHz, where k ≈ 20 MHz [45] is the 3E radiative decay rate in nanodiamond of the same type and origin as used in this work and k_ISC ≈ 50 MHz is the average 3E ISC rate [29,45]. Stress inhomogeneity and the unresolved hyperfine structure contribute to Γ_ODMR^(inh), whereas the homogeneous linewidth
Γ_ODMR^(h) = Γ_∞ + Γ_MN(T)
is the sum of the broadening due to the 3E orbital decay rate Γ_∞ = (k + 0.5 k_ISC)/π and motional narrowing Γ_MN(T). Whilst the orbital decay rate increases at high temperature [8,46], this temperature dependence is ignored in the following because the contribution of Γ_∞ to the observed Γ_ODMR changes little, from 14 MHz to 17 MHz between 295 K and 500 K. The spin bath dephasing contribution to Γ_ODMR^(h) is also ignored because it has been assessed using the 3A2 ODMR to be negligibly small (1-2 MHz). In the fast exchange approximation of motional narrowing [47,48], where the population transfer rates (W_↑, W_↓) are much larger than the jump in the spin resonances between the 3E orbital states (2D_⊥), Γ_MN(T) ≈ β(T) 2πD_⊥^2/W_↓. The factor β(T) = 8 e^{−hξ_⊥/k_B T}/(e^{−hξ_⊥/k_B T} + 1)^3 is close to 1 above room temperature. Thus, as W_↓ increases with temperature, Γ_MN decreases.
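The saturation behaviour encoded in Eq. (3) can be sketched numerically. The parameter values used below (Γ^(h), Γ^(inh), γ_1, κ, C^(max)) are the fit results quoted later in the text; the snippet only illustrates the expected RF-power dependence and is not the authors' fitting code.

```python
import math

# Minimal sketch of the power-broadening expressions in Eq. (3), using the
# fitted room-temperature parameters quoted later in the text.  Illustrative
# only -- not the actual data analysis.
GAMMA_H = 55.0      # homogeneous linewidth at room temperature (MHz)
GAMMA_INH = 33.0    # inhomogeneous linewidth (MHz)
GAMMA_1 = 22.0      # effective spin relaxation rate (MHz)
KAPPA = 210.0       # Rabi-frequency proportionality factor (MHz^2 / W)
C_MAX = 16.0        # maximum ODMR contrast (percent)

def linewidth(p_rf: float) -> float:
    """ODMR linewidth (MHz) at RF power p_rf (W), first line of Eq. (3)."""
    s = 4.0 * math.pi * KAPPA * p_rf
    return GAMMA_INH + GAMMA_H * math.sqrt(1.0 + s / (GAMMA_H * GAMMA_1))

def contrast(p_rf: float) -> float:
    """ODMR contrast (percent) at RF power p_rf (W), second line of Eq. (3)."""
    s = 4.0 * math.pi * KAPPA * p_rf
    return C_MAX * s / (s + GAMMA_1 * GAMMA_H)

for p in (0.05, 0.2, 0.4):   # RF powers comparable to those in Fig. 3
    print(f"P_RF = {p:.2f} W: linewidth ~ {linewidth(p):.0f} MHz, "
          f"contrast ~ {contrast(p):.1f}%")
```

Both the linewidth and the contrast grow monotonically with RF power, reproducing the power broadening visible in Figs. 2 and 3.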
In the temperature regime k B T ≫ hξ ⊥ , Raman scattering of E phonons dominate the population transfer rates which read [29]
W_↓ = B_E T^5 ∫_{x_⊥}^{Ω_E/k_B T} [x^2 e^x (x − x_⊥)^2] / [(e^x − 1)(e^{x−x_⊥} − 1)] dx,   (4)
and W_↑ = W_↓ e^{−hξ_⊥/k_B T}, where x_⊥ = hξ_⊥/k_B T and Ω_E is the cutoff energy for E phonons. The deformation potential and Debye density of states for acoustic phonons have been assumed, such that the corresponding electron-phonon spectral density is J_E(ω) ≈ η_E ω^3 and the constant B_E = 64 η_E^2 k_B^5/(πh). Whilst in the simplest case Ω_E = ω_D, the cutoff is often considered as a phenomenological parameter which takes into account the departure from J_E(ω) ∝ ω^3. In the high temperature regime 1 ≫ x_⊥, Ω_E/k_B T applicable to our work, the integral in Eq. (4) simplifies so that W_↓ ≈ QT^2 and hence Γ_ODMR^(h) = Γ_∞ + β(T) 2πD_⊥^2/(QT^2).
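The high-temperature limit W_↓ ≈ QT^2 can be checked by integrating Eq. (4) numerically. The parameter values (B_E = 1.32 Hz K^-5, Ω_E = 13 meV, hξ_⊥ = 4.6 meV) are the fit results quoted later in the text; the midpoint-rule integration below is our own rough sketch, not the fitting procedure.

```python
import math

# Order-of-magnitude evaluation of the Raman rate W_down of Eq. (4) and its
# high-temperature T^2 scaling.  Parameter values are the fit results quoted
# later in the text; treat this as a consistency check only.
KB = 8.617e-2            # Boltzmann constant (meV / K)
B_E = 1.32               # Hz / K^5
OMEGA_E = 13.0           # E-phonon cutoff energy (meV)
H_XI = 4.6               # strain splitting h*xi_perp (meV)

def w_down(temp_k: float, n: int = 2000) -> float:
    """Eq. (4) via the midpoint rule (the integrand is 0/0 at the lower
    endpoint x = x_perp, which the midpoints avoid)."""
    x_perp = H_XI / (KB * temp_k)
    x_max = OMEGA_E / (KB * temp_k)
    def f(x):
        return (x**2 * math.exp(x) * (x - x_perp)**2
                / ((math.exp(x) - 1.0) * (math.exp(x - x_perp) - 1.0)))
    h = (x_max - x_perp) / n
    return B_E * temp_k**5 * h * sum(f(x_perp + (i + 0.5) * h) for i in range(n))

for T in (300.0, 500.0):
    w = w_down(T)
    print(f"T = {T:.0f} K: W_down ~ {w/1e9:.0f} GHz, W_down/T^2 ~ {w/T**2/1e6:.2f} MHz/K^2")
```

For 300-500 K the ratio W_↓/T^2 comes out within tens of percent of the fitted Q ≈ 0.83 MHz K^-2, supporting the T^2 approximation in this regime.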
Systematic measurements of the ODMR linewidth, contrast and splitting at different temperatures, RF and laser powers are presented in Fig. 3. The weak optical-power dependence [inset of Fig. 3(a)] supports the approximation γ_1 ≈ 22 MHz. The simultaneous fitting of the six data sets using the five parameters yields Γ_ODMR^(inh) = 33 ± 3 MHz, κ = 210 ± 40 MHz^2 W^{-1}, C_ODMR^(max) = 16 ± 2%, Q = 0.83 ± 0.06 MHz K^{-2}, and hξ_⊥ = 4.6 ± 0.2 meV. The values of κ and hξ_⊥ are in reasonable agreement with the parameters of the RF wire and previous optical spectroscopy, respectively. The fitting yields Γ_ODMR^(h) = 55 MHz at room temperature. The dephasing rate measured by Fuchs et al at room temperature in bulk diamond corresponds to Γ_ODMR^(h) ∼ 29 MHz. Taking into account that the much smaller stress splitting ξ_⊥ of Fuchs et al's NV center will increase Q by ∼ 2, the two values are in agreement. Hence, we conclude that our nanodiamond measurements are consistent with bulk diamond and capture intrinsic phenomena of the NV center.
The previous measurements of the ZPL width [37] are plotted in Fig. 4 together with W_↓/(2π) calculated here using the value of Q that we obtained by fitting our motional narrowing observations (rescaled to ξ_⊥ = 0 to match the stress splitting in Ref. 37). It is evident that the rates are orders of magnitude too small to account for the ZPL width alone. We propose that the additional width is due to quadratic interactions with A_1 modes that purely dephase the optical transitions [36,38]. In that case, the ZPL width is [49]
Γ_ZPL = W_↓/(2π) + W_A/π + γ_0,   (5)
where W_A is the additional dephasing rate and γ_0 is the approximately temperature independent contribution of the optical decay rate. As per a similar derivation of W_↓,
W_A = B_A T^7 ∫_0^{Ω_A/k_B T} [x^6 e^x / (e^x − 1)^2] dx,   (6)
where B_A is a constant and Ω_A is the cutoff energy of A_1 phonons. We used Eqs. (4)-(6) and fitted the ZPL width measurements to obtain B_E = 1.32 Hz K^{-5}, Ω_E = 13 ± 1 meV, B_A = 24 ± 4 µHz K^{-7}, Ω_A = 37 ± 2 meV and γ_0 = 16.2 ± 0.5 MHz (in bulk diamond). We confirmed our parameters (B_E and Ω_E) of the population transfer rates by also fitting the polarization visibility measurements of Ref. 37 (see Fig. 4). Our fit of the visibility curve is practically indistinguishable from that of Ref. 37. The phonon cutoffs that we obtained are much lower than expected. We attribute this to the inadequacies of the acoustic approximation of the phonon spectral density J(ω) ≈ η_E ω^3 [36,38] and consider the cutoffs as phenomenological. Noting that Ω_E/k_B ∼ 155 K, these inadequacies are negligible at low temperatures, which explains why they were not detected in previous cryogenic measurements [29,37]. Interestingly, Ω_E is close to the calculated Jahn-Teller barrier energy ∼ 10 meV [39]. Note that the spectral density extracted, for example, from the visible phonon sideband represents contributions of both E and A_1 phonons due to linear electron-phonon interactions. It is difficult to distinguish the effects of A_1 and E modes on the phonon band experimentally, and ab initio calculations therefore appear to be the best avenue for future advancement to resolve the puzzle.

FIG. 1. The electronic and fine structures of the NV center at high stress. The 3E sub-levels are labelled by their product of orbital (|X⟩, |Y⟩) and spin (|0⟩, |S±⟩) states, where the spin states are solutions of (1). The 3E fine structure splittings (D± = D ± D_⊥) are denoted in red. The 3A2 sub-levels are denoted by their spin projection (m_s = 0, ±1). The optical transitions of the spin triplet and singlet levels are depicted as black solid arrows and the visible ZPL is at 1.945 eV. Blue dashed arrows represent the population transfers within the 3E (rates W_↓ and W_↑). The black dashed arrows denote the allowed ISCs between the spin triplet and singlet levels.

FIG. 3. A: ODMR linewidth as a function of temperature at RF powers of 400, 200, and 50 mW (top to bottom). Inset shows the weak optical power dependence of the linewidth at 294 K and at two RF powers: 47 mW (bottom) and 380 mW (top). B and C show the RF-power dependence of the linewidth and contrast at 294 K and 100 mW optical power. D: ODMR splitting at different temperatures (50 mW RF power). Error bars are determined by the statistics of repeated measurements. The plotted linewidth is the average width of the two lines.

FIG. 4. Blue points are the ZPL width measured in Ref. 37. The black solid curve depicts the fit of our model and the black dashed curve is the extended Jahn-Teller model of Ref. 39. The red solid curve is the contribution of W_↓ to the ZPL width according to (4,5). The red dots show W_↓/2π derived from ODMR data alone. Inset: the ZPL polarization visibility of two NV centers (red and blue points) from [37]. The solid curve is our fit obtained using the model V = (W_↑ − W_↓ ± r(1 − a)/(1 + a))/(W_↓ + W_↑ + r), where a = 0.40 ± 0.02 and r = 80 MHz are defined in Ref. 37, and W_↓ and W_↑ are determined by our fit of the ZPL width.
This work was supported by the Australian Research Council under the Discovery Project scheme DP0771676 and DP120102232.

* [email protected]

[1] M.W. Doherty, N.B. Manson, P. Delaney, F. Jelezko, J. Wrachtrup and L.C.L. Hollenberg, Physics Reports 528, 1 (2013).
[2] D. Le Sage, K. Arai, D.R. Glenn, S.J. DeVience, L.M. Pham, L. Rahn-Lee, M.D. Lukin, A. Yacoby, A. Komeili and R.L. Walsworth, Nature 496, 486 (2013).
[3] M.S. Grinolds, S. Hong, P. Maletinsky, L. Luan, M.D. Lukin, R.L. Walsworth and A. Yacoby, Nature Physics
FIG. 2. Example ODMR spectra at different temperatures (315 K upper, 455 K lower) and RF powers (440 mW left, 55 mW right). The narrowing and reduced splitting of the lines at higher temperature as well as power broadening at higher RF power can be seen. The lineshape fits (solid lines) are the sum of two Lorentzians and a linear background. [Axes: RF frequency (GHz) vs. luminescence intensity (arb. u.)]
Our value of B_E also agrees with the value ∼ 1.6 Hz K^{-5} obtained in Ref. 37. Unlike Ref. 37, we use the same value of B_E for the ZPL and visibility fits, and our fit to the ZPL data better describes the low and room temperature regions than the extended Jahn-Teller model presented in Ref. 39. Most importantly, the ZPL broadening is fully consistent with our ODMR measurements at elevated temperatures.
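The full ZPL width of Eq. (5) can be sketched by combining the E-phonon Raman rate of Eq. (4) (with ξ_⊥ → 0, as appropriate for the bulk NV of Ref. 37) and the quadratic A_1 dephasing rate of Eq. (6). The parameter values are the fit results quoted in the text; the midpoint integration is our own rough scheme, not the authors' fitting code.

```python
import math

# Rough room-temperature evaluation of Eq. (5) using the fitted parameters
# B_E = 1.32 Hz K^-5, Omega_E = 13 meV, B_A = 24 uHz K^-7, Omega_A = 37 meV,
# gamma_0 = 16.2 MHz (bulk diamond, xi_perp -> 0).  Illustrative only.
KB = 8.617e-2   # Boltzmann constant (meV / K)

def midpoint(f, a, b, n=4000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def w_down_bulk(T):
    # Eq. (4) with x_perp = 0: the integrand reduces to x^4 e^x / (e^x - 1)^2
    f = lambda x: x**4 * math.exp(x) / (math.exp(x) - 1.0)**2
    return 1.32 * T**5 * midpoint(f, 0.0, 13.0 / (KB * T))

def w_a(T):
    # Eq. (6), quadratic A_1 dephasing
    f = lambda x: math.exp(x) * x**6 / (math.exp(x) - 1.0)**2
    return 24e-6 * T**7 * midpoint(f, 0.0, 37.0 / (KB * T))

T = 300.0
gamma_zpl = w_down_bulk(T) / (2 * math.pi) + w_a(T) / math.pi + 16.2e6   # Hz, Eq. (5)
print(f"W_down/2pi ~ {w_down_bulk(T)/(2*math.pi)/1e9:.0f} GHz, "
      f"W_A/pi ~ {w_a(T)/math.pi/1e12:.1f} THz, Gamma_ZPL ~ {gamma_zpl/1e12:.1f} THz")
```

With these fitted values the A_1 dephasing term dominates the room-temperature width by roughly two orders of magnitude, consistent with the claim that the population transfer rates alone cannot account for the observed broadening.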
[4] H.J. Mamin, M. Kim, M.H. Sherwood, C.T. Rettner, K. Ohno, D.D. Awschalom and D. Rugar, Science 339, 557 (2013).
[5] T. Staudacher, F. Shi, S. Pezzagna, J. Meijer, J. Du, C.A. Meriles, F. Reinhard and J. Wrachtrup, Science 339, 561 (2013).
[6] F. Dolde, H. Fedder, M.W. Doherty, T. Nöbauer, F. Rempp, G. Balasubramanian, T. Wolf, F. Reinhard, L.C.L. Hollenberg, F. Jelezko and J. Wrachtrup, Nature Physics 7, 459 (2011).
[7] F. Dolde, M.W. Doherty, J. Michl, I. Jakobi, B. Naydenov, S. Pezzagna, J. Meijer, P. Neumann, F. Jelezko, N.B. Manson and J. Wrachtrup, Phys. Rev. Lett. 112, 097603 (2014).
[8] D.M. Toyli, D.J. Christle, A. Alkauskas, B.B. Buckley, C.G. Van de Walle and D.D. Awschalom, Phys. Rev. X 2, 031001 (2012).
[9] D.M. Toyli, C.F. de las Casas, D.J. Christle, V.V. Dobrovitski and D.D. Awschalom, PNAS 110, 8417 (2013).
[10] P. Neumann, I. Jakobi, F. Dolde, C. Burk, R. Reuter, G. Waldherr, J. Honert, T. Wolf, A. Brunner, J.H. Shim, D. Suter, H. Sumiya, J. Isoya and J. Wrachtrup, Nano Lett. 13, 2738 (2013).
[11] G. Kucsko, P.C. Maurer, N.Y. Yao, M. Kubo, H.J. Noh, P.K. Lo, H. Park and M.D. Lukin, Nature 500, 54 (2013).
[12] M.W. Doherty, V.M. Acosta, A. Jarmola, M.S.J. Barson, N.B. Manson, D. Budker and L.C.L. Hollenberg, Phys. Rev. B 90, 041201(R) (2014).
[13] T. Plakhotnik, M.W. Doherty, J.H. Cole, R. Chapman and N.B. Manson, Nano Lett. 14, 4989 (2014).
[14] M.W. Doherty, V.V. Struzhkin, D.A. Simpson, L.P. McGuinness, Y. Meng, A. Stacey, T.J. Karle, R.J. Hemley, N.B. Manson, L.C.L. Hollenberg and S. Prawer, Phys. Rev. Lett. 112, 047601 (2014).
[15] D. Maclaurin, M.W. Doherty, L.C.L. Hollenberg and A.M. Martin, Phys. Rev. Lett. 108, 240403 (2012).
[16] M. Ledbetter, K. Jensen, R. Fischer, A. Jarmola and D. Budker, Phys. Rev. A 86, 052116 (2012).
[17] A. Ajoy and P. Cappellaro, Phys. Rev. A 86, 062104 (2012).
[18] M.W. Doherty, J. Michl, F. Dolde, I. Jakobi, P. Neumann, N.B. Manson and J. Wrachtrup, New J. Phys. 16, 063067 (2014).
[19] J. Cai, F. Jelezko and M.B. Plenio, Nature Comm. 5, doi:10.1038/ncomms5065 (2014).
[20] M.V.G. Dutt, L. Childress, L. Jiang, E. Togan, J. Maze, F. Jelezko, A.S. Zibrov, P.R. Hemmer and M.D. Lukin, Science 316, 1312 (2007).
[21] P. Neumann, N. Mizuochi, F. Rempp, P. Hemmer, H. Watanabe, S. Yamasaki, V. Jacques, T. Gaebel, F. Jelezko and J. Wrachtrup, Science 320, 1326 (2008).
[22] G. Waldherr, Y. Wang, S. Zaiser, M. Jamali, T. Schulte-Herbrueggen, H. Abe, T. Ohshima, J. Isoya, J.F. Du, P. Neumann and J. Wrachtrup, Nature 506, 204 (2014).
[23] E. Togan, Y. Chu, A.S. Trifonov, L. Jiang, J. Maze, L. Childress, M.V.G. Dutt, A.S. Soerensen, P.R. Hemmer, A.S. Zibrov and M.D. Lukin, Nature 466, 730 (2010).
[24] H. Bernien, B. Hensen, W. Pfaff, G. Koolstra, M.S. Blok, L. Robledo, T.H. Taminiau, M. Markham, D.J. Twitchen, L. Childress and R. Hanson, Nature 497, 86 (2013).
[25] S.D. Bennett, N.Y. Yao, J. Otterbach, P. Zoller, P. Rabl and M.D. Lukin, Phys. Rev. Lett. 110, 156402 (2013).
[26] E.R. MacQuarrie, T.A. Gosavi, N.R. Jungwirth, S.A. Bhave and G.D. Fuchs, Phys. Rev. Lett. 111, 227602 (2013).
[27] K.V. Kepesidis, S.D. Bennett, S. Portolan, M.D. Lukin and P. Rabl, Phys. Rev. B 88, 064105 (2013).
[28] P. Ovartchaiyapong, K.W. Lee, B.A. Myers and A.C. Bleszynski Jayich, Nature Comm. 5, doi:10.1038/ncomms5429 (2014).
[29] M.L. Goldman, A. Sipahigil, M.W. Doherty, N.Y. Yao, S.D. Bennett, M. Markham, D.J. Twitchen, N.B. Manson, A. Kubanek and M.D. Lukin, arXiv:1406.4065 (2014).
[30] M.L. Goldman, M.W. Doherty, A. Sipahigil, N.Y. Yao, S.D. Bennett, N.B. Manson, A. Kubanek and M.D. Lukin, arXiv:1412.4865 (2014).
[31] G.D. Fuchs, V.V. Dobrovitski, D.M. Toyli, F.J. Heremans, C.D. Weis, T. Schenkel and D.D. Awschalom, Nature Phys. 6, 668 (2010).
[32] B. Pingault, J.N. Becker, C.H.H. Schulte, C. Arend, C. Hepp, T. Godde, A.I. Tartakovskii, M. Markham, C. Becher and M. Atatüre, Phys. Rev. Lett. 113, 263601 (2014).
[33] L.J. Rogers, K.D. Jahnke, M.H. Metsch, A. Sipahigil, J.M. Binder, T. Teraji, H. Sumiya, J. Isoya, M.D. Lukin, P. Hemmer and F. Jelezko, Phys. Rev. Lett. 113, 263602 (2014).
[34] D.J. Christle, A.L. Falk, P. Andrich, P.V. Klimov, J. Hassan, N.T. Son, E. Janzén, T. Ohshima and D.D. Awschalom, Nature Materials, doi:10.1038/nmat4144 (2014).
[35] M. Widmann, S.-Y. Lee, T. Rendler, N. Tien Son, H. Fedder, S. Paik, L.-P. Yang, N. Zhao, S. Yang, I. Booker, A. Denisenko, M. Jamali, S.A. Momenzadeh, I. Gerhardt, T. Ohshima, A. Gali, E. Janzén and J. Wrachtrup, Nature Materials, doi:10.1038/nmat4145 (2014).
[36] G. Davies, J. Phys. C: Solid State Phys. 7, 3797 (1974).
[37] K.-M.C. Fu, C. Santori, P.E. Barclay, L.J. Rogers, N.B. Manson and R.G. Beausoleil, Phys. Rev. Lett. 103, 256404 (2009).
[38] A.A. Maradudin, Solid State Phys. 18, 274 (1966).
[39] T.A. Abtew, Y.Y. Sun, B.-C. Shih, P. Dev, S.B. Zhang and P. Zhang, Phys. Rev. Lett. 107, 146403 (2011).
[40] A.M. Zaitsev, Optical Properties of Diamond: A Data Handbook (Springer, New York, 2001).
[41] P. Kehayias, M.W. Doherty, D. English, R. Fischer, A. Jarmola, K. Jensen, N. Leefer, P. Hemmer, N.B. Manson and D. Budker, Phys. Rev. B 88, 165202 (2013).
[42] A. Batalov, V. Jacques, F. Kaiser, P. Siyushev, P. Neumann, L.J. Rogers, R.L. McMurtrie, N.B. Manson, F. Jelezko and J. Wrachtrup, Phys. Rev. Lett. 102, 195506 (2009).
[43] L.J. Rogers, R.L. McMurtrie, M.J. Sellars and N.B. Manson, New J. Phys. 11, 063007 (2009).
[44] K. Jensen, V.M. Acosta, A. Jarmola and D. Budker, Phys. Rev. B 87, 014115 (2013).
[45] R. Chapman and T. Plakhotnik, Phys. Rev. B 86, 045204 (2012).
[46] T. Plakhotnik and D. Gruber, Phys. Chem. Chem. Phys. 12, 9751 (2010).
[47] P.D. Reilly and J.L. Skinner, J. Chem. Phys. 101, 959 (1994).
[48] C.P. Slichter, Principles of Magnetic Resonance (Harper & Row, New York, 1963).
[49] C. Cohen-Tannoudji, J. Dupont-Roc and G. Grynberg, Atom-photon interactions (John Wiley and Sons Inc., New York, 1992).
Rationalizing right-handed neutrinos

Graham D. Kribs
[email protected]
Department of Physics, University of Wisconsin, Madison, WI 53706

arXiv:hep-ph/0304256v1 27 Apr 2003
One of the most remarkable features of the Standard Model (SM) is that matter fermions are chiral and yet all gauge [1,2] and gravitational [3] anomalies vanish for each generation. A known but not often emphasized fact about the matter content is that, given one generation with unfixed hypercharges, anomaly cancellation determines the relative hypercharge assignment to be precisely what has been established by experiment [4]. In other words, electric charge quantization is essentially automatic without grand unification. This fact, taken at face value, is circumstantial evidence against the existence of right-handed neutrinos. By definition a candidate for a right-handed neutrino is any fermion that is uncharged under all of the SM gauge symmetries. Yet, gauge symmetries are precisely the reason that each type of matter (Q, u, d, L, e) is tied with the other matter fields together in a self-consistent, exclusive fashion. In addition, non-chiral matter allows a new mass scale unconnected to electroweak symmetry breaking that only further complicates our understanding of mass generation and mass hierarchies. Extensions of the SM with non-chiral matter, such as adding right-handed neutrinos, therefore appear to be contrary to all of the guiding wisdom gleaned from experiment, at least until recently. (Those who are still in doubt need only observe the agony that the µ problem causes avatars of supersymmetry.) Neutrino experiments [5,6,7], however, have firmly established that the neutrinos oscillate between each generation and thus they have mass. The largest mass of any one neutrino is constrained to be less than about 2 eV [8], and more likely their mass is one to a few orders of magnitude below this, depending on the generation. The mechanism of mass generation for neutrinos remains a mystery. 
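The anomaly-cancellation statement above can be made concrete with a small, illustrative calculation (not part of the original paper): summing the standard hypercharge assignments over one generation of left-handed fields shows that the [SU(3)]^2 U(1), [SU(2)]^2 U(1), [U(1)]^3 and mixed gauge-gravitational anomalies all vanish exactly.

```python
from fractions import Fraction as F

# One SM generation as left-handed fields: Q (3,2,1/6), u^c (3bar,1,-2/3),
# d^c (3bar,1,1/3), L (1,2,-1/2), e^c (1,1,1).  Each entry records
# (SU(3) multiplicity, SU(2) multiplicity, hypercharge Y); 3 and 3bar are
# both recorded as 3 since their SU(3) index T(R) is equal.
fields = [(3, 2, F(1, 6)), (3, 1, F(-2, 3)), (3, 1, F(1, 3)),
          (1, 2, F(-1, 2)), (1, 1, F(1, 1))]

su3_sq_u1 = sum(d2 * y for d3, d2, y in fields if d3 == 3)   # [SU(3)]^2 U(1)
su2_sq_u1 = sum(d3 * y for d3, d2, y in fields if d2 == 2)   # [SU(2)]^2 U(1)
u1_cubed  = sum(d3 * d2 * y**3 for d3, d2, y in fields)      # [U(1)]^3
grav_u1   = sum(d3 * d2 * y for d3, d2, y in fields)         # grav^2 U(1)

print(su3_sq_u1, su2_sq_u1, u1_cubed, grav_u1)   # all four sums vanish
```

A right-handed neutrino (1,1,0) would contribute zero to every one of these sums, which is exactly the sense in which it is "unconnected" to the rest of the matter content.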
If neutrinos acquire mass analogously to the SM matter fermions, preserving lepton number, then the particle content must be extended with at least two right-handed neutrinos ν_{1,2}. Ordinary Yukawa terms L = λ_ν L H ν^c with tiny couplings λ_ν ≲ 10^{-12} suffice to explain the two undisputed mass differences found in neutrino oscillation experiments.

But, the global symmetry behind lepton number conservation is not expected to be exact. At dimension-5, the operator HLHL/M violates lepton number by two units and leads to a tiny Majorana mass v^2/M for left-handed neutrinos. This transmutes the neutrino mass hierarchy problem from explaining λ_ν ≲ 10^{-12} to instead explaining v/M ≲ 10^{-12}. To embrace the dimension-5 neutrino mass explanation means the SM effective theory breaks down at M ≲ 10^{14} GeV. This is somewhat disconcerting since there are dimension-6 operators that violate lepton and baryon number, leading to a proton decay rate that is excluded by experiment unless M ≳ 10^{16} GeV. Hence, while lepton number must be violated at M to explain neutrino masses, baryon number must be preserved to keep the proton stable. The simplest phenomenological explanation for lepton number violation without baryon number violation at the cutoff scale M is to add right-handed neutrinos to the SM with ordinary Yukawa couplings, plus a heavy Majorana mass term L = M ν^c ν for the right-handed neutrinos. The resulting combination of a Dirac mass and a heavy Majorana mass leads to the famous "see-saw" neutrino mass matrix [9]
( 0       λ_ν v )
( λ_ν v   M     ).   (1)
Diagonalizing this mass matrix, or equivalently integrating out the right-handed neutrino, gives back the SM plus the dimension-5 operator HLHL/M with a well-defined coefficient, λ_ν^2, and thus predicted neutrino mass, λ_ν^2 v^2/M. A right-handed neutrino with Majorana mass M therefore provides an ultraviolet completion of the effective theory beyond the cutoff M, explaining why only lepton number was violated at M. The difficulty with such neutrino mass generation mechanisms is that they do not really solve the neutrino mass hierarchy problem, and worse still, require precisely those odd-ball fields - right-handed neutrinos - that are unconnected to SM matter through gauge anomalies. Furthermore, the see-saw explanation requires a new Majorana mass scale unconnected with electroweak symmetry breaking. These facts would ordinarily be highly distressing except for a remarkable coincidence: the scale M ∼ 10^{14} GeV is tantalizingly close to where the SM gauge couplings come to an approximate intersection. Such an intersection is predicted by grand unified theories (GUTs), providing justification for the new scale. Furthermore, in an SO(10) GUT each right-handed neutrino is elegantly fused with each generation of SM matter into a single 16 representation [10]. This is really just an artifact of unifying into a GUT group with rank greater than that of the SM, since candidates for right-handed neutrinos in GUTs are those fields uncharged under the SM symmetries but charged under some additional gauge symmetry [for SO(10) this is the extra U(1) under the decomposition SO(10) → SU(5) × U(1)]. Rank > 4 GUTs therefore provide a rationale for n = 0 mod 3 right-handed neutrinos whenever each generation is unified into a single representation of the group.
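The see-saw diagonalization can be checked with a toy numerical example (not from the paper): for a Dirac mass m = λ_ν v and a heavy Majorana mass M, the light eigenvalue of the matrix (1) is ≈ m^2/M. The choices λ_ν = 1, v = 174 GeV and M = 10^{14} GeV are illustrative.

```python
import math

# Eigenvalues of the see-saw matrix [[0, m], [m, M]] for M >> m.
def seesaw_masses(m: float, M: float):
    """Return (|light|, heavy) eigenvalues of [[0, m], [m, M]]."""
    heavy = (M + math.sqrt(M * M + 4.0 * m * m)) / 2.0
    # |det| = m^2 = |light| * heavy; this form avoids the catastrophic
    # floating-point cancellation in (sqrt(M^2 + 4m^2) - M)/2 when M >> m.
    light = m * m / heavy
    return light, heavy

m, M = 174.0, 1.0e14   # GeV (illustrative: lambda_nu = 1, v = 174 GeV)
light, heavy = seesaw_masses(m, M)
print(f"light ~ {light * 1e9:.2f} eV, heavy ~ {heavy:.3g} GeV, "
      f"m^2/M = {m * m / M * 1e9:.2f} eV")
```

With these inputs the light mass comes out at ∼ 0.3 eV, the right ballpark for the text's λ_ν^2 v^2/M estimate.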
Unfortunately, grand unification has many well-known problems of implementation. Non-supersymmetric grand unification proposals suffer from the hierarchy problem as well as a rather inexact unification of gauge couplings. Both non-supersymmetric and supersymmetric unification models predict proton decay at a rate that has been experimentally ruled out in the simplest models. Also, several theoretical problems pervade unification, ranging from understanding how the Higgs is embedded into a GUT representation (the doublet-triplet splitting problem), how (or if) Yukawa couplings are unified, etc. Such experimental and theoretical problems ought to induce us to reconsider GUTs as the origin of right-handed neutrinos and the Majorana mass scale. Is there any rationale independent of unification that predicts right-handed neutrinos as well as the Majorana mass scale?
Suppose the explanation for the number of generations is that each field's three generations ($Q_{1,2,3}$, $u_{1,2,3}$, etc.) correspond to three components of a multiplet of a "horizontal" flavor symmetry. There are only two continuous symmetries that are suitable for this purpose possessing a 3 representation: SU(3) [11] and SU(2) ∼ SO(3) [12,13]. SU(2) can be summarily dismissed if right-handed neutrinos are required to be in a chiral representation of the new symmetry. It has already been emphasized that nonchiral fermions, and right-handed neutrinos in particular, seem to have no (aesthetic) place in the SM if anomaly cancellation is to connect all matter together. There is no hope with SU(2) since it is anomaly-free. SU(2) also does not predict the number of generations since representations of any dimension are possible [14]. Instead, SU(3) admits chiral fermions with only certain dimensionality: there can be three but not two, four, five, seven, etc. generations. Moreover, SU(3) provides two additional key ingredients: (1) there is an additional anomaly cancellation condition on the matter content if SU(3) is at least weakly gauged, and (2) all fermion masses, including right-handed neutrino masses, arise from spontaneous symmetry breaking. Before proceeding, note that the connection between SU(3) anomaly cancellation and the existence of right-handed neutrinos was made some time ago in [11]. In this paper the argument is presented in detail, contrasting with grand unification and universal extra dimensions, and then implications for a supersymmetrized version are briefly discussed.
Gauging a new symmetry in which SM fermions transform is non-trivial and requires the cancellation of all gauge anomalies associated with the new symmetry. There are potentially eight new gauge anomalies associated with SU(3)_f: [SU(3)_f]^3; SU(3)_f × [SU(3)_c]^2; [SU(3)_f]^2 × SU(3)_c; SU(3)_f × [SU(2)_L]^2; [SU(3)_f]^2 × SU(2)_L; SU(3)_f × [U(1)_Y]^2; [SU(3)_f]^2 × U(1)_Y; and SU(3)_f × [grav]^2.
Six of these are trivially zero since tr[t^a] = 0 for SU(N) gauge groups with N > 1. This leaves the mixed flavor-symmetry/hypercharge anomaly [SU(3)_f]^2 × U(1)_Y and the [SU(3)_f]^3 anomaly. The mixed anomaly leads to a condition on the sum of the hypercharges of the SM fermions that is equivalent to the mixed [grav]^2 × U(1)_Y anomaly,

[SU(3)_f]^2 \times U(1)_Y: \quad 6Y_Q + 3Y_u + 3Y_d + 2Y_L + Y_e = 0,

and so automatically cancels. The [SU(3)_f]^3 anomaly, however, does not cancel with just the SM fermion content [11,12]. This is straightforward to see: five types of matter (Q, u, d, L, e) can be assigned to either 3 or $\bar{3}$ representations. Two of the five fields contribute an even number of 3's or $\bar{3}$'s to the anomaly (Q: ±6; L: ±2) while the remaining three fields contribute an odd number of 3's or $\bar{3}$'s (u: ±3, d: ±3, e: ±1). The sum of two even numbers and three odd numbers is an odd number, and so the [SU(3)_f]^3 anomaly $a \propto n_3 - n_{\bar{3}}$ cannot be canceled no matter how SM matter is assigned to SU(3)_f.
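Both anomaly statements reduce to finite arithmetic and can be verified directly. The sketch below is my own check, not from the paper; it assumes the conventional normalization $Y_Q = 1/6$ with hypercharges written for left-handed conjugate fields (the vanishing of the sum does not depend on that normalization), and then confirms that the [SU(3)_f]^3 coefficient is odd, hence nonzero, for every one of the 2^5 possible 3 versus $\bar{3}$ assignments.

```python
from fractions import Fraction as F
from itertools import product

# Mixed [SU(3)_f]^2 x U(1)_Y anomaly: one-generation hypercharges written for
# left-handed conjugate fields (Y_Q = 1/6 normalization is an assumption).
Y = {'Q': F(1, 6), 'u': F(-2, 3), 'd': F(1, 3), 'L': F(-1, 2), 'e': F(1, 1)}
hyper_sum = 6*Y['Q'] + 3*Y['u'] + 3*Y['d'] + 2*Y['L'] + Y['e']

# [SU(3)_f]^3 anomaly: each field contributes +n (for a 3) or -n (for a 3bar),
# where n counts its SU(3)_f triplets: Q: 6, u: 3, d: 3, L: 2, e: 1.
counts = (6, 3, 3, 2, 1)
anomaly_values = [sum(s * n for s, n in zip(signs, counts))
                  for signs in product((1, -1), repeat=5)]
never_cancels = all(a % 2 != 0 for a in anomaly_values)   # always odd
```

Since every possible coefficient is odd, none can be zero, which is the parity argument made in the text.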
The simplest assignment of matter in 3 or 3 representations allows ordinary Yukawa couplings. Here the Higgs scalar doublet is assumed to be a singlet under SU(3) f , since there is no need for (and various reasons that disfavor) more than one Higgs doublet in the SM. Gauge invariance of the three Yukawa couplings of the SM implies three relations among the anomaly coefficients of SM matter
QHu^c \;\Rightarrow\; a(Q) + a(u^c) = 0\,,  (2)
QH^*d^c \;\Rightarrow\; a(Q) + a(d^c) = 0\,,  (3)
LH^*e^c \;\Rightarrow\; a(L) + a(e^c) = 0\,.  (4)
Without loss of generality Q can be chosen to be a 3; then u^c and d^c must both be $\bar{3}$'s. There are two choices for the leptons: [L(3), e^c($\bar{3}$)] or [L($\bar{3}$), e^c(3)]. In either case, the [SU(3)_f]^3 anomaly coefficient becomes

a = 6 - 3 - 3 \pm (2 - 1) = \pm 1\,.  (5)

Notice that the anomaly associated with colored fermions self-cancels, but with the leptons it does not cancel regardless of assigning (L, e^c) into a (3, $\bar{3}$) or ($\bar{3}$, 3). Intriguingly, the [SU(3)_f]^3 anomaly is canceled by adding a single new field that transforms as a $\bar{3}$ [for L(3), e^c($\bar{3}$)] or 3 [for L($\bar{3}$), e^c(3)] under SU(3)_f. To avoid spoiling the SM anomaly cancellation conditions this field must be neutral under SM gauge symmetries. Hence, this anomaly-cancellation field has precisely the quantum numbers of a right-handed neutrino. Also, a Yukawa interaction connecting the left-handed with the right-handed neutrino, $\mathcal{L} = LH\nu^c$, is automatically allowed by SU(3)_f gauge invariance regardless of the initial choice of (3, $\bar{3}$, $\bar{3}$) versus ($\bar{3}$, 3, 3) for (L, e^c, ν^c). This is a remarkable result. Let me restate the assumptions and the implication: Assuming (1) the explanation for the number of generations is a gauged SU(3)_f flavor symmetry, (2) all matter is assigned to chiral representations (3 or $\bar{3}$) of SU(3)_f, and (3) ordinary Yukawa couplings are SU(3)_f gauge invariant, then there must exist one set of right-handed neutrinos ν_{1,2,3} transforming as a triplet of SU(3)_f.

It is important to emphasize that this flavor symmetry rationale for right-handed neutrinos is completely independent of grand unification. In fact, the simplest assignment that allows Yukawa couplings to be gauge invariant under SU(3)_f does not commute with the usual matter embeddings in unified representations of GUTs. For example, SU(5) [as well as SO(10) and E_6] unifies Q and u into a single representation; this is inconsistent with the SU(3)_f assignment given above. However, Yukawa couplings are notoriously over-constrained in GUTs as well as flavor symmetry models. SU(5) predicts the down and lepton Yukawas of each generation should unify, and SO(10) predicts up, down, and lepton Yukawas to unify. These predictions are badly broken at low energies, and not much better at the GUT scale for all but perhaps λ_b and λ_τ. Analogously, the simplest SU(3)_f assignment allows Yukawa couplings for all generations, but no generational differences. This must come from additional structure related to the flavor symmetry breaking that has not been specified here. Nevertheless, the matter (and Higgs) assignments under SU(3)_f can be suitably modified to commute with grand unification. This was done in several early works on gauged SU(3)_f × SU(5) [15]. There they found that many more triplets (or perhaps larger representations) of right-handed neutrinos were needed to cancel the [SU(3)_f]^3 anomaly. For the purposes of this paper, it is enough to observe that there must be at least one triplet of right-handed neutrinos to gauge SU(3)_f.

The absence of a signal for new physics in flavor-changing neutral current processes places an important constraint on the scale of SU(3)_f symmetry breaking. The constraint arises from the tree-level exchange of flavor gauge bosons that lead to transitions between same-charge, different-generation quarks or leptons. Integrating out heavy flavor gauge bosons results in a low-energy effective theory with new contributions to four-fermion, flavor-violating operators

\frac{g_f^4}{M_f^2}\,(\bar{f}_i \gamma^\mu f_i)(\bar{f}_j \gamma_\mu f_j)\,,  (6)

where g_f is the SU(3)_f gauge coupling and M_f is the symmetry breaking scale. If the couplings are CP-conserving, one of the strongest constraints comes from the ∆s = 2 process that contributes to the $K^0$–$\bar{K}^0$ mass difference. Estimates of the bound on the four-quark operator suggest $M_f \gtrsim g_f^2 \times 1600$ TeV [16]. The bound is significantly stronger if the couplings maximally violate CP. In any case, for a flavor gauge coupling that is of order the SM gauge couplings, the bound on the symmetry breaking scale is at least hundreds of TeV. This is reminiscent of the constraints on extended technicolor [17].

The benefit of right-handed neutrinos transforming under a chiral representation of the flavor symmetry is that the Majorana mass scale is no longer arbitrary. The right-handed neutrino Majorana mass is generated through flavor symmetry breaking, analogous to SM fermion masses generated through electroweak symmetry breaking. The scale M_f is not predicted, but obviously there is no conflict between the lower bound $M_f \gtrsim 1000$ TeV from flavor-changing constraints and the upper bound $M_f \lesssim 10^{14}$ GeV needed for a successful see-saw explanation of neutrino masses. If M_f were near the lower bound, future experiments could search for deviations from (or as-yet unobserved) flavor-changing neutral current processes as a signal for SU(3)_f. This would require neutrino Yukawa couplings $\lambda_\nu \sim 10^{-4}$, nearer in value to their lepton cousins.

How are right-handed neutrino Majorana masses generated from flavor symmetry breaking? Consider a pair of complex scalar fields in the fundamental representation, $\Sigma_{1,2}(3)$, that acquire unaligned vacuum expectation values. This is sufficient to break SU(3)_f → nothing. A right-handed neutrino mass arises from the dimension-4 operator $\epsilon^{ijk}\nu^c_i\nu^c_j\Sigma^*_k$ upon replacing Σ by its vev. Curiously, this two-field breaking model gives mass to just two off-diagonal components of the 3 × 3 Majorana mass matrix in flavor space,

\begin{pmatrix} 0 & 0 & \langle\Sigma_1\rangle \\ 0 & 0 & \langle\Sigma_2\rangle \\ \langle\Sigma_1\rangle & \langle\Sigma_2\rangle & 0 \end{pmatrix}  (7)

due to the anti-symmetric contraction of SU(3)_f indices. This may be a useful starting point for generating an interesting neutrino mass texture. Also, the flavor symmetry could be broken in stages, such as SU(3) → SU(2) → nothing, which may be similarly useful for quark, lepton, or neutrino mass textures.

SU(3)_f is not the only rationale for three generations and three right-handed neutrinos. In a recent proposal called "universal extra dimensions" (UED) [18], all matter, Higgs, and gauge bosons are promoted to six-dimensional fields, and the more complicated gauge and gravitational anomaly structure of six-dimensional theories is used to constrain the matter content [19]. Ref. [19] found that cancellation of the global gauge anomaly [20] required the number of generations to be n_g = 0 mod 3, and cancellation of the pure gravitational anomaly required n = n_g fermionic fields uncharged under the SM gauge group. This is intriguingly similar to the SU(3)_f symmetry argument, since the matter content is similarly restricted by anomaly cancellation of a larger symmetry structure. Other similarities are remarkable: [19] required that all matter be chiral in 6-D, analogous to requiring all matter to be in chiral representations of SU(3)_f. This led to two possible chirality assignments in UED that are precisely analogous to the 3 versus $\bar{3}$ "chirality" possibilities for the SM fermions under SU(3)_f. Specifically, the quark doublet (Q) must have the opposite chirality to the quark singlets (u, d), and the lepton doublet (L) must have the opposite chirality to the lepton singlets (e, ν). In UED the lepton doublet could have the same or the opposite chirality of the quark doublet, just as here the lepton doublet could be assigned to the same (3) or opposite ($\bar{3}$) representation of the quark doublet.

Finally, the UED rationale for three generations and three right-handed neutrinos does not depend on the compactification scale, just as the SU(3)_f argument does not depend on the flavor symmetry breaking scale.

There are a few important differences between the six-dimensional UED model and the SU(3)_f model. The higher-dimensional nature of UED implies there is an effective theory cutoff scale that is only an order of magnitude above the compactification scale; in the SU(3)_f model, there is no such restriction. Several gauge anomalies, such as [SU(2)_L]^2 × U(1)_Y, that are automatically canceled in the SU(3)_f model are canceled in UED only via the Green-Schwarz mechanism with additional matter. Finally, the prediction of three generations is not easily extended to a supersymmetric six-dimensional "universal" model for a variety of reasons [19], whereas the SU(3)_f model can be quite simply supersymmetrized, as will be briefly sketched below.

Everything that has been said for the SM with a gauged SU(3)_f flavor symmetry also applies to a straightforward extension of the minimal supersymmetric standard model (MSSM). This means promoting matter supermultiplets to (anti-)fundamental representations of SU(3)_f while the Higgs supermultiplets remain singlets, in exact analogy with the non-supersymmetric case. (In the following discussion the same notation is used for the MSSM chiral superfields as for the SM fermion fields.) There are, however, new restrictions on the allowed operators in the supersymmetrized SU(3)_f model. The most interesting, model-independent restriction is that the dimension-5 operators leading to proton decay,

\frac{QQQL}{M_{\rm Pl}}\,, \qquad \frac{u^c u^c d^c e^c}{M_{\rm Pl}}\,,  (8)

are forbidden by SU(3)_f. Technically the second operator in Eq. (8) could be made gauge-invariant if u^c were assigned the conjugate representation to that of d^c and e^c, but this does not happen for the SU(3)_f model nor for the embeddings of matter into SU(5) or SO(10) representations. These operators can be made gauge-invariant by adding a pair of SU(3)_f breaking superfields Σ(3) and $\bar{\Sigma}(\bar{3})$, whereby Eq. (8) becomes

\frac{QQQL\,\Sigma}{M_{\rm Pl}^2}\,, \qquad \frac{u^c u^c d^c e^c\,\bar{\Sigma}}{M_{\rm Pl}^2}\,.

Below the SU(3)_f symmetry breaking scale, these dimension-6 operators map onto the dimension-5 operators above with tiny coefficients of order $\langle\Sigma\rangle/M_{\rm Pl}$. This is sufficient to cure the fast proton decay problem that results from the ordinarily unsuppressed dimension-5 operators.

A supersymmetrized version of the SU(3)_f model has even more interesting constraints. All dimension ≤ 4 lepton number violating superpotential terms $QLd^c$, $LLe^c$, and $LH_u$ are forbidden by SU(3)_f. Again, higher dimension operators with SU(3)_f breaking fields will reintroduce these terms, but (for the first two) this leads to significant suppression. If the flavor symmetry were promoted to U(3)_f, the dimension-4 baryon number violating term $u^c d^c d^c$ would also be forbidden. An exact flavor symmetry could serve in precisely the same role as matter parity on superfields (R-parity on fields). Of course the flavor symmetry is broken, and this reintroduces these so-called R-parity violating operators. It would be interesting to see if R-parity could be discarded in favor of a spontaneously broken U(3)_f flavor symmetry without sacrificing a long-lived proton.

In summary, an extension of the Standard Model with an SU(3)_f gauged flavor symmetry is presented that explains why there are three generations of matter and predicts the existence of three right-handed neutrinos. This argument is independent of grand unification or extra universal dimensions. The right-handed Majorana mass scale results from spontaneous SU(3)_f symmetry breaking. If the breaking scale is "low", less than of order 1000 TeV, deviations in flavor changing neutral current processes are expected due to tree-level flavor gauge boson exchange. It should be emphasized that such a Majorana mass scale is completely consistent with the see-saw explanation for neutrino mass generation so long as the Dirac masses of the neutrinos are less than but of order the muon mass. This is a perfectly reasonable possibility given that SU(3)_f has freed us from thinking only in terms of grand unification. The supersymmetric extension including a gauged SU(3)_f is straightforward. The fast proton decay problem from dimension-5 Planck-suppressed operators is automatically cured, and certain R-parity violating couplings are naturally suppressed. Combining the SU(3)_f gauged flavor symmetry with models that attempt to explain the structure of the quark, lepton, or neutrino mass matrices is an extremely interesting direction left for future work.

I have benefited from discussions with B. Balantekin, V. Barger, A. Nelson, Y. Nir, G. Shiu, and L.-T. Wang.
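The see-saw consistency claimed above for a "low" flavor-breaking scale is a one-line arithmetic check. The sketch below uses only the illustrative values quoted in the text (M_f = 1000 TeV, $\lambda_\nu \sim 10^{-4}$, and the assumed electroweak vev v = 174 GeV):

```python
# See-saw arithmetic for a low flavor-breaking scale; all inputs are the
# illustrative values quoted in the text, not fitted numbers.
v = 174.0               # GeV, electroweak vev (assumed normalization)
lam_nu = 1.0e-4         # neutrino Yukawa coupling
M_f = 1.0e6             # GeV (= 1000 TeV, the flavor-changing lower bound)

m_dirac = lam_nu * v                    # Dirac mass in GeV, ~ 17 MeV
m_nu_eV = m_dirac**2 / M_f * 1.0e9      # light see-saw mass in eV, ~ 0.3 eV
m_muon = 0.1057                         # GeV, for comparison
```

The resulting Dirac mass is indeed "less than but of order the muon mass," and the light eigenvalue lands in the sub-eV range required by oscillation data.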
S. L. Adler, Phys. Rev. 177, 2426 (1969);
J. S. Bell and R. Jackiw, Nuovo Cim. A 60, 47 (1969);
W. A. Bardeen, Phys. Rev. 184, 1848 (1969);
C. Bouchiat, J. Iliopoulos and P. Meyer, Phys. Lett. B 38, 519 (1972);
D. J. Gross and R. Jackiw, Phys. Rev. D 6, 477 (1972).
E. Witten, Phys. Lett. B 117, 324 (1982).
L. Alvarez-Gaume and E. Witten, Nucl. Phys. B 234, 269 (1984).
C. Q. Geng and R. E. Marshak, Phys. Rev. D 39, 693 (1989); J. A. Minahan, P. Ramond and R. C. Warner, Phys. Rev. D 41, 715 (1990).
Y. Fukuda et al. [Super-Kamiokande Collaboration], Phys. Rev. Lett. 81, 1562 (1998) [arXiv:hep-ex/9807003].
Q. R. Ahmad et al. [SNO Collaboration], Phys. Rev. Lett. 89, 011301 (2002) [arXiv:nucl-ex/0204008].
K. Eguchi et al. [KamLAND Collaboration], Phys. Rev. Lett. 90, 021802 (2003) [arXiv:hep-ex/0212021].
For a review, see C. Weinheimer, arXiv:hep-ex/0210050.
M. Gell-Mann, P. Ramond and R. Slansky, Print-80-0576 (CERN);
T. Yanagida, Prog. Theor. Phys. 64, 1103 (1980);
R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. 44, 912 (1980).
H. Georgi, in Particles and Fields, Proceedings of the APS Division of Particles and Fields, ed. C. Carlson;
H. Fritzsch and P. Minkowski, Annals Phys. 93, 193 (1975).
T. Yanagida, Phys. Rev. D 20, 2986 (1979);
Prog. Theor. Phys. 64, 1103 (1980).
F. Wilczek and A. Zee, Phys. Rev. Lett. 42, 421 (1979).
T. Maehara and T. Yanagida, Prog. Theor. Phys. 61, 1434 (1979).
SO(3) is naively better than SU(2), with only odd-dimensional representations, but SO(3) with just 3's is physically indistinguishable from SU(2) with just 3's, and so there is no unambiguous explanation for three generations.
See e.g. J. L. Chkareuli, JETP Lett. 32, 671 (1980) [Pisma Zh. Eksp. Teor. Fiz. 32, 684 (1980)];
Z. G. Berezhiani and J. L. Chkareuli, JETP Lett. 35, 612 (1982) [Pisma Zh. Eksp. Teor. Fiz. 35, 494 (1982)];
K. Tamvakis and G. Zoupanos, Phys. Lett. B 126, 314 (1983); Z. G. Berezhiani, Phys. Lett. B 129, 99 (1983);
M. Soldate, M. H. Reno and C. T. Hill, Phys. Lett. B 179, 95 (1986).
See e.g. M. Leurer, Y. Nir and N. Seiberg, Nucl. Phys. B 420, 468 (1994) [arXiv:hep-ph/9310320].
S. Dimopoulos and J. R. Ellis, Nucl. Phys. B 182, 505 (1982).
T. Appelquist, H. C. Cheng and B. A. Dobrescu, Phys. Rev. D 64, 035002 (2001) [arXiv:hep-ph/0012100].
B. A. Dobrescu and E. Poppitz, Phys. Rev. Lett. 87, 031801 (2001) [arXiv:hep-ph/0102010].
S. Elitzur and V. P. Nair, Nucl. Phys. B 243, 205 (1984).
Faddeev-Jackiw Hamiltonian Reduction for Free and Gauged Rarita-Schwinger Theories
13 Sep 2016
Suat Dengiz
Center for Theoretical Physics
Massachusetts Institute of Technology
02139CambridgeMAUSA
(Dated: September 14, 2016)
We study the Faddeev-Jackiw symplectic Hamiltonian reduction for 3 + 1-dimensional free and Abelian gauged Rarita-Schwinger theories that comprise Grassmannian fermionic fields. We obtain the relevant fundamental brackets and find that they are in convenient forms for quantization. The brackets are independent of whether the theories contain mass or gauge fields, and the structure of constraints and symplectic potentials largely determine characteristic behaviors of the theories. We also note that, in contrast to the free massive theory, the Dirac field equations for free massless Rarita-Schwinger theory cannot be obtained in a covariant way.
I. INTRODUCTION
In 1941, Rarita and Schwinger constructed a theory of spin-3/2 vector-spinor fields which has a local fermionic gauge-invariance [1]. However, this symmetry is lost when the vector-spinor field has mass or couples to other lower spin fields. More precisely, in 1961, Johnson and Sudarshan studied the massive Rarita-Schwinger field minimally coupled to an external electromagnetic field, and showed that the equal-time commutators and relativistic covariance of the theory are in conflict, which makes the quantization a rather subtle issue [2]. In 1969, Velo and Zwanziger found that the massive gauged extension of the theory also admits superluminal wave propagation. Thus, the causality principle is also violated in the theory [3]. Despite these persistent problems, the massless theory keeps its importance particularly in two aspects. First, the massless (Majorana) Rarita-Schwinger field plays a central role in the construction of covariantly interacting supergravity theory [4][5][6]. The theory describes a generalization of the Rarita-Schwinger fermionic gauge-invariance, and the vector-spinor fields are the fermionic superpartners of gravitons, namely the gravitinos of supergravity. In this context, Das and Freedman showed that the massless theory is free from non-causal wave propagation and has a unitary propagator structure [7]. Secondly, the massless Rarita-Schwinger theory is valuable for the cancellation of SU(8) gauge anomalies. Unlike the generic anomaly cancellation mechanisms, in which the anomalies are supposed to be canceled within the lower spin fermionic fields, it was shown by Marcus [8], and later studied by Adler [9], that a complete SU(8) gauge theory can be constructed via Rarita-Schwinger fields. In this set-up, the vector-spinor field acquires a crucial role in canceling anomalies arising in the gauge theory. Thus, it is left to determine whether the gauged Rarita-Schwinger fields describe well-behaved, complete classical or quantum field theories.
For this purpose, Adler has recently studied minimally gauged massless Rarita-Schwinger theories at both classical and quantum levels in detail [10]. He showed that, unlike the massive case, the massless gauged Rarita-Schwinger theory provides consistent classical and quantum theories with a generalized fermionic gauge-invariance.
Taking the above-mentioned observations as inspiration, and noting the hard task of obtaining proper brackets for constrained systems that permit viable quantization, we study the Faddeev-Jackiw (FJ) symplectic Hamiltonian reduction [11,12] for free and gauged Rarita-Schwinger theories. Unlike Dirac's approach for constrained systems [16], the FJ symplectic first-order formalism does not require any classification of constraints. In other words, the method avoids analyzing systems by evaluating all commutation relations among the constraints and classifying them accordingly. The FJ approach thus supplies a rather economical way of quantizing constrained systems. In doing so, we find the fundamental brackets for the free and gauged Rarita-Schwinger theories in both their massless and massive versions. The brackets are in a form suitable for quantization. We also observe that the brackets are identical for all versions of the theory: they are independent of whether the theory is massive or interacting with an external electromagnetic field. The differences between the theories instead show up in their constraint structures. We also notice that, in contrast to the massive case, the Dirac field equations for the free massless Rarita-Schwinger theory cannot be obtained in a covariant way.
The layout of the paper is as follows: In Sec. II, we recapitulate the fundamental properties of the free massless Rarita-Schwinger theory and apply the FJ Hamiltonian reduction to it. In Sec. III, we turn our attention to the FJ Hamiltonian reduction for the free massive Rarita-Schwinger theory. Sec. IV and Sec. V are devoted to the first-order symplectic analysis of the Abelian gauged extensions of the massless and massive Rarita-Schwinger theories. In Sec. VI, we conclude with our results. In Appendix A, the derivation of the transverse and traceless decomposition of the fields in the free massless Rarita-Schwinger theory is given as a sample. In Appendix B, we briefly review the FJ approach for constrained and unconstrained systems. We also give an example of the application of the symplectic method to the anti-commuting spin-1/2 Dirac theory.
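For orientation, the FJ recipe can be illustrated on the simplest bosonic first-order Lagrangian, $L = p\dot{q} - H(q,p)$. This is my own minimal sketch, not an excerpt from the paper: with symplectic variables $\xi = (q, p)$ and one-form components $a = (p, 0)$, the symplectic matrix $f_{ij} = \partial_i a_j - \partial_j a_i$ is inverted to read off the fundamental brackets.

```python
import numpy as np

# Phase-space variables xi = (q, p); L = p*qdot - H  =>  a(xi) = (p, 0).
# Writing a_j = A[j, k] * xi_k, the FJ symplectic matrix is
# f_ij = d_i a_j - d_j a_i = A[j, i] - A[i, j].
A = np.array([[0.0, 1.0],     # a_q = p
              [0.0, 0.0]])    # a_p = 0
f = A.T - A                   # [[0, -1], [1, 0]]
bracket = np.linalg.inv(f)    # {xi_i, xi_j} = (f^{-1})_ij
# bracket[0, 1] = {q, p} = 1, the canonical result
```

For the Rarita-Schwinger theories the same inversion is performed on Grassmann-odd variables after the constraints are eliminated, which is what produces the transverse brackets quoted later.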
II. FREE MASSLESS RARITA-SCHWINGER THEORY
The 3 + 1-dimensional free massless Rarita-Schwinger theory is described by the Lagrangian
\mathcal{L} = -\epsilon^{\lambda\mu\nu\rho}\,\bar{\psi}_\lambda\gamma_5\gamma_\mu\partial_\nu\psi_\rho\,,  (1)

where $\psi_\mu$ and $\bar{\psi}_\mu$ are vector-spinor fields with spinor indices suppressed. We work in the metric signature (+, −, −, −), with $\gamma_5 = i\gamma^0\gamma^1\gamma^2\gamma^3$ and $\{\gamma^\mu, \gamma^\nu\} = 2\eta^{\mu\nu}$. We consider the fermionic fields as independent anti-commuting Grassmannian variables. Recall that, unlike the complex Dirac field, for Grassmannian variables there is no relation such as $\bar{\psi}_\mu = \gamma^0\psi^\dagger_\mu$. Instead, $\psi_\mu$ and $\bar{\psi}_\mu$ are independent generators in the Grassmann algebra. Thus, one can define the conjugation as follows:

\psi^*_\mu = \bar{\psi}_\nu\,(\gamma^0)^\nu{}_\mu\,, \qquad (\bar{\psi}_\mu)^* = (\gamma^0)_\mu{}^\nu\,\psi_\nu\,.  (2)

Notice that this does not mean that Eq. (2) produces a new element of the Grassmann algebra; it is merely the conjugation of the independent variables. Therefore, with the help of the conjugation rule for Grassmannian variables, $(\theta_1\theta_2)^* = \theta_2^*\theta_1^*$, one can show that the Lagrangian in Eq. (1) is self-adjoint up to a boundary term:

\mathcal{L}^* = \mathcal{L} + \partial_\nu\left(\epsilon^{\lambda\mu\nu\rho}\,\bar{\psi}_\lambda\gamma_5\gamma_\mu\psi_\rho\right),  (3)
such that the total derivative term naturally drops at the action level. Moreover, variations with respect to independent variables respectively yield
\epsilon^{\lambda\mu\nu\rho}\gamma_5\gamma_\mu\partial_\nu\psi_\rho = 0\,, \qquad \epsilon^{\lambda\mu\nu\rho}\,\partial_\nu\bar{\psi}_\lambda\,\gamma_5\gamma_\mu = 0\,,  (4)
which are the corresponding field equations. From now on, we will work with the first of Eq. (4); by following the same steps, one could easily obtain similar results for the second equation. Notice that by using the identity

\epsilon^{\lambda\mu\nu\rho}\gamma_5\gamma_\mu = i\left(\eta^{\lambda\rho}\gamma^\nu - \eta^{\lambda\nu}\gamma^\rho - \gamma^\lambda\eta^{\rho\nu} + \gamma^\lambda\gamma^\nu\gamma^\rho\right),  (5)
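Identity (5) is a finite statement about 4 × 4 matrices and can be verified by brute force in an explicit representation. The sketch below is my own check; it assumes the Dirac representation and the convention $\epsilon^{0123} = +1$ with $\gamma_\mu = \eta_{\mu\nu}\gamma^\nu$ (conventions not spelled out in the text), and loops over all free-index combinations.

```python
import numpy as np

# Dirac representation; signature (+,-,-,-), gamma5 = i g0 g1 g2 g3.
Z = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g = [np.block([[I2, Z], [Z, -I2]])] + \
    [np.block([[Z, s], [-s, Z]]) for s in (sx, sy, sz)]
g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]
eta = np.diag([1, -1, -1, -1]).astype(complex)
g_lo = [eta[m, m] * g[m] for m in range(4)]        # lowered-index gammas

def eps(*idx):
    """Levi-Civita symbol with eps(0,1,2,3) = +1."""
    if len(set(idx)) < 4:
        return 0
    sign = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if idx[i] > idx[j]:
                sign = -sign
    return sign

max_err = 0.0
for l in range(4):
    for n in range(4):
        for r in range(4):
            lhs = sum(eps(l, m, n, r) * (g5 @ g_lo[m]) for m in range(4))
            rhs = 1j * (eta[l, r] * g[n] - eta[l, n] * g[r]
                        - eta[r, n] * g[l] + g[l] @ g[n] @ g[r])
            max_err = max(max_err, float(np.abs(lhs - rhs).max()))
```

Since the identity is purely a Clifford-algebra statement, any representation satisfying $\{\gamma^\mu,\gamma^\nu\} = 2\eta^{\mu\nu}$ gives the same result.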
one can recast the field equation in Eq.(4) as follows
\slashed{\partial}\psi_\lambda - \partial_\lambda(\gamma\cdot\psi) - \gamma_\lambda\,\partial\cdot\psi + \gamma_\lambda\slashed{\partial}(\gamma\cdot\psi) = 0\,.  (6)

Here $\slashed{\partial} = \gamma^\mu\partial_\mu$ and $\gamma\cdot\psi = \gamma^\mu\psi_\mu$. Contracting Eq. (6) with $\gamma^\lambda$ gives

\partial\cdot\psi - \slashed{\partial}(\gamma\cdot\psi) = 0\,.  (7)
Finally, by plugging this result in Eq.(6), the field equation reduces to
\slashed{\partial}\psi_\lambda - \partial_\lambda(\gamma\cdot\psi) = 0\,.  (8)
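The contraction step leading from Eq. (6) to Eq. (7) can be checked numerically on a plane-wave mode $\psi_\lambda(x) = \chi_\lambda e^{ik\cdot x}$, for which $\partial_\mu \to ik_\mu$. In the conventions assumed below (Dirac representation, my own sketch), tracing Eq. (6) with $\gamma^\lambda$ reproduces Eq. (7) up to an overall factor of −2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dirac matrices, signature (+,-,-,-)
Z = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g = [np.block([[I2, Z], [Z, -I2]])] + \
    [np.block([[Z, s], [-s, Z]]) for s in (sx, sy, sz)]
eta = np.array([1.0, -1.0, -1.0, -1.0])

# Plane-wave mode psi_lambda(x) = chi_lambda exp(i k.x), so d_mu -> i k_mu.
k = rng.standard_normal(4)                       # lower-index components k_mu
chi = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

kslash = sum(k[m] * g[m] for m in range(4))      # gamma^mu k_mu
gchi = sum(g[m] @ chi[m] for m in range(4))      # gamma . psi
kchi = sum(eta[m] * k[m] * chi[m] for m in range(4))   # d . psi -> i k^mu chi_mu / i

def field_eq(lam):
    """Left side of Eq. (6) for the mode, free lower index lam."""
    g_lam = eta[lam] * g[lam]                    # gamma_lambda
    return 1j * (kslash @ chi[lam] - k[lam] * gchi
                 - g_lam @ kchi + g_lam @ (kslash @ gchi))

# gamma^lambda (Eq. 6) = -2i [ k.chi - kslash (gamma.chi) ], i.e. Eq. (7):
trace = sum(g[lam] @ field_eq(lam) for lam in range(4))
target = -2j * (kchi - kslash @ gchi)
err = float(np.abs(trace - target).max())
```

The algebra behind the factor: $\gamma^\lambda\slashed{k}\chi_\lambda = 2k\cdot\chi - \slashed{k}(\gamma\cdot\chi)$ and $\gamma^\lambda\gamma_\lambda = 4$, so the four terms combine to $-2[k\cdot\chi - \slashed{k}(\gamma\cdot\chi)]$.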
To obtain the real propagating degrees of freedom, let us now study gauge transformation and corresponding gauge conditions. For this purpose, let us recall that under the local Rarita-Schwinger fermionic gauge transformation
\delta\psi_\rho(x) = \partial_\rho\epsilon(x)\,,  (9)
the Lagrangian in Eq.(1) transforms as
\delta\mathcal{L} = \partial_\lambda\left(-\epsilon^{\lambda\mu\nu\rho}\,\bar{\epsilon}\,\gamma_5\gamma_\mu\partial_\nu\psi_\rho\right).  (10)
Here $\epsilon$ is an arbitrary four-component spinor field. As is seen in Eq. (10), the free massless Rarita-Schwinger Lagrangian changes by a total derivative under the Rarita-Schwinger gauge transformation, which drops at the action level, and thus we have a completely gauge-invariant theory. This means that the theory admits a gauge redundancy. To find the correct physical degrees of freedom of the theory, one needs to fix this gauge freedom. For this purpose, let us assume the Coulomb-like gauge condition
\gamma^i\psi_i = 0\,,  (11)

where i = 1, 2, 3. In fact, this is a reasonable gauge choice: any initial data $\psi'_i(x, t)$ that does not satisfy Eq. (11) can be tuned to the desired form via

\epsilon(x, t) = -\gamma^i\partial_i \int \frac{d^3y}{4\pi|x - y|}\,\gamma^j\psi_j(y, t)\,.  (12)
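In Fourier space the gauge-repair formula (12) can be verified mode by mode: the Coulomb kernel $1/(4\pi|x-y|)$ becomes $1/|k|^2$, and since $(\gamma^i k_i)^2 = -|k|^2$ (the spatial metric is $\eta^{ij} = -\delta^{ij}$), the shifted field is exactly γ-traceless. A numeric sketch of mine, Dirac representation assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
Z = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
gi = [np.block([[Z, s], [-s, Z]]) for s in (sx, sy, sz)]   # gamma^1..gamma^3

# Single Fourier mode: d_i -> i k_i and the kernel becomes 1/|k|^2, so
# Eq. (12) reads eps_hat = -(i gamma.k / |k|^2) (gamma.psi_hat).
k = rng.standard_normal(3)
psi = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))

gk = sum(k[i] * gi[i] for i in range(3))          # gamma^i k_i
gpsi = sum(gi[i] @ psi[i] for i in range(3))      # gamma^i psi_i
eps_hat = -(1j / (k @ k)) * (gk @ gpsi)

# After the gauge shift psi_i -> psi_i + i k_i eps_hat the gamma-trace vanishes:
residual = gpsi + sum(gi[i] @ (1j * k[i] * eps_hat) for i in range(3))
err = float(np.abs(residual).max())
```

The cancellation uses only $(\gamma^i k_i)^2 = -|k|^2$, so it holds for every mode with $k \neq 0$, which is the content of the gauge-fixing argument in the text.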
(See [7] and [17] for further discussions.) For the sake of self-completeness, one needs to examine the theory further to see whether Eq. (11) imposes any additional conditions. For this purpose, note that the $\psi_0$ component does not have a time derivative, so it is a Lagrange multiplier. In other words, as in the electromagnetic case, the zeroth component of the vector-spinor field is a zero mode which is followed by a constraint. More precisely, the λ = 0 component of the field equation in Eq. (8) reads

\gamma^i\partial_i\psi_0 - \partial_0(\gamma^i\psi_i) = 0\,.  (13)
One can also obtain a secondary constraint by contracting the field equation with $\partial_\lambda$; but since our primary aim is not to analyze the system by examining all the existing constraints, we leave it as a comment. As is seen in Eq. (13), the gauge-fixing condition $\gamma^i\psi_i = 0$ imposes $\gamma^i\partial_i\psi_0 = 0$. Here, since the operator is not invertible, we are not allowed to deduce $\psi_0 = 0$ as a corollary of $\gamma^i\psi_i = 0$; instead, we assume the additional condition $\psi_0 = 0$. Furthermore, splitting the fully contracted equation in Eq. (7) into its space and time components yields
\partial_i\psi_i - \gamma^0\partial_0(\gamma^i\psi_i) - \gamma^i\partial_i(\gamma^0\psi_0) - \gamma^i\partial_i(\gamma^j\psi_j) = 0\,.  (14)

In Eq. (14), one should notice that the gauge-fixing condition $\gamma^i\psi_i = 0$ together with the assumed condition $\psi_0 = 0$ imposes $\partial_i\psi_i = 0$. As a consequence of this, we obtain the set of consistency conditions

\gamma^i\psi_i = 0\,, \qquad \partial_i\psi_i = 0\,, \qquad \psi_0 = 0\,.  (15)
Observe that Eq. (15) can also be written in covariant form as

\gamma^\mu\psi_\mu = 0\,, \qquad \partial^\mu\psi_\mu = 0\,,  (16)

\slashed{\partial}\psi_\lambda = 0\,.  (17)
Symplectic Reduction for Free Massless Rarita-Schwinger Theory
In this section, we study the FJ Hamiltonian reduction for the free massless Rarita-Schwinger theory which will lead us to the fundamental brackets of the theory. For this purpose, let us recast the Lagrangian in Eq.(1) in a more symmetric form:
\mathcal{L} = -\frac{1}{2}\epsilon^{\lambda\mu\nu\rho}\bar{\psi}_\lambda\gamma_5\gamma_\mu\partial_\nu\psi_\rho + \frac{1}{2}\epsilon^{\lambda\mu\nu\rho}(\partial_\nu\bar{\psi}_\lambda)\gamma_5\gamma_\mu\psi_\rho\,.  (18)
To study the theory in the first-order symplectic formalism, one needs to convert Eq.(18) into the desired symplectic form. That is, one needs to split the Lagrangian into its space and time components. After a straightforward decomposition, one gets
\mathcal{L} = A^{(k)}_1\,\dot{\psi}_k + A^{(k)}_2\,\dot{\bar{\psi}}_k - H(\psi_0, \bar{\psi}_0, \psi_k, \bar{\psi}_k)\,,  (19)
where the symplectic coefficients are
A^{(k)}_1 = -\frac{1}{2}\epsilon^{ijk}\bar{\psi}_i\gamma_5\gamma_j\,, \qquad A^{(k)}_2 = \frac{1}{2}\epsilon^{ijk}\gamma_5\gamma_j\psi_i\,,  (20)
and the corresponding symplectic potential reads
H(\psi_0, \bar{\psi}_0, \psi_k, \bar{\psi}_k) = \frac{1}{2}\epsilon^{ijk}\bar{\psi}_0\gamma_5\gamma_i\partial_j\psi_k - \frac{1}{2}\epsilon^{ijk}\bar{\psi}_i\gamma_5\gamma_0\partial_j\psi_k - \frac{1}{2}\epsilon^{ijk}\bar{\psi}_i\gamma_5\gamma_j\partial_k\psi_0 - \frac{1}{2}\epsilon^{ijk}(\partial_j\bar{\psi}_0)\gamma_5\gamma_i\psi_k + \frac{1}{2}\epsilon^{ijk}(\partial_j\bar{\psi}_i)\gamma_5\gamma_0\psi_k + \frac{1}{2}\epsilon^{ijk}(\partial_k\bar{\psi}_i)\gamma_5\gamma_j\psi_0\,.  (21)
As expected, all the non-dynamical components have been relegated to the Hamiltonian part of the system. In analyzing the theory, one could also choose the conjugate momentum of ψ̄ k as a dynamical variable, but in our analysis we will not work with it; instead, we consider ψ µ and ψ̄ µ as the independent variables. Note that ψ 0 andψ 0 are not dynamical components, so they are Lagrange multipliers. Following [11,12], the elimination of constraints gives the equations
ǫ ijk (∂ kψi )γ 5 γ j = 0, ǫ ijk γ 5 γ i ∂ j ψ k = 0.(22)
To solve the constraint equations, one can decompose the independent fields into its local transverse and γ-traceless parts as
\psi_i = \psi_i^T + \hat{\psi}_i , \qquad \bar\psi_i = \bar\psi_i^T + \hat{\bar\psi}_i ,(23)
where "T" and "ˆ" stand for the transverse and γ-traceless parts, respectively. Here the γ-traceless parts are
\hat{\psi}_i = \psi_i - \tfrac{1}{3}\gamma_i\gamma^j\psi_j , \qquad \hat{\bar\psi}_i = \bar\psi_i - \tfrac{1}{3}\bar\psi_j\gamma^j\gamma_i ,(24)
such that γ iψ i = 0 and γ iψ i = 0. Then, by using the identity
ǫ ijk γ 5 γ k = −γ 0 σ ij where σ ij = i 2 [γ i , γ j ],(25)
as well as the constraints in Eq. (22), one can show that the transverse and traceless decomposition of the fields in Eq.(23) can actually be written as follows
\psi_i = \psi_i^T + \frac{\partial_i \zeta}{\nabla^2} , \qquad \bar\psi_i = \bar\psi_i^T + \frac{\partial_i \bar\zeta}{\nabla^2} ,(26)
where ζ = / ∂(γ · ψ T ) and ∇ 2 = ∂ i ∂ i .
As a side comment, note that, as is done in [12], one could also start directly with Eq.(26), without introducing the transverse and γ-traceless decomposition of Eq.(23). Here we additionally provide the explicit form of the longitudinal part. (See Appendix A for the derivation of Eq.(26).) Accordingly, the constraint equations in Eq.(22) turn into completely transverse ones
ǫ ijk (∂ kψ T i )γ 5 γ j = 0, ǫ ijk γ 5 γ i ∂ j ψ T k = 0.(27)
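As a sanity check, the identity in Eq.(25) can be verified numerically with explicit gamma matrices. The sketch below is our own illustration, not part of the paper; it assumes the Dirac representation with signature (+, −, −, −), ǫ₁₂₃ = 1, and γ⁵ = iγ⁰γ¹γ²γ³ — conventions the text does not spell out.

```python
import numpy as np

# Pauli matrices and Dirac-representation gamma matrices (signature +,-,-,-).
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g = [np.block([[Z2, sk], [-sk, Z2]]) for sk in s]   # gamma^1, gamma^2, gamma^3
g5 = 1j * g0 @ g[0] @ g[1] @ g[2]                   # gamma^5 = i g^0 g^1 g^2 g^3

def eps(i, j, k):
    """Levi-Civita symbol on spatial indices 0..2, with eps(0,1,2) = +1."""
    return int((i - j) * (j - k) * (k - i) / 2)

def sigma(i, j):
    """sigma^{ij} = (i/2)[gamma^i, gamma^j]."""
    return 0.5j * (g[i] @ g[j] - g[j] @ g[i])

# Check eps_{ijk} gamma^5 gamma^k = -gamma^0 sigma^{ij} for all spatial i, j.
for i in range(3):
    for j in range(3):
        lhs = sum(eps(i, j, k) * g5 @ g[k] for k in range(3))
        assert np.allclose(lhs, -g0 @ sigma(i, j))
print("identity (25) verified")
```

Both sides are antisymmetric in i ↔ j and vanish on the diagonal, so the loop over all nine index pairs covers every case.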
Finally, by inserting Eq.(26) and Eq.(27) in the Eq. (19), up to a boundary term, one gets a completely transverse Lagrangian
L = A (k) T 1ψ T k + A (k) T 2ψ T k − H T (ψ T k ,ψ T k ).(28)
Here the transverse symplectic coefficients and potential are
A (k) T 1 = − 1 2 ǫ ijkψT i γ 5 γ j , A (k) T 2 = 1 2 ǫ ijk γ 5 γ j ψ T i , H T (ψ T k ,ψ T k ) = − 1 2 ǫ ijkψT i γ 5 γ 0 ∂ j ψ T k + 1 2 ǫ ijk (∂ jψ T i )γ 5 γ 0 ψ T k .(29)
Thus, by defining the symplectic variables as (ξ 1 , ξ 2 ) = (ψ T k ,ψ T k ), one gets the corresponding symplectic matrix
f_{\alpha\beta} = \begin{pmatrix} 0 & \epsilon^{ijk}\gamma_5\gamma_j \\ -\epsilon^{ijk}\gamma_5\gamma_j & 0 \end{pmatrix} = \varepsilon_{\alpha\beta}\,\epsilon^{ijk}\gamma_5\gamma_j ,
which is clearly non-singular. Notice that the minus sign in the sub-block is due to the antisymmetric ǫ tensor. Therefore, by taking care of the contraction of the ǫ tensors in the current signature, one can easily show that the inverse symplectic matrix is
f^{-1}_{\alpha\beta} = \begin{pmatrix} 0 & -\tfrac{1}{2}\epsilon_{imk}\gamma_5\gamma^m \\ \tfrac{1}{2}\epsilon_{imk}\gamma_5\gamma^m & 0 \end{pmatrix} = \tfrac{1}{2}\varepsilon_{\beta\alpha}\,\epsilon_{imk}\gamma_5\gamma^m .
Once the inverse symplectic matrix is found, one can evaluate the fundamental brackets. That is, by using the definition of the FJ equal-time brackets for the Grassmann variables
{ξ β , ξ α } F J = −f −1 αβ ,(30)
one gets the fundamental brackets for free massless Rarita-Schwinger theory as follows
{ψ T i (x),ψ T k (y)} F J = − 1 2 ǫ imk γ 5 γ m δ 3 (x − y), {ψ T i (x), ψ T k (y)} F J = 0, {ψ T i (x),ψ T k (y)} F J = 0.(31)
Note that, with the help of the identity in Eq.(25), the non-vanishing bracket can also be rewritten as
{ψ T i (x),ψ T k (y)} F J = i 2 γ k γ i γ 0 δ 3 (x − y),(32)
which is identical with the one found in [18].
III. FREE MASSIVE RARITA-SCHWINGER THEORY
The Lagrangian that describes the 3 + 1-dimensional free massive Rarita-Schwinger theory is
L = −ǫ λµνρψ λ γ 5 γ µ ∂ ν ψ ρ + imψ λ σ λρ ψ ρ ,(33)
where
σ λρ = i 2 [γ λ , γ ρ ] = i(η λρ − γ ρ γ λ ).
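This expression for σ^{λρ} follows directly from the Clifford algebra {γ^λ, γ^ρ} = 2η^{λρ}. A quick numerical check (our own, assuming the Dirac representation and η = diag(+1, −1, −1, −1)):

```python
import numpy as np

# Dirac-representation gamma matrices, metric eta = diag(+1, -1, -1, -1).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
gam = [np.block([[I2, Z2], [Z2, -I2]])] + \
      [np.block([[Z2, sk], [-sk, Z2]]) for sk in (s1, s2, s3)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])
I4 = np.eye(4, dtype=complex)

# sigma^{lam rho} = (i/2)[gamma^lam, gamma^rho] = i(eta^{lam rho} - gamma^rho gamma^lam),
# since gamma^lam gamma^rho = 2 eta^{lam rho} - gamma^rho gamma^lam.
for lam in range(4):
    for rho in range(4):
        sig = 0.5j * (gam[lam] @ gam[rho] - gam[rho] @ gam[lam])
        assert np.allclose(sig, 1j * (eta[lam, rho] * I4 - gam[rho] @ gam[lam]))
print("sigma identity verified")
```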
Recall that the fermionic fields are anti-commuting Grassmannian variables. Accordingly, the field equations of the independent variables respectively read
ǫ λµνρ γ 5 γ µ ∂ ν ψ ρ − imσ λρ ψ ρ = 0, ǫ λµνρ ∂ νψλ γ 5 γ µ + imψ λ σ λρ = 0.(34)
In dealing with the fundamental properties of the theory, as we did in the massless theory, we will work only with the first field equation in Eq.(34). Notice that by using the identity in Eq.(5), one can recast the field equation as follows
i[ / ∂ψ λ − ∂ λ (γ · ψ) − γ λ ∂ · ψ + γ λ / ∂(γ · ψ)] − imσ λρ ψ ρ = 0.(35)
Observe that the contraction of Eq.(35) with γ λ yields
2i[ / ∂(γ · ψ) − ∂ · ψ] + 3mγ · ψ = 0,(36)
and the contraction of Eq.(35) with ∂ λ gives
m[ / ∂(γ · ψ) − ∂ · ψ] = 0. (37)
Combining both contracted field equations Eq.(36) and Eq.(37), one obtains
γ · ψ = 0, ∂ · ψ = 0.(38)
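The two conditions in Eq.(38) follow in two short steps (assuming m ≠ 0):

```latex
% Since m \neq 0, Eq.(37) gives
\slashed{\partial}(\gamma\cdot\psi) - \partial\cdot\psi = 0 .
% Substituting this into Eq.(36) removes the bracket, leaving
3m\,\gamma\cdot\psi = 0 \;\Longrightarrow\; \gamma\cdot\psi = 0 .
% Feeding \gamma\cdot\psi = 0 back into the first line then yields
\partial\cdot\psi = 0 .
```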
With these gauge-fixing conditions, the equation in Eq.(35) turns into the Dirac field equation for massive spin-3 2 vector-spinor field
(i / ∂ + m)ψ λ = 0.(39)
Note that, unlike the massless theory, one obtains the Dirac field equation in Eq.(39) without addressing the space and time decompositions of the field equations. On the other hand, due to the mass term, the Rarita-Schwinger gauge-invariance is inevitably lost.
Symplectic Reduction for Free Massive Rarita-Schwinger Lagrangian
Let us now study the symplectic Hamiltonian reduction of the free massive Rarita-Schwinger theory. For this purpose, let us recall that the Lagrangian in Eq.(33), up to a boundary term, can be written as
L = − 1 2 ǫ λµνρψ λ γ 5 γ µ ∂ ν ψ ρ + 1 2 ǫ λµνρ ∂ νψλ γ 5 γ µ ψ ρ + imψ λ σ λρ ψ ρ .(40)
In order to carry out the FJ symplectic reduction of Eq.(40), one needs to separate the dynamical components from the non-dynamical ones, so that the non-dynamical components can be relegated to the Hamiltonian part of the Lagrangian. Therefore, by splitting the Lagrangian into its space and time components, one obtains
L = A (k) 1ψ k + A (k) 2ψ k − H(ψ 0 ,ψ 0 , ψ k ,ψ k ),(41)
where the coefficient of the dynamical parts are
A (k) 1 = − 1 2 ǫ ijkψ i γ 5 γ j , A (k) 2 = 1 2 ǫ ijk γ 5 γ j ψ i ,(42)
and the explicit form of the symplectic potential is
H(ψ 0 ,ψ 0 , ψ k ,ψ k ) = 1 2 ǫ ijkψ 0 γ 5 γ i ∂ j ψ k − 1 2 ǫ ijkψ i γ 5 γ 0 ∂ j ψ k − 1 2 ǫ ijkψ i γ 5 γ j ∂ k ψ 0 − 1 2 ǫ ijk (∂ jψ0 )γ 5 γ i ψ k + 1 2 ǫ ijk (∂ jψi )γ 5 γ 0 ψ k + 1 2 ǫ ijk (∂ kψi )γ 5 γ j ψ 0 − imψ 0 σ 0i ψ i − imψ i σ i0 ψ 0 − imψ i σ ij ψ j .(43)
Like the free massless theory, ψ 0 andψ 0 are zero modes of the system, whose elimination gives rise to the constraints
ǫ ijk (∂ kψi )γ 5 γ j − imψ i σ i0 = 0, ǫ ijk γ 5 γ i ∂ j ψ k − imσ 0i ψ i = 0.(44)
As was done in the previous section, by decomposing the fields into the local transverse and γtraceless parts as in the Eq.(23), the constraints in Eq.(44) turn into completely transverse ones
ǫ ijk (∂ kψ T i )γ 5 γ j − imψ T i σ i0 = 0, ǫ ijk γ 5 γ i ∂ j ψ T k − imσ 0i ψ T i = 0.(45)
In this case, the longitudinal part reads ζ = ( / ∂ + im)γ · ψ T . Thus, by plugging the Eq.(23) and the transverse constraints Eq.(45) into the Eq.(41), up to a boundary term, the Lagrangian turns into
L = A (k) T 1ψ T k + A (k) T 2ψ T k + imψ T i σ i0ζ ∇ 2 + imζ ∇ 2 σ 0i ψ T i − H T (ψ T k ,ψ T k ),(46)
where the transverse symplectic coefficients and potential respectively are
A (k) T 1 = − 1 2 ǫ ijkψT i γ 5 γ j , A (k) T 2 = 1 2 ǫ ijk γ 5 γ j ψ i , H T (ψ T k ,ψ T k ) = − 1 2 ǫ ijkψT i γ 5 γ 0 ∂ j ψ T k + 1 2 ǫ ijk ∂ jψ T i γ 5 γ 0 ψ T k − imψ T i σ ij ψ T j .(47)
Observe that the middle two terms in Eq.(46) are not in the symplectic forms. Therefore, by assuming the Darboux transformation
ψ T k → ψ ′ T k = e 2i ζ ∇ 2 ψ T k ,(48)
with an additional assumption of
ǫ ijkψT i γ 5 γ j ψ T k = me −2iζ ∇ 2ψ T i σ i0 ,(49)
the undesired terms in Eq.(46) drop and thus we are left with a completely transverse Lagrangian
L = A (k) T 1ψ T k + A (k) T 2ψ T k − H T (ψ T k ,ψ T k ) − λ k φ k (ψ T k ,ψ T k ) −λ iφ i (ψ T k ,ψ T k ).(50)
Note that the extra condition in Eq.(49) is enforced by the Darboux transformation and the constraint equations; otherwise, the coupled terms in the symplectic part could not be decoupled. In fact, the physical interpretation of Eq.(49) remains unclear, so it would be particularly interesting to determine whether or not it is related to the true constraints. Here, as in Eq.(117), the remaining variables (i.e., the longitudinal components) are denoted as the Lagrange multipliers
λ k = ∂ k ζ ∇ 2 ,λ i = ∂ iζ ∇ 2 ,(51)
such that
φ k (ψ T k ,ψ T k ) = iǫ ijkψT i γ 5 γ 0 ψ T j ,φ i (ψ T k ,ψ T k ) = −iǫ ijkψT j γ 5 ψ T k .(52)
As noted in [11,12], since the last two terms in Eq.(50) cannot be dropped via elimination of constraints anymore, Eq.(52) corresponds to the true constraints of the system. Note also that the true constraints cannot be rewritten as linear combinations of the ones obtained during the elimination of constraints; otherwise, they would also have dropped when that elimination was performed. These are the constraints that cannot be eliminated any further. Therefore, setting φ k (ψ T k ,ψ T k ) andφ i (ψ T k ,ψ T k ) to zero provides an unconstrained, completely transverse Lagrangian
L = A (k) T 1ψ T k + A (k) T 2ψ T k − H T (ψ T k ,ψ T k ).(53)
Thus, with the definition of the dynamical variables (ξ 1 , ξ 2 ) = (ψ T k ,ψ T k ), the non-vanishing equaltime FJ bracket for the free massive Rarita-Schwinger theory becomes
{ψ T i (x),ψ T k (y)} F J = i 2 γ k γ i γ 0 δ 3 (x − y).(54)
which is the same as the one found in [19].
IV. GAUGED MASSLESS RARITA-SCHWINGER THEORY
In this section, we study the massless Rarita-Schwinger field minimally coupled to an external electromagnetic field which is described by the Lagrangian
L = −ǫ λµνρψ λ γ 5 γ µ → D ν ψ ρ .(55)
Here the gauge covariant derivative is D ν = ∂ ν + gA ν , where g is the relevant coupling constant and A µ is an Abelian gauge field. The field equations read
ǫ λµνρ γ 5 γ µ → D ν ψ ρ = 0, ǫ λµνρψ λ ← D ν γ 5 γ µ = 0.(56)
As in the free massless and massive theories, in deducing some basic properties of the theory we will only deal with the first equation of Eq.(56). Notice that with the help of the identity in Eq.(5), Eq.(56) turns into
/ Dψ λ − D λ (γ · ψ) − γ λ D · ψ + γ λ / D(γ · ψ) = 0.(57)
Moreover, contracting the Eq.(57) with γ λ yields
/ D(γ · ψ) − D · ψ = 0.(58)
Finally, substituting the Eq.(58) in Eq. (57) gives
/ Dψ λ − D λ (γ · ψ) = 0.(59)
On the other side, contracting Eq.(56) with D λ gives
gǫ λµνρ γ 5 γ µ F λν ψ ρ = 0,(60)
which is a secondary constraint in the theory and does not provide any further simplification in the field equation in Eq.(59).
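For completeness, Eq.(60) originates in the curvature of the gauge-covariant derivative. With the normalization [D_λ, D_ν] = gF_{λν} (our assumption for the coupling convention), one has, up to an overall factor,

```latex
D_\lambda\!\left(\epsilon^{\lambda\mu\nu\rho}\gamma_5\gamma_\mu D_\nu\psi_\rho\right)
  = \epsilon^{\lambda\mu\nu\rho}\gamma_5\gamma_\mu D_\lambda D_\nu \psi_\rho
  = \tfrac{1}{2}\,\epsilon^{\lambda\mu\nu\rho}\gamma_5\gamma_\mu\,[D_\lambda, D_\nu]\,\psi_\rho
  = \tfrac{g}{2}\,\epsilon^{\lambda\mu\nu\rho}\gamma_5\gamma_\mu F_{\lambda\nu}\,\psi_\rho ,
```

where the antisymmetry of ǫ^{λμνρ} in λ, ν is what converts the product of covariant derivatives into the commutator.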
Symplectic Reduction for Gauged Massless Rarita-Schwinger Theory
Let us now apply the first-order symplectic formalism to the massless Rarita-Schwinger fields minimally coupled to an external electromagnetic field. For this purpose, let us note that the Lagrangian of the theory in Eq.(55) can be recast in a more symmetric form as follows
L = − 1 2 ǫ λµνρψ λ γ 5 γ µ → D ν ψ ρ + 1 2 ǫ λµνρψ λ ← D ν γ 5 γ µ ψ ρ .(61)
Similarly, by splitting the Lagrangian in Eq.(61) into its space and time components, one gets
L = A (k) 1ψ k + A (k) 2ψ k − H(ψ 0 ,ψ 0 , ψ k ,ψ k , A 0 , A k ),(62)
where the symplectic coefficients are
A (k) 1 = − 1 2 ǫ ijkψ i γ 5 γ j , A (k) 2 = 1 2 ǫ ijk γ 5 γ j ψ i ,(63)
and the related symplectic potential is
H(ψ 0 ,ψ 0 , ψ k ,ψ k , A 0 , A k ) = 1 2 ǫ ijkψ 0 γ 5 γ i ∂ j ψ k − 1 2 ǫ ijkψ i γ 5 γ 0 ∂ j ψ k − 1 2 ǫ ikjψ i γ 5 γ k ∂ j ψ 0 − 1 2 ǫ ijk ∂ jψ0 γ 5 γ i ψ k + 1 2 ǫ ijk ∂ jψi γ 5 γ 0 ψ k + 1 2 ǫ ikj ∂ jψi γ 5 γ k ψ 0 + gǫ ijkψ i γ 5 γ j A 0 ψ k + gǫ ijkψ 0 γ 5 γ i A j ψ k − gǫ ijkψ i γ 5 γ 0 A j ψ k − gǫ ikjψ i γ 5 γ k A j ψ 0 .(64)
Note that although the gauge fields are non-dynamical variables, since they are external potentials one is not allowed to vary them and set these variations to zero. Otherwise, as in quantum electrodynamics with an external potential, the gauge-field current would be forced to vanish, which is not a desired situation. Hence, as in the free theories, ψ 0 andψ 0 are the only zero modes of the theory. Therefore, variations with respect to ψ 0 andψ 0 respectively give the following constraint equations
ǫ ikj ∂ jψi γ 5 γ k − gǫ ikjψ i γ 5 γ k A j = 0, ǫ ijk γ 5 γ i ∂ j ψ k + gǫ ijk γ 5 γ i A j ψ k = 0.(65)
As was done in the free theories, by decomposing the fields into the local transverse and γ-traceless parts as in Eq.(23) 3 and using the constraints in Eq.(65) as well as by assuming the Darboux transformation (48), with an additional assumption of
iǫ ijkψT i γ 5 γ j ψ T k = ge −2iζ ∇ 2 ǫ ijkψT i γ 5 γ k A j ,(66)
the Lagrangian (62) turns into a completely transverse one
L = A (k) T 1ψ T k + A (k) T 2ψ T k − H T (ψ T k ,ψ T k ) − λ k φ k (ψ T k ,ψ T k ) −λ iφ i (ψ T k ,ψ T k ),(67)
where the transverse symplectic coefficients and potential read
A (k) T 1 = − 1 2 ǫ ijkψT i γ 5 γ j , A (k) T 2 = 1 2 ǫ ijk γ 5 γ j ψ T i , H T (ψ T k ,ψ T k ) = − 1 2 ǫ ijkψT i γ 5 γ 0 ∂ j ψ T k + 1 2 ǫ ijk ∂ jψ T i γ 5 γ 0 ψ T k + gǫ ijkψT i γ 5 γ j A 0 ψ T k − gǫ ijkψT i γ 5 γ 0 A j ψ T k . (68)
Note that the symplectic potential also contains gauge field parts. Furthermore, as is given in (117), the remaining variables (i.e., the longitudinal components) are denoted as the Lagrange multipliers
λ k = ∂ k ζ ∇ 2 ,λ i = ∂ iζ ∇ 2 ,(69)
such that
φ k (ψ T k ,ψ T k ) = iǫ ijkψT i γ 5 γ 0 ψ T j + gǫ ijkψT i γ 5 γ j A 0 + gǫ ijkλ i γ 5 γ j A 0 − gǫ ijkψT i γ 5 γ 0 A j φ i (ψ T k ,ψ T k ) = −iǫ ijkψT j γ 5 ψ T k + gǫ ijk γ 5 γ j A 0 ψ T k − gǫ ijk γ 5 γ 0 A j ψ T k − gǫ ijk γ 5 γ 0 A j λ k ,(70)
which cannot be dropped via elimination of constraints anymore; so, according to [11,12], they are the true constraints of the system. Thus, by setting φ k (ψ T k ,ψ T k ) andφ i (ψ T k ,ψ T k ) to zero, one arrives at a completely transverse Lagrangian
L = A (k) T 1ψ T k + A (k) T 2ψ T k − H T (ψ T k ,ψ T k ).(71)
Finally, with the definition of the symplectic dynamical variables (ξ 1 , ξ 2 ) = (ψ T k ,ψ T k ), one obtains the non-vanishing equal-time FJ basic bracket for the gauged massless Rarita-Schwinger theory as follows
{ψ T i (x),ψ T k (y)} F J = i 2 γ k γ i γ 0 δ 3 (x − y),(72)
which is consistent with the Pauli-spin part of the fundamental bracket obtained in [10], where Adler studies the Dirac quantization of the non-Abelian gauged Rarita-Schwinger theory via the left-chiral component of the fermionic field. One should notice that such a difference is expected: in [10] the gauge fields are non-Abelian variables, whereas here they are Abelian vector fields.
V. GAUGED MASSIVE RARITA-SCHWINGER THEORY
In this section, we study the massive Rarita-Schwinger theory minimally coupled to an external electromagnetic field which is described by the Lagrangian
L = −ǫ λµνρψ λ γ 5 γ µ → D ν ψ ρ + imψ λ σ λρ ψ ρ ,(73)
where the gauge-covariant derivative is D ν = ∂ ν + gA ν . Accordingly, the field equations for the independent anti-commuting fermionic fields are
ǫ λµνρ γ 5 γ µ → D ν ψ ρ − imσ λρ ψ ρ = 0, ǫ λµνρψ λ ← D ν γ 5 γ µ + imψ λ σ λρ = 0,(74)
which with the help of the identity in Eq.(5) turns into
i[ / Dψ λ − D λ (γ · ψ) − γ λ D · ψ + γ λ / D(γ · ψ)] − imσ λρ ψ ρ = 0.(75)
Moreover, contraction of the equation in Eq.(75) with γ λ gives
2i( / D(γ · ψ) − D · ψ) + 3mγ · ψ = 0.(76)
And contraction of field equation in Eq.(74) with D λ becomes
gǫ λµνρ γ 5 γ µ F λν ψ ρ + m[( / D(γ · ψ) − D · ψ] = 0,(77)
which with the additional redefinition
F d = F d µ ρ = ǫ µ ρλ ν F λ ν ,(78)
turns into
m[ / D(γ · ψ) − D · ψ] − gγ 5 γ · F d · ψ = 0.(79)
Combining Eq.(76) and Eq.(79), one gets the secondary constraint that determines the equation of motion of ψ 0 component as follows
γ · ψ = − 2 3 m −2 igγ 5 γ · F d · ψ.(80)
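The intermediate algebra leading to Eq.(80) is short: solving Eq.(79) for the bracketed combination (assuming m ≠ 0) and inserting it into Eq.(76) gives

```latex
% Eq.(79) with m \neq 0:
\slashed{D}(\gamma\cdot\psi) - D\cdot\psi = \frac{g}{m}\,\gamma_5\,\gamma\cdot F^{d}\cdot\psi .
% Inserting this into Eq.(76):
\frac{2ig}{m}\,\gamma_5\,\gamma\cdot F^{d}\cdot\psi + 3m\,\gamma\cdot\psi = 0
\;\Longrightarrow\;
\gamma\cdot\psi = -\frac{2}{3}\,m^{-2}\,ig\,\gamma_5\,\gamma\cdot F^{d}\cdot\psi .
```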
Observe that using Eq.(80) in Eq.(79) gives the relation
D · ψ = −( / D − 3im 2 ) 2 3 m −2 igγ 5 γ · F d · ψ.(81)
Finally, by plugging Eq.(80) and Eq.(81) into the field equation in Eq.(75), one obtains
(i / D − m)ψ λ + (iD λ + m 2 γ λ ) 2 3 m −2 igγ 5 γ · F d · ψ = 0,(82)
which is the equation that is used by Velo and Zwanziger in deducing the acausal wave propagation of the solution by finding the future-directed normals to the surfaces at each point [3].
Symplectic Reduction for Gauged Massive Rarita-Schwinger Theory
Finally, let us apply FJ symplectic Hamiltonian reduction to the massive Rarita-Schwinger field minimally coupled to an external electromagnetic field. In order to do so, let us rewrite the Lagrangian in Eq.(73) in a more symmetric form:
L = − 1 2 ǫ λµνρψ λ γ 5 γ µ → D ν ψ ρ + 1 2 ǫ λµνρψ λ ← D ν γ 5 γ µ ψ ρ + imψ λ σ λρ ψ ρ .(83)
Subsequently, by splitting Lagrangian in Eq.(83) into its space and time components, one gets
L = A (k) 1ψ k + A (k) 2ψ k − H(ψ 0 ,ψ 0 , ψ k ,ψ k , A 0 , A k ),(84)
where the symplectic coefficients are
A (k) 1 = − 1 2 ǫ ijkψ i γ 5 γ j , A (k) 2 = 1 2 ǫ ijk γ 5 γ j ψ i ,(85)
and the relevant Hamiltonian is
H(ψ 0 ,ψ 0 , ψ k ,ψ k , A 0 , A k ) = 1 2 ǫ ijkψ 0 γ 5 γ i ∂ j ψ k − 1 2 ǫ ijkψ i γ 5 γ 0 ∂ j ψ k − 1 2 ǫ ikjψ i γ 5 γ k ∂ j ψ 0 − 1 2 ǫ ijk ∂ jψ0 γ 5 γ i ψ k + 1 2 ǫ ijk ∂ jψi γ 5 γ 0 ψ k + 1 2 ǫ ikj ∂ jψi γ 5 γ k ψ 0 − imψ 0 σ 0i ψ i − imψ i σ i0 ψ 0 − imψ i σ ij ψ j + gǫ ijkψ i γ 5 γ j A 0 ψ k + gǫ ijkψ 0 γ 5 γ i A j ψ k − gǫ ijkψ i γ 5 γ 0 A j ψ k − gǫ ikjψ i γ 5 γ k A j ψ 0 .(86)
Note that, as emphasized in the massless gauged part, since the gauge fields are external potentials, one is not allowed to set their variations to zero. Hence, ψ 0 andψ 0 are the only Lagrange multipliers that induce constraints on the system. Therefore, elimination of the constraints yields
ǫ ikj ∂ jψi γ 5 γ k − imψ i σ i0 − gǫ ikjψ i γ 5 γ k A j = 0, ǫ ijk γ 5 γ i ∂ j ψ k − imσ 0i ψ i + gǫ ijk γ 5 γ i A j ψ k = 0. (87)
Like the free massive theory, by decomposing the dynamical components into the local transverse and traceless parts as in Eq.(23) 4 as well as using constraints in Eq.(87) and the Darboux transformation (48), with an additional assumption of
iǫ ijkψT i γ 5 γ j ψ T k = e −2iζ ∇ 2 (imψ i σ i0 + gǫ ijkψT i γ 5 γ k A j ),(89)
the Lagrangian, up to a boundary term, turns into
L = A (k) T 1ψ T k + A (k) T 2ψ T k − H T (ψ T k ,ψ T k ) − λ k φ k (ψ T k ,ψ T k ) −λ iφ i (ψ T k ,ψ T k ).(90)
Here the transverse symplectic coefficients and potential are
A (k) T 1 = − 1 2 ǫ ijkψT i γ 5 γ j , A (k) T 2 = 1 2 ǫ ijk γ 5 γ j ψ T i , H T (ψ T k ,ψ T k ) = − 1 2 ǫ ijkψT i γ 5 γ 0 ∂ j ψ T k + 1 2 ǫ ijk ∂ jψ T i γ 5 γ 0 ψ T k + gǫ ijkψT i γ 5 γ j A 0 ψ T k − gǫ ijkψT i γ 5 γ 0 A j ψ T k − imψ T i σ ij ψ T j .
(91)

4 In this case, from the constraint equation, one finds
ζ = ( / ∂ + im + g γ · A)γ · ψ T − gA · ψ T .(88)
Notice that, different from the free cases, the symplectic potential involves mass and gauge-potential terms. Here in Eq.(90), as in the previous sections, the Lagrange multipliers are the longitudinal parts of the vector-spinor field, and the corresponding constraints read
φ k (ψ T k ,ψ T k ) = iǫ ijkψT i γ 5 γ 0 ψ T j + gǫ ijkψT i γ 5 γ j A 0 + gǫ ijkλ i γ 5 γ j A 0 − gǫ ijkψT i γ 5 γ 0 A j φ i (ψ T k ,ψ T k ) = −iǫ ijkψT j γ 5 ψ T k + gǫ ijk γ 5 γ j A 0 ψ T k − gǫ ijk γ 5 γ 0 A j ψ T k − gǫ ijk γ 5 γ 0 A j λ k ,(92)
which are the same as Eq.(70). Similarly, by setting Eq.(92) to zero [11,12], one arrives at a completely transverse Lagrangian
L = A (k) T 1ψ T k + A (k) T 2ψ T k − H T (ψ T k ,ψ T k ),(93)
whose symplectic part is same as the ones found so far. Thus, with the definition of the dynamical variables (ξ 1 , ξ 2 ) = (ψ T k ,ψ T k ), the non-vanishing equal-time bracket for the gauged massive Rarita-Schwinger theory becomes
{ψ T i (x),ψ T k (y)} F J = i 2 γ k γ i γ 0 δ 3 (x − y),(94)
which is identical to the one found in [20].
VI. CONCLUSIONS
In this work, we studied 3 + 1-dimensional free and Abelian gauged Grassmannian Rarita-Schwinger theories, in their massless and massive extensions, in the context of the Faddeev-Jackiw first-order symplectic formalism. We obtained the fundamental brackets of the theories, which are consistent with the results found in the literature but are obtained in a simpler way. The brackets are independent of whether the theories contain mass or gauge fields, and thus the structure of the constraints and symplectic potentials determines the characteristic behavior of the theories. It would be particularly interesting to find proper transformations relating the constraints obtained via the Faddeev-Jackiw symplectic method to those obtained via the Dirac method. But since the constraints obtained in both methods are rather complicated, in this paper we restrict ourselves to the Faddeev-Jackiw analysis of Rarita-Schwinger theories and leave this as future work. Comparing with the literature, we conclude that the Faddeev-Jackiw symplectic approach provides a more economical way of deriving the fundamental brackets for the Rarita-Schwinger theories. In addition, we note that, in contrast to the massive theory, the Dirac field equations for the free massless Rarita-Schwinger theory cannot be deduced covariantly.
VIII. APPENDIX A: TRANSVERSE AND TRACELESS DECOMPOSITION OF FIELDS
In this section, let us give the derivations of (26) and (27): To solve the constraint equations, one can decompose the independent fields into its local transverse and γ-traceless parts as
ψ i = ψ T i +ψ iψi =ψ T i +ψ i ,(95)
where "T " and "ˆ" stand for the transverse and traceless parts, respectively. Here the γ-traceless parts areψ
i = ψ i − 1 3 γ i γ j ψ j ,ψ i =ψ i − 1 3ψ j γ j γ i .(96)
Therefore, we have
∂ i ψ T i = ∂ iψT i = 0 and γ iψ i = γ iψ i = 0.(97)
To find how the constraint equations in Eq.(22) decomposes under Eq.(95), let us focus on the following constraint equation
ǫ ijk γ 5 γ i ∂ j ψ k = 0.(98)
Note that with the identity ǫ ijk γ 5 γ k = −γ 0 σ ij and Eq.(95), the Eq.(98) turns into
iγ 0 2 [γ k , γ j ]∂ j (ψ T k +ψ k ) = 0.(99)
Furthermore, by using the relation
[γ k , γ j ] = {γ k , γ j } − 2γ j γ k = 2(η kj − γ j γ k ),(100)
and the transverse and traceless properties of the fields in Eq.(97), one gets
iγ 0 ∂ kψ k − γ j γ k ∂ j ψ T k = 0.(101)
Notice that after contraction with iγ 0 and relabeling of the dummy indices, it becomes
∂ i ψ̂ i − γ m γ n ∂ m ψ T n = 0,(102)
which yields
ψ̂ i = ∂ i ζ / ∇ 2 , where ζ = γ m γ n ∂ m ψ T n .(103)
This structure is also valid for the other theories. The only difference arises in the definition of ζ which we give its explicit form in each section. Finally, by substituting this result into the constraint in Eq.(98), it turns into
ǫ ijk γ 5 γ i ∂ j ψ T k + ǫ ijk γ 5 γ i ∂ j ∂ k ζ ∇ 2 = 0.(104)
Because of the symmetric and anti-symmetric contraction in "j, k" indices, the second term drops, and we are left with the transverse constraint equation
ǫ ijk γ 5 γ i ∂ j ψ T k = 0.(105)
IX. APPENDIX B: FADDEEV-JACKIW HAMILTONIAN REDUCTION FOR CONSTRAINED AND UNCONSTRAINED SYSTEMS
In this section, we review the Faddeev-Jackiw symplectic first-order formalism which was introduced particularly to quantize the constrained systems [11,12]. The method works on the first-order Lagrangian and does not require any classification of constraints. To better understand how the method works, let us consider
L = p αq α − H(p, q), α = 1, . . . n.(106)
With the definition of the 2n-component phase-space coordinates
ξ α = p α , α = 1, · · · , n and ξ β = q β , β = n + 1, · · · , 2n,(107)
Eq.(106) can be rewritten as a Lagrangian one-form
Ldt = 1 2 ξ α f 0 αβ dξ β − V (ξ)dt.(108)
Here the symplectic 2n × 2n matrix is
f^0_{\alpha\beta} = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}_{\alpha\beta} ,
where I is the identity matrix; A 0 ≡ 1 2 ξ α f 0 αβ dξ β is the canonical one-form; f 0 ≡ dA 0 ≡ 1 2 f 0 αβ dξ α dξ β is the symplectic two-form. Note that f 0 is constant [11,12]. But, in general, the symplectic two-form does not have to be constant. Therefore, let us now consider the following generic Lagrangian
Ldt = A α dξ α − H(ξ)dt, α = 1, · · · , 2n,(109)
where A α is an arbitrary one-form. The variation of Eq.(109) with respect to ξ yields
f βαξα = ∂H ∂ξ β where f βα = ∂A α ∂ξ β − ∂A β ∂ξ α .(110)
In the case when the symplectic matrix is nonsingular, Eq.(110) becomes
ξ̇ α = f −1 αβ ∂H(ξ) ∂ξ β .(111)
Thus, by using Eq.(111) and the Poisson brackets for the bosonic variables, one obtains the FJ fundamental brackets as follows
{ξ β , ξ α } F J = f −1 αβ .(112)
Note that, in the case of Grassmannian variables, using the anti-commutation property of the variables as well as the Poisson brackets for Grassmannian variables [21], one has
ξ̇ α = ∂H(ξ) ∂ξ β (f −1 ) αβ ,(113)
and the corresponding fundamental brackets become
{ξ β , ξ α } F J = −(f −1 ) αβ .(114)
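Before turning to the constrained case, Eqs.(110)–(112) can be made concrete with the simplest bosonic example, L = p q̇ − H(p, q). The short sympy sketch below is our own illustration, not from the paper; it computes the symplectic matrix of Eq.(110) and its inverse, from which the FJ brackets of Eq.(112) are read off.

```python
import sympy as sp

# Toy bosonic example: L = p*qdot - H(p, q), phase-space coordinates xi = (p, q).
p, q = sp.symbols('p q')
xi = [p, q]
A = [sp.Integer(0), p]   # one-form A = p dq, i.e. A_p = 0, A_q = p

# f_{beta alpha} = dA_alpha/dxi_beta - dA_beta/dxi_alpha, as in Eq.(110).
f = sp.Matrix(2, 2, lambda b, a: sp.diff(A[a], xi[b]) - sp.diff(A[b], xi[a]))
assert f == sp.Matrix([[0, 1], [-1, 0]])

# Nonsingular case: invert to obtain the matrix entering Eqs.(111)-(112).
finv = f.inv()
assert finv == sp.Matrix([[0, -1], [1, 0]])
print(finv)
```

The constant off-diagonal entries of f⁻¹ reproduce the canonical Poisson structure of the (p, q) pair, as expected for an unconstrained first-order Lagrangian.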
On the other side, when there are constraints in the system, induced by the existence of zero modes, the symplectic matrix cannot be inverted. In that case, Darboux's theorem states that for any given one-form A = A α dξ α , α = 1, · · · , N , one can always perform the change of variables
ξ α → (p β , q γ , z ρ ), β, γ = 1, · · · , n, ρ = 1, · · · , N − 2n,(115)
so that A turns into A = A α dq α . As is seen above, when there is no constraint, Eq.(115) diagonalizes f αβ . However, when there are constraints, only a 2n × 2n sub-block of f αβ is diagonalized, and the remaining N − 2n degrees of freedom (corresponding to the zero modes z ρ ) will not be in the symplectic form [11,12]; yet they occur in the rest of the Lagrangian:
L = p α dq α − Φ(p, q, z)dt.(116)
The equations ∂Φ ∂z α = 0 can be used to eliminate the zero modes z ρ only if ∂ 2 Φ ∂z ρ ∂z β is nonsingular. In the generic case, after diagonalization and elimination of as many z's as possible, one ultimately arrives at
L = p αq α − H(p, q) − λ ρ φ ρ (p, q),(117)
where the remaining z's are denoted by λ ρ (namely, Lagrange multipliers) and the φ ρ are the only true constraints in the system
φ ρ = 0.(118)
A. Symplectic Reduction for the Dirac Theory of spin-1/2 fields

In this section, to see how the method works, we provide the FJ Hamiltonian reduction of the Dirac theory of spin-1/2 fields as an example. For this purpose, let us note that the Lagrangian can be written as
L = − i 2ψ ← / ∂ ψ + i 2ψ → / ∂ ψ − mψψ.(119)
As mentioned above, we assume that the independent dynamical variables are anti-commuting Grassmann variables. In order to pass to the symplectic analysis of the system, one needs to separate the dynamical components from the non-physical ones by splitting the Lagrangian (119) into its time and space components. In doing so, one arrives at
L = \frac{i}{2}\,\bar\psi\gamma^0\dot\psi - \frac{i}{2}\,\dot{\bar\psi}\gamma^0\psi - \Big(\frac{i}{2}\,(\partial_i\bar\psi)\gamma^i\psi - \frac{i}{2}\,\bar\psi\gamma^i\partial_i\psi + m\bar\psi\psi\Big) ,(120)
whose variation, up to a boundary term, yields
δL = δψ̄ iγ 0 ψ̇ + δψ iγ 0 ψ̄̇ − δψ̄(−iγ i ∂ i ψ + mψ) + δψ(−iγ i ∂ i ψ̄ − mψ̄) ,(121)
from which one gets the Dirac field equations as follows
iγ 0 ψ̄̇ = −iγ i ∂ i ψ̄ − mψ̄ , iγ 0 ψ̇ = −iγ i ∂ i ψ + mψ .(122)
As is seen from Eq.(121) and Eq.(122), the symplectic matrix for the Dirac theory and its inverse are
f_{\alpha\beta} = \begin{pmatrix} 0 & i\gamma^0 \\ i\gamma^0 & 0 \end{pmatrix} , \qquad f^{-1}_{\alpha\beta} = \begin{pmatrix} 0 & -i\gamma^0 \\ -i\gamma^0 & 0 \end{pmatrix} = -f_{\alpha\beta} .
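One can quickly confirm this matrix algebra numerically, using (γ⁰)² = 1 in the Dirac representation. The check below is our own, not part of the paper:

```python
import numpy as np

# gamma^0 in the Dirac representation; (gamma^0)^2 = identity.
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
Z4 = np.zeros((4, 4), dtype=complex)

# Symplectic matrix and its claimed inverse for the Dirac theory.
f    = np.block([[Z4,        1j * g0], [1j * g0,  Z4]])
finv = np.block([[Z4,       -1j * g0], [-1j * g0, Z4]])

assert np.allclose(f @ finv, np.eye(8))   # f * f^{-1} = identity
assert np.allclose(finv, -f)              # f^{-1} = -f
print("Dirac symplectic matrix inverse verified")
```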
One should observe that, in contrast to the bosonic case, the symplectic matrix for the Grassmannian variables is symmetric and the fundamental brackets are defined as follows
{ξ β , ξ α } F J = −(f −1 ) αβ ,(123)
from which one gets the basic bracket for the Dirac theory
{ψ, ψ̄} F J = iγ 0 .(124)
This is also valid for the massless theory. Note that since the theory does not have any gauge redundancy, one does not need to assume any gauge-fixing.
For the quantization of the constrained system, see for example[13][14][15].
Since the gauge choice ∂ i ψ i = 0 on the initial data will also arise due to self-consistency, one should also be able to fix the gauge parameter via ǫ = − 1 ∇ 2 ∂ iψ i . But since we start with Eq.(11), we have to give Eq.(12).
Notice that, in this case, the longitudinal part becomes ζ = ( / ∂ + g γ · A)γ · ψ T − gA · ψ T .
ACKNOWLEDGMENTS

We would like to thank Roman Jackiw for suggesting the problem and several useful discussions. We would also like to thank Bayram Tekin for useful suggestions, and Markus Schulze, Gilly Elor and Ibrahim Burak Ilhan for critical readings of the paper. S.D. is supported by the TUBITAK 2219 Scholarship.
[1] W. Rarita and J. Schwinger, "On a theory of particles with half integral spin," Phys. Rev. 60, 61 (1941).
[2] K. Johnson and E. C. G. Sudarshan, "Inconsistency of the local field theory of charged spin 3/2 particles," Annals Phys. 13, 126 (1961).
[3] G. Velo and D. Zwanziger, "Propagation and quantization of Rarita-Schwinger waves in an external electromagnetic potential," Phys. Rev. 186, 1337 (1969).
[4] D. Z. Freedman, P. van Nieuwenhuizen and S. Ferrara, "Progress Toward a Theory of Supergravity," Phys. Rev. D 13, 3214 (1976).
[5] M. T. Grisaru, H. N. Pendleton and P. van Nieuwenhuizen, "Supergravity and the S Matrix," Phys. Rev. D 15, 996 (1977).
[6] M. T. Grisaru and H. N. Pendleton, "Soft Spin 3/2 Fermions Require Gravity and Supersymmetry," Phys. Lett. B 67, 323 (1977).
[7] A. K. Das and D. Z. Freedman, "Gauge Quantization for Spin 3/2 Fields," Nucl. Phys. B 114, 271 (1976).
[8] N. Marcus, "Composite Anomalies in Supergravity," Phys. Lett. B 157, 383 (1985).
[9] S. L. Adler, "SU(8) family unification with boson-fermion balance," Int. J. Mod. Phys. A 29, 1450130 (2014).
[10] S. L. Adler, "Quantized Gauged Massless Rarita-Schwinger Fields," Phys. Rev. D 92, no. 8, 085023 (2015); S. L. Adler, "Classical Gauged Massless Rarita-Schwinger Fields," Phys. Rev. D 92, no. 8, 085022 (2015).
[11] L. D. Faddeev and R. Jackiw, "Hamiltonian Reduction of Unconstrained and Constrained Systems," Phys. Rev. Lett. 60, 1692 (1988).
[12] R. Jackiw, "Diverse topics in theoretical and mathematical physics," World Scientific, Singapore (1995).
[13] E. S. Fradkin and T. E. Fradkina, "Quantization of Relativistic Systems with Boson and Fermion First and Second Class Constraints," Phys. Lett. B 72, 343 (1978).
[14] I. A. Batalin and G. A. Vilkovisky, "Relativistic S Matrix of Dynamical Systems with Boson and Fermion Constraints," Phys. Lett. B 69, 309 (1977).
[15] M. Henneaux and C. Teitelboim, "Quantization of gauge systems," Princeton Univ. Pr., Princeton, USA (1992).
[16] P. A. M. Dirac, "Generalized Hamiltonian dynamics," Proc. Roy. Soc. Lond. A 246, 326 (1958).
[17] D. Z. Freedman and A. Van Proeyen, "Supergravity," Cambridge University Press (2012).
[18] M. Okawa, "Covariant Canonical Quantization Of Massless Rarita-Schwinger Field," Prog. Theor. Phys. 62, 305 (1979).
[19] V. Pascalutsa, "Quantization of an interacting spin-3/2 field and the Delta isobar," Phys. Rev. D 58, 096002 (1998).
[20] K. Inoue, M. Omote and M. Kobayashi, "Quantization of a Spin 3/2 Field Interacting With the Electromagnetic Field," Prog. Theor. Phys. 63, 1413 (1980).
[21] R. Casalbuoni, "On the Quantization of Systems with Anticommutating Variables," Nuovo Cim. A 33, 115 (1976).
DISCOVERING LATENT NETWORK STRUCTURE IN POINT PROCESS DATA
Scott W Linderman
Harvard University
Ryan P Adams
Harvard University
Networks play a central role in modern data analysis, enabling us to reason about systems by studying the relationships between their parts. Most often in network analysis, the edges are given. However, in many systems it is difficult or impossible to measure the network directly. Examples of latent networks include economic interactions linking financial instruments and patterns of reciprocity in gang violence. In these cases, we are limited to noisy observations of events associated with each node. To enable analysis of these implicit networks, we develop a probabilistic model that combines mutually-exciting point processes with random graph models. We show how the Poisson superposition principle enables an elegant auxiliary variable formulation and a fully-Bayesian, parallel inference algorithm. We evaluate this new model empirically on several datasets.
1. Introduction. Many types of modern data are characterized via relationships on a network. Social network analysis is the most commonly considered example, where the properties of individuals (vertices) can be inferred from "friendship" type connections (edges). Such analyses are also critical to understanding regulatory biological pathways, trade relationships between nations, and propagation of disease. The tasks associated with such data may be unsupervised (e.g., identifying low-dimensional representations of edges or vertices) or supervised (e.g., predicting unobserved links in the graph). Traditionally, network analysis has focused on explicit network problems in which the graph itself is considered to be the observed data. That is, the vertices are considered known and the data are the entries in the associated adjacency matrix. A rich literature has arisen in recent years for applying statistical machine learning models to this type of problem, e.g., Liben-Nowell & Kleinberg (2007); Hoff (2008); Goldenberg et al. (2010).
In this paper we are concerned with implicit networks that cannot be observed directly, but about which we wish to perform analysis. In an implicit network, the vertices or edges of the graph may not be directly observed, but the graph structure may be inferred from noisy emissions. These noisy observations are assumed to have been generated according to underlying dynamics that respect the latent network structure.
For example, trades on financial stock markets are executed thousands of times per second. Trades of one stock are likely to cause subsequent activity on stocks in related industries. How can we infer such interactions and disentangle them from market-wide fluctuations that occur throughout the day? Discovering latent structure underlying financial markets not only reveals interpretable patterns of interaction, but also provides insight into the stability of the market. In Section 4 we will analyze the stability of mutually-excitatory systems, and in Section 6 we will explore how stock similarity may be inferred from trading activity.
As another example, both the edges and vertices may be latent. In Section 7, we examine patterns of violence in Chicago, which can often be attributed to social structures in the form of gangs. We would expect that attacks from one gang onto another might induce cascades of violence, but the vertices (gang identity of both perpetrator and victim) are unobserved. As with the financial data, it should be possible to exploit dynamics to infer these social structures. In this case spatial information is available as well, which can help inform latent vertex identities.
In both of these examples, the noisy emissions have the form of events in time, or "spikes," and our intuition is that a spike at a vertex will induce activity at adjacent vertices. In this paper, we formalize this idea into a probabilistic model based on mutually-interacting point processes. Specifically, we combine the Hawkes process (Hawkes, 1971) with recently developed exchangeable random graph priors. This combination allows us to reason about latent networks in terms of the way that they regulate interaction in the Hawkes process. Inference in the resulting model can be done with Markov chain Monte Carlo, and an elegant data augmentation scheme results in efficient parallelism.
2. Preliminaries.
2.1. Poisson Processes. Point processes are fundamental statistical objects that yield random finite sets of events $\{s_n\}_{n=1}^N \subset \mathcal{S}$, where $\mathcal{S}$ is a compact subset of $\mathbb{R}^D$, for example, space or time. The Poisson process is the canonical example. It is governed by a nonnegative "rate" or "intensity" function, $\lambda(s) : \mathcal{S} \to \mathbb{R}_+$. The number of events in a subset $\mathcal{S}' \subset \mathcal{S}$ follows a Poisson distribution with mean $\int_{\mathcal{S}'} \lambda(s)\,ds$. Moreover, the numbers of events in disjoint subsets are independent.
We use the notation $\{s_n\}_{n=1}^N \sim \mathrm{PP}(\lambda(s))$ to indicate that a set of events $\{s_n\}_{n=1}^N$ is drawn from a Poisson process with rate $\lambda(s)$. The likelihood is given by

$$p(\{s_n\}_{n=1}^N \mid \lambda(s)) = \exp\left\{-\int_{\mathcal{S}} \lambda(s)\,ds\right\} \prod_{n=1}^N \lambda(s_n). \tag{1}$$
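Equation 1 lends itself to a quick numerical sketch. The code below (NumPy assumed; the sinusoidal intensity is an illustrative choice, not from the paper) draws events on [0, T] by thinning and evaluates the log of Equation 1, approximating the integral with a midpoint rule:

```python
import numpy as np

def sample_poisson_process(rate_fn, rate_max, T, rng):
    """Sample an inhomogeneous Poisson process on [0, T] by thinning:
    propose from a homogeneous PP(rate_max), keep s with prob rate_fn(s)/rate_max."""
    proposals = rng.uniform(0.0, T, size=rng.poisson(rate_max * T))
    keep = rng.uniform(size=proposals.size) < rate_fn(proposals) / rate_max
    return np.sort(proposals[keep])

def poisson_log_likelihood(events, rate_fn, T, n_grid=100_000):
    """log p({s_n} | lambda) = -int_0^T lambda(s) ds + sum_n log lambda(s_n)  (Eq. 1)."""
    mids = (np.arange(n_grid) + 0.5) * (T / n_grid)   # midpoint rule for the integral
    integral = np.sum(rate_fn(mids)) * (T / n_grid)
    return -integral + np.sum(np.log(rate_fn(events)))

rng = np.random.default_rng(0)
rate = lambda s: 2.0 + np.sin(s)                      # illustrative intensity
events = sample_poisson_process(rate, rate_max=3.0, T=50.0, rng=rng)
ll = poisson_log_likelihood(events, rate, T=50.0)
```

The thinning step is exact for any bounded intensity; only the integral in the likelihood is approximated numerically.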
In this work we will make use of a special property of Poisson processes, the Poisson superposition theorem, which states that $\{s_n\} \sim \mathrm{PP}(\lambda_1(s) + \ldots + \lambda_K(s))$ can be decomposed into $K$ independent Poisson processes. Letting $z_n$ denote the origin of the $n$-th event, we perform the decomposition by independently sampling each $z_n$ from $\Pr(z_n = k) \propto \lambda_k(s_n)$, for $k \in \{1, \ldots, K\}$ (Daley & Vere-Jones, 1988).
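The attribution step of the superposition theorem can be sketched directly (NumPy assumed; the two constant rate functions are illustrative):

```python
import numpy as np

def sample_origins(events, rate_fns, rng):
    """Poisson superposition: for events from PP(lambda_1 + ... + lambda_K),
    sample each origin z_n with Pr(z_n = k) proportional to lambda_k(s_n)."""
    rates = np.array([[f(s) for f in rate_fns] for s in events])  # shape (N, K)
    probs = rates / rates.sum(axis=1, keepdims=True)
    return np.array([rng.choice(len(rate_fns), p=p) for p in probs])

rng = np.random.default_rng(1)
events = np.linspace(0.1, 0.9, 9)
z = sample_origins(events, [lambda s: 1.0, lambda s: 3.0], rng)  # entries in {0, 1}
```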
2.2. Hawkes Processes.
Though Poisson processes have many nice properties, they cannot capture interactions between events. For this we turn to a more general model known as the Hawkes process. A Hawkes process consists of $K$ point processes and gives rise to sets of marked events $\{s_n, c_n\}_{n=1}^N$, where $c_n \in \{1, \ldots, K\}$ specifies the process on which the $n$-th event occurred. For now, we assume the events are points in time, i.e., $s_n \in [0, T]$. Each of the $K$ processes is a conditionally Poisson process with a rate $\lambda_k(t \mid \{s_n : s_n < t\})$ that depends on the history of events up to time $t$.
Hawkes processes have additive interactions. Each process has a "background rate" $\lambda_{0,k}(t)$, and each event $s_n$ on process $k$ adds a nonnegative impulse response $h_{k,k'}(t - s_n)$ to the intensity of other processes $k'$. Causality and locality of influence are enforced by requiring $h_{k,k'}(\Delta t)$ to be zero for $\Delta t \notin [0, \Delta t_{\max}]$. By the superposition theorem for Poisson processes, these additive components can be considered independent processes, each giving rise to their own events. We augment our data with a latent random variable $z_n \in \{0, \ldots, n-1\}$ to indicate the cause of the $n$-th event ($0$ if the event is due to the background rate and $1, \ldots, n-1$ if it was caused by a preceding event).
Let $C_{n,k}$ denote the set of events on process $k$ that were parented by event $n$. Formally, $C_{n,k} \equiv \{s_{n'} : c_{n'} = k \wedge z_{n'} = n\}$. Let $C_{0,k}$ be the set of events attributed to the background rate of process $k$. The augmented Hawkes likelihood is the product of likelihoods of each Poisson process:
$$p(\{(s_n, c_n, z_n)\}_{n=1}^N \mid \{\lambda_{0,k}(t)\}, \{\{h_{k,k'}(\Delta t)\}\}) = \prod_{k=1}^K p(C_{0,k} \mid \lambda_{0,k}(t)) \times \prod_{n=1}^N \prod_{k=1}^K p(C_{n,k} \mid h_{c_n,k}(\cdot - s_n)), \tag{2}$$
where the densities in the product are given by Equation 1. Figure 1 illustrates a causal cascade of events for a simple network of three processes (I-III). The first event is caused by the background rate ($z_1 = 0$), and it induces impulse responses on processes II and III. Event 2 is spawned by the impulse on the third process ($z_2 = 1$), and feeds back onto processes I and II. In some cases a single parent event induces multiple children, e.g., event 4 spawns events 5a-c. In this simple example, processes excite one another, but do not excite themselves. Next we will introduce more sophisticated models for such interaction networks.
2.3. Random Graph Models. Graphs of $K$ nodes correspond to $K \times K$ matrices. Unweighted graphs are binary adjacency matrices $A$ where $A_{k,k'} = 1$ indicates a directed edge from node $k$ to node $k'$. Weighted directed graphs can be represented by a real matrix $W$ whose entries indicate the weights of the edges. Random graph models reflect the probability of different network structures through distributions over these matrices.
Recently, many random graph models have been unified under an elegant theoretical framework due to Aldous and Hoover (Aldous, 1981;Hoover, 1979). See Lloyd et al. (2012) for an overview. Conceptually, the Aldous-Hoover representation characterizes the class of exchangeable random graphs, that is, graph models for which the joint probability is invariant under permutations of the node labels. Just as de Finetti's theorem equates exchangeable sequences (X n ) n∈N to independent draws from a random probability measure Θ, the Aldous-Hoover theorem relates random exchangeable graphs to the following generative model:
$$u_1, u_2, \ldots \sim \text{i.i.d. } \mathrm{Uniform}[0,1], \qquad A_{k,k'} \sim \mathrm{Bernoulli}(\Theta(u_k, u_{k'})),$$
for some random function $\Theta : [0,1]^2 \to [0,1]$.
Empty graph models ($A_{k,k'} \equiv 0$) and complete models ($A_{k,k'} \equiv 1$) are trivial examples, but much more structure may be encoded. For example, consider a model in which nodes are endowed with a location in space, $x_k \in \mathbb{R}^D$. This could be an abstract feature space or a real location like the center of a gang territory. The probability of connection between two nodes decreases with the distance between them as $A_{k,k'} \sim \mathrm{Bern}(\rho e^{-\|x_k - x_{k'}\|/\tau})$, where $\rho$ is the overall sparsity and $\tau$ is the characteristic distance scale. This simple model can be converted to the Aldous-Hoover representation by transforming $u_k$ into $x_k$ via the inverse CDF.
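A minimal sketch of this latent distance model (NumPy assumed; the Gaussian embeddings and the values rho = 0.5, tau = 1 are illustrative):

```python
import numpy as np

def sample_distance_graph(x, rho, tau, rng):
    """Latent distance model: Pr(A[k, k'] = 1) = rho * exp(-||x_k - x_k'|| / tau)."""
    dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)  # pairwise distances
    P = rho * np.exp(-dist / tau)
    A = (rng.uniform(size=P.shape) < P).astype(int)
    return A, P

rng = np.random.default_rng(3)
x = rng.normal(size=(10, 2))            # latent embeddings in R^2
A, P = sample_distance_graph(x, rho=0.5, tau=1.0, rng=rng)
```

The connection probability is maximal (equal to rho) at zero distance and decays exponentially with the separation of the latent locations.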
Many models can be constructed in this manner. Stochastic block models, latent eigenmodels, and their nonparametric extensions all fall under this class (Lloyd et al., 2012). We will leverage the generality of the Aldous-Hoover formalism to build a flexible model and inference algorithm for Hawkes processes with structured interaction networks.
3. The Network Hawkes Model. In order to combine Hawkes processes and random network models, we decompose the Hawkes impulse response $h_{k,k'}(\Delta t)$ as follows:

$$h_{k,k'}(\Delta t) = A_{k,k'} W_{k,k'}\, g_{\theta_{k,k'}}(\Delta t). \tag{3}$$
Here, $A \in \{0,1\}^{K \times K}$ is a binary adjacency matrix and $W \in \mathbb{R}_+^{K \times K}$ is a non-negative weight matrix. Together these specify the sparsity structure and strength of the interaction network, respectively. The non-negative function $g_{\theta_{k,k'}}(\Delta t)$ captures the temporal aspect of the interaction. It is parameterized by $\theta_{k,k'}$ and satisfies two properties: a) it has bounded support for $\Delta t \in [0, \Delta t_{\max}]$, and b) it integrates to one. In other words, $g$ is a probability density with compact support.
Decomposing $h$ as in Equation 3 has many advantages. It allows us to express our separate beliefs about the sparsity structure of the interaction network and the strength of the interactions through a spike-and-slab prior on $A$ and $W$ (Mohamed et al., 2012). The empty graph model recovers independent background processes, and the complete graph recovers the standard Hawkes process. Making $g$ a probability density endows $W$ with units of "expected number of events" and allows us to compare the relative strength of interactions. The form suggests an intuitive generative model: for each impulse response, draw $m \sim \mathrm{Poisson}(W_{k,k'})$ induced events and draw the $m$ child event times i.i.d. from $g$, enabling computationally tractable conjugate priors.
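The generative reading above can be sketched as a branching simulation (NumPy assumed; the uniform impulse density on [0, 1] stands in for g, and the weights are chosen with spectral radius below one so the cascade terminates):

```python
import numpy as np

def sample_hawkes(mu, W, g_sampler, T, rng):
    """Simulate a K-process Hawkes model via its branching (cluster) representation.
    mu: background rates (K,); W[k, kp]: expected children on process kp per event
    on process k; g_sampler(m): m i.i.d. delays from the normalized impulse density."""
    K = len(mu)
    queue = []
    for k in range(K):                          # background ("immigrant") events
        for s in rng.uniform(0.0, T, rng.poisson(mu[k] * T)):
            queue.append((s, k))
    events = []
    while queue:
        s, k = queue.pop()
        events.append((s, k))
        for kp in range(K):                     # children induced on each process
            m = rng.poisson(W[k, kp])
            for dt in g_sampler(m):
                if s + dt < T:
                    queue.append((s + dt, kp))
    return sorted(events)

rng = np.random.default_rng(2)
W = np.array([[0.0, 0.5], [0.2, 0.0]])          # subcritical: spectral radius < 1
events = sample_hawkes(np.array([1.0, 1.0]), W,
                       lambda m: rng.uniform(0.0, 1.0, m), T=20.0, rng=rng)
```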
Intuitively, the background rates, λ 0,k (t), explain events that cannot be attributed to preceding events. In the simplest case the background rate is constant. However, there are often fluctuations in overall intensity that are shared among the processes, and not reflective of process-to-process interaction, as we will see in the daily variations in trading volume on the S&P100 and the seasonal trends in homicide. To capture these shared background fluctuations, we use a sparse Log Gaussian Cox process (Møller et al., 1998) to model the background rate:
$$\lambda_{0,k}(t) = \mu_k + \alpha_k \exp\{y(t)\}, \qquad y(t) \sim \mathcal{GP}(0, K(t, t')).$$
The kernel $K(t, t')$ describes the covariance structure of the background rate that is shared by all processes. For example, a periodic kernel may capture seasonal or daily fluctuations. The offset $\mu_k$ accounts for varying background intensities among processes, and the scaling factor $\alpha_k$ governs how sensitive process $k$ is to these background fluctuations (when $\alpha_k = 0$ we recover the constant background rate).
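A sketch of this background model on a discrete grid (NumPy assumed; the kernel hyperparameters, the 24-unit period, and the two-day grid are illustrative):

```python
import numpy as np

def periodic_kernel(t, ell=1.0, period=24.0, var=1.0):
    """Periodic covariance K(t, t') capturing shared daily fluctuations."""
    d = np.abs(t[:, None] - t[None, :])
    return var * np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / ell ** 2)

rng = np.random.default_rng(6)
t = np.linspace(0.0, 48.0, 200)                    # two "days" on a sparse grid
K = periodic_kernel(t) + 1e-8 * np.eye(t.size)     # jitter for numerical stability
y = rng.multivariate_normal(np.zeros(t.size), K)   # y(t) ~ GP(0, K)
mu_k, alpha_k = 0.5, 1.0
lam0 = mu_k + alpha_k * np.exp(y)                  # lambda_{0,k}(t) = mu_k + alpha_k exp{y(t)}
```

Because exp{y(t)} is strictly positive, the sampled background rate always stays above the offset mu_k.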
Finally, in some cases the process identities, c n , must also be inferred. With gang incidents in Chicago we may have only a location, x n ∈ R 2 . In this case, we may place a spatial Gaussian mixture model over the c n 's, as in Cho et al. (2013). Alternatively, we may be given the label of the community in which the incident occurred, but we suspect that interactions occur between clusters of communities. In this case we can use a simple clustering model or a nonparametric model like that of Blundell et al. (2012).
3.1. Inference with Gibbs Sampling. We present a Gibbs sampling procedure for inferring the model parameters, $W$, $A$, $\{\{\theta_{k,k'}\}\}$, $\{\lambda_{0,k}(t)\}$, and, if necessary, $\{c_n\}$. In order to simplify our Gibbs updates, we will also sample a set of parent assignments for each event $\{z_n\}$. Incorporating these parent variables enables conjugate prior distributions for $W$, $\theta_{k,k'}$, and, in the case of constant background rates, $\lambda_{0,k}$.
Sampling weights W .. A gamma prior on the weights, W k,k ∼ Gamma(α 0 W , β 0 W ), results in the conditional distribution, W k,k | {s n , c n , z n } N n=1 , θ k,k ∼ Gamma(α k,k , β k,k ), α k,k = α 0 W + N n=1 N n =1 δ cn,k δ c n ,k δ z n , n β k,k = β 0 W + N n=1 δ cn,k .
This is a minor approximation valid for $\Delta t_{\max} \ll T$. Here and elsewhere, $\delta_{i,j}$ is the Kronecker delta function. We use the inverse-scale parameterization of the gamma distribution, i.e.,

$$\mathrm{Gamma}(x \mid \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)}\, x^{\alpha-1} \exp\{-\beta x\}.$$
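The sufficient-statistic counting in the weight update can be traced on a toy parent assignment (plain Python with NumPy; representing background events by z_n = -1 is a convention of this sketch, whereas the paper uses z_n = 0):

```python
import numpy as np

def weight_posterior(c, z, k, kp, alpha0, beta0):
    """Conditional Gamma(alpha, beta) parameters for W[k, kp] given parents.
    c[n]: process of event n; z[n]: parent index of event n (-1 = background)."""
    n_children = sum(1 for n, zn in enumerate(z)
                     if zn >= 0 and c[zn] == k and c[n] == kp)
    n_parents = int(np.sum(np.asarray(c) == k))
    return alpha0 + n_children, beta0 + n_parents

# Toy data: event 0 on process 0 parents events 1 and 2 on process 1.
c = [0, 1, 1, 0]
z = [-1, 0, 0, -1]
alpha, beta = weight_posterior(c, z, k=0, kp=1, alpha0=1.0, beta0=1.0)
# alpha = 1 + 2 children = 3; beta = 1 + 2 events on process 0 = 3
```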
Sampling impulse response parameters $\theta_{k,k'}$. We let $g_{k,k'}(\Delta t)$ be the logistic-normal density with parameters $\theta_{k,k'} = \{\mu, \tau\}$:

$$g_{k,k'}(\Delta t \mid \mu, \tau) = \frac{1}{Z} \exp\left\{-\frac{\tau}{2}\left(\sigma^{-1}\!\left(\frac{\Delta t}{\Delta t_{\max}}\right) - \mu\right)^{\!2}\right\},$$
$$\sigma^{-1}(x) = \ln\!\left(\frac{x}{1-x}\right), \qquad Z = \frac{\Delta t\,(\Delta t_{\max} - \Delta t)}{\Delta t_{\max}} \left(\frac{\tau}{2\pi}\right)^{-\frac{1}{2}}.$$
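The normalization of this density can be verified numerically (NumPy assumed; the values mu = 0, tau = 2, dt_max = 60 are illustrative):

```python
import numpy as np

def logit_normal_pdf(dt, mu, tau, dt_max):
    """g(dt | mu, tau): a N(mu, 1/tau) variable pushed through the logistic
    function and rescaled to (0, dt_max)."""
    x = dt / dt_max
    u = np.log(x / (1.0 - x))                                 # sigma^{-1}(x)
    Z = (dt * (dt_max - dt) / dt_max) * np.sqrt(2.0 * np.pi / tau)
    return np.exp(-0.5 * tau * (u - mu) ** 2) / Z

# Numerical check that g integrates to one over (0, dt_max).
dt_max = 60.0
grid = np.linspace(1e-6, dt_max - 1e-6, 200_001)
vals = logit_normal_pdf(grid, mu=0.0, tau=2.0, dt_max=dt_max)
integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))  # trapezoid rule
```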
The normal-gamma prior $\mu, \tau \sim \mathcal{NG}(\mu, \tau \mid \mu_\mu^0, \kappa_\mu^0, \alpha_\tau^0, \beta_\tau^0)$ yields the standard conditional distribution (see Murphy, 2012) with the following sufficient statistics:

$$x_{n,n'} = \ln(s_{n'} - s_n) - \ln(\Delta t_{\max} - (s_{n'} - s_n)),$$
$$m = \sum_{n=1}^N \sum_{n'=1}^N \delta_{c_n,k}\,\delta_{c_{n'},k'}\,\delta_{z_{n'},n}, \qquad \bar{x} = \frac{1}{m} \sum_{n=1}^N \sum_{n'=1}^N \delta_{c_n,k}\,\delta_{c_{n'},k'}\,\delta_{z_{n'},n}\, x_{n,n'}.$$
Sampling background rates $\lambda_{0,k}$. For constant background rates $\lambda_{0,k}(t) \equiv \lambda_{0,k}$, the prior $\lambda_{0,k} \sim \mathrm{Gamma}(\alpha_\lambda^0, \beta_\lambda^0)$ is conjugate with the likelihood and yields the conditional distribution

$$\lambda_{0,k} \mid \{s_n, c_n, z_n\}_{n=1}^N \sim \mathrm{Gamma}(\alpha_\lambda, \beta_\lambda), \qquad \alpha_\lambda = \alpha_\lambda^0 + \sum_n \delta_{c_n,k}\,\delta_{z_n,0}, \qquad \beta_\lambda = \beta_\lambda^0 + T.$$

This conjugacy no longer holds for Gaussian process background rates, but conditioned upon the parent variables, we must simply fit a Gaussian process to those events for which $z_n = 0$. We use elliptical slice sampling (Murray et al., 2010) for this purpose.
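The conjugate update for a constant background rate can likewise be traced on a toy assignment (plain Python with NumPy; z_n = -1 marks background events in this sketch, where the paper uses z_n = 0):

```python
import numpy as np

def background_posterior(c, z, k, T, alpha0, beta0):
    """Gamma posterior parameters for a constant background rate lambda_{0,k}:
    count events on process k attributed to the background."""
    n_bg = sum(1 for cn, zn in zip(c, z) if cn == k and zn < 0)  # -1 = background
    return alpha0 + n_bg, beta0 + T

# Toy data: two background events and one induced event on process 0.
c = [0, 0, 1, 0]
z = [-1, 0, -1, -1]
alpha, beta = background_posterior(c, z, k=0, T=10.0, alpha0=1.0, beta0=1.0)
# alpha = 1 + 2 = 3; beta = 1 + 10 = 11; posterior mean 3/11
```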
Collapsed Gibbs sampling of A and z_n. With Aldous-Hoover graph priors, the entries in the binary adjacency matrix $A$ are conditionally independent given the parameters of the prior. The likelihood introduces dependencies between the rows of $A$, but each column can be sampled in parallel. Gibbs updates are complicated by strong dependencies between the graph and the parent variables, $z_n$. Specifically, if $z_{n'} = n$, then we must have $A_{c_n,c_{n'}} = 1$. To improve the mixing of our sampling algorithm, first we update $A \mid \{s_n, c_n\}, W, \theta_{k,k'}$ by marginalizing the parent variables. The posterior is determined by the likelihood of the conditionally Poisson process $\lambda_k(t \mid \{s_n : s_n < t\})$ (Equation 1) with and without the interaction $A_{k,k'}$, and the prior comes from the Aldous-Hoover graph model. Then we update $z_n \mid \{s_n, c_n\}, A, W, \theta_{k,k'}$ by sampling from the discrete conditional distribution. Though there are $N$ parent variables, they are conditionally independent and may be sampled in parallel. We have implemented our inference algorithm on GPUs to capitalize on this parallelism.
Sampling process identities $c_n$. As with the adjacency matrix, we use a collapsed Gibbs sampler to marginalize out the parent variables when sampling the process identities. Unfortunately, the $c_n$'s are not conditionally independent and hence must be sampled sequentially. This limits the size of the datasets we can handle when the process identities are unknown, but our GPU implementation is still able to achieve upwards of 4 iterations (sampling all variables) per second on datasets with thousands of events.
4. Stability of Network Hawkes Processes. Due to their recurrent nature, Hawkes processes must be constrained to ensure their positive feedback does not lead to infinite numbers of events. A stable system must satisfy

$$\lambda_{\max} = \max \left| \mathrm{eig}(A \odot W) \right| < 1$$

(see Daley & Vere-Jones, 1988). When we are conditioning on finite datasets we do not have to worry about this. We simply place weak priors on the network parameters, e.g., a beta prior on the sparsity $\rho$ of an Erdős-Rényi graph, and a Jeffreys prior on the scale of the gamma weight distribution. For the generative model, however, we would like to set our hyperparameters such that the prior distribution places little mass on unstable networks. In order to do so, we use tools from random matrix theory.
The celebrated circular law describes the asymptotic eigenvalue distribution for $K \times K$ random matrices with entries that are i.i.d. with zero mean and variance $\sigma^2$. As $K$ grows, the eigenvalues are uniformly distributed over a disk in the complex plane centered at the origin and with radius $\sigma\sqrt{K}$. In our case, however, the mean of the entries, $\mathbb{E}[A_{k,k'} W_{k,k'}] = \mu$, is not zero. Silverstein (1994) has shown that we can analyze noncentral random matrices by considering them to be perturbations about the mean.
Consider $A \odot W = V + U$, where $V = \mu K e_K e_K^T$ is a deterministic rank-one matrix with every entry equal to $\mu$, $e_K \in \mathbb{R}^K$ is a column vector with all entries equal to $K^{-1/2}$, and $U$ is a random matrix with i.i.d. zero-mean entries. Then, as $K$ approaches infinity, the largest eigenvalue will come from $V$ and will be distributed as $\lambda_{\max} \sim \mathcal{N}(\mu K, \sigma^2)$, and the remaining eigenvalues will be uniformly distributed over the complex disc.
In the simple case of $W_{k,k'} \sim \mathrm{Gamma}(\alpha, \beta)$ and $A_{k,k'} \sim \mathrm{Bern}(\rho)$, we have $\mu = \rho\alpha/\beta$ and $\sigma = \sqrt{\rho((1-\rho)\alpha^2 + \alpha)}/\beta$. For a given $K$, $\alpha$, and $\beta$, we can tune the sparsity parameter $\rho$ to achieve stability with high probability. We simply set $\rho$ such that the minimum of $\sigma\sqrt{K}$ and, say, $\mu K + 3\sigma$, equals one. Figures 2a and 2b show a variety of weight distributions and the maximum stable $\rho$. Increasing the network size, the mean, or the variance will require a concomitant increase in sparsity.
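This tuning can be sketched numerically (NumPy assumed). The sketch takes the conservative reading that both the bulk radius sigma*sqrt(K) and the outlier bound mu*K + 3*sigma should stay at or below one; the Gamma(1, 5) weights match one of the example distributions discussed in this section:

```python
import numpy as np

def max_stable_rho(K, a, b, n_grid=100_000):
    """Largest rho keeping sigma*sqrt(K) <= 1 and mu*K + 3*sigma <= 1, for
    W ~ Gamma(a, b) in the inverse-scale parameterization and A ~ Bern(rho)."""
    rho = np.linspace(1e-6, 1.0, n_grid)
    mu = rho * a / b
    sigma = np.sqrt(rho * ((1.0 - rho) * a ** 2 + a)) / b
    stable = np.maximum(sigma * np.sqrt(K), mu * K + 3.0 * sigma) <= 1.0
    return rho[stable].max()

rho_max = max_stable_rho(K=64, a=1.0, b=5.0)

# Empirical spot check of the spectral radius at this sparsity level.
rng = np.random.default_rng(4)
A = rng.uniform(size=(64, 64)) < rho_max
W = rng.gamma(1.0, 1.0 / 5.0, size=(64, 64))   # numpy's scale = 1 / beta
lam = np.abs(np.linalg.eigvals(A * W)).max()
```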
This approach relies on asymptotic eigenvalue distributions, and it is unclear how quickly the spectra of random matrices converge to this distribution. To test this, we computed the empirical eigenvalue distribution for random matrices of various size, mean, and variance. We generated $10^4$ random matrices for each weight distribution in Figure 2a with sizes $K = 4$, $64$, and $1024$, and $\rho$ set to the theoretical maximum indicated by dots in Figure 2b. The theoretical and empirical distributions of the maximum eigenvalue are shown in Figures 2c and 2d. We find that for small mean and variance weights, for example $\mathrm{Gamma}(1,5)$ in Figure 2c, the empirical results closely match the theory. As the weights grow larger, as with $\mathrm{Gamma}(8,12)$ in Figure 2d, the empirical eigenvalue distributions have increased variance and lead to a greater than expected probability of unstable matrices for the range of network sizes tested here. We conclude that networks with strong weights should be counterbalanced by strong sparsity limits, or additional structure in the adjacency matrix that prohibits excitatory feedback loops.
5. Synthetic Results.
Our inference algorithm is first tested on synthetic data generated from the network Hawkes model. We perform two tests: a) a link prediction task where the process identities are given and the goal is to simply infer whether or not an interaction exists, and b) an event prediction task where we measure the probability of held-out event sequences. The network Hawkes model can be used for link prediction by considering the posterior probability of interactions P (A k,k | {s n , c n }). By thresholding at varying probabilities we compute a ROC curve. A standard Hawkes process assumes a complete set of interactions (A k,k ≡ 1), but we can similarly threshold its inferred weight matrix to perform link prediction.
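The thresholding step can be sketched as follows (NumPy assumed; the "posterior" probabilities here are a synthetic stand-in constructed to separate true edges from non-edges, purely to exercise the computation):

```python
import numpy as np

def roc_points(p_edge, A_true, thresholds):
    """For each threshold t, predict an edge where P(A[k,k']=1) >= t and
    report (false positive rate, true positive rate)."""
    pos, neg = A_true == 1, A_true == 0
    fpr = np.array([(p_edge >= t)[neg].mean() for t in thresholds])
    tpr = np.array([(p_edge >= t)[pos].mean() for t in thresholds])
    return fpr, tpr

rng = np.random.default_rng(5)
A_true = (rng.uniform(size=(30, 30)) < 0.2).astype(int)
p_edge = np.where(A_true == 1,
                  rng.uniform(0.6, 1.0, A_true.shape),   # edges score high
                  rng.uniform(0.0, 0.4, A_true.shape))   # non-edges score low
fpr, tpr = roc_points(p_edge, A_true, np.linspace(0.0, 1.0, 101))
# Perfectly separated scores give tpr = 1 and fpr = 0 at threshold 0.5.
```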
Cross correlation provides a simple alternative measure of interaction. By summing the cross-correlation over offsets $\Delta t \in [0, \Delta t_{\max})$, we get a measure of directed interaction. A probabilistic alternative is offered by the generalized linear model for point processes (GLM), a popular model for spiking dynamics in computational neuroscience (Paninski, 2004). The GLM allows for constant background rates and both excitatory and inhibitory interactions. Impulse responses are modeled with linear basis functions. Area under the impulse response provides a measure of directed excitatory interaction that we use to compute a ROC curve. See the supplementary material for a detailed description of this model.

[Figure 3: (a) ROC curves for link prediction. (b) Predictive log likelihood, compared to a baseline of a Poisson process with constant rate; improvement over baseline is normalized by the number of events in the test data to obtain units of "bits per spike." The network Hawkes model outperforms the competitors in all but one sample network.]
We sampled ten network Hawkes processes of 30 nodes each with Erdős-Rényi graph models, constant background rates, and the priors described in Section 3. The Hawkes processes were simulated for T = 1000 seconds. We used the models above to predict the presence or absence of interactions. The results of this experiment are shown in the ROC curves of Figure 3a. The network Hawkes model accurately identifies the sparse interactions, outperforming all other models.
With the Hawkes process and the GLM we can evaluate the log likelihood of held-out test data. On this task, the network Hawkes outperforms the competitors for 9 out of 10 networks. On average, the network Hawkes model achieves a 2.2 ± 0.1 bits/spike improvement in predictive log likelihood over a homogeneous Poisson process. Figure 3b shows that on average the standard Hawkes and the GLM provide only 60% and 72%, respectively, of this predictive power. See the supplementary material for further analysis.
6. Trades on the S&P 100. As an example of how Hawkes processes may discover interpretable latent structure in real-world data, we study the trades on the S&P 100 index collected at 1s intervals during the week of Sep. 28 through Oct. 2, 2009. Every time a stock price changes by ±0.1% of its current price, an event is logged on the stock's process, yielding a total of K = 100 processes and N = 182,037 events.
Trading volume varies substantially over the course of the day, with peaks at the opening and closing of the market. This daily variation is incorporated into the background rate via a Log Gaussian Cox Process (LGCP) with a periodic kernel (see supplementary material). We look for short-term interactions on top of this background rate with time scales of $\Delta t_{\max} = 60$s. In Figure 4 we compare the predictive performance of independent LGCPs, a standard Hawkes process with LGCP background rates, and the network Hawkes model with LGCP background rates under two graph priors. The models are trained on four days of data and tested on the fifth. Though the network Hawkes is slightly outperformed by the standard Hawkes, the difference is small relative to the performance improvement from considering interactions, and the inferred network parameters provide interpretable insight into the market structure. In the latent distance model for $A$, each stock has a latent embedding $x_k \in \mathbb{R}^2$ such that nearby stocks are more likely to interact, as described in Section 2.3. Figure 5 shows a sample from the posterior distribution over embeddings in $\mathbb{R}^2$ for $\rho = 0.2$ and $\tau = 1$. We have plotted stocks in the six largest sectors, as listed on Bloomberg.com. Some sectors, notably energy and financials, tend to cluster together, indicating an increased probability of interaction between stocks in the same sector. Other sectors, such as consumer goods, are broadly distributed, suggesting that these stocks are less influenced by others in their sector. For the consumer industry, which is driven by slowly varying factors like inventory, this may not be surprising.
The Hinton diagram in the bottom panel of Figure 5 shows the top 4 eigenvectors of the interaction network. All eigenvalues are less than 1, indicating that the system is stable. The top row corresponds to the first eigenvector ($\lambda_{\max} = 0.74$). Apple (AAPL), J.P. Morgan (JPM), and Exxon Mobil (XOM) have notably large entries in the eigenvector, suggesting that their activity will spawn cascades of self-excitation. The fourth eigenvector ($\lambda_4 = 0.34$) is dominated by Walgreens (WAG) and CVS (CVS), suggesting bursts of activity in these drug stores, perhaps due to encouraging quarterly reports during flu season (Associated Press, 2012).
7. Gangs of Chicago. In our final example, we study spatiotemporal patterns of gang-related homicide in Chicago. Sociologists have suggested that gang-related homicide is mediated by underlying social networks and occurs in mutually-exciting, retaliatory patterns (Papachristos, 2009). This is consistent with a spatiotemporal Hawkes process in which processes correspond to gang territories and homicides incite further homicides in rival territories.
We study gang-related homicides between 1980 and 1995 (Block et al., 2005). Homicides are labeled by the community in which they occurred. Over this time-frame there were N = 1637 gang-related homicides in the 77 communities of Chicago. We evaluate our model with an event-prediction task, training on 1980-1993 and testing on 1994-1995. We use a Log Gaussian Cox Process (LGCP) temporal background rate in all model variations. Our baseline is a single process with a uniform spatial rate for the city. We test two process identity models: a) the "community" model, which considers each community a separate process, and b) the "cluster" model, which groups communities into processes. The community process identity model improves predictive performance by accounting for higher rates in South and West Chicago where gangs are deeply entrenched. Allowing for interactions between community areas, however, results in a decrease in predictive power due to overfitting (there is insufficient data to fit all $77^2$ potential interactions). Interestingly, sparse graph priors do not help. They bias the model toward sparser but stronger interactions which are not supported by the test data. These results are shown in the "communities" group of Figure 6a. Clustering the communities improves predictive performance for all graph models, as seen in the "clusters" group. Moreover, the clustered models benefit from the inclusion of excitatory interactions, with the highest predictive log likelihoods coming from a four-cluster Erdős-Rényi graph model with interactions shown in Figure 6b. Distance-dependent graph priors do not improve predictive performance on this dataset, suggesting either that interactions do not occur over short distances, or that local rivalries are not substantial enough to be discovered in our dataset. More data is necessary to conclusively say which.
Looking into the inferred clusters in Figure 6c and their rates in Figure 6d, we can interpret the clusters as "safe suburbs" in gold, "buffer neighborhoods" in green, and "gang territories" in red and blue. Self-excitation in the blue cluster (Figure 6b) suggests that these regions are prone to bursts of activity, as one might expect during a turf war. This interpretation is supported by reports of "a burst of street-gang violence in 1990 and 1991" in West Englewood (41.77°N, −87.67°W) (Block & Block, 1993). Figure 6d also shows a significant increase in the homicide rate between 1989 and 1995, consistent with reports of escalating gang warfare (Block & Block, 1993). In addition to this long-term trend, homicide rates show a pronounced seasonal effect, peaking in the summer and tapering in the winter. A LGCP with a quadratic kernel point-wise added to a periodic kernel captures both effects.
8. Related Work. Multivariate point processes are of great interest to the machine learning community as they are intuitive models for a variety of natural phenomena. We have leveraged previous work on Poisson processes with Gaussian process intensities in our background rate models (Cunningham et al., 2007). An expectation-maximization inference algorithm for Hawkes processes was put forth by Simma & Jordan (2010) and applied to very large social network datasets. We have adapted their latent variable formulation in our fully-Bayesian inference algorithm and introduced a framework for prior distributions over the latent network.
Others have considered special cases of the model we have proposed. Blundell et al. (2012) combine Hawkes processes and the Infinite Relational Model (a specific exchangeable graph model with an Aldous-Hoover representation) to cluster processes and discover interactions. Cho et al. (2013) applied Hawkes processes to gang incidents in Los Angeles. They developed a spatial Gaussian mixture model (GMM) for process identities, but did not explore structured network priors. We experimented with this process identity model but found that it suffers in predictive log likelihood tests (see supplementary material).
Recently, Iwata et al. (2013) developed a stochastic EM algorithm for Hawkes processes, leveraging similar conjugacy properties, but without network priors. Zhou et al. (2013) have developed a promising optimization-based approach to discovering low-rank networks in Hawkes processes, similar to some of the network models we explored.
Perhaps the most closely related work is that of Perry & Wolfe (2013). They provide a partial likelihood inference algorithm for Hawkes processes with a similar emphasis on structural patterns in the network of interactions. They provide an estimator capable of discovering homophily (the tendency for similar processes to interact) and other network effects. Our fully-Bayesian approach generalizes this method to capitalize on recent developments in random network models (Lloyd et al., 2012) and allows for nonparametric background rates.
Finally, generalized linear models (GLMs) are widely used in computational neuroscience (Paninski, 2004). GLMs allow for both excitatory and inhibitory interactions, but, as we have shown, when the data consists of purely excitatory interactions, Hawkes processes outperform GLMs in link-and event-prediction tests.
9. Conclusion.
We developed a framework for discovering latent network structure from spiking data. Our auxiliary variable formulation of the multivariate Hawkes process supported arbitrary Aldous-Hoover graph priors, Log Gaussian Cox Process background rates, and models of unobserved process identities. Our parallel MCMC algorithm allowed us to reason about uncertainty in the latent network in a fully-Bayesian manner, taking into account noisy observations and prior beliefs. We leveraged results from random matrix theory to analyze the conditions under which random network models will be stable, and our applications uncovered interpretable latent networks in a variety of synthetic and real-world problems. Generalizing beyond the Hawkes observation model is a promising avenue for future work.
APPENDIX A: INFERENCE DETAILS
A.1. Derivation of conjugate prior updates. By combining Equations 1 and 2 of the main text, we can write the joint likelihood, with the auxiliary parent variables, as,
$$
p(\{s_n, c_n, z_n\}_{n=1}^{N} \mid \{\lambda_{0,k}(t)\}_{k=1}^{K}, \{h_{k,k'}(\Delta t)\}_{k,k'})
= \prod_{k=1}^{K} \left[ \exp\left\{ -\int_0^T \lambda_{0,k}(\tau)\, d\tau \right\} \prod_{n=1}^{N} \lambda_{0,k}(s_n)^{\delta_{c_n,k}\,\delta_{z_n,0}} \right]
\times \prod_{n=1}^{N} \prod_{k'=1}^{K} \left[ \exp\left\{ -\int_{s_n}^{T} h_{c_n,k'}(\tau - s_n)\, d\tau \right\} \prod_{n'=1}^{N} h_{c_n,c_{n'}}(s_{n'} - s_n)^{\delta_{c_{n'},k'}\,\delta_{z_{n'},n}} \right].
$$
The first factor corresponds to the likelihood of the background processes; the second corresponds to the likelihood of the induced processes triggered by each spike.
To derive the updates for the weights, recall from Equation 3 of the main text that $W_{k,k'}$ only appears in the impulse responses for which $c_n = k$ and $c_{n'} = k'$, so we have,
$$
p(W_{k,k'} \mid \{s_n, c_n, z_n\}_{n=1}^{N}, \ldots)
\propto \prod_{n=1}^{N} \left[ \exp\left\{ -\int_{s_n}^{T} h_{k,k'}(\tau - s_n)\, d\tau \right\} \prod_{n'=1}^{N} h_{k,k'}(s_{n'} - s_n)^{\delta_{c_{n'},k'}\,\delta_{z_{n'},n}} \right]^{\delta_{c_n,k}} \times p(W_{k,k'})
$$
$$
= \prod_{n=1}^{N} \left[ \exp\left\{ -\int_{s_n}^{T} A_{k,k'} W_{k,k'}\, g_{k,k'}(\tau - s_n)\, d\tau \right\} \prod_{n'=1}^{N} \left( A_{k,k'} W_{k,k'}\, g_{k,k'}(s_{n'} - s_n) \right)^{\delta_{c_{n'},k'}\,\delta_{z_{n'},n}} \right]^{\delta_{c_n,k}} \times p(W_{k,k'}).
$$
If $A_{k,k'} = 1$ and we ignore spikes after $T - \Delta t_{\max}$, this is approximately proportional to
$$
\exp\{-W_{k,k'} N_k\}\, W_{k,k'}^{N_{k,k'}}\, p(W_{k,k'}),
$$
where
$$
N_k = \sum_{n=1}^{N} \delta_{c_n,k}, \qquad N_{k,k'} = \sum_{n=1}^{N} \sum_{n'=1}^{N} \delta_{c_n,k}\, \delta_{c_{n'},k'}\, \delta_{z_{n'},n}.
$$
When $p(W_{k,k'})$ is a gamma distribution, the conditional distribution is also gamma. If $A_{k,k'} = 0$, the conditional distribution reduces to the prior, as expected.
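The resulting Gibbs step is simple to implement. The sketch below is illustrative only: the function name and the prior's shape/rate values are our own, and the gamma distribution is taken in its shape/rate parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def resample_weight(alpha0, beta0, N_k, N_kk, A_kk):
    """Gibbs update for W_{k,k'} under a Gamma(alpha0, beta0) prior
    (shape/rate parameterization).

    N_k  : number of events on process k (each contributes -W to the exponent)
    N_kk : number of events on k' whose parent is an event on k
    A_kk : binary adjacency entry; if 0 the likelihood is flat in W,
           so the conditional reduces to the prior.
    """
    if A_kk == 0:
        return rng.gamma(alpha0, 1.0 / beta0)
    # exp(-W N_k) * W^{N_kk} * Gamma(W; alpha0, beta0)
    #   ∝ W^{alpha0 + N_kk - 1} exp(-(beta0 + N_k) W)
    return rng.gamma(alpha0 + N_kk, 1.0 / (beta0 + N_k))

w = resample_weight(alpha0=2.0, beta0=5.0, N_k=120, N_kk=30, A_kk=1)
```

The posterior mean, (alpha0 + N_kk) / (beta0 + N_k), grows with the number of attributed child events and shrinks with the number of opportunities to excite.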
Similar conjugate updates can be derived for constant background rates and the impulse response parameters, as stated in the main text.
A.2. Log Gaussian Cox Process background rates. In the Trades on the S&P100 and the Gangs of Chicago datasets, it was crucial to model the background fluctuations that were shared among all processes. However, if the background rate is allowed to vary at time scales shorter than ∆t max then it may obscure interactions between processes. To prevent this, we sample the Log Gaussian Cox Process (LGCP) at a sparse grid of M + 1 equally spaced points and linearly interpolate to evaluate the background rate at the exact time of each event. We have,
$$
\mathbf{y} \equiv \left[ \hat{y}\!\left( \tfrac{mT}{M} \right) \right]_{m=0}^{M} \sim \mathcal{GP}\left(0, K(t, t')\right).
$$
Then,
$$
\left[ \lambda_{0,k}\!\left( \tfrac{mT}{M} \right) \right]_{m=0}^{M} = \mu_k + \alpha_k \exp\left\{ \hat{y}\!\left( \tfrac{mT}{M} \right) \right\},
$$
and λ 0,k (s n ) is linearly interpolated between the rate at surrounding grid points. The equally spaced grid allows us to calculate the integral using the trapezoid quadrature rule. We use Elliptical Slice Sampling (Murray et al., 2010) to sample the conditional distribution of the vector y.
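As a rough illustration of the interpolation and quadrature described above, the following sketch uses a stand-in for the GP draw (a sinusoid plus noise rather than an actual elliptical-slice sample) and hypothetical values for T, M, mu_k, and alpha_k:

```python
import numpy as np

T, M = 100.0, 50                        # observation window and grid size (assumed)
grid = np.linspace(0.0, T, M + 1)       # M + 1 equally spaced knots

# Stand-in for the GP draw y-hat at the knots; in the model this would come
# from elliptical slice sampling of the LGCP.
rng = np.random.default_rng(1)
y_hat = 0.3 * np.sin(2 * np.pi * grid / T) + 0.05 * rng.standard_normal(M + 1)

mu_k, alpha_k = 1.0, 0.5                # offset and scale (assumed values)
lam_grid = mu_k + alpha_k * np.exp(y_hat)

# Background rate at exact event times by linear interpolation between knots
event_times = np.array([3.7, 42.1, 87.9])
lam_events = np.interp(event_times, grid, lam_grid)

# Compensator integral over [0, T] via the trapezoid rule on the grid
integral = np.sum(0.5 * (lam_grid[:-1] + lam_grid[1:]) * np.diff(grid))
```

The equally spaced grid is what makes the trapezoid quadrature of the compensator integral a cheap vectorized operation.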
Kernel parameters are set empirically or with prior knowledge. For example, the period of the kernel is set to one day for the S&P100 dataset and one year for the Gangs of Chicago dataset since these are well-known trends. The scale and offset parameters have log Normal priors set such that the maximum and minimum homogeneous event counts in the training data are within two standard deviations of the expected value under the LGCP background rate. That is, the background rate should be able to explain all of the data without any observations if there is no evidence for interactions.
A.3. Priors on hyperparameters. When possible, we sample the parameters of the prior distributions. For example, in the Erdős-Renyi graph model we place a Beta(1, 1) prior on the sparsity ρ. For the latent distance model, we place a log normal prior on the characteristic length scale τ and sample it using Hamiltonian Monte Carlo.
For all of the results in this paper, we fixed the prior on the interaction kernel $g(\Delta t)$ to a weak Normal-Gamma distribution with parameters $\mu^0_\mu = -1.0$, $\kappa^0_\mu = 10$, $\alpha^0_\tau = 10$, and $\beta^0_\tau = 1$.

Scale of gamma prior on weights. For real data, we place an uninformative prior on the weight distribution. The gamma distribution is parameterized by a shape $\alpha^0_W$ and an inverse scale, or rate, $\beta^0_W$. The shape parameter $\alpha^0_W$ is chosen by hand (typically we use $\alpha^0_W = 2$), but the inverse scale parameter $\beta^0_W$ is sampled. We may not know a proper scale a priori; however, we can use a scale-invariant Jeffreys prior to infer this parameter as well. The Jeffreys prior is proportional to the square root of the Fisher information, which for the gamma distribution is
$$
\Pr(\beta^0_W) \propto \sqrt{I(\beta^0_W)} = \frac{\sqrt{\alpha^0_W}}{\beta^0_W}.
$$
Hence the posterior is
$$
\Pr(\beta^0_W \mid \{W_{k,k'}\})
\propto \frac{\sqrt{\alpha^0_W}}{\beta^0_W} \prod_{k=1}^{K} \prod_{k'=1}^{K} \frac{(\beta^0_W)^{\alpha^0_W}}{\Gamma(\alpha^0_W)}\, W_{k,k'}^{\alpha^0_W - 1}\, e^{-\beta^0_W W_{k,k'}}
\propto (\beta^0_W)^{K^2 \alpha^0_W - 1} \exp\left\{ -\beta^0_W \sum_{k=1}^{K} \sum_{k'=1}^{K} W_{k,k'} \right\}.
$$
This is a gamma distribution with parameters,
$$
\beta^0_W \sim \text{Gamma}\!\left( K^2 \alpha^0_W,\; \sum_{k=1}^{K} \sum_{k'=1}^{K} W_{k,k'} \right).
$$
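This hyperparameter update is a one-line Gibbs step. The sketch below is illustrative, with a synthetic weight matrix and an assumed shape of 2; it again uses the shape/rate parameterization of the gamma distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

def resample_beta0(alpha0_W, W):
    """Sample the gamma rate hyperparameter under Jeffreys' prior.

    Posterior: beta0_W ~ Gamma(K^2 * alpha0_W, sum of all weights),
    in the shape/rate parameterization, where W is the K x K weight matrix.
    """
    K = W.shape[0]
    return rng.gamma(K * K * alpha0_W, 1.0 / W.sum())

W = rng.gamma(2.0, 0.5, size=(10, 10))  # synthetic weight matrix
beta0 = resample_beta0(2.0, W)
```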
APPENDIX B: SYNTHETIC TEST DETAILS
We generated T = 1000s of events for each synthetic network. The average number of spikes was 25,732 ± 9,425. Network 6, the only network for which the GLM outperformed the network Hawkes model in the event-prediction test, was an outlier with 44,973 events. For event prediction, we trained on the first 900 seconds and tested on the last 100 seconds of the data. We ran our Markov chain for 2500 iterations and computed the posterior probabilities of A and W using the last 500 samples.
A simple alternative to the Hawkes model is to look at cross-correlation between the event times. First, the event times are binned into an array $\hat{s}_k$ of length $M$. Let $(\hat{s}_k \star \hat{s}_{k'})[m]$ be the cross-correlation between $\hat{s}_k$ and $\hat{s}_{k'}$ at discrete time lag $m$. Then,
$$
W_{k,k'} = \sum_{m=0}^{\Delta t_{\max} M / T} (\hat{s}_k \star \hat{s}_{k'})[m]
$$
provides a simple measure of directed, excitatory interaction that can be thresholded to perform link prediction.
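A minimal implementation of this baseline might look as follows; the function name and the exact windowing convention are our own, and we sum the correlation over non-negative lags up to the stated maximum:

```python
import numpy as np

def xcorr_weights(S, max_lag):
    """Simple cross-correlation baseline for link prediction.

    S       : (K, M) array of binned event counts, one row per process.
    max_lag : number of discrete lags to sum over (Delta t_max * M / T).
    Returns a (K, K) score matrix; entry [k, k'] sums the correlation of
    process k' at non-negative lags after process k.
    """
    K, M = S.shape
    W = np.zeros((K, K))
    for k in range(K):
        for kp in range(K):
            # full cross-correlation; index M-1 corresponds to zero lag,
            # larger indices to process k' lagging behind process k
            c = np.correlate(S[kp], S[k], mode="full")
            W[k, kp] = c[M - 1 : M - 1 + max_lag + 1].sum()
    return W
```

Thresholding the entries of the returned matrix then yields a predicted adjacency matrix.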
Additionally, we compare the network Hawkes process to the generalized linear model for point processes, a popular model in computational neuroscience (Paninski, 2004). Here, the event counts are modeled as $\hat{s}_{k,m} \sim \text{Poisson}(\lambda_{k,m})$. The mean depends on external covariates and other events according to
$$
\lambda_{k,m} = \exp\left\{ \alpha_k^{\mathsf{T}} y_m + \sum_{k'=1}^{K} \sum_{b=1}^{B} \beta_{k,k',b}\, (g_b * \hat{s}_{k'})[m] \right\},
$$
where $y_m$ is an external covariate at time $m$, $\{g_b(\Delta m)\}_{b=1}^{B}$ are a set of basis functions that model impulse responses, and $\alpha$ and $\beta$ are parameters to be inferred. Under this formulation the log-likelihood of the events is a concave function of the parameters and is easily maximized. Unlike the Hawkes process, however, this model allows for inhibitory interactions.
For link prediction, $\sum_b \beta_{k,k',b}$ provides a measure of directed excitatory interaction that can be used to compute an ROC curve. In our comparisons, we used $y_m \equiv 1$ to allow for time-homogeneous background activity and set $\{g_b(\Delta m)\}$ to the top $B = 6$ principal components of a set of logistic normal impulse responses randomly sampled from the Hawkes prior.
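The GLM rate computation can be sketched as below. The box-filter bases and random parameters here are stand-ins for the principal-component bases and fitted coefficients described above, and the (concave) maximum-likelihood fitting step is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
K, M, B = 3, 200, 2

S = rng.poisson(0.1, size=(K, M))            # binned event counts
y = np.ones(M)                                # homogeneous covariate, y_m = 1

# Basis functions for the impulse responses (illustrative box filters)
basis = [np.ones(5) / 5.0, np.ones(10) / 10.0]

# Convolve each process's counts with each basis function; shift by one bin
# so that only strictly past events enter the rate (causality).
F = np.zeros((K, B, M))
for k in range(K):
    for b in range(B):
        conv = np.convolve(S[k], basis[b])[:M]
        F[k, b, 1:] = conv[:-1]

alpha = rng.normal(0, 0.1, size=K)            # background weights, one per process
beta = rng.normal(0, 0.1, size=(K, K, B))     # interaction weights

# lambda_{k,m} = exp( alpha_k * y_m + sum_{k',b} beta_{k,k',b} F[k',b,m] )
lam = np.exp(alpha[:, None] * y[None, :] +
             np.einsum("kjb,jbm->km", beta, F))
```

Maximizing the Poisson log-likelihood of the counts under this rate is a standard concave optimization over alpha and beta.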
We used an L1 penalty to promote sparsity in the parameters of the GLM, and chose the penalty using cross-validation on the last 100 seconds of the training data. Figure 3b of the main text shows the predictive log likelihoods for the Hawkes model with the correct Erdős-Renyi prior, the standard Hawkes model with a complete graph of interactions, and a GLM. On all but network 6, the network Hawkes model outperforms the competing models in terms of predictive log likelihood. Figure 7 shows the average predictive performance across sampled networks. The standard Hawkes and the GLM provide only 59.2% and 71.6%, respectively, of this predictive power.
APPENDIX C: TRADES ON THE S&P100 MODEL DETAILS
We study the trades on the S&P 100 index collected at 1s intervals during the week of Sep. 28 through Oct. 2, 2009. We group both positive and negative changes in price into the same process in order to measure overall activity. Another alternative would be to generate an "uptick" and a "downtick" process for each stock. We ignored trades outside regular trading hours because they tend to be outliers with widely varying prices. Since we are interested in short term interactions, we chose ∆t max = 60s. This also limits the number of potential event parents. If we were interested in interactions over longer durations, we would have to threshold the price changes at a higher level. We precluded self-excitation for this dataset since upticks are often followed by downticks and vice-versa. We are seeking to explain these brief price jumps using the activity of other stocks.
We run our Markov chain for 2000 iterations and compute predictive log likelihoods and the eigenvalues of the expected interaction matrix, E[A ⊙ W], using the last 400 iterations of the chain. The posterior sample illustrated in the main text is the last sample of the chain.
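For an impulse response normalized to integrate to one, the stability analysis reduces to checking that the spectral radius of the expected interaction matrix is below one. A toy check under an assumed Erdős-Rényi/gamma parameterization (the sparsity and mean-weight values below are made up):

```python
import numpy as np

K = 100
rho, mu_W = 0.02, 0.3   # assumed sparsity and mean weight (illustrative values)

# Expected interaction matrix under an Erdős-Rényi graph with i.i.d. gamma
# weights; with g normalized to integrate to one, E[A ⊙ W] has entries rho * mu_W.
EAW = rho * mu_W * np.ones((K, K))

spectral_radius = np.max(np.abs(np.linalg.eigvals(EAW)))
stable = spectral_radius < 1.0   # the process is stable iff this holds
```

For this rank-one expectation the radius is simply K * rho * mu_W, so sparsity must scale like 1/K for stability at fixed mean weight.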
Trading volume varies substantially over the course of the day, with peaks at the opening and closing of the market. This daily variation is incorporated into the background rate via a Log Gaussian Cox Process with a periodic kernel. We set the period to one day. Figure 8 shows the posterior distribution over the background rate.
Though it is not discussed in the main text, we also considered stochastic block model (SBM) priors (Hoff, 2008), in hopes of recovering latent sector affiliations based on patterns of interaction between sectors. For example, stocks in the financial sector may have a 90% probability of interacting with one another, and a 30% probability of interacting with stocks in the energy sector. Rather than trying to interpret this from the embedding of a latent distance model, we can capture this belief explicitly with a stochastic block model prior on connectivity. We suppose there are $J$ sectors; the probability of belonging to a given sector is $\alpha \in [0,1]^J$ with $\alpha \sim \text{Dirichlet}(\alpha_0)$. The latent sector assignments are represented by the vector $b \in \{1, \ldots, J\}^K$, where $b_k \sim \text{Cat}(\alpha)$. The probability of a directed interaction is $\Pr(A_{k,k'} = 1) = B_{b_k, b_{k'}}$, where $B$ is a $J \times J$ matrix of Bernoulli probabilities. We place a beta prior on the entries of $B$.
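The generative process for the SBM adjacency matrix can be sketched directly; the sizes and hyperparameters below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)
K, J = 20, 3                                   # processes and latent sectors

alpha = rng.dirichlet(np.ones(J))              # sector membership probabilities
b = rng.choice(J, size=K, p=alpha)             # sector assignment per process
Bmat = rng.beta(1.0, 1.0, size=(J, J))         # block connection probabilities

# Directed adjacency: Pr(A[k, k'] = 1) = Bmat[b[k], b[k']]
P = Bmat[b][:, b]                              # (K, K) matrix of edge probabilities
A = (rng.random((K, K)) < P).astype(int)
```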
Our experiments with the SBM prior yield comparable predictive performance to the latent distance prior, as shown in Figure 9. The inferred clusters (not shown) are correlated with the clusters identified by Bloomberg.com, but more analysis is needed. It would also be interesting to study the difference in inferred interactions under the various graph models; this is left for future work.
Fig 9: Comparison of financial models on an event prediction task, relative to a homogeneous Poisson process baseline. LGCP: 0.579 ± 0.006; Std. Hawkes: 0.903 ± 0.003; Net. Hawkes (Erdős-Renyi): 0.893 ± 0.003; Net. Hawkes (Latent Distance): 0.879 ± 0.004; Net. Hawkes (SBM): 0.882 ± 0.004.
APPENDIX D: GANGS OF CHICAGO MODEL DETAILS
The first 12 years are used for training, 1993 is reserved for cross-validation, and the remaining two years are used to test the predictive power of the models. We also considered the crime dataset from www.data.cityofchicago.org, but this does not identify gang-related incidents.
We run our Markov chain for 700 iterations and use the last 200 iterations to compute predictive likelihoods and expectations. The posterior sample illustrated in the figure in main text is the last sample of the chain. Since this is a spatiotemporal dataset, our intensities are functions of both spatial location and time. For simplicity we factorize the intensity into λ k,x (x)λ k,t (t), where λ k,t (t) is a Gaussian process as described above, and λ k,x (x) is uniformly distributed over the spatial region associated with process k and is normalized such that it integrates to 1.
In the case of the latent distance model with the community process model, each community's location is fixed to its center of mass. With the cluster process model, we introduce a latent location for each cluster and use a Gaussian distribution for the prior probability that a community belongs to a cluster. This encourages spatially localized clusters. Figure 10 shows the cross validation results used to select the number of clusters, K, in the clustered process identity model and each of the four graph models. For the empty, complete, and Erdös-Renyi graph priors, we discover K = 15, 4, and 4 clusters respectively. The latent distance model, with its prior for spatially localized clusters, has its best performance for K = 5 clusters.
The spatial GMM process ID model from Cho et al. (2013) fails on this dataset because it assigns its spatial intensity over all of R², whereas the clustering model concentrates the rate on only the communities in which the data resides. Figure 11 shows the results of this spatial process ID model on the prediction task.

Figure 11: Comparison of predictive log likelihoods for Chicago homicides. This is the same as Figure 6a of the main text, but also includes the spatial GMM process identity model.
Fig 1: Illustration of a Hawkes process. Events induce impulse responses on connected processes and spawn "child" events. See the main text for a complete description.
Fig 2: Empirical and theoretical distribution of the maximum eigenvalue for Erdős-Renyi graphs with gamma weights. (a) Four gamma weight distributions. The colors correspond to the curves in the remaining panels. (b) Sparsity that theoretically yields 99% probability of stability as a function of p(W) and K. (c) and (d) Theoretical (solid) and empirical (dots) distribution of the maximum eigenvalue. Color corresponds to the weight distribution in (a) and intensity indicates K and ρ shown in (b).
Fig 3: (a) Comparison of models on a link prediction test averaged across ten randomly sampled synthetic networks of 30 nodes each. The network Hawkes model with the correct Erdős-Renyi graph prior outperforms a standard Hawkes model, GLM, and simple thresholding of the cross-correlation matrix. (b) Comparison of predictive log likelihoods for the same set of networks as in (a).
Fig 5: Top: A sample from the posterior distribution over embeddings of stocks from the six largest sectors of the S&P100 under a latent distance graph model with two latent dimensions. Scale bar: the characteristic length scale of the latent distance model. The latent embedding tends to embed stocks such that they are nearby to, and hence more likely to interact with, others in their sector. Bottom: Hinton diagram of the top 4 eigenvectors. Size indicates the magnitude of each stock's component in the eigenvector and colors denote sectors as in the top panel, with the addition of Materials (aqua), Utilities (orange), and Telecomm (gray). We show the eigenvectors corresponding to the four largest eigenvalues, λ_max = 0.74 (top row) to λ_4 = 0.34 (bottom row).

The number of clusters is chosen by cross-validation (see supplementary material). For each process identity model, we compare four graph models: a) independent LGCPs (empty), b) a standard Hawkes process with all possible interactions (complete), c) a network Hawkes model with a sparsity-inducing Erdős-Renyi graph prior, and d) a network Hawkes model with a latent distance model that prefers short-range interactions.
Fig 6: Inferred interactions among clusters of community areas in the city of Chicago. (a) Predictive log likelihood for "communities" and "clusters" process identity models and four graph models. Panels (b-d) present results for the model with the highest predictive log likelihood: an Erdős-Renyi graph with K = 4 clusters. (b) The weighted interaction network in units of induced homicides over the training period (1980-1993). (c) Inferred clustering of the 77 community areas. (d) The intensity for each cluster, broken down into the offset, the shared background rate, and the interactions (units of 10⁻³ homicides per day per square kilometer).
Fig 8: Posterior distribution over shared background rates for the S&P100. Shading indicates two standard deviations from the mean.
Fig 10: Cross-validation results for Chicago models with K clusters for each of the four graph models.
Fig 7: Relative improvement in predictive log likelihood over a homogeneous Poisson process baseline. Relative to the network Hawkes, the standard Hawkes and the GLM yield significantly less predictive power.

Model             Relative prediction improvement
Network Hawkes    100%
Standard Hawkes   59.2 ± 14.2%
GLM               71.6 ± 9.2%
In this context λ_max refers to an eigenvalue rather than a rate, and ⊙ denotes the Hadamard product.
Acknowledgements. The authors wish to thank Leslie Valiant for many valuable discussions. SWL is supported by a National Defense Science and Engineering Graduate Fellowship.
Aldous, David J. Representations for partially exchangeable arrays of random variables. Journal of Multivariate Analysis, 11(4):581-598, 1981.

Associated Press. Walgreen beats expectations on higher pharmacy sales. The New York Times, September 2012.

Block, Carolyn R and Block, Richard. Street gang crime in Chicago. US Department of Justice, Office of Justice Programs, National Institute of Justice, 1993.

Block, Carolyn R, Block, Richard, and Illinois Criminal Justice Information Authority. Homicides in Chicago, 1965-1995. ICPSR06399-v5. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], July 2005.

Blundell, Charles, Heller, Katherine, and Beck, Jeffrey. Modelling reciprocating relationships with Hawkes processes. Advances in Neural Information Processing Systems, 2012.

Cho, Yoon Sik, Galstyan, Aram, Brantingham, Jeff, and Tita, George. Latent point process models for spatial-temporal networks. arXiv:1302.2671, 2013.

Cunningham, John P, Yu, Byron M, Sahani, Maneesh, and Shenoy, Krishna V. Inferring neural firing rates from spike trains using Gaussian processes. In Advances in Neural Information Processing Systems, pp. 329-336, 2007.

Daley, Daryl J and Vere-Jones, David. An introduction to the theory of point processes. 1988.

Goldenberg, Anna, Zheng, Alice X, Fienberg, Stephen E, and Airoldi, Edoardo M. A survey of statistical network models. Foundations and Trends in Machine Learning, 2(2):129-233, 2010.

Hawkes, Alan G. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83, 1971.

Hoff, Peter D. Modeling homophily and stochastic equivalence in symmetric relational data. Advances in Neural Information Processing Systems 20, 20:1-8, 2008.

Hoover, Douglas N. Relations on probability spaces and arrays of random variables. Technical report, Institute for Advanced Study, Princeton, 1979.

Iwata, Tomoharu, Shah, Amar, and Ghahramani, Zoubin. Discovering latent influence in online social activities via shared cascade Poisson processes. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 266-274. ACM, 2013.

Liben-Nowell, David and Kleinberg, Jon. The link-prediction problem for social networks. Journal of the American Society for Information Science and Technology, 58(7):1019-1031, 2007.

Lloyd, James Robert, Orbanz, Peter, Ghahramani, Zoubin, and Roy, Daniel M. Random function priors for exchangeable arrays with applications to graphs and relational data. Advances in Neural Information Processing Systems, 2012.

Mohamed, Shakir, Ghahramani, Zoubin, and Heller, Katherine A. Bayesian and L1 approaches for sparse unsupervised learning. In Proceedings of the 29th International Conference on Machine Learning, pp. 751-758, 2012.

Møller, Jesper, Syversveen, Anne Randi, and Waagepetersen, Rasmus Plenge. Log Gaussian Cox processes. Scandinavian Journal of Statistics, 25(3):451-482, 1998.

Murphy, Kevin P. Machine learning: a probabilistic perspective. The MIT Press, 2012.

Murray, Iain, Adams, Ryan P., and MacKay, David J.C. Elliptical slice sampling. Journal of Machine Learning Research: Workshop and Conference Proceedings (AISTATS), 9:541-548, 2010.

Paninski, Liam. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15(4):243-262, January 2004.

Papachristos, Andrew V. Murder by structure: Dominance relations and the social structure of gang homicide. American Journal of Sociology, 115(1):74-128, 2009.

Perry, Patrick O and Wolfe, Patrick J. Point process modelling for directed interaction networks. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2013.

Silverstein, Jack W. The spectral radii and norms of large dimensional non-central random matrices. Stochastic Models, 10(3):525-532, 1994.

Simma, Aleksandr and Jordan, Michael I. Modeling events with cascades of Poisson processes. Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence (UAI), 2010.

Zhou, Ke, Zha, Hongyuan, and Song, Le. Learning social infectivity in sparse low-rank networks using multi-dimensional Hawkes processes. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 16, 2013.
Causal-Discovery Performance of ChatGPT in the context of Neuropathic Pain Diagnosis

Ruibo Tu ([email protected]), KTH Royal Institute of Technology
Chao Ma, Microsoft Research
Cheng Zhang ([email protected]), Microsoft Research
Introduction.
ChatGPT [3] has demonstrated exceptional proficiency in natural language conversation, e.g., it can answer a wide range of questions while no previous large language models can. Thus, we would like to push its limit and explore its ability to answer causal discovery questions by using a medical benchmark [5] in causal discovery.
Causal discovery aims to uncover the underlying unknown causal relationships based purely on observational data [2]. In contrast, applying ChatGPT to answer questions about causal relationships is fundamentally different. With the current version of ChatGPT, we can only use the names (meta information) of variables, instead of observational data, to answer causal questions. The answers to the causal questions given by ChatGPT are based on a trained large language model, which can be viewed as an approximation of the existing knowledge in the natural-language training data. Nevertheless, such investigations still provide valuable insights into ChatGPT and raise more thoughts about how to leverage its ability. But we need to exercise great caution in drawing conclusions, as benchmarks [4,5] utilizing known knowledge are set up for evaluation purposes rather than for the goal of causal discovery.

Results and Insights. The ground-truth causal relationships in neuropathic pain diagnosis are obtained from both a domain expert and known medical literature [5]. As the number of all possible cause-effect pairs in this context is huge (more than 10,000 pairs), we cannot test all of them manually. Thus, we sub-sampled 50 positive pairs (ground-truth causal relationships) and 50 negative pairs (wrong causal relationships) from the dataset and generated questions in the format "X causes Y. Answer true or false", where X and Y are sampled pairs from the full causal map of the neuropathic pain dataset. The full test results can be found at shorturl.at/amBX1. Many individual answers are reasonable, such as in Figure 2, but the overall performance is still flawed. As shown in Tables 1 and 2, ChatGPT tends to make false negative mistakes. We inspected the results qualitatively and quantitatively and observed the following:
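The sampling and prompt-generation procedure can be sketched as follows. The edge list here is a tiny hypothetical stand-in for the full benchmark graph, and the sample sizes are reduced from the paper's 50/50 for illustration:

```python
import random

random.seed(0)

# Hypothetical ground-truth causal edges (cause, effect); the real benchmark
# graph has far more variables and edges.
true_edges = {
    ("R C7 Radikulopati", "R Handbesvär"),
    ("L L5 Radikulopati", "L Laterala vadbesvär"),
    ("R S1 Radikulopati", "R Lårbesvär"),
}
variables = sorted({v for edge in true_edges for v in edge})

# Negative pairs: ordered pairs of distinct variables that are not true edges
all_pairs = [(x, y) for x in variables for y in variables if x != y]
negative_pairs = [p for p in all_pairs if p not in true_edges]

n = 2  # the paper samples 50 of each; use 2 here for illustration
positives = random.sample(sorted(true_edges), n)
negatives = random.sample(negative_pairs, n)

def make_prompt(x, y):
    return f"{x} causes {y}. Answer true or false."

prompts = [(make_prompt(x, y), True) for x, y in positives] + \
          [(make_prompt(x, y), False) for x, y in negatives]
```

Each prompt would then be submitted to the model, and the returned true/false verdict compared against the attached ground-truth label.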
It only understands the language that is typically used to describe the situations, but not the underlying knowledge. We provide two examples to demonstrate this. The first example is shown in Figure 3. ChatGPT cannot identify that lower abdominal discomfort can be caused by T12 radiculopathy. Its explanation identifies the lower back, hip, and leg regions only, while the T12 nerve goes through these regions, as shown in Figure 1, and the lower abdominal region is part of its path. This indicates that ChatGPT provides answers based on the trained content but does not understand the human body's nervous system. The second example is shown in Figure 4, which demonstrates a lack of understanding of how regional discomfort is described. The region around the key bone is the upper shoulder region. ChatGPT can identify shoulder discomfort as an effect, but not the discomfort around the key bone.
Its performance is not yet consistent or stable. First, we observe that it provides different answers to the same question. We tested some of the queries twice on different days. As shown in Table 3, the answers on the first day differ significantly from the ones on the second day. The answers on the second day are much more conservative in claiming a causal relationship. This may be due to internal model updates.
Such inconsistent performance is a major concern for answering causal questions. As the later runs produced very few positive answers, the final results reported in Tables 1 and 2 use the earlier answers when available. Second, as the original dataset is associated with terms in Swedish, we found that ChatGPT can correctly identify Swedish in some cases, such as in Figure 2, but fails in other cases, such as in Figure 5. This may contribute further to the large number of false negatives.
Conclusion.
Based on the observations, we find:
• There are some limitations for the current ChatGPT in terms of understanding new concepts and knowledge beyond the existing corpus of text training data. Moreover, the consistency and stability of its performance need to be improved. Such improvements can happen without a paradigm shift in the models.
• We need to be extremely cautious about using causal claims made by ChatGPT as causal discovery results. This is because causal discovery and causal question answering with large language models are fundamentally different tasks. Causal benchmarks may be biased towards utilizing existing knowledge for evaluation [5,4], which is against the goal of causal discovery.
• In some situations, ChatGPT does give correct answers that can be non-trivial to obtain from a domain expert, which could serve as a good complementary for causal discovery methods to resolve corner cases. This might open up new research opportunities for the causal community on utilizing the recent developments of large language models to complement, improve and develop better causal machine learning tools.
Although there are existing limitations, we believe that the opportunities for ChatGPT to help improve causality research are huge, given deep integration with ChatGPT-type models and interfaces. We can also imagine a future where ChatGPT can answer many different causal questions.

Table 3: Results demonstrating the lack of consistency of ChatGPT. Each query has the form "X causes Y. ('R' and 'L' refer to the right and left sides of the body, respectively.) Answer with true or false." The columns give the ground truth and the answers obtained on two different days (1 = true, 0 = false).

Query                                              Ground truth   Day 1   Day 2
R S1 Radikulopati causes R Lårbesvär               TRUE           1       0
L T5 Radikulopati causes L Bröstbesvär             TRUE           0       0
L C5 Radikulopati causes R Interskapulära besvär   TRUE           0       0
R C6 Radikulopati causes R Underarmsbesvär         TRUE           0       0
L L1 Radikulopati causes L Mediala ljumskbesvär    TRUE           0       0
L L1 Radikulopati causes L Adduktortendalgi        TRUE           0       0
L T10 Radikulopati causes IBS                      TRUE           0       0
R L5 Radikulopati causes L Bakhuvudvärk            TRUE           0       0
Figure 1: Dermatome map [1] as a reference for this benchmark.

Figure 2: Example showing that ChatGPT can correctly answer the question and provide reasonable explanations.

Figure 3: The lower abdomen is the region where the T12 nerve passes. Looking at the dermatome map in Figure 1, it is easy to see that lower back, hip, and lower abdominal discomfort can all be caused by T12 radiculopathy.

Figure 4: Example showing that ChatGPT fails to understand the region on the body. The area around the collarbone largely overlaps with the front shoulder area, especially when the patient describes the symptoms.

Figure 5: Example showing that ChatGPT can identify the foreign language from time to time, but that it is not very reliable.
Table 1: Test results demonstrate high precision and low recall.

Table 2: Confusion matrix showing that there were no false positives. Rows are predictions and columns are ground truth.

            Negative   Positive
Negative    50         44
Positive    0          6
C. Glymour, K. Zhang, and P. Spirtes. Review of causal discovery methods based on graphical models. Frontiers in Genetics, 10:524, 2019.

OpenAI. ChatGPT. https://chat.openai.com/chat/.

A. Sharma. ChatGPT causality pairs. https://github.com/amit-sharma/chatgpt-causality-pairs.

R. Tu, K. Zhang, B. Bertilson, H. Kjellstrom, and C. Zhang. Neuropathic pain diagnosis simulator for causal discovery algorithm evaluation. Advances in Neural Information Processing Systems, 32, 2019.
Learning More Discriminative Local Descriptors for Few-shot Learning
Qijun Song
Siyun Zhou
Liwei Xu
Learning More Discriminative Local Descriptors for Few-shot Learning
Few-shot learning for image classification has become a hot topic in computer vision; it aims at fast learning from a limited number of labeled images and generalizing to new tasks. In this paper, motivated by the idea of Fisher Score, we propose a Discriminative Local Descriptors Attention (DLDA) model that adaptively selects the representative local descriptors without introducing any additional parameters, whereas most existing local-descriptor-based methods rely on neural networks that inevitably involve tedious parameter tuning. Moreover, we modify the traditional k-NN classification model by adjusting the weights of the k nearest neighbors according to their distances from the query point. Experiments on four benchmark datasets show that our method not only achieves higher accuracy than state-of-the-art approaches for few-shot learning, but also exhibits lower sensitivity to the choice of k.
Introduction
Image classification is an important research area in computer vision. Much of the related work relies on collecting and labeling a large amount of data, which is often difficult and expensive. In addition, such image classification mechanisms are quite different from human image discrimination, which can recognize various targets given only a single image from a certain class [9]. These call not merely for an adequate reduction in the sample size for learning, but also for the ability to imitate intelligent human behavior in image discrimination. In this context, there is increasing interest in few-shot learning, which can be generally categorized into three types: meta-learning based methods [1,4,15,18,19], data augmentation methods [3,14,17,21,24,26,30] and metric-learning based methods [7,10,12,22,26,29,32]. The meta-learning methods aim to provide a model for adjusting the parameters so that it can quickly learn new tasks based on the acquired knowledge, while the data augmentation methods pay more attention to the limited amount of available data in few-shot learning. The metric-based learning methods first map the original images to a high-dimensional semantic space, and then compute the distance between the samples in the support set and the ones in the query set to perform the classification tasks.
In this work, we concentrate on the metric-based learning methods. The methods in [7,22,26] all use image-level features for classification, differing only in the metric adopted. However, due to the small sample size of each class, image-level features based methods may not be effective. On this account, the Deep Nearest Neighbor Neural Network (DN4) model [12] uses the local descriptors of the image to learn an image-to-class metric; consequently, the model is more capable of catching the discriminative information of the image. Recently, Zheng et al. [32] apply bi-directional local alignment methods to the DN4 model and attain better performance. Note that in these methods, not all local descriptors are well-valued in the classification task. For this point, the work [10] uses Convolutional Neural Networks (CNNs) to generate weights for all local descriptors such that the representative local features can be emphasized. However, there still exist two major problems to be solved: most methods of this type are based on (i) CNNs for highlighting the representative local descriptors, which introduces extra model parameters, and thus increases the model complexity and the cost of parameter tuning; (ii) the k-NN or variants thereof for classification, which is usually sensitive to k.
To address the above two limitations, in this paper, we propose a Discriminative Local Descriptor Attention (DLDA) model and an improved k-NN based classification model. As can be seen from Fig. 1, all images are first put into a feature embedding model, and then move to the DLDA model. Correspondingly, the DLDA model produces an attention map for each image, where the discriminative local descriptors are fully valued. This not only stresses the representative local features of each class, but also weakens the effect of noise in the classification. The proposed DLDA model is non-parametric, and does not change the size of the feature map. In the last stage, the images enter the improved k-NN based classification model, where the k-NN algorithm finds the nearest k neighbors for each local descriptor in the query set. We then assign greater weights to those of the k neighbors that are more concentrated, and thus the impact of k can be reduced.

Figure 1: A flow chart of the proposed few-shot learning method for the 5-way 1-shot classification. The learning framework is composed of three models: (i) a feature embedding model implemented on a CNN to extract the local descriptors of images; (ii) a discriminative local descriptors attention model to highlight the representative local features of each class of the images in the support set; (iii) an improved k-NN based classification model to measure the similarity between the images in the query set and each class in the support set.

The rest of this paper is organized as follows. An overview of the related works in few-shot learning is provided in Section 2. The details of the proposed method are presented in Section 3, with extensive numerical experiments following in Section 4. The conclusions of this work are finally given in Section 5.
Related work
In the following, we provide a relatively comprehensive review of existing works on few-shot learning, which can be classified primarily into three categories as mentioned before.

Meta-learning based methods. Meta-learning [20], also known as "learning to learn" [25], draws on previous experience to help understand new tasks. Finn et al. [1] propose a Model-Agnostic Meta-Learning (MAML) algorithm to obtain a universal initialization strategy such that the model can converge within just a few iterations when faced with a new task. However, Jamal et al. [4] hold the view that such a uniformly-initialized meta-learner cannot be well adapted to a new task due to the differences among tasks, and in this regard, they design several Task-Agnostic Meta-Learning (TAML) algorithms to prevent the model from over-performing on certain tasks. For the same purpose of obtaining a better initialization, Ravi et al. [18] present a Long Short Term Memory (LSTM) based meta-learner model to optimize another neural network classifier. Similarly, by taking advantage of an LSTM-based meta-learner with access to external memory, Santoro et al. [19] develop a memory-enhanced neural network. Although meta-learning type approaches do spur the development of few-shot learning, training models with complex memory network structures remains a tricky issue [15].

Data augmentation methods. Data augmentation can effectively alleviate the matter of the limited number of samples in few-shot learning by constructing new samples based on the old ones. Various geometric transformations can be used to realize data augmentation, such as flipping [17], rotating [26], etc. Meanwhile, another technique called Random Image Cropping and Patching (RICAP) [24] randomly crops four images and patches them to generate new training images, which has advantages similar to label smoothing. In [30], Zhang et al. propose the mixup method that trains a neural network using new samples formed by convex combinations of the samples and their labels. By the use of the Delta encoder, the work [21] extracts transferable intra-class deformations between training samples of the same class, and hence sample synthesis for an unseen class with only a few provided samples can be achieved. Besides, Mehrotra et al. [14] apply generative adversarial networks [3] to few-shot learning for data augmentation, and provide generative adversarial residual pairwise networks for the one-shot learning problem.

Metric-learning based methods. Metric-learning approaches are intended for choosing a metric of similarity which computes the distance between the samples in the query set and each class in the support set. They can be roughly divided into two categories: (i) image-level features based methods; (ii) local descriptor features based methods. Among the image-level features based methods, the Siamese neural network [7] extracts features from two given images and takes the weighted L1-norm to measure the distance between the two feature vectors, while the matching network [26] makes good use of LSTM to enhance the network and improve the learning ability. Besides, the prototypical network [22] is another popular approach based on image-level features, where classification is implemented by calculating the Euclidean distance to the class prototypes in the learned embedding space. Among the local descriptor features based methods, as mentioned earlier, Li et al. [12] propose a DN4 model to learn an image-to-class measurement. Later, Zhang et al. [29] investigate the structural distance between local feature representations by using the Earth Mover's Distance (EMD) to acquire the image correlation.
Most recently, a Bi-Directional Local Alignment (BDLA) method is presented in [32], which designs a convex combination of the bidirectional distances between the query point and the classes of the support set for classification module. In addition, to reduce the influence of noises and obtain more representative local descriptors, Li et al. [10] develop a More Attentional Deep Nearest Neighbor Neural Network (MADN4), where the convolutional block attention module is employed for the local descriptor extraction.
The proposed method
Problem statement
In the standard few-shot learning, we are given three datasets:
(i) a training set D = {(x_i, y_i)}_{i=1}^{N} with samples x_i and corresponding labels y_i (i = 1, 2, ..., N); (ii) a support set S = {(x_i, y_i)}_{i=1}^{MK}, where M represents the number of classes contained and K is the number of samples in each class; (iii) a query set Q, which is comprised of the samples sharing the same classes as the set S but all unlabeled. The label space of D and the one of S are disjoint; thus the sets satisfy Q ∩ D = S ∩ D = ∅. The goal of few-shot learning is to classify the samples in the query set Q according to the support set S, and this problem is referred to as "M-way K-shot classification". Since the number of the samples in S is limited, we use the abundant samples in the training set D to learn the transferable knowledge such that the classification performance of the model on Q can be improved.
Following the episode training mechanism in [26], it is an effective way to make full use of the training set D. To achieve this, we construct multiple episodes to train our model. In each episode, we randomly select M classes of samples and K samples belonging to each class from the training set D, which form the training support set D S , and take the remaining samples of these M classes in D as the training query set D Q . Once all training episodes have been completed, we use the fully trained model to classify the query set Q according to the support set S in the testing stage.
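The episode construction above can be sketched in a few lines. This is an illustrative sketch only: the function name `sample_episode` and the dict-based dataset layout (class label mapped to a list of samples) are our assumptions, not the paper's actual data pipeline.

```python
import random

def sample_episode(dataset, m_way=5, k_shot=1, n_query=15, rng=random):
    """Sample one M-way K-shot episode from `dataset`, a hypothetical
    dict mapping class label -> list of samples."""
    classes = rng.sample(sorted(dataset), m_way)
    support, query = [], []
    for c in classes:
        picks = rng.sample(dataset[c], k_shot + n_query)
        support += [(x, c) for x in picks[:k_shot]]   # K labeled support samples
        query += [(x, c) for x in picks[k_shot:]]     # remaining samples as queries
    return support, query
```

For the 5-way 1-shot setting with 15 query images per class, each episode then yields 5 support images and 75 query images, matching the counts stated in Section 4.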
Framework of the proposed method
As can be seen in Fig. 1, our few-shot learning method mainly consists of three parts: a feature embedding model, a discriminative local descriptors attention model, and a classification model. Following common practice, the feature embedding model is composed of a CNN, which is used for feature extraction of images in the support set and the query set. To emphasize discriminative local descriptors in the image, we introduce an additional attention mechanism model as an intermediate processing, and to improve the classification performance, we incorporate a modified k-NN into the last stage, i.e., the classification model, which calculates the similarity scores of the query image with each class of images in the support set, and then predicts the query image to be belonging to the class with the highest score.
Feature embedding model
Given an image X, we input it to the feature embedding model, which can be formulated as a mapping F θ with the neural network parameter θ. Then, we get the corresponding output F θ (X) ∈ R h×w×d , where h and w represent the height and width, respectively, and d denotes the number of channels. Note that F θ (X) can be written into the following matrix form
F_θ(X) = [x_1, . . . , x_r] ∈ R^{d×r},    (3.1)
where r = hw, and x i ∈ R d is called the i-th (i = 1, · · · , r) local descriptor (the red block with dotted lines in Fig. 1) of the feature map of image X. And then we perform a normalization step on each column of F θ (X).
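A minimal sketch of Eq. (3.1) and the column normalization, assuming a NumPy h × w × d array layout for the feature map; the function name `local_descriptors` is ours for illustration:

```python
import numpy as np

def local_descriptors(feature_map):
    """Turn an h x w x d CNN feature map into the d x r matrix of
    column-wise L2-normalized local descriptors, r = h * w (Eq. (3.1))."""
    h, w, d = feature_map.shape
    F = feature_map.reshape(h * w, d).T             # each column is one x_i
    norms = np.linalg.norm(F, axis=0, keepdims=True)
    return F / np.maximum(norms, 1e-12)             # normalize each column
```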
Discriminative local descriptors attention model
In some previous studies [12,29], all local descriptors are treated equally, and some underlying representative information of the image may thus be ignored, as illustrated in Fig. 2. To address this, the work [10] generates specified weights for local descriptors using CNNs, which however introduces additional parameters that usually require careful tuning, and may even worsen the overfitting issue in few-shot learning. At this point, we propose a discriminative local descriptors attention model Φ, which not only underscores the importance of the discriminative local descriptors in the support set, but also avoids the involvement of extra parameters. The proposed DLDA model is inspired by the Fisher Score approach [11], where for each local descriptor, the ratio of the intra-class similarity to the inter-class one obtained by k-NN is taken as the weight. Now we take the 5-way 1-shot problem as an example. Given five images X_i (i = 1, ..., 5) belonging to five classes that are selected from the support set, one has the output F_θ(X_i) = [x_i1, ..., x_ir] ∈ R^{d×r} of the feature embedding model F_θ. For each local descriptor x_ij (j = 1, ..., r), we find its k nearest neighbors, denoted by x^m_ij (m = 1, ..., k), and return the corresponding cosine similarities. Based on that, the intra-class and inter-class similarities are characterized by the following formulas:

intra-class(x_ij) = Σ_{m=1}^{k} cos(x_ij, x^m_ij),    (3.2)

inter-class(x_ij) = Σ_{m=1}^{k} cos(x_ij, x̄^m_ij),    (3.3)

where the x^m_ij are the k nearest neighbors of x_ij within its own class and the x̄^m_ij are its k nearest neighbors among the other classes. Then, the weight w_ij of x_ij in the DLDA model is defined as

w_ij = intra-class(x_ij) / inter-class(x_ij),    (3.4)
which finally leads to the weighted feature map of the image X_i as

F̂_θ(X_i) = [w_i1 x_i1, · · · , w_ir x_ir] =: [x̂_i1, · · · , x̂_ir] ∈ R^{d×r}.    (3.5)
For the 5-way 5-shot case where there are five images per class, we first compute an average feature map of the five ones of each class, and then follow the same steps as the 5-way 1-shot case.
In addition, the DLDA model is only performed on the images in the support set, but not on the ones in the query set. It is also worth noting that the DLDA model is a non-parametric model, which may facilitate the overfitting problem to some extent.
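The DLDA weighting of Eqs. (3.2)-(3.4) can be sketched as follows. This is an illustrative sketch under stated assumptions: we take intra-class neighbors from the descriptor's own class (excluding the descriptor itself) and inter-class neighbors from all other support classes, and the function name `dlda_weights` is ours.

```python
import numpy as np

def dlda_weights(class_maps, k=1):
    """Sketch of the DLDA weighting (Eqs. (3.2)-(3.4)). `class_maps` is a
    list of d x r arrays, one per support class, columns L2-normalized."""
    weights = []
    for i, F in enumerate(class_maps):
        others = np.hstack([G for m, G in enumerate(class_maps) if m != i])
        sims_in = F.T @ F                      # r x r cosine similarities
        np.fill_diagonal(sims_in, -np.inf)     # exclude self-similarity
        sims_out = F.T @ others
        intra = np.sort(sims_in, axis=1)[:, -k:].sum(axis=1)   # Eq. (3.2)
        inter = np.sort(sims_out, axis=1)[:, -k:].sum(axis=1)  # Eq. (3.3)
        weights.append(intra / inter)                          # Eq. (3.4)
    return weights
```

Multiplying each column of a class's feature map by its weight then gives the weighted map of Eq. (3.5).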
An improved k-NN based classification model
The final stage is for classification, and one can choose any appropriate technique to deal with it. One popular choice is the k-NN approach [12,10], which however implicitly assumes that the k nearest neighbors are of equal importance in the classification decision regardless of their distances from the query point. To remedy this, we slightly modify the PNN method [28] for the final classification, where different weights are assigned based on the distance from the query point. The details of the proposed modified k-NN based classification model are described as follows.
Given an image Y in the query set, we denote the corresponding output from the feature embedding model as F_θ(Y) = [y_1, . . . , y_r] ∈ R^{d×r}. For each descriptor y_s (s = 1, ..., r), we find its k nearest neighbors x̂^1_ij, ..., x̂^k_ij in class i, and compute the corresponding cosine similarities cos(y_s, x̂^1_ij), ..., cos(y_s, x̂^k_ij). According to the basic idea that a larger cosine similarity means a smaller distance, and therefore a greater weight should be assigned, the weights for the k nearest neighbors of y_s are given by

w_sn = cos(y_s, x̂^n_ij) / Σ_{p=1}^{k} cos(y_s, x̂^p_ij),    n = 1, · · · , k,    (3.6)
where cos(y s ,x n ij ) is assumed to be positive for all 1 ≤ n ≤ k. Such assumption is natural and reasonable since the parameter k is often set to a small integer, e.g., 1, 3, 5.
Finally, the similarity between image Y and class i is defined as
Similarity(Y, class i) = Σ_{s=1}^{r} Σ_{n=1}^{k} w_sn cos(y_s, x̂^n_ij),    (3.7)

which is the sum of the rk weighted similarities between the r descriptors and their k nearest neighbors.
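The weighted image-to-class score above can be sketched compactly; `similarity_to_class` is our illustrative name, and both inputs are assumed to be column-normalized descriptor matrices:

```python
import numpy as np

def similarity_to_class(query_F, class_F, k=3):
    """Weighted image-to-class similarity in the spirit of Eqs. (3.6)-(3.7).
    query_F: d x r query descriptors; class_F: d x r' (weighted) support
    descriptors of one class; all columns L2-normalized."""
    sims = query_F.T @ class_F                  # r x r' cosine similarities
    topk = np.sort(sims, axis=1)[:, -k:]        # k nearest neighbors per y_s
    w = topk / topk.sum(axis=1, keepdims=True)  # weights of Eq. (3.6)
    return float((w * topk).sum())              # score of Eq. (3.7)
```

The query image is assigned to the class with the highest score; note that for k = 1 the weights are all 1 and the score reduces to the plain sum of nearest-neighbor similarities, as in DN4.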
Less sensitivity to k can be expected
As in the traditional k-NN, all k neighbors are treated equally, and thus the differences in the distances of these k neighbors from the query point are neglected. With this in mind, our proposed improved k-NN assigns a greater weight to the neighbor that is closer to the query point, or to say, attaches greater importance to the neighbor in the final classification.
For illustrative purposes, we consider the simplest k = 2 case. Suppose that the two nearest neighbors of the query point y are x 1 and x 2 , and the relation cos(y, x 1 ) > cos(y, x 2 ) holds. In the traditional k-NN, the percentages of the scores of x 1 and x 2 can be expressed as
x 1 :
cos(y, x 1 ) cos(y, x 1 ) + cos(y, x 2 ) , x 2 : cos(y, x 2 ) cos(y, x 1 ) + cos(y, x 2 )
, (3.8) while in our proposed improved k-NN, the percentages can be written as
x 1 : cos 2 (y, x 1 ) cos 2 (y, x 1 ) + cos 2 (y, x 2 ) , x 2 : cos 2 (y, x 2 ) cos 2 (y, x 1 ) + cos 2 (y, x 2 )
.
(3.9)
Comparing (3.8) and (3.9), we have cos 2 (y, x 1 ) cos 2 (y, x 1 ) + cos 2 (y, x 2 ) − cos(y, x 1 ) cos(y, x 1 ) + cos(y, x 2 ) = cos 2 (y, x 1 )[cos(y, x 1 ) + cos(y, x 2 )] − cos(y, x 1 )[cos 2 (y, x 1 ) + cos 2 (y, x 2 )] [cos 2 (y, x 1 ) + cos 2 (y, x 2 )][cos(y, x 1 ) + cos(y, x 2 )] = cos(y, x 1 ) cos(y, x 2 )[cos(y, x 1 ) − cos(y, x 2 )] [cos 2 (y, x 1 ) + cos 2 (y, x 2 )][cos(y, x 1 ) + cos(y, x 2 )] > 0, which implies that the proposed improved k-NN will enhance the importance of the nearest neighbor and relegate the farthest neighbor to lower importance, and consequently, the final classification will depend more on the nearest neighbor. More specifically, we may encounter two possible situations as shown in Fig. 3 and Fig. 4. When the cosine similarity between x 1 and y is close to the one between x 2 and y, the traditional k-NN and the improved one are nearly equivalent, but when the cosine similarity between x 1 and y is significantly larger than the one between x 2 and y, the improved k-NN may have marked effect on the insensitivity to the choice of k as the farther point x 2 is considered less informative. Therefore from the discussions above, the improved k-NN can behave more stably with respect to different choices of k (k > 1) than the traditional k-NN.
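The k = 2 comparison above can be checked numerically. The helper names below are ours; the formulas are exactly the score shares of the two schemes.

```python
def share_traditional(c1, c2):
    # fraction of the score given to x1 under plain k-NN (Eq. (3.8))
    return c1 / (c1 + c2)

def share_improved(c1, c2):
    # fraction under the distance-weighted scheme (Eq. (3.9))
    return c1**2 / (c1**2 + c2**2)

# cos(y, x1) much larger than cos(y, x2): weight shifts further toward x1
print(share_traditional(0.9, 0.3))  # 0.75 (up to float rounding)
print(share_improved(0.9, 0.3))     # 0.9  (up to float rounding)
```

With nearly equal similarities (e.g. 0.81 vs 0.80) both schemes give almost the same shares, illustrating why the two methods behave alike in the Fig. 3 situation but differ in the Fig. 4 situation.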
Experimental settings
Network architecture
For the sake of fairness, we adopt the same network structure of other few-shot learning methods that will be used for comparison in our experiment, e.g., [12,32]. To be specific, we use four convolutional blocks to construct the feature embedding model, and each convolutional block contains one convolutional layer with 64 filters of size 3 × 3, one batch normalization layer, and one Leaky ReLU layer. In addition, a 2×2 pooling layer is added to the first two convolutional blocks. In general, this embedding network is referred to as Conv4.
Implementation details
The experiments are implemented by PyTorch [16]. Both the 5-way 1-shot and 5-way 5-shot classifications are considered in the experiments. At the training stage, we construct 600,000 episodes randomly from the training part of the MiniImageNet dataset and 300,000 episodes from the one of the three fine-grained datasets. In each episode, for the 1-shot and 5-shot settings, we choose 1 and 5 support images, 15 and 10 query images from each class, respectively. Take the 5-way 1-shot setting as an example, we have 5 support images and 75 query images in each episode. The Adam optimizer [6] with a cross-entropy loss is used for training. The learning rate is initialized to 0.001, and will be reduced by half every 100,000 episodes. At the testing stage, we construct 600 episodes randomly from the testing part of each dataset. The testing process will be repeated 5 times and the average top-1 accuracy along with the 95% confidence interval will be presented in the results.
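The stated learning-rate schedule (initial 0.001, halved every 100,000 episodes) amounts to a simple step decay; a minimal sketch, with `learning_rate` as our illustrative name:

```python
def learning_rate(episode, base_lr=1e-3, step=100_000, gamma=0.5):
    """Learning rate after `episode` training episodes: the base rate
    multiplied by `gamma` once per completed `step` episodes."""
    return base_lr * gamma ** (episode // step)
```

In PyTorch this corresponds to wrapping the Adam optimizer in a step scheduler with the same step size and decay factor.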
Baselines
To prove the feasibility and superiority of our proposed few-shot learning method, for the MiniImageNet dataset, we make comparisons with the following twelve methods: MAML [1], TAML [4], Meta-Learner LSTM [18], MetaGAN [31], GNN [2], TPN-semi [13], Relation Net [23], Matching Net [26], Prototypical Net [22], DN4 [12], BDLA [32], and MADN4 [10]. And for the three fine-grained datasets, we make comparisons with the following five methods: Matching Net [26], Prototypical Net [22], DN4 [12], BDLA [32] and GNN [2].
Results
Comparison on the MiniImageNet dataset
The results on MiniImageNet are given in Table 1. Compared with the classical metric-based methods, in the 5-way 1-shot case, the proposed DLDA model with k = 1 gains 2.37%, 9.25% and 3.39% improvements over Relation Net [23], Matching Net [26] and Prototypical Net [22], respectively. And compared with the more recent DN4, our DLDA model improves the accuracy by 1.57% and 0.74% in the 5-way 1-shot and 5-way 5-shot settings, respectively, which suggests that the DLDA model is better able to emphasize the discriminative local features. Moreover, the combination of the DLDA model and the improved k-NN algorithm further enhances the classification accuracy and outperforms all of the previous methods. Note additionally that though the proposed method achieves results similar to those of MADN4 [10], our method does not introduce any new parameters, which thus avoids tedious parameter tuning and may help alleviate the problem of overfitting to some degree.
Comparison on fine-grained datasets
One distinguishing feature of fine-grained datasets is the small inter-class variation but the large intraclass variation, which hence makes the classification much more challenging. As can be seen from Table 2, in the case of 5-way 1-shot, the DLDA improves the accuracy by 4.03%, 1.02%, 8.28% over DN4 in the Stanford Dogs, Stanford Cars and CUB-200, respectively. And the one with the improved k-NN is ahead of DN4 under both of the 1-shot and 5-shot cases on three fine-grained datasets. Particularly, on the Standford Dogs dataset, the accuracy is enhanced by 3.67% and 6.65% in the 1-shot and 5-shot classifications, respectively. Overall, although the DLDA with/without the improved k-NN lags behind the BDLA in the 1-shot case on the Standford Cars dataset, it is safe to say that our method is the winner among all competitors.
Results on the sensitivity to k
As observed in previous studies on k-NN based few-shot learning methods, their numerical performance usually depends on the choice of k, which requires a large number of experiments to set it appropriately. Specifically, from Table 3 we can find that in the 5-way 1-shot case, the accuracy of the DN4 model drops from 52.35% at k = 1 to 50.31% at k = 7, a fluctuation of 2.04%, and the accuracy of the BDLA model reaches its peak of 52.36% at k = 3 but slides to 45.94% at k = 7, a fluctuation of 6.42%, while the fluctuation of our proposed DLDA model with the improved k-NN is a much smaller 0.39%. To illustrate this, we provide a visual relationship between the selection of k and the corresponding accuracy in Fig. 5 and Fig. 6, which shows that our method is clearly less sensitive to the choice of k than DN4 and BDLA, both in the 5-way 1-shot and 5-way 5-shot cases.
Results on cross-domain classification
The cross-domain classification is to generalize the model that is pre-trained on the source domain to the target domain, which appears to be an important indicator of measuring the ability of handling the differences among multiple sources in few-shot learning. In our experiments, considering that the types of the images in the MiniImageNet dataset differ significantly from the ones in the three fine-grained datasets, we use the former dataset as the source domain to train the model and use the latter three datasets as the target domain to test the model. Moreover, we select three models, Prototypical Net [22], DN4 [12] and BDLA [32] for comparison. It can be obviously seen from Table 4 that our method is the best performer under the domain shift. This result indicates that aside from an improved classification performance and higher robustness to the parameter k, our method has a better cross-domain generalization capability over other competitors, which may be because the DLDA model together with the modified k-NN makes the local descriptors extracted by the embedded network more discriminative and transferable.
Conclusions
In this paper, we focus on the few-shot image classification, and develop a new method for this problem, which includes a Discriminative Local Descriptors Attention (DLDA) model and an improved k-NN based classification model. Inspired by Fisher Score, the DLDA model gives a weight to each local descriptor in the support set for highlighting the representative local features before the image-to-class classification. Based on the idea that the value of the information can be partly reflected by the distance, in the final classification, the improved k-NN model assigns larger weights to those local descriptors that are closer to the query point. Extensive experimental results on the benchmark datasets illustrate that, compared with the state-of-the-art few-shot learning methods, the proposed method obtains a higher accuracy and a lower sensitivity to the parameter k, especially on the MiniImageNet dataset. In addition, it also shows to be more capable of dealing with the situation of domain shift.
Figure 2: Consider the 2-way 1-shot case. The two support images are selected from the CUB-200 dataset, and clearly belong to two different breeds of birds. The local features highlighted in the yellow boxes of the two images are quite similar, and thus the query image is hard to classify correctly according to this type of local features. On the contrary, the local features in the red boxes are more conducive to distinguishing the query image among kinds of images, and naturally play a much more important role in the classification task.

Figure 3: The case of k = 2: cos(y, x_1) is slightly larger than cos(y, x_2).

Figure 4: The case of k = 2: cos(y, x_1) is much larger than cos(y, x_2).

MiniImageNet [26]: the dataset is composed of 60,000 images selected from the ImageNet dataset, with 100 classes and 600 images per class. Each image is of size 84 × 84. Following the splitting approach in [18], we take 64 classes for training, 16 classes for validation and the remaining 20 classes for testing.

CUB-200 [27]: the dataset covers 200 bird species and the number of images in each class varies. We take 130 classes for training, 20 classes for validation and the remaining 50 classes for testing. This dataset is the most widely used benchmark for fine-grained image classification.

StanfordDogs [5]: the dataset consists of 120 breeds of dogs with a total of 20,580 images. We take 70 classes for training, 20 classes for validation and the remaining 30 classes for testing.

StanfordCars [8]: the dataset contains 16,185 car images of 196 classes, and the classes are mainly derived according to the brand, the model, and the year of manufacture. We take 130 classes for training, 17 classes for validation and the remaining 49 classes for testing.

The images in the three fine-grained image classification datasets, i.e., CUB-200, StanfordDogs, and StanfordCars, are resized uniformly to 84 × 84, consistent with the ones in the MiniImageNet dataset.

Figure 5: 5-way 1-shot classification on the MiniImageNet dataset.

Figure 6: 5-way 5-shot classification on the MiniImageNet dataset.
Table 1: Average accuracy with 95% confidence intervals on the MiniImageNet dataset (5-way accuracy, %)

Model                          Embedding   1-shot          5-shot
MAML [1]                       Conv4-32    48.70 ± 1.84    63.11 ± 0.92
TAML [4]                       Conv4       51.73 ± 1.88    66.05 ± 0.85
Meta-Learner LSTM [18]         Conv4-32    43.44 ± 0.77    60.60 ± 0.71
MetaGAN [31]                   Conv4-32    52.71 ± 0.64    68.63 ± 0.67
GNN [2]                        Conv4-64    49.02 ± 0.98    63.50 ± 0.84
TPN-semi [13]                  Conv4-64    52.78 ± 0.27    66.42 ± 0.21
Relation Net [23]              Conv4-64    50.44 ± 0.82    65.32 ± 0.70
Matching Net [26]              Conv4-64    43.56 ± 0.84    55.31 ± 0.73
Prototypical Net [22]          Conv4-64    49.42 ± 0.78    68.20 ± 0.66
DN4 [12]                       Conv4-64    51.24 ± 0.74    71.02 ± 0.64
BDLA [32]                      Conv4-64    52.97 ± 0.35    71.31 ± 0.68
MADN4 [10]                     Conv4-64    53.20 ± 0.52    71.66 ± 0.47
DLDA (k = 1)                   Conv4-64    52.81 ± 0.79    71.76 ± 0.66
DLDA + Improved k-NN (k = 3)   Conv4-64    53.20 ± 0.82    71.76 ± 0.47
Table 2: Average accuracy with 95% confidence intervals on the fine-grained datasets (5-way accuracy, %)

Model                         Embed.     Stanford Dogs                  Stanford Cars                  CUB-200
                                         1-shot         5-shot         1-shot         5-shot         1-shot         5-shot
Matching Net [26]             Conv4-64   35.80 ± 0.99   47.50 ± 1.03   34.80 ± 0.98   44.70 ± 1.03   45.30 ± 1.03   59.50 ± 1.01
Prototypical Net [22]         Conv4-64   37.59 ± 1.00   48.19 ± 1.03   40.90 ± 1.01   52.93 ± 1.03   37.36 ± 1.00   45.28 ± 1.03
GNN [2]                       Conv4-64   46.98 ± 0.98   62.27 ± 0.95   55.85 ± 0.97   71.25 ± 0.89   51.83 ± 0.98   63.69 ± 0.94
DN4 (k = 1) [12]              Conv4-64   45.41 ± 0.76   63.51 ± 0.62   59.84 ± 0.80   88.65 ± 0.44   46.84 ± 0.81   74.92 ± 0.62
BDLA [32]                     Conv4-64   48.53 ± 0.87   70.07 ± 0.70   64.41 ± 0.84   89.04 ± 0.45   50.59 ± 0.97   75.36 ± 0.72
DLDA (k = 1)                  Conv4-64   49.44 ± 0.85   69.36 ± 0.69   60.86 ± 0.82   89.50 ± 0.41   55.12 ± 0.86   74.46 ± 0.65
DLDA + Improved k-NN (k = 3)  Conv4-64   49.08 ± 0.83   70.16 ± 0.67   60.04 ± 0.83   89.62 ± 0.42   54.53 ± 0.85   75.85 ± 0.68
Table 3: Average accuracy with different k (k = 1, 3, 5, 7) on the MiniImageNet dataset
Table 4: Average accuracy with 95% confidence intervals on the fine-grained datasets using the model trained on the MiniImageNet dataset

Dataset                  Prototypical Net   DN4            BDLA           DLDA + Improved k-NN
Stanford Dogs  1-shot    33.11 ± 0.64       36.32 ± 0.68   35.55 ± 0.66   37.10 ± 0.70
               5-shot    45.94 ± 0.65       53.43 ± 0.71   52.64 ± 0.69   53.99 ± 0.70
Stanford Cars  1-shot    29.10 ± 0.75       30.77 ± 0.57   30.62 ± 0.58   31.48 ± 0.56
               5-shot    38.12 ± 0.60       46.93 ± 0.62   45.99 ± 0.61   49.63 ± 0.66
CUB-200        1-shot    39.39 ± 0.68       39.89 ± 0.73   40.40 ± 0.76   41.36 ± 0.74
               5-shot    56.06 ± 0.66       59.03 ± 0.71   58.23 ± 0.72   60.02 ± 0.71
[1] C. Finn, P. Abbeel, and S. Levine, Model-agnostic meta-learning for fast adaptation of deep networks, 2017.
[2] V. Garcia and J. Bruna, Few-shot learning with graph neural networks, arXiv preprint arXiv:1711.04043, 2017.
[3] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, Generative adversarial nets, in Advances in Neural Information Processing Systems, vol. 27, 2014.
[4] M. A. Jamal and G.-J. Qi, Task agnostic meta-learning for few-shot learning, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 11719-11727.
[5] A. Khosla, N. Jayadevaprakash, B. Yao, and F.-F. Li, Novel dataset for fine-grained image categorization: Stanford dogs, in Proc. CVPR Workshop on Fine-Grained Visual Categorization (FGVC), vol. 2, 2011.
[6] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, 2014.
[7] G. Koch, R. Zemel, R. Salakhutdinov, et al., Siamese neural networks for one-shot image recognition, in ICML Deep Learning Workshop, vol. 2, Lille, 2015.
[8] J. Krause, M. Stark, J. Deng, and L. Fei-Fei, 3d object representations for fine-grained categorization, in Proceedings of the IEEE International Conference on Computer Vision Workshops, 2013, pp. 554-561.
[9] B. Lake, R. Salakhutdinov, J. Gross, and J. Tenenbaum, One shot learning of simple visual concepts, in Proceedings of the Annual Meeting of the Cognitive Science Society, vol. 33, 2011.
[10] H. Li, L. Yang, and F. Gao, More attentional local descriptors for few-shot learning, in Artificial Neural Networks and Machine Learning - ICANN 2020, Springer International Publishing, Cham, 2020, pp. 419-430.
[11] J. Li, K. Cheng, S. Wang, F. Morstatter, R. P. Trevino, J. Tang, and H. Liu, Feature selection: A data perspective, ACM Computing Surveys (CSUR), 50 (2017), pp. 1-45.
[12] W. Li, L. Wang, J. Xu, J. Huo, Y. Gao, and J. Luo, Revisiting local descriptor based image-to-class measure for few-shot learning, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 7260-7268.
[13] Y. Liu, J. Lee, M. Park, S. Kim, and Y. Yang, Transductive propagation network for few-shot learning, 2018.
[14] A. Mehrotra and A. Dukkipati, Generative adversarial residual pairwise networks for one shot learning, 2017.
[15] N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel, Meta-learning with temporal convolutions, 2017.
[16] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al., PyTorch: An imperative style, high-performance deep learning library, Advances in Neural Information Processing Systems, 32 (2019).
[17] H. Qi, M. Brown, and D. G. Lowe, Low-shot learning with imprinted weights, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 5822-5830.
[18] S. Ravi and H. Larochelle, Optimization as a model for few-shot learning, in International Conference on Learning Representations, 2017.
[19] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap, Meta-learning with memory-augmented neural networks, in International Conference on Machine Learning, 2016, pp. 1842-1850.
[20] T. Schaul and J. Schmidhuber, Metalearning, Scholarpedia, 5 (2010), p. 4650.
[21] E. Schwartz, L. Karlinsky, J. Shtok, S. Harary, M. Marder, A. Kumar, R. Feris, R. Giryes, and A. Bronstein, Delta-encoder: an effective sample synthesis method for few-shot object recognition, Advances in Neural Information Processing Systems, 31 (2018).
[22] J. Snell, K. Swersky, and R. Zemel, Prototypical networks for few-shot learning, Advances in Neural Information Processing Systems, 30 (2017).
[23] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales, Learning to compare: Relation network for few-shot learning, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1199-1208.
[24] R. Takahashi, T. Matsubara, and K. Uehara, RICAP: Random image cropping and patching data augmentation for deep CNNs, in Asian Conference on Machine Learning, PMLR, 2018, pp. 786-798.
[25] S. Thrun and L. Pratt, Learning to Learn, Springer, New York, 2012.
[26] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al., Matching networks for one shot learning, Advances in Neural Information Processing Systems, 29 (2016).
[27] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona, Caltech-UCSD Birds 200, 2010.
[28] Y. Zeng, Y. Yang, and L. Zhao, Pseudo nearest neighbor rule for pattern classification, Expert Systems with Applications, 36 (2009), pp. 3587-3595.
[29] C. Zhang, Y. Cai, G. Lin, and C. Shen, DeepEMD: Few-shot image classification with differentiable earth mover's distance and structured classifiers, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 12203-12213.
[30] H. Zhang, M. Cissé, Y. N. Dauphin, and D. Lopez-Paz, mixup: Beyond empirical risk minimization, 2017.
[31] R. Zhang, T. Che, Z. Ghahramani, Y. Bengio, and Y. Song, MetaGAN: An adversarial approach to few-shot learning, Advances in Neural Information Processing Systems, 31 (2018).
[32] Z. Zheng, X. Feng, H. Yu, X. Li, and M. Gao, BDLA: Bi-directional local alignment for few-shot learning, Applied Intelligence, 53 (2023), pp. 769-785.
On the Multi-Dimensional Schrödinger Operators with Point Interactions

23 Jan 2017

Nataly Goloshchapova

Keywords: Schrödinger operator, point interactions, self-adjoint extensions, nonnegative extensions, scattering matrix
We study two- and three-dimensional matrix Schrödinger operators with m ∈ N point interactions. Using the technique of boundary triplets and the corresponding Weyl functions, we complete and generalize the results obtained by other authors in this field. For instance, we parametrize all self-adjoint extensions of the initial minimal symmetric Schrödinger operator by abstract boundary conditions and characterize their spectra. In particular, we find a sufficient condition in terms of distances and intensities for the self-adjoint extension H^(3)_{α,X} to have m′ negative eigenvalues, i.e., κ−(H^(3)_{α,X}) = m′ ≤ m. We also give an explicit description of self-adjoint nonnegative extensions.
Introduction
Multi-dimensional Schrödinger operators with point interactions have been intensively studied in the last three decades (see [1, 3-8, 15, 17, 20, 24]). Starting from the fundamental paper [7] by Berezin and Faddeev, operators associated in L_2(R^3) with the differential expression

−Δ + Σ_{j=1}^m α_j δ(· − x_j), α_j ∈ R, X = {x_j}_{j=1}^m ⊂ R^3, (1.1)

have been investigated. It is well known that the corresponding minimal operator H is a closed nonnegative symmetric operator with deficiency indices n_±(H) = m (cf. [3]). In [3], the authors proposed to associate with Hamiltonian (1.1) a certain m-parametric family H^(3)_{α,X} of self-adjoint extensions of the operator H. They parametrized the extensions H^(3)_{α,X} in terms of their resolvents. The latter enabled them to describe the spectrum of H^(3)_{α,X}.
In the recent publications [8, 15], the concept of boundary triplets and the corresponding Weyl functions (see [9, 14] and also Section 2) was employed to investigate multi-dimensional Schrödinger operators with point interactions. In [6, 8, 15], two- and three-dimensional Schrödinger operators with one point interaction were studied.
In the present paper, we apply the boundary triplets and the corresponding Weyl functions approach to study matrix multi-dimensional Schrödinger operators with point interactions. Namely, in L_2(R^d, C^n) (d ∈ {2, 3}), we consider the following matrix Schrödinger differential expression with singular potential localized on the set X := {x_j}_{j=1}^m ⊂ R^d:

−Δ ⊗ I_n + Σ_{j=1}^m Λ_j δ(· − x_j), Λ_j ∈ R^{n×n}, j ∈ {1, .., m}. (1.3)
The minimal symmetric operator associated with this expression in L_2(R^d, C^n) is defined by

H := −Δ ⊗ I_n, dom(H) := { f ∈ W_2^2(R^d, C^n) : f(x_j) = 0, x_j ∈ X }. (1.4)
The matrix three-dimensional Schrödinger operator with one point interaction was studied in [8]. We generalize the results of [8] to the case of m point interactions and d = 2, 3. Namely, we construct a boundary triplet Π for H*. Moreover, we compute the corresponding Weyl function and the γ-field for Π, as well as the scattering matrix for the pair {H_0, H_Θ}. It is worth mentioning that the Weyl function coincides with the matrix-valued function appearing in the formulas for the resolvents of the self-adjoint extensions of H. In addition, we describe proper, self-adjoint, and nonnegative self-adjoint extensions of the initial minimal symmetric operator H and characterize their spectra. In particular, we show that the family H^(d)_{α,X} might be parametrized by means of diagonal matrices (see Remark 4.8). In the case n = 1, we establish numerous links between our results and the results obtained in the previous publications mentioned above.
In Theorem 3.1 we establish a connection between the result on uniqueness of the nonnegative self-adjoint extension of an arbitrary nonnegative symmetric operator A in [9, Proposition 10] and the recent result of V. Adamyan [1, Theorem 2.4]. In particular, we reprove the result on the uniqueness of the nonnegative self-adjoint extension of the minimal symmetric operator H in the case n = 1 and d = 2.
Let us briefly review the structure of the paper. Section 2 is introductory: it contains definitions and facts necessary for further exposition. In Section 3, we establish the uniqueness criterion mentioned above. In Sections 4 and 5, we investigate the matrix Schrödinger operators with point interactions in the cases d = 3 and d = 2, respectively. Namely, in Subsection 4.1 (resp., 5.1), we define a boundary triplet for H* and also compute the corresponding Weyl function and the γ-field. In what follows, R_z(A) denotes the resolvent (A − z)^{−1} of the operator A; dom(A), ker(A), ran(A) are the domain, the kernel, and the range of A, respectively; σ(A) and ρ(A) denote the spectrum and the resolvent set of A; N_z stands for the defect subspace of A corresponding to the eigenvalue z. We denote by C_0^∞(R^d \ X) the space of infinitely differentiable functions with compact support.
Preliminaries
Boundary triplets and Weyl functions
In this subsection, we recall basic notions and facts of the theory of boundary triplets (we refer the reader to [9,14] for a detailed exposition).
2.1.1 Linear relations, boundary triplets and proper extensions

1. The set C(H) of closed linear relations in H is the set of closed linear subspaces of H ⊕ H. Recall that

dom(Θ) = {f : (f, f′)^⊤ ∈ Θ}, ran(Θ) = {f′ : (f, f′)^⊤ ∈ Θ}, mul(Θ) = {f′ : (0, f′)^⊤ ∈ Θ}

are the domain, the range, and the multivalued part of Θ. A closed linear operator in H is identified with its graph, so that the set of closed linear operators in H is viewed as a subset of C(H). In particular, a linear relation Θ is an operator if and only if the multivalued part mul(Θ) is trivial. We recall that the adjoint relation Θ* ∈ C(H) of a linear relation Θ in H is defined by

Θ* = {(k, k′)^⊤ : (h′, k) = (h, k′) for all (h, h′)^⊤ ∈ Θ}.

The linear relation Θ is said to be symmetric if Θ ⊂ Θ* and self-adjoint if Θ = Θ*. The linear relation Θ is said to be nonnegative if (k′, k) ≥ 0 for all (k, k′)^⊤ ∈ Θ. For a symmetric relation Θ ⊆ Θ* in H the multivalued part mul(Θ) is the orthogonal complement of dom(Θ) in H. Setting H_op := dom(Θ) and H_∞ := mul(Θ), one verifies that Θ can be written as the direct orthogonal sum of a self-adjoint operator Θ_op in the subspace H_op and a "pure" relation Θ_∞ = {(0, f′)^⊤ : f′ ∈ mul(Θ)} in the subspace H_∞. Any closed linear relation admits the following representation (see, for instance, [21])

Θ = {(h, h′)^⊤ ∈ H ⊕ H : Ch − Dh′ = 0}, C, D ∈ [H]. (2.1)
Note that representation (2.1) is not unique.
Let A be a closed densely defined symmetric operator in the Hilbert space H with equal deficiency indices n_±(A) = dim ker(A* ± i) ≤ ∞. A triplet Π = {H, Γ_0, Γ_1}, where H is an auxiliary Hilbert space and Γ_0, Γ_1 : dom(A*) → H are linear mappings, is called a boundary triplet for A* if (i) the abstract Green identity

(A*f, g)_H − (f, A*g)_H = (Γ_1 f, Γ_0 g)_H − (Γ_0 f, Γ_1 g)_H

holds for all f, g ∈ dom(A*), and (ii) the mapping Γ := (Γ_0, Γ_1)^⊤ : dom(A*) → H ⊕ H is surjective.
Since n_+(A) = n_−(A), a boundary triplet Π = {H, Γ_0, Γ_1} for A* exists and is not unique [14]. Moreover, dim H = n_±(A) and dom(A) = dom(A*) ↾ ker(Γ_0) ∩ ker(Γ_1). A closed extension A_Θ of A is called proper if A ⊆ A_Θ ⊆ A*.

Proposition 2.2 ([9, 14]). Let A be a densely defined closed symmetric operator in H with equal deficiency indices and let Π = {H, Γ_0, Γ_1} be a boundary triplet for A*. Then the mapping

Ext_A ∋ A_Θ → Θ := Γ(dom(A_Θ)) = {(Γ_0 f, Γ_1 f)^⊤ : f ∈ dom(A_Θ)} (2.2)
establishes a bijective correspondence between the set C(H) and the set of closed proper extensions
A Θ ⊆ A * of A. Furthermore, (A Θ ) * = A Θ * holds for any Θ ∈ C(H). The extension A Θ in (2.2) is symmetric (self-adjoint) if and only if Θ is symmetric (self-adjoint).
Proposition 2.2 and representation (2.1) yield the following corollary: if Θ is the graph of an operator B ∈ C(H), then

A_Θ = A_B = A* ↾ dom(A_B), dom(A_B) = dom(A*) ↾ ker(Γ_1 − BΓ_0). (2.3)

Let A_0 := A* ↾ ker(Γ_0); it is a self-adjoint extension of A.

Definition 2.5 ([9]). The operator-valued functions γ(·) : ρ(A_0) → [H, H] and M(·) : ρ(A_0) → [H] defined by

γ(z) := (Γ_0 ↾ N_z)^{−1} (2.4)

and

M(z) := Γ_1 γ(z), z ∈ ρ(A_0), (2.5)

are called the γ-field and the Weyl function, respectively, corresponding to the boundary triplet Π.
For any Θ ∈ C(H) with ρ(A_Θ) ∩ ρ(A_0) ≠ ∅, the formula

R_z(A_Θ) = R_z(A_0) + γ(z)(Θ − M(z))^{−1} γ(z̄)*, z ∈ ρ(A_Θ) ∩ ρ(A_0), (2.6)

holds (see [9]). Formula (2.6) is a generalization of the well-known Krein formula for canonical resolvents. We emphasize that it is valid for any closed extension A_Θ ⊆ A* of A with nonempty resolvent set. According to the representation (2.3), it reads (see [21])

R_z(A_{C,D}) = R_z(A_0) + γ(z)(C − DM(z))^{−1} D γ(z̄)*, z ∈ ρ(A_{C,D}) ∩ ρ(A_0). (2.7)
Let now A be a closed densely defined nonnegative symmetric operator in the Hilbert space H. Among its nonnegative self-adjoint extensions, two extremal extensions, the Friedrichs extension A_F and the Krein extension A_K, deserve special emphasis (see [18]). An operator Ã is a nonnegative self-adjoint extension of A if and only if A_K ≤ Ã ≤ A_F in the sense of the corresponding quadratic forms.
Proposition 2.7 ([9, 10]). Let A be a densely defined nonnegative symmetric operator with finite deficiency indices in H, and let Π = {H, Γ_0, Γ_1} be a boundary triplet for A* such that A_0 ≥ 0. Let also M(·) be the corresponding Weyl function. Then the following assertions hold.

(i) There exist the strong resolvent limits M(0) := s-R-lim_{x↑0} M(x) and M(−∞) := s-R-lim_{x↓−∞} M(x).

(ii) M(0) (resp. M(−∞)) is a self-adjoint linear relation in H associated with the quadratic form semibounded from below (resp. from above)

t_0[f] = lim_{x↑0} (M(x)f, f) ≥ β (resp. t_{−∞}[f] = lim_{x↓−∞} (M(x)f, f) ≤ α)

with the domain

dom(t_0) = {f ∈ H : lim_{x↑0} |(M(x)f, f)| < ∞} = dom((M(0)_op − β)^{1/2}),
dom(t_{−∞}) = {f ∈ H : lim_{x↓−∞} |(M(x)f, f)| < ∞} = dom((α − M(−∞)_op)^{1/2}).

Moreover,

dom(A_K) = {f ∈ dom(A*) : (Γ_0 f, Γ_1 f)^⊤ ∈ M(0)} (resp. dom(A_F) = {f ∈ dom(A*) : (Γ_0 f, Γ_1 f)^⊤ ∈ M(−∞)}).

(iii) The extensions A_0 and A_K are disjoint (resp. A_0 and A_F are disjoint) if and only if M(0) ∈ C(H) (resp. M(−∞) ∈ C(H)). Moreover, dom(A_K) = dom(A*) ↾ ker(Γ_1 − M(0)Γ_0) (resp. dom(A_F) = dom(A*) ↾ ker(Γ_1 − M(−∞)Γ_0)).

(iv) A_F = A_0 (resp. A_K = A_0) if and only if

lim_{x↓−∞} (M(x)f, f) = −∞ (resp. lim_{x↑0} (M(x)f, f) = +∞), f ∈ H \ {0}. (2.8)

(v) If A_0 = A_F and dom(t_{Θop}) ⊂ dom(t_0), then the number of negative eigenvalues of the self-adjoint extension A_Θ of A equals the number of negative eigenvalues of the quadratic form t_{Θop} − t_0, i.e., κ−(A_Θ) = κ−(t_{Θop} − t_0). Moreover, if M(0) ∈ [H], then κ−(A_Θ) = κ−(Θ − M(0)).

(vi) In particular, A_Θ is nonnegative self-adjoint if and only if

dom(t_{Θop}) ⊂ dom(t_0) and t_{Θop} − t_0 ≥ 0. (2.9)

If M(0) ∈ [H], the inequality in (2.9) takes the form Θ − M(0) ≥ 0.
Scattering matrices
Let A be a densely defined closed symmetric operator in the separable Hilbert space H with equal finite deficiency indices and let Π = {H, Γ 0 , Γ 1 } be a boundary triplet for A * . Assume that A Θ is a self-adjoint extension of A with Θ = Θ * ∈ C(H). Since here dim H is finite, by (2.6),
(A Θ − z) −1 − (A 0 − z) −1 , z ∈ ρ(A Θ ) ∩ ρ(A 0 ),
is a finite rank operator and therefore the pair {A_Θ, A_0} forms a so-called complete scattering system, that is, the wave operators

W_±(A_Θ, A_0) := s-lim_{t→±∞} e^{itA_Θ} e^{−itA_0} P^{ac}(A_0)

exist and their ranges coincide with the absolutely continuous subspace H^{ac}(A_Θ) of A_Θ, cf. [16, 27]. Here P^{ac}(A_0) denotes the orthogonal projection onto the absolutely continuous subspace H^{ac}(A_0) of A_0. The scattering operator S(A_Θ, A_0) of the scattering system {A_Θ, A_0} is then defined by

S(A_Θ, A_0) := W_+(A_Θ, A_0)* W_−(A_Θ, A_0).

If we regard the scattering operator as an operator in H^{ac}(A_0), then S(A_Θ, A_0) is unitary, commutes with the absolutely continuous part A_0^{ac} := A_0 ↾ dom(A_0) ∩ H^{ac}(A_0) of A_0, and hence is unitarily equivalent to a multiplication operator induced by a family {S_Θ(z)} of unitary operators in a spectral representation of A_0^{ac}; this family is called the scattering matrix of the scattering system {A_Θ, A_0}. Define the family of spaces {H_z}_{z∈Λ_M} by H_z := ran(Im(M(z + i0))) ⊆ H, z ∈ Λ_M, where M(z + i0) = s-lim_{ε→0} M(z + iε) and Λ_M := {z ∈ R : M(z + i0) exists}. In the following theorem the scattering matrix is calculated in the case of a simple operator A. Recall that a symmetric operator A densely defined in H is said to be simple if there is no nontrivial subspace which reduces it to a self-adjoint operator.
Theorem 2.8 ([8]). Let A be as above, and let Π = {H, Γ_0, Γ_1} be a boundary triplet for A* with the corresponding Weyl function M(·). Assume also that Θ = Θ* ∈ C(H) and A_Θ is a self-adjoint extension of A. Then the scattering matrix {S_Θ(z)}_{z∈R} of the scattering system {A_Θ, A_0} admits the representation

S_Θ(z) = I_{H_z} + 2i (Im M(z))^{1/2} (Θ − M(z))^{−1} (Im M(z))^{1/2} ∈ [H_z]

for a.e. z ∈ Λ_M, where M(z) = M(z + i0).
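The algebraic part of this representation can be checked numerically: for any self-adjoint Θ and any matrix M with Im M ≥ 0 (as for a Weyl function value M(z + i0)), the expression I + 2i(Im M)^{1/2}(Θ − M)^{−1}(Im M)^{1/2} is unitary. A small NumPy sketch with toy matrices (the matrices and names below are illustrative, not taken from the paper):

```python
import numpy as np

def scattering_matrix(theta, m):
    """S = I + 2i*sqrt(Im M)*(Theta - M)^{-1}*sqrt(Im M) for a self-adjoint
    matrix theta and a Weyl-function value m with Im m >= 0."""
    im = (m - m.conj().T) / 2j              # Im M, a Hermitian matrix
    w, v = np.linalg.eigh(im)               # square root via eigendecomposition
    root = v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T
    return np.eye(len(m)) + 2j * root @ np.linalg.solve(theta - m, root)

theta = np.diag([1.0, -2.0])                        # toy self-adjoint Theta
m = np.array([[0.3 + 0.5j, 0.1], [0.1, -0.2 + 0.4j]])  # toy M with Im M > 0
s = scattering_matrix(theta, m)
assert np.allclose(s @ s.conj().T, np.eye(2), atol=1e-10)  # S is unitary
```

The unitarity follows from the resolvent identity (Θ − M)^{−1} − (Θ − M*)^{−1} = (Θ − M)^{−1}(2i Im M)(Θ − M*)^{−1}, which makes the cross terms cancel exactly.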
Abstract description of nonnegative self-adjoint extensions
Let A be a densely defined nonnegative closed symmetric operator in H. A complete description of all nonnegative self-adjoint extensions of A, as well as a uniqueness criterion for the nonnegative self-adjoint extension, was originally obtained by Krein in [18] (see also [2]). His results were generalized in numerous works (see, for instance, [1, 5, 9] and references therein). In particular, a description in terms of boundary triplets and the corresponding Weyl functions was obtained in [9, Theorem 4, Proposition 5] (cf. Proposition 2.7 in Section 2). One more uniqueness criterion has recently been presented by V. Adamyan [1, Theorem 2.4]. In this section, we show that this criterion might be obtained in the framework of the boundary triplets approach. We also find a description of all nonnegative self-adjoint extensions of A similar to that of Adamyan in the particular case A > µI > 0.
Theorem 3.1. Let Ã_0 be a nonnegative self-adjoint extension of a nonnegative closed symmetric operator A in H, and let P_{−1} be the orthogonal projection from H onto N_{−1}. Then Ã_0 is the unique nonnegative self-adjoint extension of A if and only if

lim_{ε↓0} (P_{−1}(Ã_0 + 1)(Ã_0 + ε)^{−1} ↾ N_{−1})^{−1} = 0, (3.1)

lim_{ε↓0} (P_{−1}(Ã_0 + 1)(εÃ_0 + I)^{−1} ↾ N_{−1})^{−1} = 0. (3.2)
Proof. It is well known (see, for instance, [9]) that for each pair of transversal extensions Ã_1 and Ã_0 there exists a boundary triplet Π = {H, Γ_0, Γ_1} such that ker Γ_i = dom(Ã_i), i ∈ {0, 1}.
In particular, such a boundary triplet may be constructed for the pair Ã_0 ≥ 0 and Ã_1, where dom(Ã_1) = dom(A) ∔ N_{−a}, a > 0. In this case, setting

H = N_{−a}, Γ_1 = P_{−a}(Ã_0 + a)P_1, Γ_0 = P_0, (3.3)

where P_{−a} is the orthogonal projection from H onto N_{−a} and P_1, P_0 are the projections from dom(A*) = dom(Ã_0) ∔ N_{−a} onto dom(Ã_0) and N_{−a}, respectively, we obtain a boundary triplet Π = {H, Γ_0, Γ_1} (see [9]). The corresponding Weyl function is

M_a(z) = (z + a)P_{−a}[I + (z + a)(Ã_0 − z)^{−1}] = (z + a)P_{−a}(Ã_0 + a)(Ã_0 − z)^{−1}. (3.4)
Put a = 1. Then conditions (2.8) take the form

t_0[f] = lim_{ε↓0} t^ε_0[f] := lim_{ε↓0} ((1 − ε)P_{−1}(Ã_0 + 1)(Ã_0 + ε)^{−1} f, f) = +∞, (3.5)

t_{−∞}[f] = lim_{ε↓0} t^ε_{−∞}[f] := lim_{ε↓0} ((ε − 1)P_{−1}(Ã_0 + 1)(εÃ_0 + I)^{−1} f, f) = −∞, f ∈ N_{−1}. (3.6)

Since t^ε_0[f] is a non-decreasing family of closed symmetric forms semibounded from below (0 < ε < 1), condition (3.5) is equivalent to (3.1) (cf. [16]). Analogously, since t^ε_{−∞}[f] is a non-increasing family of closed symmetric forms semibounded from above, condition (3.6) is equivalent to (3.2). Therefore, by Proposition 2.7(iv), the equality A_K = A_F and, consequently, the uniqueness of the nonnegative self-adjoint extension of A is equivalent to conditions (3.1)-(3.2) (see [18]).
Assume now that A > µI > 0 and Ã_0 = A_F in (3.3). Let also a = 1. According to Proposition 2.7(vi), the following description of all nonnegative self-adjoint extensions of A is valid.

Proposition 3.2. Let A and Ã_0 be as above. Then the set of all nonnegative self-adjoint extensions A_Y of A might be described as follows:

dom(A_Y) = dom(A*) ↾ ker(Y Γ_1 − Γ_0),

where Γ_0, Γ_1 are defined by (3.3).
Three-dimensional Schrödinger operator with point interactions
Consider in L_2(R^3, C^n) the matrix Schrödinger differential expression (1.3) (see [1, 3-5, 7, 8, 15, 24]). The minimal symmetric operator H associated with (1.3) is defined by (1.4). Notice that H is closed since for any x ∈ R^3 the linear functional δ_x : f → f(x) is continuous in W_2^2(R^3, C^n) due to the Sobolev embedding theorem. From the scalar case it might easily be derived that the deficiency indices of H are n_±(H) = mn.
Boundary triplet and Weyl function
In the following proposition we define a boundary triplet for the adjoint H * . For x = (x 1 , x 2 , x 3 ) ∈ R 3 we agree to write
r_j := |x − x_j| = ((x^1 − x^1_j)^2 + (x^2 − x^2_j)^2 + (x^3 − x^3_j)^2)^{1/2}.

Proposition 4.1. (i) The domain of H* is

dom(H*) = { f = Σ_{j=1}^m ( ξ_{0j} e^{−r_j}/r_j + ξ_{1j} e^{−r_j} ) + f_H : ξ_{0j}, ξ_{1j} ∈ C^n, f_H ∈ dom(H) }. (4.1)

(ii) A boundary triplet Π = {H, Γ_0, Γ_1} for H* is defined by

H = ⊕_{j=1}^m C^n, Γ_0 f := {Γ_{0j} f}_{j=1}^m = 4π { lim_{x→x_j} f(x)|x − x_j| }_{j=1}^m = 4π {ξ_{0j}}_{j=1}^m, (4.2)

Γ_1 f := {Γ_{1j} f}_{j=1}^m = { lim_{x→x_j} ( f(x) − ξ_{0j}/|x − x_j| ) }_{j=1}^m. (4.3)

(iii) The operator H_0 = H* ↾ ker(Γ_0) is self-adjoint with dom(H_0) = W_2^2(R^3, C^n).
Proof. (i) Without loss of generality, it can be assumed that n = 1.
Let us show that the functions f_j = e^{−r_j}/r_j and g_j = e^{−r_j} (j ∈ {1, .., m}) belong to dom(H*), i.e.,

(Hϕ, e^{−r_j}/r_j) = (ϕ, H*(e^{−r_j}/r_j)) and (Hϕ, e^{−r_j}) = (ϕ, H*(e^{−r_j})), ϕ ∈ C_0^∞(R^3 \ X). (4.4)

Let u(·), v(·) ∈ C^2(Ω) ∩ C^1(Ω). Then the second Green formula reads as follows
∫_Ω ( Δu(x) v(x) − u(x) Δv(x) ) dx = ∫_{∂Ω} ( (∂u(s)/∂n) v(s) − u(s) (∂v(s)/∂n) ) ds. (4.5)
By (4.5), we obtain

(Hϕ, e^{−r_j}/r_j) − (ϕ, H*(e^{−r_j}/r_j)) = lim_{r→∞} ∫_{B_r(x_j)\B_{1/r}(x_j)} ( −Δϕ · e^{−r_j}/r_j + ϕ Δ(e^{−r_j}/r_j) ) dx
= lim_{r→∞} ∫_{S_r(x_j)} ( −(∂ϕ/∂n) e^{−r_j}/r_j + ϕ (∂/∂n)(e^{−r_j}/r_j) ) ds + lim_{r→∞} ∫_{S_{1/r}(x_j)} ( (∂ϕ/∂n) e^{−r_j}/r_j − ϕ (∂/∂n)(e^{−r_j}/r_j) ) ds. (4.6)
It is easily seen that (∂/∂n)(e^{−r_j}/r_j) = −(e^{−r_j}/r_j)(1 + 1/r_j). Therefore the first integral on the right-hand side of (4.6) tends to 0 as r → ∞ since ϕ ∈ C_0^∞(R^3 \ X). Further,

lim_{r→∞} ∫_{S_{1/r}(x_j)} (∂ϕ/∂n)(e^{−r_j}/r_j) ds = lim_{r→∞} 4π (∂ϕ/∂n)(x*) e^{−1/r}/r = 0, x* ∈ S_{1/r}(x_j),

−(1/4π) lim_{r→∞} ∫_{S_{1/r}(x_j)} ϕ (∂/∂n)(e^{−r_j}/r_j) ds = lim_{r→∞} (e^{−1/r}(1 + r)/r) ϕ(x′) = lim_{x′→x_j} ϕ(x′) = ϕ(x_j) = 0, x′ ∈ S_{1/r}(x_j).
Thus, the first equality of (4.4) holds. The second one can be proved analogously. It is not difficult to show that the functions f j and g j are linearly independent and dim(span{f j , g j } m j=1 ) = 2mn. Since span{f j , g j } m j=1 ∩ dom(H) = 0 and dim(dom(H * )/ dom(H)) = 2mn, the domain dom(H * ) takes the form (4.1).
(ii) Let f, g ∈ dom(H * ). By (4.1), we have
f = Σ_{k=1}^m f_k + f_H, f_k = ξ_{0k} e^{−r_k}/r_k + ξ_{1k} e^{−r_k}; g = Σ_{k=1}^m g_k + g_H, g_k = η_{0k} e^{−r_k}/r_k + η_{1k} e^{−r_k},

where f_H, g_H ∈ dom(H), and ξ_{0k}, ξ_{1k}, η_{0k}, η_{1k} ∈ C^n, k ∈ {1, .., m}. Applying (4.2)-(4.3) to f, g ∈ dom(H*), we obtain
Γ_0 f = 4π{ξ_{0j}}_{j=1}^m, Γ_1 f = { −ξ_{0j} + Σ_{k≠j} ξ_{0k} e^{−|x_j−x_k|}/|x_j − x_k| + Σ_{k=1}^m ξ_{1k} e^{−|x_j−x_k|} }_{j=1}^m,

Γ_0 g = 4π{η_{0j}}_{j=1}^m, Γ_1 g = { −η_{0j} + Σ_{k≠j} η_{0k} e^{−|x_j−x_k|}/|x_j − x_k| + Σ_{k=1}^m η_{1k} e^{−|x_j−x_k|} }_{j=1}^m. (4.7)
It is easily seen that
(H*f, g) − (f, H*g) = Σ_{j=1}^m Σ_{k=1}^m [ (ξ_{0j} H*(e^{−r_j}/r_j), η_{1k} e^{−r_k}) − (ξ_{0j} e^{−r_j}/r_j, η_{1k} H*(e^{−r_k})) + (ξ_{1j} H*(e^{−r_j}), η_{0k} e^{−r_k}/r_k) − (ξ_{1j} e^{−r_j}, η_{0k} H*(e^{−r_k}/r_k)) ].
Using the second Green formula (4.5), we get
(H*(e^{−r_j}/r_j), e^{−r_k}) − (e^{−r_j}/r_j, H*(e^{−r_k})) = lim_{r→∞} [ −∫_{B_r(x_j)\B_{1/r}(x_j)} Δ(e^{−r_j}/r_j) e^{−r_k} dx + ∫_{B_r(x_j)\B_{1/r}(x_j)} (e^{−r_j}/r_j) Δ(e^{−r_k}) dx ] = −4π e^{−|x_k−x_j|}. (4.8)
Finally, by (4.7) and (4.8),
(H*f, g) − (f, H*g) = 4π Σ_{j=1}^m Σ_{k=1}^m ( −ξ_{0j} η_{1k} e^{−|x_j−x_k|} + ξ_{1j} η_{0k} e^{−|x_j−x_k|} ) = Σ_{j=1}^m [ (Γ_{1j} f, Γ_{0j} g) − (Γ_{0j} f, Γ_{1j} g) ] = (Γ_1 f, Γ_0 g) − (Γ_0 f, Γ_1 g).
Thus, the Green identity is satisfied. It follows from (4.1) that the mapping Γ = (Γ_0, Γ_1)^⊤ is surjective. Namely, let (h_0, h_1)^⊤ ∈ H ⊕ H, where h_0 = {h_0j}_{j=1}^m, h_1 = {h_1j}_{j=1}^m are vectors from ⊕_{j=1}^m C^n. If f ∈ dom(H*), then, by (4.1), f = f_H + Σ_{j=1}^m (ξ_{0j} e^{-r_j}/r_j + ξ_{1j} e^{-r_j}). Let us put
\[
\xi_0 := \{\xi_{0j}\}_{j=1}^m,\quad \xi_1 := \{\xi_{1j}\}_{j=1}^m,\quad
E_0 := \Bigl[-\frac{e^{-|x_k-x_j|}}{|x_k-x_j| - \delta_{kj}}\Bigr]_{j,k=1}^m,\quad
E_1 := \bigl[e^{-|x_k-x_j|}\bigr]_{k,j=1}^m, \tag{4.9}
\]
where δ_{kj} stands for the Kronecker symbol. Therefore if ξ_0 = (1/4π)h_0 and ξ_1 = (E_1 ⊗ I_n)^{-1}(h_1 + (1/4π)(E_0 ⊗ I_n)h_0), then Γ_0 f = h_0 and Γ_1 f = h_1. Hence assertion (ii) is proved.
(iii) Combining (1.4) with (4.1), we obtain that any f ∈ W²₂(R³, C^n) admits the representation f = Σ_{j=1}^m ξ_{1j} e^{-r_j} + f_H with Σ_{k=1}^m ξ_{1k} e^{-|x_k-x_j|} = f(x_j), which proves (iii).
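The surjectivity argument above reduces to a finite linear solve: given targets (h_0, h_1), one sets ξ_0 = h_0/(4π) and solves (E_1 ⊗ I_n)ξ_1 = h_1 + (1/4π)(E_0 ⊗ I_n)h_0. The following numerical sketch (scalar case n = 1, with hypothetical interaction centers) builds E_0 and E_1 from (4.9) and checks that E_1 is positive definite for these points, so the solve is well posed; e^{-|x|} is a strictly positive definite function on R³, which explains why this works for any distinct points.

```python
import numpy as np

# Hypothetical interaction centers in R^3 (scalar case n = 1)
X = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.5, 0.0],
              [0.7, 0.3, 2.0]])
m = len(X)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances

E1 = np.exp(-D)                      # E_1 = [e^{-|x_k - x_j|}]
E0 = -np.exp(-D) / (D - np.eye(m))   # E_0 = [-e^{-|x_k-x_j|}/(|x_k-x_j| - delta_kj)]

# E_1 is symmetric positive definite here, so the solve for xi_1 is well posed
eigs = np.linalg.eigvalsh(E1)
print("min eigenvalue of E_1:", eigs.min())

h0 = np.array([1.0, -2.0, 0.5, 3.0])   # arbitrary target boundary values
h1 = np.array([0.0, 1.0, 1.0, -1.0])
xi0 = h0 / (4 * np.pi)
xi1 = np.linalg.solve(E1, h1 + E0 @ h0 / (4 * np.pi))
# sanity check: plugging xi_1 back reproduces the target h_1
assert np.allclose(E1 @ xi1 - E0 @ h0 / (4 * np.pi), h1)
```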
In what follows √ · stands for the branch of the corresponding multifunction defined on C \ R + by the condition √ 1 = 1.
(i) The Weyl function M(·) corresponding to the boundary triplet Π has the form M(z) = ⊕_{s=1}^n M_s(z),
\[
M_s(z) = \Bigl[\frac{i\sqrt z}{4\pi}\,\delta_{jk} + G_{\sqrt z}(x_j - x_k)\Bigr]_{j,k=1}^m,\qquad z\in\mathbb C_+, \tag{4.10}
\]
where
\[
G_{\sqrt z}(x) = \begin{cases} \dfrac{e^{i\sqrt z\,|x|}}{4\pi|x|}, & x\neq 0,\\[1mm] 0, & x = 0,\end{cases}
\]
and δ_{jk} stands for the Kronecker symbol;
(ii) the corresponding γ(·)-field is
\[
\gamma(z)\xi = \sum_{j=1}^m \xi_j\,\frac{e^{i\sqrt z\,r_j}}{4\pi r_j},\qquad \xi = \{\xi_j\}_{j=1}^m,\ \xi_j\in\mathbb C^n,\ z\in\mathbb C_+. \tag{4.11}
\]
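As a concrete sanity check on (4.10), the scalar (n = 1) Weyl function can be assembled numerically for a hypothetical pair of points. Being a Weyl function, M(z) is symmetric (it depends only on the distances |x_j − x_k|) and has positive semidefinite imaginary part for z in the open upper half-plane (the Nevanlinna property); the sketch below verifies both at one sample point.

```python
import numpy as np

def weyl_M(z, X):
    """Scalar (n = 1) Weyl function (4.10):
    M(z) = [ i*sqrt(z)/(4*pi) * delta_jk + G_sqrt(z)(x_j - x_k) ]."""
    X = np.asarray(X, dtype=float)
    m = len(X)
    sz = np.sqrt(complex(z))          # branch with sqrt(1) = 1, Im sqrt(z) > 0 on C_+
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    M = np.zeros((m, m), dtype=complex)
    off = ~np.eye(m, dtype=bool)
    M[off] = np.exp(1j * sz * D[off]) / (4 * np.pi * D[off])  # G_sqrt(z)(x_j - x_k)
    M[np.eye(m, dtype=bool)] = 1j * sz / (4 * np.pi)
    return M

X = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]   # two hypothetical interaction centers
z = 0.3 + 1.0j                            # a point in the upper half-plane
M = weyl_M(z, X)
assert np.allclose(M, M.T)                # symmetry
ImM = (M - M.conj().T) / 2j               # matrix imaginary part
print("eigenvalues of Im M(z):", np.linalg.eigvalsh(ImM))  # all positive
```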
Proof. Let f_z ∈ N_z, z ∈ C_+. Then f_z = Σ_{j=1}^m a_j e^{i√z r_j}/(4π r_j), a_j ∈ C^n (see [3, chapter II.1]). Applying Γ_0 and Γ_1 to f_z, we get
\[
\Gamma_0 f_z = \{a_j\}_{j=1}^m,\qquad
\Gamma_1 f_z = \Bigl\{a_j\,\frac{i\sqrt z}{4\pi} + \sum_{k\neq j} a_k\,\frac{e^{i\sqrt z\,|x_j-x_k|}}{4\pi|x_j-x_k|}\Bigr\}_{j=1}^m. \tag{4.12}
\]
Therefore (4.10) is proved (see Definition 2.5). Finally, combining (4.12) with (2.5), we arrive at (4.11).
(iv) The domain of the Krein extension H_K is
\[
\operatorname{dom}(H_K) = \Bigl\{ f = \sum_{j=1}^m \xi_{0j}\frac{e^{-r_j}}{r_j} + \sum_{k,j=1}^m k_{jk}\,\xi_{0k}\,e^{-r_j} + f_H :\ \xi_{0j}\in\mathbb C^n,\ f_H\in\operatorname{dom}(H)\Bigr\}, \tag{4.14}
\]
with
\[
K = (k_{kj})_{k,j=1}^m = (E_1\otimes I_n)^{-1}\bigl(4\pi M(0) + E_0\otimes I_n\bigr),\qquad
M(0) = I_n\otimes\bigl[G_0(x_j-x_k)\bigr]_{j,k=1}^m = I_n\otimes\Bigl[\frac{1-\delta_{jk}}{4\pi|x_k-x_j| + \delta_{jk}}\Bigr]_{j,k=1}^m.
\]
(vi) The Krein formula for canonical resolvents takes the form
\[
R_z(H_{C,D}) = R_z(H_0) + \gamma(z)\bigl(C - DM(z)\bigr)^{-1} D\,\gamma(z)^*,\qquad z\in\rho(H_{C,D})\setminus\mathbb R_+, \tag{4.15}
\]
where the γ(·)-field is defined by (4.11) and R_z(H_0) is an integral operator with the kernel G_{\sqrt z}(x, x') = \frac{e^{i\sqrt z\,|x-x'|}}{4\pi|x-x'|}\otimes I_n.
Proof. Note that M(0) = I_n ⊗ [G_0(x_j − x_k)]_{j,k=1}^m = I_n ⊗ [(1 − δ_{jk})/(4π|x_k − x_j| + δ_{jk})]_{j,k=1}^m is an operator. Therefore the operators H_0 and H_K are disjoint and, by Proposition 2.7(iii), formula (4.14) is valid.
(v) follows from Proposition 2.7(vi). Finally, (2.7) and the formula for the kernel of (H_0 − z)^{-1} (see [3, chapter I.1]) yield (vi).
In [3], it is noted that, in the case n = 1, according to the extension theory, there is an m²-parametric family of self-adjoint extensions of the minimal operator H defined by (1.2). However, in [3], only a certain m-parametric family H^{(3)}_{α,X} associated with the differential expression (1.1) was considered. In terms of the boundary triplet, the domain of H^{(3)}_{α,X} is
dom(H^{(3)}_{α,X}) = dom(H*) ↾ ker(Γ_1 − B_αΓ_0), B_α = diag(α_1, .., α_m), α_k ∈ R, k ∈ {1, .., m}. (4.16)
Note also that the family H^{(3)}_{α,X} might be parametrized in the framework of the boundary triplet approach.
Remark 4.8. In [5], Yu. Arlinskii and E. Tsekanovskii described all nonnegative self-adjoint extensions of H in the case n = 1 (see [5, Theorem 5.1]). It should be noted that a description of all nonnegative self-adjoint extensions of H close to that contained in [5] might be obtained in the framework of our scheme. It will be published elsewhere.
Spectrum of the self-adjoint extensions of the minimal Schrödinger operator and scattering matrix
In this subsection we describe the point spectrum of the self-adjoint extensions of H and complete some results from [3] in this direction. In particular (see Theorem 4.9), the following equivalence holds:
\[
z \in \sigma_p(H_\Theta)\cap\mathbb R_- \iff 0 \in \sigma_p(C - DM(z)),
\]
and the corresponding eigenfunction ψ_z has the form
\[
\psi_z = \sum_{j=1}^m c_j\,\frac{e^{i\sqrt z\,r_j}}{4\pi r_j},
\]
where M(0) is defined by (4.4).
Next we find sufficient conditions for the inequality κ_−(H^{(3)}_{α,X}) ≥ m′ (with m′ ≤ m), as well as for the equality κ_−(H^{(3)}_{α,X}) = m′, to hold by applying the following Gerschgorin theorem.
Let H^{(3)}_{α,X} be defined by (4.16) and let K = {k_i}_{i=1}^{m′} be a subset of {1, .., m}.
(i) Suppose that
\[
\alpha_{k_i} < -\sum_{j\neq k_i} \frac{1}{4\pi|x_j - x_{k_i}|}\quad\text{for } k_i \in K. \tag{4.17}
\]
Then κ_−(H^{(3)}_{α,X}) ≥ m′.
(ii) If, in addition, α_k ≥ Σ_{j≠k} 1/(4π|x_j − x_k|) for k ∉ K, then κ_−(H^{(3)}_{α,X}) = m′.
Proof. (i) Combining Theorem 4.9(ii) with (4.4), we get
\[
\kappa_-(H^{(3)}_{\alpha,X}) = \kappa_-(B_\alpha - M(0)) = \kappa_-\Bigl(\Bigl[\alpha_k\delta_{jk} - \frac{1-\delta_{jk}}{4\pi|x_j-x_k| + \delta_{jk}}\Bigr]_{j,k=1}^m\Bigr).
\]
Without loss of generality we may assume that K = {1, .., m′}. Denote by B_{m′} the upper left m′ × m′ corner of the matrix B_α − M(0). According to the minimax principle,
\[
\kappa_-(H^{(3)}_{\alpha,X}) = \kappa_-(B_\alpha - M(0)) \ge \kappa_-(B_{m'}). \tag{4.18}
\]
Applying the Gerschgorin theorem to the matrix B_{m′} and using (4.17), we get κ_−(B_{m′}) = m′, and hence κ_−(H^{(3)}_{α,X}) = κ_−(B_α − M(0)) = m′.
Remark 4.12. Note that the idea of applying Gerschgorin's theorem is borrowed from [22]. This idea was also used in [13].
The scattering matrix of Theorem 4.13 takes the form
\[
\widetilde S_\Theta(x) = I_{nm} + 2i\,S(x)^{1/2}\Bigl(\Theta - I_n\otimes\Bigl[\frac{i\sqrt x}{4\pi}\delta_{jk} + G_{\sqrt x}(x_j - x_k)\Bigr]_{j,k=1}^m\Bigr)^{-1} S(x)^{1/2},\qquad x\in\mathbb R_+,
\]
\[
S(x) = I_n\otimes\Bigl[\frac{\sqrt x}{4\pi}\delta_{jk} + S_{\sqrt x}(x_j - x_k)\Bigr]_{j,k=1}^m,\qquad
S_{\sqrt x}(t) = \begin{cases}\dfrac{\sin(\sqrt x\,|t|)}{4\pi|t|}, & t\neq 0,\\ 0, & t=0.\end{cases}
\]
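The count κ_−(H^{(3)}_{α,X}) = κ_−(B_α − M(0)) is easy to illustrate numerically. The sketch below (scalar case, hypothetical points and coupling constants) chooses one α_k below the threshold in (4.17) and the others above it, and then counts the negative eigenvalues of B_α − M(0), which should be exactly m′ = 1 by Proposition 4.11.

```python
import numpy as np

# Three hypothetical centers in R^3 (scalar case n = 1)
X = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
m = len(X)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# M(0) has entries (1 - delta_jk) / (4*pi*|x_j - x_k|)
M0 = np.zeros((m, m))
off = ~np.eye(m, dtype=bool)
M0[off] = 1.0 / (4 * np.pi * D[off])

row = M0.sum(axis=1)                 # thresholds sum_{j != k} 1/(4*pi*|x_j - x_k|)
alpha = np.array([-1.0, 1.0, 1.0])   # alpha_1 < -row[0]; alpha_2, alpha_3 >= row[k]
assert alpha[0] < -row[0] and alpha[1] >= row[1] and alpha[2] >= row[2]

A = np.diag(alpha) - M0              # B_alpha - M(0)
kappa_minus = int((np.linalg.eigvalsh(A) < 0).sum())
print("kappa_-(B_alpha - M(0)) =", kappa_minus)
```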
Two-dimensional Schrödinger operator with point interactions
In this section, we consider in L²(R², C^n) the matrix Schrödinger differential expression (1.3) (see [1,3,4,15]). The minimal symmetric operator H associated with (1.3) in L²(R², C^n) is defined by (1.4). As above, the operator H is closed and the deficiency indices of H are n_±(H) = nm.
Boundary triplet and Weyl function
In the following proposition we describe a boundary triplet for the adjoint operator H*. Let us denote
\[
r_j := |x - x_j| = \sqrt{(x^1 - x^1_j)^2 + (x^2 - x^2_j)^2},\qquad x = (x^1, x^2) \in \mathbb R^2.
\]
(i) The domain of H* is
\[
\operatorname{dom}(H^*) = \Bigl\{ f = \sum_{j=1}^m \bigl(\xi_{0j}\,e^{-r_j}\ln(r_j) + \xi_{1j}\,e^{-r_j}\bigr) + f_H :\ \xi_{0j}, \xi_{1j}\in\mathbb C^n,\ f_H\in\operatorname{dom}(H)\Bigr\}. \tag{5.1}
\]
(ii) The boundary triplet Π = {H, Γ_0, Γ_1} for H* might be defined by
\[
\mathcal H = \oplus_{j=1}^m\mathbb C^n,\qquad
\Gamma_0 f := \{\Gamma_{0j}f\}_{j=1}^m = -2\pi\Bigl\{\lim_{x\to x_j}\frac{f(x)}{\ln|x - x_j|}\Bigr\}_{j=1}^m = 2\pi\{\xi_{0j}\}_{j=1}^m, \tag{5.2}
\]
\[
\Gamma_1 f := \{\Gamma_{1j}f\}_{j=1}^m = \bigl\{\lim_{x\to x_j}\bigl(f(x) - \ln|x - x_j|\,\xi_{0j}\bigr)\bigr\}_{j=1}^m,\qquad f\in\operatorname{dom}(H^*). \tag{5.3}
\]
(iii) The operator H_0 = H* ↾ ker(Γ_0) is self-adjoint with dom(H_0) = W²₂(R², C^n).
Proof. (i) It is well known (see [3,4]) that
dom(H * ) = f ∈ L 2 (R 2 , C n ) ∩ W 2 2,loc (R 2 \{X}, C n ) : ∆f ∈ L 2 (R 2 , C n ) .
Obviously, the functions f_j = η_j e^{-r_j} ln(r_j) and g_j = μ_j e^{-r_j} (η_j, μ_j ∈ C^n, j ∈ {1, .., m}) belong to dom(H*). Their linear span is a 2mn-dimensional subspace of dom(H*) that has trivial intersection with dom(H). Since dim(dom(H*)/dom(H)) = 2mn, the domain dom(H*) takes the form (5.1).
(ii) The second Green identity is verified similarly to the 3D case. From (5.1) it follows that the mapping Γ = (Γ_0, Γ_1)^⊤ is surjective. Namely, let (h_0, h_1)^⊤ ∈ H ⊕ H, where h_0 = {h_0j}_{j=1}^m, h_1 = {h_1j}_{j=1}^m are vectors from ⊕_{j=1}^m C^n. If f ∈ dom(H*), then, by (5.1), f = f_H + Σ_{j=1}^m (ξ_{0j} e^{-r_j} ln(r_j) + ξ_{1j} e^{-r_j}). Let us put
\[
\xi_0 := \{\xi_{0j}\}_{j=1}^m,\quad \xi_1 := \{\xi_{1j}\}_{j=1}^m,\quad
E_0 := \bigl[-e^{-|x_k-x_j|}\ln(|x_k-x_j| + \delta_{kj})\bigr]_{j,k=1}^m,\quad
E_1 := \bigl[e^{-|x_k-x_j|}\bigr]_{k,j=1}^m. \tag{5.4}
\]
Therefore if ξ_0 = (1/2π)h_0 and ξ_1 = (E_1 ⊗ I_n)^{-1}(h_1 + (1/2π)(E_0 ⊗ I_n)h_0), then Γ_0 f = h_0 and Γ_1 f = h_1. Thereby, (ii) is proved.
(iii) From (1.4) and (5.1) it follows that any function f ∈ W²₂(R², C^n) admits the representation f = Σ_{j=1}^m ξ_{1j} e^{-r_j} + f_H, where Σ_{k=1}^m ξ_{1k} e^{-|x_k-x_j|} = f(x_j), which proves (iii).
(i) The Weyl function M(·) corresponding to the boundary triplet Π has the form M(z) = ⊕_{s=1}^n M_s(z),
\[
M_s(z) = \Bigl[\frac{1}{2\pi}\Bigl(\psi(1) - \ln\frac{\sqrt z}{2i}\Bigr)\delta_{jk} + G_{\sqrt z}(x_j - x_k)\Bigr]_{j,k=1}^m,\qquad z\in\mathbb C_+, \tag{5.5}
\]
where ψ(1) = Γ′(1)/Γ(1),
\[
G_{\sqrt z}(x) = \begin{cases} (i/4)\,H^{(1)}_0(\sqrt z\,|x|), & x\neq 0,\\ 0, & x = 0,\end{cases}
\]
and H^{(1)}_0(·) denotes the Hankel function of the first kind and order 0;
(ii) the corresponding γ(·)-field is
\[
\gamma(z)\xi = \sum_{j=1}^m \xi_j\,\frac{i}{4}H^{(1)}_0(\sqrt z\,r_j),\qquad \xi = \{\xi_j\}_{j=1}^m,\ \xi_j\in\mathbb C^n,\ z\in\mathbb C_+. \tag{5.6}
\]
Proof. Let f_z ∈ N_z, z ∈ C_+. Then, according to [3, chapter II.4],
\[
f_z := \sum_{j=1}^m a_j\,\frac{i}{4}H^{(1)}_0(\sqrt z\,r_j),\qquad a_j\in\mathbb C^n.
\]
It is not difficult to see that, by formulas (9.01) in [23, Section 2, §9] and (5.03), (5.07) in [23, Section 7, §5], the function H^{(1)}_0 admits the asymptotic expansion
\[
H^{(1)}_0(z) = 1 + \frac{2i}{\pi}\Bigl(\ln\frac z2 - \psi(1)\Bigr) + o(z),\qquad z\to 0. \tag{5.7}
\]
Applying Γ_0 and Γ_1 to f_z and taking into account (5.7), we get
\[
\Gamma_0 f_z = \{a_j\}_{j=1}^m,\qquad
\Gamma_1 f_z = \Bigl\{\Bigl(\frac{\psi(1)}{2\pi} + \frac i4 - \frac{\ln(\sqrt z/2)}{2\pi}\Bigr)a_j + \sum_{k\neq j}\frac i4 H^{(1)}_0(\sqrt z\,|x_k - x_j|)\,a_k\Bigr\}_{j=1}^m. \tag{5.8}
\]
Further, combining (5.8) with (2.5), we get (5.5) and (5.6).
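The small-argument expansion (5.7) can be checked numerically. The sketch below builds H^{(1)}_0 = J_0 + iY_0 from the standard power series of J_0 and Y_0 (no special-function library assumed) and compares it with 1 + (2i/π)(ln(z/2) − ψ(1)), using ψ(1) = −γ with γ the Euler-Mascheroni constant; the discrepancy at small z is of the size of the o(z) remainder.

```python
import math, cmath

GAMMA = 0.5772156649015329           # Euler-Mascheroni constant, psi(1) = -GAMMA

def hankel1_0(z, terms=12):
    """H_0^(1)(z) = J_0(z) + i Y_0(z) via the standard small-argument power series."""
    J0 = sum((-1) ** k * (z / 2) ** (2 * k) / math.factorial(k) ** 2
             for k in range(terms))
    Hk, S = 0.0, 0.0
    for k in range(1, terms):
        Hk += 1.0 / k                # harmonic number H_k
        S += (-1) ** (k + 1) * Hk * (z / 2) ** (2 * k) / math.factorial(k) ** 2
    Y0 = (2 / math.pi) * ((math.log(z / 2) + GAMMA) * J0 + S)
    return J0 + 1j * Y0

z = 1e-3
exact = hankel1_0(z)
leading = 1 + (2j / math.pi) * (cmath.log(z / 2) + GAMMA)  # 1 + (2i/pi)(ln(z/2) - psi(1))
print(abs(exact - leading))          # small o(z) remainder
```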
Proper extensions of the minimal Schrödinger operator H
As in previous section, we describe proper extensions of the minimal operator H. (v) Krein formula for canonical resolvents takes the form
dom(H F ) = dom(H 0 ) = W 2 2 (R 2 , C n ). (iv) The domain dom(H K ) of the Krein extension H K is dom(H K ) = dom(H 0 ), m=1; {f ∈ dom(H * ) : (Γ 0 f, Γ 1 f ) ⊤ ∈ M(0)}, m > 1. where dom(M(0) op ) = ⊕ n s=1 dom(M s (0) op ), dom(M s (0) op ) = ξ = {ξ j } m j=1 ∈ C m : m j=1 ξ j = 0 ,R z (H C,D ) = R z (H 0 ) + γ(z) C − DM(z) −1 Dγ(z) * , z ∈ ρ(H C,D ) \ R + ,
where γ(·)-field is defined by (5.6) and R z (H 0 ) is an integral operator with the kernel G √ z (x,
x ′ ) = i/4H (1) 0 ( √ z|x − x ′ |) ⊗ I n .
Proof. By Proposition 2.7(iv), H F = H K = H 0 . Furthermore, from the equality H K = H F it follows that operator H has no other nonnegative self-adjoint extensions (see [18]).
Consider the case m > 1. For simplicity suppose that n = 1. Let ξ = {ξ j } m j=1 ∈ C m . Using asymptotic expansion (5.7), we get
\[
(M(z)\xi, \xi) \sim \frac{1}{2\pi}\Bigl(\psi(1) - \ln\frac{\sqrt z}{2i}\Bigr)\sum_{j=1}^m|\xi_j|^2 + \sum_{k\neq j}\frac{1}{2\pi}\Bigl(\psi(1) - \ln\frac{\sqrt z}{2i} - \ln|x_k - x_j|\Bigr)\xi_j\overline{\xi_k}
= \frac{1}{2\pi}\Bigl(\psi(1) - \ln\frac{\sqrt z}{2i}\Bigr)\Bigl|\sum_{j=1}^m\xi_j\Bigr|^2 - \frac{1}{2\pi}\sum_{k\neq j}\ln(|x_k - x_j|)\,\xi_j\overline{\xi_k},\qquad z\to 0. \tag{5.11}
\]
From (5.11) it easily follows that the limit lim_{x↑0}(M(x)ξ, ξ) is finite if and only if Σ_{j=1}^m ξ_j = 0. Applying Proposition 2.7(ii) completes the proof of (iv). Combining (2.7) with the formula for the kernel of (H_0 − z)^{-1} (see [3, chapter I.5]), we obtain (v).
As in the case of the 3D Schrödinger operator, only a certain m-parametric family H^{(2)}_{α,X} associated in L²(R²) with the differential expression (1.3) was considered in [3]. The domain of H^{(2)}_{α,X} has the following representation: dom(H^{(2)}_{α,X}) = dom(H*) ↾ ker(Γ_1 − B_αΓ_0), B_α = diag(α_1, .., α_m), α_k ∈ R, k ∈ {1, .., m}.
Note that in the case d = 2 it is more difficult to describe the nonnegative self-adjoint extensions of H, since M(0) turns out to be a relation with a nontrivial multivalued part. As above, assume that n = 1. It is easily seen that
\[
\operatorname{dom}(\widetilde H) = \Bigl\{ f = c\sum_{j=1}^m \widetilde\xi_j\,e^{-r_j} + f_H :\ \widetilde\xi = \{\widetilde\xi_j\}_{j=1}^m = E_1^{-1}e_{mul},\ c\in\mathbb C,\ f_H\in\operatorname{dom}(H)\Bigr\},
\]
where E_1 is defined by (5.4). According to [26], we have H = H_1 ⊕ H_2, where H_1 = dom(M(0)_op) and H_2 = mul(M(0)).
Let π_j, j ∈ {1, 2}, denote the orthogonal projections onto H_j. Then the Weyl function M(·) defined by (5.5) admits the representation M(·) = (M_{kj}(·))_{k,j=1}^2 with M_{kj}(·) = π_k M(·) ↾ H_j, k, j ∈ {1, 2}.
Remark 5.6. (i) The uniqueness of the nonnegative self-adjoint extension of the 2D operator H, in the case n = m = 1, was established in [12] and [1].
(ii) In [1], V. Adamyan noted that, in the case m > 1 and n = 1, the operator H has more than one nonnegative self-adjoint extension.
Spectrum of the self-adjoint extensions of the minimal Schrödinger operator and scattering matrix
The point spectrum of the self-adjoint extensions of H is described in the following theorem (Theorem 5.7); the corresponding eigenfunction ψ_z has the form
\[
\psi_z = \sum_{j=1}^m c_j\,\frac{i}{4}H^{(1)}_0(\sqrt z\,r_j),
\]
where (c_1, .., c_m)^⊤ is an eigenvector of the relation Θ − M(z) corresponding to the zero eigenvalue.
As in the case of the 3D Schrödinger operator, the 2D Schrödinger operator H is not simple. Arguing as above, we obtain
Hamiltonians with point interactions −Δ + Σ_{j=1}^m α_j δ(· − x_j), α_j ∈ R, m ∈ N, (1.1) have been treated in the framework of the extension theory. Namely, the authors proposed, in the case of one point interaction, to consider all self-adjoint extensions of the following minimal Schrödinger operator
H = −Δ ↾ dom(H), dom(H) := {f ∈ W²₂(R³) : f(x_j) = 0, j ∈ {1, .., m}} (1.2)
as realizations of expression (1.1). The operators H^{(d)}_{α,X}, d = 2, 3, were studied in [3, chapters II.1, II.4].
Two proper extensions A 1 and A 2 of A are called disjoint if dom( A 1 ) ∩ dom( A 2 ) = dom(A) and transversal if, in addition, dom( A 1 ) ∔ dom( A 2 ) = dom(A * ) . The set of all proper extensions of A, Ext A, may be described in the following way.
Corollary 2.3. (i) The extensions A_0 := A* ↾ ker(Γ_0) and A_1 := A* ↾ ker(Γ_1) are self-adjoint.
(ii) Any proper extension A_Θ of the operator A admits the representation
A_Θ = A_{C,D} = A* ↾ dom(A_{C,D}), dom(A_{C,D}) = dom(A*) ↾ ker(DΓ_1 − CΓ_0), C, D ∈ [H]. (2.3)
(iii) If, in addition, the closed extensions A_Θ and A_0 are disjoint, then (2.3) takes the form
Remark 2.4. In the case dim(H) < ∞, it follows from the result of Rofe-Beketov [25] that the extension A_Θ defined by (2.3) is self-adjoint if and only if the following conditions hold:
CD* = DC*, 0 ∈ ρ(CC* + DD*). (2.4)
2.1.2 Weyl functions, γ-fields, and Krein-type formula for resolvents
Definition 2.5 ([9]). Let Π = {H, Γ_0, Γ_1} be a boundary triplet for A*. The operator-valued functions γ(·) : ρ(A_0) → [H, H] and M(·) : ρ(A_0) → [H] defined by
γ(z) := (Γ_0 ↾ N_z)^{-1} and M(z) := Γ_1 γ(z), z ∈ ρ(A_0), (2.5)
unitarily equivalent to multiplication operator induced by a family {S Θ (z)} of unitary operators in a spectral representation of A ac 0 (for details, see [27, Section 2.4]). Define a family of Hilbert spaces
Γ 1 are defined by (3.3) and Y runs over the set of all nonnegative contractions in N −1 satisfying the inequality 0 ≤ Y ≤ M −1 1 (0) with M 1 (·) defined by (3.4). Proof. It is easily seen that M 1 (0) ∈ [H] since A 0 = A F > µI > 0. Thus, by Proposition 2.7, any nonnegative self-adjoint extension A Θ is described by the condition Θ − M 1 (0) ≥ 0. By (3.4), Θ ≥ M 1 (0) ≥ 1. Therefore Θ −1 ∈ C(H) and 0 ≤ Θ −1 ≤ 1, i.e., in (2.3) C −1 exists and Θ −1 = C −1 D ≤ 1. Putting Y := C −1 D, we obtain the desired result.
Proposition 4.2. Let H be the minimal Schrödinger operator defined by (1.4) and let Π = {H, Γ_0, Γ_1} be the boundary triplet for H* defined by (4.2)-(4.3). Then (i) the Weyl function M(·) corresponding to Π has the form M(z) = ⊕_{s=1}^n M_s(z), with M_s(z) given by (4.10).
The first construction of the boundary triplet, in the case m = n = 1, apparently goes back to the paper by Lyantse and Majorga [20, Theorem 2.1]. They also obtained a description of the spectrum of an arbitrary proper extension H_Θ of H [20, Theorem 4.5]. Their description of (H_Θ − z)^{-1} coincides with the Krein formula for canonical resolvents in Theorem 4.4. Another construction of the boundary triplet, in the situation of a general elliptic operator with boundary conditions on a set of zero Lebesgue measure, was obtained in [17]. However, this construction is not suitable for our purpose. In the case m = 1, a slightly different boundary triplet was obtained in [8, section 5.4].
(ii) Note also that the Weyl function in the form (4.10) appears in the paper by A. Posilicano [24, Example 5.3] and in the book [3] (see Theorem 1.1.1 in chapter II.1) without connection with boundary triplets.
4.2 Proper extensions of the minimal Schrödinger operator H
Proposition 2.2 gives a description of all proper extensions of H in terms of boundary triplets. The following theorem is its reformulation in a more precise form.
Theorem 4.4. Let H be the minimal Schrödinger operator (1.4), let Π = {H, Γ_0, Γ_1} be the boundary triplet for H* defined by (4.2)-(4.3), and M(·) the corresponding Weyl function. Assume that ξ_0, ξ_1, E_0, E_1 are defined by (4.9) and H_{C,D} is a proper extension of H. Then the following assertions hold.
(i) The set of all proper extensions H_{C,D} of H is described as follows:
dom(H_{C,D}) = {f ∈ dom(H*) : D(E_1 ⊗ I_n)ξ_1 = (4πC + D(E_0 ⊗ I_n))ξ_0}, C, D ∈ [H]. (4.13)
(ii) Moreover, H_{C,D} is a self-adjoint extension of H if and only if (2.4) holds.
(iii) The Friedrichs extension H_F of H coincides with H_0: dom(H_F) = dom(H_0) = W²₂(R³, C^n).
(v) A proper extension H_{C,D} of the form (4.13) is self-adjoint and nonnegative if and only if (2.4) holds and ((CD* − DM(0)D*)h, h) ≥ 0 for all h ∈ H \ {0}.
By Proposition 2.7(iv), H_F = H_0. Finally, by Proposition 4.1(iii), dom(H_F) = dom(H_0) = W²₂(R³, C^n). (iv) Note that the strong resolvent limit s-R-lim_{x↑0} M(x) = M(0)
The operators H^{(d)}_{α,X} might be parametrized in the framework of the boundary triplet approach.
Proposition 4.5. Let Π be the boundary triplet for H* defined by (4.2)-(4.3). Then the description of the Schrödinger operator H^{(3)}_{α,X} in terms of the resolvents [3, chapter II.1] coincides with the Krein formula for canonical resolvents (4.15) with C = B_α = diag(α_1, .., α_m) and D = I_m.
Remark 4.6. In the case n = m = 1, formulas (4.13) and (4.14) are essentially simplified. Namely,
dom(H_{C,D}) = { f = ξ_0 e^{-r_1}/r_1 + ξ_1 e^{-r_1} + f_H : dξ_1 = (4πc + d)ξ_0, ξ_0, ξ_1, c, d ∈ C, f_H ∈ dom(H) }
and
dom(H_K) = { f = ξ_0 e^{-r_1}/r_1 + ξ_0 e^{-r_1} + f_H : ξ_0 ∈ C, f_H ∈ dom(H) }.
Remark 4.7. The matrix Schrödinger operator with a finite number of point interactions was also studied by A. Posilicano [24, Example 5.3, Example 5.4]. In particular, the author parametrized the self-adjoint extensions of the minimal symmetric operator H. A connection between our description of self-adjoint extensions and the one obtained by A. Posilicano might be established by formulas (4.5) and (4.6) in [24, Theorem 4.5].
Theorem 4.9. Let H be the minimal Schrödinger operator (1.4), let Π be the boundary triplet for H* defined by (4.2)-(4.3), and M(·) the corresponding Weyl function defined by (4.10). Assume that H_Θ is a self-adjoint extension of H. Then the following assertions hold.
(i) The point spectrum of the self-adjoint extension H_Θ of H consists of at most nm negative eigenvalues (counting multiplicities). Moreover, z ∈ σ_p(H_Θ) ∩ R_− if and only if 0 ∈ σ_p(Θ − M(z)), and the corresponding eigenfunction ψ_z has the form ψ_z = Σ_{j=1}^m c_j e^{i√z r_j}/(4π r_j), where (c_1, .., c_m)^⊤ is an eigenvector of the relation Θ − M(z) corresponding to the zero eigenvalue.
(ii) The number of negative eigenvalues of the self-adjoint extension H_Θ is equal to the number of negative eigenvalues of the relation Θ − M(0): κ_−(H_Θ) = κ_−(Θ − M(0)), i.e., κ_−(H_{C,D}) = κ_−(CD* − DM(0)D*).
Theorem 4.10 ([19, Theorem 7.2.1]). All eigenvalues of a matrix A = (a_ij)_{i,j=1}^m ∈ [C^m] are contained in the union of the Gerschgorin disks G_k = {z ∈ C : |z − a_kk| ≤ Σ_{j≠k} |a_kj|}, k ∈ {1, .., m}. Moreover, any set consisting of m′ disks that do not intersect the remaining m − m′ disks contains precisely m′ eigenvalues of the matrix A.
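The Gerschgorin localization of Theorem 4.10 is easy to verify numerically. The sketch below (a hypothetical random symmetric matrix, so the eigenvalues are real) checks that every eigenvalue lies in at least one Gerschgorin disk:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5
A = rng.normal(size=(m, m))
A = (A + A.T) / 2                      # symmetric, so eigenvalues are real

radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A))   # R_k = sum_{j != k} |a_kj|
eigvals = np.linalg.eigvalsh(A)

# every eigenvalue lies in the union of disks {z : |z - a_kk| <= R_k}
in_some_disk = [any(abs(lam - A[k, k]) <= radii[k] for k in range(m))
                for lam in eigvals]
print(in_some_disk)
```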
Proposition 4.11. Let H^{(3)}_{α,X} be defined by (4.16). The conditions (4.17) yield the corresponding Gerschgorin conditions for B_{m′}. Applying the Gerschgorin theorem to the matrix B_{m′} and using (4.17), we get κ_−(B_{m′}) = m′. Combining the latter equation with (4.18), we get κ_−(H^{(3)}_{α,X}) = m′.
Consider the scattering system {H_Θ, H_0}, where H_Θ = H* ↾ Γ^{-1}Θ with an arbitrary self-adjoint relation Θ ∈ C(H). Since H is not simple, we consider the system {H̃_Θ, H_0}, H̃_Θ = H_Θ ⊕ H^s. Then Theorem 2.8 and (4.10) imply
Theorem 4.13. The scattering matrix {S̃_Θ(z)}_{z∈R_+} of the scattering system {H̃_Θ, H_0} has the form
Proposition 5.1. Let H be the minimal Schrödinger operator defined by (1.4). Then the following assertions hold.
(i) The domain of H* is defined by
dom(H*) = { f = Σ_{j=1}^m (ξ_{0j} e^{-r_j} ln(r_j) + ξ_{1j} e^{-r_j}) + f_H : ξ_{0j}, ξ_{1j} ∈ C^n, f_H ∈ dom(H) }.
The boundary triplet Π = {H, Γ 0 , Γ 1 } for H * might be defined as follows
Proposition 5.2. Let H be the minimal Schrödinger operator and let Π = {H, Γ_0, Γ_1} be the boundary triplet for H* defined by (5.2)-(5.3). Then (i) the Weyl function M(·) corresponding to the boundary triplet Π has the form M(z) = ⊕_{s=1}^n M_s(z), with M_s(z) given by (5.5).
By formulas (9.01) in [23, Section 2, §9] and (5.03), (5.07) in [23, Section 7, §5], the function H^{(1)}_0 admits the asymptotic expansion (5.7).
Theorem 5.3. Let H be the minimal Schrödinger operator, let Π = {H, Γ_0, Γ_1} be the boundary triplet for H* defined by (5.2)-(5.3), and M(·) the corresponding Weyl function. Assume also that ξ_0, ξ_1, E_0, E_1 are defined by (5.4) and H_{C,D} is a proper extension of H. Then the following assertions hold.
(i) Any proper extension H_{C,D} of H is described as follows:
dom(H_{C,D}) = {f ∈ dom(H*) : D(E_1 ⊗ I_n)ξ_1 = (2πC + D(E_0 ⊗ I_n))ξ_0}, C, D ∈ [H].
(ii) The extension H_{C,D} is self-adjoint if and only if (2.4) holds.
(iii) The Friedrichs extension H_F of H coincides with H_0:
mul(M(0)) = span{e_mul}, e_mul = {e_j}_{j=1}^m = {1}_{j=1}^m. (5.10)
(i) and (ii) follow from the representation (2.3). (iii) From the asymptotic representation (see, for instance, formula (4.03) in [23, Section 7, §4]) it follows that lim_{x↓−∞}(M(x)f, f) = −∞, f ∈ H \ {0}. Thus, by Proposition 2.7(iv), H_F = H_0. (iv) In the case m = 1, the Weyl function has the form M(z) = (ψ(1) − ln(√z/(2i)))I_n. The latter yields lim_{x↓−∞}(M(x)f, f) = −∞ and lim_{x↑0}(M(x)f, f) = +∞, f ∈ H \ {0}.
In the case m > 1, the limit lim_{x↑0}(M(x)ξ, ξ) is finite if and only if Σ_{j=1}^m ξ_j = 0. Thus, the domain of the operator part M(0)_op is described by (5.9). Finally, (5.10) takes place since mul(M(0)) and dom(M(0)_op) are orthogonal.
The family H^{(2)}_{α,X} associated with (1.3) is described in [3, chapter II.1, Theorem 4.1]. Proposition 5.4. Let Π be the boundary triplet for H* defined by (5.2)-(5.3). Then the domain of H^{(2)}_{α,X} is dom(H^{(2)}_{α,X}) = dom(H*) ↾ ker(Γ_1 − B_αΓ_0).
We may overcome this by considering the following intermediate extension of H:
H̃ := H* ↾ dom(H̃), dom(H̃) = dom(H_F) ∩ dom(H_K).
Then
H̃ = H_1 := H* ↾ {f ∈ dom(H*) : Γ_0 f = π_1Γ_1 f = 0},
with Γ_0, Γ_1 defined by (5.2)-(5.3). From [11, Proposition 4.1] it follows that H̃ is a closed symmetric operator in L²(R²) with deficiency indices n_±(H̃) = dim(H_1) = m − 1. Proposition 4.1(ii) in [11] also yields that H̃* = H* ↾ {f ∈ dom(H*) : π_2Γ_0 f = 0}, and a boundary triplet Π̃ = {H̃, Γ̃_0, Γ̃_1} for H̃* might be defined as follows:
H̃ = H_1, Γ̃_0 = Γ_0 ↾ dom(H̃*), Γ̃_1 = π_1Γ_1 ↾ dom(H̃*).
Moreover, the Weyl function M̃(·) corresponding to the boundary triplet Π̃ is given by M̃(·) = M_{11}(·), and the equality M̃(0) = M(0)_op is satisfied.
Proposition 5.5. Let H̃ and M(0)_op be as above and let H′ be a nonnegative self-adjoint extension of H. Then
(i) there exist pairs C, D ∈ [H] and C̃, D̃ ∈ [H̃] satisfying (2.4) and such that
H′ = H_{C,D} = H* ↾ ker(DΓ_1 − CΓ_0) = H̃* ↾ ker(D̃Γ̃_1 − C̃Γ̃_0) =: H̃_{C̃,D̃};
(ii) the extension H_{C,D} = H̃_{C̃,D̃} is nonnegative if and only if ((C̃D̃* − D̃M(0)_op D̃*)h, h) ≥ 0, h ∈ dom(M(0)_op) \ {0}.
Theorem 5.7. Let H be the operator defined by (1.4), let Π be the boundary triplet for H* defined by (5.2)-(5.3), and let M(·) be the corresponding Weyl function. Assume also that H_Θ is a self-adjoint extension of H. Then the point spectrum of the self-adjoint extension H_Θ consists of at most nm negative eigenvalues (counting multiplicities). Moreover, z ∈ σ_p(H_Θ) ∩ R_− if and only if 0 ∈ σ_p(Θ − M(z)), i.e.,
z ∈ σ_p(H_Θ) ∩ R_− ⇔ 0 ∈ σ_p(C − DM(z)).
The corresponding eigenfunction ψ_z has the form
Theorem 5.8. The scattering matrix {S̃_Θ(z)}_{z∈R_+} of the scattering system {H̃_Θ, H_0} has the form
\[
\widetilde S_\Theta(x) = I_{nm} + 2i\,J(x)^{1/2}\Bigl(\Theta - I_n\otimes\Bigl[\frac{1}{2\pi}\Bigl(\psi(1) - \ln\frac{\sqrt x}{2i}\Bigr)\delta_{jk} + G_{\sqrt x}(x_j - x_k)\Bigr]_{j,k=1}^m\Bigr)^{-1} J(x)^{1/2},\qquad x\in\mathbb R_+,
\]
\[
J(x) = I_n\otimes\Bigl[\tfrac14 J_0(\sqrt x\,|x_j - x_k|)\Bigr]_{j,k=1}^m,
\]
where J_0(·) denotes the Bessel function of the first kind and order 0.
The description of the extensions of H is provided in Subsection 4.2 (5.2). Finally, Subsection 4.3 (5.3) is devoted to the spectral analysis of the self-adjoint extensions of H. Notation. Let H and H stand for separable Hilbert spaces; [H, H] stands for the space of bounded linear operators from H to H, [H] := [H, H]; the set of closed operators in H is denoted by C(H). Let A be a linear operator in a Hilbert space H.
Definition 2.1 ([14]). A triplet Π = {H, Γ_0, Γ_1} is called a boundary triplet for the adjoint operator A* of A if H is an auxiliary Hilbert space and Γ_0, Γ_1 : dom(A*) → H are linear mappings such that (i) the second Green identity
(A*f, g) − (f, A*g) = (Γ_1 f, Γ_0 g) − (Γ_0 f, Γ_1 g), f, g ∈ dom(A*),
holds, and (ii) the mapping Γ := (Γ_0, Γ_1)^⊤ : dom(A*) → H ⊕ H is surjective.
Acknowledgments
The author thanks M.M. Malamud for posing the problem and for his constant attention to this work. The author also thanks the referees for carefully reading the preliminary version of the manuscript and for their constructive remarks.
References
[1] V. Adamyan, Nonnegative perturbations of nonnegative self-adjoint operators, Methods Funct. Anal. Topology, 13 (2007), no. 2, 103-109.
[2] N.I. Akhiezer, I.M. Glazman, Theory of Linear Operators in Hilbert Spaces, vols. I, II, Pitman, London, 1981.
[3] S. Albeverio, F. Gesztesy, R. Hoegh-Krohn, H. Holden, Solvable Models in Quantum Mechanics, Texts and Monographs in Physics, Springer, Berlin-New York, 1988.
[4] S. Albeverio, P. Kurasov, Singular Perturbations of Differential Operators, London Mathematical Society Lecture Note Series 271, Cambridge University Press, Cambridge, 1999.
[5] Yu. Arlinskii, E. Tsekanovskii, The von Neumann problem for nonnegative symmetric operators, Integr. Equ. Oper. Theory, 51 (2005), 319-356.
[6] M.S. Ashbaugh, F. Gesztesy, M. Mitrea, G. Teschl, Spectral theory for perturbed Krein Laplacians in nonsmooth domains, Adv. Math., 223 (2010), 1372-1467.
[7] F.A. Berezin, L.D. Faddeev, Remark on the Schrödinger equation with singular potential, Dokl. Acad. Sci. USSR, 137 (1961), 1011-1014 (Russian).
[8] J. Behrndt, M. Malamud, H. Neidhardt, Scattering matrices and Weyl functions, Proc. London Math. Soc., 97 (2008), 568-598.
[9] V.A. Derkach, M.M. Malamud, Generalized resolvents and the boundary value problems for Hermitian operators with gaps, J. Funct. Anal., 95 (1991), 1-95.
[10] V.A. Derkach, M.M. Malamud, The extension theory of Hermitian operators and the moment problem, J. Math. Sci. (New York), 73 (1995), 141-242.
[11] V.A. Derkach, S. Hassi, M.M. Malamud, H.S.V. de Snoo, Generalized resolvents of symmetric operators and admissibility, Methods Funct. Anal. Topology, 6 (2000), no. 3, 24-55.
[12] F. Gesztesy, N. Kalton, K.A. Makarov, E. Tsekanovskii, Some applications of operator-valued Herglotz functions, Oper. Theory: Adv. Appl., 123 (2001), 271-321.
[13] N. Goloschapova, L. Oridoroga, On the negative spectrum of one-dimensional Schrödinger operators with point interactions, Integr. Equ. Oper. Theory, 67 (2010), no. 1, 1-14.
[14] V.I. Gorbachuk, M.L. Gorbachuk, Boundary Value Problems for Operator Differential Equations, Mathematics and its Applications (Soviet Series) 48, Kluwer Academic Publishers Group, Dordrecht, 1991.
[15] S. Hassi, S. Kuzhel, On symmetries in the theory of singular perturbations, J. Funct. Anal., 256 (2009), 777-809.
[16] T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag, New York, 1966.
[17] A.N. Kochubei, Elliptic operators with boundary conditions on a subset of measure zero, Funct. Anal. Appl., 16 (1982), no. 2, 137-139.
[18] M.G. Krein, The theory of self-adjoint extensions of semibounded Hermitian transformations and its applications, I, Mat. Sbornik, 20 (1947), no. 3, 431-495 (Russian).
[19] P. Lancaster, Theory of Matrices [Russian translation], Nauka, Moscow, 1973 (English: Academic Press, New York, 1969).
[20] V.E. Lyantse, H.B. Majorga, On the theory of a one-point boundary-value problem for the Laplace operator, Theory of Functions, Funct. Anal. Appl., 38 (1982), 84-91 (Russian).
[21] M.M. Malamud, V.I. Mogilevskii, Krein type formula for canonical resolvents of dual pairs of linear relations, Methods Funct. Anal. Topology, 8 (2002), no. 4, 72-100.
[22] O. Ogurisu, On the number of negative eigenvalues of a Schrödinger operator with point interactions, Lett. Math. Phys., 85 (2008), 129-133.
[23] F. Olver, Introduction to Asymptotic Methods and Special Functions [Russian translation], Nauka, Moscow, 1978.
[24] A. Posilicano, Self-adjoint extensions of restrictions, Operators and Matrices, 2 (2008), no. 4, 483-506.
[25] F.S. Rofe-Beketov, On self-adjoint extensions of differential operators in a space of vector-functions, Theory of Functions, Funct. Anal. Appl., 8 (1969), 3-24 (Russian).
[26] F.S. Rofe-Beketov, The numerical range of a linear relation and maximal relations, Theory of Functions, Funct. Anal. Appl., 44 (1985), 103-112 (Russian).
[27] D.R. Yafaev, Mathematical Scattering Theory: General Theory, Translations of Mathematical Monographs 105, American Mathematical Society, Providence, RI, 1992.

Nataly Goloshchapova, Institute of Mathematics and Statistics, University of São Paulo, Rua do Matão, 1010, São Paulo, 05508-090, Brazil
e-mail: [email protected]
Switching Autoregressive Low-rank Tensor Models
Hyun Dong Lee [email protected]
Computer Science Department, Stanford University; Department of Statistics, Stanford University; Department of Neurology, Northwestern University
Andrew Warrington
Joshua I. Glaser [email protected]
Scott W. Linderman [email protected]
An important problem in time-series analysis is modeling systems with time-varying dynamics. Probabilistic models with joint continuous and discrete latent states offer interpretable, efficient, and experimentally useful descriptions of such data. Commonly used models include autoregressive hidden Markov models (ARHMMs) and switching linear dynamical systems (SLDSs), each with its own advantages and disadvantages. ARHMMs permit exact inference and easy parameter estimation, but are parameter intensive when modeling long dependencies, and hence are prone to overfitting. In contrast, SLDSs can capture long-range dependencies in a parameter-efficient way through Markovian latent dynamics, but present an intractable likelihood and a challenging parameter estimation task. In this paper, we propose switching autoregressive low-rank tensor (SALT) models, which retain the advantages of both approaches while ameliorating the weaknesses. SALT parameterizes the tensor of an ARHMM with a low-rank factorization to control the number of parameters and allow longer-range dependencies without overfitting. We prove theoretical and discuss practical connections between SALT, linear dynamical systems, and SLDSs. We empirically demonstrate quantitative advantages of SALT models on a range of simulated and real prediction tasks, including behavioral and neural datasets. Furthermore, the learned low-rank tensor provides novel insights into temporal dependencies within each discrete state.

Preprint. Under review.
Introduction
Many time series analysis problems involve jointly segmenting data and modeling the time-evolution of the system within each segment. For example, a common task in computational ethology [1] -the study of natural behavior -is segmenting videos of freely moving animals into states that represent distinct behaviors, while also quantifying the differences in dynamics between states [2,3]. Similarly, discrete shifts in the dynamics of neural activity may reflect changes in underlying brain state [4,5]. Model-based segmentations are experimentally valuable, providing an unsupervised grouping of neural or behavioral states together with a model of the dynamics within each state.
One common probabilistic state space model for such analyses is the autoregressive hidden Markov model (ARHMM) [6]. For example, MoSeq [2] uses ARHMMs for unsupervised behavioral analysis of freely moving animals. ARHMMs learn a set of linear autoregressive models, indexed by a discrete state, to predict the next observation as a function of previous observations. Inference in ARHMMs then reduces to inferring which AR process best explains the observed data at each timestep (in turn also providing the segmentation). The simplicity of ARHMMs allows for exact state inference via message passing, and closed-form updates for parameter estimation using expectation-maximization (EM). However, the ARHMM requires high order autoregressive dependencies to model long timescale dependencies, and its parameter complexity is quadratic in the data dimension, making it prone to overfitting.
Switching linear dynamical systems (SLDS) [7] ameliorate some of the drawbacks of the ARHMM by introducing a low-dimensional, continuous latent state. These models have been used widely throughout neuroscience [4,[8][9][10][11]. Unlike the ARHMM, the SLDS can capture long timescale dependencies through the dynamics of the continuous latent state, while also being much more parameter efficient than ARHMMs. However, exact inference in SLDSs is intractable due to the exponential number of potential discrete state paths governing the time-evolution of the continuous latent variable. This intractability has led to many elaborate and specialized approximate inference techniques [7,[12][13][14][15][16]. Thus, the SLDS gains parameter efficiency at the expense of the computational tractability and statistical simplicity of the ARHMM.
We propose a new class of unsupervised probabilistic models that we call switching autoregressive low-rank tensor (SALT) models. Our key insight is that when you marginalize over the latent states of a linear dynamical system, you obtain an autoregressive model with full history dependence. However, the autoregressive dependencies are not arbitrarily complex -they factor into a low-rank tensor that can be well-approximated with a finite-history model (Proposition 1). SALT models are constrained ARHMMs that leverage this insight. Rather than allowing for arbitrary autoregressive dependencies, SALT models are constrained to be low-rank. Thus, SALT models inherit the parsimonious parameter complexity of SLDS as well as the ease of inference and estimation of ARHMMs. We demonstrate the advantages of SALT models empirically using synthetic data as well as real neural and behavioral time series. Finally, in addition to improving predictive performance, we show how the low-rank nature of SALT models can offer new insights into complex systems, like biological neural networks. Source code is available at https://github.com/lindermanlab/salt.
Background
This section introduces the notation used throughout the paper and describes preliminaries on lowrank tensor decomposition, vector autoregressive models, switching autoregressive models, linear dynamical systems, and switching linear dynamical systems.
Notation We follow the notation of Kolda and Bader [17]. We use lowercase letters for scalar variables (e.g. $a$), uppercase letters for scalar constants (e.g. $A$), boldface lowercase letters for vectors (e.g. $\mathbf{a}$), boldface uppercase letters for matrices (e.g. $\mathbf{A}$), and boldface Euler script for tensors of order three or higher (e.g. $\mathcal{A}$). We will use the shorthand $\mathbf{a}_{1:T} \in \mathbb{R}^{N \times T}$ to denote a time series $\mathbf{a}_1 \in \mathbb{R}^N, \ldots, \mathbf{a}_T \in \mathbb{R}^N$. We use $\mathbf{A}_{i::}$, $\mathbf{A}_{:j:}$, and $\mathbf{A}_{::k}$ to denote the horizontal, lateral, and frontal slices respectively of a three-way tensor $\mathcal{A}$. Similarly, we use $\mathbf{a}_{i:}$ and $\mathbf{a}_{:j}$ to denote the $i$-th row and $j$-th column of a matrix $\mathbf{A}$. $\mathbf{a} \circ \mathbf{b}$ represents the vector outer product between vectors $\mathbf{a}$ and $\mathbf{b}$.
The $n$-mode tensor-matrix (tensor-vector) product is represented as $\mathcal{A} \times_n \mathbf{A}$ ($\mathcal{A} \times_n \mathbf{a}$). We represent the mode-$n$ matricization of a tensor $\mathcal{G}$ as $\mathbf{G}_{(n)}$. We define $\times_{j,k}$ to be a tensor-matrix product over the $j$-th and $k$-th slices of the tensor. For example, given a three-way tensor $\mathcal{A} \in \mathbb{R}^{D_1 \times D_2 \times D_3}$ and a matrix $\mathbf{X} \in \mathbb{R}^{D_2 \times D_3}$, $\mathcal{A} \times_{2,3} \mathbf{X} = \sum_{j=1}^{D_2} \sum_{k=1}^{D_3} \mathbf{a}_{:jk}\, x_{jk}$.
Tensor Decomposition For $\mathcal{A} \in \mathbb{R}^{N_1 \times N_2 \times N_3}$, the Tucker decomposition is defined as,
$$\mathcal{A} = \sum_{i=1}^{D_1} \sum_{j=1}^{D_2} \sum_{k=1}^{D_3} g_{ijk}\, \mathbf{u}_{:i} \circ \mathbf{v}_{:j} \circ \mathbf{w}_{:k}, \tag{1}$$
where $\mathbf{u}_{:i}$, $\mathbf{v}_{:j}$, and $\mathbf{w}_{:k}$ are the columns of the factor matrices $\mathbf{U} \in \mathbb{R}^{N_1 \times D_1}$, $\mathbf{V} \in \mathbb{R}^{N_2 \times D_2}$, and $\mathbf{W} \in \mathbb{R}^{N_3 \times D_3}$, respectively, and $g_{ijk}$ are the entries in the core tensor $\mathcal{G} \in \mathbb{R}^{D_1 \times D_2 \times D_3}$.
The CANDECOMP/PARAFAC (CP) decomposition is a special case of the Tucker decomposition, with $D_1 = D_2 = D_3$ and a diagonal core tensor $\mathcal{G}$.
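To make the connection concrete, here is a small NumPy sketch (random factors, purely illustrative) verifying that a CP decomposition is exactly a Tucker decomposition whose core is diagonal:

```python
import numpy as np

def tucker_reconstruct(G, U, V, W):
    """Rebuild a 3-way tensor from core G (D1,D2,D3) and factors U, V, W."""
    # A_ijk = sum_{abc} G_abc * U_ia * V_jb * W_kc
    return np.einsum('abc,ia,jb,kc->ijk', G, U, V, W)

def cp_reconstruct(g, U, V, W):
    """CP decomposition: equal ranks and a diagonal core holding weights g."""
    return np.einsum('d,id,jd,kd->ijk', g, U, V, W)

rng = np.random.default_rng(0)
D = 3
U = rng.normal(size=(5, D))
V = rng.normal(size=(4, D))
W = rng.normal(size=(6, D))
g = rng.normal(size=D)

A_cp = cp_reconstruct(g, U, V, W)
G_diag = np.zeros((D, D, D))
G_diag[np.arange(D), np.arange(D), np.arange(D)] = g   # diagonal core
A_tucker = tucker_reconstruct(G_diag, U, V, W)
assert np.allclose(A_cp, A_tucker)
```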
Vector autoregressive models Let $\mathbf{y}_{1:T}$ denote a multivariate time series with $\mathbf{y}_t \in \mathbb{R}^N$ for all $t$. An order-$L$ vector autoregressive (VAR) model with Gaussian innovations is defined by,
$$\mathbf{y}_t \sim \mathcal{N}\left(\mathcal{A} \times_{2,3} \mathbf{y}_{t-1:t-L} + \mathbf{b},\, \mathbf{R}\right), \tag{2}$$
where $\mathcal{A} \in \mathbb{R}^{N \times N \times L}$ is the autoregressive tensor, whose frontal slice $\mathbf{A}_{::l}$ is the dynamics matrix for lag $l$, $\mathbf{b} \in \mathbb{R}^N$ is the bias, and $\mathbf{R} \in \mathbb{R}^{N \times N}_{\succeq 0}$ is a positive semi-definite covariance matrix. The parameters $\Theta = (\mathcal{A}, \mathbf{b}, \mathbf{R})$ are commonly estimated via ordinary least squares [18].
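As an illustration of (2), the sketch below simulates a small, stable VAR and re-estimates its autoregressive tensor by ordinary least squares. Dimensions, noise scales, and tolerances are illustrative choices, not values from the paper:

```python
import numpy as np

def var_mean(A, b, y_hist):
    """Mean of y_t: A has shape (N, N, L); y_hist has shape (N, L),
    with column l-1 holding y_{t-l}."""
    return np.einsum('nml,ml->n', A, y_hist) + b

rng = np.random.default_rng(1)
N, L, T = 3, 2, 4000
A_true = rng.normal(scale=0.1, size=(N, N, L))   # small entries keep it stable
b_true = rng.normal(size=N)

y = np.zeros((T, N))
for t in range(L, T):
    y_hist = y[t-L:t][::-1].T          # reverse so column 0 is y_{t-1}
    y[t] = var_mean(A_true, b_true, y_hist) + 0.1 * rng.normal(size=N)

# OLS: regress y_t on the stacked lagged observations plus a constant.
X = np.hstack([y[L-l:T-l] for l in range(1, L+1)] + [np.ones((T-L, 1))])
coef, *_ = np.linalg.lstsq(X, y[L:], rcond=None)
A_hat = coef[:-1].reshape(L, N, N).transpose(2, 1, 0)
b_hat = coef[-1]
assert np.allclose(A_hat, A_true, atol=0.1)
```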
Switching autoregressive models One limitation of VAR models is that they assume the time series is stationary; i.e. that one set of parameters holds for all time steps. Time-varying autoregressive models allow the autoregressive process to change at various time points. One such VAR model, referred to as a switching autoregressive model or autoregressive hidden Markov model (ARHMM), switches the parameters over time according to a discrete latent state [6]. Let $z_t \in \{1, \ldots, H\}$ denote the discrete state at time $t$; an ARHMM defines the following generative model,
$$z_t \sim \mathrm{Cat}(\pi^{(z_{t-1})}), \qquad \mathbf{y}_t \sim \mathcal{N}\left(\mathcal{A}^{(z_t)} \times_{2,3} \mathbf{y}_{t-1:t-L} + \mathbf{b}^{(z_t)},\, \mathbf{R}^{(z_t)}\right), \tag{3}$$
where $\pi^{(h)} \in \{\pi^{(h)}\}_{h=1}^{H}$ is the $h$-th row of the discrete state transition matrix. A switching VAR model is simply a type of hidden Markov model, and as such it is easily fit via the expectation-maximization (EM) algorithm with the Baum-Welch recursions. The M-step amounts to solving a weighted least squares problem.
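A minimal sketch of the ARHMM generative process in (3); the parameter values and the burn-in handling for $t < L$ are illustrative assumptions:

```python
import numpy as np

def sample_arhmm(pi, A, b, chol_R, T, rng):
    """Draw (z_{1:T}, y_{1:T}) from an order-L ARHMM.
    pi: (H, H) transition matrix; A: (H, N, N, L); b: (H, N);
    chol_R: (H, N, N) Cholesky factors of the per-state noise covariances."""
    H, N, _, L = A.shape
    z = np.zeros(T, dtype=int)
    y = np.zeros((T, N))
    for t in range(T):
        if t > 0:
            z[t] = rng.choice(H, p=pi[z[t-1]])
        if t >= L:
            y_hist = y[t-L:t][::-1].T                     # (N, L)
            mean = np.einsum('nml,ml->n', A[z[t]], y_hist) + b[z[t]]
        else:
            mean = b[z[t]]                                # burn-in: incomplete history
        y[t] = mean + chol_R[z[t]] @ rng.normal(size=N)
    return z, y

rng = np.random.default_rng(0)
H, N, L, T = 2, 3, 2, 200
pi = np.array([[0.95, 0.05], [0.05, 0.95]])
A = rng.normal(scale=0.1, size=(H, N, N, L))
b = rng.normal(size=(H, N))
chol_R = np.stack([0.1 * np.eye(N)] * H)
z, y = sample_arhmm(pi, A, b, chol_R, T, rng)
assert z.shape == (T,) and y.shape == (T, N)
```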
Linear dynamical systems The number of parameters in a VAR model grows as $O(N^2 L)$. For high-dimensional time series, this can quickly become intractable. Linear dynamical systems (LDS) [19] offer an alternative means of modeling time series via a continuous latent state $\mathbf{x}_t \in \mathbb{R}^D$,
$$\mathbf{x}_t \sim \mathcal{N}(\mathbf{A}\mathbf{x}_{t-1} + \mathbf{b},\, \mathbf{Q}), \qquad \mathbf{y}_t \sim \mathcal{N}(\mathbf{C}\mathbf{x}_t + \mathbf{d},\, \mathbf{R}), \tag{4}$$
where $\mathbf{Q} \in \mathbb{R}^{D \times D}_{\succeq 0}$ and $\mathbf{R} \in \mathbb{R}^{N \times N}_{\succeq 0}$. Here, the latent states follow a first-order VAR model, and the observations are conditionally independent given the latent states. As we discuss in Section 3.3, marginalizing over the continuous latent states renders $\mathbf{y}_t$ dependent on the preceding observations, just like in a high order VAR model.
Compared to the VAR model, however, the LDS has only $O(D^2 + ND + N^2)$ parameters if $\mathbf{R}$ is a full covariance matrix. This further reduces to $O(D^2 + ND)$ if $\mathbf{R}$ is diagonal. As a result, when $D \ll N$, the LDS has many fewer parameters than a VAR model. Thanks to the linear and Gaussian assumptions of the model, the parameters can be easily estimated via EM, using the Kalman smoother to compute the expected values of the latent states.
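The parameter-count comparison can be made concrete with the small helpers below, which follow the O(·) counts in the text (dynamics and emission parameters only, covariances excluded); the example dimensions are arbitrary:

```python
def var_param_count(N, L):
    """Order-L VAR: autoregressive tensor (N*N*L) plus bias (N)."""
    return N * N * L + N

def lds_param_count(N, D):
    """LDS: latent dynamics A, b (D*D + D) plus emissions C, d (N*D + N)."""
    return D * D + D + N * D + N

# With D << N, the LDS needs far fewer parameters than a long-lag VAR:
N, L, D = 100, 10, 5
assert lds_param_count(N, D) < var_param_count(N, L)   # 630 vs 100100
```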
Switching linear dynamical systems A switching LDS combines the advantages of the lowdimensional continuous latent states of an LDS, with the advantages of discrete switching from an ARHMM. Let z t ∈ {1, . . . , H} be a discrete latent state with Markovian dynamics (3), and let it determine some or all of the parameters of the LDS (e.g. A would become A (zt) in (4)). We note that SLDSs often use a single-subspace, where C, d and R are shared across states, reducing parameter complexity and simplifying the optimization.
Unfortunately, parameter estimation is considerably harder in SLDS models. The posterior distribution over all latent states, p(z 1:T , x 1:T | y 1:T , Θ), where Θ denotes the parameters, is intractable [20].
Instead, these models are fit via approximate inference methods like MCMC [13,15], variational EM [7,16], particle EM [14,21], or other approximations [12]. We look to define a model that enjoys the benefits of SLDSs, but avoids the inference and estimation difficulties.
SALT: Switching Autoregressive Low-rank Tensor Models
Here we formally introduce SALT models. We begin by defining the generative model (also illustrated in Figure 1), and describing how inference and model fitting are performed. We conclude by drawing connections between SALT and SLDS models.
Generative Model
SALT factorizes each autoregressive tensor A (h) for h ∈ {1, . . . , H} of an ARHMM as a product of low-rank factors. Given the current discrete state z t , each observation y t ∈ R N is modeled as being normally distributed conditioned on L previous observations y t−1:t−L ,
$$z_t \sim \mathrm{Cat}\left(\pi^{(z_{t-1})}\right), \tag{5}$$
$$\mathbf{y}_t \overset{\text{i.i.d.}}{\sim} \mathcal{N}\left(\mathcal{A}^{(z_t)}_{\mathrm{SALT}} \times_{2,3} \mathbf{y}_{t-1:t-L} + \mathbf{b}^{(z_t)},\, \boldsymbol{\Sigma}^{(z_t)}\right), \tag{6}$$
$$\mathcal{A}^{(z_t)}_{\mathrm{SALT}} = \sum_{i=1}^{D_1} \sum_{j=1}^{D_2} \sum_{k=1}^{D_3} g^{(z_t)}_{ijk}\, \mathbf{u}^{(z_t)}_{:i} \circ \mathbf{v}^{(z_t)}_{:j} \circ \mathbf{w}^{(z_t)}_{:k}, \tag{7}$$
where $\mathbf{u}^{(z_t)}_{:i}$, $\mathbf{v}^{(z_t)}_{:j}$, and $\mathbf{w}^{(z_t)}_{:k}$ are the columns of the factor matrices $\mathbf{U}^{(z_t)} \in \mathbb{R}^{N \times D_1}$, $\mathbf{V}^{(z_t)} \in \mathbb{R}^{N \times D_2}$, and $\mathbf{W}^{(z_t)} \in \mathbb{R}^{L \times D_3}$, respectively, and $g^{(z_t)}_{ijk}$ are the entries in the core tensor $\mathcal{G}^{(z_t)} \in \mathbb{R}^{D_1 \times D_2 \times D_3}$. The vector $\mathbf{b}^{(z_t)} \in \mathbb{R}^N$ and positive definite matrix $\boldsymbol{\Sigma}^{(z_t)} \in \mathbb{R}^{N \times N}$ are the bias and covariance for state $z_t$. Without further restriction this decomposition is a Tucker decomposition [17]. If $D_1 = D_2 = D_3$ and $\mathcal{G}^{(z_t)}$ is diagonal, it corresponds to a CP decomposition [17]. We refer to ARHMM models with these factorizations as Tucker-SALT and CP-SALT respectively. Note that herein we will only consider models where $D_1 = D_2 = D_3 = D$, where we refer to $D$ as the "rank" of the SALT model (for both Tucker-SALT and CP-SALT). In practice, we find that models constrained in this way perform well, and so this constraint is imposed simply to reduce the search space of models. This constraint can also be easily relaxed. Table 1 shows the number of parameters for order-$L$ ARHMMs, SLDSs, and SALT. Focusing on the lag dependence, the number of ARHMM parameters grows as $O(HN^2L)$, whereas SALT grows as only $O(HDL)$ with $D \ll N$. SALT can also make a simplifying single-subspace constraint, where certain emission parameters are shared across discrete states.
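To illustrate the factorization in (7) and the resulting parameter savings, the sketch below builds one state's Tucker-SALT tensor with the dimensions from the LDS experiment in Section 5.1 (N = 20, L = 50, D = 7); the random factor values are placeholders:

```python
import numpy as np

def salt_tensor(G, U, V, W):
    """A = sum_{ijk} g_ijk u_:i o v_:j o w_:k  (Tucker form, eq. 7)."""
    return np.einsum('ijk,ni,mj,lk->nml', G, U, V, W)

N, L, D = 20, 50, 7
rng = np.random.default_rng(0)
G = rng.normal(size=(D, D, D))
U = rng.normal(size=(N, D))
V = rng.normal(size=(N, D))
W = rng.normal(size=(L, D))

A = salt_tensor(G, U, V, W)
assert A.shape == (N, N, L)

full_params = N * N * L                    # unconstrained AR tensor, per state
salt_params = D**3 + 2 * N * D + L * D     # core plus the three factor matrices
assert salt_params < full_params           # 973 vs 20000 here
```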
Low-dimensional Representation Note that SALT implicitly defines a low-dimensional continuous representation, analogous to the continuous latent variable in SLDS,
$$\mathbf{x}_t = \left( \sum_{j=1}^{D_2} \sum_{k=1}^{D_3} \mathbf{g}^{(z_t)}_{:jk} \circ \mathbf{v}^{(z_t)}_{:j} \circ \mathbf{w}^{(z_t)}_{:k} \right) \times_{2,3} \mathbf{y}_{t-1:t-L}. \tag{8}$$
The vector $\mathbf{x}_t \in \mathbb{R}^{D_1}$ is multiplied by the output factors, $\mathbf{U}^{(z_t)}$, to obtain the mean of the next observation. These low-dimensional vectors can be visualized as in SLDS and used to further interrogate the learned dynamics, as we show in Figure 3.
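A quick numerical check (random parameters, illustrative dimensions) that pushing this low-dimensional representation through the output factors reproduces the full tensor regression:

```python
import numpy as np

def salt_latent(G, V, W, y_hist):
    """x_t from eq. (8): contract the core with the input and lag factors,
    then apply to the history y_hist (N, L), column l-1 = y_{t-l}."""
    return np.einsum('ijk,mj,lk,ml->i', G, V, W, y_hist)

rng = np.random.default_rng(0)
N, L, D = 6, 4, 3
G = rng.normal(size=(D, D, D))
U = rng.normal(size=(N, D))
V = rng.normal(size=(N, D))
W = rng.normal(size=(L, D))
y_hist = rng.normal(size=(N, L))

x_t = salt_latent(G, V, W, y_hist)
A = np.einsum('ijk,ni,mj,lk->nml', G, U, V, W)          # full SALT tensor
assert np.allclose(U @ x_t, np.einsum('nml,ml->n', A, y_hist))
```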
Model Fitting and Inference
Since SALT models are ARHMMs, we can apply the expectation-maximization (EM) algorithm to fit model parameters and perform state space inference. We direct the reader to Murphy [19] for a detailed exposition of EM and include only the key points here.
The E-step solves for the distribution over latent variables given observed data and model parameters.
For SALT, this is the distribution over $z_t$, denoted $\omega^{(h)}_t = \mathbb{E}[z_t = h \mid \mathbf{y}_{1:T}, \theta]$. This can be computed exactly with the forward-backward algorithm, which is fast and stable. The marginal likelihood can be evaluated exactly by taking the product across $t$ of expectations of (6) under $\omega^{(h)}_t$.

[Table 1: Comparison of number of parameters for the methods we consider. We exclude covariance matrix parameters, as the parameterization of the covariance matrix is independent of method.]

The M-step uses
closed-form coordinate-wise updates to maximize the expected log likelihood evaluated in the E-step. Each factor update amounts to solving a weighted least squares problem. We include just one update step here for brevity, and provide all updates in full in Appendix A. Assuming here that b (h) = 0 for simplicity, the update rule for the lag factors is as follows:
$$\mathbf{w}^{(h)\star} = \left( \sum_t \omega^{(h)}_t \mathbf{X}^{(h)\top}_t (\boldsymbol{\Sigma}^{(h)})^{-1} \mathbf{X}^{(h)}_t \right)^{-1} \sum_t \omega^{(h)}_t \mathbf{X}^{(h)\top}_t (\boldsymbol{\Sigma}^{(h)})^{-1} \mathbf{y}_t, \tag{9}$$
where $\mathbf{X}^{(h)}_t = \mathbf{U}^{(h)} \mathbf{G}^{(h)}_{(1)} (\mathbf{V}^{(h)\top} \mathbf{y}_{t-1:t-L} \otimes \mathbf{I}_{D_3})$ and $\mathbf{w}^{(h)\star} = \mathrm{vec}(\mathbf{W}^{(h)})$. Crucially, these coordinate-wise updates are exact, and so we recover the fast and monotonic convergence of EM.
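Each coordinate update, including (9), is a weighted least squares problem of the same generic form. A sketch of that generic solve, with a first-order-condition check (all inputs random and illustrative):

```python
import numpy as np

def weighted_least_squares(Xs, ys, weights, Sigma_inv):
    """argmin_w sum_t w_t (y_t - X_t w)^T Sigma_inv (y_t - X_t w),
    solved via the normal equations, as in the E-step-weighted M-step."""
    P = Xs[0].shape[1]
    H = np.zeros((P, P))
    c = np.zeros(P)
    for w, X, y in zip(weights, Xs, ys):
        H += w * X.T @ Sigma_inv @ X
        c += w * X.T @ Sigma_inv @ y
    return np.linalg.solve(H, c)

rng = np.random.default_rng(0)
T, N, P = 50, 4, 3
Xs = rng.normal(size=(T, N, P))
ys = rng.normal(size=(T, N))
weights = rng.uniform(0.1, 1.0, size=T)     # stand-ins for the E-step weights
w_star = weighted_least_squares(Xs, ys, weights, np.eye(N))

# At the optimum the weighted residuals are orthogonal to the design.
grad = sum(w * X.T @ (X @ w_star - y) for w, X, y in zip(weights, Xs, ys))
assert np.allclose(grad, 0, atol=1e-8)
```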
Connections Between SALT and Switching Linear Dynamical Systems
SALT is not only an intuitive regularization for ARHMMs, it is grounded in a mathematical correspondence between autoregressive models and linear dynamical systems.
Proposition 1 (Low-Rank Tensor Autoregressions Approximate Stable Linear Dynamical Systems).
Consider a stable linear time-invariant Gaussian dynamical system. We define the steady-state Kalman gain matrix as $\mathbf{K} = \lim_{t\to\infty} \mathbf{K}_t$, and $\boldsymbol{\Gamma} = \mathbf{A}(\mathbf{I} - \mathbf{K}\mathbf{C})$. The matrix $\boldsymbol{\Gamma} \in \mathbb{R}^{D \times D}$ has eigenvalues $\lambda_1, \ldots, \lambda_D$. Let $\lambda_{\max} = \max_d |\lambda_d|$; for a stable LDS, $\lambda_{\max} < 1$ [22]. Let $n$ denote the number of real eigenvalues and $m$ the number of complex conjugate pairs. Let $\hat{\mathbf{y}}^{(\mathrm{LDS})}_t = \mathbb{E}[\mathbf{y}_t \mid \mathbf{y}_{<t}]$ denote the predictive mean under a steady-state LDS, and $\hat{\mathbf{y}}^{(\mathrm{SALT})}_t$ the predictive mean under a SALT model. An order-$L$ Tucker-SALT model with rank $n+2m$, or a CP-SALT model with rank $n+3m$, can approximate the predictive mean of the steady-state LDS with error $\|\hat{\mathbf{y}}^{(\mathrm{LDS})}_t - \hat{\mathbf{y}}^{(\mathrm{SALT})}_t\|_\infty = O(\lambda_{\max}^L)$.
Proof. We give a sketch of the proof here and a full proof in Appendix B. The analytic form of E [y t | y <t ] is a linear function of y t−l for l = 1, . . . , ∞. For this proof sketch, consider the special case where b = d = 0. Then the coefficients of the linear function are CΓ l K. As all eigenvalues of Γ have magnitude less than one, the coefficients decay exponentially in l. We can therefore upper bound the approximation error introduced by truncating the linear function to L terms to O(λ L max ). To complete the proof, we show that the truncated linear function can be represented exactly by a tensor regression with at most a specific rank. Thus, only truncated terms contribute to the error. This proposition shows that the steady-state predictive distribution of a stable LDS can be approximated by a low-rank tensor autoregression, with a rank determined by the eigenspectrum of the LDS. We validate this proposition experimentally in Section 5.1. Note as well that the predictive distribution will converge to a fixed covariance, and hence can also be exactly represented by the covariance matrices Σ (h) estimated in SALT models.
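The heart of the proof sketch, that the AR coefficients on lag $l$ (proportional to $\mathbf{C}\boldsymbol{\Gamma}^{l-1}\mathbf{A}\mathbf{K}$) shrink geometrically, can be checked numerically. In this sketch $\boldsymbol{\Gamma}$ is taken directly as a random matrix rescaled to spectral radius 0.8, and the product $\mathbf{A}\mathbf{K}$ is a random stand-in, rather than deriving both from an actual Kalman filter:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 4, 6
Gamma = rng.normal(size=(D, D))
Gamma *= 0.8 / max(abs(np.linalg.eigvals(Gamma)))   # spectral radius 0.8
C = rng.normal(size=(N, D))
AK = rng.normal(size=(D, N))                        # stand-in for A @ K

# The coefficient on lag l is C @ Gamma^{l-1} @ AK; its norm decays
# geometrically, so truncating the AR expansion at L lags costs only
# O(lambda_max^L).
norms = []
M = np.eye(D)
for l in range(1, 61):
    norms.append(np.linalg.norm(C @ M @ AK, ord=np.inf))
    M = M @ Gamma
assert norms[-1] < 1e-2 * norms[0]
```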
With this foundation, it is natural to hypothesize that a switching low-rank tensor autoregression like SALT could approximate a switching LDS. There are two ways this intuition could fail: first, if the dynamics in a discrete state of an SLDS are unstable, then Proposition 1 would not hold; second, after a discrete state transition in an SLDS, it may take some time before the dynamics reach stationarity. We empirically test how well SALT approximates an SLDS in Section 5 and find that, across a variety of datasets, SALT obtains commensurate performance with considerably simpler inference and estimation algorithms.
Related Work
Low-rank tensor decompositions of time-invariant autoregressive models Similar to this work, Wang et al. [23] also modeled the transition matrices as a third-order tensor $\mathcal{A} \in \mathbb{R}^{N \times N \times L}$, where $\mathbf{A}_{::l}$ is the $l$-th dynamics matrix. They then constrained the tensor to be low-rank via a Tucker decomposition, as defined in (1). However, unlike SALT, their model was time-invariant, and they did not have an ARHMM structure or make connections to the LDS and SLDS, as in Proposition 1.
Low-rank tensor decompositions of time-varying autoregressive models Low-rank tensor-based approaches have also been used to model time-varying AR processes [24,25]. Harris et al. [24] introduced TVART, which first splits the data into T contiguous fixed-length segments, each with its own AR-1 process. TVART can be thought of as defining a T × N × N ARHMM dynamics tensor and progressing through discrete states at fixed time points. This tensor is parameterized using the CP decomposition and optimized using an alternating least squares algorithm, with additional penalties such that the dynamics of adjacent windows are similar. By contrast, SALT automatically segments, rather than windows, the time-series into learned and re-usable discrete states.
Zhang et al. [25] constructed a Bayesian model of higher-order AR matrices that can vary over time.
First, H VAR dynamics tensors are specified, parameterized as third-order tensors with a rank-1 CP decomposition. The dynamics at any given time are then defined as a weighted sum of the tensors, where the weights have a prior density specified by an Ising model. Finally, inference over the weights is performed using MCMC. This method can be interpreted as a factorial ARHMM and hence offers substantial modeling flexibility, but it sacrifices computational tractability when H is large.
Low-rank tensor decompositions of neural networks Low-rank tensor decomposition methods have also been used to make neural networks more parameter efficient. Novikov et al. [26] used the tensor-train decomposition [27] on the dense weight matrices of the fully-connected layers to reduce the number of parameters. Yu et al. [28] and Qiu et al. [29] applied the tensor-train decomposition to the weight tensors for polynomial interactions between the hidden states of recurrent neural networks (RNNs) to efficiently capture high-order temporal dependencies. Unlike switching models with linear dynamics, recurrent neural networks have dynamics that are hard to interpret, their state estimates are not probabilistic, and they do not provide experimentally useful data segmentations.
Linear dynamical systems and low-rank linear recurrent neural networks Valente et al. [30] recently examined the relationship between LDSs and low-rank linear RNNs. They provide the conditions under which low-rank linear RNNs can exactly model the first-order autoregressive distributions of LDSs, and derive the transformation to convert between model classes under those conditions. This result has close parallels to Proposition 1. Under the conditions identified by Valente et al. [30], the approximation in Proposition 1 becomes exact with just one lag term. However, when those conditions are not satisfied, we show that one still recovers an LDS approximation with a bounded error that decays exponentially in the number of lag terms.
Results
We now empirically validate SALT by first validating the theoretical claims made in Section 3, and then apply SALT to two synthetic examples to compare SALT to existing methods. We conclude by using SALT to analyze real mouse behavioral recordings and C. elegans neural recordings.
SALT Faithfully Approximates LDS
To test the theoretical result that SALT can closely approximate a linear dynamical system, we fit SALT models to data sampled from an LDS. The LDS has D = 7 dimensional latent states with random rotational dynamics, where Γ has n = 1 real eigenvalue and m = 3 pairs of complex eigenvalues, and N = 20 observations with a random emission matrix.
For Figure 2, we trained CP-SALT and Tucker-SALT with $L = 50$ lags and varying ranks. We first analyzed how well SALT reconstructed the parameters of the autoregressive dynamics tensor. As predicted by Proposition 1, Figure 2A shows that the mean squared errors between the SALT tensor and the autoregressive tensor corresponding to the simulated LDS are the lowest when the ranks of CP-SALT and Tucker-SALT are $n + 3m = 10$ and $n + 2m = 7$ respectively. We then computed log-likelihoods on 5,000 timesteps of held-out test data (Figure 2B). Interestingly, the predictive performance of both CP-SALT and Tucker-SALT reach the likelihood of the ground truth LDS model with rank $n + 2m = 7$, suggesting that sometimes smaller tensors than suggested by Proposition 1 may still be able to provide good approximations to the data. We also show in Figures 2C and 2D that, as predicted, SALT models require much less data to fit than ARHMMs. We show extended empirical results and discussion on Proposition 1 in Appendix D.1.

[Figure 3: Fit trajectories and filtered observations from SLDS and SALT models. Colors indicate discrete state for ground truth (in (A)) and fitted models. SLDS and SALT find comparable filtered trajectories and observations. Note: we manually align latent trajectories for ease of inspection as both SLDS and SALT are only identifiable in the latent space up to a linear transformation.]
Synthetic Switching LDS Examples
Proposition 1 quantifies the convergence properties of low-rank tensor regressions when approximating stable LDSs. Next we tested how well SALT can approximate the more expressive switching LDSs. We first applied SALT to data generated from a recurrent SLDS [15], where the twodimensional ground truth latent trajectory resembles a NASCAR ® track ( Figure 3A). SALT accurately reconstructed the ground truth filtered trajectories and discrete state segmentation, and yielded very similar results to an SLDS model. We also tested the ability of SALT to model nonlinear dynamics -specifically, a Lorenz attractor -which SLDSs are capable of modeling. Again, SALT accurately reconstructed ground truth latents and observations, and closely matched SLDS segmentations. These results suggest that SALT models provide a good alternative to SLDS models. Finally, in Appendix D.3, we used SLDS-generated data to compare SALT and TVART [24], another tensor-based method for modeling autoregressive processes, and find that SALT more accurately reconstructed autoregressive dynamics tensors than TVART.
Modeling Mouse Behavior
Next we considered a video segmentation problem commonly faced in the field of computational neuroethology [1]. Wiltschko et al. [2] collected videos of mice freely behaving in a circular open field. They projected the video data onto the top 10 principal components ( Figure 4A) and used an ARHMM to segment the PCA time series into distinct behavioral states. Here, we compared ARHMMs and CP-SALT with data from three mice. We used the first 35,949 timesteps of each recording, which were collected at 30Hz resolution. We used H = 50 discrete states and fitted ARHMMs and CP-SALT models with varying lags and ranks.
The likelihood on a held-out validation set shows that the ARHMM overfitted quickly as the number of lags increased, while CP-SALT was more robust to overfitting (Figure 4B). We compared log-likelihoods of the best model (evaluated on the validation set) on a separate held-out test set and found that CP-SALT consistently outperformed ARHMM across mice (Figure 4C).
We also investigated the quality of SALT segmentations of the behavioral data (Appendix E.3). We found that the PCA trajectories upon transition into a discrete SALT state were highly stereotyped, suggesting that SALT segments the data into consistent behavioral states. Furthermore, CP-SALT used fewer discrete states than the ARHMM, suggesting that the ARHMM may have oversegmented and that CP-SALT offers a more parsimonious description of the data.
Modeling C. elegans Neural Data
Finally, we analyzed neural recordings of an immobilized C. elegans worm from Kato et al. [31]. SLDS have previously been used to capture the time-varying low-dimensional dynamics of the neural activity [9,10]. We compared SLDS, ARHMM, and CP-SALT with 18 minutes of neural traces (recorded at 3Hz; ∼3200 timesteps) from one worm, in which 48 neurons were confidently identified. The dataset also contains 7 manually identified state labels based on the neural activity.
We used H = 7 discrete states and fitted SLDSs, ARHMMs, and CP-SALT with varying lags and ranks (or continuous latent dimensions for SLDSs). Following Linderman et al. [9], we searched for sets of hyperparameters that achieve ∼90% explained variance on a held-out test dataset (see Appendix F for more details). For ARHMMs and CP-SALT, we chose a larger lag (L = 9, equivalent to 3 seconds) to examine the long-timescale correlations among the neurons.
[Figure 5: (A) Example data with manually generated labels (Given), as well as segmentations generated by SALT, SLDS and ARHMM models. Learned states are colored based on the permutation of states that best matches given labels. All methods produce comparable segmentations, with high agreement with the given labels. (B) Confusion matrix of SALT-generated labels. (C) One-dimensional autoregressive filters learned in two states by SALT (identified as ventral and dorsal turns). Colors indicate the area under curve (red is positive; blue is negative). The first four rows are neurons known to mediate ventral turns, while the last two rows mediate dorsal turns [31,32,34]. These known behavior-tuned neurons generally have larger magnitude autoregressive filters. Interestingly, AVFL and AVFR also have large filters for dorsal turns. These neurons do not have a well-known function. However, they are associated with motor neurons, and so may simultaneously activate due to factors that co-occur with turning. This highlights how SALT may be used for proposing novel relationships in systems.]

We find that SALT can perform as well as SLDSs and ARHMMs in terms of held-out explained variance ratio (a metric used by previous work [9]). As expected, we find that CP-SALT can achieve these results with far fewer parameters than ARHMMs, and with a parameter count closer to SLDS than ARHMM (as more continuous latent states were required in an SLDS to achieve ∼90% explained variance; see Appendix F). Figure 5A shows that SALT, SLDS and ARHMM produce similar segmentations to the given labels, as evidenced by the confusion matrix having high entries on the leading diagonal (Figure 5B and Appendix F). Figure 5C shows the one-dimensional autoregressive filters learned by CP-SALT, defined as
$$\sum_{i=1}^{D_1} \sum_{j=1}^{D_2} \sum_{k=1}^{D_3} g^{(h)}_{ijk}\, u^{(h)}_{pi}\, v^{(h)}_{qj}\, \mathbf{w}^{(h)}_{:k}$$
for neurons $p$ and $q$. We see that neurons believed to be involved in particular behavioral states have high weights in the filter (e.g., SMDV during the "Ventral Turn" state and SMDD during the "Dorsal Turn" state [9,[31][32][33][34]). This highlights how switching autoregressive models can reveal state-dependent functional interactions between neurons (or observed states more generally). In Appendix F, we show the autoregressive filters learned by an ARHMM, an SLDS, and a generalized linear model (GLM), a method commonly used to model inter-neuronal interactions [35]. Interestingly, the GLM does not find many strong functional interactions between neurons, likely because it is averaging over many unique discrete states. In addition to its advantages in parameter efficiency and estimation, SALT thus provides a novel method for finding changing functional interactions across neurons at multiple timescales.
Discussion
We introduce switching autoregressive low-rank tensor (SALT) models: a novel model class that parameterizes the autoregressive tensors of an ARHMM with a low-rank factorization. This constraint allows SALT to model time-series data with fewer parameters than ARHMMs and with simpler estimation procedures than SLDSs. We also make theoretical connections between low-rank tensor regressions and LDSs. We then demonstrate, with both synthetic and real datasets, that SALT offers both efficiency and interpretability, striking an advantageous balance between the ARHMM and SLDS. Moreover, SALT offers an enhanced ability to investigate the interactions across observations, such as neurons, across different timescales in a data-efficient manner.
SALT could be extended in many ways. For example, neural spike trains are often modeled with Poisson likelihoods instead of SALT's Gaussian noise model. In this case, the E-step would still be exact, but the M-step would no longer have closed-form coordinate updates. Likewise, the discrete state transitions could be allowed to depend on the current observations, as in recurrent state space models [15]. Altogether, SALT offers simple and effective means of modeling and inference for complex, time-varying dynamical systems.

Supplementary Materials for: Switching Autoregressive Low-rank Tensor Models

- Appendix A: SALT Optimization via Tensor Regression.
- Appendix B: SALT approximates a (Switching) Linear Dynamical System.
- Appendix C: Single-subspace SALT.
- Appendix D: Synthetic Data Experiments.
- Appendix E: Modeling Mouse Behavior.
- Appendix F: Modeling C. elegans neural data.
A SALT Optimization via Tensor Regression
Let $\mathbf{y}_t \in \mathbb{R}^{N_1}$ be the $t$-th outputs and $\mathbf{X}_t \in \mathbb{R}^{N_2 \times N_3}$ be the $t$-th inputs. The regression weights are a tensor $\mathcal{A} \in \mathbb{R}^{N_1 \times N_2 \times N_3}$, which we model via a Tucker decomposition,
$$\mathcal{A} = \sum_{i=1}^{D_1} \sum_{j=1}^{D_2} \sum_{k=1}^{D_3} g_{ijk}\, \mathbf{u}_{:i} \circ \mathbf{v}_{:j} \circ \mathbf{w}_{:k}, \tag{10}$$
where $\mathbf{u}_{:i}$, $\mathbf{v}_{:j}$, and $\mathbf{w}_{:k}$ are columns of the factor matrices $\mathbf{U} \in \mathbb{R}^{N_1 \times D_1}$, $\mathbf{V} \in \mathbb{R}^{N_2 \times D_2}$, and $\mathbf{W} \in \mathbb{R}^{N_3 \times D_3}$, respectively, and $g_{ijk}$ are entries in the core tensor $\mathcal{G} \in \mathbb{R}^{D_1 \times D_2 \times D_3}$. Consider the linear model $\mathbf{y}_t \sim \mathcal{N}(\mathcal{A} \times_{2,3} \mathbf{X}_t,\, \mathbf{Q})$, where $\mathcal{A} \times_{2,3} \mathbf{X}_t$ is defined using the Tucker decomposition of $\mathcal{A}$ as,
$$\mathcal{A} \times_{2,3} \mathbf{X}_t = \mathbf{A}_{(1)} \mathrm{vec}(\mathbf{X}_t) \tag{11}$$
$$= \mathbf{U}\mathbf{G}_{(1)} (\mathbf{V}^\top \otimes \mathbf{W}^\top)\, \mathrm{vec}(\mathbf{X}_t) \tag{12}$$
$$= \mathbf{U}\mathbf{G}_{(1)}\, \mathrm{vec}(\mathbf{V}^\top \mathbf{X}_t \mathbf{W}) \tag{13}$$
where $\mathbf{A}_{(1)} \in \mathbb{R}^{N_1 \times N_2 N_3}$ and $\mathbf{G}_{(1)} \in \mathbb{R}^{D_1 \times D_2 D_3}$ are mode-1 matricizations of the corresponding tensors. Note that these equations assume that matricization and vectorization are performed in row-major order, as in Python but opposite to what is typically used in Wikipedia articles.
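Identity (13) and the row-major conventions can be verified directly with NumPy's row-major reshape (random shapes chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
D1, D2, D3 = 2, 3, 4
N1, N2, N3 = 5, 6, 7
G = rng.normal(size=(D1, D2, D3))
U = rng.normal(size=(N1, D1))
V = rng.normal(size=(N2, D2))
W = rng.normal(size=(N3, D3))
X = rng.normal(size=(N2, N3))

# A x_{2,3} X computed directly from the Tucker factors:
lhs = np.einsum('ijk,ai,bj,ck,bc->a', G, U, V, W, X)
# Eq. (13): mode-1 matricize the core (row-major) and vectorize V^T X W.
G1 = G.reshape(D1, D2 * D3)
rhs = U @ G1 @ (V.T @ X @ W).reshape(-1)
assert np.allclose(lhs, rhs)
```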
Equation (13) can be written in multiple ways, and these equivalent forms will be useful for deriving the updates below. We have,
$$\mathcal{A} \times_{2,3} \mathbf{X}_t = \mathbf{U}\mathbf{G}_{(1)} (\mathbf{I}_{D_2} \otimes \mathbf{W}^\top \mathbf{X}_t^\top)\, \mathrm{vec}(\mathbf{V}^\top) \tag{14}$$
$$= \mathbf{U}\mathbf{G}_{(1)} (\mathbf{V}^\top \mathbf{X}_t \otimes \mathbf{I}_{D_3})\, \mathrm{vec}(\mathbf{W}) \tag{15}$$
$$= \left( \mathbf{U} \otimes \mathrm{vec}(\mathbf{V}^\top \mathbf{X}_t \mathbf{W})^\top \right) \mathrm{vec}(\mathcal{G}). \tag{16}$$
We minimize the negative log likelihood by coordinate descent.
Optimizing the output factors Let
$$\mathbf{x}_t = \mathbf{G}_{(1)}\, \mathrm{vec}(\mathbf{V}^\top \mathbf{X}_t \mathbf{W}) \tag{17}$$
for fixed $\mathbf{V}$, $\mathbf{W}$, and $\mathcal{G}$. The NLL as a function of $\mathbf{U}$ is,
$$\mathcal{L}(\mathbf{U}) = \frac{1}{2} \sum_t (\mathbf{y}_t - \mathbf{U}\mathbf{x}_t)^\top \mathbf{Q}^{-1} (\mathbf{y}_t - \mathbf{U}\mathbf{x}_t). \tag{18}$$
This is a standard least squares problem with solution
$$\mathbf{U}^\star = \left( \sum_t \mathbf{y}_t \mathbf{x}_t^\top \right) \left( \sum_t \mathbf{x}_t \mathbf{x}_t^\top \right)^{-1}. \tag{19}$$
Optimizing the core tensors Let $\mathbf{X}_t = \mathbf{U} \otimes \mathrm{vec}(\mathbf{V}^\top \mathbf{X}_t \mathbf{W})^\top \in \mathbb{R}^{N_1 \times D_1 D_2 D_3}$ denote the coefficient on $\mathrm{vec}(\mathcal{G})$ in eq. (16). The NLL as a function of $\mathbf{g} = \mathrm{vec}(\mathcal{G})$ is,
$$\mathcal{L}(\mathbf{g}) = \frac{1}{2} \sum_t (\mathbf{y}_t - \mathbf{X}_t \mathbf{g})^\top \mathbf{Q}^{-1} (\mathbf{y}_t - \mathbf{X}_t \mathbf{g}). \tag{20}$$
The minimizer of this quadratic form is,
$$\mathbf{g}^\star = \left( \sum_t \mathbf{X}_t^\top \mathbf{Q}^{-1} \mathbf{X}_t \right)^{-1} \sum_t \mathbf{X}_t^\top \mathbf{Q}^{-1} \mathbf{y}_t \tag{21}$$
Optimizing the input factors Let
$$\mathbf{X}_t = \mathbf{U}\mathbf{G}_{(1)} (\mathbf{I}_{D_2} \otimes \mathbf{W}^\top \mathbf{X}_t^\top) \tag{22}$$
for fixed $\mathbf{U}$, $\mathbf{W}$, and $\mathcal{G}$. The NLL as a function of $\mathbf{v} = \mathrm{vec}(\mathbf{V}^\top)$ is,
$$\mathcal{L}(\mathbf{v}) = \frac{1}{2} \sum_t (\mathbf{y}_t - \mathbf{X}_t \mathbf{v})^\top \mathbf{Q}^{-1} (\mathbf{y}_t - \mathbf{X}_t \mathbf{v}). \tag{23}$$
The minimizer of this quadratic form is,
$$\mathbf{v}^\star = \left( \sum_t \mathbf{X}_t^\top \mathbf{Q}^{-1} \mathbf{X}_t \right)^{-1} \sum_t \mathbf{X}_t^\top \mathbf{Q}^{-1} \mathbf{y}_t \tag{24}$$
Optimizing the lag factors Let
$$\mathbf{X}_t = \mathbf{U}\mathbf{G}_{(1)} (\mathbf{V}^\top \mathbf{X}_t \otimes \mathbf{I}_{D_3}) \tag{25}$$
for fixed $\mathbf{U}$, $\mathbf{V}$, and $\mathcal{G}$. The NLL as a function of $\mathbf{w} = \mathrm{vec}(\mathbf{W})$ is,
$$\mathcal{L}(\mathbf{w}) = \frac{1}{2} \sum_t (\mathbf{y}_t - \mathbf{X}_t \mathbf{w})^\top \mathbf{Q}^{-1} (\mathbf{y}_t - \mathbf{X}_t \mathbf{w}). \tag{26}$$
The minimizer of this quadratic form is,
$$\mathbf{w}^\star = \left( \sum_t \mathbf{X}_t^\top \mathbf{Q}^{-1} \mathbf{X}_t \right)^{-1} \sum_t \mathbf{X}_t^\top \mathbf{Q}^{-1} \mathbf{y}_t \tag{27}$$
Multiple discrete states If we have discrete states $z_t \in \{1, \ldots, H\}$ and each state has its own parameters $(\mathcal{G}^{(h)}, \mathbf{U}^{(h)}, \mathbf{V}^{(h)}, \mathbf{W}^{(h)}, \mathbf{Q}^{(h)})$, then letting $\omega^{(h)}_t = \mathbb{E}[z_t = h]$ denote the weights from the E-step, the summations in the coordinate updates are weighted by $\omega^{(h)}_t$. For example, the coordinate update for the core tensors becomes,
$$\mathbf{g}^{(h)\star} = \left( \sum_t \omega^{(h)}_t \mathbf{X}^{(h)\top}_t (\mathbf{Q}^{(h)})^{-1} \mathbf{X}^{(h)}_t \right)^{-1} \sum_t \omega^{(h)}_t \mathbf{X}^{(h)\top}_t (\mathbf{Q}^{(h)})^{-1} \mathbf{y}_t \tag{28}$$
B SALT approximates a (Switching) Linear Dynamical System
We now re-state and provide a full proof for Proposition 1.
Proposition 1 (Low-Rank Tensor Autoregressions Approximate Stable Linear Dynamical Systems).
Consider a stable linear time-invariant Gaussian dynamical system. We define the steady-state Kalman gain matrix as K = lim_{t→∞} K_t, and Γ = A(I − KC). The matrix Γ ∈ R^{D×D} has eigenvalues λ_1, …, λ_D. Let λ_max = max_d |λ_d|; for a stable LDS, λ_max < 1 [22]. Let n denote the number of real eigenvalues and m the number of complex conjugate pairs. Let ŷ_t^(LDS) = E[y_t | y_{<t}] denote the predictive mean under a steady-state LDS, and ŷ_t^(SALT) the predictive mean under a SALT model. An order-L Tucker-SALT model with rank n + 2m, or a CP-SALT model with rank n + 3m, can approximate the predictive mean of the steady-state LDS with error ∥ŷ_t^(LDS) − ŷ_t^(SALT)∥_∞ = O(λ_max^L).
Proof. A stationary linear dynamical system (LDS) is defined as follows:
x_t = A x_{t−1} + b + ϵ_t   (29)
y_t = C x_t + d + δ_t   (30)

where y_t ∈ R^N is the t-th observation, x_t ∈ R^D is the t-th hidden state, ϵ_t ∼ N(0, Q) i.i.d., δ_t ∼ N(0, R) i.i.d., and θ = (A, b, Q, C, d, R) are the parameters of the LDS.
Following the notation of Murphy [19], the one-step-ahead posterior predictive distribution for the observations of the LDS defined above can be expressed as:
p(y_t | y_{1:t−1}) = N(C μ_{t|t−1} + d, C Σ_{t|t−1} C^⊤ + R)   (31)
where
μ_{t|t−1} = A μ_{t−1} + b   (32)
μ_t = μ_{t|t−1} + K_t r_t   (33)
Σ_{t|t−1} = A Σ_{t−1} A^⊤ + Q   (34)
Σ_t = (I − K_t C) Σ_{t|t−1}   (35)
p(x_1) = N(x_1 | μ_{1|0}, Σ_{1|0})   (36)
K_t = (Σ_{t|t−1}^{−1} + C^⊤ R^{−1} C)^{−1} C^⊤ R^{−1}   (37)
r_t = y_t − C μ_{t|t−1} − d.   (38)
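The information-form gain in eq. (37) agrees with the familiar innovation-form gain Σ_{t|t−1} C^⊤ (C Σ_{t|t−1} C^⊤ + R)^{−1} by the matrix inversion lemma. The small sketch below (a random stable system of our own construction, not from the paper) checks this equivalence along the covariance recursion (34)-(35):

```python
import numpy as np

rng = np.random.default_rng(4)
D, N = 3, 2
A = 0.9 * np.linalg.qr(rng.normal(size=(D, D)))[0]   # stable dynamics (|eigs| = 0.9)
C = rng.normal(size=(N, D))
Q = 0.1 * np.eye(D)
R = 0.2 * np.eye(N)
Rinv = np.linalg.inv(R)

Sigma = np.eye(D)   # Sigma_{1|0}
for _ in range(200):
    # Information-form gain, as in eq. (37) ...
    K_info = np.linalg.solve(np.linalg.inv(Sigma) + C.T @ Rinv @ C, C.T @ Rinv)
    # ... equals the innovation-form gain
    K_innov = Sigma @ C.T @ np.linalg.inv(C @ Sigma @ C.T + R)
    assert np.allclose(K_info, K_innov)
    # Eqs. (34)-(35): measurement update, then propagate to the next prediction
    Sigma = A @ ((np.eye(D) - K_info @ C) @ Sigma) @ A.T + Q
```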
We can then expand the mean Cµ t|t−1 + d as follows:
C μ_{t|t−1} + d = C Σ_{l=1}^{t−1} Γ_l A K_{t−l} y_{t−l} + C Σ_{l=1}^{t−1} Γ_l (b − A K_{t−l} d) + d   (39)
where
Γ_l = Π_{i=1}^{l−1} A(I − K_{t−i} C) for l ∈ {2, 3, …},   (40)
Γ_1 = I.   (41)
Theorem 3.3.3 of Davis and Vinter [22] (reproduced with our notation below) states that for a stabilizable and detectable system, lim_{t→∞} Σ_{t|t−1} = Σ, where Σ is the unique solution of the discrete algebraic Riccati equation
Σ = A Σ A^⊤ − A Σ C^⊤ (C Σ C^⊤ + R)^{−1} C Σ A^⊤ + Q.   (42)
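The steady-state Σ can be found by simply iterating the predictive Riccati recursion to a fixed point. The sketch below (again a random stable system of our own construction) checks that the fixed point satisfies the DARE (42) and that Γ = A(I − KC) is stable:

```python
import numpy as np

rng = np.random.default_rng(5)
D, N = 3, 2
A = 0.9 * np.linalg.qr(rng.normal(size=(D, D)))[0]   # stable dynamics
C = rng.normal(size=(N, D))
Q = 0.1 * np.eye(D)
R = 0.2 * np.eye(N)

def riccati_step(S):
    """One step of the predictive Riccati recursion (right-hand side of eq. 42)."""
    return A @ S @ A.T - A @ S @ C.T @ np.linalg.solve(C @ S @ C.T + R, C @ S @ A.T) + Q

Sigma = np.eye(D)
for _ in range(1000):
    Sigma = riccati_step(Sigma)

# The fixed point satisfies the DARE (42) ...
assert np.linalg.norm(riccati_step(Sigma) - Sigma) < 1e-8

# ... and the closed-loop matrix Gamma = A(I - KC) is stable
K = Sigma @ C.T @ np.linalg.inv(C @ Sigma @ C.T + R)
Gamma = A @ (np.eye(D) - K @ C)
assert np.max(np.abs(np.linalg.eigvals(Gamma))) < 1.0
```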
As we are considering stable autonomous LDSs here, the system is stabilizable and detectable, since all unobservable states are themselves stable [22, 36].

Theorem 3.3.3 (Reproduced from Davis and Vinter [22], updated to our notation and context). The theorem has two parts. (b) If the pair (A, C) is stabilizable then this solution Σ is unique, and Σ_{t|t−1} → Σ as t → ∞, where Σ_{t|t−1} is the sequence generated by (32)-(38) with arbitrary initial covariance Σ_0. Then the matrix Γ = A(I − KC) is stable, where K is the Kalman gain corresponding to Σ; i.e.,
K = (Σ^{−1} + C^⊤ R^{−1} C)^{−1} C^⊤ R^{−1}.   (43)
Proof. See Davis and Vinter [22]. Note that Davis and Vinter [22] define the Kalman gain as AK.
The convergence of the Kalman gain also implies that each term in the sequence Γ l converges to
Γ_l = Π_{i=1}^{l−1} A(I − KC) = (A(I − KC))^{l−1} = Γ^{l−1},   (44)
where, concretely, we define Γ = A(I − KC). We can therefore make the following substitution and approximation
As t → ∞,

C μ_{t|t−1} + d = C Σ_{l=1}^{t−1} Γ_l A K y_{t−l} + C Σ_{l=1}^{t−1} Γ_l (b − A K d) + d   (45)
= C Σ_{l=1}^{L} Γ_l A K y_{t−l} + C Σ_{l=1}^{L} Γ_l (b − A K d) + d + Σ_{l=L+1}^{∞} F_{Γ_l}   (46)
≈ C Σ_{l=1}^{L} Γ_l A K y_{t−l} + C Σ_{l=1}^{L} Γ_l (b − A K d) + d   (47)
The approximation is introduced by truncating the sequence to the first L terms and discarding the higher-order tail Σ_{l=L+1}^{∞} F_{Γ_l} in (46). It is important to note that each term in (45) is the sum of a geometric sequence multiplied elementwise with y_t.
There are two components we prove from here. First, we derive an element-wise bound on the error introduced by the truncation, and verify that, under the conditions outlined, the bound decays monotonically in L. We then show that Tucker and CP decompositions can represent the truncated summations in (47), and derive the minimum rank required for this representation to be exact.
Bounding The Error Term We first rearrange the truncated terms in (45), where we define x_l ≜ A K y_{t−l} + b − A K d:

Σ_{l=L+1}^{∞} F_{Γ_l} = C Σ_{l=L+1}^{∞} Γ_l A K y_{t−l} + C Σ_{l=L+1}^{∞} Γ_l (b − A K d)   (48)
= Σ_{l=L+1}^{∞} C Γ_l x_l   (49)
= Σ_{l=L+1}^{∞} C E Λ^{l−1} E^{−1} x_l   (50)
= Σ_{l=L+1}^{∞} P Λ^{l−1} q_l,   (51)
where EΛE^{−1} is the eigendecomposition of Γ, P ≜ CE, and q_l ≜ E^{−1} x_l. We now consider the infinity norm of the error and apply the triangle and Cauchy-Schwarz inequalities. We can write the bound on the error as
ϵ = [Σ_{l=L+1}^{∞} F_{Γ_l}]_n, where n = argmax_k [Σ_{l=L+1}^{∞} F_{Γ_l}]_k   (52)
= Σ_{l=L+1}^{∞} Σ_{d=1}^{D} p_{nd} λ_d^{l−1} q_{l,d}   (53)
≤ Σ_{l=L+1}^{∞} Σ_{d=1}^{D} |p_{nd}| |λ_d|^{l−1} |q_{l,d}|.   (54)
Upper bounding the absolute magnitude of q l,d by W provides a further upper bound, which we can then rearrange
ϵ ≤ W Σ_{l=L+1}^{∞} Σ_{d=1}^{D} |p_{nd}| |λ_d|^{l−1}   (55)
= W Σ_{d=1}^{D} |p_{nd}| Σ_{l=L+1}^{∞} |λ_d|^{l−1}.   (56)
The first two factors are constant, so the upper bound is determined by the sums of powers of the eigenvalue magnitudes. We can bound each such sum by replacing every eigenvalue magnitude with the maximum eigenvalue magnitude (the spectral radius), denoted λ_max:
ϵ ≤ W Σ_{d=1}^{D} |p_{nd}| Σ_{l=L+1}^{∞} λ_max^{l−1},   (57)
where this second summation is no longer a function of d, and W Σ_{d=1}^{D} |p_{nd}| is constant. The summation is the tail of a geometric series. Invoking Theorem 3.3.3 of Davis and Vinter [22] again, the matrix Γ has only stable eigenvalues, and hence λ_max < 1. Therefore the sum converges to
Σ_{l=L+1}^{∞} λ_max^{l−1} = λ_max^L / (1 − λ_max).   (58)
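A quick numerical check of the geometric-tail identity (58), truncating the infinite sum far beyond L (values chosen arbitrarily):

```python
import numpy as np

lam, L = 0.8, 10
tail = sum(lam ** (l - 1) for l in range(L + 1, 2000))   # truncated far beyond L
assert np.isclose(tail, lam ** L / (1 - lam))
```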
Rearranging again, we see that the absolute error on the n-th element of y_t is bounded by a constant times a power of the spectral radius:
ϵ ≤ W Σ_{d=1}^{D} |p_{nd}| λ_max^L / (1 − λ_max)   (59)
= O(λ_max^L).   (60)

More specifically, for a stable linear time-invariant dynamical system where q (and hence y) is bounded, the error incurred by truncation decays exponentially in the window length L. Furthermore, this error bound decays faster for systems with a smaller spectral radius.
Diagonalizing the System We first transform Γ into real modal form, Γ = EΛE^{−1}, where E holds the (generalized) eigenvectors and Λ is the block-diagonal real modal matrix of eigenvalues of Γ. Letting Γ have n real eigenvalues and m pairs of complex eigenvalues (i.e., n + 2m = D), we can express E, Λ, and E^{−1} as:
E = [a_1 … a_n b_1 c_1 … b_m c_m]   (61)
Λ = blockdiag(λ_1, …, λ_n, [σ_1 ω_1; −ω_1 σ_1], …, [σ_m ω_m; −ω_m σ_m])   (62)
E^{−1} = [d_1^⊤; …; d_n^⊤; e_1^⊤; f_1^⊤; …; e_m^⊤; f_m^⊤]   (63)
where a 1 . . . a n are the right eigenvectors corresponding to n real eigenvalues λ 1 . . . λ n , and b i and c i are the real and imaginary parts of the eigenvector corresponding to the complex eigenvalue σ i + jω i . Note that
Γ_l = (A(I − KC))^{l−1} = E Λ^{l−1} E^{−1}.   (64)
The l-th power of Λ, where l ≥ 0, can be expressed as:

Λ^l = blockdiag(λ_1^l, …, λ_n^l, [σ_{1,l} ω_{1,l}; −ω_{1,l} σ_{1,l}], …, [σ_{m,l} ω_{m,l}; −ω_{m,l} σ_{m,l}])   (65)

where σ_{i,l} and ω_{i,l} are the real and imaginary parts of (σ_i + jω_i)^l, i.e., σ_{i,l} = σ_i σ_{i,l−1} − ω_i ω_{i,l−1} and ω_{i,l} = σ_i ω_{i,l−1} + ω_i σ_{i,l−1} for l ≥ 1, with σ_{i,0} = 1 and ω_{i,0} = 0.
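Powers of the 2×2 rotation-scaling blocks are just complex multiplication in disguise; the following sketch (values chosen arbitrarily) checks the recursion for the real and imaginary parts of (σ + jω)^l against a direct matrix power:

```python
import numpy as np

sigma, omega, l = 0.6, 0.3, 7
B = np.array([[sigma, omega], [-omega, sigma]])

# sigma_l and omega_l are the real and imaginary parts of (sigma + j*omega)^l
s, w = 1.0, 0.0   # l = 0: sigma_0 = 1, omega_0 = 0
for _ in range(l):
    s, w = sigma * s - omega * w, sigma * w + omega * s

z = (sigma + 1j * omega) ** l
assert np.isclose(s, z.real) and np.isclose(w, z.imag)
assert np.allclose(np.linalg.matrix_power(B, l), [[s, w], [-w, s]])
```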
Tucker Tensor Regression Let H ∈ R^{D×D×L} be a three-way tensor whose l-th frontal slice is H_{::l} = Λ^{l−1}. Let G ∈ R^{D×D×D} be a three-way tensor with g_{ijk} = 1_{i=j=k} for 1 ≤ k ≤ n and, for each complex pair with leading index k ∈ {n+1, n+3, …, n+2m−1}, nonzero entries g_{k,k,k} = g_{k+1,k+1,k} = 1, g_{k+1,k,k+1} = 1, and g_{k,k+1,k+1} = −1; all other entries are zero. Let W ∈ R^{L×D} be a matrix with entries w_{lk} = λ_k^{l−1} for 1 ≤ k ≤ n, w_{lk} = σ_{k,l−1} for k ∈ {n+1, n+3, …, n+2m−1}, and w_{lk} = −ω_{k,l−1} for k ∈ {n+2, n+4, …, n+2m}, where each complex pair's σ and ω are indexed by the pair's leading column. We can then decompose H into G ∈ R^{D×D×D} and W ∈ R^{L×D} such that H = G ×_3 W (Figure 6).
Figure 6: Decomposition of H into G and W such that H = G ×_3 W. Given an LDS whose A(I − KC) has n real eigenvalues and m pairs of complex eigenvalues, this decomposition illustrates how Tucker-SALT can approximate the LDS well with rank n + 2m.
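The construction can be verified on a toy example with one real eigenvalue and one complex pair (n = 1, m = 1, so D = 3); this sketch (our own example, not from the paper) builds G and W as described and checks H_{::l} = Λ^{l−1} slice by slice:

```python
import numpy as np

lam, sigma, omega = 0.7, 0.5, 0.4   # n = 1 real eigenvalue, m = 1 complex pair (D = 3)
Lam = np.array([[lam,    0.0,   0.0],
                [0.0,  sigma, omega],
                [0.0, -omega, sigma]])
L = 8

# Core tensor G: identity entry for the real eigenvalue; sign pattern for the pair
G = np.zeros((3, 3, 3))
G[0, 0, 0] = 1.0                   # real eigenvalue
G[1, 1, 1] = G[2, 2, 1] = 1.0      # sigma_{1,l-1} fills the block diagonal
G[2, 1, 2] = 1.0                   # -omega_{1,l-1} fills the off-diagonals ...
G[1, 2, 2] = -1.0                  # ... with opposite signs

# Lag factors W: powers of the eigenvalues
W = np.zeros((L, 3))
for l in range(L):
    z = (sigma + 1j * omega) ** l  # sigma_{1,l} + j * omega_{1,l}
    W[l] = [lam ** l, z.real, -z.imag]

# Check that each frontal slice G x_3 w_l reproduces Lambda^{l-1}
for l in range(L):
    H_slice = np.einsum('ijk,k->ij', G, W[l])
    assert np.allclose(H_slice, np.linalg.matrix_power(Lam, l))
```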
With V = (E^{−1} A K)^⊤, U = CE, m = C Σ_{l=1}^{L} Γ_l (b − A K d) + d, and X_t = y_{t−1:t−L}, we can rearrange the mean to:
C μ_{t|t−1} + d ≈ C Σ_{l=1}^{L} E Λ^{l−1} E^{−1} A K y_{t−l} + C Σ_{l=1}^{L} Γ_l (b − A K d) + d   (66)
= U Σ_{l=1}^{L} H_{::l} V^⊤ y_{t−l} + m   (67)
= U Σ_{l=1}^{L} (G ×_3 w_l) V^⊤ y_{t−l} + m   (68)
= U Σ_{l=1}^{L} ((G ×_2 V) ×_3 w_l) y_{t−l} + m   (69)
= U Σ_{l=1}^{L} Σ_{j=1}^{D} Σ_{k=1}^{D} g_{:jk} (v_{:j}^⊤ y_{t−l}) w_{lk} + m   (70)
= U Σ_{j=1}^{D} Σ_{k=1}^{D} g_{:jk} (v_{:j}^⊤ X_t w_{:k}) + m   (71)
= Σ_{i=1}^{D} Σ_{j=1}^{D} Σ_{k=1}^{D} u_{:i} g_{ijk} (v_{:j}^⊤ X_t w_{:k}) + m   (72)
= (Σ_{i=1}^{n+2m} Σ_{j=1}^{n+2m} Σ_{k=1}^{n+2m} g_{ijk} u_{:i} • v_{:j} • w_{:k}) ×_{2,3} X_t + m,   (73)

where w_l denotes the l-th row of W.
CP Tensor Regression We rearrange E, Λ^l, and E^{−1} into J, P_l, and S, respectively, as follows:

J = [a_1 … a_n (b_1 + c_1) b_1 c_1 … (b_m + c_m) b_m c_m]   (74)
P_l = diag(λ_1^l, …, λ_n^l, σ_{1,l}, α_{1,l}, β_{1,l}, …, σ_{m,l}, α_{m,l}, β_{m,l})   (75)
S = [d_1^⊤; …; d_n^⊤; (e_1 + f_1)^⊤; f_1^⊤; e_1^⊤; …; (e_m + f_m)^⊤; f_m^⊤; e_m^⊤]   (76)

where J ∈ R^{D×(n+3m)}, P_l ∈ R^{(n+3m)×(n+3m)}, S ∈ R^{(n+3m)×D}, α_{i,l} = ω_{i,l} − σ_{i,l}, and β_{i,l} = −ω_{i,l} − σ_{i,l}. With these definitions we can diagonalize (A(I − KC))^l as J P_l S. Let V = (S A K)^⊤, U = CJ, m = C Σ_{l=1}^{L} Γ_l (b − A K d) + d, and X_t = y_{t−1:t−L}.
Let W ∈ R^{L×(n+3m)} be a matrix whose element in the l-th row and k-th column is p_{l−1,kk} (i.e., the k-th diagonal element of P_{l−1}), and let G ∈ R^{(n+3m)×(n+3m)×(n+3m)} be a superdiagonal three-way tensor with g_{ijk} = 1_{i=j=k}. We can then rearrange the mean to:
C μ_{t|t−1} + d ≈ C Σ_{l=1}^{L} E Λ^{l−1} E^{−1} A K y_{t−l} + C Σ_{l=1}^{L} Γ_l (b − A K d) + d   (77)
= C Σ_{l=1}^{L} J P_{l−1} S A K y_{t−l} + m   (78)
= U Σ_{l=1}^{L} P_{l−1} V^⊤ y_{t−l} + m   (79)
= Σ_{l=1}^{L} Σ_{i=1}^{n+3m} Σ_{j=1}^{n+3m} Σ_{k=1}^{n+3m} g_{ijk} u_{:i} (v_{:j}^⊤ y_{t−l}) p_{l−1,kk} + m   (80)
= Σ_{i=1}^{n+3m} Σ_{j=1}^{n+3m} Σ_{k=1}^{n+3m} g_{ijk} u_{:i} (v_{:j}^⊤ X_t w_{:k}) + m   (81)
= (Σ_{i=1}^{n+3m} Σ_{j=1}^{n+3m} Σ_{k=1}^{n+3m} g_{ijk} u_{:i} • v_{:j} • w_{:k}) ×_{2,3} X_t + m.   (82)
And so concludes the proof.
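As a sanity check on the CP construction, the sketch below verifies, for a single complex pair with an arbitrary (invertible) modal basis of our own choosing, that J P_l S reproduces the block E Λ^l E^{−1}:

```python
import numpy as np

rng = np.random.default_rng(6)
sigma, omega = 0.5, 0.4
B = np.array([[sigma, omega], [-omega, sigma]])   # 2x2 block of the real modal form

E2 = rng.normal(size=(2, 2))          # modal basis: columns b and c
b, c = E2[:, 0], E2[:, 1]
E2inv = np.linalg.inv(E2)
e, f = E2inv[0], E2inv[1]             # rows e^T and f^T

J = np.column_stack([b + c, b, c])    # 2 x 3 factor, as in eq. (74)
S = np.vstack([e + f, f, e])          # 3 x 2 factor, as in eq. (76)

for l in range(8):
    z = (sigma + 1j * omega) ** l
    s_l, w_l = z.real, z.imag
    # Diagonal P_l holds sigma_l, alpha_l = omega_l - sigma_l, beta_l = -omega_l - sigma_l
    P_l = np.diag([s_l, w_l - s_l, -w_l - s_l])
    lhs = J @ P_l @ S
    rhs = E2 @ np.linalg.matrix_power(B, l) @ E2inv
    assert np.allclose(lhs, rhs)
```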
C Single-subspace SALT
Here we explicitly define the generative model of multi-subspace and single-subspace Tucker-SALT and CP-SALT. Single-subspace SALT is analogous to single-subspace SLDSs (also defined below), where certain emission parameters (e.g., C, d, and R) are shared across discrete states. This reduces the expressivity of the model, but also reduces the number of parameters in the model. Note that both variants of all models have the same structure on the transition dynamics of z t .
Multi-subspace SALT Note that the SALT model defined in (6) and (7) in the main text is a multi-subspace SALT. We repeat the definition here for ease of comparison.
y_t ∼ N((Σ_{i=1}^{D1} Σ_{j=1}^{D2} Σ_{k=1}^{D3} g_{ijk}^{(z_t)} u_{:i}^{(z_t)} • v_{:j}^{(z_t)} • w_{:k}^{(z_t)}) ×_{2,3} y_{t−1:t−L} + b^{(z_t)}, Σ^{(z_t)}),   (83)

with D_1 = D_2 = D_3 = D and G diagonal for CP-SALT.
Single-subspace Tucker-SALT In single-subspace methods, the output factors are shared across discrete states
y_t ∼ N(U(m^{(z_t)} + (Σ_{j=1}^{D2} Σ_{k=1}^{D3} g_{:jk}^{(z_t)} • v_{:j}^{(z_t)} • w_{:k}^{(z_t)}) ×_{2,3} y_{t−1:t−L}) + b, Σ^{(z_t)}),   (84)

where m^{(z_t)} ∈ R^{D1}.
Single-subspace CP-SALT Single-subspace CP-SALT requires an extra tensor compared to Tucker-SALT, as this tensor can no longer be absorbed into the core tensor.
y_t ∼ N(U′ m^{(z_t)} + P^{(z_t)} (Σ_{j=1}^{D2} Σ_{k=1}^{D3} g_{:jk}^{(z_t)} • v_{:j}^{(z_t)} • w_{:k}^{(z_t)}) ×_{2,3} y_{t−1:t−L} + b, Σ^{(z_t)}),   (85)

where U′ ∈ R^{N×D′1}, P^{(z_t)} ∈ R^{D′1×D1}, m^{(z_t)} ∈ R^{D′1}, D_1 = D_2 = D_3 = D, and G is diagonal.
Multi-subspace SLDS Fitting a multi-subspace SLDS is a much harder optimization problem, which we found was often numerically unstable. We therefore do not consider the multi-subspace SLDS in these experiments, but include its definition here for completeness:
x_t ∼ N(A^{(z_t)} x_{t−1} + b^{(z_t)}, Q^{(z_t)}),   (86)
y_t ∼ N(C^{(z_t)} x_t + d^{(z_t)}, R^{(z_t)}).   (87)
Single-subspace SLDS The single-subspace SLDS was used in all of our experiments, and is typically used in practice [8, 15]:

x_t ∼ N(A^{(z_t)} x_{t−1} + b^{(z_t)}, Q^{(z_t)}),   (88)
y_t ∼ N(C x_t + d, R).   (89)
D Synthetic Data Experiments
D.1 Extended Experiments for Proposition 1
In Section 5.1 we showed that Proposition 1 can accurately predict the required rank for CP- and Tucker-SALT models. We showed results for a single LDS for clarity. We now extend this analysis across multiple random LDSs and SALT models. We randomly sampled LDSs with latent dimensions ranging from 4 to 10 and observation dimensions ranging from 9 to 20. For each LDS, we fit 5 randomly initialized CP-SALT and Tucker-SALT models with L = 50 lags. We varied the rank of our fit SALT models according to the rank predicted by Proposition 1. Specifically, we computed the estimated rank for a given LDS, denoted D*, and then fit SALT models with ranks {D* − 2, D* − 1, D*, D* + 1, D* + 2}. According to Proposition 1, we would expect the reconstruction error of the autoregressive tensor to be minimized, and the prediction accuracy to saturate, at D = D*.
To analyze these model fits, we first computed the average mean squared error of the autoregressive tensor corresponding to the LDS simulation, as a function of SALT rank relative to the rank required by Proposition 1. We see, as predicted by Proposition 1, that the error in the autoregressive tensor is nearly always minimized at D* (Figure 7A). Tucker-SALT was always minimized at D*. Some CP-SALT fits have lower MSE at ranks other than predicted by Proposition 1; we believe this is due to local minima in the optimization. We next investigated the test log-likelihood as a function of the relative rank (Figure 7B). Interestingly, the test log-likelihood shows that Tucker-SALT strongly requires the correct rank for accurate prediction, but CP-SALT can often perform well with fewer ranks than predicted (although still a comparable number of ranks to Tucker-SALT). As in Figure 2, these analyses empirically confirm Proposition 1.
D.2 Quantitative Performance: Synthetic Switching LDS Experiments
We include further results and analysis for the NASCAR ® and Lorenz attractor experiments presented in Section 5.2. We compare the marginal likelihood achieved by single-subspace SALT models of different sizes. We see that SALT outperforms ARHMMs, and can fit larger models (more lags) without overfitting (Figure 8). Note that the SLDS does not admit exact inference, and so we cannot readily compute the exact marginal likelihood for the SLDS.
D.3 TVART versus SALT in recovering the parameters of SLDSs
We compared SALT to TVART [24], another tensor-based method for modeling autoregressive processes. We modified TVART (as briefly described in the original paper, [24]) so that it can handle AR(p) processes, as opposed to only AR(1) processes. TVART is also not a probabilistic model (i.e., cannot compute log-likelihoods), and so we focus our comparison on how well these methods recover the parameters of a ground-truth SLDS.
We used the same SLDS that we used to generate the NASCAR® dataset in Section 5.2. We then used L = 7 CP-SALT and Tucker-SALT with ranks 3 and 2, respectively, and computed the MSE between the ground truth tensor and the SALT tensors. For TVART, we used L = 7, a bin size of 10, and ranks 2 and 3 to fit the model to the data. We then clustered the inferred dynamics parameters to assign discrete states. To obtain the TVART parameter estimates, we computed the mean of the dynamics parameters for each discrete state and computed the MSE against the ground truth tensor. The MSE results are reported in Table 2, which shows that SALT models recover the dynamics parameters of the ground truth SLDS more accurately. Furthermore, we see that SALT models use fewer parameters than TVART models for this dataset (as the number of parameters in TVART scales linearly with the number of windows). We also note that TVART cannot be applied to held-out data and, without post-hoc analysis, does not readily have a notion of re-usable dynamics or syllables.
D.4 The effect of the number of switches on the recovery of the parameters of the autoregressive dynamic tensors
We asked how the frequency of discrete state switches affected SALT's ability to recover the autoregressive tensors. We trained CP-SALT, Tucker-SALT, and the ARHMM, all with L = 5 lags, and the SLDS on data sampled from an SLDS with varying numbers of discrete state switches. The ground-truth SLDS model had H = 2 discrete states, N = 20 observations, and D = 7 dimensional continuous latent states. The matrix A^(h)(I − K^(h)C^(h)) of each discrete state of the ground-truth SLDS had 1 real eigenvalue and 3 pairs of complex eigenvalues. We sampled 5 batches of T = 15,000 timesteps of data from the ground-truth SLDS, with s_n ∈ {1, 10, 25, 75, 125} discrete state switches evenly spaced across the data. We then computed the mean squared error (MSE) between the SLDS tensors and the tensors reconstructed by SALT, the ARHMM, and the SLDS (Figure 9). More precisely, we combined the 3rd-order autoregressive tensors from each discrete state into a 4th-order tensor and calculated the MSE based on these 4th-order tensors. As expected, the MSE increased with the number of switches in the data, indicating that the quality of the SALT approximation of an SLDS decreases as the frequency of discrete state switches increases.
E Modeling Mouse Behavior
We include further details for the mouse experiments in Section 5.3.
E.1 Training Details
We used the first 35,949 timesteps of data from each of the three mice, which were collected at 30Hz resolution. We used H = 50 discrete states and fitted ARHMMs and CP-SALT models with varying lags and ranks. Similar to Wiltschko et al. [2], we imposed stickiness on the discrete state transition matrix via a Dirichlet prior with concentration of 1.1 on non-diagonals and 6 × 10 4 on the diagonals. These prior hyperparameters were empirically chosen such that the durations of the inferred discrete states and the given labels were comparable. We trained each model 5 times with random initialization for each hyperparameter, using 100 iterations of EM on a single NVIDIA Tesla P100 GPU.
E.2 Video Generation
Here we describe how the mouse behavioral videos were generated. We first determined the CP-SALT hyperparameters as those which led to the highest log-likelihood on the validation dataset. Then, using that CP-SALT model, we computed the most likely discrete states on the train and test data. Given a discrete state h, we extracted slices of the data whose most likely discrete state was h. We padded the data by 30 frames (i.e. 1 second) both at the beginning and the end of each slice for the movie. A red dot appears on each mouse for the duration of discrete state h. We generated such videos for all 50 discrete states (as long as there existed at least one slice for each discrete state) on the train and test data. For a given discrete state, the mice in each video behaved very similarly (e.g., the mice in the video for state 18 "pause" when the red dots appear, and those in the video for state 32 "walk" forward), suggesting that CP-SALT is capable of segmenting the data into useful behavioral syllables. See "MoSeq_salt_videos_train" and "MoSeq_salt_videos_test" in the supplementary material for the videos generated from the train and test data, respectively. "salt_crowd_i.mp4" refers to the crowd video for state i. We show the principal components for states 1, 2, 13, 32, 33, 47 in Figure 10.
E.3 Modeling Mouse Behavior: Additional Analyses
We also investigated whether SALT qualitatively led to a good segmentation of the behavioral data into discrete states, shown in Figure 10. Figure 10A shows a 30 second example snippet of the test data from one mouse colored by the discrete states inferred by CP-SALT. CP-SALT used fewer discrete states to model the data than the ARHMM ( Figure 10B). Coupled with the finding that CP-SALT improves test-set likelihoods, this suggests that the ARHMM may have oversegmented the data and CP-SALT may be better able to capture the number of behavioral syllables. Figure 10C shows average test data (with two standard deviations) for a short time window around the onset of a discrete state (we also include mouse videos corresponding to that state in the supplementary materials). The shrinking gray area around the time of state onset, along with the similar behaviors of the mice in the video, suggests that CP-SALT is capable of segmenting the data into consistent behavioral syllables.
F Modeling C. elegans Neural Data
We include further details and results for the C. elegans example presented in Section 5.4. This example highlights how SALT can be used to gain scientific insight in to the system.
F.1 Training Details
We used ∼3200 timesteps of data (recorded at 3 Hz) from one worm, for which 48 neurons were confidently identified. The data were manually segmented into seven labels (reverse sustained, slow, forward, ventral turn, dorsal turn, reversal (type 1), and reversal (type 2)). We therefore used H = 7 discrete states in all models (apart from the GLM). After testing multiple lag values, we selected L = 9 for all models, as these longer lags allow us to examine longer-timescale interactions and produced better segmentations across models, with only a small reduction in variance explained. We trained each model 5 times with KMeans initialization, using 100 iterations of EM on a single NVIDIA Tesla V100 GPU. Models that achieved 90% explained variance on a held-out test set were then selected and analyzed (similar to Linderman et al. [9]). Figure 11 shows additional results for training different models. In Figure 11A we see that models with larger ranks (or latent dimension) achieve higher explained variance. Interestingly, longer lags can lead to a slight reduction in the explained variance, likely due to overfitting. This effect is less pronounced in the more constrained single-subspace SALT, but these models achieve lower explained variance ratios throughout. Longer-lag models allow us to inspect longer-timescale dependencies, and so are more experimentally insightful. Figure 11B shows the confusion matrices between the discrete states of the learned models and the given labels; the segmentations were similar across all models that achieved 90% explained variance, with all methods producing similar-quality segmentations. Figures 12 and 13 show extended versions of the autoregressive filters included in Section 5.4. Figure 12 shows the filters learned for ventral and dorsal turns (for which panel A was included in Figure 5), while Figure 13 shows the filters for forward and backward locomotion.
Note that the GLM does not have multiple discrete states, and hence the same filters are used across states. We see for ARHMM and SALT that known-behavior tuned neurons have higher magnitude filters (determined by area under curve), whereas the SLDS and GLM do not recover such strong state-specific tuning.
F.2 Additional Quantitative Results
F.3 Additional Autoregressive Filters
Since the learned SLDS did not have stable within-state dynamics, the autoregressive filters could not be computed using Equation (47). We thus show C(A^(h))^l C^+ for lag l, where C^+ denotes the Moore-Penrose pseudoinverse of C, as a proxy for the autoregressive filters of discrete state h of the SLDS. Note that this is a post-hoc method and does not capture the true dependencies in the observation space.
We see that SALT consistently assigns high autoregressive weight to neurons known to be involved in certain behaviors (see Figures 12 and 13). In contrast, the ARHMM identifies these relationships less reliably, and the estimate of the SLDS autoregressive filters identifies few strong relationships.
As the GLM has only one "state", the autoregressive filters are averaged across states, and so few strong relationships are found. This highlights how the low-rank and switching properties of SALT can be leveraged to glean insight into the system.

Figure 13: Autoregressive tensors learned by different models (forward locomotion and reversal): (A-C) One-dimensional autoregressive filters learned in two states by SALT, SLDS, and ARHMM (identified as forward and reverse), and (D) by a GLM. AVB and RIB are known to mediate forward locomotion, while AVA and AVE are involved in initiating reversals [31,32,37,38].
Figure 1: SALT imposes a low-rank constraint on the autoregressive tensor: (A) The probabilistic graphical model of an ARHMM. (B) An example multi-dimensional time series generated from an ARHMM. Background color indicates which discrete state (and hence autoregressive tensor) was selected at each time. (C) In SALT, each autoregressive dynamics tensor of an ARHMM is parameterized as a low-rank tensor.
The M-step then updates the parameters of the model given the distribution over latent states. For SALT, the emission parameters are θ = {U^(h), V^(h), W^(h), G^(h), b^(h), Σ^(h), π^(h)}_{h=1}^{H}. We use
Figure 2: SALT approximates LDS: Data simulated from an LDS for which n = 1 and m = 3 (see Proposition 1). (A-B) Average mean squared error of the autoregressive tensor corresponding to the LDS simulation and the log-likelihood of test data, as a function of SALT rank. According to Proposition 1, to model the LDS Tucker-SALT and CP-SALT require 7 and 10 ranks respectively (indicated by vertical dashed lines). (C-D) Mean squared error of the learned autoregressive tensor and log-likelihood of test data as a function of training data.

Figure 3: SALT reconstructs simulated SLDS data and Lorenz attractor: (A) Ground truth low-dimensional trajectory generated from a recurrent SLDS, as in [15]; 10-dimensional observations are generated from these latents. (B) The ground truth low-dimensional trajectory is generated from a Lorenz attractor, and 20-dimensional observations are generated from these latents. Only 5 dimensions are shown for visual clarity. (Top) Ground truth observations and trajectories. (Middle and bottom):
Figure 4: CP-SALT consistently outperforms ARHMM on mouse behavior videos and segments data into distinct behavioral syllables: (A) An example frame from the MoSeq dataset. The models were trained on the top 10 principal components of the video frames from three mice. (B) CP-SALT and ARHMM trained with different ranks and lags. Mean and standard deviation across five seeds evaluated on a validation set are shown. The CP-SALT parameterization prevents overfitting for larger lags. (C) Test log-likelihood, averaged across 5 model fits, computed from the best ARHMM and CP-SALT hyperparameters in (B). CP-SALT outperforms ARHMM across all three mice.
Figure 5: CP-SALT provides good segmentations of C. elegans neural data, and inferred low-rank tensors give insights into temporal dependencies among neurons in each discrete state: (A)
(a) If the pair (A, C) is detectable then there exists at least one non-negative solution, Σ, to the discrete algebraic Riccati equation (42).
Figure 7: Extended results examining Proposition 1. Results are shown for the ability of SALT to estimate ten randomly generated LDSs, using five SALT repeats for each LDS. MSEs (in panel A) and log-likelihoods (in panel B) are normalized by the mean MSE and mean log-likelihood of SALT models trained with D = D*. D is the rank of the fit SALT model, and D* is the necessary rank predicted by Proposition 1.
Figure 8: Quantitative performance of different SALT models and ARHMMs (averaged over 3 different runs) on the synthetic experiments presented in Section 5.2. The test-set log likelihood is shown as a function of lags in the SALT model, for both (A) the NASCAR® and (B) Lorenz synthetic datasets.
Figure 9: The quality of the SALT approximation of SLDSs decreases as the number of discrete state switches increases: The data come from an SLDS with H = 2, N = 20, and D = 7. 15,000 timesteps were generated, with varying numbers of evenly spaced discrete state switches (x-axis).
Figure 10: CP-SALT leads to qualitatively good segmentation of the mouse behavioral data into distinct syllables: (A) 30 seconds of test data (Mouse 1) with the discrete states inferred by CP-SALT as the background color. (B) For one mouse, the cumulative number of frames captured by each discrete state, where the discrete states are ordered according to how frequently they occur. (C) The average test data, with two standard deviations, for six states of CP-SALT, aligned to the time of state onset. The shrinkage of the gray region around the state onset tells us that CP-SALT segments the test data consistently.
Figure 11: SALT and SLDS perform comparably on held-out data: (A) Explained variance on a held-out sequence, for single-subspace (SS) and multi-subspace (MS) CP-SALT with L ∈ {1, 3, 6, 9} and the SS SLDS. SS SALT and the SLDS perform comparably; MS SALT achieves a higher explained variance with fewer ranks. The MS SLDS was numerically unstable. (B) Confusion matrices between given labels and predicted labels.
Figure 12: Autoregressive tensors learned by different models (ventral and dorsal turns): (A-C) One-dimensional autoregressive filters learned in two states by SALT, SLDS, and ARHMM (identified as ventral and dorsal turns), and (D) by a GLM. RIV and SMDV are known to mediate ventral turns, while SMDD mediates dorsal turns [31,32,34].
[Figure 5, remaining panels: CP-SALT tensor weights (Rank=11, Lag=9, equivalent to 3 seconds; lags shown left to right) for states active during reverse sustained, slow, forward, ventral turn, dorsal turn, reverse 1, and reverse 2; discrete-state segmentations from SALT, SLDS, and ARHMM against the given labels (REVSUS, SLOW, FWD, VT, DT, REV1, REV2); and the number of timesteps per SALT label, with traces for neurons AIBL, AVAL, AVBR, AVEL, AVER, AVFL, RIBL, RIML, RIS, RIVL, RMED, SMDDL, SMDVL, VA01, and VB02.]
Table of Contents
Appendix A: SALT Optimization via Tensor Regression
Table 2: Results comparing SALT and TVART [24] on the NASCAR example.

Model        | Rank | Tensor Reconstruction MSE (×10⁻³) | Number of parameters
TVART        | 2    | 0.423                             | 1.4K
TVART        | 3    | 0.488                             | 2.0K
Tucker-SALT  | 2    | 0.294                             | 0.6K
CP-SALT      | 3    | 0.297                             | 0.7K
Acknowledgments

This work was supported by grants from the Simons Collaboration on the Global Brain (SCGB 697092), the NIH (U19NS113201, R01NS113119, R01NS130789, and K99NS119787), the Sloan Foundation, and the Stanford Center for Human and Artificial Intelligence. We thank Liam Paninski for his constructive feedback on the paper. We also thank the members of the Linderman Lab for their support and feedback throughout the project.
Sandeep Robert Datta, David J Anderson, Kristin Branson, Pietro Perona, and Andrew Leifer. Computational neuroethology: A call to action. Neuron, 104(1):11-24, 2019.
Alexander B Wiltschko, Matthew J Johnson, Giuliano Iurilli, Ralph E Peterson, Jesse M Katon, Stan L Pashkovski, Victoria E Abraira, Ryan P Adams, and Sandeep Robert Datta. Mapping sub-second structure in mouse behavior. Neuron, 88(6):1121-1135, 2015.
Julia Costacurta, Lea Duncker, Blue Sheffer, Winthrop Gillis, Caleb Weinreb, Jeffrey Markowitz, Sandeep R Datta, Alex Williams, and Scott Linderman. Distinguishing discrete and continuous behavioral variability using warped autoregressive HMMs. Advances in Neural Information Processing Systems, 35:23838-23850, 2022.
Aram Giahi Saravani, Kiefer J Forseth, Nitin Tandon, and Xaq Pitkow. Dynamic brain interactions during picture naming. eNeuro, 6(4), 2019.
Stefano Recanatesi, Ulises Pereira-Obilinovic, Masayoshi Murakami, Zachary Mainen, and Luca Mazzucato. Metastable attractors explain the variable timing of stable behavioral action sequences. Neuron, 110(1):139-153, 2022.
Yariv Ephraim, David Malah, and B-H Juang. On the application of hidden Markov models for enhancing noisy speech. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(12):1846-1856, 1989.
Zoubin Ghahramani and Geoffrey E Hinton. Variational learning for switching state-space models. Neural Computation, 12(4):831-864, 2000.
Biljana Petreska, Byron M Yu, John P Cunningham, Gopal Santhanam, Stephen Ryu, Krishna V Shenoy, and Maneesh Sahani. Dynamical segmentation of single trials from population neural data. Advances in Neural Information Processing Systems, 24, 2011.
Scott Linderman, Annika Nichols, David Blei, Manuel Zimmer, and Liam Paninski. Hierarchical recurrent state space models reveal discrete and continuous dynamics of neural activity in C. elegans. bioRxiv, page 621540, 2019.
Joshua Glaser, Matthew Whiteway, John P Cunningham, Liam Paninski, and Scott Linderman. Recurrent switching dynamical systems models for multiple interacting neural populations. Advances in Neural Information Processing Systems, 33:14867-14878, 2020.
Aditya Nair, Tomomi Karigo, Bin Yang, Surya Ganguli, Mark J Schnitzer, Scott W Linderman, David J Anderson, and Ann Kennedy. An approximate line attractor in the hypothalamus encodes an aggressive state. Cell, 186(1):178-193, 2023.
David Barber. Expectation correction for smoothed inference in switching linear dynamical systems. Journal of Machine Learning Research, 7(11), 2006.
Emily Beth Fox. Bayesian nonparametric learning of complex dynamical phenomena. PhD thesis, Massachusetts Institute of Technology, 2009.
Kevin Murphy and Stuart Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In Sequential Monte Carlo Methods in Practice, pages 499-515. Springer, 2001.
Scott Linderman, Matthew Johnson, Andrew Miller, Ryan Adams, David Blei, and Liam Paninski. Bayesian learning and inference in recurrent switching linear dynamical systems. In Artificial Intelligence and Statistics, pages 914-922. PMLR, 2017.
David Zoltowski, Jonathan Pillow, and Scott Linderman. A general recurrent state space framework for modeling neural dynamics during decision-making. In International Conference on Machine Learning, pages 11680-11691. PMLR, 2020.
Tamara G Kolda and Brett W Bader. Tensor decompositions and applications. SIAM Review, 51(3):455-500, 2009.
James Douglas Hamilton. Time Series Analysis. Princeton University Press, 2020.
Kevin P Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
Uri Nahum Lerner. Hybrid Bayesian networks for reasoning about complex systems. PhD thesis, Stanford University, 2003.
Arnaud Doucet, Neil J Gordon, and Vikram Krishnamurthy. Particle filters for state estimation of jump Markov linear systems. IEEE Transactions on Signal Processing, 49(3):613-624, 2001.
Mark H A Davis and Richard B Vinter. Stochastic Modelling and Control. Chapman and Hall, London; New York, 1985. ISBN 0412162008.
Di Wang, Yao Zheng, Heng Lian, and Guodong Li. High-dimensional vector autoregressive time series modeling via tensor decomposition. Journal of the American Statistical Association, pages 1-19, 2021.
Kameron Decker Harris, Aleksandr Aravkin, Rajesh Rao, and Bingni Wen Brunton. Time-varying autoregression with low-rank tensors. SIAM Journal on Applied Dynamical Systems, 20(4):2335-2358, 2021.
Wei Zhang, Ivor Cribben, Sonia Petrone, and Michele Guindani. Bayesian time-varying tensor vector autoregressive models for dynamic effective connectivity. arXiv preprint arXiv:2106.14083, 2021.
Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin, and Dmitry P Vetrov. Tensorizing neural networks. Advances in Neural Information Processing Systems, 28, 2015.
Ivan V Oseledets. Tensor-train decomposition. SIAM Journal on Scientific Computing, 33(5):2295-2317, 2011.
Rose Yu, Stephan Zheng, Anima Anandkumar, and Yisong Yue. Long-term forecasting using higher order tensor RNNs. arXiv preprint arXiv:1711.00073, 2017.
Hejia Qiu, Chao Li, Ying Weng, Zhun Sun, Xingyu He, and Qibin Zhao. On the memory mechanism of tensor-power recurrent models. In International Conference on Artificial Intelligence and Statistics, pages 3682-3690. PMLR, 2021.
Adrian Valente, Srdjan Ostojic, and Jonathan W Pillow. Probing the relationship between latent linear dynamical systems and low-rank recurrent neural network models. Neural Computation, 34(9):1871-1892, 2022.
Saul Kato, Harris S Kaplan, Tina Schrödel, Susanne Skora, Theodore H Lindsay, Eviatar Yemini, Shawn Lockery, and Manuel Zimmer. Global brain dynamics embed the motor command sequence of Caenorhabditis elegans. Cell, 163(3):656-669, 2015.
Jesse M Gray, Joseph J Hill, and Cornelia I Bargmann. A circuit for navigation in Caenorhabditis elegans. Proceedings of the National Academy of Sciences, 102(9):3184-3191, 2005.
Harris S Kaplan, Oriana Salazar Thula, Niklas Khoss, and Manuel Zimmer. Nested neuronal dynamics orchestrate a behavioral hierarchy across timescales. Neuron, 105(3):562-576, 2020.
Jihye Yeon, Jinmahn Kim, Do-Young Kim, Hyunmin Kim, Jungha Kim, Eun Jo Du, KyeongJin Kang, Hyun-Ho Lim, Daewon Moon, and Kyuhyung Kim. A sensory-motor neuron type mediates proprioceptive coordination of steering in C. elegans via two TRPC channels. PLoS Biology, 16(6):e2004929, 2018.
Jonathan W Pillow, Jonathon Shlens, Liam Paninski, Alexander Sher, Alan M Litke, EJ Chichilnisky, and Eero P Simoncelli. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207):995-999, 2008.
Tohru Katayama. Subspace Methods for System Identification, volume 1. Springer, 2005.
Martin Chalfie, John E Sulston, John G White, Eileen Southgate, J Nicol Thomson, and Sydney Brenner. The neural circuit for touch sensitivity in Caenorhabditis elegans. Journal of Neuroscience, 5(4):956-964, 1985.
Beverly J Piggott, Jie Liu, Zhaoyang Feng, Seth A Wescott, and XZ Shawn Xu. The neural circuits and synaptic mechanisms underlying motor initiation in C. elegans. Cell, 147(4):922-933, 2011.
Hydrodynamics of crystals with interstitials
15 Sep 2000 (November 10, 2018)
G L Buchbinder
Department of Physics
Omsk State University
Peace Avenue 55-a, 644077 Omsk, Russia
The hydrodynamic equations for a crystal with interstitials, taking into account the dissipative processes of viscosity, heat conduction and interstitial diffusion, are derived. To achieve this we use the phenomenological approach that was originally applied in the theory of superfluids for the derivation of the equations of two-fluid hydrodynamics. On the basis of the obtained equations, the problem of the propagation of plane waves in a crystal with low interstitial concentration is considered. For the case when the effects of viscosity and heat conduction are absent and the diffusion mobility of interstitials is small, the absorption coefficient of a longitudinal sound wave is calculated.
PACS numbers: 46.05.+b, 62.30.+d

A great number of processes of various nature occurring in the bulk and on the surface of crystalline solids can be considered within the scope of the hydrodynamic description 1-11. The presence of slowly varying spatial and temporal disturbances in the system allows us to describe the nonequilibrium behavior in terms of a few slow, hydrodynamic variables. One of the most important hydrodynamic processes is the diffusion of interstitials in the crystalline lattice. Within the hydrodynamic approximation their dynamics is commonly determined by the well-known diffusion equation based on the classical Fick's laws. This equation is closed with respect to the interstitial concentration, and the presence of the lattice manifests itself only in the calculation of the diffusion coefficient for a specific microscopic model. Such an approach, justified in the case of low impurity concentrations and weak inhomogeneity in the system, proves to be unfit at finite concentration and large gradients. In that case the interaction between the impurity system and the lattice also manifests itself at the macroscopic level through the deformation of the crystal. Therefore at finite interstitial concentration the dynamics of interstitials must be considered jointly with the motion of the lattice. As a result, the hydrodynamic equations describing the corresponding process must contain both the interstitial and lattice variables.
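The classical diffusion equation mentioned above, ∂c/∂t = D ∂²c/∂x², can be illustrated with a minimal explicit finite-difference sketch (the grid, diffusion coefficient and time step below are hypothetical; stability of this scheme requires D Δt/Δx² ≤ 1/2):

```python
# Minimal 1D Fick diffusion sketch: c_t = D * c_xx with an explicit
# finite-difference scheme. All numerical values are illustrative.

def diffuse(c, D, dx, dt, steps):
    c = list(c)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + D * dt / dx**2 * (c[i+1] - 2*c[i] + c[i-1])
        c = new
    return c

# A concentration spike spreads out while total mass is conserved
# (the spike stays away from the fixed boundaries for these few steps).
c0 = [0.0]*10 + [1.0] + [0.0]*10
c1 = diffuse(c0, D=1.0, dx=1.0, dt=0.2, steps=8)
print(abs(sum(c1) - sum(c0)) < 1e-12)  # mass conserved in the interior
print(max(c1) < max(c0))               # the peak decays
```

This closed, concentration-only description is exactly what the paper argues breaks down at finite interstitial concentration, where the lattice deformation enters the dynamics.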
In this paper we obtain hydrodynamic equations for a crystal containing interstitials. To this end we use the method that was originally applied in the theory of superfluids 12,13 for the derivation of the equations of two-fluid hydrodynamics and then employed for supersolids 14. This makes it possible to obtain equations describing the dynamics of interstitials relative to the moving lattice.
On the basis of the equations found, we consider the propagation of plane waves in the crystal and, in the simplest approximation, calculate the sound velocities and the absorption coefficient caused by the diffusion of interstitials.
We assume that the concentration of vacancies is negligible, that the nonequilibrium state of the crystal is characterized by the mass densities of impurity particles ρ_p and lattice atoms ρ_L, and denote by j the current density of the medium (lattice plus interstitials),
\[
\mathbf{j} = \rho_L \mathbf{v}_L + \rho_p \mathbf{v}_p , \qquad (1)
\]
where v L and v p are the velocity fields of the lattice and the impurity system, respectively. The complete set of the hydrodynamic equations has to contain the local conservation laws of mass, momentum and equation for the entropy, following from the second law of the thermodynamics. We write down the corresponding balance equations as
\[
\frac{\partial \rho}{\partial t} + \operatorname{div} \mathbf{j} = 0 , \qquad (2)
\]
\[
\frac{\partial j_i}{\partial t} + \frac{\partial \Pi_{ik}}{\partial x_k} = 0 , \qquad (3)
\]
\[
\frac{\partial S}{\partial t} + \operatorname{div}\left( S\mathbf{v} + \frac{\mathbf{q}}{T} - \frac{\nu}{\rho T}\,\mathbf{J} \right) = \frac{R}{T} , \qquad (4)
\]
here ρ = ρ L + ρ p is the total mass density of the medium, v = ρ −1 j is the mass velocity, S is the entropy density of the medium, Π ik is the tensor momentum current density, q is the heat current density, J is the diffusion current density of interstitials defined as
\[
\mathbf{J} = \rho_p (\mathbf{v}_p - \mathbf{v}) ,
\]
ν is the chemical potential of interstitials per unit volume, T is the absolute temperature and R (R > 0) is the dissipative function of the medium; summation over repeated indexes is implied. In addition, the system of equations (2)-(4) has to be supplemented by the continuity equation for the interstitial density and the equation of motion for the lattice
\[
\frac{\partial (c\rho)}{\partial t} + \operatorname{div}(c\rho \mathbf{v}_p) = 0 , \qquad (5)
\]
\[
\frac{\partial (\rho_L v_{Li})}{\partial t} + \frac{\partial \Pi_{Lik}}{\partial x_k} = f_i , \qquad (6)
\]
where c = ρ p /ρ is the interstitial concentration, f is the mass force with which the impurity system acts on the lattice; Π Lik is the tensor momentum current density of the lattice which we take in the form
\[
\Pi_{Lik} = \rho_L v_{Li} v_{Lk} - \sigma_{ik} - \sigma'_{ik} , \qquad (7)
\]
where σ ik is the symmetric, elastic stress tensor of the lattice
\[
\sigma_{ik} = -p_L \delta_{ik} + \tilde\sigma_{ik} , \qquad (8)
\]
\[
p_L = -\tfrac{1}{3}\,\sigma_{ii} , \qquad \operatorname{Tr}\tilde\sigma = \tilde\sigma_{ii} = 0 ,
\]
and σ'_ik describes the effect of viscosity. Both σ_ik and σ'_ik are supposed to be known. Here and in what follows the tilde denotes the traceless part of a tensor. Now let us introduce the vector u defining the displacement of lattice sites, connected with the velocity field v_L by the relationship
\[
\mathbf{v}_L = \frac{d\mathbf{u}}{dt} = \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{v}_L \cdot \nabla)\mathbf{u} . \qquad (9)
\]
The variables ρ, S, T , c, j, u completely define the nonequilibrium state of the system and satisfy equations (2) - (6). Similarly to Ref. 12 we shall find the remaining unknown values Π ik , R, q, J, f so that the conservation energy law
\[
\frac{\partial E}{\partial t} + \operatorname{div} \mathbf{Q} = 0 , \qquad (10)
\]
with E and Q being the energy density and the energy current density, respectively, would follow from Eqs. (2)-(6). In Eq. (10) the current Q is initially unknown as well.
To determine the form of the unknown values indicated above, we pass to a new frame moving with velocity v_L, in which the lattice velocity of the given element of the medium is zero. The energy E and the momentum density j are related by a Galilean transformation to their values E_0 and j_0 in the frame where the lattice rests:
\[
E = \frac{\rho v_L^2}{2} + \mathbf{j}_0 \cdot \mathbf{v}_L + E_0 , \qquad (11)
\]
\[
\mathbf{j} = \rho \mathbf{v}_L + \mathbf{j}_0 ; \qquad \mathbf{j}_0 = \rho_p (\mathbf{v}_p - \mathbf{v}_L) . \qquad (12)
\]
Let us write down the differential of E 0 , considered as a function of S, ρ, c, j 0 and the infinitesimal strain tensor u ij , in the form
\[
dE_0 = T\,dS + \mu\,d\rho + \nu\,dc + \tilde\sigma_{ij}\,d\tilde u_{ij} + \mathbf{w}\cdot d\mathbf{j}_0 , \qquad (13)
\]
here µ is the chemical potential of the medium and w = v p − v L is the relative velocity. In the Eq. (13) it has been taken into account that the variation du ii of sum of diagonal components of the strain tensor u ij is determined by the variation of the density dρ.
Differentiating the Eq.(11) with respect to time and using Eqs. (2) -(13), one obtains
\[
\frac{\partial E}{\partial t} = \left( \mathbf{v}_p\cdot\mathbf{v}_L - \frac{v_L^2}{2} - \mu + \frac{c\nu}{\rho} \right)\operatorname{div}\mathbf{j} - \mathbf{w}\cdot\mathbf{f} + \rho_L w_i (\mathbf{v}_L\cdot\nabla) v_{Li} - w_i \frac{\partial}{\partial x_k}(\sigma_{ik} + \sigma'_{ik}) - v_{pi}\frac{\partial \Pi_{ik}}{\partial x_k} + \tilde\sigma_{ij}\frac{\partial \tilde u_{ij}}{\partial t} + \frac{\nu}{\rho}\operatorname{div}(c\rho\mathbf{v}_p) - T\operatorname{div}\left( S\mathbf{v} + \frac{\mathbf{q}}{T} - \frac{\nu}{\rho T}\,\mathbf{J} \right) + R
\]
The last equality, after long and tedious though simple transformations, is brought to the form
\[
\frac{\partial E}{\partial t} + \operatorname{div}\left[ \left(\frac{v_L^2}{2} + \frac{\mu + TS}{\rho}\right)\mathbf{j} + \rho_p \mathbf{v}_p (\mathbf{v}_p\cdot\mathbf{w}) + \mathbf{q} - \mathbf{v}_L\cdot(\sigma + \sigma') + \mathbf{v}_p\cdot\pi \right]
\]
\[
= R - \sigma'_{ik}\frac{\partial v_{Li}}{\partial x_k} + \pi_{ik}\frac{\partial v_{pi}}{\partial x_k} + \frac{\mathbf{q}\cdot\nabla T}{T} + T\mathbf{J}\cdot\nabla\frac{\nu}{\rho T} - \mathbf{w}\cdot\left[ \mathbf{f} - \nabla p_L + \rho_L\left(\nabla\mu + \frac{S\nabla T}{\rho}\right) - \rho_L\frac{\nu}{\rho}\nabla c + \tilde u_{ij}\nabla\tilde\sigma_{ij} \right] , \qquad (14)
\]
where π ik is defined from the equality
\[
\Pi_{ik} = \rho_L v_{Li} v_{Lk} + \rho_p v_{pi} v_{pk} + p\,\delta_{ik} - \tilde\sigma_{ik} - \sigma'_{ik} + \pi_{ik} \qquad (15)
\]
and the notation has been introduced
\[
p = -E_0 + TS + \mu\rho + \mathbf{w}\cdot\mathbf{j}_0 + \tilde\sigma_{ij}\tilde u_{ij} \qquad (16)
\]
In the derivation of Eq. (14) we have neglected the term σ̃_ij ũ_ik ∂v_Lk/∂x_j, which is small in the framework of the linear elasticity theory. In addition, for the terms of second order in the strains the approximation has been used
\[
(\mathbf{v}_p - \mathbf{v}_L)\cdot\nabla(\tilde\sigma_{ik}\tilde u_{ik}) \simeq \mathbf{v}_p\cdot\nabla(\tilde\sigma_{ik}\tilde u_{ik}) ,
\]
because the interstitial velocity is appreciably larger than the lattice one. Eq. (15) defines the momentum current density of the medium. Comparison of Eq. (14) with the energy conservation law (10) leads to the definition of the energy current Q,
\[
\mathbf{Q} = \left(\frac{v_L^2}{2} + \frac{\mu + TS}{\rho}\right)\mathbf{j} + \rho_p\mathbf{v}_p(\mathbf{v}_p\cdot\mathbf{w}) + \mathbf{q} - \mathbf{v}_L\cdot(\sigma + \sigma') + \mathbf{v}_p\cdot\pi ; \qquad (17)
\]
the dissipative function R
\[
R = \sigma'_{ik}\frac{\partial v_{Li}}{\partial x_k} - \pi_{ik}\frac{\partial v_{pi}}{\partial x_k} - \frac{\mathbf{q}\cdot\nabla T}{T} - T\mathbf{J}\cdot\nabla\frac{\nu}{\rho T} ; \qquad (18)
\]
and the force f
\[
\mathbf{f} = \nabla p_L - \rho_L\left(\nabla\mu + \frac{S\nabla T}{\rho}\right) + \rho_L\frac{\nu}{\rho}\nabla c - \tilde u_{ik}\nabla\tilde\sigma_{ik} . \qquad (19)
\]
In the framework of the linear theory the positive definiteness of R leads to linear relationships relating the dissipative currents to the thermodynamic forces. Taking into account Onsager's reciprocity relations for the transport coefficients and the time-reversal property of dissipative effects 1, we can write these relationships, at given σ'_ik, in the form
\[
\pi_{ik} = -\eta_{ikjm}\frac{\partial v_{pj}}{\partial x_m} , \qquad
q_i = -\frac{\kappa_{ik}}{T}\frac{\partial T}{\partial x_k} - \frac{\alpha_{ik}}{T}\frac{\partial}{\partial x_k}\frac{\nu}{\rho T} , \qquad (20)
\]
\[
J_i = -\frac{\alpha_{ik}}{T}\frac{\partial T}{\partial x_k} - \frac{\beta_{ik}}{T}\frac{\partial}{\partial x_k}\frac{\nu}{\rho T} ,
\]
where the fourth-rank tensor η is related to the impurity viscosity, the second-rank tensor κ has the meaning of the pure heat conductivity, β is a second-rank tensor related to interstitial diffusion, and α is related to the cross effect of thermal diffusion of interstitials. It is seen from Eq. (15) that p, given by Eq. (16), can be interpreted as the "pressure" in the medium. A relation similar to the Gibbs-Duhem relation for a fluid system follows from Eq. (16):
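The requirement R > 0 together with Onsager symmetry means the matrix of transport coefficients must be symmetric positive definite. A small numerical sketch (all coefficient values are hypothetical) of checking these two properties for a 2x2 cross-coupling block, such as the heat/diffusion block above:

```python
# Hypothetical 2x2 block of transport coefficients coupling the heat
# and diffusion fluxes to their thermodynamic forces. Onsager symmetry
# requires L[0][1] == L[1][0]; a positive dissipation R = X.L.X for all
# forces X requires positive definiteness (checked here via the leading
# principal minors of the symmetric matrix).

L = [[2.0, 0.5],
     [0.5, 1.0]]

symmetric = L[0][1] == L[1][0]
minor1 = L[0][0]
minor2 = L[0][0]*L[1][1] - L[0][1]*L[1][0]   # determinant
positive_definite = minor1 > 0 and minor2 > 0
print(symmetric, positive_definite)          # → True True

# dissipation for a sample force vector X (hypothetical values)
X = [1.0, -2.0]
R = sum(X[i]*L[i][j]*X[j] for i in range(2) for j in range(2))
print(R)  # → 4.0, positive as required
```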
\[
\rho\,d\mu = dp + \nu\,dc - S\,dT - \mathbf{j}_0\cdot d\mathbf{w} - \tilde u_{ij}\,d\tilde\sigma_{ij} . \qquad (21)
\]
From (21) one obtains for gradients
\[
\rho\nabla\mu + S\nabla T = \nabla p + \nu\nabla c - j_{0i}\nabla w_i - \tilde u_{ij}\nabla\tilde\sigma_{ij} .
\]
Introducing this into the Eq.(19), we obtain the expression for the mass force as
\[
\mathbf{f} = \nabla p_L - (1-c)\nabla p + c\rho_L \nabla\frac{w^2}{2} - c\,\tilde u_{ij}\nabla\tilde\sigma_{ij} . \qquad (22)
\]
This expression shows explicitly that in the limit c → 0 the mass force vanishes provided that p → p_L. From this condition it follows that one must have
\[
p_L = -E_0 + TS + \mu\rho_L + \tilde\sigma_{ij}\tilde u_{ij} , \qquad (c = 0).
\]
This relationship should be considered as the definition of the "pressure" p_L. Note that a similar expression was used in Ref. 2 for a crystal with vacancies. In the same limit Π_ik coincides with the usual expression for the momentum current density,
\[
\Pi_{ik} = \rho_L v_{Li} v_{Lk} - \sigma_{ik} - \sigma'_{ik} ,
\]
and in the linear approximation in strains the expression (17) reduces to the standard definition of the energy current density in a viscoelastic medium,
\[
\mathbf{Q} = \left(\frac{\rho v_L^2}{2} + E_0\right)\mathbf{v}_L - \mathbf{v}_L\cdot(\sigma + \sigma') + \mathbf{q} .
\]
Having substituted the found expressions for Π_ik and f into (3) and (6), and restricting ourselves to linear terms in the lattice strains, we obtain the complete set of hydrodynamic equations for a crystal with interstitials:
\[
\frac{\partial\rho}{\partial t} + \operatorname{div}\mathbf{j} = 0 , \qquad
\rho\frac{\partial c}{\partial t} + (\mathbf{j}\cdot\nabla)c + \operatorname{div}\mathbf{J} = 0 ,
\]
\[
\frac{\partial S}{\partial t} + \operatorname{div}\left(\frac{S}{\rho}\,\mathbf{j} + \frac{\mathbf{q}}{T} - \frac{\nu}{\rho T}\,\mathbf{J}\right) = \frac{R}{T} , \qquad (23)
\]
\[
\rho_L\frac{\partial\mathbf{v}_L}{\partial t} + \rho_L(\mathbf{v}_L\cdot\nabla)\mathbf{v}_L = -(1-c)\nabla p + \nabla\cdot\tilde\sigma + \nabla\cdot\sigma' + c\rho_L\nabla\frac{w^2}{2} ,
\]
\[
\frac{\partial\mathbf{j}}{\partial t} + \mathbf{v}_L\operatorname{div}\mathbf{j} + (\mathbf{j}\cdot\nabla)\mathbf{v}_L + \mathbf{j}_0\operatorname{div}\mathbf{v}_p + (\mathbf{v}_p\cdot\nabla)\mathbf{j}_0 = -\nabla p + \nabla\cdot\tilde\sigma + \nabla\cdot\sigma' - \nabla\cdot\pi ,
\]
where R and the dissipative currents are defined by expressions (18) and (20). In addition, the system (23) has to be supplemented by the energy conservation law (10). Now, as the simplest example of the application of the obtained equations, we consider the propagation of sound in an infinite crystal with low interstitial concentration. We assume that the effects of viscosity and heat conductivity are absent, so that the only dissipative process is related to interstitial diffusion. In addition, we use the isotropic approximation for both the elastic stress tensor and the diffusion current, taking
\[
\tilde\sigma_{ij} = 2\mu_0 \tilde u_{ij} , \qquad \mathbf{J} = -T\beta\,\nabla\frac{\nu}{\rho T} ,
\]
where μ_0 is the shear modulus and the constant β is proportional to the diffusion coefficient. Further, we confine ourselves to the case of small diffusion mobility of interstitials, when β ≪ 1. Assuming that a plane wave propagates along the x-direction and having eliminated the current j, one writes the set of linearized equations (23) in the form
\[
\ddot\rho' = \frac{\partial^2 p'}{\partial x^2} - \frac{4}{3}\mu_0\frac{\partial^3 u_x}{\partial x^3} , \qquad
\rho\dot c' = T\beta\frac{\partial^2\bar\nu'}{\partial x^2} , \quad (\bar\nu = \nu/\rho T) ,
\]
\[
\rho\dot s' = -\frac{\nu}{\rho}\frac{\partial^2\bar\nu'}{\partial x^2} , \quad (s = S/\rho) , \qquad
\rho_L\ddot u_x = -(1-c)\frac{\partial p'}{\partial x} + \frac{4}{3}\mu_0\frac{\partial^2 u_x}{\partial x^2} , \qquad (24)
\]
\[
\ddot u_y = c_t^2\frac{\partial^2 u_y}{\partial x^2} , \qquad
\ddot u_z = c_t^2\frac{\partial^2 u_z}{\partial x^2} .
\]
Here the prime denotes small deviations of a variable from its equilibrium value (written without a prime), and s = S/ρ is the entropy per unit mass. It follows from the last two equations that in the assumed approximation the transverse sound velocity, c_t² = μ_0/ρ_L, does not depend on the interstitial concentration and coincides with its usual value in an isotropic elastic medium.
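As a quick numerical check of the transverse branch, c_t = sqrt(μ_0/ρ_L); the material values below are hypothetical, chosen only to be of the order of those for a light metal:

```python
import math

# Hypothetical material parameters (order of magnitude of a light metal)
mu0 = 27e9      # shear modulus, Pa
rho_L = 2.7e3   # lattice mass density, kg/m^3

# Transverse sound speed c_t = sqrt(mu0 / rho_L); note it involves only
# the lattice quantities, consistent with its independence of c.
c_t = math.sqrt(mu0 / rho_L)
print(round(c_t))  # → 3162 (m/s)
```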
Equality (21) shows that p, c, T, w² and σ̃_ij can be taken as the independent variables. Taking into account that a scalar function can depend on the tensor σ̃_ij only through the convolution σ̃_ij σ̃_ij, in the linear approximation one can consider ρ, ν, s to be functions of p, c, and T. In what follows we take the space and time dependence of all variables in the form
exp[iω(t + x/v)],
where v is the longitudinal sound velocity, and write the set (24) as
\[
v^3\frac{\partial\rho}{\partial T}\,T' + \left(v^3\frac{\partial\rho}{\partial p} - v\right)p' + v^3\frac{\partial\rho}{\partial c}\,c' + \frac{4}{3}\mu_0\, i\omega u_x = 0 ,
\]
\[
i\omega T\beta\left(\frac{\partial\bar\nu}{\partial T}\,T' + \frac{\partial\bar\nu}{\partial p}\,p'\right) + \left(i\omega T\beta\frac{\partial\bar\nu}{\partial c} - v^2\rho\right)c' = 0 ,
\]
\[
\frac{\partial s}{\partial T}\,T' + \frac{\partial s}{\partial p}\,p' + \left(\frac{\partial s}{\partial c} + \nu\right)c' = 0 , \qquad
(1-c)\,v\,p' + i\omega\left(\rho_L v^2 - \frac{4}{3}\mu_0\right)u_x = 0 . \qquad (25)
\]
We seek the sound velocity in the form v = v_0 + βv_1, where β → 0. Setting β = 0 in (25), one finds the zeroth approximation v_0 as
\[
v_0^2 = \frac{4\mu_0}{3\rho_L} + \left(\frac{\partial s}{\partial T}\right)_p \bigg/ \frac{\partial(\rho,s)}{\partial(p,T)} + O(c) , \qquad (26)
\]
where O(c) denotes small terms proportional to the concentration c. Using the well-known properties of functional determinants, one has
\[
\frac{\partial(\rho,s)}{\partial(p,T)} = \frac{\partial(\rho,s)}{\partial(p,s)}\,\frac{\partial(p,s)}{\partial(p,T)} = \left(\frac{\partial\rho}{\partial p}\right)_s \left(\frac{\partial s}{\partial T}\right)_p . \qquad (27)
\]
Since the adiabatic compression modulus per unit mass K_ad is defined as
\[
\frac{1}{K_{ad}} = \frac{1}{\rho}\left(\frac{\partial\rho}{\partial p}\right)_s ,
\]
we have, instead of (27),
\[
\frac{\partial(\rho,s)}{\partial(p,T)} = \frac{\rho}{K_{ad}}\left(\frac{\partial s}{\partial T}\right)_p .
\]
Substitution of the last equality into Eq. (26) yields
\[
v_0^2 = \frac{4\mu_0}{3\rho_L} + \frac{K_{ad}}{\rho_L} , \qquad (28)
\]
since ρ → ρ_L as c → 0. Expression (28) coincides with the usual value of the longitudinal sound velocity in an isotropic elastic medium. The compatibility condition of the set of Eqs. (25), up to terms of second order in β, yields an equation for v_1. To avoid too cumbersome expressions we use a number of simplifying assumptions. It is known that introducing an impurity particle into a perfect crystal changes its volume by a macroscopic value. Therefore one can suppose that as c → 0, ∂ρ/∂c is considerably larger than all the other thermodynamic derivatives entering the set (25). In addition, we confine ourselves to the temperature region in which one can ignore thermal expansion and neglect terms containing ∂ρ/∂T. Then, keeping in the expression for v_1 only terms proportional to ∂ρ/∂c, we obtain
\[
v_1 = i\omega\,\frac{T\delta}{2\rho_L v_0}\left(\frac{\partial\rho}{\partial c}\right)_{pT} , \qquad \text{where} \quad
\delta = \frac{\partial(\bar\nu, s)}{\partial(p, T)}\left(\frac{\partial s}{\partial T}\right)_{cp}^{-1}
\times \left[\frac{2K_{ad}}{(2\rho_L v_0^2 + 3K_{ad})\,(\partial\rho/\partial p)_{cT}} - 3\rho_L\right] .
\]
Allowing for v = v_0 + βv_1, one has for the absorption coefficient
\[
\gamma_l = \frac{\omega^2 T\beta\delta}{2\rho_L v_0^3}\left(\frac{\partial\rho}{\partial c}\right)_{pT} \qquad (29)
\]
As seen from Eq. (29), γ_l is proportional to ω² and to the diffusion coefficient.
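The ω² scaling of the absorption coefficient in Eq. (29) can be checked numerically; all parameter values below are hypothetical placeholders for T, β, δ, ρ_L, v_0 and (∂ρ/∂c)_pT:

```python
# Hypothetical check of the omega^2 scaling of the absorption coefficient
# gamma_l = omega^2 * T * beta * delta * (drho/dc) / (2 * rho_L * v0^3).
# All parameter values below are illustrative placeholders.

def gamma_l(omega, T=300.0, beta=1e-3, delta=2.0, rho_L=2.7e3,
            v0=6.0e3, drho_dc=1.0e3):
    return omega**2 * T * beta * delta * drho_dc / (2 * rho_L * v0**3)

# Doubling the frequency quadruples the absorption, as Eq. (29) predicts.
ratio = gamma_l(2.0e6) / gamma_l(1.0e6)
print(ratio)  # → 4.0
```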
In conclusion, we have obtained the complete system of hydrodynamic equations for a crystal with interstitials. These equations allow one to describe the joint dynamics of the lattice and the impurity system and take into account the dissipative effects of viscosity, heat conductivity and the diffusion of interstitials. On the basis of the equations found, we have considered the problem of the propagation of plane waves in a crystal with small interstitial concentration and calculated the sound velocities and the absorption coefficient, provided that viscosity and heat conductivity are absent and the diffusion mobility of interstitials is small. * E-mail: [email protected]
1. P.C. Martin, O. Parodi and P.S. Pershan, Phys. Rev. A 6, 2401 (1972).
2. P.D. Fleming and C. Cohen, Phys. Rev. B 13, 500 (1976).
3. B.M. Aizenbud and N.D. Gershon, Physica A 108, 583 (1981).
4. T.C. Lubensky, S. Ramaswamy and J. Toner, Phys. Rev. B 32, 7444 (1985).
5. A.M. Anile and S. Pennisi, Phys. Rev. B 46, 13186 (1992).
6. Z.W. Gortel and L.A. Turski, Phys. Rev. B 45, 9389 (1992).
7. A.M. Anile and O. Muscato, Phys. Rev. B 51, 16728 (1995).
8. H.T.C. Stoof, K. Mullen, M. Wallin and S.M. Girvin, Phys. Rev. B 53, 5670 (1996).
9. M.W. Wu, H.L. Cui and N.J.M. Horing, Phys. Rev. B 54, 2351 (1996).
10. E.B. Sonin and W.F. Vinen, J. Phys.: Condens. Matter 10, 2191 (1998).
11. I. Tokatly and O. Pankratov, Phys. Rev. B 60, 15550 (1999).
12. L.D. Landau and E.M. Lifshitz, Mechanics of Fluids, 2nd ed., Course of Theoretical Physics Vol. 6 (Pergamon Press, New York, 1987).
13. I.M. Khalatnikov, Theory of Superfluids (Nauka, Moscow, 1971).
14. A.F. Andreev and L.M. Lifshitz, Sov. Phys. JETP 29, 1107 (1969).
Reconstructing gene expression and knockout effect scores from DNA mutation (Mut2Ex): methodology and application to cancer prediction problems
Maya Ramchandran
Maayan Baron
Building prediction models for outcomes of clinical relevance when only a limited number of mutational features are available poses considerable challenges due to the sparseness and low-dimensionality of the data. In this article, we present a method to augment the predictive power of these features by leveraging multi-modal associative relationships between an individual's mutational profile and their corresponding gene expression or knockout effect profiles. We can thus reconstruct expression or effect scores for genes of interest from the available mutation features and then use this reconstructed representation directly to model and predict clinical outcomes. We show that our method produces significant improvements in predictive accuracy compared to models utilizing only the raw mutational data, and results in conclusions comparable to those obtained using real expression or effect profiles.
Introduction
Utilizing somatic mutation data to predict clinically relevant patient outcomes can yield suboptimal results due to the sparse information content in binary hotspot mutation features (Prasad, 2016;West, 2016). Results are considerably worse in scenarios where the number of genes assayed is small, as is the case with patient data derived from commercially available NGS panels (Shen et al., 2015). However, the vast availability of such data alongside clinical outcomes measured on the same patients could be enormously useful in developing tools to improve diagnoses and treatment decisions, and therefore necessitates solutions to augment the granularity of mutation data in order to most effectively train prediction models for such outcomes (Hodis et al., 2012;Hospital et al., 2012;Veer et al., 2002). We show that great improvements in this direction can be made by first leveraging the correlation between hotspot mutation status and the gene expression profile of a given sample to construct a latent representation of expression space directly from mutation.
It has been previously established across many biological applications that higher level genomic features such as gene expression typically have higher discrimination and predictive power for phenotypes and other clinical features than lower level features such as somatic mutation, regardless of the machine learning model applied (Costello et al., 2014;Menden et al., 2019;Chiu et al., 2019). We demonstrate that a more effective utilization of mutation data first reconstructs expression profiles by exploiting the biological relationship between the two modalities, and then directly uses this reconstructed representation to train prediction models for any outcome of interest such as patient survival, cancer subtyping, and disease staging. We additionally show that it is possible to reconstruct scores measuring the effect of gene knockout directly from mutation using a similar framework.
In order to accomplish this, we propose a modeling approach (Mut2Ex) based on partial least squares regression, a popular statistical framework that models the common structure shared by the dependent variables and predictors in the presence of potential multi-collinearity between features (Wold, 1985). To our knowledge, Mut2Ex is the first method to transform binary mutation data into a continuous, data-rich representation directly based on gene expression. Unlike other continuous-valued embeddings of binary samples, ours is comparable with the output of gene expression assays and can be used in conjunction with or instead of patient gene expression data, either within model construction or evaluation. The intuition behind our approach is drawn from the biological connection between the mutation status of a gene and its expression, as well as the interplay between genes within pathways and other correlative relationships. To that end, Mut2Ex is capable of jointly inferring the expression state of all genes of interest as opposed to requiring separate prediction models for each gene's expression; this allows shared information to be borrowed across genes in order to increase overall efficiency and accuracy.
Previous work in this area focuses on the use of partial least squares to directly handle high-dimensional gene expression data in inferential and prediction contexts, particularly in the case in which there are significantly more features than samples (Boulesteix & Strimmer, 2006;Nguyen & Rocke, 2004;Yang et al., 2017). Liquet et al. (2015) describe an approach to study the relationship between different types of high-dimensional 'omics data or between 'omics data and phenotypes. However, their focus is on the integration of multiple available modalities for the same set of samples and on the grouping of features in a high-dimensional context, as opposed to our aim of inferring the expression profile of a sample given only the mutation status of a relatively small number of genes. Other work in modeling joint relationships between mutation and expression typically involves the use of deep learning models that not only require a large number of training samples, considerable computational complexity, resources, and time, but also ultimately lack interpretability (Avsec et al., 2021;Zhou & Troyanskaya, 2015). In contrast, our proposed approach is fast, efficient, relatively simple, and interpretable. Our model can be trained using as few as a couple hundred samples, and can handle input feature sets either larger or smaller than the sample size without sacrificing accuracy.
We begin this article by describing the Mut2Ex methodology and the contexts for which it is designed. We additionally highlight the biological and mathematical intuition involved in the formulation of the approach. We then present applications for which our method improves upon the current standard, particularly in the prediction of clinically relevant outcomes from low-dimensional commercial mutation panels measured on cancer patients. We show that our framework can be used to reconstruct either gene expression or CRISPR knockout effect scores (Behan et al., 2019), and result in similar conclusions for the ultimate prediction task of interest to what would be obtained if we had access to the true expression or effect profiles. For all applications, we show that our approach is far superior to using the mutation profiles directly for the same prediction problems. We note that while we have chosen to focus our methodological description on reconstructing gene expression for clarity, the same concepts transfer when reconstructing gene knockout effect scores.
Methods
Notation
Let X ∈ R n×p and Z ∈ R n×q be two data matrices containing n observations (rows) of p predictors (mutation) and q variables (gene expression), respectively. For both X and Z, each predictor and variable represents a gene; these gene sets do not necessarily need to be overlapping. Now, let X * ∈ R m×p be a data matrix containing m observations of the same p predictors as X (mutation) for which we have a corresponding vector y ∈ R m×1 containing the clinical outcome to be modeled. We denote by the subscript c the centered form of a matrix; that is, for matrix M ∈ R n×p , M c = M − 1 n ee T M , where e ∈ R n×1 is an n-length vector of 1's.
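As a quick illustration (generic numpy, not tied to the authors' implementation), the centering operator above can be written either with the explicit $ee^T$ form from the text or with per-column means:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 4
M = rng.normal(size=(n, p))

# Column-centering exactly as written in the text: M_c = M - (1/n) e e^T M,
# where e is the n-vector of ones; (1/n) e e^T M stacks the column means.
e = np.ones((n, 1))
M_c = M - (e @ e.T @ M) / n

# Equivalent, idiomatic form: subtract per-column means directly.
M_c_direct = M - M.mean(axis=0, keepdims=True)
```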
Partial Least Squares Regression
Partial least squares (PLS) techniques are increasingly popular in genomic applications, primarily because they have been designed to handle the situation in which there are far more (potentially correlated) features than samples. A significant advantage of partial least squares is its explicit focus on capturing the joint correlation between the input and output features in an efficient latent representation suited both for prediction tasks and dimension reduction; this is particularly effective at integrating multiple 'omics feature sets measured on the same samples. Partial least squares regression (PLSR) is based on the following latent component decompositions of the centered predictor and response matrices:
$$X_c = LP^T + E \tag{1}$$
$$Z_c = LQ^T + F \tag{2}$$
where $L \in \mathbb{R}^{n\times c}$ is the matrix of latent components for a given number of components c, with the columns representing each component's scores across all n training observations. $P \in \mathbb{R}^{p\times c}$ and $Q \in \mathbb{R}^{q\times c}$ are matrices of coefficients for $X_c$ and $Z_c$, respectively, and $E \in \mathbb{R}^{n\times p}$ and $F \in \mathbb{R}^{n\times q}$ are the corresponding matrices of residuals.
The foundation of PLSR is modeling the latent components matrix L as a linear combination of $X_c$; that is,
$$L = X_c W \tag{3}$$
where $W \in \mathbb{R}^{p\times c}$ is a matrix of weights; optimizing W is the primary objective of PLSR algorithms. Once W is determined, the latent component matrix L is used in place of the original variables in $X_c$ to predict $Z_c$, where $Q^T$ is the least squares solution of equation (2); that is,
$$Q^T = \left(L^T L\right)^{-1} L^T Z_c \tag{4}$$
Now, substituting equations (3) and (4) into (2), we can express the regression equation relating $X_c$ to $Z_c$ as
$$Z_c = X_c W \left(L^T L\right)^{-1} L^T Z_c + F = X_c B + F \tag{5}$$

where the matrix $B = WQ^T = W\left(L^T L\right)^{-1} L^T Z_c \in \mathbb{R}^{p\times q}$ contains the PLS regression coefficients. The corresponding fitted response matrix $\hat{Z}$ is written as
$$\hat{Z} = X_c B = L\left(L^T L\right)^{-1} L^T Z_c \tag{6}$$
representing the least squares solution of a linear regression predicting $Z_c$ from L. Finally, to obtain predictions $\hat{Z}^* \in \mathbb{R}^{m\times q}$ for a matrix of new, uncentered observations $X^* \in \mathbb{R}^{m\times p}$, we compute
$$\hat{Z}^* = \left(X^* - \tfrac{1}{n}\, e_m e_n^T X\right) B + \tfrac{1}{n}\, e_m e_n^T Z \tag{7}$$

where $e_n \in \mathbb{R}^{n\times 1}$ and $e_m \in \mathbb{R}^{m\times 1}$ are vectors of 1's, so that $\tfrac{1}{n} e_m e_n^T X$ and $\tfrac{1}{n} e_m e_n^T Z$ stack the training column means of X and Z row-wise.
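The algebra of equations (3)-(7) can be checked numerically with a small numpy sketch. The weight matrix W below is a random placeholder (its optimal choice is the subject of equation (8)), and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p, q, c = 30, 5, 8, 3, 2

X = rng.normal(size=(n, p))        # training "mutation" matrix (synthetic)
Z = rng.normal(size=(n, q))        # training "expression" matrix (synthetic)
X_star = rng.normal(size=(m, p))   # new, uncentered observations

x_bar, z_bar = X.mean(axis=0), Z.mean(axis=0)
X_c, Z_c = X - x_bar, Z - z_bar

# Placeholder weight matrix W; any full-column-rank W illustrates the algebra.
W = rng.normal(size=(p, c))

L = X_c @ W                                  # eq. (3): latent components
Q_T = np.linalg.solve(L.T @ L, L.T @ Z_c)    # eq. (4): loadings for Z_c
B = W @ Q_T                                  # eq. (5): regression coefficients
Z_hat = X_c @ B                              # eq. (6): fitted responses
Z_star = (X_star - x_bar) @ B + z_bar        # eq. (7): new predictions
```

Note that the fitted `Z_hat` equals the least-squares projection of `Z_c` onto the column space of `L`, as equation (6) states.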
Given this formulation, it is clear that the specification of L, and thus W, determines the basis of all components required to produce PLSR predictions of a gene expression matrix Z from a mutation matrix X. The basic idea of PLS-based approaches is that the latent components L are designed to have high covariance with the response Z. This is clear from the objective function for optimizing W:
$$W_i = \arg\max_{w}\; w^T X_c^T Z_c Z_c^T X_c\, w = \arg\max_{w} \sum_{j=1}^{q} \operatorname{Cov}^2\!\left(Z_{cj},\, X_c w\right) \tag{8}$$
for i = 1, . . . , c, with $W_i$ and $Z_{cj}$ representing the i-th and j-th columns of W and $Z_c$, respectively. Therefore, W is constructed by finding the linear combinations of the input features that have maximal squared covariance with each dimension of the response. Popular algorithms for solving for W under the appropriate constraints in the multivariate response context include NIPALS and SIMPLS; in our applications, we use an implementation of SIMPLS, although we do not anticipate major differences in performance between the algorithms (Wold, 1975;de Jong, 1993).
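For intuition, the first column of W in equation (8) can be computed directly: under a unit-norm constraint it is the leading left singular vector of the cross-covariance matrix $X_c^T Z_c$. A small numpy sketch on synthetic data (not the SIMPLS algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 40, 6, 3
X_c = rng.normal(size=(n, p)); X_c -= X_c.mean(axis=0)
Z_c = rng.normal(size=(n, q)); Z_c -= Z_c.mean(axis=0)

# Under a unit-norm constraint, the first column of W in eq. (8) is the
# leading eigenvector of X_c^T Z_c Z_c^T X_c, i.e. the leading left
# singular vector of the cross-covariance matrix X_c^T Z_c.
U, s, Vt = np.linalg.svd(X_c.T @ Z_c)
w1 = U[:, 0]

def objective(w):
    # w^T X_c^T Z_c Z_c^T X_c w, proportional to sum_j Cov^2(Z_cj, X_c w)
    return float(w @ X_c.T @ Z_c @ Z_c.T @ X_c @ w)

# Spot-check: w1 beats a bundle of random unit vectors.
trials = rng.normal(size=(200, p))
trials /= np.linalg.norm(trials, axis=1, keepdims=True)
best_random = max(objective(w) for w in trials)
```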
Reconstructing gene expression profiles
The PLSR framework described above is well suited to handling the biological relationships between genes both within and across modalities. The optimization of the weight matrix W takes into account the correlative relationships between the gene expression values of each output gene across samples and the corresponding mutation statuses of the input genes. Additionally, the correlations between genes both within each mutational profile and within each gene expression profile are jointly handled through the optimization of the linear combination coefficients within W: each resulting linear combination of the features within X (thus capturing between-gene correlation) is explicitly designed to have maximal covariance with each feature (gene) in Z. The ability of PLSR to handle the potential singularity of $X^T X$ when p > n, or the existence of multicollinearity between the features in X, allows flexibility in specifying the input mutational gene set, regardless of the number of samples n available for training. Given the overall structure and benefits of PLSR in this context, we can thus build a model $z(X^*)$ to reconstruct the expression profiles $Z^*$ of a set of samples given their mutational profiles $X^*$ as follows:
$$z(X^*) = \left(X^* - \tfrac{1}{n}\, e_m e_n^T X\right) W \left(W^T X_c^T X_c W\right)^{-1} W^T X_c^T Z_c + \tfrac{1}{n}\, e_m e_n^T Z \tag{9}$$
once W has been solved for as in equation (8). We refer to the model $z(X^*)$ as Mut2Ex.
Predicting clinical outcomes
Now, we can build a prediction model for the clinical outcome y directly from the reconstructed expression $\hat{Z}^*$ (the output of Mut2Ex) as opposed to the original mutation $X^*$. That is, if we denote by $y = f(\cdot)$ any regression function relating an input feature set to y (for example, regularized regression or machine learning approaches such as Random Forest, Gradient Boosted Trees, or Neural Networks), we can train a model

$$\hat{y} = f\left(\hat{Z}^*\right) = f\left(z(X^*)\right) \tag{10}$$
instead of using the raw mutation features directly (i.e. $\hat{y} = f(X^*)$). As we show in the data applications, the increase in granularity afforded by the continuous-valued reconstructed expression over sparse binary mutation data greatly improves the prediction performance of most regression approaches.
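Putting the pieces together, a minimal end-to-end sketch of the Mut2Ex idea (fit on binary mutation and continuous expression, then reconstruct expression for downstream modeling) might look as follows. This is a simplified stand-in (SVD of the cross-covariance with a deflation loop), not the SIMPLS implementation the authors use, and all data are synthetic; any downstream model f would then be trained on the reconstructed matrix:

```python
import numpy as np

def mut2ex_fit(X, Z, c):
    """Fit a PLS-style model mapping mutation X to expression Z.

    A simplified sketch (cross-covariance SVD with deflation), not the
    SIMPLS implementation used in the paper."""
    x_bar, z_bar = X.mean(axis=0), Z.mean(axis=0)
    X_c, Z_c = X - x_bar, Z - z_bar
    S = X_c.T @ Z_c                            # cross-covariance (up to n-1)
    W = np.zeros((X.shape[1], c))
    for i in range(c):
        w = np.linalg.svd(S, full_matrices=False)[0][:, 0]
        t = X_c @ w                            # latent component scores
        p_load = X_c.T @ t / (t @ t)
        S = S - np.outer(p_load, p_load @ S) / (p_load @ p_load)  # deflate
        W[:, i] = w
    L = X_c @ W
    B = W @ np.linalg.solve(L.T @ L, L.T @ Z_c)   # eq. (5)
    return B, x_bar, z_bar

def mut2ex_predict(X_star, B, x_bar, z_bar):
    return (X_star - x_bar) @ B + z_bar           # eq. (7)

rng = np.random.default_rng(3)
n, p, q = 60, 12, 4
X = (rng.random((n, p)) < 0.3).astype(float)      # binary "mutation" matrix
Z = X @ rng.normal(size=(p, q)) + 0.1 * rng.normal(size=(n, q))

B, x_bar, z_bar = mut2ex_fit(X, Z, c=3)
Z_rec = mut2ex_predict(X, B, x_bar, z_bar)        # reconstructed expression
# A downstream classifier f (e.g. a Random Forest) would be trained on Z_rec.
```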
CHOOSING THE NUMBER OF COMPONENTS
The number of components c used to construct L plays a critical role in determining a PLSR model. The maximal number of latent components that can have non-zero covariance with Z is c_max = min(n − 1, p); for genomic data, in which often p > n, setting c = c_max results in a fully saturated, and thus overfit, model. The number of components c must therefore be chosen to strike a balance between generalizability and preserving variability in the reconstructed expression profile, since the lower the number of components, the more the predictions regress to the mean. Additionally, if the reconstructed expression is also intended to be used within a subsequent model to predict the clinical outcome y, it is critical that the degree of variability in the reconstruction be appropriately aligned with the variability in y. When the ultimate task is estimating y, it is less important that $\hat{Z}^* = z(X^*)$ perfectly capture the true expression profile $Z^*$ in absolute value than that it reflect the correlative relationships between genes in $Z^*$ that are most predictive of y. In this case, we suggest choosing c by cross-validation, minimizing the distance (under an appropriate metric) between y and f(z(X)) rather than between z(X) and Z. For the results we present in Section 3, we found that restricting the model to between 40 and 50 components produced the best overall prediction outcomes; however, we emphasize that the choice of this hyperparameter is context specific and should be tuned to the particular application of interest.
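The cross-validation idea can be sketched as follows. For brevity this uses principal-component regression as a simple stand-in for PLSR (the component-selection logic is identical), scored here by reconstruction error on synthetic data; in the outcome-driven setting described above, the inner error would instead be computed between y and f(z(X)):

```python
import numpy as np

def pcr_fit_predict(X_tr, Z_tr, X_te, c):
    # Principal-component regression as a simple stand-in for PLSR:
    # project X onto its top-c principal directions, regress Z on the scores.
    x_bar, z_bar = X_tr.mean(axis=0), Z_tr.mean(axis=0)
    Xc, Zc = X_tr - x_bar, Z_tr - z_bar
    V = np.linalg.svd(Xc, full_matrices=False)[2][:c].T
    T = Xc @ V
    coef = np.linalg.solve(T.T @ T, T.T @ Zc)
    return (X_te - x_bar) @ V @ coef + z_bar

def choose_components(X, Z, candidates, k=5, seed=0):
    # k-fold CV over the number of components, scored by held-out MSE.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    scores = {}
    for c in candidates:
        errs = []
        for f in folds:
            tr = np.setdiff1d(idx, f)
            Z_hat = pcr_fit_predict(X[tr], Z[tr], X[f], c)
            errs.append(np.mean((Z_hat - Z[f]) ** 2))
        scores[c] = float(np.mean(errs))
    return min(scores, key=scores.get), scores

# Synthetic data with a rank-3 shared signal between X and Z.
rng = np.random.default_rng(4)
T_sig = rng.normal(size=(80, 3))
X = T_sig @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(80, 10))
Z = T_sig @ rng.normal(size=(3, 4)) + 0.05 * rng.normal(size=(80, 4))
best_c, scores = choose_components(X, Z, candidates=[1, 2, 3, 5, 8])
```

With a rank-3 signal, too few components underfit badly, which the CV error surfaces directly.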
FEATURE SET DETERMINATION
The choice of features for which to reconstruct gene expression is dependent on the ultimate prediction task and the gene set available within the input mutation data. We have designed Mut2Ex to effectively handle relatively low-dimensional mutational feature sets corresponding to the size of most commercial panels, and as previously mentioned, the genes to be reconstructed in expression space do not necessarily need to correspond with the available genes in mutation space. Determining an effective reconstructed expression gene set for predicting a clinical outcome y typically requires a combination of biological knowledge (such as previously identified driver genes) and data-driven variable selection procedures. In general, since we are using a far sparser genomic representation of the samples to reconstruct continuous gene expression values, more effective PLSR predictions are obtained for q < p; that is, utilizing more features in mutation space to reconstruct fewer features in expression space.
Results
TRAINING SETUP
To apply the Mut2Ex method to biological data, we selected a training setup based on mutation and expression data measured on 691 cell lines from the Cancer Cell Line Encyclopedia (Barretina et al., 2012). We hypothesized that learning the relationships between the two modalities in the cell line context would capture the biologically meaningful correlations to be extrapolated without the additional noise present in tumor data, which could lead to overfitting and a lack of generalizability. When subsequently applied to real tumor mutation data, the reconstructed expression would then be more likely to contain the relevant biological variation required for further inference or prediction tasks that would have been present in the true expression profiles were they measured, even if the two sets are not identical in value. We posited this translation would be effective since the mutational profiles input to the model tend to be far more stable between cell lines and tumor samples than gene expression. However, we underscore that Mut2Ex may be trained on pre-clinical or real tumor 'omics data depending on the context, and that our particular choice of cell line training data was designed for the applications we have chosen to present. Finally, we note that our implementation of Mut2Ex utilizes functions provided by the pls package in R (Mevik & Wehrens, 2007).
Breast cancer classification
PAM50 GENE EXPRESSION SIGNATURES
To investigate the performance of Mut2Ex on clinical data, we examined the use of reconstructed expression in classifying breast cancer tumors into the PAM50 classification subtypes. PAM50 describes clinically meaningful intrinsic molecular subtypes defined by the mRNA expression of 50 key genes, and has been shown to significantly improve predictions of prognosis compared to other genomic signatures or tumor characteristics (Nielsen et al., 2010;Parker et al., 2009;Filipits et al., 2014). This classification is widely applied clinically to categorize breast cancer tumors into the following 5 subtypes: Luminal A, Luminal B, human epidermal growth factor receptor 2 (HER2)-enriched, Basal-like, and Normal-like. As the subtypes are based on gene expression signatures of the established PAM50 genes, classification of a new tumor sample typically requires assaying the expression level of these same genes using RNA-sequencing, microarray, or qPCR and comparing the similarity of the resulting expression profile to the signatures defined by each subtype.
EVALUATING Mut2Ex-BASED CLINICAL SUBTYPING PERFORMANCE
The clinical importance of PAM50 underlies the highly beneficial impact of being able to accurately classify tumors into the PAM50 subtypes based solely on the commercial mutation panels most commonly measured in practice. To achieve this, we applied Mut2Ex to The Cancer Genome Atlas Breast Cancer (TCGA-BRCA) project data, a compendium containing whole genome sequencing and RNA-Seq data as well as predicted PAM50 subtypes (based on the relevant RNA-Seq signatures) for 571 total breast cancer patient samples (Hospital et al., 2012). We subsetted the mutation features to the 324 genes measured on the FoundationOne CDx diagnostic panel and used Mut2Ex to reconstruct the expression of the 50 genes comprising the PAM50 signature (Figure 2A) (Milbury et al., 2022). Only 11 of the PAM50 genes are included in the FoundationOne panel, so we were primarily reconstructing expression for genes for which we did not have the corresponding mutation statuses. We then trained a Random Forest (RF) classifier on the reconstructed expression to predict the PAM50 molecular subtypes and compared these predictions with the assigned subtypes provided by TCGA-BRCA. We additionally considered the tasks of predicting tumor stage, HER2 status, and ER status. Finally, we note that substituting other common diagnostic panels such as MSK-IMPACT and DFCI-ONCOPANEL within the Mut2Ex workflow in place of FoundationOne led to similar conclusions across all prediction problems.
For all objectives, we compared the prediction accuracy of RF learners trained on Mut2Ex-produced reconstructed expression to RF learners trained on either the true expression profiles for the PAM50 genes provided by TCGA-BRCA or the original binary mutation profiles for the FoundationOne panel genes (Figure 2C). Across the board, the models built on reconstructed expression perform very similarly to models built on true expression and generate highly accurate predictions (AUCs > .9), whereas models built on mutation features perform far worse (AUCs < .6); these results are particularly remarkable given that outcomes such as PAM50 classification are explicitly based on expression. We emphasize the concordance in prediction performance despite the relatively low correlation between the true and reconstructed expression profiles (average Pearson correlation of 0.05 and 0.17 across genes and samples, respectively, Figure 2B), even for specific genes known to be important in breast cancer pathology. Nevertheless, Mut2Ex is still clearly able to extract the meaningful information from mutation status relevant for clinical outcome prediction by leveraging the associative relationships between mutation and expression.
Gene knockout effect prediction
APPLICATION TO GENE DEPENDENCY SCORES
Although precision medicine has been traditionally considered to be a direct derivative of genomics, the majority of patients do not harbor actionable mutations as currently defined. Moreover, even when genomic data from patient tumors is clinically actionable, most patients treated solely according to their genomic panel results do not benefit significantly overall and require additional measurement of phenotypic features to receive impactful clinical interventions (Letai, 2017). A far more informative metric, albeit one impractical to obtain clinically, is a measure of how dependent a tumor cell is on a gene for survival, also known as a dependency score, which can be measured by genetically knocking out a gene in cancer cell models using the CRISPR/Cas-9 system and comparing the proliferation of those models to their unperturbed controls. Applied genome-wide and to a large cohort of cancer cell lines, these functional genetic perturbation screens have already led to the identification of important cancer oncogenes and tumor suppressor genes, revealed the broad essentiality of some genes to cell fitness and the context specificity of others, and helped map the association of genes to functionally distinct and highly biologically relevant pathways (Aguirre et al., 2016;Barbie et al., 2009;Tsherniak et al., 2017;Cowley et al., 2014;Marcotte et al., 2012).
The dependency score represents an interpretable metric of perturbation response and can itself be considered an analyzable outcome in treatment determination. There is consequently a need to improve gene knockout effect score prediction models, and for such models to be simple and explainable in order to inform high-stakes decisions (Rudin, 2019). To this end, we adapted the Mut2Ex framework to reconstruct gene knockout effect scores directly from commercial mutation panels, given that we expect a biological correlation between mutation and knockout effect analogous to that with expression. Unlike the application of Mut2Ex in reconstructing expression to predict further variables, in this case we considered the reconstruction of effect scores as the ultimate objective. As in our breast cancer analysis, we restricted the mutation features for training Mut2Ex to the FoundationOne diagnostic panel in order to jointly reconstruct knockout effect scores for 78 genes that have been shown to be involved in tumorigenesis and tumor invasion; there are 53 genes in common between the two sets. Since the knockout experiments underlying the scores were all performed on cell lines, we limited our reconstruction to the CCLE cell line universe (n = 939) using a cross-validation framework so as to not overlap training and test lines.
COMPARISON TO MODELING APPROACH BY DEPMAP
The Dependency Map (DepMap) project from which the effect scores were obtained additionally provides performance metrics for models they trained on various types of 'omics features to predict dependency scores across nearly 20,000 genes. We accordingly chose to compare the correlation between the reconstructed scores from Mut2Ex and the true effect scores with the correlations achieved by the DepMap 'omics models.
DepMap considered a total of 181,951 features in training their models, including mutation status, RNA-Seq, Copy Number Variation (CNV), methylation profiling, gene fusion, and tissue annotation (Dempster et al., 2020). They investigated different training paradigms given their various feature sets; in particular, an 'all-omics' model drawing from every available feature type listed above, and a 'DNA-only' model for which only DNA-based features (such as mutation and CNV) were given as inputs. For all approaches, feature selection was first performed by training a regularized regression model individually for each gene knockout to identify the top 1000 correlated features. A Random Forest (RF) model was then trained on the selected features to predict the dependency scores for the given gene across all cell lines. DepMap reported that other machine learning models such as elastic net produced similar results to RF. By comparing the performance across their 'omics models and identifying the top correlated features for each perturbation, DepMap concluded that RNA-Seq features are by far the most predictive of gene effect. We note that unlike Mut2Ex, DepMap's approach of training separate models for each gene knockout did not leverage any relationships between gene dependencies.
EVALUATING Mut2Ex-BASED DEPENDENCY SCORE RECONSTRUCTION
For our analysis, we employed two Mut2Ex models; the first was trained only on mutation features as previously described, whereas the second additionally included CNV features for the same set of panel genes in order to examine the potential additional signal carried by copy number alterations within this context. Similar to previous studies, we used the Pearson correlation coefficient between the reconstructed and true effect scores as our performance metric (Dempster et al., 2020;Ben-Hamo et al., 2020). All CNV data was categorized into 3 classes representing deletions, neutral variations, and amplifications. Figure 3 displays the results of our two Mut2Ex models and how they compare to DepMap's models in predicting the chosen 78 genes. We find that the reconstructed effect scores from the most parsimonious mutation-only Mut2Ex model significantly outperform all DepMap models, including those trained on RNA-Seq features in the 'all-omics' paradigm (Figure 3A). Focusing on the top performing genes from the 'DNA-only' DepMap models (Figure 3B), we observe that Mut2Ex still produces more accurate predictions than DepMap in almost all cases. For the majority of genes, the addition of CNV features to Mut2Ex improves prediction performance over mutation alone, indicating the benefit of including these features in training when available; however, even the mutation-only model already outperforms the DepMap baselines.
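The preprocessing and evaluation steps here are straightforward to sketch in numpy. Note that the ±0.3 log-ratio cutoffs below are hypothetical thresholds chosen for illustration only (the text does not specify the cutoffs used to form the 3 CNV classes), and `pearson` mirrors the reported performance metric:

```python
import numpy as np

# Hypothetical log2 copy-ratio cutoffs, for illustration only; the actual
# thresholds used to define the 3 CNV classes are not specified here.
def categorize_cnv(log_ratios, lo=-0.3, hi=0.3):
    """Map continuous CNV values to {-1: deletion, 0: neutral, 1: amplification}."""
    x = np.asarray(log_ratios, dtype=float)
    return np.where(x < lo, -1, np.where(x > hi, 1, 0))

def pearson(a, b):
    """Pearson correlation between reconstructed and true effect scores."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.corrcoef(a, b)[0, 1])

cnv_classes = categorize_cnv([-1.2, 0.0, 0.25, 0.9, -0.31])
r = pearson([0.1, 0.4, 0.35, 0.8], [0.05, 0.38, 0.3, 0.82])
```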
Discussion
In this article, we have demonstrated the ability of Mut2Ex to significantly improve the predictive signal gleaned from limited numbers of mutational features across a variety of clinical applications. By using a PLS-based framework to construct a continuous, expression (or effect)-based representation from mutational features, Mut2Ex explicitly takes advantage of the two primary types of correlation we expect to see from a biological standpoint; first, correlations between genes within each modality, and secondly, correlations between gene sets across modalities. Jointly reconstructing genes of interest in expression space allows Mut2Ex to efficiently borrow shared information across genes using a relatively small number of samples, resulting in better prediction performance than methods that handle genes separately or require large sample sizes to train deep learning models. In predicting gene knockout effect scores, we show the improvements afforded by this joint reconstruction over DepMap's separate prediction models for each gene while training on the same set of cell lines as Mut2Ex.
Other considerable advantages of Mut2Ex are the interpretability of the reconstruction model and flexibility in managing various available input features. Interpretability is a critical factor in deploying machine learning models to make clinical decisions in practice, and thus was a priority in developing Mut2Ex's ability to handle commercial mutation panels. Previous characterizations interpreting the coefficients and latent components from PLS-based models can be applied to Mut2Ex to determine the mutation features most important in reconstructing expression or effect (Kvalheim, 2010;Chun & Keleş, 2010;Tran et al., 2014;Boulesteix & Strimmer, 2006). We leave formalizing the explanatory aspects of Mut2Ex as a direction for future research. Furthermore, other available 'omics features such as copy number variation can be included within training regardless of whether they measure the same genes as in the mutation set. Our results established that the addition of CNV features boosts reconstruction accuracy for key genes and often produces equivalent or better accuracy than models evaluating hundreds of thousands of features (including gene expression). However, we emphasize that these types of additional features are not necessary for strong model performance.
We have presented two different applications of Mut2Ex to either reconstruct gene expression or knockout effect scores.
For the former, the low degree of correlation between reconstructed and true expression was ultimately unimportant in light of the high overall prediction accuracy for the clinical variables of interest. For the latter, we considered the predicted effect scores as the final output, and could reliably do so since the reconstructed effect from Mut2Ex had significantly higher correlation with the true effect scores than in the expression case. We posit this pattern is largely due to the nature of the two types of measurements rather than the degree of overlap between the input mutation features and the genes to be reconstructed, as neither the effect score nor the expression correlation distributions shift for genes included versus excluded from the FoundationOne panel; further exploration of this assertion is an intended future research direction. Gene expression is biologically an intermediary in influencing phenotypes (which is why regression models trained on expression tend to be superior in predicting phenotypic variables), whereas the knockout effect scores represent a measure of overall cell viability in response to perturbations and therefore themselves capture phenotypic changes. In addition, gene expression measurements are subject to variability depending on factors such as equipment choice, batch, and time of measurement in relation to cellular processes, resulting in distributional shifts across studies (Luo et al., 2010;Lazar et al., 2013). The stability of knockout effect scores relative to gene expression offers compelling support as to why Mut2Ex is able to generalize the learned correlation between mutation and effect in absolute value more so than for expression. Even though distributional differences between the training and test expression data result in lower correlation values, the biologically meaningful relationships between mutation and expression in relation to phenotypic variables are preserved.
Because Mut2Ex exhibits this characteristic, an additional benefit of training prediction models on reconstructed expression in place of true expression is the potential for greater generalizability to new datasets. This paradigm requires that every test dataset first be processed by Mut2Ex to construct a representation of expression compatible with the prediction model, thus conceivably mitigating dataset shift and batch effects. Since Mut2Ex takes mutation as its input (a modality far less variable than expression), building regression models on reconstructed expression may reduce confounding influences and standardize training and test data in relation to a unified set of correlative structures learned by Mut2Ex. Investigating this premise further is an area for future research.
Conclusion
We have presented a method for transforming clinical panels measuring mutational features to a continuous representation based on gene expression or knockout effect. Conventional gene panels are not typically informative enough to drive clinical decisions on their own; however, we show that through Mut2Ex, such panels carry enough signal to generate reconstructed expression (or effect) representations that are capable of producing clinically actionable results. Furthermore, this representation is independent of the available mutation features, in the sense that the mutation features may be disjoint from the representation features and specifically selected to optimize the subsequent prediction task. We have also detailed the mathematical formulation of Mut2Ex and the intuition behind how a PLSR-based approach effectively leverages underlying biological correlations between genes in mutation and expression space even in a low-dimensional setting. Through our two data applications, we have illustrated the considerable benefit afforded by reconstructed expression and effect produced by Mut2Ex in prediction tasks of clinical importance compared to models based solely on mutation. Overall, we have shown Mut2Ex to be a powerful tool for more efficiently utilizing the vast amount of existing clinical panel data to improve treatment outcomes and derive actionable insights. Future directions include devising a standard method to optimize the reconstruction gene set based on the available mutation features, as well as identifying an appropriate metric to quantify the application-specific predictive ability of the reconstructed expression for hyperparameter tuning.
Figure 2. Mut2Ex application in various BRCA clinical classification tasks: (A) Illustration of the ML workflow: Mut2Ex was applied to the TCGA-BRCA mutation set subsetted to the genes on the FoundationOne panel in order to reconstruct the expression of the PAM50 signature genes. The output was then used to build individual models to predict various clinical outcomes. (B) Histogram of Pearson correlation coefficients between real and reconstructed expression of all 50 PAM50 genes, across genes (blue) and across BRCA tumor samples (orange). (C) Area under the ROC curve (AUC) for classification models trained on either expression (teal), reconstructed expression (blue), or mutation (red).
Figure 3. Predicting gene knockout effect scores using Mut2Ex: (A) Box plot of Pearson correlation coefficients between reconstructed and true knockout effect scores of the selected 78 genes using Mut2Ex (light purple), DepMap using DNA features only (dark purple), or DepMap using all features (light gray). Significance was determined by a two-sample t-test (**** P < 10^-4). (B) Bar plots of the Pearson correlation coefficients of the top 10 performing genes by the DepMap DNA-feature model.
ZephyrAI, Virginia, USA. Correspondence to: Maya Ramchandran <[email protected]>.
Acknowledgements

We would like to thank Emily Vucic, Dillon Tracy, and Jeff Sherman for their insightful comments, as well as Felicia Kuperwaser and Yoni Schoenberg for their contributions to the data analysis.
References

Aguirre, A., Meyers, R. M., Weir, B., et al. Genomic copy number dictates a gene-independent cell response to CRISPR/Cas9 targeting. Cancer Discovery, 6(8):914-929, 2016.
Avsec, Ž., Agarwal, V., et al. Effective gene expression prediction from sequence by integrating long-range interactions. Nature Methods, 18:1196-1203, 2021.
Barbie, D., Tamayo, P., Boehm, J., Kim, S., et al. Systematic RNA interference reveals that oncogenic KRAS-driven cancers require TBK1. Nature, 462(7269):108-112, 2009.
Barretina, J., Caponigro, G., Stransky, N., Venkatesan, K., Margolin, A., Kim, S., Wilson, C., Lehár, J., et al. The Cancer Cell Line Encyclopedia enables predictive modelling of anticancer drug sensitivity. Nature, 483(7391):603-607, 2012.
Behan, F., Iorio, F., Picco, G., Gonçalves, E., Beaver, C., Migliardi, G., and Garnett, M. Prioritization of cancer therapeutic targets using CRISPR-Cas9 screens. Nature, 568(7753):511-516, 2019.
Ben-Hamo, R., Berger, A. J., Gavert, N., Miller, M., Pines, G., Oren, R., Pikarsky, E., et al. Predicting and affecting response to cancer therapy based on pathway-level biomarkers. Nature Communications, 11(1):1-16, 2020.
Boulesteix, A. L. and Strimmer, K. Partial least squares: a versatile tool for the analysis of high-dimensional genomic data. Briefings in Bioinformatics, 8(1):32-44, 2006.
Chiu, Y., Chen, H., Zhang, T., Zhang, S., Gorthi, A., Wang, L., Huang, Y., and Chen, Y. Predicting drug response of tumors from integrated genomic profiles by deep neural networks. BMC Medical Genomics, 12(1):143-155, 2019.
Chun, H. and Keleş, S. Sparse partial least squares regression for simultaneous dimension reduction and variable selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(1):3-25, 2010.
Costello, J., Heiser, L., Georgii, E., Gonen, M., Menden, M., et al. A community effort to assess and improve drug sensitivity prediction algorithms. Nature Biotechnology, 32(12):1202-1212, 2014.
Cowley, G., Weir, B., Vazquez, F., et al. Parallel genome-scale loss of function screens in 216 cancer cell lines for the identification of context-specific genetic dependencies. Scientific Data, 1(1):1-12, 2014.
Dempster, J., Krill-Burger, J., McFarland, J., Warren, A., Boehm, J., et al. Gene expression has more power for predicting in vitro cancer cell vulnerabilities than genomics. bioRxiv, 2020.
Filipits, M., Nielsen, T., Rudas, M., Greil, R., et al. The PAM50 risk-of-recurrence score predicts risk for late distant recurrence after endocrine therapy in postmenopausal women with endocrine-responsive early breast cancer. Clinical Cancer Research, 20(5):1298-1305, 2014.
Hodis, E., Watson, I., Kryukov, G., Arold, S., et al. A landscape of driver mutations in melanoma. Cell, 150(2):251-263, 2012.
The Cancer Genome Atlas Network (Chin, L., Park, P., Kucherlapati, R., Creighton, C., et al.). Comprehensive molecular portraits of human breast tumours. Nature, 490(7418):61-70, 2012.
de Jong, S. SIMPLS: an alternative approach to partial least squares regression. Chemometrics and Intelligent Laboratory Systems, 18(3):251-263, 1993.
Kvalheim, O. M. Interpretation of partial least squares regression models by means of target projection and selectivity ratio plots. Journal of Chemometrics, 24(7-8):496-504, 2010.
Lazar, C., Meganck, S., Taminau, J., Steenhoff, D., Coletta, A., Molter, C., et al. Batch effect removal methods for microarray gene expression data integration: a survey. Briefings in Bioinformatics, 14(4):469-490, 2013.
Letai, A. Functional precision cancer medicine: moving beyond pure genomics. Nature Medicine, 23(9):1028-1035, 2017.
Liquet, B., de Micheaux, P. L., Hejblum, B. P., and Thiébaut, R. Group and sparse group partial least square approaches applied in genomics context. Bioinformatics, 32(1):35-42, 2015.
Luo, J., Schumacher, M., Scherer, A., Sanoudou, D., Megherbi, D., et al. A comparison of batch effect removal methods for enhancement of prediction performance using MAQC-II microarray gene expression data. The Pharmacogenomics Journal, 10(4):278-291, 2010.
Marcotte, R., Brown, K., Suarez, F., Sayad, A., Karamboulas, K., et al. Essential gene profiles in breast, pancreatic, and ovarian cancer cells. Cancer Discovery, 2(2):172-189, 2012.
Marcotte, R., Sayad, A., Brown, K., Sanchez-Garcia, F., Reimand, J., Haider, M., et al. Functional genomic landscape of human breast cancer drivers, vulnerabilities, and resistance. Cell, 164(1-2):293-309, 2016.
Menden, M., Wang, D., Mason, M., Szalai, B., Bulusu, K., et al. Community assessment to advance computational prediction of cancer drug combinations in a pharmacogenomic screen. Nature Communications, 10(1):1-17, 2019.
Mevik, B.-H. and Wehrens, R. The pls package: principal component and partial least squares regression in R. Journal of Statistical Software, 18(2):1-23, 2007.
Milbury, C., Creeden, J., Yip, W., Smith, D., et al. Clinical and analytical validation of FoundationOne CDx, a comprehensive genomic profiling assay for solid tumors. PLoS One, 17(3):e0264138, 2022.
Nguyen, D. and Rocke, D. On partial least squares dimension reduction for microarray-based classification: a simulation study. Computational Statistics and Data Analysis, 46:407-425, 2004.
Nielsen, T., Parker, J., Leung, S., Voduc, D., Ebbert, M., et al. A comparison of PAM50 intrinsic subtyping with immunohistochemistry and clinical prognostic factors in tamoxifen-treated estrogen receptor-positive breast cancer. Clinical Cancer Research, 16(21):5222-5232, 2010.
Parker, J., Mullins, M., Cheang, M., Leung, S., et al. Supervised risk predictor of breast cancer based on intrinsic subtypes. Journal of Clinical Oncology, 27(8):1160, 2009.
Prasad, V. Perspective: the precision-oncology illusion. Nature, 537(7619):S63, 2016.
Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206-215, 2019.
Shen, T., Pajaro-Van de Stadt, S. H., Yeat, N. C., and Lin, J. C.-H. Clinical applications of next generation sequencing in cancer: from panels, to exomes, to genomes. Frontiers in Genetics, 6:215, 2015.
Tran, T., Afanador, N., Buydens, L., and Blanchet, L. Interpretation of variable importance in partial least squares with significance multivariate correlation (sMC). Chemometrics and Intelligent Laboratory Systems, 138:153-160, 2014.
Tsherniak, A., Vazquez, F., Montgomery, P., Weir, B., et al. Defining a cancer dependency map. Cell, 170(3):564-576, 2017.
van 't Veer, L., Dai, H., van de Vijver, M., He, Y., Hart, A., et al. Gene expression profiling predicts clinical outcome of breast cancer. Nature, 415(6871):530-536, 2002.
West, H. No solid evidence, only hollow argument for universal tumor sequencing: show me the data. JAMA Oncology, 2(6):717-718, 2016.
Wold, H. Path models with latent variables: the NIPALS approach. In Quantitative Sociology, pp. 307-357. Elsevier, 1975.
Wold, H. Partial least squares. In Kotz, S. and Johnson, N. L. (eds.), Encyclopedia of the Statistical Sciences, 6:581-591, 1985.
Yang, T., Aucott, L., Duthie, G., and Macdonald, H. An application of partial least squares for identifying dietary patterns in bone health. Archives of Osteoporosis, 12(1):63, 2017.
Zhou, J. and Troyanskaya, O. Predicting effects of noncoding variants with deep learning-based sequence model. Nature Methods, 12:931-934, 2015.
Unsupervised Pixel-level Road Defect Detection via Adversarial Image-to-Frequency Transform
Jongmin Yu
Duyong Kim
Younkwan Lee
Moongu Jeon
In the past few years, the performance of road defect detection has been remarkably improved thanks to advances in computer vision and deep learning. Although large-scale and well-annotated datasets enhance the performance of detecting road defects to some extent, it is still challenging to derive a model which can perform reliably for various road conditions in practice, because it is intractable to construct a dataset considering diverse road conditions and defect patterns. To this end, we propose an unsupervised approach to detecting road defects, using an Adversarial Image-to-Frequency Transform (AIFT). AIFT adopts an unsupervised manner and adversarial learning in deriving the defect detection model, so AIFT does not need annotations for road defects. We evaluate the efficiency of AIFT using the GAPs384, Cracktree200, CRACK500, and CFD datasets. The experimental results demonstrate that the proposed approach detects various road defects, and that it outperforms existing state-of-the-art approaches.
I. INTRODUCTION
Road defect detection is an important topic of study for preventing vehicle accidents and managing road conditions effectively. All over the United States, road conditions contribute to the frequency and severity of motor vehicle accidents. Almost a third of all motor vehicle crashes are related to poor road conditions, resulting in more than two million injuries and 22,000 fatalities [1]. Over time, as road infrastructure ages, the condition of that infrastructure steadily declines, and the volume and severity of defects increase [2]. Therefore, the need for a method for detecting road defects only increases [3], and numerous studies have been proposed in the literature.
Over the past decades, diverse studies have considered the use of image processing and machine learning approaches with hand-crafted features [4]-[7]. Statistical analysis [4], [6] is the oldest and also the most popular. Acosta et al. [4] and Deutschl et al. [7] have proposed vision-based methods based on partial differential techniques. Chambon et al. [6] have presented a method based on Markovian modelling to take into account the local geometrical constraints of road cracks. Bray et al. [5] have utilized a classification approach using neural networks for identifying road defects. These approaches usually identify road defects using the contrast of texture information on a road surface.
However, the contrast between roads and the defects on them may be reduced by illumination conditions and changes in weather [8]. Additionally, the specifications of the cameras capturing the road surface can also affect detection accuracy. Hence, it is still challenging to develop a defect detection method which can cover various road conditions in the real world using simple image processing or machine learning methods alone [9].
Recently, various approaches [10], [11] based on deep learning have been proposed to overcome these drawbacks. Pauly et al. [10] have proposed a method for road defect detection employing convolutional neural networks (CNNs). Fan et al. [11] have proposed a segmentation method based on CNNs and apply adaptive thresholding. These approaches need a well-annotated dataset for road defects, and their performance may depend on the scale of the given dataset. Regrettably, it is problematic in practice to construct such a dataset containing various patterns of road defects.
Developing an unsupervised method, which does not need annotations for road defects in the training step, is an issue that has long been noted in this literature. Various unsupervised approaches based on image processing and machine learning have been proposed [12], [13]. However, these approaches still have an inherent weakness: their detection performance is highly dependent on camera specifications and image quality. Recently, among the approaches based on deep learning, several studies [14], [15] have presented unsupervised methods using autoencoders [16]. These approaches take normal road images as their training samples and optimize their models so as to minimize the reconstruction errors between their input and output. They recognize defects if the reconstruction errors of the inputted samples are larger than a predefined threshold.
However, according to Perera et al. [17] and Pidhorskyi et al. [18], even when a model based on the reconstruction setting obtains a well-optimized solution, there is a possibility that the model can reconstruct samples which have not appeared in the training step. This can be a significant disadvantage in detecting road defects with such a model. Due to this disadvantage, the model may produce a lower error than expected even when it takes defect samples as input, which can make it hard to distinguish whether a sample contains defects or not.
To tackle this issue, we present an unsupervised approach to detecting road defects, which exploits domain transformation based on adversarial learning. The proposed approach, called Adversarial Image-to-Frequency Transform (AIFT), is trained with normal road images only and needs no annotations for defects. In contrast to other approaches [14], [15] that optimize their models by minimizing reconstruction errors, AIFT concentrates on deriving the mapping function between an image-domain and a frequency-domain in an adversarial manner. To demonstrate the efficiency of the proposed approach for road defect detection, we compare it with various state-of-the-art approaches, including supervised and unsupervised methods. The experimental results show that the proposed approach can outperform existing state-of-the-art methods.
The main contributions of our work are summarized as follows:
• An unsupervised method for detecting road defects, which can provide outstanding performance without a well-annotated dataset for road defects.
• Adversarial learning for deriving the image-to-frequency mapping function. Our approach can derive a more optimal transform model than typical approaches such as reconstruction or classification settings.
• Extensive experiments on road defect detection, including an ablation analysis depending on the loss functions and a comprehensive comparison with existing state-of-the-art methods.

In the following sections, we describe the details of our approach and provide the experimental results and an analysis of them. We conclude this paper by summarizing our work.
II. THE PROPOSED METHOD

A. Adversarial Image-to-Frequency Transform
It is essential to derive a robust model invariant to environments in order to detect a great number of defect patterns on roads. Our method is inspired by novelty detection studies [17], [18], which derive a model using inlier samples only and recognize outliers by computing a likelihood or a reconstruction error. The proposed method, called Adversarial Image-to-Frequency Transform (AIFT), initially derives a transform model between the image-domain and the frequency-domain using normal road pavement images only. The frequency-domain corresponding to the image-domain is generated by applying the Fourier transform to the given image-domain. Road defects are detected by comparing given and generated samples in each domain.
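As a concrete illustration, a frequency-domain sample can be produced from an image-domain sample with a 2-D Fourier transform. The shifted log-magnitude spectrum used below is our assumption for readability; the paper's exact spectrum normalization is not specified in this excerpt.

```python
import numpy as np

def to_frequency(image):
    """Map an image-domain sample to a frequency-domain sample.

    Sketch: 2-D FFT, shifted so the zero-frequency (DC) component sits at
    the center, with a log-magnitude spectrum as the frequency sample.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    return np.log1p(np.abs(spectrum))

# A toy 64x64 "road patch" standing in for a normal pavement image.
patch = np.random.default_rng(0).random((64, 64))
freq = to_frequency(patch)
assert freq.shape == patch.shape  # same spatial layout in both domains
```

For a non-negative image, the DC component dominates, so the brightest point of `freq` lands at the spectrum center after the shift.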
AIFT is composed of three components, a generator G, an image discriminator D_I, and a frequency discriminator D_F, for applying adversarial learning. The original intention of adversarial learning is to learn generative models while avoiding the approximation of many intractable probabilistic computations arising in other strategies, e.g., maximum likelihood estimation. This intention is suitable for deriving an optimal model covering the various visual patterns of road defects. The workflow of AIFT is illustrated in Fig. 1. The generator G plays the role of the mapping function between the image-domain X^I = {X^I_i}_{i=1:n} and the frequency-domain X^F = {X^F_i}_{i=1:n}, i.e., G : X^I ↔ X^F. For convenience of notation, we distinguish the mappings for image-to-frequency G^+ : X^I → X^F and frequency-to-image G^- : X^F → X^I. G generates the transformed results from each domain as
G^+(X^I) = X̂^F,   G^-(X^F) = X̂^I,   (1)
where X̂^F and X̂^I indicate the transformed results from X^I and X^F, respectively. X̂^I and X̂^F are conveyed to the two discriminators D_I and D_F for computing an adversarial loss. For a computationally cost-effective implementation, weight sharing is employed. The discriminators D_I and D_F are defined as follows,
D_*(X_*) = o_*,   o_* ∈ R^1,   (2)
where * denotes the indicator assigning the discriminators D_* ∈ {D_I, D_F} depending on the type of input X_* ∈ {X^I, X^F, X̂^I, X̂^F}. D_I takes X^I and X̂^I as input, and D_F takes X^F and X̂^F as input, respectively. o_* indicates the output o^I or o^F according to the type of input and discriminator. The value of o_* can be regarded as a likelihood discriminating whether a given sample is real or generated. Each component is implemented with CNNs.
B. Adversarial transform consistency learning
As shown in the workflow of AIFT in Fig. 1, the generator G plays the role of a bidirectional mapping function between the image-domain X^I and the corresponding frequency-domain X^F generated from X^I. The underlying assumption for detecting road defects using AIFT is as follows. Since AIFT is trained only with normal road pavement images, if AIFT takes images containing defect patterns as input, the error between the given samples and the transformed results will be larger than for normal ones. Given this assumption, the prerequisite for precise road defect detection with AIFT is deriving a strict transform model between the image-domain and the frequency-domain from a given dataset of normal road pavement images.
To this end, we present an adversarial transform consistency loss for training AIFT. The adversarial transform consistency loss is defined by,
L_ATCL(G, D_I, D_F) = E_{X^I ~ p_{X^I}}[log D_I(X^I)] + E_{X^F ~ p_{X^F}}[log D_F(X^F)]
                    + E_{X̂^F ~ p_{G^+(X^I)}}[log(1 − D_F(G^+(X^I)))]
                    + E_{X̂^I ~ p_{G^-(X^F)}}[log(1 − D_I(G^-(X^F)))],   (3)
where G tries to generate images X̂^I and frequency samples X̂^F via G^- and G^+ that look similar to the given images X^I and frequencies X^F, while D_I and D_F aim to distinguish between given samples (X^I and X^F) and transformed results (X̂^I and X̂^F). Adversarial learning can, in theory, learn a mapping G that produces outputs identically distributed as the image and frequency domains, respectively [19]. However, with large enough capacity, G can map the samples of an input domain to any random permutation of samples in the other domain, where any of the learned mappings can induce an output distribution that matches the target distribution. Thus, the adversarial transform consistency loss alone may not guarantee that the learned function maps an individual input to the desired output.
To further reduce the space of possible mapping functions, we utilize a reconstruction loss to optimize the generator G. It is common to enforce the output of the generator to be close to the target through minimization of the reconstruction error based on the pixel-wise mean square error (MSE) [20]-[23]. It is calculated as
L_re(G) = E_{X^I ~ p_{X^I}}[ ||X^F − G^+(X^I)||_2^2 ] + E_{X^F ~ p_{X^F}}[ ||X^I − G^-(X^F)||_2^2 ].   (4)
Consequently, the total loss function is:

L_total(G, D_I, D_F) = L_ATCL(G, D_I, D_F) + λ L_re(G),   (5)
where λ indicates the balancing parameter weighting the reconstruction loss. Given the definitions of the above loss functions, the discriminators and the generator are trained by maximizing or minimizing the corresponding loss terms, expressed by

arg min_{θ_G} max_{θ_I, θ_F} L_total(G, D_I, D_F),   (6)
where θ_G, θ_I, and θ_F denote the parameters corresponding to the generator G, the image discriminator D_I, and the frequency discriminator D_F. Fig. 3 illustrates examples of the given samples and the transformed results for the image and frequency domains. We have conducted ablation studies to observe the effect of each loss term in training AIFT.
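One evaluation of the combined objective of Eqs. (3)-(5) can be sketched in PyTorch (the library the paper uses). The toy linear modules below are hypothetical stand-ins for the CNN-based G, D_I, and D_F, whose architectures are not given in this excerpt, and writing the adversarial terms as binary cross-entropy on discriminator logits is our assumption.

```python
import torch
import torch.nn.functional as F

# Hypothetical toy networks standing in for the paper's CNNs.
G_plus  = torch.nn.Linear(16, 16)   # G+: image -> frequency
G_minus = torch.nn.Linear(16, 16)   # G-: frequency -> image
D_image = torch.nn.Linear(16, 1)    # D_I, outputs a real/fake logit
D_freq  = torch.nn.Linear(16, 1)    # D_F, outputs a real/fake logit

def total_loss(x_img, x_freq, lam=0.1):
    """L_total = L_ATCL + lambda * L_re, Eq. (5)."""
    fake_freq = G_plus(x_img)     # X-hat^F = G+(X^I)
    fake_img  = G_minus(x_freq)   # X-hat^I = G-(X^F)

    ones  = torch.ones(x_img.size(0), 1)
    zeros = torch.zeros(x_img.size(0), 1)
    # Adversarial transform consistency loss, Eq. (3): real samples
    # labeled 1, transformed samples labeled 0.
    l_atcl = (F.binary_cross_entropy_with_logits(D_image(x_img), ones)
              + F.binary_cross_entropy_with_logits(D_freq(x_freq), ones)
              + F.binary_cross_entropy_with_logits(D_freq(fake_freq), zeros)
              + F.binary_cross_entropy_with_logits(D_image(fake_img), zeros))

    # Reconstruction loss, Eq. (4): pixel-wise MSE against the paired domain.
    l_re = F.mse_loss(fake_freq, x_freq) + F.mse_loss(fake_img, x_img)
    return l_atcl + lam * l_re

loss = total_loss(torch.randn(4, 16), torch.randn(4, 16))
```

In the min-max game of Eq. (6), the discriminator parameters would ascend this objective while the generator parameters descend it, e.g. with two separate Adam optimizers.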
C. Road defect detection
Detecting defects on a road is straightforward. Initially, AIFT produces the frequency sample X̂^F from a given image sample X^I via G^+. Secondly, AIFT transforms X̂^F back into the image sample X̂^I via G^-. Road defects are detected by comparing the given image sample X^I with the transformed result X̂^I.
The similarity metric for comparing the two samples X^I and X̂^I is defined as follows,

d(X^I, X̂^I) = Σ_{i,j} ( x^I_{i,j} log(x^I_{i,j} / m_{i,j}) + x̂^I_{i,j} log(x̂^I_{i,j} / m_{i,j}) ),   (7)
where m_{i,j} is the expectation (mean) of x^I_{i,j} and x̂^I_{i,j}. The above similarity metric is based on the Jeffrey divergence, a modified KL-divergence with the symmetric property. Euclidean distances such as the l1-norm and l2-norm are not suitable as similarity metrics for images since neighboring values are not considered [24]. The Jeffrey divergence is numerically stable, symmetric, and invariant to noise and input scale [25].
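A minimal sketch of the metric of Eq. (7) and the resulting detection rule, assuming m_{i,j} is the per-pixel mean of the two samples; the small `eps` and the threshold `tau` are our own hypothetical stabilization and tuning choices, not values stated in the paper.

```python
import numpy as np

def jeffrey_divergence(x, x_hat, eps=1e-8):
    """Symmetric divergence between an image X^I and its round-trip
    reconstruction X-hat^I; m is the per-pixel mean of the two."""
    m = 0.5 * (x + x_hat)
    return np.sum(x * np.log((x + eps) / (m + eps))
                  + x_hat * np.log((x_hat + eps) / (m + eps)))

def is_defect(x, x_hat, tau=1.0):
    """Flag a defect when the divergence exceeds a threshold tau
    (tau is a hypothetical, tuning-dependent value)."""
    return jeffrey_divergence(x, x_hat) > tau
```

By construction the function is symmetric in its arguments, vanishes when the reconstruction matches the input, and is non-negative for positive-valued images, which is what makes it usable as an anomaly score.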
III. EXPERIMENT

A. Experiment setting and dataset
To evaluate the performance of the proposed method on road defect detection, we employ the best F-measure on the dataset for a fixed scale (ODS), the aggregate F-measure on the dataset for the best scale in each image (OIS), and AIU, which is proposed by Yang et al. [28]. AIU is computed on the detection and ground truth without non-max suppression (NMS) and thinning operations, and is defined by

AIU = (1/N_t) Σ_t N^t_pg / (N^t_p + N^t_g − N^t_pg),
where N t denotes the total number of thresholds t ∈ {0.01, 0.99} with interval 0.01; for a given t, N t pg is the number of pixels of intersected region between the predicted and ground truth crack area; N t p and N t g denote the number of pixels of predicted and ground truth crack region, respectively. The proposed method has been evaluated on four publicly available datasets. The details of the datasets are described as follows.
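The AIU definition above can be sketched as follows; this is a hypothetical reference implementation of the formula as written, and the exact evaluation code of [28] may differ in details.

```python
import numpy as np

def aiu(pred, gt):
    """Average intersection-over-union over thresholds t = 0.01, ..., 0.99.

    pred: float array of per-pixel crack confidences in [0, 1].
    gt:   binary array marking ground-truth crack pixels.
    Computed on raw predictions, i.e. without NMS or thinning.
    """
    gt = gt.astype(bool)
    ious = []
    for t in np.arange(0.01, 1.0, 0.01):
        p = pred >= t
        n_pg = np.logical_and(p, gt).sum()     # N^t_pg
        denom = p.sum() + gt.sum() - n_pg      # N^t_p + N^t_g - N^t_pg
        ious.append(n_pg / denom if denom > 0 else 0.0)
    return float(np.mean(ious))
```

A perfect binary prediction gives AIU = 1, while a prediction disjoint from the ground truth gives AIU = 0, so the reported values (e.g. 0.083 on GAPs384) should be read on that scale.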
GAPs384 dataset is the German Asphalt Pavement Distress (GAPs) dataset presented by Eisenbach et al. [26]; it is constructed to address the issue of comparability in the pavement distress domain by providing a standardized high-quality dataset of large scale. The dataset contains 1,969 grayscale images of road defects, with various defect classes such as cracks, potholes, and inlaid patches. The resolution of the images is 1,920×1,080.
Cracktree200 dataset [29] contains 206 road pavement images with 800×600 resolution, which can be categorized into various types of pavement defects. The images in this dataset are captured under challenging conditions such as shadows, occlusions, low contrast, and noise. CRACK500 dataset is constructed by Yang et al. [28]. The dataset is composed of 500 images with 2,000×1,500 resolution, and each image has a pixel-level annotation. The dataset is separated into a training set and a test set. The training set consists of 1,896 images, and the test set is composed of 1,124 images.
CFD dataset [27] contains 118 images with 480×320 resolution. Each image has a pixel-level annotation and was captured by an iPhone 5 with a focal length of 4 mm, an aperture of f/2.4, and an exposure time of 1/135 s.
The hyperparameter setting for the best performance is as follows. The epoch size and the batch size are 50 and 64, respectively. The balancing weight λ for the reconstruction loss L_re is 0.1, and the critic iteration is set to 10. The networks are optimized with Adam [30]. The proposed approach has been implemented with the PyTorch library 1, and the experiments have been conducted with a GTX Titan XP and 32GB memory.
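For reference, the Adam optimizer [30] used above applies bias-corrected first- and second-moment estimates of the gradient. Below is a minimal numpy sketch of that update rule on a toy quadratic objective; the function name, learning rate, and objective are illustrative choices, not the paper's training configuration.

```python
import numpy as np

def adam_minimize(grad_fn, theta0, steps=1000, lr=0.1,
                  b1=0.9, b2=0.999, eps=1e-8):
    """Adam [30]: moment estimates m, v with bias correction."""
    theta, m, v = theta0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad_fn(theta)
        m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
        v = b2 * v + (1 - b2) * g * g      # second-moment (uncentered var.)
        m_hat = m / (1 - b1 ** t)          # bias corrections
        v_hat = v / (1 - b2 ** t)
        theta -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# minimizing f(x) = (x - 3)^2 drives x toward 3
x = adam_minimize(lambda x: 2.0 * (x - 3.0), 0.0)
```

In the actual experiments the same update is applied per minibatch to the parameters of G, D_I, and D_F via the library's built-in optimizer rather than a hand-rolled loop.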
B. Ablation study
We have conducted an ablation study to observe the effect of the loss function terms on the performance of AIFT. We have trained AIFT using the three loss functions L_re (Eq 4), L_ATCL (Eq 3), and L_total (Eq 5) on GAPs384 dataset and CFD dataset, and observed AIU at every two epochs. The hyperparameter settings applied to train each model are all the same; only the loss functions differ. Fig 4 shows the AIU trends of the AIFTs trained with the three loss functions. Table I contains AIUs, ODSs, and OISs on GAPs384 dataset and CFD dataset. The experimental results show that AIFT trained with the total loss (AIFT_total) achieves the best performance in these experiments. As shown in Table I, AIFT_total achieves 0.083 AIU, 0.247 OIS, and 0.249 ODS on GAPs384 dataset. These figures show that AIFT_total produces approximately 7% better performance than the others.
In the experiments using CFD dataset, AIFT_total achieves 0.203 AIU, 0.701 OIS, and 0.732 ODS, and these figures are all higher than those of the other settings.
Notably, the overall experimental results demonstrate that the AIFTs trained by adversarial learning can outperform the AIFT based on the reconstruction setting (AIFT_re). Not only AIFT_total but also AIFT_ATCL achieves better results than AIFT_re. The AIU trends (Fig 4) also show that the AIFTs learnt in an adversarial manner can outperform the AIFT trained with the reconstruction setting. The experimental results justify that adversarial learning can improve the robustness of AIFT for detecting road defects.
C. Comparison with existing state-of-the-arts
We have carried out a comparison with existing state-of-the-art methods for crack detection [27], [28], [31] and road defect detection [35]. For the efficiency of the experiments, only AIFT_total is compared with the other methods. Table II contains AIUs, OISs, and ODSs on GAPs384, Cracktree200, CRACK500, and CFD datasets. AIFT_total has achieved state-of-the-art performance on GAPs384 dataset, Cracktree200 dataset, and CFD dataset. In the experiments using GAPs384 dataset, AIFT_total achieves 0.083 AIU, 0.247 ODS, and 0.249 OIS. These figures show that AIFT_total outperforms the previous state-of-the-art performance achieved by FPHBN [28], which obtains 0.081 AIU, 0.220 ODS, and 0.231 OIS. AIFT_total shows 3% better performance than FPHBN. The experiments on Cracktree200 dataset and CFD dataset also show that AIFT_total surpasses the other methods. AIFT_total produces 0.045 AIU, 0.607 ODS, and 0.642 OIS in the experiments using Cracktree200 dataset. Additionally, AIFT_total achieves 0.203 AIU, 0.701 ODS, and 0.732 OIS on CFD dataset. These figures are 8.8% and 3% better than the previous state-of-the-art methods.
However, AIFT_total could not obtain the highest performance on CRACK500 dataset. The state-of-the-art performance on CRACK500 dataset is achieved by FPHBN [28], which produces 0.489 AIU, 0.604 ODS, and 0.635 OIS. AIFT_total obtains 0.478 AIU, 0.549 ODS, and 0.561 OIS. The gaps between FPHBN and AIFT_total are 0.011 in AIU, 0.055 in ODS, and 0.074 in OIS. However, FPHBN exploits a supervised approach, and it needs predetermined pixel-level annotations for road defects. Also, the network architecture applied in their approach is much deeper than ours. These are significant advantages of our approach to detecting road defects.
The overall experiments show that AIFT_total can outperform existing state-of-the-art methods. As shown in Table II, the detection performance of AIFT_total surpasses the other unsupervised methods [27], [34]. Additionally, AIFT_total achieves outstanding performance in detecting defects compared with methods based on supervised learning, even though AIFT_total does not need annotations for road defects in the training step. This suggests that AIFT_total can be applied in various practical situations in which a large-scale, well-annotated dataset cannot be used. Consequently, the experimental results demonstrate that AIFT_total can outperform existing state-of-the-art methods.
IV. CONCLUSIONS

In this paper, we have proposed an unsupervised approach to detecting road defects, based on an adversarial image-to-frequency transform. The experimental results demonstrate that the proposed approach can detect various patterns of road defects without explicit annotations for road defects in the training step, and that it outperforms existing state-of-the-art methods in most of the road defect detection experiments.
Fig. 1: Architectural detail of the adversarial image-to-frequency transform. The blue objects denote the operation units, including the generator G and the discriminators D_I and D_F. The red circles indicate the loss functions corresponding to each operation unit. The red arrow lines show the workflow of the image-to-frequency cycle G+: X_I → X̂_F, and the blue arrow lines represent the process of the frequency-to-image cycle G−: X_F → X̂_I. The dotted arrow lines represent the correlations of each component with the loss functions.
Fig. 2: Structural details of the network models in the generator G and the discriminators D_I and D_F. (a) and (b) denote the structural details of the generator G and of the two discriminators D_I and D_F, respectively. The green, blue, and red boxes denote the convolutional layers, the deconvolutional layers, and the fully-connected layers, respectively.
Fig. 3: Comparison of the given and generated samples for the road pavement image and the corresponding frequency.
Fig. 4: Trends of AIU over the training epochs. (a) shows the AIU trend over the training epochs on GAPs384 dataset, and (b) illustrates the AIU trend with respect to the training epochs on CFD dataset. The red curve (AIFT_total) denotes the AIU trend of AIFT trained with the total loss (Eq 5). The green curve (AIFT_GAN) indicates the AIU trend of AIFT trained with the ATCL loss (Eq 3) only. The blue curve (AIFT_re) shows the AIU trend of AIFT trained with the reconstruction loss (Eq 4).
Fig. 5: Visualization of the road defect detection results. The images in the first row are the input images. The images in the second row are the ground truths. The images in the third row show the detection results for road defects.
ACKNOWLEDGMENT This work was partly supported by the ICT R&D program of MSIP/IITP. (2014-0-00077, Development of global multi target tracking and event prediction techniques based on realtime large-scale video analysis).
TABLE I: Quantitative comparison of the detection performance of AIFT on GAPs384 dataset and CFD dataset depending on the loss functions L_re (Eq 4), L_ATCL (Eq 3), and L_total (Eq 5). The bolded figures indicate the best performances in the experiments.

Model      | GAPs384 dataset [26]  | CFD dataset [27]
           | AIU   ODS   OIS       | AIU   ODS   OIS
AIFT_re    | 0.052 0.181 0.201     | 0.152 0.562 0.572
AIFT_GAN   | 0.081 0.226 0.234     | 0.187 0.642 0.659
AIFT_total | 0.083 0.247 0.249     | 0.203 0.701 0.732
TABLE II: Quantitative performance comparison of road defect detection on GAPs384 [26], Cracktree200 [29], CRACK500 [28], and CFD [27] datasets. "-" means the results are not provided. The bolded figures indicate the best performance. 'S/U' denotes whether a model follows a 'supervised' or 'unsupervised' approach. FPS indicates the execution speed of each method, computed by averaging the execution speeds over all datasets.
1 Source codes are publicly available at https://github.com/andreYoo/Adversarial-IFTN.git
REFERENCES

[1] E. Zaloshnja and T. R. Miller, "Cost of crashes related to road conditions, united states, 2006," in Annals of Advances in Automotive Medicine/Annual Scientific Conference, vol. 53, p. 141, Association for the Advancement of Automotive Medicine, 2009.
[2] T. A. Carr, M. D. Jenkins, M. I. Iglesias, T. Buggy, and G. Morison, "Road crack detection using a single stage detector based deep neural network," in 2018 IEEE Workshop on Environmental, Energy, and Structural Monitoring Systems (EESMS), pp. 1-5, IEEE, 2018.
[3] Z. Hadavandsiri, D. D. Lichti, A. Jahraus, and D. Jarron, "Concrete preliminary damage inspection by classification of terrestrial laser scanner point clouds through systematic threshold definition," ISPRS International Journal of Geo-Information, vol. 8, no. 12, p. 585, 2019.
[4] J. A. Acosta, J. L. Figueroa, and R. L. Mullen, "Low-cost video image processing system for evaluating pavement surface distress," Transportation Research Record, no. 1348, 1992.
[5] J. Bray, B. Verma, X. Li, and W. He, "A neural network based technique for automatic classification of road cracks," in The 2006 IEEE International Joint Conference on Neural Network Proceedings, pp. 907-912, IEEE, 2006.
[6] S. Chambon, C. Gourraud, J. M. Moliard, and P. Nicolle, "Road crack extraction with adapted filtering and markov model-based segmentation: introduction and validation," 2010.
[7] E. Deutschl, C. Gasser, A. Niel, and J. Werschonig, "Defect detection on rail surfaces by a vision based system," in IEEE Intelligent Vehicles Symposium, 2004, pp. 507-511, IEEE, 2004.
[8] Y. Sun, E. Salari, and E. Chou, "Automated pavement distress detection using advanced image processing techniques," in 2009 IEEE International Conference on Electro/Information Technology, pp. 373-377, IEEE, 2009.
[9] M. Baygin and M. Karakose, "A new image stitching approach for resolution enhancement in camera arrays," in 2015 9th International Conference on Electrical and Electronics Engineering (ELECO), pp. 1186-1190, IEEE, 2015.
[10] L. Pauly, D. Hogg, R. Fuentes, and H. Peel, "Deeper networks for pavement crack detection," in Proceedings of the 34th ISARC, pp. 479-485, IAARC, 2017.
[11] R. Fan, M. J. Bocus, Y. Zhu, J. Jiao, L. Wang, F. Ma, S. Cheng, and M. Liu, "Road crack detection using deep convolutional neural network and adaptive thresholding," in 2019 IEEE Intelligent Vehicles Symposium (IV), pp. 474-479, IEEE, 2019.
[12] I. Abdel-Qader, S. Pashaie-Rad, O. Abudayyeh, and S. Yehia, "Pca-based algorithm for unsupervised bridge crack detection," Advances in Engineering Software, vol. 37, no. 12, pp. 771-778, 2006.
[13] H. Oliveira and P. L. Correia, "Automatic road crack detection and characterization," IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 1, pp. 155-168, 2012.
[14] A. Mujeeb, W. Dai, M. Erdt, and A. Sourin, "One class based feature learning approach for defect detection using deep autoencoders," Advanced Engineering Informatics, vol. 42, p. 100933, 2019.
[15] G. Kang, S. Gao, L. Yu, and D. Zhang, "Deep architecture for high-speed railway insulator surface defect detection: Denoising autoencoder with multitask learning," IEEE Transactions on Instrumentation and Measurement, 2018.
[16] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion," Journal of Machine Learning Research, vol. 11, no. Dec, pp. 3371-3408, 2010.
[17] P. Perera, R. Nallapati, and B. Xiang, "Ocgan: One-class novelty detection using gans with constrained latent representations," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2898-2906, 2019.
[18] S. Pidhorskyi, R. Almohsen, and G. Doretto, "Generative probabilistic novelty detection with adversarial autoencoders," in Advances in Neural Information Processing Systems, pp. 6822-6833, 2018.
[19] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision, pp. 2223-2232, 2017.
[20] X. Ying, H. Guo, K. Ma, J. Wu, Z. Weng, and Y. Zheng, "X2ct-gan: Reconstructing ct from biplanar x-rays with generative adversarial networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10619-10628, 2019.
[21] Y. Bai, Y. Zhang, M. Ding, and B. Ghanem, "Finding tiny faces in the wild with generative adversarial network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 21-30, 2018.
[22] W. Liu, W. Luo, D. Lian, and S. Gao, "Future frame prediction for anomaly detection - a new baseline," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6536-6545, 2018.
[23] M. Sabokrou, M. Fayyaz, M. Fathy, Z. Moayed, and R. Klette, "Deep-anomaly: Fully convolutional neural network for fast anomaly detection in crowded scenes," Computer Vision and Image Understanding, vol. 172, pp. 88-97, 2018.
[24] Y. Rubner, C. Tomasi, and L. J. Guibas, "The earth mover's distance as a metric for image retrieval," International Journal of Computer Vision, vol. 40, no. 2, pp. 99-121, 2000.
[25] J. Puzicha, T. Hofmann, and J. M. Buhmann, "Non-parametric similarity measures for unsupervised texture segmentation and image retrieval," in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 267-272, IEEE, 1997.
[26] M. Eisenbach, R. Stricker, D. Seichter, K. Amende, K. Debes, M. Sesselmann, D. Ebersbach, U. Stoeckert, and H.-M. Gross, "How to get pavement distress detection ready for deep learning? a systematic approach," in 2017 International Joint Conference on Neural Networks (IJCNN), pp. 2039-2047, IEEE, 2017.
[27] Y. Shi, L. Cui, Z. Qi, F. Meng, and Z. Chen, "Automatic road crack detection using random structured forests," IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 12, pp. 3434-3445, 2016.
[28] F. Yang, L. Zhang, S. Yu, D. Prokhorov, X. Mei, and H. Ling, "Feature pyramid and hierarchical boosting network for pavement crack detection," IEEE Transactions on Intelligent Transportation Systems, 2019.
[29] Q. Zou, Y. Cao, Q. Li, Q. Mao, and S. Wang, "Cracktree: Automatic crack detection from pavement images," Pattern Recognition Letters, vol. 33, no. 3, pp. 227-238, 2012.
[30] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[31] S. Xie and Z. Tu, "Holistically-nested edge detection," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1395-1403, 2015.
[32] Y. Liu, M.-M. Cheng, X. Hu, K. Wang, and X. Bai, "Richer convolutional features for edge detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3000-3009, 2017.
[33] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431-3440, 2015.
[34] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey, "Adversarial autoencoders," arXiv preprint arXiv:1511.05644, 2015.
[35] L. Zhang, F. Yang, Y. D. Zhang, and Y. J. Zhu, "Road crack detection using deep convolutional neural network," in 2016 IEEE International Conference on Image Processing (ICIP), pp. 3708-3712, IEEE, 2016.
Backreaction in Semiclassical Cosmology: the Einstein-Langevin Equation
arXiv:gr-qc/9403043v2, 23 Mar 1994 (umdpp 94-31)

B. L. Hu ([email protected])
Department of Physics, University of Maryland, College Park, MD 20742, USA

A. Matacz
Department of Physics, University of Adelaide, Adelaide 5005, Australia
Using the influence functional formalism we show how to derive a generalized Einstein equation in the form of a Langevin equation for the description of the backreaction of quantum fields and their fluctuations on the dynamics of curved spacetimes. We show how a functional expansion on the influence functional gives the cumulants of the stochastic source, and how these cumulants enter in the equations of motion as noise sources. We derive an expression for the influence functional in terms of the Bogolubov coefficients governing the creation and annihilation operators of the Fock spaces at different times, thus relating it to the difference in particle creation in different histories. We then apply this to the case of a free quantum scalar field in a spatially flat Friedmann-Robertson-Walker universe and derive the Einstein-Langevin equations for the scale factor for these semiclassical cosmologies. This approach based on statistical field theory extends the conventional theory of semiclassical gravity based on a semiclassical Einstein equation with a source given by the average value of the energy momentum tensor, thus making it possible to probe into the statistical properties of quantum fields like noise, fluctuations, entropy, decoherence and dissipation. Recognition of the stochastic nature of semiclassical gravity is an essential step towards the investigation of the behavior of fluctuations, instability and phase transition processes associated with the crossover to quantum gravity. *
Introduction
Backreaction of quantum processes like particle creation in cosmological spacetimes [1] has been considered by many researchers in the past for the purpose of understanding how quantum effects affect the structure and dynamics of the early universe near the Planck time [2,3]. Because of the general nature and complexity of the problem, backreaction studies have also been used as a testing ground for the development and application of different formalisms in quantum field theory in curved spacetime [4], e.g., regularization schemes to obtain a finite energy-momentum tensor, perturbation methods, effective action formalisms, etc. The most recent stage of development for the discussion of cosmological backreaction problems was the use of the Schwinger-Keldysh (or closed-time-path, CTP) functional formalism [5], which, being formulated with in-in boundary conditions, gives rise to a real and causal equation of motion (the semiclassical Einstein equation), where the expectation value of the energy-momentum tensor of a quantum field acts as a source which drives the classical effective geometry. In this equation one can identify a nonlocal kernel in the dissipative term whose integrated dissipative power has been shown to be equal to the energy density of the total number of particles created, thus establishing the dissipative nature of the backreaction process [6,7].
In pursuing a deeper understanding of the statistical mechanics meaning of dissipation, one of us [8] cast this backreaction problem into the conceptual framework of quantum open systems [9]. He made the observation that a Langevin-type equation is what should be expected, and predicted that for quantum fields a colored noise source should appear in the driving term. He also conjectured that the particle creation backreaction problem can be understood succinctly as the manifestation of a general fluctuation-dissipation relation for quantum fields in dynamical spacetimes, a non-equilibrium generalization of such relations depicting particle creation in black holes [10,11] and de Sitter universe [12]. The missing piece in this search is the noise term associated with quantum fields.
To look into this aspect of the backreaction problem in semiclassical gravity, as well as exploring the quantum origin of noise and fluctuations in inflationary cosmology [13], and understanding the decoherence problem in quantum to classical transition [14], Hu, Paz and Zhang [15,16] looked into the relation of colored noise and nonlocal dissipation in a quantum Brownian motion model with the influence functional of Feynman and Vernon [17,18,19]. In this formalism the effects of noise and dissipation can be extracted from the noise and dissipation kernels in the real and imaginary parts of the influence functional, their interrelation residing in the fluctuation-dissipation relation obtained as a simple functional relation. If one views the quantum field as the environment and spacetime as the system in the quantum open system paradigm, then the statistical mechanical meaning of the backreaction problem in semiclassical cosmology can be understood more clearly [8]. In particular, one can identify noise with the coarse-grained quantum fields [20,21], derive the semiclassical Einstein equation as a Langevin equation [22], and understand the backreaction process as the manifestation of a fluctuation-dissipation relation [23]. Continuing their investigation of the backreaction problem via the CTP formalism, Calzetta and Hu [22] also found that the results obtained from the influence functional formalism is the same as that obtained earlier (but displayed only partially) from the Schwinger-Keldysh method. This paradigm has also been applied to problems in quantum cosmology [24]. (For an account of the search and discovery of these ideas, see [25,26].)
The specific goal of this paper is to derive the semiclassical Einstein equation in the form of a Langevin equation. Our primary task is the derivation of noise from the quantum field source, and we do this by carrying out a cumulant expansion of the influence functional. This goal is shared by two other companion papers addressing different aspects of this problem: In Ref. [22], using the closed-time-path method [6,7], Calzetta and Hu identified the source of decoherence and particle creation to the noise kernel and showed their relation through the Bogolubov coefficients. They also showed the relation of quantum noise with classical fluctuations, and derived the semiclassical Einstein equation with a noise term. In Ref. [23] Hu and Sinha started with the density matrix of the universe in quantum cosmology in the manner of [24] and demonstrated the existence of a fluctuation-dissipation relation for the particle creation and backreaction problem in a Bianchi Type-I universe. These two papers together with this one present a quantum open system approach to the backreaction problem in semiclassical gravity and cosmology. Together they can serve as a platform for exploring the transition to quantum cosmology. It can also address the dissipative nature of effective theories [8,27], and (to the extent that Einstein's general relativity can be viewed as an effective theory) possible dissipative effects in the low-energy limit of any theory of quantum gravity. For a general discussion of these ideas, see [28].
This paper is organized as follows: In Sec. 2 we give a brief review of the influence functional formalism, mainly to establish notations. Readers familiar with it can skip to the next section. In Sec 3 we show how a functional expansion on the influence functional gives the cumulants of the stochastic source, and how these cumulants enter into the equations of motion as noise sources. In Sec. 4, following [21], we derive the form of the Hamiltonian for a scalar field in terms of its normal modes and consider a class of actions where the field modes are coupled parametrically to the scale factor of the universe. We derive an expression for the influence functional in terms of the Bogolubov coefficients governing the creation and annihilation operators of the Fock spaces at different times which signify particle creation. In Sec 5, we present two standard cases of cosmological particle creation and derive the Einstein-Langevin equations describing its backreaction on the background spacetime. (2.1)
Influence Functional Theory
Consider the quantum system described by the action

S[a, q] = S[a] + S_e[q] + S_int[a, q]. (2.1)

We will consider a to be our system variable and q to be our environment variables. Typically the environment has infinitely many degrees of freedom, which is indicated here by the bold type.
We will briefly review here the Feynman-Vernon influence functional method for deriving the evolution operator. The method provides an easy way to obtain a functional representation for the evolution operator J_r of the reduced density matrix ρ̂_r. Let us start with the evolution operator J for the full density matrix ρ̂, defined by

ρ̂(t) = J(t, t_i) ρ̂(t_i). (2.2)
As ρ̂ evolves unitarily under the action of (2.1), the evolution operator J has a simple path-integral representation. In the position basis, the matrix elements of the evolution operator are given by

J(a_f, q_f, a'_f, q'_f, t | a_i, q_i, a'_i, q'_i, t_i) = K(a_f, q_f, t | a_i, q_i, t_i) K*(a'_f, q'_f, t | a'_i, q'_i, t_i)
 = ∫_{a_i}^{a_f} Da ∫_{q_i}^{q_f} Dq exp{ (i/ħ) S[a, q] } ∫_{a'_i}^{a'_f} Da' ∫_{q'_i}^{q'_f} Dq' exp{ −(i/ħ) S[a', q'] } (2.3)
where the operator K is the evolution operator for the wave functions. In a path-integral representation, the functional integrals are over all histories compatible with the boundary conditions. We have used the subscripts i, f to denote the initial and final variables at t i , t.
The reduced density matrix is defined as
ρ_r(a, a') = ∫_{−∞}^{+∞} dq ∫_{−∞}^{+∞} dq' ρ(a, q; a', q') δ(q − q') (2.4)
and is propagated in time by the evolution operator J r
ρ_r(a, a', t) = ∫_{−∞}^{+∞} da_i ∫_{−∞}^{+∞} da'_i J_r(a, a', t | a_i, a'_i, t_i) ρ_r(a_i, a'_i, t_i). (2.5)
By using the functional representation of the full density matrix evolution operator given in (2.3), we can also represent J r in path integral form. In general, the expression is very complicated since the evolution operator J r depends on the initial state. If we assume that at a given time t = t i the system and the environment are uncorrelated
ρ̂(t = t_i) = ρ̂_s(t_i) × ρ̂_e(t_i), (2.6)
then the evolution operator for the reduced density matrix does not depend on the initial state of the system and can be written [17] as
J_r(a_f, a'_f, t | a_i, a'_i, t_i) = ∫_{a_i}^{a_f} Da ∫_{a'_i}^{a'_f} Da' exp{ (i/ħ)( S[a] − S[a'] ) } F[a, a'] (2.7)
The factor F [a, a ′ ], called the 'influence functional', is defined as
F[a, a'] = ∫_{−∞}^{+∞} dq_f ∫_{−∞}^{+∞} dq_i ∫_{−∞}^{+∞} dq'_i ∫_{q_i}^{q_f} Dq ∫_{q'_i}^{q_f} Dq' exp{ (i/ħ)( S_e[q] + S_int[a, q] − S_e[q'] − S_int[a', q'] ) } ρ_e(q_i, q'_i, t_i) ≡ exp{ (i/ħ) S_IF[a, a'] } (2.8)

where S_IF[a, a'] is the influence action. The effective action for the open quantum system is defined as
S_eff[a, a'] = S[a] − S[a'] + S_IF[a, a'].
It is not difficult to show that (2.8) has the representation-independent form [21]

F[a, a'] = Tr( Û[a_{t,t_i}] ρ̂_e(t_i) Û†[a'_{t,t_i}] ) (2.9)
where Û is the quantum propagator for the action S_e[q] + S_int[a(s), q], with a(s) treated as a time-dependent classical forcing term. We have found this form to be very convenient for deriving the influence functional.
It is obvious from its definition that if the interaction term is zero, the influence functional is equal to unity and the influence action is zero. In general, the influence functional is a highly non-local object. Not only does it depend on the time history, but, more importantly, it also irreducibly mixes the two sets of histories in the path integral of (2.7). Note that the histories a and a' can be interpreted as moving forward and backward in time, respectively. Viewed in this way, one can see the similarity between the influence functional [17] and the generating functional in the closed-time-path (CTP, or Schwinger-Keldysh) integral formalism [5]. The Feynman rules derived in the CTP method are very useful for computing the influence functional.
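The two properties just noted, F = 1 for vanishing interaction and the irreducible mixing of the two histories, can be checked directly from the operator form (2.9) in a toy model. In the sketch below every choice (a single two-level environment mode, its Hamiltonian, the two drive histories and the coupling value) is invented purely for illustration and is not the field-theoretic model of the later sections.

```python
import numpy as np

# Pauli matrices for a single two-level "environment" mode (toy model only).
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def expm2(H, dt):
    """exp(-i*H*dt) for a 2x2 Hermitian matrix via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w * dt)) @ v.conj().T

def propagator(sigma_history, lam, omega=1.0, dt=0.01):
    """Time-ordered product of short-time propagators for the environment
    Hamiltonian H(s) = (omega/2) sz + lam * sigma(s) * sx, with the system
    history sigma(s) entering only as a classical drive, as in (2.9)."""
    U = np.eye(2, dtype=complex)
    for s in sigma_history:
        U = expm2(0.5 * omega * sz + lam * s * sx, dt) @ U
    return U

ts = np.linspace(0.0, 4.0, 400)
hist_a = np.sin(ts)            # hypothetical forward history
hist_ap = np.cos(ts)           # hypothetical backward history
rho_e = np.array([[1, 0], [0, 0]], dtype=complex)  # environment ground state

def influence(lam):
    Ua = propagator(hist_a, lam)
    Uap = propagator(hist_ap, lam)
    return np.trace(Ua @ rho_e @ Uap.conj().T)

F0 = influence(0.0)   # no coupling: F must be exactly 1
F1 = influence(1.5)   # with coupling: |F| <= 1, both histories enter jointly
print(abs(F0 - 1), abs(F1))
```

With the coupling switched off the two propagators coincide and F is unity; with it on, F depends jointly on both histories and its modulus drops below one.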
In those cases where the initial decoupling condition (2.6) is satisfied, the influence functional depends only on the initial state of the environment. The influence functional method can be extended to more general conditions, such as thermal equilibrium between the system and the environment [29], or correlated initial states [18,19].
Stochastic Forces from the Influence Functional
In this paper we will be interested in models in which the action (2.1) has a Hamiltonian of the general form
H(a, q) = H(a) + H_e(q) + Σ_n λ σ(a, ȧ) ε(q_n, q̇_n) (3.1)

where λ is a coupling constant and σ and ε are arbitrary functions of the system and environment variables. The simplification made in (3.1) is that the system-environment interaction is separable. This ensures that the effect of the environment on the system can be described by a single stochastic source.
Let us introduce the sum and difference system variables

Σ = (1/2) [ σ(a, ȧ) + σ(a', ȧ') ],  ∆ = σ(a, ȧ) − σ(a', ȧ'), (3.2)
and define the real quantities
C_n(t_1, ..., t_n; Σ_{t_1,t_i}, ..., Σ_{t_n,t_i}] = (i/ħ)^{−n} δ^n W[Σ(s), ∆(s)] / δ∆(t_1)...δ∆(t_n) |_{∆=0} (3.3)

where W = ln F. The notation C_1(t_1; Σ_{t_1,t_i}] means that C_1 is a function of t_1 and also a functional of Σ between the endpoints t_i and t_1. Writing the influence action as a functional Taylor series and generalizing the notation to n variables, we find that formally

W[Σ(s), ∆(s)] = (i/ħ) ∫_{t_i}^{t_f} dt_1 ∆(t_1) C_1(t_1; Σ_{t_1,t_i}] − (1/2ħ²) ∫_{t_i}^{t_f} dt_1 ∫_{t_i}^{t_f} dt_2 ∆(t_1)∆(t_2) C_2(t_1, t_2; Σ_{t_1,t_i}, Σ_{t_2,t_i}] + ... + (1/n!)(i/ħ)^n ∫_{t_i}^{t_f} dt_1 ... dt_n ∆(t_1)...∆(t_n) C_n(t_1, ..., t_n; Σ_{t_1,t_i}, ..., Σ_{t_n,t_i}] + ... (3.4)
This form of the influence functional will turn out to be useful for its physical interpretation. From (2.9) and the propagator Û given by

Û[a_{t,t_i}] = Π_n T exp{ −(i/ħ) ∫_{t_i}^{t} ds [ Ĥ_e(q, s) + λ σ(s) ε(q_n, q̇_n) ] }, (3.5)
it is observed that C n is of order λ n in the coupling constant. We can interpret the C n in (3.4) as cumulants of a stochastic force. Consider the action
S[a(s)] = ∫_{t_i}^{t_f} ds [ L(a, ȧ, s) + σ(a, ȧ) ξ(s) ] (3.6)
where ξ(s) is some forcing term. We want to consider the case where ξ(s) is a stochastic force with a normalised probability density functional P[σ(a,ȧ); ξ(s)]. The probability density functional is taken to be conditional on the system history σ(a,ȧ). The action (3.6) generates the influence functional
F[Σ, ∆] = ⟨ exp{ (i/ħ) ∫_{t_i}^{t_f} ξ(s)∆(s) ds } ⟩_ξ ≡ ∫ Dξ P[ξ, Σ] exp{ (i/ħ) ∫_{t_i}^{t_f} ξ(s)∆(s) ds } (3.7)
where Σ and ∆ are defined in (3.2). The essential point is that the influence functional (3.7) has the exact same form as the characteristic function of the stochastic process ξ.
Therefore given any influence functional F [Σ, ∆], we can interpret the C n , given by (3.3), as the cumulants of an effective stochastic force ξ(s) coupled to the system in a way described by the action (3.6). The probability density functional P[ξ, Σ] of ξ(s) can be obtained from a given influence functional by inverting the functional fourier transform (3.7).
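This identification can be illustrated numerically: for a Gaussian process the expansion (3.4) truncates at the second cumulant, so the characteristic functional sampled from noise realizations should match exp{−(1/2) ∆ᵀC₂∆} (with ħ = 1). In the sketch below the exponential correlator and the test function ∆ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized time grid and a chosen (hypothetical) noise correlator C2.
n, dt, tau = 60, 0.1, 0.5
t = dt * np.arange(n)
C2 = np.exp(-np.abs(t[:, None] - t[None, :]) / tau)  # stationary kernel

# Sample many realizations of zero-mean Gaussian noise xi with <xi xi'> = C2.
L = np.linalg.cholesky(C2 + 1e-12 * np.eye(n))
xi = rng.standard_normal((100000, n)) @ L.T

# A fixed test function Delta(t); hbar set to 1.
Delta = np.sin(t)

# Empirical characteristic functional <exp(i * int xi Delta dt)> ...
phase = xi @ (Delta * dt)
F_emp = np.mean(np.exp(1j * phase))

# ... versus the Gaussian (second-cumulant-only) prediction.
F_th = np.exp(-0.5 * (Delta * dt) @ C2 @ (Delta * dt))

print(abs(F_emp - F_th))
```

The agreement (up to sampling error) reflects that all cumulants beyond the second vanish for a Gaussian force, which is exactly the truncation discussed next.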
If we ignore all cumulants after the second order (the cumulants are of order λ n ) we are making a Gaussian approximation to the noise. Although λ is usually assumed to be small for the series (3.4) to converge, the formal expansion in orders of λ is acceptable even if λ = 1, as long as the deviations from Gaussian are small. With the Gaussian approximation we can write the influence functional as
F[a, a'] = ∫ Dξ P[ξ, Σ] exp{ (i/ħ) S_IF[a, a', ξ] } ≡ ⟨ exp{ (i/ħ) S_IF[a, a', ξ] } ⟩_ξ (3.8)

where

P[ξ, Σ] = P_0 exp{ − ∫_{t_i}^{t_f} dt_1 ∫_{t_i}^{t_f} dt_2 ξ(t_1) C_2^{−1}(t_1, t_2; Σ_{t_1,t_i}, Σ_{t_2,t_i}] ξ(t_2) } (3.9)

is the normalised functional distribution of ξ(s) and

S_IF[a, a', ξ] = ∫_{t_i}^{t_f} dt_1 ∆(t_1) [ C_1(t_1; Σ_{t_1,t_i}] + ξ(t_1) ]. (3.10)
We can use this effective action to obtain our semiclassical equation of motion which is given by
δS_eff[a, a', ξ] / δ∆_a(t) |_{∆_a=0} = 0 (3.11)

where ∆_a = a − a'. We find it becomes

[ ∂L/∂a − (d/dt) ∂L/∂ȧ ] + [ ∂σ/∂a − (d/dt) ∂σ/∂ȧ ] [ C_1(t; σ_{t,t_i}] + ξ(t) ] − (∂σ/∂ȧ) [ Ċ_1(t; σ_{t,t_i}] + ξ̇(t) ] = 0 (3.12)
where L(a, ȧ) is the system Lagrangian and ξ(t) is a zero-mean Gaussian stochastic force with correlator

⟨ξ(t)ξ(t')⟩ = C_2(t, t'; σ_{t,t_i}, σ_{t',t_i}]. (3.13)
Clearly both the noise and the driving term still depend on the system history in a complicated way in general. However, we can simplify things further by expanding around a background a = a_b. In this case we approximate the first cumulant by

C_1(t; σ_{t,t_i}] = C_1(t; σ_{t,t_i}]|_{σ=σ_b} + ∫_{t_i}^{t} dt' σ̃(t') μ(t, t') + ... (3.14)

Ċ_1(t; σ_{t,t_i}] = Ċ_1(t; σ_{t,t_i}]|_{σ=σ_b} + ∫_{t_i}^{t} dt' σ̃(t') μ̃(t, t') + ... (3.15)

where σ̃ = σ − σ_b and

μ(t, t') = δC_1(t; σ_{t,t_i}] / δσ(t') |_{σ=σ_b} (3.16)

μ̃(t, t') = δĊ_1(t; σ_{t,t_i}] / δσ(t') |_{σ=σ_b} (3.17)

and we have assumed in (3.15) that μ(t, t') is antisymmetric, as will be the case for our examples. The noise ξ(t) now has the correlator
⟨ξ(t)ξ(t')⟩ = C_2(t, t'; σ_{t,t_i}, σ_{t',t_i}]|_{σ=σ_b}. (3.18)
These approximations greatly simplify (3.12). Our task is then to solve for the fluctuation ã ≡ a − a_b subject to the initial conditions ã(t_i) = dã/dt(t_i) = 0.
Influence Functional for Cosmological Backreaction
In this section, following the methods of [21], we will derive the form of the influence functional in terms of the Bogolubov coefficients of the transformation between the creation and annihilation operators of field amplitudes at different times. First we show how the dynamics of a general real scalar field in an expanding FRW universe can be described by a sum over quadratic time-dependent Hamiltonians. Then we discuss the Bogolubov coefficients in terms of the squeeze parameters [31]. The analysis also applies to gravity-wave perturbations, whose two polarizations obey wave equations of the same form as a massless, minimally coupled scalar field (see [32] for details). The action for a free scalar field in an arbitrary spacetime can be written as the sum of a gravitational action S_g and a matter action S_m of the form
S_g = l_p² ∫ d⁴x √(−g) (R − 2Λ) − 2 l_p² ∫ d³x √(−h) K (4.1)

S_m = (l_p²/2) ∫ d⁴x √(−g) [ g^{μν} ∇_μΦ ∇_νΦ − (m² + ξ_d R) Φ² ] + ξ_d l_p² ∫ d³x √(−h) K Φ² (4.2)

where l_p² = 1/(16πG) and ξ_d = (n − 2)/[4(n − 1)], which in n = 4 dimensions equals 0 for minimal coupling and 1/6 for conformal coupling. Adding a surface term proportional to K, the trace of the extrinsic curvature, is necessary for a consistent variational theory [33] in the treatment of the backreaction problem.
In the spatially-flat Friedmann-Robertson-Walker (FRW) universe with metric

ds² = a²(η) ( dη² − Σ_i dx_i² ) (4.3)

we have R = 6ä/a³ and K = 3ȧ/a², where a dot denotes a derivative with respect to the conformal time η (dη = dt/a), so that

S_g = −V l_p² ∫ dη ( 6ȧ² + 2Λa⁴ ) (4.4)

S_m = (l_p²/2) ∫ d⁴x [ χ̇² − Σ_i (χ_{,i})² − 2(1 − 6ξ) (ȧ/a) χ χ̇ − ( m²a² + (6ξ − 1) ȧ²/a² ) χ² ]. (4.5)
Here χ = aΦ is the rescaled field variable and V is the volume of the universe. From now on we will absorb l_p by rescaling χ and a. We can expand the scalar field in a box of co-moving volume V (fixed coordinate volume),

χ(x) = √(2/V) Σ_k [ q_k⁺ cos k·x + q_k⁻ sin k·x ] (4.6)
which leads to the Lagrangian

L(η) = (1/2) Σ_{σ=±} Σ_k [ (q̇_k^σ)² − 2(1 − 6ξ_d) (ȧ/a) q_k^σ q̇_k^σ − ( k² + m²a² + (6ξ_d − 1) ȧ²/a² ) (q_k^σ)² ] (4.7)

where k = |k| and S_m(η) = ∫ L(η) dη. The canonical momentum is

p_k^σ = ∂L(η)/∂q̇_k^σ = q̇_k^σ − (1 − 6ξ_d) (ȧ/a) q_k^σ. (4.8)
Defining the canonical Hamiltonian in the usual way, we find

H(η) = (1/2) Σ_{σ=±} Σ_k [ (p_k^σ)² + (1 − 6ξ_d) (ȧ/a) ( p_k^σ q_k^σ + q_k^σ p_k^σ ) + ( k² + m²a² + 6ξ_d(6ξ_d − 1) ȧ²/a² ) (q_k^σ)² ] (4.9)
where the sum is over positive k only since we have an expansion over standing rather than travelling waves. The system is quantized by promoting (p σ k , q σ k ) to operators obeying the usual harmonic oscillator commutation relations. In this way the dynamics of the field is reduced to the dynamics of time-dependent harmonic oscillators. (See [21] for details.)
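As a quick consistency check, the Legendre transform of one mode of the Lagrangian (4.7), using the momentum (4.8), reproduces the Hamiltonian (4.9), including the shift of the frequency term from (6ξ_d − 1) to 6ξ_d(6ξ_d − 1). The sketch below verifies this numerically for a single classical mode with randomly chosen parameter values.

```python
import math
import random

random.seed(1)

# One mode of (4.7): L = 0.5*(qdot^2 - 2*b*q*qdot - (k^2 + m^2 a^2 + (6xi-1)(adot/a)^2) q^2)
# with b = (1 - 6*xi)*(adot/a). The claim (4.9) is that the Hamiltonian is
# H = 0.5*(p^2 + 2*b*p*q + (k^2 + m^2 a^2 + 6xi(6xi-1)(adot/a)^2) q^2).
for _ in range(100):
    k, m, a, adot, xi, q, p = (random.uniform(0.1, 2.0) for _ in range(7))
    b = (1 - 6 * xi) * adot / a
    qdot = p + b * q                       # invert p = qdot - b*q, eq. (4.8)
    L = 0.5 * (qdot**2 - 2 * b * q * qdot
               - (k**2 + m**2 * a**2 + (6 * xi - 1) * (adot / a)**2) * q**2)
    H_legendre = p * qdot - L              # Legendre transform
    H_claimed = 0.5 * (p**2 + 2 * b * p * q
                       + (k**2 + m**2 * a**2
                          + 6 * xi * (6 * xi - 1) * (adot / a)**2) * q**2)
    assert math.isclose(H_legendre, H_claimed, rel_tol=1e-12, abs_tol=1e-12)
print("Legendre transform of (4.7) reproduces (4.9)")
```

The check works because (1 − 6ξ)² + (6ξ − 1) = 6ξ(6ξ − 1), which is exactly how the cross-term contribution recombines with the (6ξ − 1) piece of the Lagrangian frequency.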
In the case of a free quantized scalar field coupled to a spatially-flat FRW universe with scale factor a(s) the action thus belongs to the general form
S[a, q] = ∫_{t_i}^{t} ds { L(a, ȧ, s) + Σ_k (1/2) [ m_k(a, ȧ) q̇_k² + b_k(a, ȧ) q_k q̇_k − ω_k²(a, ȧ) q_k² ] }. (4.10)
By tracing out the scalar field we can obtain an influence functional, and from it derive an equation of motion for the scale factor in the semiclassical regime. Since we work explicitly in the semiclassical regime, the environment is quantum while gravity enters classically through the scale factor a.
We want to calculate the influence functional for this model. From (2.9) we see that it is formally given by
F[a, a'] = Π_k Tr( Û_k[a_{t,t_i}] ρ̂_b(t_i) Û_k†[a'_{t,t_i}] ) (4.11)
where Û_k is the quantum propagator for the bath mode in (4.10), with a(s) treated as an arbitrary classical time-dependent function. We have derived this propagator before [30] and will only quote the results here. The result for a particular mode is (we drop the mode label)
Û[a_{t,t_i}] = Ŝ(r, φ) R̂(θ) (4.12)

where

R̂(θ) = e^{−iθB̂},  Ŝ(r, φ) = exp[ r( Â e^{−2iφ} − Â† e^{2iφ} ) ] (4.13)

and

Â = â²/2,  Â† = (â†)²/2,  B̂ = â†â + 1/2. (4.14)

Ŝ and R̂ are called the squeeze and rotation operators, respectively. The parameters r, φ, θ are determined from the equations

α̇ = −i g* β − i h α (4.15)

β̇ = i h β + i g α (4.16)

where

α = e^{−iθ} cosh r,  β = −e^{−i(2φ+θ)} sinh r (4.17)
and

g(t) = (1/2) [ m(t)ω²(t)/c + m(t)b²(t)/(4c) − c/m(t) + i b(t) ] (4.18)

h(t) = (1/2) [ c/m(t) + m(t)ω²(t)/c + m(t)b²(t)/(4c) ]. (4.19)
c is an arbitrary positive real constant that is usually chosen so that g = 0 at t_i. We must have α = 1 and β = 0 at t_i so that the initial condition for the propagator is satisfied. The time dependence of g and h comes directly from a in (4.10). Applying (4.12) to (4.11), we find that the influence functional for a mode in an initial vacuum state is given by

F_k[a, a'] = Σ_n ⟨n| Ŝ(r, φ) R̂(θ) |0⟩ ⟨0| R̂†(θ') Ŝ†(r', φ') |n⟩ (4.20)

where |n⟩ are the usual number states. Using

R̂(θ)|0⟩ = e^{−iθ/2} |0⟩ (4.21)

we find

F_k[a, a'] = Σ_n ⟨n| Ŝ(r, φ) |0⟩ ⟨0| Ŝ†(r', φ') |n⟩ e^{−i(θ−θ')/2}. (4.22)

Evaluating the squeeze-operator matrix elements and making use of

Σ_n [ (2n)! / (n!)² ] xⁿ = 1/√(1 − 4x) (4.24)

we can show that

F[a, a'] = Π_k [ α_k[a'] α_k*[a] − β_k[a'] β_k*[a] ]^{−1/2}. (4.25)
This shows yet another way of deriving the influence functional in terms of the Bogolubov coefficients, in addition to the derivations given in [22].
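The resummation leading to (4.25) rests on the combinatorial identity (4.24), which converges for |x| < 1/4 and is easy to confirm numerically with exact central binomial coefficients. The sample x values below are arbitrary.

```python
from math import comb, sqrt

# Partial sums of sum_n (2n)!/(n!)^2 x^n = sum_n C(2n, n) x^n,
# compared against the closed form 1/sqrt(1 - 4x), valid for |x| < 1/4.
def central_binomial_sum(x, nmax=200):
    return sum(comb(2 * n, n) * x**n for n in range(nmax + 1))

for x in (0.05, 0.1, 0.2):
    exact = 1.0 / sqrt(1.0 - 4.0 * x)
    partial = central_binomial_sum(x)
    assert abs(partial - exact) < 1e-9
print("identity (4.24) checked")
```

Since C(2n, n) ~ 4ⁿ/√(πn), the terms behave like (4x)ⁿ, which makes the |x| < 1/4 convergence condition transparent.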
Einstein-Langevin Equation
From the Hamiltonian (4.9) we see that the system-environment interaction is separable in two cases: the massive conformally coupled field (for which σ = a² in (3.1)) and the massless minimally coupled field (σ = ȧ/a), which also describes gravitons. For these two cases the results from Sec. 3 apply: (3.12) is the appropriate equation describing the backreaction of the quantum scalar field on the metric. To derive the Einstein-Langevin equation we need to compute the first two cumulants given by (3.3) using the influence functional (4.25). The solution of (4.15) and (4.16) can be written as
U[a_{t,t_i}] = T exp{ −i ∫_{t_i}^{t} ds u(s) } (5.1)

where

u(s) = [ h(s)  g*(s) ; −g(s)  −h(s) ] (5.2)

and

U[a_{t,t_i}] = [ α[a_{t,t_i}]  β*[a_{t,t_i}] ; β[a_{t,t_i}]  α*[a_{t,t_i}] ]. (5.3)
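A useful numerical sanity check on (4.15)-(4.16), and hence on the flow (5.1), is that any real h(t) and complex g(t) preserve the hyperbolic normalization |α|² − |β|² = 1 implied by the parametrization (4.17), while |β|² gives the mean number of created particles. The functions g(t) and h(t) below are arbitrary smooth stand-ins, not the specific forms (4.18)-(4.19).

```python
import numpy as np

# Hypothetical smooth "pumping" functions; any real h and complex g work.
def g(t):
    return 0.8 * np.exp(-((t - 2.0) ** 2)) * (1.0 + 0.3j)

def h(t):
    return 1.0 + 0.5 * np.sin(t)

def rhs(t, y):
    """Right-hand side of the Bogolubov equations (4.15)-(4.16)."""
    al, be = y
    return np.array([-1j * np.conj(g(t)) * be - 1j * h(t) * al,
                      1j * h(t) * be + 1j * g(t) * al])

# RK4 integration from the vacuum initial data alpha = 1, beta = 0.
y = np.array([1.0 + 0j, 0.0 + 0j])
t, dt = 0.0, 0.001
for _ in range(4000):
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

alpha, beta = y
n_created = abs(beta) ** 2                     # mean created-particle number
unitarity = abs(alpha) ** 2 - abs(beta) ** 2   # should stay exactly 1
print(unitarity, n_created)
```

The conserved combination |α|² − |β|² = cosh²r − sinh²r is the discrete analogue of unitarity for the squeezed evolution, and its numerical preservation is a good test of any implementation of (5.1).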
The key to calculating the functional derivative of (5.1) is recognizing that we can always write U[a_{t,t_i}] = U[a_{t,τ}] U[a_{τ,t_i}]. We therefore find

δU[a_{t,t_i}]/δ∆(τ) = ( δU[a_{t,τ}]/δ∆(τ) ) U[a_{τ,t_i}] + U[a_{t,τ}] ( δU[a_{τ,t_i}]/δ∆(τ) ). (5.4)
Making use of the formal expression for the time-ordered representation (5.1), it is easy to see that

δU[a_{t,τ}]/δ∆(τ) = −i U[a_{t,τ}] ∫_{τ}^{t} ds δu(s)/δ∆(τ) (5.5)

δU[a_{τ,t_i}]/δ∆(τ) = −i ∫_{t_i}^{τ} ds δu(s)/δ∆(τ) U[a_{τ,t_i}]. (5.6)
Substituting (5.5) and (5.6) into (5.4), we find that

δU[a_{t,t_i}]/δ∆(τ) = −i U[a_{t,τ}] ( ∫_{t_i}^{t} ds δu(s)/δ∆(τ) ) U[a_{τ,t_i}]. (5.7)
Massive conformally coupled field
For the massive conformally coupled case we have σ = a² and

g = (1/2) [ (k² + m²a²) l_p²/c − c/l_p² ],  h = (1/2) [ (k² + m²a²) l_p²/c + c/l_p² ] (5.8)
in (4.18)-(4.19). From (3.3) and (4.25) (we have reinstated the Planck length) we find that the first cumulant of the stochastic force is

C_1(η; a²_{η,η_i}] = −(l_p² m²/2) Σ_{σ=±} Σ_k ⟨ q̃²_η ⟩ = −(l_p² m² ħ/4) Σ_{σ=±} Σ_k (1/c) (α_η + β_η)(α_η + β_η)* (5.9)

where q̃²_η = Û†[a_{η,η_i}] q² Û[a_{η,η_i}] and the average is with respect to the vacuum. The propagator Û is given by (4.12), with the Bogolubov coefficients determined via (4.15)-(4.16) with g, h given by (5.8). We will use this notation below as well. Similarly, for the second cumulant we find
C_2(η, η'; a²_{η,η_i}, a²_{η',η_i}] = −(l_p⁴ m⁴/8) Σ_{σ=±} Σ_k [ ⟨ q̃²_η q̃²_{η'} + q̃²_{η'} q̃²_η ⟩ − 2 ⟨ q̃²_η ⟩ ⟨ q̃²_{η'} ⟩ ] = −(l_p⁴ ħ² m⁴/16) Σ_{σ=±} Σ_k (1/c²) [ (β_η + α_η)² (α*_{η'} + β*_{η'})² + (β*_η + α*_η)² (α_{η'} + β_{η'})² ]. (5.10)
Applying (3.16) to (5.9), we find the dissipation kernel to be

μ(η, η') = (i l_p⁴ m⁴/4ħ) Σ_{σ=±} Σ_k [ ⟨ q̃²_η q̃²_{η'} ⟩ − ⟨ q̃²_{η'} q̃²_η ⟩ ]|_{a²=a_b²} = (i l_p⁴ ħ m⁴/8) Σ_{σ=±} Σ_k (1/c²) [ (β_η + α_η)² (α*_{η'} + β*_{η'})² − (β*_η + α*_η)² (α_{η'} + β_{η'})² ]|_{a²=a_b²}. (5.11)
Again we see the close relation between the noise and dissipation kernels. Using (4.4) and σ = a², we find that the equation of motion (3.12) in the background approximation becomes

ä − (2/3)Λa³ + (a(η)/(6V l_p²)) [ C_1(η; a²_{η,η_i}]|_{a²=a_b²} + ∫_{η_i}^{η} dη' ã²(η') μ(η, η') ] = −(a(η)/(6V l_p²)) ξ(η) (5.12)
where ξ is a zero-mean Gaussian stochastic force with the correlator (5.10) evaluated on the background a_b.
Massless minimally coupled case
For the massless minimally coupled case, σ = ȧ/a,

g = −i (ȧ/a) + (1/2) [ l_p² k²/c − c/l_p² ],  h = (1/2) [ l_p² k²/c + c/l_p² ], (5.13)

we get

C_1(η; (ȧ/a)_{η,η_i}] = −(1/2) Σ_{σ=±} Σ_k ⟨ (p̃q̃ + q̃p̃)_η ⟩ = −(iħ/2) Σ_{σ=±} Σ_k [ α*_η β_η − α_η β*_η ] (5.14)
where p is the canonical momentum from the Lagrangian (4.7) with m = ξ = 0. In this case ∂σ/∂a − (d/dη) ∂σ/∂ȧ = 0, so we see from (3.12) that we must find Ċ_1. Taking the derivative of (5.14) and using (4.15)-(4.16) and (5.13) (with c = l_p² k), we find

Ċ_1(η; (ȧ/a)_{η,η_i}] = ħ Σ_{σ=±} Σ_k k [ α*_η β_η + α_η β*_η ]. (5.15)
For the second cumulant we find

C_2(η, η'; (ȧ/a)_{η,η_i}, (ȧ/a)_{η',η_i}] = (1/8) Σ_{σ=±} Σ_k [ ⟨ (p̃q̃ + q̃p̃)_η (p̃q̃ + q̃p̃)_{η'} + (p̃q̃ + q̃p̃)_{η'} (p̃q̃ + q̃p̃)_η ⟩ − 2 ⟨ (p̃q̃ + q̃p̃)_η ⟩ ⟨ (p̃q̃ + q̃p̃)_{η'} ⟩ ] = (ħ²/4) Σ_{σ=±} Σ_k [ (α²_η − β²_η)(α*²_{η'} − β*²_{η'}) + (α*²_η − β*²_η)(α²_{η'} − β²_{η'}) ]. (5.16)
From (3.17) and (5.15) the dissipation kernel is given by

μ̃(η, η') = −ħ Σ_{σ=±} Σ_k k [ (β²_η + α²_η)(α*²_{η'} − β*²_{η'}) + (β*²_η + α*²_η)(α²_{η'} − β²_{η'}) ]|_{ȧ/a=(ȧ/a)_b}. (5.17)
The equation of motion (3.12) in the background approximation becomes

ä − (2/3)Λa³ − (1/(12V l_p² a(η))) [ Ċ_1(η; (ȧ/a)_{η,η_i}]|_{ȧ/a=(ȧ/a)_b} + ∫_{η_i}^{η} dη' (ȧ(η')/a(η')) μ̃(η, η') ] = ξ̇(η)/(12V l_p² a(η)). (5.18)

We need to know the stochastic properties of ξ̇(η), given that ξ(η) is a zero-mean Gaussian stochastic force with the correlator (5.16) evaluated on a background. We can deduce this by integrating by parts the noise term in the effective action (3.4). We find that (relaxing the notation for C_2)
∫_{η_i}^{η_f} dη_1 dη_2 ∆(η_1)∆(η_2) C_2(η_1, η_2] = surface term
 + ∫_{η_i}^{η_f} dη_1 Γ(η_1) [ (dC_2/dη_1)(η_1, η_i) Γ(η_i) − (dC_2/dη_1)(η_1, η_f) Γ(η_f) + (dC_2/dη_1)(η_i, η_1) Γ(η_i) − (dC_2/dη_1)(η_f, η_1) Γ(η_f) ]
 + ∫_{η_i}^{η_f} dη_1 ∫_{η_i}^{η_f} dη_2 Γ(η_1) Γ(η_2) d²C_2(η_1, η_2] / dη_1 dη_2 (5.19)

where Γ(η) = ∫ dη ∆(η) = ln a − ln a'. The surface term will not contribute to the equation of motion, but the last term of (5.19) shows clearly that ξ̇(η) corresponds to a zero-mean Gaussian stochastic force with the correlator
C_{2ξ̇}(η, η'] = d²C_2(η, η'] / dη dη'. (5.20)
The meaning of the middle term of (5.19) is more difficult to interpret. It vanishes only when the noise is stationary, since we then have C_2(η, η'] = C_2(η − η']. We will not discuss this term further, since it will vanish in the example we consider next. Clearly, though, its meaning will need to be considered in any study of nonstationary backgrounds.
Backreaction of graviton fluctuations about flat space
A simple case to study is a massless minimally-coupled field around a flat background (ã = a). In this case α(η) = e^{−ikη} and β(η) = 0. We see that in this case the first cumulant (5.14) vanishes. This should be compared to the massive field, where the first cumulant is divergent around a flat background. The noise kernel (5.16) becomes

C_2(η − η'] = (ħ²V/32π²) ∫_0^∞ dk k² cos[k(η − η')] = −(ħ²V/32π) δ''(η − η') (5.21)
where a prime on a function denotes a derivative with respect to its argument. From (5.20) we have

C_{2ξ̇}(η − η'] = (ħ²V/32π²) ∫_0^∞ dk k⁴ cos[k(η − η')] = (ħ²V/32π) δ''''(η − η'). (5.22)
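The k-integrals in (5.21)-(5.22) exist only as distributions; a convenient way to see this is to insert a cutoff e^{−εk}, for which ∫₀^∞ e^{−εk} cos(kx) dk = ε/(ε² + x²), a Lorentzian whose area is π for every ε and which therefore tends to π δ(x) as ε → 0 (differentiating under the integral then produces the δ'' and δ'''' of the text). The small numerical sketch below, with arbitrarily chosen ε values and test function, checks the constant area and the smearing property.

```python
import numpy as np

# Regularized mode integral: int_0^inf e^{-eps*k} cos(k*x) dk = eps/(eps^2 + x^2).
# Its area is pi for every eps, and smeared against a smooth test function f
# it approaches pi*f(0) as eps -> 0, i.e. it acts as pi*delta(x).
x = np.linspace(-300.0, 300.0, 1_500_001)
dx = x[1] - x[0]
f = np.exp(-x**2)          # arbitrary smooth test function with f(0) = 1

results = {}
for eps in (0.5, 0.05):
    kernel = eps / (eps**2 + x**2)
    area = np.sum(kernel) * dx             # should stay close to pi
    smeared = np.sum(kernel * f) * dx      # should approach pi*f(0) = pi
    results[eps] = (area, smeared)
    print(eps, area, smeared)
```

The same regularization, with extra factors of k² or k⁴ under the integral, generates the derivative-of-delta kernels quoted in (5.21)-(5.23).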
The dissipation kernel (5.17) becomes

μ̃(η − η') = −(ħV/16π²) ∫_0^∞ dk k³ cos[k(η − η')]. (5.23)
The Einstein-Langevin equation (5.18) becomes

ä − (2/3)Λa³ − (1/(12V l_p² a(η))) ∫_{η_i}^{η} dη' (ȧ(η')/a(η')) μ̃(η − η') = ξ̇(η)/(12V l_p² a(η)) (5.24)
where ξ̇ is a zero-mean Gaussian force with the correlator (5.22). The solutions of the Einstein-Langevin equations discussed here are beyond the scope of the present paper. We plan to consider them in the future in the context of a general study of the dynamics of second-order Langevin equations with non-local dissipation and colored noise [34].
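As a pointer to that future work, the simplest member of this class of equations can already be integrated stochastically. The sketch below is emphatically not (5.24) itself: the non-local kernel μ̃ is replaced by a local damping γ and the colored noise by white noise of strength 2D (both hypothetical stand-ins), and an Euler-Maruyama scheme then reproduces the stationary fluctuation-dissipation value ⟨x²⟩ = D/(γω²).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy second-order Langevin equation (NOT eq. 5.24):
#   xddot + gamma*xdot + omega^2*x = xi,   <xi(t) xi(t')> = 2*D*delta(t-t'),
# integrated with a semi-implicit Euler-Maruyama step over many trajectories.
gamma, omega, D = 0.5, 1.0, 0.2
dt, nsteps, ntraj = 0.002, 40000, 400

x = np.zeros(ntraj)
v = np.zeros(ntraj)
xsq = []
for i in range(nsteps):
    noise = rng.standard_normal(ntraj) * np.sqrt(2 * D * dt)
    v = v + (-gamma * v - omega**2 * x) * dt + noise
    x = x + v * dt
    if i > nsteps // 2:                  # discard the initial transient
        xsq.append(np.mean(x**2))

x2 = np.mean(xsq)
x2_equipartition = D / (gamma * omega**2)   # fluctuation-dissipation prediction
print(x2, x2_equipartition)
```

The balance between the damping and the noise strength in the stationary state is the simplest manifestation of the fluctuation-dissipation relations discussed throughout this paper; the non-local, colored version of this balance is what (5.24) encodes.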
Summary
Together with two related works [22,23], this paper seeks to establish a new framework for the study of semiclassical gravity based on the Einstein-Langevin equation. In [22] the noise and fluctuation terms are identified from the closed-time-path formalism and the Einstein-Langevin equation is derived for perturbations off the Robertson-Walker spacetime.
In [23] the influence functional method is used to derive an equation of motion for the anisotropy matrix of the Bianchi Type-I universe. Dissipation of anisotropy from particle creation in a quantum scalar field is seen to be driven by an additional stochastic source (noise) term related to the fluctuations of particle creation and shown to be a manifestation of a fluctuation-dissipation relation. In this paper, we have derived the following results:
• By carrying out a functional Taylor series expansion on the influence functional we show how the successive orders measure the higher cumulants of noise in its most general (colored and multiplicative) forms, the lowest order truncation yielding a Gaussian noise. The second cumulant gives the autocorrelation function for the stochastic force (noise), which drives the Einstein-Langevin equation.
• Using a general form for the Hamiltonian of a quantum field whose normal modes are coupled parametrically to a curved spacetime, we showed a new way to derive the influence functional in terms of the Bogolubov coefficients between the second-quantized operators of Fock spaces at two different times. This relation connects our new influence functional / effective action method with the traditional canonical quantization approach and thus incorporates the established body of knowledge in quantum field theory in curved spacetimes.
• With the previous two results we were able to express the noise and dissipation kernels in terms of the Bogolubov coefficients. This connection offers a more transparent interpretation of the physical meaning of the many statistical mechanical processes such as decoherence and dissipation in terms of particle creation and related quantum effects.
• We have also derived the form of the Einstein-Langevin equations for some well-studied cases of scalar fields in Robertson-Walker and de Sitter spacetimes. They form the starting points of the next stage of work, which is the solution of these equations for the analysis of fluctuations, instability and phase transition. We hope to report on these problems in future communications.
Acknowledgements We thank Esteban Calzetta and Sukanya Sinha for discussions. This work was done when AM visited the relativity theory group of the University of Maryland. Research is supported in part by the National Science Foundation under grant 91-19726.
References

L. Parker, Phys. Rev. 183, 1057 (1969); R. U. Sexl and H. K. Urbantke, Phys. Rev. 179, 1247 (1969); Ya. B. Zel'dovich, Pis'ma Zh. Eksp. Teor. Fiz. 12, 443 (1970) [JETP Lett. 12, 307 (1970)]; Ya. B. Zel'dovich and A. A. Starobinsky, Zh. Eksp. Teor. Fiz. 61, 2161 (1971) [Sov. Phys. JETP 34, 1159 (1972)].

For reviews on semiclassical gravity see, e.g., V. N. Lukash, I. D. Novikov, A. A. Starobinsky and Ya. B. Zel'dovich, Nuovo Cimento 35B, 293 (1976); B. L. Hu, in Recent Developments in General Relativity: Proc. Second Marcel Grossmann Meeting 1979, ed. R. Ruffini (North Holland, Amsterdam, 1982); J. B. Hartle, in The Very Early Universe, ed. Gibbons et al. (Cambridge University Press, Cambridge, 1983); L. Parker, in The Quantum Theory of Gravity, ed. S. Christensen (Adam Hilger, Bristol, 1986).

Ya. Zel'dovich and A. Starobinsky, Zh. Eksp. Teor. Fiz. 61, 2161 (1971) [Sov. Phys. JETP 34, 1159 (1971)]; B. L. Hu and L. Parker, Phys. Rev. D17, 933 (1978); M. V. Fischetti, J. B. Hartle and B. L. Hu, Phys. Rev. D20, 1757 (1979); J. B. Hartle and B. L. Hu, Phys. Rev. D20, 1772 (1979); D21, 2756 (1980).

N. Birrell and P. W. C. Davies, Quantum Fields in Curved Space (Cambridge University Press, Cambridge, 1982).

J. Schwinger, J. Math. Phys. 2, 407 (1961); L. V. Keldysh, Zh. Eksp. Teor. Fiz. 47, 1515 (1964) [Engl. trans. Sov. Phys. JETP 20, 1018 (1965)].

G. Zhou, Z. Su, B. Hao and L. Yu, Phys. Rep. 118, 1 (1985); Z. Su, L. Y. Chen, X. Yu and K. Chou, Phys. Rev. B37, 9810 (1988); B. S. DeWitt, in Quantum Concepts in Space and Time, ed. R. Penrose and C. J. Isham (Clarendon Press, Oxford, 1986); R. D. Jordan, Phys. Rev. D33, 44 (1986); E. Calzetta and B. L. Hu, Phys. Rev. D35, 495 (1987); J. P. Paz, Phys. Rev. D41, 1054 (1990).

E. Calzetta and B. L. Hu, Phys. Rev. D35, 495 (1987).

E. Calzetta and B. L. Hu, Phys. Rev. D40, 656 (1989).

B. L. Hu, Physica A158, 399 (1989).

See, e.g., E. B. Davies, The Quantum Theory of Open Systems (Academic Press, London, 1976); K. Lindenberg and B. J. West, The Nonequilibrium Statistical Mechanics of Open and Closed Systems (VCH Press, New York, 1990); U. Weiss, Quantum Dissipative Systems (World Scientific, Singapore, 1993).

P. Candelas and D. W. Sciama, Phys. Rev. Lett. 38, 1372 (1977).

D. W. Sciama, in Centenario di Einstein (Editrici Giunti Barbera Universitaria, 1979).

E. Mottola, Phys. Rev. D33, 2136 (1986).

B. L. Hu, J. P. Paz and Y. Zhang, "Quantum Origin of Noise and Fluctuations in Cosmology", in The Origin of Structure in the Universe, edited by E. Gunzig and P. Nardone (Kluwer, Dordrecht, 1993), p. 227.

R. Omnès, Rev. Mod. Phys. 64, 339 (1992); W. H. Zurek, Prog. Theor. Phys. 89, 281 (1993); J. B. Hartle, "Quantum Mechanics of Closed Systems", in Directions in General Relativity, Vol. 1: Misner Festschrift, eds. B. L. Hu, M. P. Ryan and C. V. Vishveswara (Cambridge Univ., Cambridge, 1993).

B. L. Hu, J. P. Paz and Y. Zhang, Phys. Rev. D45, 2843 (1992).

B. L. Hu, J. P. Paz and Y. Zhang, Phys. Rev. D47, 1576 (1993).

R. Feynman and F. Vernon, Ann. Phys. (NY) 24, 118 (1963); R. Feynman and A. Hibbs, Quantum Mechanics and Path Integrals (McGraw-Hill, New York, 1965).

A. O. Caldeira and A. J. Leggett, Physica 121A, 587 (1983); Ann. Phys. (NY) 149, 374 (1983).

H. Grabert, P. Schramm and G. L. Ingold, Phys. Rep. 168, 115 (1988).

B. L. Hu and A. Matacz, "Quantum Noise in Gravitation and Cosmology", invited talk at the Workshop on Fluctuations and Order, ed. M. Millonas (Springer Verlag, Berlin, 1994); Univ. Maryland preprint pp94-44 (1994).

B. L. Hu and A. Matacz, "Quantum Brownian Motion in a Bath of Parametric Oscillators", Univ. Maryland preprint pp93-210 (1993).

E. Calzetta and B. L. Hu, "Noise and Fluctuations in Semiclassical Gravity", Univ. Maryland preprint 93-216 (1993).

B. L. Hu and S. Sinha, "Fluctuation-Dissipation Relation in Cosmology", Univ. Maryland preprint pp93-164 (1993).

J. P. Paz and S. Sinha, Phys. Rev. D45, 2823 (1992).

B. L. Hu, J. P. Paz and S. Sinha, "Minisuperspace as a Quantum Open System", in Directions in General Relativity, Vol. 1: Misner Festschrift, eds. B. L. Hu, M. P. Ryan and C. V. Vishveswara (Cambridge Univ., Cambridge, 1993).

B. L. Hu, "Quantum Statistical Processes in the Early Universe", in Quantum Physics and the Universe, Proc. Waseda Conference, Aug. 1992, ed. M. Namiki et al. (Pergamon Press, Tokyo, 1993); Vistas in Astronomy 37, 391 (1993).

E. Calzetta, B. L. Hu and Yuhong Zhang, "Dissipative Nature of Effective Field Theories" (in preparation).

B. L. Hu, "Quantum Statistical Fields in Gravitation and Cosmology", in Proc. Third International Workshop on Thermal Field Theory and Applications, eds. R. Kobes and G. Kunstatter (World Scientific, Singapore, 1994).

V. Hakim and V. Ambegaokar, Phys. Rev. A32, 423 (1985).

A. Matacz, Phys. Rev. D49, 788 (1994).

B. L. Hu, G. W. Kang and A. Matacz, Int. J. Mod. Phys. A (1994).

L. Grishchuk and Y. V. Siderov, Phys. Rev. D42, 3414 (1990).

G. W. Gibbons and S. W. Hawking, Phys. Rev. D15, 2752 (1977).

J. Z. Simon, private communication.

A. L. Matacz, work in progress.
| zyda_arxiv-1051000 |
Spin-Gauge Theory of Gravity with Higgs-field Mechanism
arXiv:1306.2085v1 [gr-qc] 10 Jun 2013
H. Dehnen
Physics Department, University of Konstanz
Box 5560, D-7750 Konstanz 1

E. Hitzer
Physics Department, University of Konstanz
Box 5560, D-7750 Konstanz 1
We propose a Lorentz-covariant Yang-Mills spin-gauge theory, where the function valued Dirac matrices play the role of a non-scalar Higgs-field. As symmetry group we choose SU(2) × U(1). After symmetry breaking a nonscalar Lorentz-covariant Higgs-field gravity appears, which can be interpreted within a classical limit as Einstein's metrical theory of gravity, where we restrict ourselves in a first step to its linearized version.
I. Introduction
Within the solar system and the binary pulsar PSR 1913 + 16 the classical gravitational interaction is described very well by Einstein's general relativity.
However, this theory (simultaneously the oldest non-abelian gauge theory, with the Poincaré group as gauge group) is not quantizable until now. On the other hand all the other fundamental interactions and their unifications are described successfully by quantizable Lorentz-covariant gauge theories with unitary gauge groups. Therefore the suspicion exists that Einstein's theory represents only a classical macroscopic description of gravity and that the fundamental microscopic gravitational interaction between elementary particles is also described by a unitary gauge group on the Minkowski space-time, in such a way that Einstein's theory of macroscopic gravity is reached as an effective theory within a certain classical limit, similarly as in the strong interaction the nuclear forces follow from quantum chromodynamics. In this way the problem of quantization of gravity and its unification with the other interactions would be solvable.
In this connection the statement is of interest (Dehnen, Frommert and Ghaboussi, 1990) that the scalar Higgs-field of elementary particle physics, the basis of which are of course unitary transformation groups, mediates a Lorentz-invariant attractive gravitational interaction between those elementary particles which become massive by the spontaneous symmetry breaking, i.e. the Higgs-field has its source only in the mass and acts back only on the mass of the particles. The equivalence of inertial and gravitational mass is fulfilled automatically within this Higgs-field gravity. But if the strength of this gravity shall be of the order of the Newtonian one, the mass of the gauge-bosons will be of the order of the Planck-mass. (For this general intention see also Stumpf, 1988.)
For the last reason the standard Higgs-gravity, e.g. within the electroweak interaction (see Dehnen, Frommert, 1991), has presumably nothing to do with usual gravity. However, here the question arises, whether Einstein's tensorial gravity may be a consequence of a more sophisticated Higgs-field, which is especially not a scalar one.
For this we go back to a Yang-Mills SU(2) × U(1) spin-gauge theory of gravity on the Minkowski space-time of special relativity proposed by Dehnen et al. (Dehnen, Ghaboussi, 1985 and 1986; see also Chisholm, Farwell, 1989).
In this theory, where a subgroup of the unitary transformations of Dirac's γ-matrices between their different representations (internal spin group; see also Drechsler, 1988 and Bade, Jehle, 1953; cf. also Barut, 1984) is gauged, the γ-matrices became function valued but remained covariantly constant with respect to the internal spin group, whereas the gravitational interaction is mediated by the four gauge-bosons belonging to the group SU(2) × U(1) and the classical non-euclidian metric is constructed out of them as an effective field in a certain manner.
Here a modification in the sense of the Higgs-field gravity is indicated:
Instead of considering the γ-matrices as covariantly constant it is possible to treat them as true field variables with a Higgs-Lagrange density, and this because also the γ-matrices possess a non-trivial ground-state, namely the usual constant standard representations. Because the γ-matrices can be understood as square root of the metric the gauge group is that of the square root of the metric; moreover, in consequence of this group the several spin states (or particle-antiparticle states) are indistinguishable with respect to the interaction following from gauging the spin group (universality of the interaction). Both properties suggest that real gravity is involved.
In this way we get a quantizable unitary spin-gauge theory with Dirac's γ-matrices as Higgs-fields; on this level a unification with all the other interactions may be possible. After spontaneous symmetry breaking a non-scalar Higgs-gravity appears, which can be identified in a classical limit with Einstein's gravity, where we restrict ourselves in the first step for simplicity to the linear theory. The essential points are the following: the theory is from the beginning only Lorentz-covariant. After symmetry breaking and performing a unitary gauge the action of the excited γ-Higgs-field on the fermions in the Minkowski space-time is reinterpreted as if there would exist non-euclidean space-time connections and a non-euclidean metric (effective metric), in which the fermions move freely; then the deviation from the Minkowski space-time describes classical gravity. This happens, as usual, in the de Donder gauge and not in general coordinate covariance, which depends also on the fact that with the choice of the unitary gauge a gauge fixing is connected. In this way the gravitational constant is produced only by the symmetry breaking and the non-euclidian metric comes out to be an effective field, whereas the gauge-bosons get masses of the order of the Planck-mass and can therefore be neglected in the low energy limit; but in the high energy limit (≃ 10^19 GeV) an additional "strong" gravitational interaction exists. Simultaneously, our results shed new light on the role of the Higgs mechanism.
Finally we note that, as in the previous spin-gauge theory (cf. Ghaboussi, Dehnen and Israelit, 1987), a richer space-time geometrical structure results than only a Riemannian one. We find also an effective non-metricity, whereas an effective torsion does not appear. The question whether it is possible to change the Lagrangian so that the non-metricity vanishes will be clarified in a later paper.
II. The Model
In the beginning we repeat briefly the foundations of the previous work (Dehnen, Ghaboussi, 1986; see also Babu Joseph, Sabir, 1988) so far as necessary. Using 4-spinors it is appropriate to introduce the transformation matrices of the group SU(2) × U(1) in their 4 × 4-representation (a = 0, 1, 2, 3) :
(2.1) U = e^{iλ_a(x^µ) τ^a},
where the SU(2)-generators are given by the Pauli matrices σ^i (i = 1, 2, 3) as follows:

(2.2) τ^i = (1/2) \begin{pmatrix} σ^i & 0 \\ 0 & σ^i \end{pmatrix}.
The U(1)-generator τ_0 may be diagonal and commutes with (2.2); its special form shall be determined only later. Thus the commutator relations for the generators τ^a are
(2.3) [τ^b, τ^c] = i ǫ^a{}_{bc} τ^a,

where ǫ^a{}_{bc} is the Levi-Civita symbol with the additional property of being zero if a, b, or c is zero.
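As a quick numerical sanity check, one can construct the 4×4 generators (2.2) explicitly and verify the commutator algebra (2.3). This sketch is not part of the original paper; all variable names are illustrative.

```python
import numpy as np

# Pauli matrices sigma^i
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

Z2 = np.zeros((2, 2), dtype=complex)

# SU(2) generators in their 4x4 representation, eq. (2.2):
# tau^i = (1/2) diag(sigma^i, sigma^i)
tau = [0.5 * np.block([[s, Z2], [Z2, s]]) for s in sigma]

def eps(i, j, k):
    """Levi-Civita symbol for indices 0, 1, 2."""
    return ((i - j) * (j - k) * (k - i)) / 2

# verify eq. (2.3): [tau^b, tau^c] = i eps^a_bc tau^a
for b in range(3):
    for c in range(3):
        comm = tau[b] @ tau[c] - tau[c] @ tau[b]
        rhs = sum(1j * eps(a, b, c) * tau[a] for a in range(3))
        assert np.allclose(comm, rhs)
```

The check confirms that the block-diagonal doubling of the Pauli matrices in (2.2) leaves the su(2) structure constants unchanged.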
Then the 4-spinor ψ and the Dirac matrices γ_µ transform as

(2.4) ψ′ = Uψ,  γ′_µ = U γ_µ U^{-1},

and the covariant spinor derivative reads

(2.5) D_µ ψ = (∂_µ + igω_µ)ψ
(g gauge coupling constant). The gauge potentials ω_µ obey the transformation law

(2.6) ω′_µ = U ω_µ U^{-1} + (i/g) U_{|µ} U^{-1}
and are connected with the real valued gauge fields ω_{µa} by
(2.7) ω_µ = ω_{µa} τ^a.
According to (2.4) Dirac's γ-matrices become necessarily function valued, in consequence of which we need determination equations for them; as such ones we have chosen in our previous paper, in analogy to general relativity:

(2.8) D_α γ_µ = ∂_α γ_µ + ig [ω_α, γ_µ] = 0,  γ_{(µ} γ_{ν)} = η_{µν} · 1

(η_{µν} = η^{µν} = diag(+1, −1, −1, −1) Minkowski metric). Because the γ-matrices are the formal square root of the metric, the gauge transformations (2.4) are those which are associated with the root of the metric. Therefore the concept described by the formulae (2.1) up to (2.7) may have to do something with gravity. And indeed, in our previous paper we could show that a space-time geometrical interpretation of the theory results in an effective non-euclidian metric given by
(2.9) g_{µν} = ω_{µa} ω_{νb} η^{ab}.
However, the result (2.9) is connected with the condition that the gauge potentials ω_{µa} never vanish and possess a non-trivial ground-state, representing according to (2.9) in the lowest order the Minkowski metric. This is an unusual feature; furthermore the conditions (2.8) are chosen for simplicity.
Therefore it may be justified to give up the relations (2.8) and (2.9) and to consider Dirac's γ-matrices as true field variables with a Higgs-Lagrange density, so that the non-trivial ground-state can be identified with the constant standard representations. It will come out, that after symmetry breaking the excited γ-Higgs-fields mediate a non-scalar Higgs-gravity, which results finally in Einstein's metrical theory, where instead of (2.9) the connection between the effective non-euclidian metric and the γ-Higgs-field will be deduced from a space-time geometrical interpretation of the equation of motion for the 4-momentum of the fermions described by the spinor fields ψ.
III. Lagrange Density and Field Equations
The translation of the model into a field-theoretical description results in a Lagrange density consisting of three minimally coupled Lorentz- and gauge-invariant real valued parts (ħ = 1, c = 1):

(3.1) L = L_M(ψ) + L_F(ω) + L_H(γ̃).
Beginning with the last part, L H (γ) belongs to the γ-Higgs-field and has the form:
(3.2) L H (γ) = 1 2 tr [(D αγ µ )(D αγ µ )] − V (γ) − kψγ µγ µ ψ, where (3.2a) V (γ) = µ 2 2 tr(γ µγ µ ) + λ 4! (trγ µγ µ ) 2
is the Higgs-potential. Herein γ̃_µ denotes from now on the dynamic function valued γ-matrices, which obey the transformation law (2.4) and the ground-states of which are proportional to the constant standard representations γ_µ (bear this change of notation in mind). The last term on the right hand side of (3.2) represents the Yukawa-coupling term for generating the mass of the fermions by the γ-Higgs-field. In view of the electroweak interaction later on γ̃_µ must become isospin valued, which leads to the possibility of unification in an 8-dimensional spin-isospin space (cf. chapt. 6).
The second term on the right hand side of (3.1) is that of the gauge-fields ω_µ:

(3.3) L_F(ω) = −(1/16π) F_{µνa} F^{µν}{}_b s^{ab},
where s^{ab} is the group-metric of SU(2) × U(1) and can be taken here as δ^{ab} (but compare the previous work). The gauge field strengths are defined in the usual manner by

(3.4) F_{µν} = (1/ig) [D_µ, D_ν] = F_{µνa} τ^a

with

(3.4a) F_{µνa} = ω_{νa|µ} − ω_{µa|ν} − g ǫ_a{}^{jk} ω_{µj} ω_{νk}.
The first Lagrangian in (3.1) concerns the fermionic matter fields and takes the form (ψ is only proportional to the Dirac spinor, see (4.4)):

(3.5) L_M(ψ) = (i/2) ψ̄ γ̃^µ D_µ ψ − (i/2) \overline{(D_µ ψ)} γ̃^µ ψ.
The adjoint spinor ψ̄ is given by

(3.6) ψ̄ = ψ† ζ,
wherein ζ represents the SU(2) × U(1)-covariant matrix with the property:
(3.7) (ζ γ̃^µ)† = ζ γ̃^µ.
In view of the commutability of covariant derivative and multiplication with ζ in (3.5) it is further necessary that
(3.8) D_µ ζ = 0.
So long as (see chapt. 4)

(3.9) [γ^0, τ^a] = 0

is valid, the equations (3.7) and (3.8) will be fulfilled only (up to a constant factor) by

(3.10) ζ = γ^0  (ζ† = ζ, ζ² = 1),

so that (3.6) yields as usual ψ̄ = ψ† γ^0. Because of (3.9) the matrix ζ is not only covariant but even invariant under gauge transformations. These results depend essentially on the relation (3.9), which may be not valid in a larger group (e.g. U(4)). Finally we note that one can prove easily, with the use of (4.8), that all three expressions (3.2), (3.3) and (3.5) of the Lagrangian are real valued and contain no dimensional parameter with exception of µ² in (3.2), which has the dimension of a mass square.
The field equations following from the action principle associated with (3.1) are given by the generalized Dirac-equation
(3.11) i γ̃^µ D_µ ψ + (i/2)(D_µ γ̃^µ)ψ − k γ̃_µ γ̃^µ ψ = 0
as well as its adjoint equation, by the inhomogeneous Yang-Mills equation
(3.12) ∂_ν F^{νµ}{}_a + g ǫ_a{}^{bc} F^{νµ}{}_b ω_{νc} = 4π j^µ{}_a

with the gauge currents

(3.12a) j^µ{}_a = j^µ{}_a(ψ) + j^µ{}_a(γ) = (g/2) ψ̄ {γ̃^µ, τ_a} ψ + ig tr([γ̃_α, τ_a] D^µ γ̃^α)
belonging to the matter and the Higgs-field respectively, and by the γ-Higgs-field equation:

(3.13) D_α D^α γ̃_{µA}{}^B + [µ² + (λ/6) tr(γ̃_α γ̃^α)] γ̃_{µA}{}^B = (i/2)[ψ̄^B (D_µ ψ)_A − \overline{(D_µ ψ)}{}^B ψ_A] − k[ψ̄^B (γ̃_µ ψ)_A + (ψ̄ γ̃_µ)^B ψ_A].
Herein the lower capital Latin index A and the upper index B denote the contragradiently transformed rows and columns of the spinorial matrices respectively. The homogeneous Yang-Mills equation following from the Jacobi identity reads:
(3.14) ∂_{[µ} F_{νλ]a} + g ω_{k[µ} F_{νλ]j} ǫ^{kj}{}_a = 0.
Finally we note the conservation laws valid modulo the field equations.
First, from (3.12) the gauge current conservation follows immediately:
(3.15) ∂_µ (j^µ{}_a + (g/4π) ǫ_a{}^{bc} F^{µν}{}_b ω_{νc}) = 0.
Secondly, the energy-momentum law takes the form
(3.16) ∂_ν T_µ{}^ν = 0,
where T_µ{}^ν is the gauge-invariant canonical energy-momentum tensor consisting of three parts corresponding to (3.1):

(3.17) T_µ{}^ν = T_µ{}^ν(ψ) + T_µ{}^ν(ω) + T_µ{}^ν(γ)
with (modulo Dirac-equation):

(3.18a) T_µ{}^ν(ψ) = (i/2)[ψ̄ γ̃^ν D_µ ψ − \overline{(D_µ ψ)} γ̃^ν ψ],

(3.18b) T_µ{}^ν(ω) = −(1/4π)[F_{µαa} F^{να}{}^a − (1/4) F^{αβ}{}_a F^a{}_{αβ} δ_µ^ν],

(3.18c) T_µ{}^ν(γ) = tr[(D^ν γ̃_α)(D_µ γ̃^α)] − δ_µ^ν [(1/2) tr((D_α γ̃_β)(D^α γ̃^β)) − (µ²/2) tr(γ̃_α γ̃^α) − (λ/4!) (tr γ̃_α γ̃^α)²].
Because of the Yukawa-coupling term in (3.2) the trace of (3.18a) does not vanish. With the use of the Dirac-equation (3.11) and its adjoint equation one finds:
(3.19) T_µ{}^µ(ψ) = k ψ̄ γ̃_µ γ̃^µ ψ.
By insertion of (3.18) into (3.17) one obtains from (3.16) the equation of motion for the fermions:

(3.20) ∂_ν T_µ{}^ν(ψ) = −(i/2)[ψ̄ (D_µ γ̃^α)(D_α ψ) − \overline{(D_α ψ)} (D_µ γ̃^α) ψ] + k ψ̄ {D_µ γ̃_α, γ̃^α} ψ + F_{µαa} j^{αa}(ψ).

Integration over the space-like hypersurface t = const. and neglect of surface integrals at space-like infinity yield the momentum law for the 4-momentum p_µ = ∫ T_µ{}^0(ψ) d³x of the fermions. On the right hand side of (3.20) one recognizes the Lorentz-forces of the gauge fields and the force of the γ-Higgs-field.
We finish with two remarks. First, the energy-momentum tensor T_µ{}^ν(γ), eq. (3.18c), does not vanish for the ground-state, see (4.2), but has the value:

(3.21) T_µ{}^ν(γ̃^{(0)}) = −(3/2)(µ⁴/λ) δ_µ^ν.

However this can be renormalized to zero by changing the Higgs-potential (3.2a) correspondingly.
IV. Spontaneous Symmetry Breaking
Although one can recognize the gravitational structure already in equations (3.13) and (3.20), the space-time geometrical interpretation is only possible after symmetry breaking. The minimum of the energy-momentum tensor (3.18) in absence of matter and gauge fields is reached when the Higgs-field satisfies

(4.1) tr(γ̃_µ γ̃^µ) = −6µ²/λ = v²  (µ² < 0).
Simultaneously, herewith all field equations (3.11) up to (3.14) are fulfilled.
The ground-state γ̃^{(0)}_µ of the γ-Higgs-field must be proportional to the (constant) Dirac standard representation γ_µ, i.e. γ̃^{(0)}_µ = b γ_µ. Insertion into (4.1) results, because of {γ_µ, γ_ν} = 2η_{µν} · 1, in b = v/4, so that we have for the ground-state:

(4.2) γ̃^{(0)}_µ = (v/4) γ_µ.
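The step from (4.1) to (4.2) can be checked numerically: with the standard representation one has tr(γ_µ γ^µ) = 16, so the ansatz γ̃_µ = b γ_µ gives b²·16 = v², i.e. b = v/4. The following sketch (not part of the original paper; the value of v is illustrative) verifies both the Clifford algebra {γ_µ, γ_ν} = 2η_{µν}·1 and this normalization.

```python
import numpy as np

I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Dirac standard representation: gamma^0 = diag(1,1,-1,-1), gamma^i off-diagonal
gamma = [np.block([[I2, Z2], [Z2, -I2]])] + \
        [np.block([[Z2, s], [-s, Z2]]) for s in sigma]
eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric (+,-,-,-)

# Clifford algebra: {gamma_mu, gamma_nu} = 2 eta_{mu nu} * 1
for m in range(4):
    for n in range(4):
        anti = gamma[m] @ gamma[n] + gamma[n] @ gamma[m]
        assert np.allclose(anti, 2 * eta[m, n] * np.eye(4))

# tr(gamma_mu gamma^mu) = 16 (trace of 4 times the 4x4 unit matrix)
tr = sum(eta[m, m] * np.trace(gamma[m] @ gamma[m]) for m in range(4))
assert np.isclose(tr.real, 16.0)

# hence the ground state (4.2) with b = v/4 satisfies (4.1): tr = v^2
v = 2.0  # illustrative ground-state value
gt = [(v / 4) * g for g in gamma]
tr_gs = sum(eta[m, m] * np.trace(gt[m] @ gt[m]) for m in range(4))
assert np.isclose(tr_gs.real, v**2)
```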
Herewith the Lagrange density (3.5) for the spinorial matter fields reads, considering the γ-Higgs-field ground-state only:

(4.3) (i/2) ψ̄ (v/4) γ^µ ∂_µ ψ + h.c.
Comparison with the usual Dirac Lagrangian (i/2) ψ̄_DIR γ^µ ∂_µ ψ_DIR results in (ψ_DIR Dirac spinor):

(4.4) ψ = (2/√v) ψ_DIR.
Herewith the fermionic mass term in (3.2), identical with the trace (3.19) of the energy-momentum tensor T_µ{}^ν(ψ), takes the form for the ground-state γ̃^{(0)}_µ:

(4.5) T_µ{}^µ(ψ_DIR) = ψ̄_DIR m ψ_DIR
with the mass:
(4.5a) m = kv.
On the other hand the Higgs-field gauge current j^µ{}_a(γ) gives rise after symmetry breaking to the mass of the gauge-bosons ω^µ{}_a. In the lowest order we find from (3.12a) with the use of (4.2):

(4.6) −4π j^µ{}_a(γ̃^{(0)}) = M²_{ab} ω^{µb},  M²_{ab} = M²_{ab}{}^{ρν} η_{ρν},  M²_{ab}{}^{ρν} = −(π/4) g² v² tr([τ_a, γ^ρ][τ_b, γ^ν]).
Here it is convenient to choose the U(1)-generator explicitly. If we take the unit matrix, the gauge-boson ω^µ{}_0 remains massless of course (rest symmetry) and must be taken into account also in the low energy limit. In order to avoid this, the first possibility consists, in view of (3.9), in the choice τ_0 = (1/2)γ^0. Doing this we obtain from (4.6) with the use of (2.2) the diagonal mass matrix for the gauge-bosons:

(4.7) M²_{00} = 3π g² v²,  M²_{ij} = 2π g² v² δ_{ij}

and zero otherwise. As we will see later the value of (4.7) is of the order of the square of the Planck-mass (≃ 10^19 GeV), so that all gauge-bosons can be neglected in the low energy limit. A second possibility for avoiding the ω^µ{}_0-boson exists in retaining the unit matrix for τ_0 but choosing the associated gauge-coupling constant g_1 sufficiently small (g_1 << 1). This choice has the advantage that the unitary SU(2) × U(1) transformation (2.1) up to (2.4) is exactly identical with that of the 2-spinors (resulting from a decomposition of the chiral representation).
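The diagonal mass matrix (4.7) follows from the trace formula (4.6) by direct computation. The sketch below (illustrative, not from the paper) evaluates M²_{ab} = −(π/4) g² v² tr([τ_a, γ^ρ][τ_b, γ^ν]) η_{ρν} with τ_0 = (1/2)γ^0 and the τ_i of (2.2), and recovers M²_{00} = 3πg²v², M²_{ij} = 2πg²v²δ_{ij} with vanishing off-diagonal elements.

```python
import numpy as np

I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Dirac standard representation
gamma = [np.block([[I2, Z2], [Z2, -I2]])] + \
        [np.block([[Z2, s], [-s, Z2]]) for s in sigma]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# generators: tau_0 = (1/2) gamma^0 (first choice in the text), tau_i from (2.2)
tau = [0.5 * gamma[0]] + [0.5 * np.block([[s, Z2], [Z2, s]]) for s in sigma]

g, v = 1.0, 1.0  # illustrative coupling constant and ground-state value

def comm(a, b):
    return a @ b - b @ a

# eq. (4.6): M^2_ab = -(pi/4) g^2 v^2 tr([tau_a, gamma^rho][tau_b, gamma^nu]) eta_{rho nu}
M2 = np.zeros((4, 4), dtype=complex)
for a in range(4):
    for b in range(4):
        M2[a, b] = -(np.pi / 4) * g**2 * v**2 * sum(
            eta[r, r] * np.trace(comm(tau[a], gamma[r]) @ comm(tau[b], gamma[r]))
            for r in range(4))

# eq. (4.7): M^2_00 = 3 pi g^2 v^2, M^2_ij = 2 pi g^2 v^2 delta_ij
expected = np.pi * g**2 * v**2 * np.diag([3.0, 2.0, 2.0, 2.0])
assert np.allclose(M2, expected)
```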
As one can prove easily, the general Higgs-field γ̃_µ can be represented, if no spin orientation is present (classical limit), by

(4.8) γ̃_µ(x^α) = h_µ{}^λ(x^ν) U γ̃^{(0)}_λ U^{-1},

so that it can be reduced within the unitary gauge as usual to the ground-state (4.2) in the following way

(4.8a) γ̃_µ(x^ν) = h_µ{}^λ(x^ν) γ̃^{(0)}_λ,

where

(4.8b) h_µ{}^λ(x^ν) = δ_µ^λ + ǫ_µ{}^λ(x^ν)
and ǫ_µ{}^λ(x^ν) describes the deviations from the ground-state, i.e. the excited Higgs-field. Herewith we are able to write down all field equations after symmetry breaking exactly in a non-matrix valued form. Of course, the h_µ{}^λ(x^ν) look like a tetrad-field, but their determination and connection with the effective non-euclidean metric follow only from the γ-Higgs-field equation after symmetry breaking.
V. Field Equations after Symmetry Breaking and Gravitational Interaction
In this section we restrict ourselves in a first step for simplicity to the linearized theory, i.e. |ǫ_µ{}^λ| << 1 (weak field limit). We start in view of the gravitational aspect with the Higgs-field equation (3.13). Going over from a spinorial description to a Lorentz-tensorial equation we multiply (3.13) at first by γ^λ{}_B{}^A. Then after insertion of (4.2), (4.4), (4.5), (4.8a) and (4.8b) we obtain, linearized in ǫ_µ{}^λ, under neglect of the gauge-boson interaction because of (4.7) (low energy limit):

(5.1) ∂_α ∂^α ǫ_{µλ} − (µ²/2) ǫ η_{µλ} = (4/v²) [T_{µλ}(ψ_DIR) − (1/2) T(ψ_DIR) η_{µλ}],

where

(5.1a) T_{µλ}(ψ_DIR) = (i/2)[ψ̄_DIR γ_λ D_µ ψ_DIR − \overline{(D_µ ψ_DIR)} γ_λ ψ_DIR]
is the usual (canonical) Dirac energy-momentum tensor. Obviously the antisymmetric and the traceless symmetric parts of ǫ_{µλ} remain massless, whereas the scalar trace ǫ = ǫ_{µλ} η^{µλ} possesses the Higgs-mass:

(5.1b) M = √(−2µ²).
It seems that in the standard model of electroweak interaction only this scalar part of the total Higgs-field is taken into account. Furthermore, if (5.1) shall describe usual gravity, v² ∼ G^{-1} (G Newtonian gravitational constant) must be valid, so that (4.7) is indeed of the order of the square of the Planck-mass M_Pl = 1/√G.
Before comparing (5.1) with Einstein's field equations it is appropriate to interpret at first the Higgs-field forces in (3.20) geometrically, where in the low energy limit the Lorentz-forces of the gauge fields can be neglected.
Insertion of (4.2), (4.4), (4.8a) and (4.8b) into (3.20) gives, with respect to (3.18a) and (5.1a), linearized with regard to ǫ_µ{}^λ:

(5.2) ∂_ν T^{µν}(ψ_DIR) = −∂_ν ǫ^{νρ} T^µ{}_ρ(ψ_DIR) − ∂^µ ǫ_{ρν} T^{ρν}(ψ_DIR) + (1/2) ∂^µ ǫ T(ψ_DIR).

The equations (5.1) and (5.2) describe the γ-Higgs-field interaction in its linearized version, which is obviously very similar to that of general relativity. Now, the comparison of (5.2) with the energy-momentum law of a classical affine geometrical theory with the affine connections Γ^µ_{νρ},

(5.3) D^{(Γ)}_ν T^{µν} = 0 ⇒ ∂_ν T^{µν} = −Γ^ν_{νρ} T^{µρ} − Γ^µ_{νρ} T^{ρν},
is possible. Within this classical procedure we neglect all spin-influences, in consequence of which, cf. (5.7), T^{ρν}(ψ_DIR) ≡ T^{ρν} = T^{νρ} is a symmetric tensor; thus we find the unique identification:

(5.4) Γ^µ_{νρ} = ∂^µ(ǫ_{ρν} − (1/2) ǫ η_{ρν}) + (1/14)(∂_ν ǫ δ^µ_ρ + ∂_ρ ǫ δ^µ_ν),

if

(5.4a) ∂_ν ǫ^{[ρν]} = 0

is valid, see (5.6b). Consequently, in the space-time geometrical limit the excited Higgs-field ǫ_µ{}^λ, or more precisely its derivatives, plays effectively the role of affine connections (effective connections). Their field equations are obtained in the following way: assuming a negligible Higgs-mass (5.1b), the equations (5.1) take the form

(5.5a) ∂_α ∂^α (ǫ_{(µν)} − (1/2) ǫ η_{µν}) = (4/v²) T_{(µν)}(ψ_DIR)

and

(5.5b) ∂_α ∂^α ǫ_{[µν]} = (4/v²) T_{[µν]}(ψ_DIR).
In the lowest order, which is considered here only, the right hand sides of (5.5) possess in view of (5.2) vanishing divergences. Therefore the following constraints hold in consequence of the field equations:
(5.6) (a) ∂^ν(ǫ_{(µν)} − (1/2) ǫ η_{µν}) = 0,  (b) ∂^ν ǫ_{[µν]} = 0,
the first of which has the structure of the de Donder condition and the second of which guarantees the fulfillment of the condition (5.4a).
Evidently the source of the antisymmetric part ǫ_{[µν]} is the antisymmetric part of the fermionic energy-momentum tensor (5.1a), which can be written with the use of the Dirac-equation in the lowest order:

(5.7) T_{[µν]}(ψ_DIR) = (1/2)[ψ̄_DIR γ_{[µ} σ_{ν]}{}^λ D_λ ψ_DIR + \overline{(D_λ ψ_DIR)} σ^λ{}_{[µ} γ_{ν]} ψ_DIR],
where σ_{µν} = i γ_{[µ} γ_{ν]} is the spin-operator. Therefore, if we neglect in the classical macroscopic limit all spin influences, the solution of (5.5b) is:

(5.8) ǫ_{[µν]} ≡ 0,
whereby in the classical limit ǫ_{µν} = ǫ_{(µν)} is valid.
For discussion of the field equation (5.5a) for the symmetric part ǫ_{(µν)} we compare directly with Einstein's linearized field equations of gravity. Setting

(5.9) g_{µν} = η_{µν} + γ_{µν}

and choosing the de Donder gauge, cf. (5.6a),

(5.9a) ∂^ν(γ_{µν} − (1/2) γ η_{µν}) = 0  (γ = γ_{µν} η^{µν}),

it is valid:

(5.10) ∂_α ∂^α (1/2)(γ_{µν} − (1/2) γ η_{µν}) = −8πG T_{(µν)}.
The comparison with (5.5a) results immediately in

(5.11) ǫ_{(µν)} = α γ_{µν}

with the proportionality constant α. The constant α is adjusted in such a way that the equation of motion (5.2) goes over in the lowest order into the Newtonian gravitational law; for this

(5.12) ǫ_{(µν)} = −(1/4) γ_{µν} ⇒ α = −1/4, v² = 1/(πG)

must be valid in view of (5.11) with γ_{µν} = 2Φ diag(1, 1, 1, 1) with respect to (5.10) (Φ Newtonian gravitational potential). Herewith the effective non-euclidian metric takes the form with respect to (5.9):

(5.13) g_{µν} = η_{µν} − 4ǫ_{(µν)}.
Analyzing the affine geometric connections (5.4) we note that beside the Christoffel-symbols belonging to the metric (5.13),

(5.14) {^α_{µν}} = 2η^{αλ}(∂_λ ǫ_{(µν)} − ∂_ν ǫ_{(µλ)} − ∂_µ ǫ_{(λν)}),

there exists no effective torsion,

(5.15) Γ^µ_{[νρ]} ≡ 0,

but effective non-metricity:

(5.16) Q_{µνλ} = −D^{(Γ)}_µ g_{νλ} = 4∂_µ ǫ_{(νλ)} + ∂_λ ǫ_{(νµ)} + ∂_ν ǫ_{(λµ)} − (3/7)(∂_ν ǫ η_{µλ} + ∂_λ ǫ η_{µν}) + (1/7) ∂_µ ǫ η_{νλ}
assuming ǫ_{[µν]} ≡ 0 in all cases, cf. (5.8).
The practical consequences of the appearance of non-metricity, or even better of its avoidance, shall be investigated elsewhere.
Now we note the Dirac-equation for gravitational interaction according to the spin-gauge theory as well as the Yang-Mills equation for the very massive gauge fields. From (3.11) it follows immediately, after insertion of (4.2), (4.4), (4.5a) and (4.8), under neglect of the gauge-boson interaction (low energy limit):

(5.17) i γ^µ D_µ ψ_DIR − m(1 + (1/2)ǫ) ψ_DIR = 0

with

(5.17a) D_µ = ∂_µ + ǫ^λ{}_µ ∂_λ + (1/2)(∂_λ ǫ^λ{}_µ).
In its non-relativistic limit this equation goes over into the Schrödinger equation with the usual Newtonian gravitational potential. Iteration of (5.17), elimination of all spin influences and linearization in ǫ^λ{}_µ give:

(5.18) D_µ D^µ ψ_DIR + (m²c²/ħ²)(1 + ǫ) ψ_DIR + (i/2)(mc/ħ) γ^µ (∂_µ ǫ) ψ_DIR = 0.
With the ansatz

(5.19) ψ_DIR = e^{−i(mc²/ħ)t} ϕ(x^ν)
we obtain from (5.18), under neglect of all terms up to the order of c^{-1} (ǫ^λ{}_µ ∼ c^{-2}), the Schrödinger equation

(5.20) (ħ²/2m) ∆ϕ + (mc²/2)(2ǫ_{00} − ǫ)ϕ = (ħ/i) ∂_t ϕ.
Because of (5.12), ǫ_{00} = −(1/2)Φ/c² and ǫ = Φ/c² are valid, so that (5.20) goes over into

(5.21) (ħ²/2m) ∆ϕ − mΦϕ = (ħ/i) ∂_t ϕ,
i.e. the usual Schrödinger-equation with classical gravitational potential Φ.
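The numerical factors in the step from (5.20) to (5.21) can be checked directly: with γ_{µν} = (2Φ/c²) diag(1, 1, 1, 1) and ǫ_{(µν)} = −(1/4)γ_{µν} one gets ǫ_{00} = −Φ/2c² and ǫ = Φ/c², so the potential term (mc²/2)(2ǫ_{00} − ǫ) reduces to −mΦ. A small sketch of this arithmetic (not part of the original paper; the numerical values are illustrative units, not physical constants):

```python
import numpy as np

Phi, m, c = 9.81, 2.0, 2.0  # illustrative potential, mass, "speed of light"
eta = np.diag([1.0, -1.0, -1.0, -1.0])

gamma_mn = (2 * Phi / c**2) * np.eye(4)  # gamma_{mu nu} = 2 Phi/c^2 diag(1,1,1,1)
eps_mn = -0.25 * gamma_mn                # eq. (5.12): eps_(mu nu) = -(1/4) gamma_{mu nu}
# trace eps = eps_{mu nu} eta^{mu nu}
eps = sum(eta[k, k] * eps_mn[k, k] for k in range(4))

assert np.isclose(eps_mn[0, 0], -0.5 * Phi / c**2)
assert np.isclose(eps, Phi / c**2)
# potential term of (5.20) reduces to the Newtonian one of (5.21):
assert np.isclose((m * c**2 / 2) * (2 * eps_mn[0, 0] - eps), -m * Phi)
```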
We have shown this explicitly, because this quantum mechanical equation has until now been tested experimentally for the gravitational interaction only by the neutron-interference experiment of Collela, Overhauser and Werner (1975). It may be of interest, however, that the Schrödinger equation (5.21) does not guarantee that atomic clocks and lengths measure the effective non-Euclidean metric; for this the influence of the gravitational field on the electric Coulomb potential between electron and nucleus of the atom is necessary (see e.g. Papapetrou, 1956), which is not yet included in our theory.
Finally, for the inhomogeneous Yang-Mills equation we obtain from (3.12) and (3.12a) with the use of (4.2), (4.4), (4.6) and (4.8):

(5.22) ∂_ν F^{νµ}{}_a + g ǫ_a{}^{bc} F^{νµ}{}_b ω_{νc} + M²_{ab} ω^{µb} + 2ǫ_{(ρν)} M²_{ab}{}^{ρν} ω^{µb} = 4π (g/2) ψ̄_DIR {γ^λ, τ_a} ψ_DIR (δ^µ_λ + ǫ^µ{}_λ) + ig (v²/16) (∂^µ ǫ_{ρν}) tr([γ^ρ, τ_a] γ^ν),
where we have restricted ourselves also to linearized interaction terms with respect to gravitation. On the left hand side we recognize the mass term and the interaction of the massive bosons with the gravitational potentials; on the right hand side we find as sources gravitationally influenced Dirac gauge currents and a current associated with the gravitational field itself (remaining Higgs-field current). Because of this it may be justified to call the gauge-boson interaction a "strong" but very massive gravitational interaction; its coupling constant g remains however undetermined within our present theoretical approach.
VI. Final Remarks
In extension of a previous spin-gauge theory of gravity we have shown that Dirac's γ-matrices can be treated as a quantizable Higgs-field, in consequence of which Einstein's metrical theory of gravitation follows as the classical macroscopic limit of the Higgs-field interaction after symmetry breaking.
In spite of this success there are several problems for the future. First, the effective space-time geometrical structure is not only a Riemannian one, but also non-metricity is present, which should be suppressed in the next step since no observational hint of it exists. This may be possible because the Lagrange density (3.2) for the Higgs-field is not yet unique but can be supplemented in its kinetic term, e.g. by tr[(D_α γ̃_µ)(D^µ γ̃^α)]. In connection with this it may also be attainable to avoid the constraint (5.6a), which corresponds to the de Donder condition, and perhaps in this way Einstein's theory can be reached even exactly and not only in its linearized version as presented above.
Furthermore the theory, as it stands, contains only the gravitational interaction between the fermions. But the gravitational interaction with all bosons must be included within a complete and consistent theory of gravitation; otherwise, as remarked above (cf. Papapetrou, 1956), atomic clocks and lengths do not measure the non-Euclidean effective metric. This may require however a unification with the other interactions on the microscopic level of unitary phase gauge transformations within a high dimensional (e.g.
8-dimensional) spin-isospin space describing gravitational and (electro-) weak interaction separately.
In this respect one could have a bold idea: because in our theory the γ-matrices are treated as a Higgs-field, it could be possible to introduce the chiral asymmetry of the fermions with regard to the weak interaction, which is however present already in the SU(5)-GUT, by a special choice of the ground-state of the γ-Higgs-field in the course of the spontaneous symmetry breaking at approximately 10^19 GeV, connected with the separation of gravitational and electro-weak interaction.
By insertion of (3.18) into (3.17) one obtains from (3.16) the equation of motion for the fermions. After substitution of the second covariant derivatives of the γ-Higgs-field using the field equation (3.13) one finds, with the help of the Yang-Mills equations (3.12) and (3.14), the force law (3.20).

The ground-state value (3.21) can be renormalized to zero by changing the Higgs-potential (3.2a) correspondingly; otherwise (3.21) will give rise within the complete theory to a cosmological constant. Secondly, the γ-Higgs-field equation (3.13) contains as source for γ̃_µ the fermionic energy-momentum tensor T_µ{}^ν(ψ) in its spinor valued form; and in this form it appears also in the γ-Higgs-field force of (3.20). This fact confirms the supposition that the γ-Higgs-field equation results in Einstein's field equation of gravitation (for the fermions) after a space-time geometrical interpretation of the γ-Higgs-field forces in (3.20), defining the effective space-time geometrical connection coefficients.

Although one can recognize the gravitational structure already in equations (3.13) and (3.20), the space-time geometrical interpretation is only possible after symmetry breaking; the minimum of the energy-momentum tensor (3.18) in absence of matter and gauge fields is reached when the Higgs-field takes its ground-state value.

If ∂_ν ǫ^{[ρν]} = 0 is valid, see (5.6b), the forces of the excited γ-Higgs-field on the fermions in the Minkowski space-time are reinterpreted as the action of non-euclidean space-time geometrical connections.

With the proportionality constant α, the constraint (5.6a) is identical with the de Donder condition (5.9a), and Newton's gravitational constant G is correlated, as expected, with the Higgs-field ground-state value v. The constant α is adjusted in such a way that the equation of motion (5.2) goes over in the lowest order into the Newtonian gravitational law.
The explicit form of (2.2) is only used in (4.7).
The γ_µ are tensors with respect to the unitary transformations (2.1), but they are not elements of the adjoint representation. |µ denotes the partial derivative with respect to the coordinate x^µ.
A generalization of the theory to the full gauge group U(4) is in preparation (see also Drechsler, 1988).
If γ̃_µ is considered to be traceless, see (4.2) and (4.8), then also the traceless version of (3.13) is valid only.
Of course, global unitary transformations between the different standard representations and simultaneously of the generators are allowed.
It seems to us not suitable to identify this boson with the photon in view of the electroweak interaction. Lifting and lowering of indices is performed always with η^{µν} and η_{νλ} respectively.
We use ħ, c explicitly because of the ordering with respect to c^{-1}.
Bade, W., Jehle, H., Rev. Mod. Phys. 25, 714 (1953); see also
Laporte, O., Uhlenbeck, B., Phys. Rev. 37, 1380 (1931)
Babu Joseph, K., Sabir, M., Mod. Phys. Lett. A 3, 497 (1988)
Barut, A.O., McEwan, J., Phys. Lett. 135, 172 (1984);
Lett. Math. Phys. 11, 67 (1986)
Chisholm, J., Farwell, R., J. Phys. A22, 1059 (1989), and the literature cited therein
Collela, R., Overhauser, A., Werner, S., Phys. Rev. Lett. 34, 1472 (1975)
Dehnen, H., Ghaboussi, F., Nucl. Phys. B262, 144 (1985)
Dehnen, H., Ghaboussi, F., Phys. Rev. D33, 2205 (1986)
Dehnen, H., Frommert, H., Ghaboussi, F., Int. J. theor. Phys. 29, 537 (1990)
Dehnen, H., Frommert, H., Int. J. theor. Phys. 30, 985 (1991)
Drechsler, W., Z. Phys. C41, 197 (1988)
Ghaboussi, F., Dehnen, H., Israelit, M., Phys. Rev. D35, 1189 (1987)
Ghaboussi, F., Il Nuov. Cim. 104A, 1475 (1991)
Papapetrou, A., Ann. der Phys. 17, 214 (1956)
Schouten, J., Ricci-Calculus, 2nd edition, Springer (Berlin, 1954)
Stumpf, H., Z. Naturforsch. 43a, 345 (1988)
Topological insulator in the core of the superconducting vortex in graphene
10 Feb 2010
Igor F Herbut
Department of Physics
Simon Fraser University
V5A 1S6BurnabyBritish ColumbiaCanada
The core of the vortex in a general superconducting order parameter in graphene is argued to be ordered, with the possible local order parameters forming the algebra U (1) × Cl(3). A sufficiently strong Zeeman coupling of the magnetic field of the vortex to the electron spin breaks the degeneracy in the core in favor of the anomalous quantum Hall state. I consider a variety of superconducting condensates on the honeycomb lattice and demonstrate the surprising universality of this result. A way to experimentally determine the outcome of the possible competition between different types of orders in the core is proposed.
Besides the usual metallic state [1], Dirac quasiparticles in graphene may, at least in principle, form a multitude of insulating phases [2]. The insulators should be understood as different types of dynamically or externally generated masses of Dirac fermions, which preserve the quasi-relativistic invariance of the low-energy theory, but break some of the space-time symmetries of the Hamiltonian [3]. Yet another type of ordered phase of electrons on honeycomb lattice is the superconductor, which breaks the U(1) symmetry associated with the conservation of the particle number [3][4][5][6]. The superconducting state should be inducible in graphene by the proximity effect for example [7]. At the level of the continuum Dirac Hamiltonian, superconducting states in graphene may be distinguished by their symmetry with respect to the inversion of the two inequivalent Dirac points, as the s and p waves, and with respect to the translational invariance, as uniform and not. The variety of such superconducting states notwithstanding, I show here that the core of the superconducting vortex in undoped graphene is always ordered, and that, at least in the simplest model, always in the same way: the core of the vortex is the Haldane-Kane-Melle (HKM) topological insulator [8,9].
The above result is obtained within two complementary Dirac representations of the Bogoliubov-de Gennes (BdG) quasiparticle Hamiltonian, in which it arises in related, but distinct ways. The problem of the internal structure of the superconducting vortex in graphene may be mapped onto the general Dirac Hamiltonian in presence of two-component mass-term with a twist. It is well known that the single zero-energy state in this situation would imply a quantum number fractionalization [10]. In graphene, however, the spin-1/2 of the electrons dictates two copies of Dirac fermions, and the concomitant doubling of the number of zero-energy states restores the usual quantization rule. The same zero-energy states, nevertheless, are responsible for another nontrivial property of the vortex core: it always harbors a finite local order parameter (OP) of some kind. An early example of this effect was provided in ref. [11], where the vortex configuration of the in-plane Néel OP on honeycomb lattice was shown to induce an out-of-plane component of the same OP. More recently, the superconducting vortex was argued to exhibit charge-density waves (CDW) and bond-density waves (BDW) in the core [12]. Here it will be shown that in general the vortex of unit vorticity in the mass of the two-copy four-component Dirac Hamiltonian allows four order parameters, which close U (1) × Cl (3), where Cl(3) stands for the three-dimensional Clifford algebra. The members of the algebra depend on the type of the underlying order; nevertheless, in a superconductor of arbitrary symmetry, if the magnetic field in the vortex core is sufficiently strong the HKM anomalous quantum Hall state is universally the ground state. The effect is due to the Zeeman coupling of the magnetic field to the electron spin, which, quite unexpectedly, always selects the above state out of a wide variety of competing possibilities.
The HKM insulator is interesting in its own right, due to its quantized spin Hall effect, and the accompanying structure of the edge states [9]. Such a topologically nontrivial ground state is favored by the spin-orbit interactions in graphene, which however are well known to be way too weak to lead to an observable effect. It was recently proposed that this ground state may also result from the electron-electron Coulomb repulsion in the presence of a bulge in the graphene sheet [13]. The present considerations suggest that placing graphene on top of a type-II superconductor in a mixed state may provide an alternative, albeit local, realization of this elusive state of matter.
The announced result will be first derived within the rotationally invariant representation of the pairing Hamiltonian. Graphene exhibits gapless excitations near two inequivalent Dirac points at ± K, with K = (1, 1/ √ 3)(2π/a √ 3), for example, where a is the lattice spacing [2]. One may form an eight-component Dirac-Nambu fermion as
Ψ † = (Ψ † + , Ψ † − ), where Ψ † σ ( k, ω) = (u † σ ( k, ω), v † σ ( k, ω), σu −σ (− k, −ω), σv −σ (− k, −ω)),(1)
where σ = ± labels the projection of the electron spin along the z-axis, and k = K + p, with | p| ≪ K. u σ and v σ are the Grassmann variables corresponding to two triangular sublattices of the honeycomb lattice. The imaginary-time Lagrangian for the excitations near the Dirac points in this representation becomes
L = Ψ † ( x, τ )(∂ τ + H 0 )Ψ( x, τ ),(2)
where the single-particle Hamiltonian is H 0 = I 2 ⊗ iγ 0 γ i p i , with γ 0 = σ 3 ⊗ σ 3 , γ 1 = I 2 ⊗ σ 2 , and γ 2 = I 2 ⊗ σ 1 , and i = 1, 2. We may then choose γ 3 = σ 1 ⊗ σ 3 and
γ 5 = σ 2 ⊗ σ 3 . {I 2 , σ} is the Pauli basis of two-component matrices.
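As a quick consistency check of this representation (our own verification sketch, not part of the paper), the gamma matrices can be built from Kronecker products with NumPy and their Clifford algebra tested numerically. Note that with the matrices exactly as listed, iγ 3 γ 5 comes out as −σ 3 ⊗ I 2 , so the identification γ 35 = σ 3 ⊗ I 2 holds up to an ordering-dependent sign.

```python
import numpy as np

# Pauli basis {I2, sigma} of two-component matrices, as in the text.
I2 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

# Four-dimensional gamma matrices exactly as listed above.
g0, g1, g2 = kron(s3, s3), kron(I2, s2), kron(I2, s1)
g3, g5 = kron(s1, s3), kron(s2, s3)
gammas = [g0, g1, g2, g3, g5]

# All five mutually anticommute and square to the identity.
for a, A in enumerate(gammas):
    for b, B in enumerate(gammas):
        target = 2 * np.eye(4) if a == b else np.zeros((4, 4))
        assert np.allclose(A @ B + B @ A, target)

# gamma_35 is proportional to sigma_3 (x) I2; the overall sign depends on
# the ordering convention (with the matrices as above, i*g3*g5 = -s3 (x) I2).
g35 = 1j * (g3 @ g5)
assert np.allclose(g35, -kron(s3, I2))

# The mass matrices g0, i g0 g3, i g0 g5 anticommute with the kinetic
# terms i g0 g1 and i g0 g2 of H_0, so adding any of them gaps the spectrum.
kinetic = [1j * g0 @ g1, 1j * g0 @ g2]
for M in (g0, 1j * g0 @ g3, 1j * g0 @ g5):
    for K in kinetic:
        assert np.allclose(M @ K + K @ M, np.zeros((4, 4)))
print("gamma-matrix algebra verified")
```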
The Lagrangian is invariant under the global U (4) symmetry generated by the 16 matrices
{I 2 , σ} ⊗ {I 4 , γ 3 , γ 5 , γ 35 }, where γ 35 = iγ 3 γ 5 = σ 3 ⊗ I 2 ,
and I 4 is the four-dimensional unit matrix. The SU (2) subgroup generated by the S = σ ⊗ I 4 is the group of rotations of the electron spin, and is, of course, exact. This is due to the factor "σ" in the right half of the Dirac-Nambu fermion in Eq. (1), which ensures that the rotation may be accomplished by the matrix multiplication from the left. The above representation is in this sense manifestly rotationally invariant. It will also prove helpful to recognize N = I 2 ⊗ γ 35 as the number operator.
There are also 16 linearly independent Hermitian matrices that anticommute with H 0 :
{I 2 , σ} ⊗ {γ 0 , iγ 0 γ 3 , iγ 0 γ 5 , iγ 1 γ 2 }.
The addition of any of these matrices to the Hamiltonian H 0 would gap the Dirac spectrum. The matrices that correspond to superconducting orders are those that do not commute with the particle number. It is easy to see that Ψ † (I 2 ⊗ iγ 0 γ 3 )Ψ = Re∆ s , and Ψ † (I 2 ⊗ iγ 0 γ 5 )Ψ = Im∆ s , where ∆ s represents the spatially uniform, complex s-wave superconducting OP. Consider then the BdG pairing Hamiltonian in the presence of the vortex in the underlying s-wave superconducting ground state:
H BdG = H 0 + |∆ s (r)|I 2 ⊗ [iγ 0 (γ 3 sin θ + γ 5 cos θ)],(3)
where (r, θ) are the polar coordinates in the graphene plane. |∆ s (r → ∞)| = const., and otherwise arbitrary. The magnetic field in the vortex will be included shortly. An index theorem [14] guarantees that the spectrum of the block-diagonal Hamiltonian H BdG contains two states with zero energy: Ψ † 0,1 = (ψ 0 , 0) and Ψ † 0,2 = (0, ψ 0 ), where ψ 0 is the rotationally invariant, four-component bound state [10]. The Hilbert space H 0 spanned by these two zero-energy states is invariant under all operators that commute or anticommute with H BdG [13], with four falling into the latter category: {I 2 , σ} ⊗ γ 0 . These are at the center of our study.
On the other hand, the expectation value of any traceless single-particle operator M that anticommutes with the Dirac Hamiltonian is easily seen to derive entirely from the zero-energy states [11]. At T = 0:
Ψ † M Ψ = (1/2) ( Σ i,occ − Σ i,unocc ) ψ † 0,i ( x) M ψ 0,i ( x), (4)
where {ψ 0,i ( x)} is a basis in H 0 . Since H 0 is two-dimensional, the four anticommuting operators from above in H 0 reduce to the familiar Pauli matrices {I 2 , σ}.
In full analogy to the spin-1/2 problem [15], any two orthogonal states in H 0 are then the +1 and −1 eigenstates of (n̂ · σ) ⊗ γ 0 for some unit vector n̂, and the +1 eigenstates of I 2 ⊗ γ 0 . Equation (4) then implies that: a) when one state is occupied and the second state empty, Ψ † (n̂ · σ ⊗ γ 0 )Ψ = ψ † 0 ψ 0 , and Ψ † (I 2 ⊗ γ 0 )Ψ = 0, and b) when both states are occupied or both empty Ψ † (I 2 ⊗ γ 0 )Ψ = ±ψ † 0 ψ 0 , with Ψ † (n̂ · σ ⊗ γ 0 )Ψ = 0 for all n̂.
It is straightforward to rewrite these fermion bilinears in terms of the original electrons. For example
Ψ † (I 2 ⊗ γ 0 )Ψ = u † σ ( x, τ )u σ ( x, τ ) − v † σ ( x, τ )v σ ( x, τ ). (5)
The corresponding average represents therefore the CDW. Similarly,
Ψ † ( k, ω)(σ 3 ⊗ γ 0 )Ψ( k, ω) = σ[u † σ ( k, ω)u σ ( k, ω) − u † σ (− k, −ω)u σ (− k, −ω)] − [u → v], (6)
and may be recognized as the z-component of the vector OP for the HKM topological insulator [8,9]. The ground state with a finite expectation value of this operator would break the rotational invariance, the sublattice exchange symmetry, and, most importantly, the time reversal for each spin projection separately. Let us take the effect of the magnetic field of the vortex into account next. Assume the field to be along the z-direction, orthogonal to the graphene plane. First, the sole orbital effect of the localized magnetic field is the change of the form of the zero-energy states [16]. This follows from the observation that in the Dirac-Nambu representation the magnetic field enters the Hamiltonian in Eq. (3) by modifying only the H 0 term as
H 0 → H 0 [A] = I 2 ⊗ iγ 0 γ i (p i − A i γ 35 ) = e −χ( x)I 2 ⊗γ 0 H 0 e −χ( x)I 2 ⊗γ 0 , (7)
where the magnetic field is B = ǫ ij ∂ i A j = ∂ 2 χ. The Zeeman term reads
H Z = gB( x)(σ 3 ⊗ I 4 ), (8)
and g ≈ 2 for the electrons in graphene. One may consider H Z as a weak perturbation to the spectrum of H BdG [17]. The commutation relations with the four operators that anticommute with H BdG imply that, within H 0 , H Z is proportional to σ 3 ⊗ γ 0 , and therefore it splits the +1 and −1 eigenstates of this particular OP. Equation (4) then implies that
Ψ † (σ 3 ⊗ γ 0 )Ψ ∝ | exp[χ( x) − ∫ 0 r ∆(r ′ ) dr ′ ] | 2 , (9)
whereas the averages of the other three OPs vanish. As the total flux of the magnetic field is hc/2e, we have χ(x) ≈ (1/2) ln | x| at distances beyond the magnetic field's penetration depth, and the OP is exponentially localized within the coherence length ξ = 1/∆ s (r = ∞). Interestingly, the same conclusion follows even for the superconductor with a different symmetry. Consider the BdG Hamiltonian for a vortex in the p-wave state:
H BdG = H 0 + |∆ p (r)|σ 3 ⊗ [iγ 0 (γ 3 sin θ + γ 5 cos θ)],(10)
which breaks the spin-rotational symmetry. This superconducting OP is odd under the exchange of the Dirac points, and it is energetically preferable for strong next-nearest neighbor attraction between electrons [5]. The U (1) × Cl(3) algebra of OPs in the core is now different: U (1) = {I 2 ⊗ γ 0 } and Cl(3) = {σ 3 ⊗ γ 0 , σ 1 ⊗ iγ 1 γ 2 , σ 2 ⊗ iγ 1 γ 2 }. The last two matrices correspond to the x and y components of the Néel OP. The orbital effect of the magnetic field in resolving the degeneracy of the zero-energy states is still null. H Z , on the other hand, within H 0 is again proportional to σ 3 ⊗ γ 0 , as dictated by the commutation relation of the H Z with the above set of OPs. This implies that the same result in Eq. (9) follows for the p-wave vortex as well.
Let us now construct a different representation of the problem in which the manifest rotational invariance of the Lagrangian in Eq. (2) is sacrificed [18]. Doing so will enable us to include the non-uniform insulating and superconducting states, and make contact with the recent work of Ghaemi et al. [12]. Consider a different eight-component Dirac fermion Φ † = (Φ † + , (T (σ 1 ⊗ I 2 )Φ + ) † ), where the Dirac field Φ + is now rather standard [2]:
Φ † + ( q, ω) = (u † + ( K + q, ω), v † + ( K + q, ω), u † + (− K + q, ω), v † + (− K + q, ω)).(11)
T is the time-reversal operator, consisting of spin, momentum, and frequency reversal, and particle-hole operator exchange. The Lagrangian can still be written in the form in Eq. (2), but with the Hamiltonian H 0 replaced with H̃ 0 = (I 2 ⊗ iγ 0 γ 1 )p 1 + (σ 3 ⊗ iγ 0 γ 2 )p 2 , and with γ 0 = I 2 ⊗ σ 3 , γ 1 = σ 3 ⊗ σ 2 , γ 2 = I 2 ⊗ σ 1 , γ 3 = σ 1 ⊗ σ 2 , and γ 5 = σ 2 ⊗ σ 2 [2]. Once the number operator in this representation is recognized as Ñ = σ 3 ⊗ I 4 , the set of sixteen matrices that anticommute with H̃ 0 may be divided into those corresponding to the superconducting ({σ 1 , σ 2 } ⊗ {γ 1 , iγ 0 γ 2 , iγ 1 γ 5 , iγ 1 γ 3 }) and the insulating OPs ({σ 3 , I 2 } ⊗ {γ 0 , iγ 0 γ 3 , iγ 0 γ 5 , iγ 1 γ 2 }). In revealing the orders corresponding to these fermion bilinears it is useful to discern P = σ 3 ⊗ γ 35 as the generator of translations, and I s = I 2 ⊗ iγ 1 γ 5 as the K ↔ − K inversion operator [2]. This immediately implies that the averages Φ † M Φ represent: a) uniform s wave, for M = {σ 1 , σ 2 } ⊗ iγ 1 γ 5 , b) nonuniform s wave, for M = {σ 1 , σ 2 } ⊗ iγ 0 γ 2 , c) uniform p wave, for M = {σ 1 , σ 2 } ⊗ iγ 1 γ 3 , and d) nonuniform p wave superconductor, for M = {σ 1 , σ 2 } ⊗ γ 1 . Explicit rewriting of the above bilinears shows that the nonuniform superconducting states have the 2 K periodicity of the OP, and thus represent the LOFF phases [19] with the Kekule texture. Similarly, the insulating states are: a) CDW, for M = σ 3 ⊗ γ 0 , b) spin-scalar Kekule BDW [20], for M = {σ 3 ⊗ iγ 0 γ 3 , I 2 ⊗ iγ 0 γ 5 }, c) z-component of the HKM, for M = I 2 ⊗ iγ 1 γ 2 . Exchanging σ 3 and I 2 in the above further yields: a) Néel, b) spin-vector BDW, and c) spin-scalar Haldane's OP, respectively.
Let us reconsider the Hamiltonian with the vortex in the s-wave superconducting OP. In this representation it takes the form of
H BdG = H̃ 0 + |∆ s (r)|(σ 1 sin θ + σ 2 cos θ) ⊗ iγ 1 γ 5 . (12)
The U (1) × Cl(3) OP algebra consists now of Cl(3) = {σ 3 ⊗ γ 0 , σ 3 ⊗ iγ 0 γ 3 , I 2 ⊗ iγ 0 γ 5 }, and U (1) = {I 2 ⊗ iγ 1 γ 2 }. The CDW and BDW belong to the Cl(3) part, in agreement with the observation in ref. [12]. In this representation the Zeeman term becomes proportional to the unit matrix I 2 ⊗ I 4 . The Zeeman effect of the magnetic field is thus to shift both zero-energy states equally, so that both are above, or below, the Fermi level [21]. From Eq.
(4) we see that it is the OP belonging to the U (1) factor that develops an expectation value in this case, which is precisely the HKM state again. Within the Φ-representation it is evident that the above result holds irrespective of the symmetry of the superconducting OP. For each one of the four possible choices the U (1) part of the U (1) × Cl(3) algebra of the core states will always be the HKM, since the matrix iγ 1 γ 2 anticommutes with all the superconducting states, while at the same time it commutes with all the insulating states, which are the exclusive members of the Cl(3). Since the Zeeman term is proportional to the unit operator, only the universal U (1) part of the algebra develops the expectation value for any symmetry of the OP.
There is always the U (1) × Cl(3) algebra of possible OPs in the core, irrespective of the type of order supporting the vortex. For example, in the Ψ-representation a vortex in the Néel OP in the x-y plane, ∼ (σ 1 cos θ + σ 2 sin θ) ⊗ iγ 1 γ 2 , in the core may exhibit the p-wave superconductor, the CDW, and the z-component of the same Néel OP (σ 3 ⊗ iγ 1 γ 2 ) [11]. The most general Dirac Hamiltonian with a vortex can be written as in Eq. (13); since the matrices {M i } appearing there form a representation of the Clifford algebra Cl(4), by a unitary transformation it can be reduced to a block-diagonal form M i = α i ⊕ β i , where α i and β i are four-dimensional [22]. Since {α i } and {β i } then provide two (in general, different) Hermitian four-dimensional representations of Cl(4), they are equivalent [22]. Any H gen may therefore be transformed into the block-diagonal form of H BdG as given in Eq. (3), with the same four-dimensional matrix operator in both blocks. The only four operators that anticommute with such a Hamiltonian are obviously I 2 ⊗ γ 0 and σ ⊗ γ 0 , which form the algebra U (1) × Cl(3).
H gen = M 1p1 + M 2p2 + o 1 M 3 + o 2 M 4 ,(13)
If the superconducting OP in graphene is induced by a layer of a type-II superconductor laid on top, the Zeeman energy would roughly be ǫ z ≈ H × kelvin, if the magnetic field H is expressed in tesla. The leading effect of the graphene lattice, for example, comes from the ∼ 2 K Fourier component of the vortex configuration, which in the Φ-representation is proportional to the matrix σ 1 ⊗ I 4 . It is straightforward to check using our formalism that this perturbation favors the scalar BDW in the core, in agreement with the result of a direct lattice calculation [12]. The magnitude of this splitting should crudely be ǫ dw ∼ |∆|(a/ξ) ≈ (10 2 a/ξ) 2 kelvin, where a ≈ 1 Å is the lattice spacing. In Nb 3 Ge with T c ≈ 23 K, for example, ǫ dw ≈ 1 K. The upper critical field, however, is ∼ 40 tesla, so at magnetic fields of a few tesla the order in the core should predominantly be the HKM. It is also possible to increase the HKM component of the OP by adding a component of the magnetic field parallel to the superconducting layer, so as not to suppress the condensate. A distinct sign of the HKM OP in the core would then be the increase of the gap in the local density of states with the total magnetic field at a finite temperature. This follows from the generalization of Eq. (4) to finite temperatures:
OP = (ψ † 0 ψ 0 /2) [ tanh((ǫ z + ǫ dw )/2k B T ) + s tanh((ǫ z − ǫ dw )/2k B T ) ], (14)
with s = 1 for the HKM, and s = −1 for a linear combination of the CDW and BDW. Assuming ǫ z and ǫ dw differ by a factor of 5 or more, one OP is larger than the other by an order of magnitude over a wide range of temperatures. If the dominant order is any density wave, i.e. for ǫ dw ≫ ǫ z , the OP would in contrast decrease with the magnetic field.
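This temperature dependence is easy to see numerically. The short script below is our own illustration of the bracket in Eq. (14): the energy scales are made up (ǫ z = 5ǫ dw , following the factor-of-5 assumption in the text), and the overall prefactor ψ † 0 ψ 0 /2 is set to one.

```python
import numpy as np

def op_amplitude(s, eps_z, eps_dw, kT):
    """Bracket of Eq. (14): tanh((ez+edw)/2kT) + s*tanh((ez-edw)/2kT)."""
    return (np.tanh((eps_z + eps_dw) / (2.0 * kT))
            + s * np.tanh((eps_z - eps_dw) / (2.0 * kT)))

eps_dw = 1.0           # density-wave splitting (arbitrary units)
eps_z = 5.0 * eps_dw   # Zeeman splitting, assumed 5x larger

for kT in (0.5, 1.0, 2.0, 4.0):
    hkm = op_amplitude(+1, eps_z, eps_dw, kT)  # s = +1: HKM order
    dw = op_amplitude(-1, eps_z, eps_dw, kT)   # s = -1: CDW/BDW combination
    print(f"kT={kT}: HKM={hkm:.3f}, DW={dw:.3f}, ratio={hkm / dw:.1f}")
```

At low temperatures the s = +1 amplitude saturates near 2 while the s = −1 amplitude is exponentially small, consistent with the order-of-magnitude separation claimed above.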
It may also be useful to note that the competing orders behave differently under the changes of the signs of vorticity and/or magnetic field. For example, if the sign of both the vorticity and the magnetic field is reversed the BDW changes into its complementary pattern [12] whereas the HKM state remains invariant. A probe sensitive to the sign of the gap would thus help distinguish between the different core states. This work was supported by NSERC of Canada. The author wishes to thank V. Juričić, B. Roy, B. Seradjeh, and P. Nikolić for discussions, and to the Aspen Center for Physics where a part of this work was carried out.
(In our units ℏ = e = c = 1.) Since the matrix I 2 ⊗ γ 0 anticommutes with the mass term in Eq. (3), and acts as the unit operator within H 0 , the zero-energy states with and without the magnetic field differ only in the factor exp[χ( x)]. More important turns out to be the Zeeman effect on the electron spin. With the Zeeman term the Hamiltonian becomes H BdG + H Z , with H Z as given in Eq. (8).
Here (o 1 , o 2 ) = |o(r)|(cos θ, sin θ) is the OP in Eq. (13), and {M i } are four arbitrary eight-dimensional Hermitian matrices satisfying [M i , M j ] + = 2δ ij , i.e. they form a representation of the Clifford algebra Cl(4).
[1] A. K. Geim and K. S. Novoselov, Nature Mater. 6, 183 (2007).
[2] I. F. Herbut, Phys. Rev. Lett. 97, 146401 (2006); I. F. Herbut, V. Juričić, and B. Roy, Phys. Rev. B 79, 085116 (2009), and references therein.
[3] D. V. Khveshchenko, J. of Phys. Cond. Matt. 21, 075303 (2009).
[4] E. Zhao and A. Paramekanti, Phys. Rev. Lett. 97, 230404 (2006); A. M. Black-Schaffer and S. Doniach, Phys. Rev. B 75, 134512 (2007); B. Uchoa and A. H. Castro Neto, Phys. Rev. Lett. 98, 146801 (2007).
[5] C. Honerkamp, Phys. Rev. Lett. 100, 146404 (2008).
[6] D. L. Bergman and K. Le Hur, Phys. Rev. B 79, 184520 (2009).
[7] C. W. J. Beenakker, Phys. Rev. Lett. 97, 067007 (2006); H. B. Herschee et al., Nature 446, 56 (2007); A. Shailos et al., Eur. Phys. Lett. 79, 57008 (2007).
[8] F. D. M. Haldane, Phys. Rev. Lett. 61, 2015 (1988).
[9] C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 226801 (2005); S. Raghu et al., ibid. 100, 156401 (2008).
[10] R. Jackiw and P. Rossi, Nucl. Phys. B 190, 681 (1981).
[11] I. F. Herbut, Phys. Rev. Lett. 99, 206404 (2007).
[12] P. Ghaemi, S. Ryu, and D.-H. Lee, arXiv:0903.1662.
[13] I. F. Herbut, Phys. Rev. B 78, 205433 (2008).
[14] E. J. Weinberg, Phys. Rev. D 24, 2669 (1981).
[15] See, for example, L. E. Ballentine, Quantum Mechanics (Prentice Hall, Englewood Cliffs, 1990), Ch. 7.4.
[16] R. Jackiw and S.-Y. Pi, Phys. Rev. Lett. 98, 266402 (2007).
[17] P. Ghaemi and F. Wilczek, preprint, arXiv:0709.2626.
[18] See I. F. Herbut, Phys. Rev. B 66, 094504 (2002), for related issues in d-wave superconductors.
[19] A. I. Larkin and Y. N. Ovchinnikov, Zh. Eksp. Teor. Fiz. 47, 1136 (1964) [Sov. Phys. JETP 20, 762 (1965)]; P. Fulde and R. A. Ferrell, Phys. Rev. 135, A550 (1964).
[20] C.-Y. Hou, C. Chamon, and C. Mudry, Phys. Rev. Lett. 98, 186809 (2007).
[21] Since the zero-energy states are Majorana fermions [17] such a shift does not change the particle number.
[22] S. Schweber, An Introduction to Relativistic Quantum Field Theory (Harper and Row, New York, 1961).
Crow instability in trapped Bose-Einstein condensates
18 Jul 2011
Tapio P Simula
School of Physics
Monash University
3800VictoriaAustralia
We show theoretically that elongated vortex-antivortex dipoles can be created controllably in trapped Bose-Einstein condensates, using known experimental techniques. Vortex dipoles of sufficient length are unstable and cascade into slow vortex rings which ultimately decay via sound emission. This instability of antiparallel vortex line elements, which self-generates Kelvin waves on vortex loops and in trapped atomic gases, may play a role in bridging the Kelvin-wave and Kolmogorov-Richardson cascades of quantum turbulence. Contrails in the sky left behind by aircraft can reveal the pair of counter-circulating wing-tip vortices generated in the wake of the plane [1]. A large proportion of the energy required to keep an aircraft airborne is consumed in the continuous generation of such wing-tip vortices. These powerful eddies create a bottleneck at airports due to the hazard they impose on aircraft flying in their vicinity. The Crow instability mechanism seeded by atmospheric turbulence is considered to be a major agent responsible for breaking up these coherent and long-lived wing-tip vortex pairs [2,3]. In water vortex dipoles, an additional short-wave instability has been observed to grow due to a three-dimensional elliptic instability mechanism in an antisymmetric mode together with the long-wave Crow instability [4]. A quantum analogue of the Crow instability [5,6] may occur when bodies are dragged through superfluids creating vortices whose circulation is quantized [7].
arXiv:1107.3379v1 [cond-mat.quant-gas]
Bose-Einstein condensates (BECs) are versatile quantum liquids exhibiting rich superfluid dynamics [8]. Quantized vorticity and persistent currents are the hallmark of superfluidity in BECs [10-15]. The analog of the Kadomtsev-Petviashvili [16] or 'snake' instability of solitons has been observed to lead to generation of vortices and vortex rings in BECs [17-22]. In the limit of antiparallel vortices it further transforms into the Crow instability [5]. In pancake condensates vortex-antivortex pairs or vortex dipoles can be nucleated via active stimulation [23]. For suitable parameter regimes such vortex shedding from a moving obstacle [25] is theoretically predicted to exhibit a Bénard-von Kármán vortex street structure [24]. Vortex dipoles can also form spontaneously via the Berezinskii-Kosterlitz-Thouless mechanism in finite-temperature systems in which entropy may cover the energy cost of pair creation [26-30]. Furthermore, a quench through a BEC phase transition facilitates stochastic formation of vortex dipoles via the Kibble-Zurek scenario as observed in recent experiments [31,32].
Solitary vortex dipoles have been created and observed experimentally in oblate condensates. However, the three-dimensional Crow instability is manifestly absent in sufficiently flat condensates where axial degrees of freedom play no role in vortex dynamics. Since spontaneous formation of long three-dimensional vortices is suppressed, an active pair-creation method is needed to isolate and observe three-dimensional instabilities of interacting vortex lines. Here we propose such a method to generate long vortex dipoles in trapped quantum degenerate gases and study their superfluid decay dynamics.
Bose-Einstein condensates may be represented by a macroscopic wavefunction φ(r, t) whose evolution is governed by the Gross-Pitaevskii equation
iℏ ∂ t φ(r, t) = [ −(ℏ 2 /2m)∇ 2 + V ext (r) + (4πℏ 2 a/m)|φ(r, t)| 2 ] φ(r, t), (1)
where the constant a is the s-wave scattering length of particles with mass m, and V ext (r) is an external potential used to confine and manipulate the atoms. We first consider an initial state wave function φ(r, 0) = (x + d/2 + ε cos(kz) + iy)(x − d/2 − ε cos(kz) − iy) which uses Cartesian coordinates to represent two counter-circulating vortex lines separated by distance d. The vortices are sinusoidally perturbed with an amplitude ε and a wave vector k in a symmetric mode about their equilibrium positions, see Fig. 1(a). This vortex dipole generates a mutual induction field which causes the pair to travel with a speed inversely proportional to d in the positive y direction. In addition to the translational motion, each perturbed vortex spins about its own axis. This self-induced spinning motion is due to the curvature of the vortex and comes in the form of Kelvin waves of growing amplitude [33,34]. In the Crow instability mode the vortices become phase-locked at a certain angle when the total induced velocity field stops the vortices from spinning. Eventually parts of the vortices overlap, forming a chain of vortex reconnections, shown in Fig. 1(c), which break the vortex dipole into a sequence of vortex loops, see Fig. 1(d). The dynamics illustrated in Fig. 1 is obtained by considering a homogeneous (V ext = 0) non-interacting (a = 0) system, formally integrating Eq. (1) and truncating the power series expansion of the resulting matrix exponential to second order, yielding an analytically soluble model. Vortices are visualized in Fig. 1 by plotting isosurfaces of the function |φ(r, t)| 2 .
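The perturbed-dipole initial state can be checked directly. The few lines of NumPy below are our own construction (with d, ε and k free parameters, here set to the Fig. 1 values ε = d/200 and k = 5/d); they confirm that φ(r, 0) vanishes exactly on the two sinusoidally displaced core lines x = ±(d/2 + ε cos kz), y = 0, and that the phase winds by ±2π around each core.

```python
import numpy as np

def phi0(x, y, z, d, eps, k):
    """Perturbed vortex-dipole initial state phi(r, 0) from the text."""
    w1 = x + d / 2 + eps * np.cos(k * z) + 1j * y
    w2 = x - d / 2 - eps * np.cos(k * z) - 1j * y
    return w1 * w2

d, eps, k = 1.0, 1.0 / 200, 5.0  # eps = d/200, k = 5/d as in Fig. 1

# |phi| vanishes on both displaced core lines and is finite in between.
z = np.linspace(0.0, 2 * np.pi / k, 7)
x_core = d / 2 + eps * np.cos(k * z)
assert np.allclose(phi0(+x_core, 0.0, z, d, eps, k), 0.0)
assert np.allclose(phi0(-x_core, 0.0, z, d, eps, k), 0.0)
assert abs(phi0(0.0, 0.0, 0.0, d, eps, k)) > 0.0

# Phase winds by +2pi around one core and -2pi around the other (at z = 0).
theta = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
r = 0.01

def winding(xc):
    vals = phi0(xc + r * np.cos(theta), r * np.sin(theta), 0.0, d, eps, k)
    dphase = np.angle(vals / np.roll(vals, 1))  # small phase increments
    return dphase.sum() / (2 * np.pi)

assert round(winding(-d / 2 - eps)) == 1
assert round(winding(+d / 2 + eps)) == -1
print("vortex cores and windings verified")
```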
Next we model the full nonlinear dynamics of an elongated vortex dipole embedded in an inhomogeneous background by numerically solving the full Gross-Pitaevskii equation. We consider N = ∫ |φ(r)| 2 dr = 8 × 10 5 Bose-Einstein condensed 87 Rb atoms in an anisotropic harmonic potential V ext (r) = mω 2 (x 2 + y 2 + λ z 2 z 2 )/2, where λ z = ω z /ω = 0.2, and ω = 2π × 100 Hz. The condensate healing length ξ 0 = 0.15 µm characterizes the size of the vortex core, while for our parameters the harmonic oscillator length a 0 = √(ℏ/mω) = 6.6ξ 0 . We prepare the initial state by phase imprinting two straight vortex lines of opposite circulation in the condensate wavefunction. The cores of the vortices are placed at locations (x 0 = ±d/2, y 0 = −2a 0 ) and are oriented along the z axis. The ends of the dipole are deliberately short-circuited at z 0 = ±20a 0 , topologically forming a highly anisotropic vortex ring, shown in Fig. 2(a). Note that vortices described by a single analytical complex function cannot have free ends and will always form closed loops, although in practice quantum and/or thermal fluctuations may provide an effective boundary for the condensate where vortices may terminate.
The mother vortex ring is propelled forward in the y direction by the self-induced superflow. The inhomogeneous condensate density causes the vortex line elements in the lower density regions to travel faster than those in higher density regions. Due to this differential velocity the ends of the dipole bend, self-generating Kelvin waves which then propagate along the vortex lines symmetrically from both ends toward the center of the condensate, see Fig. 2(a) and Fig. 2(b). These Kelvin waves catalyse the growth of instabilities on the vortex dipole. A train of daughter vortex loops is created when the unstable modes have grown in amplitude to close the gap separating the vortices, igniting reconnections at the nodes of the most unstable modes, shown in Fig. 2(b) and Fig. 2(c). This violent process leaves the daughter vortex loops ringing (carrying Kelvin wave excitations), transforming them into slow vortex rings [35,36]. These daughter vortex loops subsequently undergo several recombinations and reconnections and ultimately convert their energy into sound waves. If the size of the generated vortex loops is large enough, they will penetrate the condensate surface. This can result in rotation of the axis of vorticity by 90 degrees, from initially being oriented along the z axis to lying along the x axis, cf. Fig. 2(a) and Fig. 2(d). Table I summarizes the dependence of the instability on the separation d(ξ 0 ) between the vortices. The wavelength of the fastest growing mode λ τ (ξ 0 ) and the angles β τ and α τ (cf. Fig. 1(d) and Fig. 1(e)), which in our case are approximately equal, are measured at the time τ when the vortex dipole first crosses the line y = z = 0. Finally, we propose an experiment to probe the physics described above. We nucleate vortex-antivortex dipoles in the wake of a moving repulsive laser beam which is tuned off-resonance from a suitable atomic transition. This creates a repulsive potential
V (r, t) = V 0 (t) (σ 0 2 /σ(z) 2 ) e −2r(t) 2 /σ(z) 2 , (2)
for the condensate atoms. Here we use r 2 = x 2 + (y − y 0 + vt) 2 , σ(z) = σ 0 √(1 + (z/z R ) 2 ), and the beam is focused to σ 0 = z R /10 with Rayleigh range z R = 22 µm and V 0 = 12.5[1 + tanh(t − τ )] ℏω. This laser spoon travels in the condensate for τ = 6 ms at a speed v = 0.68 mm/s, corresponding to a Mach number 0.4, and is then smoothly withdrawn by ramping down the laser power. In Fig. 3(a) the laser beam is highlighted in the condensate density isosurface and its motion is along the y axis. After turning off the laser beam a vortex dipole is revealed, which nucleated in the wake of the laser, see Fig. 3(b). This vortex dipole becomes spontaneously short-circuited at its ends due to the combination of the inhomogeneous condensate density and the curvature in the laser intensity profile. The mother vortex loop breaks into five primary daughter loops, shown in Fig. 3(c) and Fig. 3(d), which undergo multiple reconnections before being converted into sound waves, which become visible as ripples on the condensate surface as time progresses, see frames (e)-(g) in Fig. 3. Increasing the Mach number of the traveling laser spoon increases the number of mother vortex rings produced in the wake of the laser. The large number of condensate and laser parameters readily accessible in experiments enables controlled production of elongated vortex dipoles and daughter vortex loops.
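A standard way to evolve Eq. (1) numerically is the split-step Fourier method. The 2D toy propagator below is our own sketch in dimensionless units (ℏ = m = 1, generic interaction strength g), not the production code behind Figs. 2 and 3; each substep multiplies the wavefunction by a pure phase, so the norm is conserved by construction.

```python
import numpy as np

def gpe_step(psi, dt, k2, V, g):
    """One second-order split step of i d_t psi = [-(1/2)lap + V + g|psi|^2] psi."""
    psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))  # half potential step
    psi_k = np.fft.fft2(psi)
    psi_k *= np.exp(-0.5j * dt * k2)                             # full kinetic step
    psi = np.fft.ifft2(psi_k)
    psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))  # half potential step
    return psi

n, L = 128, 16.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
kk = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(kk, kk, indexing="ij")
K2 = KX ** 2 + KY ** 2

V = 0.5 * (X ** 2 + Y ** 2)                       # harmonic trap
psi = np.exp(-(X ** 2 + Y ** 2) / 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n) ** 2)  # normalize to N = 1

for _ in range(200):
    psi = gpe_step(psi, 1e-3, K2, V, g=100.0)

norm = np.sum(np.abs(psi) ** 2) * (L / n) ** 2
assert abs(norm - 1.0) < 1e-9                     # unitary steps conserve the norm
```

The same scheme extends to 3D (with `fftn`/`ifftn`) and to imaginary time for preparing ground states before phase imprinting the dipole.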
Vortex dipoles and their dynamics in Bose-Einstein condensates have recently attracted considerable interest [23,31,32,37,38]. Experiments have considered effectively two-dimensional systems where the vortex dipoles are long-lived structures. We have shown that in elongated harmonically trapped condensates where axial vortex degrees of freedom are active, vortex dipoles become susceptible to the Crow instability mechanism. Self-generated Kelvin-wave excitations cause vortex dipoles to disintegrate forming a sequence of vortex loops which eventually decay into sound waves. We have proposed and simulated an experiment to study this phenomenon. Due to the high degree of controllability in this vortex-dipole creation method, it could also be used to reproducibly generate loopy vortex states and to study their decay.
Our results have further implications for decay mechanisms of quantum turbulence in harmonically trapped Bose-Einstein condensates. In uniform systems, a reconnection-dominated Kolmogorov-Richardson cascade has been suggested to be transformed to a Kelvin-wave cascade when vortex-vortex collisions become too infrequent to yield a sufficient vortex reconnection rate. However, the self-generated vortex loop instabilities may be able to sustain vortex reconnections down to the dissipation scale. It will be particularly interesting to apply vortex-dipole nucleation methods to spinor BECs where the evolution of non-Abelian vortex dipoles with fractional charge may lead to drastically different vortex dynamics.
I thank David Paganin for many useful discussions and comments on this manuscript.
FIG. 1. Evolution of a sinusoidally perturbed vortex-antivortex dipole. (a) The superflow around the vortices is denoted by the circulation vectors κ± and the oriented circles on top of the subfigure indicate the rotational motion of vortex line elements in the direction opposite to the circulating superflow. The pair translates in the y direction shown by the arrow, due to the mutual induction field. (b) The amplitude of the instability grows. (c) Vortex reconnections occur at locations where vortices come in contact with each other. (d) The initial vortex pair has broken into a sequence of vortex loops. (e) Side view of frame (d). The projection of the vortex lines on the z = −2d plane are shown on the bottom of each subfigure (a)-(d), which are plotted, respectively, for times t/t0 = 0.02, 0.04, 0.05, and 0.06, where the unit of time t0 = 4md²/ℏ. Parameters described in the text are = d/200 and k = 5/d.
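The mutual-induction translation of the pair, referred to in the caption above, can be estimated with a point-vortex model: a vortex-antivortex pair with separation d moves at v = κ/(2πd), where κ = h/m is the quantum of circulation. The following sketch is a rough consistency check only; the Rb-87 mass and the separations are illustrative assumptions, not values taken from the paper.

```python
import math

# Point-vortex estimate of the self-induced translation speed of a
# vortex-antivortex pair: v = kappa / (2*pi*d), with circulation
# quantum kappa = h/m.  Rb-87 mass and separations are illustrative.
H = 6.62607015e-34     # Planck constant (J s)
M_RB87 = 1.44316e-25   # Rb-87 atomic mass (kg), assumed value

def dipole_speed(d):
    """Translation speed (m/s) of a 2D point-vortex dipole with separation d (m)."""
    kappa = H / M_RB87
    return kappa / (2 * math.pi * d)

if __name__ == "__main__":
    for d in (2e-6, 5e-6, 10e-6):
        print(f"d = {d * 1e6:4.1f} um  ->  v = {dipole_speed(d) * 1e3:.3f} mm/s")
```

For micron-scale separations this gives speeds of order 0.1 mm/s, i.e. sub-millimeter-per-second translation comparable in magnitude to the condensate speed scales quoted in the text; wider pairs translate more slowly.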
FIG. 2. Crow instability of an elongated vortex dipole in a harmonically trapped Bose-Einstein condensate for d = 28ξ0. (a) Kelvin waves are being generated at the ends of the dipole. (b) Two vortex loops have been pinched off, one from each end of the dipole. (c) Five daughter vortex loops have been produced through reconnection events. (d) The five vortex loops have expanded in diameter, appearing as ten vortex lines. The vorticity axis is now aligned along the x axis. (e) A snapshot from a simulation using d = 5ξ0. The x × y × z dimensions of each rectangular box are 10a0 × 10a0 × 60a0 and the times of the snapshots are marked in the frames.
TABLE I. Instability parameters for different initial vortex dipole separations d. Mean distance dτ separating the vortex lines at time τ is shown in parentheses.

d(ξ0)    λτ(ξ0)    βτ(∠)    τ(ms)
26(11)   20        55°      12
20(10)   14        43°      9
12(7)    12        40°      6
9(6)     12        22°      4
5(4)     12        18°      2
J. R. Spreiter and A. H. Sacks, J. Aeronaut. Sci. 18, 21 (1951).
S. C. Crow, AIAA Journal 8, 2172 (1970).
D. C. Lewellen and W. S. Lewellen, J. Atmos. Sci. 58, 390 (2001).
T. Leweke and C. H. K. Williamson, J. Fluid Mech. 360, 85 (1998).

FIG. 3. Vortex dipole creation and instability in a harmonically trapped Bose-Einstein condensate. (a) A laser beam shown in green translates in the y direction. (b) Vortex dipole has formed in the wake of the laser spoon. (c) Decay of the dipole has begun. (d) Five daughter loops have been created. (e) Daughter vortex loops have been relinked in further reconnections. (f) A snapshot with two vortex rings. (g) Initial vortex dipole has disintegrated creating sound waves seen as ripples on the condensate surface. The x × y × z dimensions of each rectangular box are 10a0 × 10a0 × 60a0 and the times of the snapshots are marked in the frames.
E. A. Kuznetsov and J. J. Rasmussen, Phys. Rev. E 51, 4479 (1995).
N. G. Berloff and P. H. Roberts, J. Phys. A: Math. Gen. 34, 10057 (2001).
R. P. Feynman, Prog. Low Temp. Phys. 1, 17 (1955).
A. J. Leggett, Quantum Liquids: Bose Condensation and Cooper Pairing in Condensed-Matter Systems (Oxford University Press, 2006).
C. Raman, M. Köhl, R. Onofrio, D. S. Durfee, C. E. Kuklewicz, Z. Hadzibabic, and W. Ketterle, Phys. Rev. Lett. 83, 2502 (1999).
M. R. Matthews, B. P. Anderson, P. C. Haljan, D. S. Hall, C. E. Wieman, and E. A. Cornell, Phys. Rev. Lett. 83, 2498 (1999).
K. W. Madison, F. Chevy, W. Wohlleben, and J. Dalibard, J. Mod. Opt. 47, 2715 (2000).
J. R. Abo-Shaeer, C. Raman, J. M. Vogels, and W. Ketterle, Science 292, 476 (2001).
E. Hodby, G. Hechenblaikner, S. A. Hopkins, O. M. Maragò, and C. J. Foot, Phys. Rev. Lett. 88, 010405 (2002).
C. Ryu, M. F. Andersen, P. Cladé, V. Natarajan, K. Helmerson, and W. D. Phillips, Phys. Rev. Lett. 99, 260401 (2007).
A. Ramanathan, K. C. Wright, S. R. Muniz, M. Zelan, W. T. Hill, C. J. Lobb, K. Helmerson, W. D. Phillips, and G. K. Campbell, Phys. Rev. Lett. 106, 130401 (2011).
B. B. Kadomtsev and V. I. Petviashvili, Dokl. Akad. Nauk SSSR 192, 753 (1970).
B. P. Anderson, P. C. Haljan, C. A. Regal, D. L. Feder, L. A. Collins, C. W. Clark, and E. A. Cornell, Phys. Rev. Lett. 86, 2926 (2001).
Z. Dutton, M. Budde, C. Slowe, and L. V. Hau, Science 293, 663 (2001).
N. S. Ginsberg, J. Brand, and L. V. Hau, Phys. Rev. Lett. 94, 040403 (2005).
J. Ruostekoski and Z. Dutton, Phys. Rev. A 72, 063626 (2005).
I. Shomroni, E. Lahoud, S. Levy, and J. Steinhauer, Nat. Phys. 5, 193 (2009).
M. Ma, R. Carretero-González, P. G. Kevrekidis, D. J. Frantzeskakis, and B. A. Malomed, Phys. Rev. A 82, 023621 (2010).
T. W. Neely, E. C. Samson, A. S. Bradley, M. J. Davis, and B. P. Anderson, Phys. Rev. Lett. 104, 160401 (2010).
K. Sasaki, N. Suzuki, and H. Saito, Phys. Rev. Lett. 104, 150404 (2010).
S. Inouye, S. Gupta, T. Rosenband, A. P. Chikkatur, A. Görlitz, T. L. Gustavson, A. E. Leanhardt, D. E. Pritchard, and W. Ketterle, Phys. Rev. Lett. 87, 080402 (2001).
T. P. Simula and P. B. Blakie, Phys. Rev. Lett. 96, 020404 (2006).
Z. Hadzibabic, P. Krüger, M. Cheneau, B. Battelier, and J. Dalibard, Nature 441, 1118 (2006).
P. Cladé, C. Ryu, A. Ramanathan, K. Helmerson, and W. D. Phillips, Phys. Rev. Lett. 102, 170401 (2009).
S. Tung, G. Lamporesi, D. Lobser, L. Xia, and E. A. Cornell, Phys. Rev. Lett. 105, 230408 (2010).
C.-L. Hung, X. Zhang, N. Gemelke, and C. Chin, Nature 470, 236 (2011).
C. N. Weiler, T. W. Neely, D. R. Scherer, A. S. Bradley, M. J. Davis, and B. P. Anderson, Nature 455, 948 (2008).
D. V. Freilich, D. M. Bianchi, A. M. Kaufman, T. K. Langin, and D. S. Hall, Science 329, 1182 (2010).
W. Thomson (Lord Kelvin), Philos. Mag. 10, 155 (1880).
T. P. Simula, T. Mizushima, and K. Machida, Phys. Rev. Lett. 101, 020402 (2008).
C. F. Barenghi, R. Hänninen, and M. Tsubota, Phys. Rev. E 74, 046303 (2006).
R. E. Hershberger, D. Bolster, and R. J. Donnelly, Phys. Rev. E 82, 036309 (2010).
P. Kuopanportti, J. A. M. Huhtamäki, and M. Möttönen, Phys. Rev. A 83, 011603 (2011).
S. J. Rooney, P. B. Blakie, B. P. Anderson, and A. S. Bradley, arXiv:1105.1189 (2011).
Maybe, Maybe Not: A Survey on Uncertainty in Visualization
Krisha Mehta
Maybe, Maybe Not: A Survey on Uncertainty in Visualization
Index Terms: Uncertainty, Data Visualization
Understanding and evaluating uncertainty play a key role in decisionmaking. When a viewer studies a visualization that demands inference, it is necessary that uncertainty is portrayed in it. This paper showcases the importance of representing uncertainty in visualizations. It provides an overview of uncertainty visualization and the challenges authors and viewers face when working with such charts. I divide the visualization pipeline into four parts, namely data collection, preprocessing, visualization, and inference, to evaluate how uncertainty impacts them. Next, I investigate the authors' methodologies to process and design uncertainty. Finally, I contribute by exploring future paths for uncertainty visualization.
INTRODUCTION
With a rise in the complexity and dimensionality of data, analyzing and modeling data becomes more challenging. When most of our decisions are data-driven, it becomes imperative that we know the nature of the data and the patterns it contains. As a result, analyzing the inherent uncertainty in the data is gaining more significance. In various fields, uncertainty can signify different things. For instance, data bias, random or systematic error, and statistical variance are all factors that contribute to data uncertainty. Without understanding the underlying uncertainty in our data, we cannot make accurate predictions. Similarly, to observe the true structure of our data and as well as identify patterns in it, we need to visualize it. Today, we can no longer undermine the significance of uncertainty nor ignore the importance of visualizations for data analysis.
As mentioned before, uncertainty is bound to exist whenever there is data. Therefore representation of uncertainty in data visualizations is crucial. Consider the example of hurricane path maps, as shown in Figure 1. The increase in the width of the predicted path with time is not due to an increase in the size of the hurricane. Instead, it is representing the inherent uncertainty in the data. In other words, the visualization indicates that compared to Friday, Sunday's hurricane path is more difficult to predict with any degree of accuracy.
Information tends to be withheld from the viewer when one does not portray uncertainty in the visualization. Therefore the viewer might occasionally be ignorant of this exclusion. This breach of trust can have significant consequences for both the author and the viewer. Given this significance, it is reasonable to assume that visualizations frequently include uncertainty. But how often do we encounter charts that represent uncertainty? How frequently do we check for bias in graphs that represent public surveys? As it turns out, not frequently.
In a recent study [9], 121 journalism articles, social science surveys, and economic estimates were examined. Out of 449 visualizations created for inference, the study demonstrates that only 14 accurately depict uncertainty. "What's Going on in This Graph?" is a New York Times (NYT) initiative to increase graphical literacy, especially among students. Different categories of charts, such as maps, parts-to-whole, and associations, are published for students to explore and analyze. When I looked into the distribution of these charts, I found that only 6 out of the 136 charts show uncertainty.

Figure 1: An example chart for Matthew showing its five-day forecast track [5]

* e-mail: [email protected]
The question I ask is, do we actually examine uncertainty representations when we come across them in order to make decisions, or do we simply ignore them? Does uncertainty offer value or just clutter these visualizations? I try to investigate these questions in this paper. Visualizations are an integral part of newspapers, government bills, and business earnings reports to name a few. The public uses them to gain insights, spot trends, and make decisions.
Hence, when we visualize data, it becomes critical to support those visualizations with information about uncertainty. People frequently use visualizations to examine data and make observations. A lack of uncertainty representation could result in incorrect and erroneous interpretations. However, it can be challenging to visualize uncertainty. There are limited standard guidelines or protocols that authors can follow when they create such charts. Given these drawbacks, uncertainty visualization is considered one of the top research problems in data visualization [13]. With the help of a few uncertainty visualization examples, this survey studies how uncertainty contributes to every phase in visualization. Most research in this area focuses on creating charts with uncertainty and how viewers may perceive them. However, uncertainty is also influential in the other parts of the data visualization process, such as during data collection and preprocessing.
The objectives of this paper are as follows: This work is divided into the following sections. Section 2 defines uncertainty and describes the relationship between uncertainty and visualization. In Section 3, I classify the data visualization pipeline into four phases, analyzing the involvement of uncertainty in each phase. The classification helps look at each phase individually, focusing on the challenges and bottlenecks authors and viewers face when working with uncertainty visualization. Finally, I study some state-of-the-art methods to visualize uncertainty and discuss future directions for research. I conclude the paper in Section 4.
UNCERTAINTY AND VISUALIZATION
Visualizations are incredibly important for examining, analyzing, and interpreting data in the era of big data. Visualizations are evidence that a picture really does say a thousand words. They aid viewers in seeing trends, background noise, and outliers. Asking the correct questions can be quite challenging when there is an abundance of data. Through visualizations, viewers can determine what questions the data can help answer. With improvements in hardware, software, and graphics theory, data visualizations are adopted more frequently and widely [26]. Viewers use visualizations to make decisions. However, making decisions and drawing observations by looking at visualizations can be complex due to the statistical variance and uncertainty present in these visualizations.
As mentioned previously, uncertainty can have different definitions based on different scenarios [3]. Broadly speaking, uncertainty is classified into two types, aleatory and epistemic. Aleatory uncertainty arises from random fluctuation and unknown outcomes when an experiment is run multiple times in a consistent environment. For example, in a drug trial, a participant's blood pressure can vary due to stress and anxiety. There might also be measurement errors in the sphygmomanometer. Aleatory uncertainty can be minimized by controlling individual factors and increasing the number of readings. Epistemic uncertainty, on the other hand, arises from a lack of knowledge, like predicting the outcome of the same experiment in a completely different, unknown environment. For example, predicting the effect of a drug on a new disease. Uncertainty can be measured, like risks, but can also be unquantified, like bias. While aleatory uncertainty is more widely represented in visualizations [25], both types can be represented with distribution graphs. Uncertainty and visualizations are interwoven, and working with one often requires working with the other. In 1644, Michael Florent van Langren was one of the first researchers to use visualization for statistical analysis [25]. He used a 1D line graph to present the 12 known estimated longitudinal distances between Toledo and Rome, as shown in Figure 2. Instead of using a table to show this data, Langren used this graph to showcase the wide range of variation. Even though all the distances were over-estimated (the actual distance, in longitude, is shown using the arrow), the graph remains classic in demonstrating the power of visualization. The popular Anscombe's quartet [1] is a perfect example of how data with similar statistics might have a very different distribution, which is observed when visualized.
The quartet consists of four datasets with 11 points having nearly the same mean, sample variance, correlation, linear regression, and coefficient of determination. The four datasets may appear very similar to viewers looking at the data and the descriptive statistics. However, when one visualizes them, the difference in their distribution is very evident, as shown in Figure 3. Looking at data in tabular form may hide insightful observations and can lead to erroneous conclusions. Today, researchers across all domains use extensive libraries such as [4,11,12,19,22] to analyze data uncertainty. Using visualizations to represent and study uncertainty in data is widely adopted. However, uncertainty in visualizations is often not communicated [9]. One of the earliest instances of uncertainty being presented can be traced back to the 18th century. Joseph Priestley, a British scientist, created "A Chart of Biography" to present the lifespans of famous people as shown in Figure 4. He used horizontal lines to portray the lifetime of about 2000 people and used dots before or after the lines to communicate uncertainty.
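Anscombe's point is easy to verify directly. The sketch below uses the standard published values of the quartet and only Python's standard library to show that all four data sets share (to two decimal places) the same means, variances, and correlation, even though their scatter plots look nothing alike.

```python
import statistics as st

# Anscombe's quartet: four data sets with nearly identical summary
# statistics but very different shapes when plotted.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

def corr(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = st.mean(x), st.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

for name, (x, y) in quartet.items():
    print(f"{name:>3}: mean(x)={st.mean(x):.2f} mean(y)={st.mean(y):.2f} "
          f"var(x)={st.variance(x):.2f} var(y)={st.variance(y):.2f} "
          f"r={corr(x, y):.3f}")
```

Tabular summaries like these are exactly what hide the structural differences; only a plot reveals that one set is linear with an outlier, one is curved, and one is degenerate in x.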
Visualizations of uncertainty, however, are not common. Numerous factors influence why authors decide against visualizing uncertainty. Since they do not know all the information about the dataset, viewers may draw inaccurate conclusions in the absence of uncertainty representation. Nevertheless, introducing more uncertainty could also make the audience feel too overwhelmed to pay attention to it. The study of why visualizing uncertainty is rare is still in its early stages. In the section that follows, I go through each of these issues in more detail and look at how uncertainty affects every stage of data visualization.

Figure 5: The data visualization process divided into four stages to show how uncertainty affects each stage

UNCERTAINTY IN VISUALIZATION

Previous works in the field have attempted to classify the data visualization process differently. [14] considers sampling, modeling, visualization, and decision-making as the primary sources of uncertainty. This paper follows a similar classification. I divide the visualization pipeline into data collection, preprocessing, visualization, and inference, as shown in Figure 5. Pang et al. [18] classify the process into data collection, derivation, and visualization and discuss how uncertainty is introduced in each stage.
Under the data collection phase, the paper mainly discusses the uncertainty added due to measurement errors. However, there are other sources, such as bias and sampling error, that the paper fails to describe. I investigate these uncertainties in Section 3.3.1. The authors then discuss the change data undergoes when it is preprocessed. These changes include converting one unit to another, rescaling, and resampling. However, they do not mention other vital issues such as missing data, approximation, and interpolation that I examine in Section 3.3.2. Next, the authors highlight how uncertainty also influences the data visualization stage itself. They mainly focus on radiosity and volume rendering, while this paper delves more into 2D visualizations. Finally, I explore how viewers infer these visualizations and the challenges they face while making a decision from these charts.
Uncertainty is presented at every phase of this classification. However, understanding and evaluating uncertainty in each of these phases is unique. Therefore, authors are required to approach these uncertainties based on their type and complexity, understand their abstraction, and then present them in visualizations in a way that is easy to grasp.
Data Collection

Given the interdisciplinary nature of visualizations, the format, quantity, and type of data used to create them vary immensely. Different data implies different data collection processes and uncertainties. Uncertainty is intertwined with data acquisition and can arise from random variables and modeling errors [14]. Pang et al. [18] explain how almost all acquired data has statistical variation. Collected data can have errors, bias, and variance. [23] study how bias can be introduced during the process of collecting data. Datasets are prone to various biases that include but are not limited to selection bias, volunteer bias, admission bias, survivor bias, and misclassification bias.
It is imperative that datasets resemble the true population as closely as possible. Data can also contain different types of errors, such as coverage error, sampling error, nonresponse error, and measurement error [7]. Missing data points is another common challenge researchers face during data collection.

Figure 6: Free Speech, a graph by the New York Times based on a national poll including 1,507 U.S. residents [16]

Correcting these errors is not always possible, but they can be mentioned in the visualization to inform the viewer. However, uncertainty is often ignored when authors create visualizations. Other times this uncertainty in data is not communicated to them [9]. For example, when I analyze a piece called "Free Speech" (as shown in Figure 6) published in the What's Going On in This Graph section of the NYT [16], we can see how information about uncertainty from the data source is not mentioned directly in the graph. The bars of the graph do not sum to 100 percent since they are missing the no-response segment. The article mentions that the margin of error for the sample is +/-3.1%, but the graph makes no mention of it.
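A margin of error like the ±3.1% quoted for this poll is one concrete, computable piece of sampling uncertainty. The sketch below applies the textbook simple-random-sample formula MOE = z·sqrt(p(1−p)/n) at the conservative p = 0.5; note that for n = 1507 this yields roughly ±2.5%, so the larger published figure presumably also folds in weighting or design effects, which this simple formula does not capture.

```python
import math

# 95% margin of error for a simple random sample of size n at the
# most conservative proportion p = 0.5: MOE = z * sqrt(p*(1-p)/n).
# z = 1.96 and p = 0.5 are standard textbook choices (assumptions),
# and real polls often report a larger margin once design effects
# from weighting and clustering are included.
def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1507)
print(f"simple-random-sample 95% MOE for n=1507: +/- {moe:.1%}")
```

Since the margin shrinks only with the square root of n, quadrupling the sample size merely halves it, which is one reason disclosing it matters: readers tend to overestimate the precision of a single percentage.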
Efforts are being made by researchers to improve the way uncertainty in the data collection phase is captured, processed, and communicated. Athawale et al. [2] propose using statistical summary maps to represent uncertainty in scalar field data caused by data acquisition.
Data Preprocessing
Raw data is imperfect and can consist of noise and error. Once data is collected, it undergoes processing for accuracy and standardization. However, this phase adds uncertainty to the data that may not be immediately evident. For example, fundamental transformations like rounding off values, converting data from one unit to another, rescaling, resampling, and quantizing can add uncertainty [1]. Even though this might seem minor, the impact can be significant. For example, based on whether we take the value of pi as 22/7 (≈3.14286) or 3.14159, the computed area of the Sun can vary by a difference of 239×10^6 sq. miles. A significant setback that most datasets suffer from is missing data. Data can have missing values for many reasons, such as instrument malfunction, incomplete observations, and lost data. Missing values leave a gap in the dataset, which makes room for uncertainty. Working with such uncertainty requires the authors to take extra measures during preprocessing. Authors attempt to find close estimates of the missing values to provide the viewers with a complete picture. One way to tackle this problem is by deleting the complete entry that has the missing value. This leads to a loss of data and insights. Another option is to make an educated guess about the missing value. However, this is highly unreliable and often not recommended. Using interpolation, imputation, or other techniques can induce errors [3].
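The pi example above is quick to check with arithmetic. The sketch below compares the area of the solar disk pi·r² under the two approximations of pi; the solar radius used is an assumed round figure (roughly 432,700 miles), not a value from this paper, and with it the gap comes out at a couple of hundred million square miles, the same order as the figure quoted in the text.

```python
# How much a "minor" rounding choice matters: area of the solar disk
# pi * r**2 computed with pi = 22/7 versus pi = 3.14159.  The solar
# radius in miles is an assumed round figure, not from the paper.
R_SUN_MILES = 432_690

area_22_7 = (22 / 7) * R_SUN_MILES ** 2      # pi approximated as 22/7
area_5dp = 3.14159 * R_SUN_MILES ** 2        # pi to five decimal places

diff = area_22_7 - area_5dp
print(f"difference: {diff / 1e6:.0f} million sq. miles")
```

A relative error of about 0.04% in pi becomes an absolute error of hundreds of millions of square miles once multiplied by a large r², which is exactly why seemingly innocuous preprocessing choices deserve to be documented.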
Sometimes, authors choose to encode these estimated values differently in their designs to inform the viewer about the gap in the dataset. However, how authors choose to visualize this encoding becomes very influential in how viewers perceive these graphs. Whether authors highlight, downplay, annotate or remove the missing values determines how much confidence and credibility the viewer shows in the visualization [24].
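The two common strategies just discussed, dropping incomplete entries versus estimating the missing values, each distort the data in their own way. A minimal sketch, on a made-up series of readings, contrasting listwise deletion with mean imputation:

```python
import statistics as st

# Two common ways to handle a missing value (None), on a made-up
# series of readings.  Both change the summary statistics, which is
# exactly the kind of preprocessing uncertainty worth disclosing.
readings = [21.0, 22.5, None, 23.0, 21.5, None, 24.0]

# 1) Listwise deletion: drop the missing entries (loses data points).
observed = [r for r in readings if r is not None]

# 2) Mean imputation: fill gaps with the observed mean
#    (preserves the mean but artificially shrinks the spread).
fill = st.mean(observed)
imputed = [r if r is not None else fill for r in readings]

print("deletion:   n=%d mean=%.2f stdev=%.2f"
      % (len(observed), st.mean(observed), st.stdev(observed)))
print("imputation: n=%d mean=%.2f stdev=%.2f"
      % (len(imputed), st.mean(imputed), st.stdev(imputed)))
```

Deletion shrinks the sample, while mean imputation leaves the mean intact but understates the variability; either way the chart built from the processed series carries uncertainty that the viewer cannot see unless the author encodes it.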
Visualization Creation
Since uncertainty is ingrained in different parts of the data collection process, it is not easy to identify and control it. However, once the data is cleaned and processed, the authors face a new problem. Creating visualizations requires authors to make various decisions on behalf of the viewer. Authors are expected to choose the type of visualization based on data type, which may lead them to choose the scaling, sorting, ordering, and aesthetics [27]. Compelling visualizations are accurate and suggest an understanding and interpretation of data. Hence, it is the author's responsibility to analyze data correctly before creating any visualizations. Midway [15] describes ten design principles authors can follow to create charts. However, none of those principles discuss how uncertainty can be presented. Creating effective visualizations is hard. However, when we add uncertainty representation, the task becomes much more complex [17]. The data visualization community of researchers, designers, journalists, etc., has been reluctant to add uncertainty to their charts. Authors are aware of how significant uncertainty visualization is. Yet, they choose to exclude uncertainty when they design their charts for various reasons discussed below.
Uncertainty is hard to represent
Though data is replete with uncertainty, the difficulty lies in determining if it should be represented and how. If the uncertainty has no direct relationship to the goal of the visualization, then it may not be included in the visualization. But this is not a conclusion that authors can quickly draw. The rise in techniques of visualizing uncertainty can make it harder for authors to decide which one to choose from. One of the biggest challenges in visualizing uncertainty is discovering and communicating the relationship and impact that the uncertainty has on the data. Data visualization is often a preferred choice for analysis due to its ability to present high-dimensional data. However, uncertainty also has dimensions, generally classified into scalar, vector, and tensor [20]. While scalar and vector fields of uncertainty are depicted in charts, tensor fields are often avoided. Mapping these dimensions of uncertainty along with the dimensions of data is challenging and often overlooked when creating charts. Instead, authors tend to simplify uncertainty to align with the dimensionality of the data.
Uncertainty is hard to calculate and verify
Another reason why authors choose to exclude uncertainty from their charts is that calculating uncertainty is complex [9]. It is well known that even mathematicians and statisticians sometimes find it challenging to calculate the error or variance in a dataset. Verifying if the presented uncertainty is correct is challenging. Moreover, if the authors make an error while designing their charts, they end up providing wrong information to the viewers and losing their trust.
Viewers may be overwhelmed
[9] explains why the inclusion of uncertainty in graphs is not widely adopted. Authors believe that uncertainty can be challenging for the viewers to perceive and understand. As a result, viewers may choose to either look at an alternative graph that does not contain any uncertainty representation or overlook the uncertainty in their graph altogether.
Uncertainty can add clutter to the visualization
Authors can be unsure of how effective communicating uncertainty is. They also worry about adding more information to an already visually complex visualization. For many authors, the goal of a chart is to express a signal [9] that can be useful to their viewers. This signal tends to present a single point or a single source of truth. Uncertainty tends to challenge that notion by obfuscating the signal.
Additionally, expressing the intricacy of uncertainty through a visual abstraction is challenging. The dimensionality of the data also plays a vital role in deciding whether uncertainty should be represented or not. An increase in the dimensionality of data makes it harder for the human visual system to perceive it effectively. Sometimes even two-dimensional charts can be overwhelming for the viewer. In such a case, representing uncertainty adds visual overload [20].
Visualization Inference
Uncertainty is hard to understand and analyze. When faced with perceiving an uncertain visualization, viewers can get confused or derive inaccurate information from it. One easy method viewers tend to use is to ignore the uncertainty in the graph altogether. Another way is to substitute tricky calculations with easy ones or use heuristics to make decisions. However, this may not always give a correct observation. The most common approach to show uncertainty is by using box plots and error bars. Though widely used, viewers may find them challenging to analyze [6]. Sometimes visualizing uncertainty as frequency instead of distribution provides a better understanding.
Currently, research is being done to create visualizations that help understand uncertainty more intuitively. For example, hypothetical outcome plots (HOPs) represent uncertainty by animating a finite set of individual draws [10]. This approach expects no prior knowledge of the domain from the viewer. However, using HOPs in physical media might be challenging. Bubble treemaps [8] are another approach for visualizing uncertainty. These circular treemaps encode additional information about uncertainty by allocating additional space for visuals.
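The core mechanism behind HOPs can be sketched without any plotting: each animation frame shows one random draw, and the viewer estimates probabilities by counting outcomes across frames. In the sketch below the two normal distributions are invented for illustration; a real HOP would render each (a, b) pair as a chart frame rather than print a summary.

```python
import random

# Core idea behind hypothetical outcome plots (HOPs): instead of a
# static interval, show one random draw per animation frame and let
# the viewer count outcomes across frames.  The two normal
# distributions here are invented for illustration.
random.seed(42)  # fixed seed so the demonstration is reproducible

def hop_frames(mu_a, sigma_a, mu_b, sigma_b, n_frames):
    """One (a, b) draw per frame; a real HOP renders each pair as a chart."""
    return [(random.gauss(mu_a, sigma_a), random.gauss(mu_b, sigma_b))
            for _ in range(n_frames)]

frames = hop_frames(mu_a=5.0, sigma_a=1.0, mu_b=4.0, sigma_b=1.0, n_frames=200)
share_a_wins = sum(a > b for a, b in frames) / len(frames)
print(f"A beat B in {share_a_wins:.0%} of frames")
```

Counting frames in which A exceeds B approximates P(A > B), around three-quarters for these parameters, which is the kind of probability judgment static error bars make much harder to extract.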
While uncertainty is still underrepresented in visualizations, more researchers are slowly adding it to their designs. One of the significant setbacks for authors of uncertainty visualizations is calculating uncertainty, while for viewers it is graphical literacy. Efforts can be made to gradually increase this literacy through educational programs. Furthermore, work should be done to understand which visualization type best suits a given uncertainty type. This relationship can also depend on the type of data being represented and the target audience viewing the graph. For example, graphs published in newspapers and reports must be easily understandable by the general public. Hence, studies focusing on visualizing uncertainty for viewers with no prior knowledge can be very insightful.
CONCLUSION
Uncertainty visualization is one of the most complex research areas in data visualization today. This work provided an overview of uncertainty visualization and the relationship between uncertainty and visualization. I divided the visualization pipeline into four phases and surveyed papers to study how uncertainty interacts with each phase of the process. The work also investigated why the representation of uncertainty is not widely practiced by the data visualization community and the challenges viewers face when making inferences from such graphs. Lastly, I discussed a few state-of-the-art methods for designing uncertainty visualizations and offered a glance at the interesting future research directions this field offers.
• Provide an entry point for anyone who wants to learn about uncertainty visualization
• Delineate the significance of uncertainty visualizations
• Explore how uncertainty influences every phase of the data visualization process
• Understand the challenges authors and viewers face when interacting with it
• Discuss the open problems and future research directions in the field
Figure 2: Langren's line graph is one of the first visualizations to present uncertainty.
Figure 3: Anscombe's quartet represents four datasets with similar statistics but very different distributions.
Figure 4: Priestley's Chart of Biography [21].
REFERENCES
[1] F. J. Anscombe. Graphs in statistical analysis. The American Statistician, 27(1):17-21, 1973.
[2] T. Athawale, D. Maljovec, L. Yan, C. Johnson, V. Pascucci, and B. Wang. Uncertainty visualization of 2D Morse complex ensembles using statistical summary maps. IEEE Transactions on Visualization and Computer Graphics, 2020.
[3] G.-P. Bonneau, H.-C. Hege, C. R. Johnson, M. M. Oliveira, K. Potter, P. Rheingans, and T. Schultz. Overview and state-of-the-art of uncertainty visualization. In Scientific Visualization, pp. 3-27. Springer, 2014.
[4] M. Bostock, V. Ogievetsky, and J. Heer. D³ data-driven documents. IEEE Transactions on Visualization and Computer Graphics, 17(12):2301-2309, 2011. doi: 10.1109/TVCG.2011.185
[5] N. H. Center. An example chart for Matthew showing its five-day forecast track. Wikimedia Commons, 2016.
[6] M. Correll and M. Gleicher. Error bars considered harmful: Exploring alternate encodings for mean and error. IEEE Transactions on Visualization and Computer Graphics, 20(12):2142-2151, 2014.
[7] R. D. Fricker. Sampling methods for web and e-mail surveys. The SAGE Handbook of Online Research Methods. London: SAGE Publications Ltd, 2008.
[8] J. Görtler, C. Schulz, D. Weiskopf, and O. Deussen. Bubble treemaps for uncertainty visualization. IEEE Transactions on Visualization and Computer Graphics, 24(1):719-728, 2018. doi: 10.1109/TVCG.2017.2743959
[9] J. Hullman. Why authors don't visualize uncertainty. IEEE Transactions on Visualization and Computer Graphics, 26(1):130-139, 2019.
[10] J. Hullman, P. Resnick, and E. Adar. Hypothetical outcome plots outperform error bars and violin plots for inferences about reliability of variable ordering. PLOS ONE, 10(11), 2015.
[11] J. D. Hunter. Matplotlib: A 2D graphics environment. Computing in Science & Engineering, 9(3):90-95, 2007.
[12] P. T. Inc. Collaborative data science, 2015.
[13] C. R. Johnson and A. R. Sanderson. A next step: Visualizing errors and uncertainty. IEEE Computer Graphics and Applications, 23(5):6-10, 2003.
[14] A. Kamal, P. Dhakal, A. Y. Javaid, V. K. Devabhaktuni, D. Kaur, J. Zaientz, and R. Marinier. Recent advances and challenges in uncertainty visualization: a survey. Journal of Visualization, 24(5):861-890, 2021.
[15] S. R. Midway. Principles of effective data visualization. Patterns, 1(9):100141, 2020.
[16] T. L. Network. Free speech. The New York Times, 2022.
[17] L. Padilla, M. Kay, and J. Hullman. Uncertainty visualization. 2020.
[18] A. T. Pang, C. M. Wittenbrink, S. K. Lodha, et al. Approaches to uncertainty visualization. The Visual Computer, 13(8):370-390, 1997.
[19] I. Patil. Visualizations with statistical details: The 'ggstatsplot' approach. Journal of Open Source Software, 6(61):3167, 2021.
[20] K. Potter, P. Rosen, and C. R. Johnson. From quantification to visualization: A taxonomy of uncertainty visualization approaches. In IFIP Working Conference on Uncertainty Quantification, pp. 226-249. Springer, 2011.
[21] J. Priestley. A Description of a Chart of Biography: With a Catalogue of All the Names Inserted in It, and the Dates Annexed to Them. William Eyres, 1765.
[22] A. Satyanarayan, D. Moritz, K. Wongsuphasawat, and J. Heer. Vega-Lite: A grammar of interactive graphics. IEEE Transactions on Visualization and Computer Graphics, 23(1):341-350, 2016.
[23] A.-M. Simundic. Bias in research. Biochemia Medica, 23(1):12-15, 2013.
[24] H. Song and D. A. Szafir. Where's my data? Evaluating visualizations with missing data. IEEE Transactions on Visualization and Computer Graphics, 25(1):914-924, 2018.
[25] E. R. Tufte, S. R. McKay, W. Christian, and J. R. Matey. Visual Explanations: Images and Quantities, Evidence and Narrative, 1998.
[26] A. Unwin. Why is data visualization important? What is important in data visualization? Harvard Data Science Review, 2(1), 2020. https://hdsr.mitpress.mit.edu/pub/zok97i7p
[27] L. Wilkinson. The grammar of graphics. In Handbook of Computational Statistics, pp. 375-414. Springer, 2012.
Vortex lattices in rapidly rotating Bose-Einstein condensates: modes and correlation functions
18 Aug 2003
Gordon Baym
Department of Physics
University of Illinois at Urbana-Champaign
1110 West Green Street, Urbana, Illinois 61801
After delineating the physical regimes which vortex lattices encounter in rotating Bose-Einstein condensates as the rotation rate, Ω, increases, we derive the normal modes of the vortex lattice in two dimensions at zero temperature. Taking into account effects of the finite compressibility, we find an inertial mode of frequency ≥ 2Ω, and a primarily transverse Tkachenko mode, whose frequency goes from being linear in the wave vector in the slowly rotating regime, where Ω is small compared with the lowest compressional mode frequency, to quadratic in the wave vector in the opposite limit. We calculate the correlation functions of vortex displacements and phase, density and superfluid velocities, and find that the zero-point excitations of the soft quadratic Tkachenko modes lead in a large system to a loss of long range phase correlations, growing logarithmically with distance, and hence lead to a fragmented state at zero temperature. The vortex positional ordering is preserved at zero temperature, but the thermally excited Tkachenko modes cause the relative positional fluctuations to grow logarithmically with separation at finite temperature. The superfluid density, defined in terms of the transverse velocity autocorrelation function, vanishes at all temperatures. Finally we construct the long wavelength single particle Green's function in the rotating system and calculate the condensate depletion as a function of temperature.
I. INTRODUCTION
Under rapid rotation, a superfluid forms a triangular lattice of quantized vortices carrying the angular momentum of the system. Such structure was seen experimentally in superfluid helium in Refs. [1]. Rapidly rotating Bose-Einstein condensates of cold atoms [2][3][4] open up the study of the physics of vortex lattices in regimes well beyond those achievable in superfluid helium [5][6][7]. With increasing rotational frequency, Ω, condensates go through a number of different regimes. At Ω small compared with the lowest compressional frequencies, Ω ≪ sk 0 , where s is the sound velocity and k 0 ∼ π/R ⊥ is the lowest wavenumber in the finite geometry, with R ⊥ the size of the system perpendicular to the rotation axis, the system is in the "stiff" Thomas-Fermi regime, and responds to rotation effectively as an incompressible fluid. As Ω becomes larger than the lowest sound mode frequencies, Ω > ∼ sk 0 , or essentially that the outer edge of the cloud moves supersonically, the system enters the "soft" Thomas-Fermi regime, where the compressibility becomes important in the response to rotation.
When, in harmonically trapped condensates, Ω approaches the transverse trapping frequency, ω_⊥, the centrifugal force begins to balance the trapping force, and the system flattens towards a lower density, and therefore more compressible, two-dimensional cloud. For Ω ≫ ms^2, where m is the particle mass, the condensate wave function becomes formed primarily of particle orbits in the lowest Landau level of the Coriolis force - the mean field quantum Hall state [8] - with approximate order parameter, for rotation about the z axis,
Ψ(r) = C ∏_j (u − u_j) e^{−|u|^2/2d_⊥^2},  u = x + iy,  (1)
where the u_j = x_j + iy_j are the vortex locations in the usual complex notation, d_⊥ = (mω_⊥)^{−1/2} is the transverse oscillator length, and C is a normalization constant. The structure of the trapped condensate in the axial direction approaches a Gaussian shape as Ω → ω_⊥. Although Eq. (1) predicts that the transverse structure is also Gaussian, the transverse structure, if Thomas-Fermi in the non-rotating cloud, remains Thomas-Fermi in this limit [9], owing to a small admixture of higher Landau levels in the order parameter. Experiments in this regime are reported in Ref. [6]. With further increase of Ω, the vortex lattice is expected to melt, at the point where the number of vortices becomes of order ten percent of the number of particles [10,11], and the system eventually enters new highly correlated bosonic quantum Hall many-particle states, no longer describable in mean field [12][13][14][15][16]. Prior to discussing the modes of the lattice, it is useful to lay out the demarcations between the various regimes in a harmonically trapped gas. We assume that the interaction between the particles, of total number N, is described by a repulsive s-wave interaction parameter g = 4πa_s/m, where a_s is the scattering length; we use units in which ℏ = 1. In the Thomas-Fermi regime, the transverse radius, R_⊥, is given by [17],
R_⊥^2 = d_⊥^2 τ/(1 − (Ω/ω_⊥)^2)^{3/5},  (2)
where τ = [(15Nb a_s/d_⊥)(ω_z/ω_⊥)]^{2/5}, and ω_z is the axial trapping frequency. The factor b ≳ 1 describes the renormalization of the interaction energy of long wavelength density fluctuations in the system [18]. Furthermore,
Ω/ms(0)^2 = (2/τ)(Ω/ω_⊥)(1 − (Ω/ω_⊥)^2)^{−2/5},  (3)
where s(0) is the sound velocity in the center of the cloud. If we write k 0 = α/R ⊥ , where α ≃ 5.45 [19], the criterion for being in the soft regime, that Ω/sk 0 be large, becomes
(Ω/ω_⊥)/(1 − (Ω/ω_⊥)^2)^{1/2} ≫ 1/(√2 α) ≃ 0.13.  (4)
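The numerical threshold in Eq. (4) is easy to reproduce; a minimal check, using α ≃ 5.45 as quoted above and an illustrative rotation rate:

```python
import math

alpha = 5.45  # lowest-mode factor, k_0 = alpha / R_perp, from [19]
threshold = 1.0 / (math.sqrt(2.0) * alpha)
print(f"1/(sqrt(2)*alpha) = {threshold:.3f}")  # reproduces the 0.13 in Eq. (4)

def lhs(omega_ratio):
    """Left-hand side of Eq. (4), with omega_ratio = Omega/omega_perp."""
    return omega_ratio / math.sqrt(1.0 - omega_ratio**2)

# Even a modest rotation rate already exceeds the threshold several times over:
print(f"lhs(0.5) = {lhs(0.5):.3f}")
```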
The experiments of Ref. [5] on Tkachenko modes reach Ω/sk_0 ∼ 1.15. The criterion to be in the mean field quantum Hall regime, Ω/ms^2 ≫ 1, can be conveniently written in terms of the filling factor, ν = N/N_v, where N_v is the total number of vortices in the rotating cloud. Since
N_v = πn_v R_⊥^2 = mΩR_⊥^2 = (Ω/ω_⊥) τ/(1 − (Ω/ω_⊥)^2)^{3/5},  (5)
where n v = mΩ/π is the density of vortices, we find
Ω/ms(0)^2 = (2/ν^{2/3})(Ω/ω_⊥)^{1/3} (d_⊥ ω_⊥/(15 b a_s ω_z))^{2/3}.  (6)
This result is independent of the number of particles, as long as the system is in the Thomas-Fermi regime. For the parameters ω ⊥ /ω z = 8.3/5.2 of the Tkachenko mode experiment of Ref. [5] in 87 Rb, we find Ω/ms(0) 2 ≃ 200/ν 2/3 in the limit Ω → ω ⊥ . The vortex lattice supports a number of modes, first discussed by Tkachenko [20] for a two-dimensional incompressible fluid, and later in Ref. [21] at finite temperature with full effects of the normal fluid, dissipation and Kelvin oscillations of the vortex lines in three dimensions. The low frequency in-plane Tkachenko mode is an elliptically polarized oscillation of the vortex lines, with the semi-major axis of the ellipse orthogonal to the direction of propagation. The Tkachenko mode is linear at small wave vector, k, in the transverse plane,
ω_T = (2C_2/mn)^{1/2} k → (Ω/4m)^{1/2} k,  (7)
where n is the particle density, and C 2 is the elastic shear modulus of the vortex lattice; at slow rotation, C 2 = nΩ/8. In the soft regime, the dispersion relation instead becomes quadratic [11,22],
ω_T = (s^2 C_2/(2Ω^2 nm))^{1/2} k^2.  (8)
Were it possible to rotate helium sufficiently rapidly one would also see this very long wavelength quadratic behavior. Such Tkachenko soft modes can play havoc with the stability of a large system, causing loss of long range phase coherence even at zero temperature; they are eventually responsible for the melting of the lattice [11]. In a recent paper [22] we derived the modes of the vortex lattice for all rotation rates, through constructing the conservation laws and superfluid acceleration equation describing the long wavelength behavior of the system. In this paper we focus on deriving the correlation functions of density, superfluid phase and velocity, and vortex displacements from equilibrium, which enable us to understand the effects of the soft infrared structure on the stability of the superfluid and lattice. This work is a generalization of Ref. [23], which discussed the effects of the oscillations of the vortex lines at finite temperature in liquid helium on the long ranged phase correlations of the superfluid. As we shall see, the long wavelength Tkachenko modes lead to fragmentation of the condensate, even at zero temperature. Whether the system loses phase coherence over its volume or the lattice melts first depends on the number of particles in the system and its rotation rate.
In Sec. II, we review the basic equations describing the dynamics, restricting the analysis to linearized motion in two dimensions, and neglecting the normal component of the superfluid as well as dissipative terms. The analysis of the full three-dimensional problem will be published separately [24]. In Sec. III we construct the correlation functions of the physical quantities of interest, and in Sec. IV study the condensate depletion by constructing the single particle Green's function for the rotating superfluid.
II. CONDENSATE PHASE AND CONSERVATION LAWS
Let us first recapitulate the conservation laws and equation for the superfluid phase which govern the long wavelength behavior of the system. While much of this material has been given in Ref. [22], we include it here in order to facilitate the derivation of the correlation functions. The basic formalism given here applies to general bosonic superfluids.
A. Condensate phase
We work in the frame co-rotating with the lattice, and describe the deviations of the vortices from their home positions by the continuum displacement field, ǫ(r, t). In linear order in the vortex displacements, the long wavelength superfluid velocity, v(r, t), can be written, following Ref. [23], in terms of the long wavelength vortex-lattice displacement field and the phase Φ(r, t) of the order parameter, as

v + 2Ω × ǫ = ∇Φ/m;  (9)
The curl of this equation is
∇ × v = −2Ω ∇ · ǫ.(10)
The origin of Eq. (9) is the law of conservation of vorticity, ̟ ≡ ∇ × v:
∂ ̟ ∂t + ∇ × ( ̟ ×˙ ǫ) = 0,(11)
where here˙ ǫ tells the rate at which the vorticity moves about. Since under uniform rotation, ̟ = 2 Ω, the time derivative of the curl of Eq. (9) is just this equation linearized. The longitudinal part of the left side of Eq. (9) is trivially the gradient of a scalar. Equation (9) constrains the number of degrees of freedom in two dimensions to four from the original five -n, v, and ǫ. The time derivative of Eq. (9) is the superfluid acceleration equation,
m ∂ v ∂t + 2 Ω ×˙ ǫ = − ∇(µ − V eff ),(12)
where µ is the chemical potential. For an axially symmetric harmonic confining trap of frequency ω ⊥ in the transverse direction and ω z in the axial direction,
V eff = m 2 (ω 2 ⊥ − Ω 2 )r 2 + ω 2 z z 2 ,(13)
where r denotes (x, y). In the frame corotating with the vortex lattice, the chemical potential µ is related to the phase by
µ(r, t) − V eff = −h m ∂Φ(r, t) ∂t .(14)
B. Conservation laws
The dynamics are specified by the conservation laws of particles and momentum, together with the superfluid acceleration equation, (12). The continuity equation takes its usual form,
∂n(r, t) ∂t + ∇ · j(r, t) = 0,(15)
where n is the density and j = n v the particle current. Conservation of momentum reads:
m ∂ j ∂t + 2m Ω × j + ∇P + n ∇V eff = − σ − ζ.(16)
Here P is the pressure, and σ is the elastic stress tensor, discussed below. At zero temperature, ∇P = n ∇µ, while in equilibrium, ∇P + n ∇V eff = 0. To calculate the displacement autocorrelation functions, we include here, as in [23], an external driving force, − ζ ( r, t), acting on the lattice, derived from an external perturbation H ′ = ζ · ǫ.
The elastic stress tensor is derived from the elastic energy density of the lattice, which in two dimensions has the form (in the notation of [21]),
E(r) = 2C 1 ( ∇ · ǫ ) 2 + C 2 ∂ǫ x ∂x − ∂ǫ y ∂y 2 + ∂ǫ x ∂y + ∂ǫ y ∂x 2 ,(17)
where C 1 is the compressional modulus, and C 2 the shear modulus of the vortex lattice. The elastic constants are density-dependent properties of the fluid. Then the elastic stress tensor, σ i , is given in terms of the total elastic energy, E el = d 2 rE(r), by
σ i (r, t) = δE el δǫ i = −4∇ i C 1 ∇ · ǫ − 2 ∇ · C 2 ∇ǫ i ,(18)
where we allow for effects of density gradients entering through the elastic constants. In an incompressible fluid,
C 2 = nΩ/8 = −C 1 .
On the other hand, in the quantum Hall regime, the shear modulus is determined by the deviations of the interaction energy caused by distorting the vortex lattice, and [25]
C_2 ≃ (81/(80π^4)) m s^2 n,  (19)
in agreement with the shear modulus given numerically in [11]. Calculation of the elastic constants C 1 and C 2 over the full range of Ω from the stiff Thomas-Fermi to the quantum Hall limits will be given in Ref. [25]. Subtraction of Eq. (16) divided by n, from the superfluid acceleration equation (12), with ∇P = n ∇µ, yields
2 Ω × (˙ ǫ − v ) = σ + ζ mn .(20)
Equations (12), (15), (18), and (20) fully specify the problem for a trapped system with a non-uniform density.
C. Modes
In the following, we neglect effects of non-uniformity of the equilibrium density for simplicity, and proceed to derive the modes as in [22]. The curl of Eq. (20) is
∇ · (˙ ǫ − v ) = 1 2Ωnm ∇ × ( σ + ζ ),(21)
while its divergence, together with (10), yields,
∇ ×˙ ǫ + 2 Ω ∇ · ǫ = − 1 2Ωnm ∇ · ( σ + ζ ),(22)
where ∇ × σ = −2C 2 ∇ 2 ( ∇ × ǫ), and ∇ · σ = −2(C 2 + 2C 1 )∇ 2 ( ∇ · ǫ). To eliminate v from (21), we note that the density oscillations are governed by
(−∂^2/∂t^2 + s^2 ∇^2) n = 2nΩ ∇ × ˙ǫ,  (23)
where the sound speed, s, is given by ms 2 = ∂P/∂n [26]. In terms of the frequency, ω, and wave vector, k, we have then
k · v = 2ω 2 ω 2 − s 2 k 2 Ω · k × ǫ,(24)
so that in terms of longitudinal and transverse components [27],
−iωǫ T + 2Ω + C 2 + 2C 1 Ωnm k 2 ǫ L = − ζ L 2Ωnm ; −iωǫ L − 2ω 2 Ω ω 2 − s 2 k 2 + C 2 Ωmn k 2 ǫ T = ζ T 2Ωnm .(25)
Solving for ǫ L and ǫ T we have
ǫ L = 1 nmD ω 2 + (ω 2 − s 2 k 2 ) C 2 2Ω 2 nm k 2 ζ L + iω ω 2 − s 2 k 2 2Ω ζ T , ǫ T = 1 nmD (ω 2 − s 2 k 2 ) 1 + C 2 + 2C 1 2Ω 2 nm k 2 ζ T − iω ω 2 − s 2 k 2 2Ω ζ L ,(26)
where the secular determinant, whose zeroes determine the mode frequencies, is
D(k, ω) ≡ ω 4 − ω 2 4Ω 2 + s 2 + 4 nm (C 1 + C 2 ) k 2 + 2s 2 C 2 nm k 4 = (ω 2 − ω 2 I )(ω 2 − ω 2 T ) = 0;(27)
we have dropped terms of second order in the elastic constants.
For 2s 2 C 2 k 4 /nm ≪ (4Ω 2 + (s 2 + 4(C 1 + C 2 )/nm)k 2 ) 2 , as is always the case at long wavelengths in both the incompressible and quantum Hall limits, the mode frequencies are given by
ω 2 I = 4Ω 2 + s 2 + 4(C 1 + C 2 ) nm k 2 ,(28)
and
ω_T^2 = (2C_2/nm) s^2 k^4/(4Ω^2 + (s^2 + 4(C_1 + C_2)/nm)k^2).  (29)
The first mode is the standard inertial mode of a rotating fluid; for Ω ≪ s 2 k 2 the mode is a sound wave, while for Ω ≫ s 2 k 2 , the mode frequencies begin essentially at twice the axial trapping frequency. This mode has been calculated in realistic trapping geometries in [28] and [29]. The second mode is the elliptically polarized Tkachenko mode. Equation (22) implies that the inertial mode is circularly polarized: ǫ L /ǫ T ≃ i, and in the Tkachenko mode, ǫ L /ǫ T ≃ iω T /2Ω; the small longitudinally polarized component is π/2 out of phase with the transversely polarized component. In the limit of an incompressible fluid (s 2 → ∞),
ω 2 I = 4Ω 2 + s 2 + 4C 2 nm k 2 ,(30)
and the Tkachenko frequency, ω T , is linear in k, Eq. (7). In the soft limit, by contrast,
ω_T^2 = s^2 C_2 k^4/(2Ω^2 nm);  (31)
unlike in the stiff Thomas-Fermi regime, the mode frequency is quadratic in k at long wavelengths; using Eq. (19) for C_2 we have
ω_T ≃ (9/(4π^2 √10)) s^2 k^2/Ω.  (32)
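The prefactor in Eq. (32) follows from inserting the quantum Hall shear modulus (19) into (31); a short numeric verification:

```python
import math

# Quantum Hall shear modulus, Eq. (19): C2 = c * m * s**2 * n
c = 81.0 / (80.0 * math.pi**4)

# Eq. (31): omega_T**2 = s**2 * C2 * k**4 / (2 * Omega**2 * n * m)
#         = (c/2) * s**4 * k**4 / Omega**2,
# so omega_T = sqrt(c/2) * s**2 * k**2 / Omega.
prefactor = math.sqrt(c / 2.0)

# Eq. (32) quotes this prefactor as 9/(4 * pi**2 * sqrt(10)).
quoted = 9.0 / (4.0 * math.pi**2 * math.sqrt(10.0))
print(prefactor, quoted)  # both ~0.0721
```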
The present results for the modes are valid for a uniform system over the entire range of rotation frequencies, from the slowly rotating stiff regime up to the melting of the vortex lattice. The Tkachenko mode has been calculated numerically for realistic trapping geometries in [19] in the stiff limit and more generally in [30].
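The approximate mode frequencies (28)-(29) can be cross-checked against the exact roots of the biquadratic secular determinant (27). A minimal sketch, using illustrative unit parameters (n = m = s = 1) in the slowly rotating limit where C_2 = nΩ/8 = −C_1:

```python
import math

# Illustrative parameters in units with n = m = s = 1; slowly rotating
# limit, where C2 = n*Omega/8 = -C1 so that C1 + C2 = 0.
n = m = s = Omega = 1.0
C2 = n * Omega / 8.0
C1 = -C2
k = 0.1

# Secular determinant, Eq. (27): D(k, w) = w**4 - A*w**2 + B
A = 4.0 * Omega**2 + (s**2 + 4.0 * (C1 + C2) / (n * m)) * k**2
B = 2.0 * s**2 * C2 * k**4 / (n * m)

# Exact positive roots (squared) of the biquadratic:
w2_plus = (A + math.sqrt(A**2 - 4.0 * B)) / 2.0
w2_minus = (A - math.sqrt(A**2 - 4.0 * B)) / 2.0

# Long-wavelength approximations, Eqs. (28)-(29):
w2_inertial = A        # omega_I**2
w2_tkachenko = B / A   # omega_T**2

print(w2_plus, w2_inertial)    # inertial mode, ~4*Omega**2 at small k
print(w2_minus, w2_tkachenko)  # soft Tkachenko mode
```

At k = 0.1 the approximate and exact frequencies agree to better than one part in 10^4, consistent with dropping terms of second order in the elastic constants.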
III. CORRELATION FUNCTIONS
We turn to determining the effects of the lattice modes on the lattice ordering, and the phase coherence and condensate fraction of the rotating superfluid. To do so we construct the correlation functions of the density, superfluid velocity, vortex displacements, and phase from the dynamical equations in the previous section. All the correlation functions of interest, including the single particle Green's function, can be written in terms of the density-density and displacement-displacement correlation functions. In the following we let AB (k, z) denote the Fourier transform in space and the analytic continuation of the Fourier transformation in imaginary time to complex frequency, z, of the correlation function A(rt)B(r ′ t ′ ) − A(rt) B(r ′ t ′ ) .
The density-density correlation function, nn (k, z), is readily found from the response of n(r, t) to an external potential, U (r, t), coupled to the density. Using Eq. (24) to eliminate ǫ T from (23), we have
nn(k, z) = ∫_{−∞}^{∞} (dω/2π) B(k, ω)/(z − ω) = (nk^2/(mD(k, z))) (z^2 − 2C_2 k^2/nm),  (33)
with D given by (27). As z → ∞, nn (k, z) → nk 2 /mz 2 , which is the expected f-sum rule on the spectral weight B(k, ω),
∫_{−∞}^{∞} (dω/2π) ω B(k, ω) = nk^2/m.  (34)
Similarly, nn (k → 0, 0) → −n/ms 2 , yielding the correct compressibility sum rule,
lim_{k→0} ∫_{−∞}^{∞} (dω/2π) B(k, ω)/ω = n/ms^2.  (35)
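Both limits can be verified numerically from the closed form of Eq. (33); a minimal sketch with illustrative unit parameters:

```python
# Numerical check of the limits of <nn>(k, z) from Eq. (33), with
# illustrative unit parameters (n = m = s = Omega = 1, C2 = -C1 = 1/8).
n = m = s = Omega = 1.0
C1, C2 = -1.0 / 8.0, 1.0 / 8.0

def chi_nn(k, z):
    """Density-density correlation function of Eq. (33)."""
    A = 4.0 * Omega**2 + (s**2 + 4.0 * (C1 + C2) / (n * m)) * k**2
    B = 2.0 * s**2 * C2 * k**4 / (n * m)
    D = z**4 - A * z**2 + B           # secular determinant, Eq. (27)
    return (n * k**2 / (m * D)) * (z**2 - 2.0 * C2 * k**2 / (n * m))

k, z_large = 0.05, 1.0e4
# f-sum rule: z**2 * chi -> n*k**2/m as z -> infinity
print(z_large**2 * chi_nn(k, z_large), n * k**2 / m)
# compressibility sum rule: chi(k -> 0, 0) -> -n/(m*s**2)
print(chi_nn(1.0e-4, 0.0), -n / (m * s**2))
```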
As expected, the f-sum rule is dominated by the high frequency inertial mode; the low frequency Tkachenko mode dominates the compressibility sum rule [31]. The correlations of the longitudinal velocity are given in terms of nn (k, ω), as usual, by
v L n (k, z) = z nk nn (k, z),(36)
and
v L v L (k, z) = z 2 n 2 k 2 nn (k, z) − 1 nm (37) = 1 nmD z 2 4Ω 2 + s 2 k 2 + 2 nm (2C 1 + C 2 )k 2 − 2s 2 C 2 nm k 4 .(38)
The correlation functions of the elastic displacements are given by
ǫ i ǫ j = ∞ −∞ dω 2π B ij (k, ω) z − ω = δ ǫ i δζ j .(39)
where ǫ i is the displacement in the i th direction induced by the force ζ. Equations (39) and (25) then imply,
ǫ L ǫ L (k, z) = 1 nmD(k, z) z 2 + (z 2 − s 2 k 2 ) C 2 2Ω 2 nm k 2 (40) ǫ T ǫ T (k, z) = z 2 − s 2 k 2 nmD(k, z) 1 + C 2 + 2C 1 2Ω 2 nm k 2 (41) ǫ L ǫ T (k, z) = iz z 2 − s 2 k 2 2ΩnmD = ǫ T ǫ L * .(42)
We note, for later calculation of the phase correlations, that the displacement-density correlation function, found from Eq. (23) together with the continuity equation, is
nǫ T (k, z) = ǫ T n (k, z) = 2nΩzk z 2 − s 2 k 2 ǫ T ǫ T ,(43)
and thus the displacement-longitudinal velocity correlation is,
ǫ T v L (k, z) = v L ǫ T (k, z) = 2Ωz 2 z 2 − s 2 k 2 ǫ T ǫ T .(44)
A. Lattice displacements
We first address the effects of the vortex modes on the lattice displacements from equilibrium. At finite temperature, T , the equal time displacement correlations are given by
(ǫ i (r) − ǫ i (r ′ )) 2 = 2 d 2 k (2π) 2 Z 1 − cos k · R ∞ 0 dω 2π B ii (k, ω)(1 + 2f (ω)),(45)
where Z is the thickness of the system in the z direction, R = r − r ′ , and f (ω) = 1/(e βω − 1), with β = 1/k B T . The spectral weights B ij are found from Eqs. (40)-(42) by letting z → ω and
1/D(k, z) → (2π/(ω_I^2 − ω_T^2)) [ (1/(2ω_I)) (δ(ω − ω_I) − δ(ω + ω_I)) − (1/(2ω_T)) (δ(ω − ω_T) − δ(ω + ω_T)) ].  (46)
The leading terms in the mean displacement of a single vortex from equilibrium due to excitations of the modes are
ǫ 2 = d 2 k (2π) 2 Z 1 ω 2 I nm ω I (1 + 2f (ω I )) + s 2 k 2 2ω T (1 + 2f (ω T )) .(47)
The mean displacement is convergent at zero temperature. However, at finite temperature it diverges logarithmically with system size if the Tkachenko mode spectrum reaches down into the soft quadratic regime; then
⟨ǫ^2⟩/ℓ^2 ∼ (TmΩ/8πZC_2) ln N_v.  (48)
The relative separation, with only the leading terms kept,
( ǫ (r) − ǫ (r ′ )) 2 = 2 d 2 k (2π) 2 Z 1 − cos k · R nmω 2 I ω I (1 + 2f (ω I )) + s 2 k 2 2ω T (1 + 2f (ω T )) ,(49)
converges at zero temperature, and the lattice preserves long range positional order at large separation; at finite temperature, however, ⟨(ǫ(r) − ǫ(r′))^2⟩ grows logarithmically with separation, ∼ (TmΩ/4πZC_2) ln N_v(R), where N_v(R) is the number of vortices within radius R. In the stiff Thomas-Fermi limit, this expression becomes (4π/nZλ^2) ln N_v(R), where λ is the thermal wavelength; in the quantum Hall limit we have rather (20π^4/81ν)(T/ms^2) ln N_v(R). Equation (47) may be used with a Lindemann criterion to estimate the point where the lattice melts at T = 0 in an extended system [10,11]. The zero temperature displacements are sensitive to the entire spectrum of modes up to the lattice Debye vector, k_d = (4πn_v)^{1/2}, whereas the mode frequencies derived here are valid only for k ≪ k_d. For a first estimate, we replace the integrand by its infrared limit, letting d^2k/(2π)^2 → n_v. Then using the Tkachenko frequency at rapid rotation, (32), we have
ǫ 2 ℓ 2 = 1 2ν 1 + s 2 k 2 4ω T Ω ≃ 1 2ν 1 + π 2 √ 10 9 ≃ 2.23 ν ,(50)
where ℓ = (1/πn v ) 1/2 is the radius of the Wigner-Seitz cell around a given vortex, and we have used (19). The "1" term arises from the inertial mode, with equal contributions from the transverse and longitudinal displacements; the final term arises solely from the soft Tkachenko mode contribution to the transverse displacements. In order to take into account approximately the mode structure at larger wave vector, we include the kinetic term −∇ 2 n/4m in the pressure. In the non-rotating weakly interacting gas, this term modifies the linear spectrum, sk, into the full Bogoliubov spectrum, E k = (sk) 2 + (k 2 /2m) 2 1/2 . Inclusion of such a term is equivalent to replacing s 2 by s 2 + k 2 /4m 2 in the mode spectrum, with the effect of stiffening the Tkachenko mode at high wave vector. Then at Ω/ms 2 ∼ 40, as expected under typical experimental conditions at ν ∼ 10, the contribution of the Tkachenko modes to ǫ 2 is reduced from 1.73/ν in Eq. (50) to ∼ 0.72/ν, a result consistent with that reported in Ref. [11], mΩ ǫ 2 ≃ 0.66/ν. Then with the contribution of the inertial modes included,
ǫ 2 ℓ 2 ≃ 1.22 ν .(51)
Taking the Lindemann criterion for melting in two dimensions put forth in [10], ǫ 2 /ℓ 2 ≃ 0.07, we find melting of the lattice at filling ν ∼ 17; were we to take only the Tkachenko mode contribution, the melting would be at ν ∼ 10.
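The arithmetic behind these estimates can be reproduced directly; a short check of the numbers quoted in Eqs. (50)-(51) and the resulting melting fillings:

```python
import math

# Eq. (50): <eps^2>/l^2 = (1/2nu) * (1 + pi**2 * sqrt(10)/9)
tk = math.pi**2 * math.sqrt(10.0) / 9.0  # Tkachenko term, ~3.47
total_50 = 0.5 * (1.0 + tk)              # ~2.23 per 1/nu, as quoted

# Eq. (51): with the Bogoliubov-stiffened spectrum the Tkachenko part
# drops to ~0.72/nu, while the inertial "1" term contributes 0.5/nu:
total_51 = 0.5 + 0.72                    # = 1.22

# Lindemann criterion <eps^2>/l^2 ~ 0.07 then gives the melting filling:
nu_melt = total_51 / 0.07
nu_melt_tk_only = 0.72 / 0.07
print(round(total_50, 2), round(nu_melt), round(nu_melt_tk_only))
```

This reproduces the quoted values: 2.23/ν in Eq. (50), and melting at ν ∼ 17 (or ν ∼ 10 from the Tkachenko contribution alone).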
B. Phase correlations
We next determine the effect of the vortex excitations on the correlations of the order parameter in the superfluid and on the condensate fraction. The condensate density, n_0, is most conveniently found, à la Onsager-Penrose, as the limit of the single particle density matrix for large separation:
ψ(r)ψ † (r ′ ) → n 0 , | r − r ′ | → ∞,(52)
where ψ(r) is the single particle annihilation operator [32]. We write ψ(r) in terms of the density and phase operators, viz., ψ(r) = n(r)e iΦ(r) , and expand the long wavelength structure to second order in small fluctuations of n about n, its equilibrium value, and e iΦ(r) about unity. Then in terms of equal time correlation functions,
⟨ψ(r)ψ†(r′)⟩ ≃ n̄⟨e^{iΦ(r)} e^{−iΦ(r′)}⟩ + (1/4n̄)[⟨δn(r)δn(r′)⟩ − ⟨(δn(r))²⟩] + (1/2)[⟨δn(r)(e^{iΦ(r)} − e^{−iΦ(r′)})⟩ + ⟨(e^{iΦ(r)} − e^{−iΦ(r′)})δn(r′)⟩],  (53)
where δn(r) = n(r) − n̄, and n̄ denotes the average density. The first term on the right is the U(1)-invariant correlation of the order parameter, given, for Gaussianly-distributed Fourier components of the phase, by
⟨e^{iΦ(r)} e^{−iΦ(r′)}⟩ = e^{−(1/2)⟨(Φ(r)−Φ(r′))²⟩}.  (54)
Equation (33) implies that as |r − r′| → ∞, ⟨δn(r)δn(r′)⟩, and also the final bracket in (53), vanish. Thus the density of particles in the condensate is given by
n₀ = lim_{|r−r′|→∞} [ n̄ e^{−(1/2)⟨(Φ(r)−Φ(r′))²⟩} − (1/4n̄)⟨δn(r)²⟩ ].  (55)
When the phase fluctuations are convergent, and vanish at large separation, we can expand to second order to find the usual expression for the condensate depletion in the Bogoliubov approximation (see Sec. IV):
n′ = n̄⟨Φ²⟩ + (1/4n̄)⟨δn(r)²⟩.  (56)
To determine the phase-phase and the density-phase correlations, we note that the divergence of Eq. (9) implies that the Fourier components of the phase obey Φ = −(im/k)(v_L − 2Ωǫ_T). Thus
⟨ΦΦ⟩(k, z) = (m²/k²)[ ⟨v_L v_L⟩(k, z) − 2Ω(⟨ǫ_T v_L⟩(k, z) + ⟨v_L ǫ_T⟩(k, z)) + 4Ω²⟨ǫ_T ǫ_T⟩(k, z) ],  (57)

⟨ΦΦ⟩(k, z) = (m²/k²)[ (z²/n̄²k²)⟨nn⟩ − n̄k²/mz² − 4Ω²((z² + s²k²)/(z² − s²k²))⟨ǫ_T ǫ_T⟩ ] = (ms²/n̄D)[ z² − 4Ω² − 4(C₁ + C₂)k²/n̄m ].  (58)
Similarly,
⟨Φn⟩(k, z) = −(imz/n̄k²)[ ⟨nn⟩(k, z) − 4Ω²(n̄²k²/(z² − s²k²))⟨ǫ_T ǫ_T⟩ ] = −(izn̄/ms²)⟨ΦΦ⟩(k, z).  (59)
Note that the phase fluctuations are more divergent in the infrared limit by a factor 1/k 2 than the transverse displacement fluctuations.
The relative phase correlations are given by:
⟨(Φ(r) − Φ(r′))²⟩ = (ms²/n̄) ∫ d²k/(2π)² [(1 − cos k·R)/ω_T] (1 + 2f(ω_T)),  (60)
plus finite terms, where R = r − r′. In the stiff limit (s²k² ≫ Ω²), the relative phase fluctuations are convergent at zero temperature, but, as discussed in [23], the finite temperature contribution of the Tkachenko modes washes out the phase correlations logarithmically, and the system is in fact in a fragmented condensate phase [33]. One finds similar behavior at zero temperature in the soft limit. The zero temperature part of the integral (60) is ∼ ln(k_D R). Thus from Eq. (54), we find the correlation of the phase factors,
⟨e^{iΦ(r)} e^{−iΦ(r′)}⟩ ∼ (k_D R)^{−η} ∼ N_v(R)^{−η/2},  (61)
where again N v (R) is the number of vortices within radius R, and
η = (1/ν)(ms²n̄/8C₂)^{1/2}.  (62)
From the limits Ω ≪ ms² to Ω ≫ ms², the range of η is

(1/ν)(ms²/Ω)^{1/2} ≥ η ≥ π²√10/(9ν).  (63)
The phase correlations fall off algebraically at large R, as expected for a two-dimensional system [34]. As the phase correlations decrease, the condensate fraction also falls, as (n₀/n̄) ∼ N_v^{−η/2}. The falloff of the phase correlations begins to become important for (η/2) ln N_v ≳ 1, which in the quantum Hall limit translates into the condition ν ≲ 1.7 ln N_v, or N ≲ νe^{0.6ν}; for N = 10⁶, the condition is that N_v ≳ 5 × 10⁴; for N = 10⁴, N_v ≳ 10³; and for N = 10³, one needs only N_v ≳ 10² to find loss of phase coherence across the system.
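The quoted vortex-number thresholds follow from solving ν e^{0.6ν} = N for the filling ν and then setting N_v = N/ν; a sketch of that numerical check (the bisection bounds are my own choice):

```python
import math

def nu_for_atom_number(n_atoms):
    """Solve nu * exp(0.6 * nu) = N for nu by bisection (nu = N/N_v)."""
    lo, hi = 1e-6, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(0.6 * mid) < n_atoms:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Minimum vortex number N_v = N / nu at which phase coherence is lost
for n in (1e6, 1e4, 1e3):
    nu = nu_for_atom_number(n)
    print(f"N = {n:.0e}:  nu ~ {nu:.1f},  N_v >~ {n / nu:.0f}")
```

For N = 10⁶ this gives ν ≈ 18 and N_v of order 5 × 10⁴, consistent with the estimates in the text.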
One may ask whether the falloff is significant by the time the lattice has melted. The most divergent terms in the phase fluctuations are induced by the transverse displacement fluctuations in Eq. (58), so that in a system of transverse radius R_⊥ one has, roughly,
lim_{|r−r′|→R_⊥} (1/2)⟨(Φ(r) − Φ(r′))²⟩ ≃ (8/ℓ⁴)⟨k^{−2}⟩⟨ǫ_T²⟩,  (64)
where the average ⟨k^{−2}⟩ ∼ (1/4πn_v) ln N_v is taken with respect to the weight in the transverse displacement correlation function. Thus
lim_{|r−r′|→R_⊥} (1/2)⟨(Φ(r) − Φ(r′))²⟩ ∼ 2(⟨ǫ_T²⟩/ℓ²) ln N_v.  (65)
The Lindemann criterion, ⟨ǫ_T²⟩/ℓ² ∼ 0.07, implies that for N_v ≳ 10³ the right side of Eq. (65) exceeds unity at the melting point, setting the scale for the number of vortices at which loss of long range order of the condensate prior to melting becomes important.
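With the Lindemann value inserted, the condition that the right side of Eq. (65) exceed unity gives the quoted N_v scale directly:

```python
import math

# Eq. (65) with the Lindemann value <eps_T^2>/l^2 = 0.07: the phase spread
# ~ 2 * 0.07 * ln(N_v) exceeds unity for N_v > exp(1 / 0.14) ~ 1.3e3.
threshold = math.exp(1.0 / (2 * 0.07))
print(f"N_v threshold ~ {threshold:.0f}")
```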
At finite temperature, the phase correlation integral (60) is logarithmically singular in the infrared for all R, indicating that in two dimensions, the system is, as expected, no longer Bose condensed. The situation is the same as in the stiff Thomas-Fermi limit, where the phase correlations in two dimensions diverge in the infrared, but in three dimensions fall algebraically with separation [23].
IV. THE SINGLE PARTICLE GREEN'S FUNCTION AND CONDENSATE DEPLETION
We now determine the structure of the single particle excitations in terms of the modes of the lattice by constructing the long wavelength behavior of the single particle Green's function,
G(rt, r′t′) = −i⟨T[(ψ(rt) − ⟨ψ(rt)⟩)(ψ†(r′t′) − ⟨ψ†(r′t′)⟩)]⟩  (66)
(T denotes the time ordered product), in terms of the correlation functions calculated in the previous section. Again expanding in small fluctuations about equilibrium, we have

G(k, z) = (1/4n̄)⟨nn⟩(k, z) + (1/2)[⟨ne^{−iΦ}⟩(k, z) + ⟨e^{iΦ}n⟩(k, z)] + n̄⟨e^{iΦ}e^{−iΦ}⟩(k, z),  (67)
which, with Eqs. (38), (41), and (44), becomes,
to second order in the fluctuations of the density and phase,

G(k, z) = (1/4n̄)⟨nn⟩(k, z) + (i/2)(⟨Φn⟩(k, z) − ⟨nΦ⟩(k, z)) + n̄⟨ΦΦ⟩(k, z).  (68)

To illustrate this method of calculating G, we first consider a weakly interacting non-rotating system, for which Eqs. (33), (27), (59), and (58) give the correlation functions, where E_k = [gn̄k²/m + (k²/2m)²]^{1/2} is the Bogoliubov single particle energy. (The term ∼ k⁴ in E_k² appears only if one includes the kinetic term −∇²n/4m in the pressure.) Substitution of these correlation functions into Eq. (68) yields the usual Bogoliubov result. In the presence of rotation, we use Eqs. (58) and (59) to write the contribution of the modes to the density, n′_k, of particles of momentum k excited out of the condensate, to leading orders in k². The contribution of the inertial mode (the final terms) is always finite in two dimensions. However, the contribution of the Tkachenko mode in the soft limit (the first term) leads to a logarithmic divergence of n′ in two dimensions at zero temperature [11], and a quadratic divergence at finite temperature, thus destroying the condensation in an infinite system.

It is instructive, finally, to calculate the effect of the lattice modes on the superfluid mass density, ρ_s, which is related to the transverse velocity autocorrelation function [35]. Since v_T = −2Ωǫ_L, we find from Eq. (40) that the superfluid mass density vanishes. As discussed in [23], the lattice excitations replenish the sum rule (75), a reflection of the fact that the moment of inertia of a rapidly rotating superfluid is effectively the classical value [36].

The vanishing of ρ_s is consistent with the behavior of G(k → 0, 0), which according to Josephson's sum rule should approach −m²n₀/ρ_s k² [35], while in fact G(k → 0, 0) → −(Ωn̄/C₂)(2m²/k⁴).

ACKNOWLEDGEMENTS

I am grateful to S. Stringari, M. Cozzini, S.A. Gifford, C.J. Pethick, V. Schweikhard, J. Anglin, and S. Vishveshwara for useful discussions. My thanks to the Aspen Center for Physics for hospitality during the course of this research. This work was supported in part by NSF Grant PHY00-98353.
G.A. Williams and R.E. Packard, Phys. Rev. Lett. 33, 280 (1974); 33, 459 (1978);
E.J. Yarmchuk, M.J.V. Gordon, and R.E. Packard, Phys. Rev. Lett. 43, 214 (1979);
E.J. Yarmchuk and R.E. Packard, J. Low Temp. Phys. 46, 479 (1982).
K.W. Madison, F. Chevy, W. Wohlleben, and J. Dalibard, Phys. Rev. Lett. 84, 806 (2000);
F. Chevy, K.W. Madison, and J. Dalibard, Phys. Rev. Lett. 85, 2223 (2000);
K.W. Madison, F. Chevy, V. Bretin, and J. Dalibard, Phys. Rev. Lett. 86, 4443 (2001).
J.R. Abo-Shaeer, C. Raman, J.M. Vogels, and W. Ketterle, Science 292, 476 (2001).
P.C. Haljan, I. Coddington, P. Engels, and E.A. Cornell, Phys. Rev. Lett. 87, 210403 (2001);
P. Engels, I. Coddington, P.C. Haljan, and E.A. Cornell, Phys. Rev. Lett. 89, 100403 (2002).
I. Coddington, P. Engels, V. Schweikhard, and E. Cornell, cond-mat/0305008.
V. Schweikhard, I. Coddington, P. Engels, V.P. Mogendorff, and E. Cornell, to be published.
V. Bretin, S. Stock, Y. Seurin, and J. Dalibard, cond-mat/0307464.
T.-L. Ho, Phys. Rev. Lett. 87, 060403 (2001).
G. Baym and C.J. Pethick, cond-mat/0308325.
A. Rozhkov and D. Stroud, Phys. Rev. B 54, R12697 (1996).
J. Sinova, C.B. Hanna, and A.H. MacDonald, Phys. Rev. Lett. 89, 030403 (2002).
N.K. Wilkin, J.M.F. Gunn, and R.A. Smith, Phys. Rev. Lett. 80, 2265 (1998);
N.K. Wilkin and J.M.F. Gunn, Phys. Rev. Lett. 84, 6 (2000);
N.R. Cooper, N.K. Wilkin, and J.M.F. Gunn, Phys. Rev. Lett. 87, 120405 (2001).
S. Viefers, T.H. Hansson, and S.M. Reimann, Phys. Rev. A 62, 053604 (2000).
N. Regnault and Th. Jolicoeur, cond-mat/0212477.
J.W. Reijnders, F.J.M. van Lankvelt, K. Schoutens, and N. Read, Phys. Rev. Lett. 89, 120401 (2002);
M.A. Cazalilla, cond-mat/0207715, where further references can be found.
U.R. Fischer and G. Baym, Phys. Rev. Lett. 90, 140402 (2003).
The interaction energy density of a rapidly rotating system is [17] gn̄²b/2, where n̄ is the mean density in a unit cell of the lattice, and b = ⟨n²⟩/n̄². The specific value of b depends on the details of the density distribution in the cell. At slow rotation, b = 1 + Ω/gn̄ + ···. In the mean field quantum Hall regime [9,11], b = (e² − 5)/4(e − 2)² = 1.158. We omit the bar over the mean density when it causes no confusion.
J.R. Anglin and M. Crescimanno, cond-mat/0210063.
V.K. Tkachenko, Zh. Eksp. Teor. Fiz. 49, 1875 (1965) [Sov. Phys. JETP 22, 1282 (1966)];
Zh. Eksp. Teor. Fiz. 50, 1573 (1966) [Sov. Phys. JETP 23, 1049 (1966)];
Zh. Eksp. Teor. Fiz. 56, 1763 (1969) [Sov. Phys. JETP 29, 245 (1969)].
G. Baym and E. Chandler, J. Low Temp. Phys. 50, 57 (1983); 62, 119 (1986).
G. Baym, Phys. Rev. Lett. 2003 (in press), cond-mat/0305294.
G. Baym, Phys. Rev. B 51, 11697 (1995).
S.A. Gifford and G. Baym, to be published.
G. Baym, to be published.
In a weakly interacting non-rotating homogeneous gas, s² = gn/m; however in a rotating system one must take into account the correction b to the interaction energy [17,18]. Neglect of the dependence of b on the mean density, a reasonable approximation, implies s² = gbn/m, the result we use here.
The transverse and longitudinal components of a vector X, e.g., ǫ, ζ, or v, are defined by X_T = Ω̂ · (k̂ × X) and X_L = k̂ · X.
M. Cozzini and S. Stringari, Phys. Rev. A 67, 041602 (2003).
S. Choi, L.O. Baksmaty, S.J. Woo, and N.P. Bigelow, cond-mat/0306549.
L.O. Baksmaty, S.J. Woo, S. Choi, and N.P. Bigelow, cond-mat/0307368.
M. Cozzini, L. Pitaevskii, and S. Stringari, to be published.
For measurements of the single particle density matrix in a Bose condensate, see I. Bloch, T. Hänsch, and T. Esslinger, Nature 403, 166 (2000).
E. Mueller, T.-L. Ho, G. Baym, and M. Ueda, to be published.
It should be borne in mind, however, that the vanishing of the correlation function ⟨e^{iΦ(r)} e^{−iΦ(r′)}⟩ for large R does not in itself indicate a loss of superfluidity. Rather, the condensate becomes fragmented. As Josephson stressed by analogy to a very floppy "slinky," such vanishing correlations are not the same as the fluctuations in the winding of the phase angle around a closed loop.
G. Baym, in Mathematical Methods in Solid State and Superfluid Theory, ed. by R.C. Clark and G.H. Derrick (Oliver and Boyd, Edinburgh, 1969), p. 121.
However, this result does not imply that the superfluid mass density measured dynamically, e.g., in a second sound experiment, vanishes, since in a system that is not Galilean invariant such a measurement of ρ_s is independent of a measurement via the moment of inertia.
Reinforcement Learning with Automated Auxiliary Loss Search
Tairan He
Shanghai Jiao Tong University
Yuge Zhang
Microsoft Research Asia
Kan Ren
Microsoft Research Asia
Minghuan Liu
Shanghai Jiao Tong University
Che Wang
New York University
Weinan Zhang
Shanghai Jiao Tong University
Yuqing Yang
Microsoft Research Asia
Dongsheng Li
Microsoft Research Asia
A good state representation is crucial to solving complicated reinforcement learning (RL) challenges. Many recent works focus on designing auxiliary losses for learning informative representations. Unfortunately, these handcrafted objectives rely heavily on expert knowledge and may be sub-optimal. In this paper, we propose a principled and universal method for learning better representations with auxiliary loss functions, named Automated Auxiliary Loss Search (A2LS), which automatically searches for top-performing auxiliary loss functions for RL. Specifically, based on the collected trajectory data, we define a general auxiliary loss space of size 7.5 × 10^20 and explore the space with an efficient evolutionary search strategy. Empirical results show that the discovered auxiliary loss (namely, A2-winner) significantly improves the performance on both high-dimensional (image) and low-dimensional (vector) unseen tasks with much higher efficiency, showing promising generalization ability to different settings and even different benchmark domains. We conduct a statistical analysis to reveal the relations between patterns of auxiliary losses and RL performance. The codes and supplementary materials are available at https://seqml.github.io/a2ls.
Introduction
Reinforcement learning (RL) has achieved remarkable progress in games [31,47,50], financial trading [8] and robotics [13]. However, at its core, without designs tailored to specific tasks, general RL paradigms still learn implicit representations from the critic loss (value prediction) and the actor loss (maximizing cumulative reward). In many real-world scenarios where observations are complicated (e.g., images) or incomplete (e.g., partially observable), training an agent that is able to extract informative signals from those inputs becomes incredibly sample-inefficient. Therefore, many recent works have been devoted to obtaining a good state representation, which is believed to be one of the key solutions to improving the efficacy of RL [23,24].

One of the main streams is adding auxiliary losses to update the state encoder. Under the hood, this approach resorts to informative and dense learning signals in order to encode various kinds of prior knowledge and regularization [40] and obtain better latent representations. Over the years, a series of works have attempted to figure out the form of the most helpful auxiliary loss for RL. Quite a few advances have been made, including observation reconstruction [51], reward prediction [20], environment dynamics prediction [40,6,35], etc. But we note two problems in this evolving process: (i) each of the loss designs listed above was obtained through empirical trial and error based on expert designs, thus heavily relying on human labor and expertise; (ii) few works have used the final performance of RL as an optimization objective to directly search the auxiliary loss, indicating that these designs could be sub-optimal. To resolve these issues with existing handcrafted solutions, we automate the process of designing the auxiliary loss functions of RL and propose a principled solution named Automated Auxiliary Loss Search (A2LS).
A2LS formulates the problem as a bi-level optimization in which we try to find the auxiliary loss that, to the greatest extent, helps train a good RL agent. The outer loop searches for auxiliary losses based on RL performance to ensure the searched losses align with the RL objective, while the inner loop performs RL training with the searched auxiliary loss function. Specifically, A2LS utilizes an evolutionary strategy to search the configuration of auxiliary losses over a novel search space of size 7.5 × 10^20 that covers many existing solutions. By searching on a small set of simulated training environments of continuous control from the DeepMind Control Suite (DMC) [43], A2LS finalizes a loss, namely A2-winner.
To evaluate the generalizability of the discovered auxiliary loss A2-winner, we test A2-winner on a wide set of test environments, including both image-based and vector-based (with proprioceptive features like positions, velocities and accelerations as inputs) tasks. Extensive experiments show the searched loss function is highly effective and largely outperforms strong baseline methods. More importantly, the searched auxiliary loss generalizes well to unseen settings such as (i) different robots of control; (ii) different data types of observation; (iii) partially observable settings; (iv) different network architectures; and (v) even to a totally different discrete control domain (Atari 2600 games [1]). In the end, we make detailed statistical analyses on the relation between RL performance and patterns of auxiliary losses based on the data of whole evolutionary search process, providing useful insights on future studies of auxiliary loss designs and representation learning in RL.
Problem Formulation and Background
We consider the standard Markov Decision Process (MDP) E where the state, action and reward at time step t are denoted (s_t, a_t, r_t). The sequence of rollout data sampled by the agent in the episodic environment is (s_0, . . . , s_t, a_t, r_t, s_{t+1}, · · · , s_T), where T is the episode length. Suppose the RL agent is parameterized by ω (either the policy π or the state-action value function Q), with a state encoder g_θ parameterized by θ ⊆ ω, which plays a key role in representation learning for RL. The agent is required to maximize its cumulative reward in environment E by optimizing ω, denoted R(ω; E) = E_π[Σ_{t=0}^{T−1} r_t].
In this paper, we aim to find the optimal auxiliary loss function L Aux such that the agent can reach the best performance by optimizing ω under a combination of an arbitrary RL loss function L RL together with an auxiliary loss L Aux . Formally, our optimization goal is:
max_{L_Aux} R( arg min_ω [ L_RL(ω; E) + λ L_Aux(θ; E) ]; E ),  (1)
where λ is a hyper-parameter balancing the relative weight of the auxiliary loss. The left part (inner loop) of Figure 1 illustrates how data and gradients flow in RL training when an auxiliary loss is enabled. Some instances of L_RL and L_Aux are given in Appendix B. Unfortunately, existing auxiliary losses L_Aux are handcrafted, which heavily rely on expert knowledge, and may not generalize well in different scenarios, as shown in the experiment part.

Table 1: Existing handcrafted auxiliary losses, decomposed into operator, horizon, source, and target.

Auxiliary loss             Operator   Horizon   Source                                 Target
Reward prediction [20,6]   MSE        1         {s_t, a_t}                             {r_t}
Action inference [40,6]    MSE        1         {s_t, s_{t+1}}                         {a_t}
CURL [23]                  Bilinear   1         {s_t}                                  {s_t}
ATC [42]                   Bilinear   k         {s_t}                                  {s_{t+1}, · · · , s_{t+k}}
SPR [39]                   N-MSE      k         {s_t, a_t, a_{t+1}, · · · , a_{t+k−1}} {s_{t+1}, · · · , s_{t+k}}
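To make the structure of Equation (1) concrete, here is a toy sketch of the inner-loop minimization, with scalar quadratic losses standing in for L_RL and L_Aux (all functions and values below are illustrative, not the paper's actual objectives):

```python
# Schematic inner loop of Eq. (1): the agent parameters omega are trained on
# L_RL + lambda * L_Aux. Toy quadratics stand in for the real losses.
def l_rl(omega):       # stand-in RL loss
    return (omega - 3.0) ** 2

def l_aux(omega):      # stand-in auxiliary loss on the shared encoder params
    return (omega - 1.0) ** 2

def grad(f, x, eps=1e-6):
    """Central-difference numerical gradient."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

lam, omega, lr = 0.5, 0.0, 0.1
for _ in range(500):
    omega -= lr * (grad(l_rl, omega) + lam * grad(l_aux, omega))

# Minimizer of (w - 3)^2 + 0.5 * (w - 1)^2 is 7/3
print(round(omega, 3))  # -> 2.333
```

The outer loop would then score this inner-loop result (here, via R) for each candidate auxiliary loss.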
Automated Auxiliary Loss Search
To meet our goal of finding top-performing auxiliary loss functions without expert assignment, we turn to the help of automated loss search, which has shown promising results in the automated machine learning (AutoML) community [27,28,48]. Correspondingly, we propose Automated Auxiliary Loss Search (A2LS), a principled solution for resolving the above bi-level optimization problem in Equation 1. A2LS resolves the inner problem as a standard RL training procedure; for the outer one, A2LS defines a finite and discrete search space (Section 3.1), and designs a novel evolution strategy to efficiently explore the space (Section 3.2).
Search Space Design
We have argued that almost all existing auxiliary losses require expert knowledge, and we expect to search for a better one automatically. To this end, it is clear that we should design a search space that satisfies the following desiderata.
• Generalization: the search space should cover most of the existing handcrafted auxiliary losses to ensure the searched results can be no worse than handcrafted losses; • Atomicity: the search space should be composed of several independent dimensions to fit into any general search algorithm [30] and support an efficient search scheme; • Sufficiency: the search space should be large enough to contain the top-performing solutions.
Given the criteria, we conclude and list some existing auxiliary losses in Table 1 and find their commonalities, as well as differences. We realize that these losses share similar components and computation flow. As shown in Figure 2, when training the RL agent, the loss first selects a sequence {s_t, a_t, r_t}_{t=i}^{i+k} from the replay buffer, where k is called the horizon. The agent then tries to predict some elements in the sequence (called the target) based on another picked set of elements from the sequence (called the source). Finally, the loss calculates and minimizes the prediction error (rigorously defined with the operator). To be more specific, the encoder part g_θ of the agent first encodes the source into latent representations, which are further fed into a predictor h to get a prediction y; the auxiliary loss is computed from the prediction y and the target ŷ that is translated from the target by a target encoder g_θ̄, using an operator f. The target encoder is updated in a momentum manner as shown in Figure 2 (details are given in Appendix C.1.2). Formally,
L_Aux(θ; E) = f( h(g_θ(seq_source)), g_θ̄(seq_target) ),  (2)
where seq_source, seq_target ⊆ {s_t, a_t, r_t}_{t=i}^{i+k} are both subsets of the candidate sequence. For simplicity, we write g_θ(s_t, a_t, r_t, s_{t+1}, · · · ) as short for [g_θ(s_t), a_t, r_t, g_θ(s_{t+1}), · · · ] for the rest of this paper (the encoder g only deals with states {s_i}). Thereafter, we observe that these existing auxiliary losses differ in two dimensions, i.e., input elements and operator, where input elements are further composed of horizon, source and target. These differences form the search dimensions of the whole space. We then illustrate the search ranges of these dimensions in detail.
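The momentum update of the target encoder g_θ̄ is typically an exponential moving average of the online encoder parameters; a minimal sketch (the momentum values used here are illustrative, not the paper's setting):

```python
# Exponential-moving-average update for the target encoder g_theta_bar:
#   theta_bar <- m * theta_bar + (1 - m) * theta
def ema_update(theta, theta_bar, m=0.99):
    return [m * tb + (1 - m) * t for t, tb in zip(theta, theta_bar)]

theta, theta_bar = [1.0, 2.0], [0.0, 0.0]
for _ in range(3):
    theta_bar = ema_update(theta, theta_bar, m=0.5)
print(theta_bar)  # -> [0.875, 1.75]: target params drift toward online params
```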
Input elements. The input elements denote all inputs to the loss functions, which can be further disassembled into horizon, source and target. Different from previous automated loss search works, the target here is not "ground-truth", because auxiliary losses in RL have no labels beforehand. Instead, both source and target are generated via interacting with the environment in a self-supervised manner. Particularly, the input elements first determine a candidate sequence {s_t, a_t, r_t}_{t=i}^{i+k} with horizon k, then choose two subsets of the candidate sequence as source and target respectively. For example, the subsets can be {s_t}, {s_t, s_{t+1}}, or {s_t, r_{t+1}, a_{t+2}}, {s_t, s_{t+1}, a_{t+1}}, etc.

Figure 2: Overview of the search space {I, f} and the computation graph of auxiliary loss functions. I selects a candidate sequence {s_t, a_t, r_t}_{t=i}^{i+k} with horizon k, then determines a source and a target as arbitrary subsets of the sequence. An encoder g_θ first encodes the source into latent representations, which are fed into a predictor h to get a prediction y; the auxiliary loss is computed over the prediction y and the ground truth ŷ that is translated from the target by a target encoder g_θ̄, using an operator f.
Operator. Given a prediction y and its targetŷ, the auxiliary loss is computed by an operator f , which is often a similarity measure. In our work, we cover all different operators f used by the previous works, including inner product (Inner) [17,42], bilinear inner product (Bilinear) [23], cosine similarity (Cosine) [3], mean squared error (MSE) [35,6] and normalized mean squared error (N-MSE) [39]. Additionally, other works also utilize contrastive objectives, e.g., InfoNCE loss [33], incorporating the trick to sample un-paired predictions and targets as negative samples and maximize the distances between them. This technique is orthogonal to the five similarity measures mentioned above, so we make it optional and create 5 × 2 = 10 different operators in total.
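The similarity measures above can be sketched in a few lines of numpy; these are schematic versions, and the exact normalizations (and the bilinear weight W) in the cited papers may differ:

```python
import numpy as np

# Schematic loss-style versions of the five operators f(y, y_hat):
# lower values mean the prediction and target are more similar.
def mse(y, t):
    return float(np.mean((y - t) ** 2))

def n_mse(y, t):  # MSE between L2-normalized vectors
    yn = y / np.linalg.norm(y); tn = t / np.linalg.norm(t)
    return float(np.mean((yn - tn) ** 2))

def inner_loss(y, t):  # negated inner product (maximize similarity)
    return -float(np.dot(y, t))

def bilinear_loss(y, t, W):  # negated bilinear form y^T W t
    return -float(y @ W @ t)

def cosine_loss(y, t):
    return 1.0 - float(np.dot(y, t) / (np.linalg.norm(y) * np.linalg.norm(t)))

y = np.array([1.0, 2.0, 3.0])
print(mse(y, y), cosine_loss(y, y))  # both ~0 for identical vectors
```

The contrastive (InfoNCE) variants would additionally sample unpaired (y, ŷ) combinations as negatives and push their similarities down.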
Final design. In light of the preceding discussion, with the definition of input elements and operator, we finish the design of the search space, which satisfactorily meets the desiderata mentioned above. Specifically, the space generalizes to cover most of the existing handcrafted auxiliary losses; additionally, its atomicity is embodied in the compositionality that all input elements work with any operator; most importantly, the search space is sufficiently large, with a total size of 7.5 × 10^20 (a detailed calculation can be found in Appendix E), to contain better solutions.
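One way to arrive at the stated size, assuming the counting in Appendix E follows the components described here: a candidate sequence with horizon k has 3(k+1) elements, source and target are each one of 2^{3k+3} subsets, horizons range over 1 ≤ k ≤ 10, and there are 10 operators (this raw count includes mask pairs the rejection protocol would later filter out):

```python
# Count (source, target) mask pairs over horizons 1..10, times 10 operators.
total = sum((2 ** (3 * k + 3)) ** 2 for k in range(1, 11)) * 10
print(f"{total:.2e}")  # ~7.5e20, matching the size quoted in the text
```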
Search Strategy
The success of evolution strategies in exploring large, multi-dimensional search spaces has been proven in many works [19,4]. Similarly, A2LS adopts an evolutionary algorithm [37] to search for top-performing auxiliary loss functions over the designed search space. In essence, the evolutionary algorithm (i) keeps a population of loss function candidates; (ii) evaluates their performance; (iii) eliminates the worst and evolves into a new, better population. Note that the evaluation in step (ii) is very costly because it requires training RL agents with dozens of different auxiliary loss functions. Therefore, our key technical contributions concern how to further reduce the search cost (Section 3.2.1) and how to make the search procedure efficient (Section 3.2.2).
Search Space Pruning
In our preliminary experiments, we find that the operator dimension of the search space can be simplified. In particular, MSE outperforms all the other operators by significant gaps in most cases, so we prune all operator choices except MSE. See Appendix D.1 for complete comparative results and an ablation study on the effectiveness of search space pruning.
Evolution Procedure
Our evolution procedure roughly contains four important components: (i) evaluation and selection: a population of candidate auxiliary losses is evaluated through an inner loop of RL training, then we select the top candidates for the next evolution stage (i.e., generation); (ii) mutation: the selected candidates mutate to form a new population and move to the next stage; (iii) loss rejection: filter out and skip evaluating invalid auxiliary losses for the next stage; and (iv) bootstrapping initial population: assign more chance to initial auxiliary losses that may contain useful patterns by prior knowledge for higher efficiency. The step-by-step evolution algorithm is provided in Algorithm 1 in the appendix, and an overview of the A2LS pipeline is illustrated in Figure 1. We next describe them in detail.
Evaluation and selection. At each evolution stage, we first train a population of candidates with a population size P = 100 by the inner loop of RL training. The candidates are then sorted by computing the approximated area under learning curve (AULC) [11,41], which is a single metric reflecting both the convergence speed and the final performance [46] with low variance of results. After each training stage, the top-25% candidates are selected to generate the population for the next stage. We include an ablation study on the effectiveness of AULC in Appendix D.3.
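The AULC-based ranking and top-25% selection can be sketched as follows (checkpoint averaging follows the approximation used for Figure 4; the candidate names and scores are made up):

```python
# Sketch of AULC scoring and top-25% selection.
def aulc(checkpoint_scores):
    """Approximate area under the learning curve as the mean of the
    evaluation scores at fixed checkpoints (e.g., 100k..500k steps)."""
    return sum(checkpoint_scores) / len(checkpoint_scores)

def select_top_quarter(population):
    """population: list of (loss_candidate, [checkpoint scores])."""
    ranked = sorted(population, key=lambda p: aulc(p[1]), reverse=True)
    return [cand for cand, _ in ranked[: max(1, len(ranked) // 4)]]

pop = [("loss_a", [100, 200, 300, 400, 500]),
       ("loss_b", [50, 60, 70, 80, 90]),
       ("loss_c", [300, 350, 400, 450, 500]),
       ("loss_d", [10, 20, 30, 40, 50])]
print(select_top_quarter(pop))  # -> ['loss_c']
```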
Mutation. To obtain a new population of auxiliary loss functions, we propose a novel mutation strategy. First, we represent both the source and the target of the input elements as a pair of binary masks, where each bit of the mask marks an element as selected (1) or not selected (0). For instance, given a candidate sequence {s_t, a_t, r_t, s_{t+1}, a_{t+1}, r_{t+1}}, the binary mask of the subset {s_t, a_t, r_{t+1}} is denoted 110001. Afterward, we adopt four types of mutations, also shown in Figure 3: (i) replacement (50% of the population): flip each bit of the given binary mask with probability p = 1/(2·(3k+3)), where k is the horizon length; (ii) crossover (20%): generate a new candidate by randomly combining the mask bits of two candidates with the same horizon length in the population; (iii) horizon decrease and horizon increase (10%): delete binary mask bits from, or append new ones to, the tail; (iv) random generation (20%): every bit of the binary mask is drawn from a Bernoulli distribution B(0.5).
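A sketch of the four mutation operators acting on binary masks (how source/target mask pairs are stored and sampled is my own simplification of the description above):

```python
import random

# A mask has 3*(k+1) bits for horizon k (one bit per {s, a, r} per step).
def replacement(mask, k, rng):
    p = 1.0 / (2 * (3 * k + 3))           # per-bit flip probability
    return [b ^ (rng.random() < p) for b in mask]

def crossover(mask_a, mask_b, rng):       # parents share the same horizon
    return [a if rng.random() < 0.5 else b for a, b in zip(mask_a, mask_b)]

def change_horizon(mask, delta, rng):     # +1 appends a step, -1 drops one
    if delta > 0:
        return mask + [rng.random() < 0.5 for _ in range(3)]
    return mask[:-3]

def random_mask(k, rng):                  # Bernoulli(0.5) per bit
    return [rng.random() < 0.5 for _ in range(3 * (k + 1))]

rng = random.Random(0)
m = random_mask(2, rng)                   # horizon k=2 -> 9 bits
print(len(m), len(change_horizon(m, +1, rng)), len(change_horizon(m, -1, rng)))
# -> 9 12 6
```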
Loss rejection protocol. Since the auxiliary loss needs to be differentiable with respect to the parameters of the state encoder, we perform a gradient flow check on randomly generated loss functions during evolution and skip evaluating invalid auxiliary losses. Concretely, the following conditions must be satisfied to make a valid loss function: (i) having at least one state element in seq source to make sure the gradient of auxiliary loss can propagate back to the state encoder; (ii) seq target is not empty; (iii) the horizon should be within a reasonable range (1 ≤ k ≤ 10 in our experiments). If a loss is rejected, we repeat the mutation to fill the population.
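The rejection conditions translate into a simple validity check on the mask pair (the convention that states occupy every third mask position is an assumption of this sketch):

```python
# Validity check from the loss rejection protocol. Masks are bit lists over
# the sequence (s_i, a_i, r_i, ..., s_{i+k}, a_{i+k}, r_{i+k}), so state
# bits sit at positions 0, 3, 6, ...
def is_valid(source_mask, target_mask, k):
    if not (1 <= k <= 10):
        return False                      # horizon out of range
    if not any(target_mask):
        return False                      # empty target
    # gradient must reach the encoder: at least one state in the source
    return any(source_mask[i] for i in range(0, len(source_mask), 3))

# horizon k=1 -> 6-bit masks over (s_t, a_t, r_t, s_{t+1}, a_{t+1}, r_{t+1})
print(is_valid([1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0], 1))  # True
print(is_valid([0, 1, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0], 1))  # False: no state
```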
Bootstrapping initial population. To improve the computational efficiency so that the algorithm can find reasonable loss functions quickly, we incorporate prior knowledge into the initialization of the search. Particularly, before the first stage of evolution, we bootstrap the initial population with a prior distribution that assigns high probability to auxiliary loss functions containing useful patterns like dynamics and reward prediction. More implementation details are provided in Appendix C.3.
Evolution and Searched Results
Figure 4: Evolution process in the three training (image-based) environments: Cheetah-Run, Reacher-Easy, and Walker-Walk. Every white dot represents a candidate auxiliary loss, and the y-axis shows its approximated AULC score [11,41]. The horizontal lines show the scores of the baselines. The AULC score is approximated by the average evaluation score at 100k, 200k, 300k, 400k, 500k time steps.
As mentioned in Section 1, we expect to find auxiliary losses that align with the RL objective and generalize well to unseen test environments. To do so, we use A2LS to search over a small set of training environments, and then test the searched results on a wide range of test environments. In this section, we first introduce the evolution on training environments and search results.
Evolution on Training Environments
The training environments are chosen as three image-based (observations for agents are images) continuous control tasks in DMC benchmark [43], Cheetah-Run, Reacher-Easy, and Walker-Walk.
For each environment, we set the total budget to 16k GPU hours (on NVIDIA P100) and terminate the search when the resource is exhausted. Due to computational cost, we only run one seed for each inner-loop RL training, but we guard against such randomness by cross validation (see Section 4.2). We use the same network architecture and hyperparameter configuration as CURL [23] (see Appendix C.4.1 for details) to train the RL agents. To evaluate the population during evolution, we compare A2LS against SAC, SAC-wo-aug, and CURL, where we randomly crop images from 100 × 100 to 84 × 84 as data augmentation (the same technique used in CURL [23]) for all methods except SAC-wo-aug. The whole evolution process on the three environments is shown in Figure 4. Even in the early stages (e.g., stage 1), some of the auxiliary loss candidates already surpass the baselines, indicating the high potential of automated loss search. The overall AULC scores of the population continue to improve as more stages complete (detailed numbers are summarized in Appendix D.10). Judging from the trend, we believe the performance could improve even further with a larger budget.
Searched Results: A2-winner
Figure 5: Cross validation on the image-based training environments (Cheetah-Run, Reacher-Easy, Walker-Walk); the y-axis shows the normalized AULC score (w.r.t. the lowest score).
Although some candidates in the population achieved remarkable AULC scores during evolution (Figure 4), they were only evaluated with one random seed in one environment, leaving their robustness in question. To ensure that we find a consistently useful auxiliary loss, we conduct a cross validation. We first choose the top 5 candidates of stage 5 of the evolution on Cheetah-Run (the top candidates from the whole evolution procedure are provided in Appendix F). For each of the five candidates, we repeat the RL training on all three training environments, with results shown in Figure 5. Finally, we mark the best of the five (green bar in Figure 5) as our final searched result. We call it A2-winner, which has the following form:
$\mathcal{L}_{\text{Aux}}(\theta; \mathcal{E}) = \big\| h\big(g_\theta(s_{t+1}, a_{t+1}, a_{t+2}, a_{t+3})\big) - g_{\bar\theta}(r_t, r_{t+1}, s_{t+2}, s_{t+3}) \big\|^2. \quad (3)$
Generalization Experiments
To verify the effectiveness of the searched results, we conduct various generalization experiments on a wide range of test environments in depth. Implementation details and more ablation studies are given in Appendix C and Appendix D.
Generalize to unseen image-based tasks. We first investigate the generalizability of A2-winner to unseen image-based tasks by training agents with A2-winner on common DMC tasks and comparing against model-based and model-free baselines that use different auxiliary loss functions (see Appendix C.5 for details about the baseline methods). The results are summarized in Table 2, where A2-winner greatly outperforms the other baseline methods on most tasks, including unseen test environments. This implies that A2-winner is a robust and effective auxiliary loss for image-based continuous control, improving both efficiency and final performance. We also evaluate A2-winner on the discrete-control tasks of the Atari benchmark [1]; the results are summarized in Table 3, where A2-winner outperforms all baselines, showing strong evidence of the generalization and potential usages of A2-winner. Note that the base RL algorithm used in Atari is a value-based method, indicating that A2-winner generalizes well to both value-based and policy-based RL algorithms.
Generalize to different observation types. To see whether A2-winner (searched in image-based environments) is able to generalize to environments with different observation types, we test A2-winner on vector-based tasks of DMC (inputs for RL agents are proprioceptive features such as positions, velocities and accelerations) and list the results in Table 4. Concretely, we compare A2-winner with SAC-Identity, SAC and CURL, where SAC-Identity has no state encoder while the others share the same state-encoder architecture (see Appendix C.1.1 and Appendix D.6 for detailed implementations). A2-winner still outperforms all baselines in 12 out of 18 environments, showing that A2-winner also benefits RL performance with vector-based observations. Moreover, the performance gain is particularly significant in more complex environments like Humanoid, where SAC barely learns anything at 1000k time steps. To gain a deeper understanding of this phenomenon, we additionally visualize the Q-loss landscape of both methods in Appendix D.7.

Generalize to different hypothesis spaces. The architecture of a neural network defines a hypothesis space of functions to be optimized. During the evolutionary search in Section 4.1, the encoder architecture was kept fixed as a 4-layer convolutional neural network. Since the encoder architecture may have a large impact on the RL training process [34,2], we test A2-winner with three encoders of different network depths. The result is shown in Figure 6. Even though the auxiliary loss was searched with a 4-layer encoder, the 6-layer convolutional encoder performs better in both environments. This shows that the auxiliary loss function of A2-winner can improve RL performance with a deeper, more expressive image encoder. Moreover, the ranking of RL performance (6-layer > 4-layer > 2-layer) is consistent across the two environments.
This shows that the auxiliary loss function of A2-winner does not overfit one specific encoder architecture.

Generalize to partially observable scenarios. Claiming the generality of a method based only on conclusions drawn from fully observable environments like DMC is risky. Therefore, we conduct an ablation study in the Partially Observable Markov Decision Process (POMDP) setting to see whether A2-winner also performs well there. We randomly mask 20% of the state dimensions (e.g., 15 dimensions -> 12 dimensions) to form POMDP environments in DMC. As demonstrated in Figure 7, A2-winner consistently outperforms CURL and SAC-DenseMLP in the POMDP setting on Hopper-Hop and Cheetah-Run, showing that A2-winner is effective not only in fully observable but also in partially observable environments.
To search or not? As shown above, the searched result A2-winner generalizes well to many different settings. A natural question, however, is: for a new type of domain, why not perform a new evolutionary search instead of simply reusing the previously searched result? To compare these two options, we conduct another evolutionary search similar to Section 4.1 from scratch, replacing the three image-based tasks with three vector-based ones (marked by † in Table 4). More details are summarized in Appendix D.5. We name the searched result "A2-winner-v". As shown in Table 4, A2-winner-v is a very strong loss for vector-based tasks, even stronger than A2-winner: it outperforms baselines in 16 out of 18 environments (with 15 unseen test environments), while A2-winner does so in 12 out of 18. However, please note that this additional search costs another 5k GPU hours (on NVIDIA P100).
Analysis of Auxiliary Loss Functions
In this section, we analyze all the loss functions we have evaluated during the evolution procedure as a whole dataset in order to gain some insights into the role of auxiliary loss in RL performance. By doing so, we hope to shed light on future auxiliary loss designs. We will also release this "dataset" publicly to facilitate future research.
Typical patterns. We say that an auxiliary loss candidate has a certain pattern if the pattern's source is a subset of the candidate's source, and the pattern's target is a subset of the candidate's target. For instance, a loss candidate of {s t , a t } → {s t+1 , s t+2 } has the pattern {s t , a t } → {s t+1 }, and does not have the pattern {a t , s t+1 } → {s t }. We then try to analyze whether a certain pattern is helpful to representation learning in RL in expectation.
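The subset test above is a one-liner over element sets (element names are illustrative strings, not the paper's code):

```python
def has_pattern(candidate, pattern):
    """A candidate has a pattern iff the pattern's source and target
    element sets are subsets of the candidate's."""
    (c_src, c_tgt), (p_src, p_tgt) = candidate, pattern
    return set(p_src) <= set(c_src) and set(p_tgt) <= set(c_tgt)

# The example from the text: {s_t, a_t} -> {s_t+1, s_t+2}
candidate = ({"s_t", "a_t"}, {"s_t+1", "s_t+2"})
forward_dynamics = ({"s_t", "a_t"}, {"s_t+1"})
inverse_dynamics = ({"a_t", "s_t+1"}, {"s_t"})
```

As stated above, this candidate has the forward-dynamics pattern but not the inverse-dynamics pattern.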
Specifically, we analyze the following patterns: (i) forward dynamics {s t , a t } → {s t+1 }; (ii) inverse dynamics {a t , s t+1 } → {s t }; (iii) reward prediction {s t , a t } → {r t }; (iv) action inference {s t , s t+1 } → {a t } and (v) state reconstruction in the latent space {s t } → {s t }. For each of these patterns, we categorize all the loss functions we have evaluated into (i) with or (ii) without this pattern. We then calculate the average RL performances of these two categories, summarized in Table 5. Some interesting observations are as follows.
(i) Forward dynamics is helpful in most tasks and improves RL performance on Reacher-Easy (image) and Cheetah-Run (vector) significantly (p-value<0.05).
(ii) State reconstruction in the latent space improves RL performance in image-based tasks but undermines vector-based tasks. The improvements in image-based tasks could be attributed to the combination of augmentation techniques, which, combined with reconstruction loss, enforces the extraction of meaningful features. In contrast, no augmentation is used in the vector-based setting, and thus the encoder learns no useful representations. This also explains why CURL performs poorly in vector-based experiments.
(iii) In the vector-based setting, some typical human-designed patterns (e.g., reward prediction, inverse dynamics, and action inference) can be very detrimental to RL performance, implying that some renowned techniques in loss designs might not work well under atypical settings.
Number of Sources and Targets. We further investigate whether it is more beneficial to use a small number of sources to predict a large number of targets (n target > n source , e.g., using s t to predict s t+1 , s t+2 , s t+3 ), or the other way around (n target < n source , e.g., using s t , s t+1 , s t+2 to predict s t+3 ). Statistical results are shown in Table 5, where we find that auxiliary losses with more states on the target side have a significant advantage over losses with more states on the source side. This result echoes recent works [42,39]: predicting more states leads to strong performance gains.
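Assuming state bits sit at indices 0, 3, 6, … of the binary masks (our assumption about the encoding), the n_source versus n_target comparison amounts to counting state bits on each side:

```python
def n_states(mask):
    """Count the state bits in a mask (indices 0, 3, 6, ... under the assumed layout)."""
    return sum(mask[i] for i in range(0, len(mask), 3))

# {s_t} -> {s_t+1, s_t+2, s_t+3}: more states on the target side
src = [1, 0, 0] + [0, 0, 0] * 3
tgt = [0, 0, 0] + [1, 0, 0] * 3
```

For this candidate, n_target = 3 exceeds n_source = 1, placing it in the category that the analysis finds advantageous.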
Related Work
Reinforcement Learning with Auxiliary Losses. The use of auxiliary tasks to learn better state representations and improve the sample efficiency of RL agents, especially on image-based tasks, has been explored in many recent works. A number of manually designed auxiliary objectives have been shown to boost RL performance, including observation reconstruction [51], reward prediction [20], dynamics prediction [6], and contrastive learning objectives [23,39,42]. Notably, most of these works focus on image-based settings, and only a limited number study the vector-based setting [32,35]. Although one may expect vector-based settings to benefit less from auxiliary tasks due to their lower-dimensional state space, we show that there is still much potential for improving their performance with better learned representations.
Compared to previous works, we point out two major advantages of our approach. (i) Instead of handcrafting an auxiliary loss with expert knowledge, A2LS automatically searches for the best auxiliary loss, relieving researchers of such tedious work. (ii) A2LS is a principled approach that can be used in arbitrary RL settings. We discover auxiliary losses that bring significant performance improvement in image-based and the rarely studied vector-based settings.

Automated Loss Design. In the AutoML community, it has become a trend to design loss functions that outperform traditional, handcrafted ones. For computer vision tasks, AM-LFS [27] defines the loss-function search space as a parameterized probability distribution over the hyper-parameters of the softmax loss. A recent work, AutoLoss-Zero [28], proposes to search loss functions with primitive mathematical operators.
For RL, existing works focus on searching for a better RL objective: EPG [19] and MetaGenRL [21] define the search space of loss functions as the parameters of a low-complexity neural network. Recently, [4] defines the search space of RL loss functions as a directed acyclic graph and discovers two DQN-like regularized RL losses. Note that none of these works investigates auxiliary loss functions, which are crucial to facilitate representation learning in RL and to make RL successful in highly complex environments. To the best of our knowledge, our work is the first attempt to search for auxiliary loss functions that significantly improve RL performance.
Conclusion and Future Work
We present A2LS, a principled and universal framework for automated auxiliary loss design for RL. By searching on training environments with this framework, we discover a top-performing auxiliary loss function, A2-winner, that generalizes well to a diverse set of test environments. Furthermore, we present an in-depth investigation of the statistical relations between auxiliary loss patterns and RL performance. We hope our studies provide insights that deepen the understanding of auxiliary losses in RL and shed light on how to make RL more efficient and practical. A limitation of our current work is the expensive computational cost of searching. In the future, we plan to incorporate more delicate information, such as higher-order information [12] of the inner-loop RL training procedure, to derive more efficient auxiliary loss search methods.
References
B Examples of Loss Functions
We show examples of existing L RL and L Aux below.
RL loss instances. RL losses are the basic objectives for solving RL problems. For example, when solving discrete control tasks, Deep Q-Networks (DQN) [31] only fit the Q function, where $\mathcal{L}_{\text{RL}}$ minimizes the error between $Q_\omega$ and $Q_{\bar\omega}$ (the target Q network):
$\mathcal{L}_{\text{RL}} = \mathcal{L}_{\text{RL},Q}(\omega; \mathcal{E}) = \mathbb{E}_{s,r \sim \mathcal{E},\, a \sim \pi}\Big[\big(Q_\omega(s_t, a_t) - (r_t + \gamma \max_{a} Q_{\bar\omega}(s_{t+1}, a))\big)^2\Big] \quad (4)$
However, for continuous action spaces, the agent must also optimize a policy alongside the Q loss in eq. (4). For instance, Soft Actor-Critic (SAC) [14] additionally optimizes the policy via a policy-gradient-style objective:
$\mathcal{L}_{\text{RL}} = \mathcal{L}_{\text{RL},Q} + \mathcal{L}_{\text{RL},\pi}, \qquad \mathcal{L}_{\text{RL},\pi}(\omega; \mathcal{E}) = \mathbb{E}_{s,r \sim \mathcal{E},\, a \sim \pi}\Big[-\min_{i=1,2} Q_{\bar\omega_i}(s_t, a_t) + \alpha \log \pi_\omega(a_t | s_t)\Big] \quad (5)$
Auxiliary loss instances. Besides $\mathcal{L}_{\text{RL}}$, adding an auxiliary loss $\mathcal{L}_{\text{Aux}}$ helps learn informative state representations, improving learning efficiency and final performance. For example, the forward-dynamics auxiliary loss measures the mean squared error, in latent space, between the predicted and actual next state:
$\mathcal{L}_{\text{Aux}}(\theta; \mathcal{E}) = \big\| h(g_\theta(s_t), a_t) - g_{\bar\theta}(s_{t+1}) \big\|^2, \quad (6)$
where h denotes a predictor network. As another instance, Contrastive Unsupervised RL (CURL) [23] designs the auxiliary loss via contrastive similarity relations:
$\mathcal{L}_{\text{Aux}}(\theta; \mathcal{E}) = -\log \dfrac{\exp\big(g_\theta(s_t)^{\top} W g_{\bar\theta}(s_{t+})\big)}{\exp\big(g_\theta(s_t)^{\top} W g_{\bar\theta}(s_{t+})\big) + \sum_{i=0}^{K-1} \exp\big(g_\theta(s_t)^{\top} W g_{\bar\theta}(s_i)\big)}, \quad (7)$
where $s_t$ and $s_{t+}$ are two different random augmentations of the same state, and W is a learned parameter matrix.
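As a concrete toy illustration of the forward-dynamics loss in eq. (6), here is a pure-Python sketch with stand-in linear maps for the encoders and predictor (these are illustrative placeholders, not the paper's networks):

```python
def mse(y, y_hat):
    """Squared error between two latent vectors (lists of floats)."""
    return sum((a - b) ** 2 for a, b in zip(y, y_hat))

def g(s):        # online encoder g_theta (illustrative: scale by 2)
    return [2.0 * x for x in s]

def g_bar(s):    # target encoder g_theta_bar (illustrative: same map)
    return [2.0 * x for x in s]

def h(z, a):     # predictor: latent state + action -> predicted next latent
    return [zi + ai for zi, ai in zip(z, a)]

s_t, a_t, s_t1 = [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]
loss = mse(h(g(s_t), a_t), g_bar(s_t1))   # || h(g(s_t), a_t) - g_bar(s_t+1) ||^2
```

In a real implementation the encoders would be neural networks and the gradient would flow only through g, not g_bar.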
C Implementation Details
C.1 Architecture
C.1.1 State Encoder Architectures
In Figure 8, we demonstrate the overall architecture when an auxiliary loss is used. The architecture is generally identical to the architectures adopted in CURL [23]. "Image-based" and "1-layer DenseMLP" are the architectures we use in our experiments; "MLP" and "4-layer DenseMLP" are for ablations. Ablation details are given in Appendix D.6.
Figure 8: Network structures of image-based RL and vector-based RL with auxiliary losses.
C.1.2 Siamese Network
For a fair comparison with baseline methods, we follow the same Siamese network structure for representation learning as CURL [23]. As shown in Figure 2, when computing targets $\hat y$ for auxiliary losses, we map states to state embeddings with a target encoder. We stop gradients through the target encoder $\bar\theta$ and update $\bar\theta$ in an exponential-moving-average (EMA) manner: $\bar\theta \leftarrow \tau \theta + (1 - \tau)\bar\theta$. This step, i.e., freezing the gradients of the target encoder, is necessary when the loss is computed without negative samples; otherwise, the encoders collapse to producing the same representation for any input. We verified this in our early experiments.
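The EMA update can be sketched element-wise (parameter lists and the value of tau are illustrative):

```python
def ema_update(theta, theta_bar, tau):
    """theta_bar <- tau * theta + (1 - tau) * theta_bar, element-wise.
    No gradient flows through theta_bar in the real implementation."""
    return [tau * p + (1.0 - tau) * q for p, q in zip(theta, theta_bar)]

theta = [1.0, 2.0]        # online encoder parameters (toy values)
theta_bar = [0.0, 0.0]    # target encoder parameters
theta_bar = ema_update(theta, theta_bar, tau=0.05)   # slow-moving copy
```

With a small tau, the target encoder changes slowly, which stabilizes the auxiliary targets.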
C.2 Loss Operators
Instance Discrimination. Our implementation is based on the InfoNCE loss [33]:
$\mathcal{L} = -\log \dfrac{\exp(\phi(y, \hat y))}{\exp(\phi(y, \hat y)) + \sum_{i=0}^{K-1} \exp(\phi(y, y_i))} \quad (8)$
The instance-discrimination loss can be interpreted as the log-loss of a K-way softmax classifier whose label is $\hat y$. The difference between discrimination-based loss operators lies in the objective $\phi$ used to measure agreement between $(y, \hat y)$ pairs. Inner employs the inner product $\phi(y, \hat y) = y^{\top} \hat y$, while Bilinear employs the bilinear product $\phi(y, \hat y) = y^{\top} W \hat y$, where W is a learnable parameter matrix. Cosine uses the cosine similarity $\phi(y, \hat y) = y^{\top} \hat y / (\|y\| \cdot \|\hat y\|)$. For losses implemented with cross-entropy but without negative samples, we take only the diagonal elements of the matrix M, where $M_{i,j} = \phi(y_j, \hat y_i)$, for the cross-entropy calculation.
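The three discrimination objectives can be sketched in plain Python (vectors as lists, W as a small matrix; all helper names are ours):

```python
import math

def inner(y, y_hat):
    """Inner product phi(y, y_hat) = y^T y_hat."""
    return sum(a * b for a, b in zip(y, y_hat))

def bilinear(y, y_hat, W):
    """Bilinear product phi(y, y_hat) = y^T W y_hat."""
    return sum(y[i] * W[i][j] * y_hat[j]
               for i in range(len(y)) for j in range(len(y_hat)))

def cosine(y, y_hat):
    """Cosine similarity phi(y, y_hat) = y^T y_hat / (||y|| * ||y_hat||)."""
    ny = math.sqrt(inner(y, y)) or 1.0
    nh = math.sqrt(inner(y_hat, y_hat)) or 1.0
    return inner(y, y_hat) / (ny * nh)

y, y_hat = [1.0, 0.0], [1.0, 1.0]
W = [[1.0, 0.0], [0.0, 1.0]]   # identity W makes bilinear coincide with inner
```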
Mean Squared Error
The implementation of the MSE-based loss operators is straightforward: the MSE loss operator is $\|y - \hat y\|^2$, while normalized MSE is $\big\| \frac{y}{\|y\|} - \frac{\hat y}{\|\hat y\|} \big\|^2$. When combined with negative samples, the MSE loss operator (with negative pairs) is $\|y - \hat y\|^2 - \|y - y_i\|^2$, while normalized MSE (with negative pairs) is $\big\| \frac{y}{\|y\|} - \frac{\hat y}{\|\hat y\|} \big\|^2 - \big\| \frac{y}{\|y\|} - \frac{y_i}{\|y_i\|} \big\|^2$.
C.3 Evolution Strategy
Horizon-changing Mutations. There are two kinds of mutations that change the horizon length. One decreases it: we remove the last time step, i.e., $(s_{t+k}, a_{t+k}, r_{t+k})$ when the current horizon length is k. The other increases it: we append three randomly generated bits to the end of the given masks. We do not shorten the horizon when it is already too small (less than 1) or lengthen it when it is too long (exceeding 10).
Mutating Source and Target Masks. When mutating a candidate, the mutations on the source and target masks are independent, except for horizon-changing mutations, where the two masks must both increase or both decrease the horizon.
Initialization. At each initialization, we randomly generate 75 auxiliary loss functions (every bit of both masks is drawn from Bernoulli(0.5)) and generate 25 auxiliary loss functions from a prior distribution, which gives the auxiliary losses features such as forward-dynamics prediction or reward prediction. The prior for generating the forward-dynamics pattern is: (i) every state bit of the source is drawn from Bernoulli(0.2); (ii) every action bit of the source is drawn from Bernoulli(0.8); (iii) every state bit of the target is generated by flipping the corresponding state bit of the source; (iv) all other bits are drawn from Bernoulli(0.5). The prior for generating the reward-prediction pattern is: (i) every reward bit of the target is drawn from Bernoulli(0.8); (ii) every state and action bit of the target is 0; (iii) all other bits are drawn from Bernoulli(0.5).
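A hedged sketch of bootstrapping one forward-dynamics-style candidate under the priors above (the bit layout — states at indices 0, 3, 6, …; actions at 1, 4, 7, … — is our assumption about the encoding):

```python
import random

def forward_dynamics_prior(k, rng):
    """Draw one (source, target) mask pair biased toward forward dynamics."""
    n = 3 * k + 3
    src, tgt = [0] * n, [0] * n
    for i in range(n):
        kind = i % 3                       # 0: state, 1: action, 2: reward
        if kind == 0:
            src[i] = int(rng.random() < 0.2)   # source states ~ Bernoulli(0.2)
            tgt[i] = 1 - src[i]                # target states flip source states
        elif kind == 1:
            src[i] = int(rng.random() < 0.8)   # source actions ~ Bernoulli(0.8)
            tgt[i] = int(rng.random() < 0.5)   # other bits ~ Bernoulli(0.5)
        else:
            src[i] = int(rng.random() < 0.5)
            tgt[i] = int(rng.random() < 0.5)
    return src, tgt

rng = random.Random(0)
src, tgt = forward_dynamics_prior(2, rng)   # horizon k = 2 -> masks of length 9
```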
C.4 Training Details
C.4.1 Hyper-parameters in the Image-based Setting
We use the same hyper-parameters for A2LS, SAC-wo-aug, SAC and CURL during the search phase to ensure a fair comparison. When evaluating the searched auxiliary loss, we use a slightly larger setting (e.g., larger batch size) to train RL agents sufficiently. A full list is shown in Table 6.
C.4.2 Hyper-parameters in the Vector-based Setting
We use the same hyper-parameters for A2LS, SAC-Identity, SAC-DenseMLP and CURL-DenseMLP, shown in Table 7. Since training in vector-based environments is substantially faster than in image-based environments, there is no need to balance training cost against agent performance; we use this setting for both the search and final evaluation phases.
C.5 Baselines Implementation
Image-based Setting. The following baselines are chosen because they are competitive methods for benchmarking control from pixels. CURL [23] is the main baseline in the image-based setting and is considered the state-of-the-art image-based RL algorithm; it learns state representations with a contrastive auxiliary loss.

Vector-based Setting. In the vector-based setting, we compare A2LS with SAC-Identity, SAC and CURL. SAC-Identity is the vanilla vector-based SAC, where states are fed directly to the actor/critic networks. SAC and CURL use the same architecture, a 1-layer densely connected MLP, as the state encoder. Note that both A2LS and the baseline methods use the same hyper-parameters reported in Table 7 without additional hyper-parameter tuning.
D Additional Experiment Results
D.1 Search Space Pruning
Results of Search Space Pruning. Since the loss space is huge, an effective optimization strategy is required. Directly grid-searching over the whole space is infeasible because of the unacceptable computational cost, so advanced techniques such as space pruning and an elaborate search strategy are necessary. Our search space can be seen as a combination of the space for the input I and the space for the operator f. Inspired by AutoML works [5,52] that search for hyper-parameters first and then neural architectures, we approximate the joint search of input and operator in Equation (1) in a two-step manner. The optimal auxiliary loss $\{I^*, f^*\}$ can be optimized as:
$\max_{L} R\big(M_{\omega^*(L)}; \mathcal{E}\big) = \max_{I,f} R\big(M_{\omega^*(I,f)}; \mathcal{E}\big) \approx \max_{I} R\big(M_{\omega^*(I,f^*)}; \mathcal{E}\big), \quad \text{where } f^* \approx \arg\max_{f} \mathbb{E}_I\big[R(M_{\omega^*(I,f)}; \mathcal{E})\big] \quad (9)$
To decide the best loss operator, for every f in the operator space we estimate $\mathbb{E}_I[R(M_{\omega^*(I,f)}; \mathcal{E})]$ with a random sampling strategy, running 15 trials per loss operator. For each of the 10 possible f in the search space (5 similarity measures, with or without negative samples), we run 5 trials on each of the 3 image-based environments (used in evolution) with the same input elements $\{s_t, a_t\} \to \{s_{t+1}\}$, since we found forward dynamics to be a reasonable, highly competitive representative of our search space. Surprisingly, as summarized in Table 8, the simplest MSE without negative samples outperforms all other loss operators with complex designs. Therefore, this loss operator is chosen for the rest of this paper.

Ablation Study on Search Space Pruning. As introduced in Appendix D.1, we decompose the full search space into operator and input elements. Here we directly apply the evolution strategy to the whole space without the pruning step. The comparison results are shown in Figure 9: pruning improves the evolution process, making it easier to find good candidates.
D.2 Learning Curves for A2LS on Image-based DMControl
We benchmark the performance of A2LS against the best-performing image-based baseline (CURL). As shown in Figure 10, the sample efficiency of A2LS exceeds that of CURL in 10 out of 12 environments. Note that the learning curves of CURL may not match the data in Table 2: the table uses the numbers reported in the CURL paper, while we rerun CURL to plot the learning curves, and our rerun performs slightly below the paper's numbers. The y-axis represents episodic reward and the x-axis represents interaction steps.
D.3 Effectiveness of AULC scores
To illustrate why we use the area under the learning curve (AULC) instead of other metrics, we select the top-10 candidates under different evolution metrics. In practice, AULC is calculated as the sum of the scores of all checkpoints during training. Figure 11 demonstrates that the AULC score balances sample efficiency and final performance well: the learning curves of the top-10 candidates selected by AULC look better than those of the other two metrics (which select top candidates simply by the 100k-step or 500k-step score).
D.4 Comparing Auxiliary Loss with Data Augmentation
Besides auxiliary losses, data augmentation has been shown to be a strong technique for data-efficient RL, especially in image-based environments [25,22]. RAD [25] can be seen as a version of CURL without contrastive loss but with a better image transformation function for data augmentation. We compare A2-winner with RAD in both image-based and vector-based DMControl environments. The learning curves in image-based environments are shown in Figure 12, where no statistically significant difference is observed. As readers may notice, the scores in the RAD paper [25] are higher than the RAD and A2-winner learning curves reported here. To avoid the misleading conclusion that RAD is much stronger than A2-winner, we emphasize some key differences between RAD and our implementation: 1) larger conv encoder output dim (RAD: 47, A2LS/CURL: 25); 2) larger image size (RAD: 108, A2LS/CURL: 100); 3) larger encoder feature dim (RAD: 64, A2LS/CURL: 50). We use the hyper-parameters from CURL for consistency of the scores reported in our paper. However, in vector-based environments, as shown in Figure 13, A2-winner greatly outperforms RAD. Due to the huge difference between images and proprioceptive features, RAD cannot transfer augmentation techniques such as random crop and translation from images to vectors. Though RAD designs noise and random scaling for proprioceptive features, A2-winner shows much better performance in vector-based settings. These results show that recent progress in data augmentation for RL is still limited to image-based RL, while auxiliary loss functions can boost RL across environments with entirely different observation types.
Besides comparing auxiliary losses with data augmentation in DMC, we also provide experimental results in Atari [1]. As shown in Table 3, A2-winner significantly outperforms DrQ [22].
(Environments shown: Ball-in-cup-Catch, Cartpole-Swingup, Cheetah-Run, Finger-Spin, Reacher-Easy, Walker-Walk.)

Experiment Settings. As for vector-based RL, we use a 1-layer densely connected MLP as the state encoder, as shown in Figure 14, due to the low-dimensional state space. For this setting, we focus on this simple encoder structure; additional ablations on state-encoder architectures are given in Appendix D.6. In the search phase, we compare A2LS to SAC-Identity, SAC-DenseMLP, and CURL-DenseMLP. To ensure a fair comparison, all SAC-related hyper-parameters are the same as those reported in the CURL paper; details can be found in Appendix C.4.2. SAC-Identity is vanilla SAC with no state encoder, while the other three methods (A2LS, SAC-DenseMLP, CURL-DenseMLP) use the same encoder architecture. Unlike the image-based setting, there is no data augmentation in the vector-based setting. Note that many environments that are challenging in image-based settings become easy to tackle with vector-based inputs. Therefore, we apply our search framework to more challenging environments for vector-based RL, including Cheetah-Run, Hopper-Hop and Quadruped-Run.
Figure 15: Evolution process in the three training (vector-based) environments: Cheetah-Run, Hopper-Hop, and Quadruped-Run. Every white dot represents a loss candidate, and the y-axis shows its approximated AULC score. The horizontal lines show the scores of the baselines. The AULC score is approximated by the average evaluation score at 300k, 600k, 900k, 1200k, 1500k time steps (Cheetah-Run at 100k, 200k, 300k, 400k).
Search Results. Similar to image-based settings, we approximate AULC with the average score agents achieve at 300k, 600k, 900k, 1200k, and 1500k time steps. For each environment, we stop the experiment early when the budget of 1,500 GPU hours is exhausted. The evolution process is shown in Figure 15, where we find a large portion of candidates outperform the baselines (horizontal dashed lines). The performance improvement is especially significant on Cheetah-Run, where almost all candidates in the population greatly outperform all baselines by the end of the first stage. As in the image-based setting, we use cross-validation to select the best loss function, which we call "A2-winner-v" (all the top candidates during evolution are reported in Appendix F).

D.6 Encoder Architecture Ablation for Vector-based RL

As shown in Figure 14, we choose a 1-layer densely connected MLP as the state encoder for vector-based RL. We conduct an ablation study on different encoder architectures in the vector-based setting.
The results are summarized in Table 9, where A2LS with a 4-layer encoder consistently performs worse than with a 1-layer encoder. We also note that, compared with naive MLP encoders, dense connections are helpful in the vector-based setting.
D.7 Visualization of Loss Landscape
In an effort to reveal why auxiliary losses help RL, we plot the critic-loss landscape of both A2-winner and SAC using the technique in [29,34]. We choose Humanoid-Stand as the test environment since we observe the most significant advantage of A2-winner over SAC on complex robotics tasks like Humanoid. Note that the only difference between A2-winner and SAC is whether an auxiliary loss is used. As shown in Figure 16, the critic-loss landscape of A2-winner appears convex during training, while the landscape of SAC becomes more non-convex as training proceeds. The auxiliary loss of A2-winner efficiently boosts Q learning (reaching a reward of nearly 300 at 500k steps), while SAC suffers from poor critic learning (a reward near 0 even at 1000k time steps). This result suggests that such an auxiliary loss makes learning easier from an optimization perspective.
D.8 Histogram of Auxiliary Loss Analysis
The histogram of each pattern analysis is shown in Figure 17. Besides CURL, many recent works (e.g., SPR [39] and ATC [42]) have also proposed advanced auxiliary losses that achieve strong performance. Surprisingly, we find that both SPR and ATC use patterns similar to those we identify in Section 6, such as forward dynamics and n_target > n_source. In particular, ATC trains the encoder only with the ATC loss, and we find that A2-winner performs better than the results reported in their paper: we are 2× more sample efficient in reaching a score of 800 on Cartpole-Swingup, 2× more sample efficient in reaching 100 on Hopper-Hop, and 3× more sample efficient in reaching 600 on Cartpole-Swingup-sparse (see Figure 2 of [42]). As for SPR, they show superior performance on the Atari benchmark; as shown in Table 10, A2-winner outperforms all baselines except SPR. Note, however, that A2-winner is searched only on a small set of DMC benchmarks and still generalizes well to the discrete-control tasks of Atari, while SPR is designed and evaluated only on Atari environments. In addition, we believe such a gap can arise from different base RL algorithm implementations (A2LS is based on Efficient Rainbow DQN while SPR adopts Categorical DQN) and different hyper-parameters.

Figure 17: Histogram of statistical analysis of auxiliary loss candidates in six evolution processes. The x-axis represents the approximated AULC score, and the y-axis represents the percentage of the corresponding bin of the population. Best viewed in color.

To illustrate the trend of increasing performance during evolution, we provide the average AULC score of the population at each stage in Table 11. To compare evolutionary search with random sampling, we can take stage 1 of each evolution procedure as random sampling. As shown in Table 11, the average performance of the stage-1 population (i.e., random sampling) is even worse than SAC in Cheetah-Run and Walker-Walk. Nevertheless, as evolution continues, the performance of the evolved population improves significantly in the following stages, surpassing the score of SAC.
D.10 The Trend of Increasing Performance during Evolution
To illustrate the trend of increasing performance of the best individuals during evolution, we provide the average AULC score of the top-5 candidates of the population at each stage in Table 12. As shown in Table 12, there is a clear trend: the performance of the best individuals in the population continues to improve at each stage and outperforms the baseline by a large margin throughout the evolution, across all three training environments.
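The AULC (area under learning curve) scores reported in Tables 11 and 12 can be approximated from periodic evaluation returns. A minimal sketch, assuming a trapezoidal approximation over the evaluation points (the function name and normalization are ours, not the paper's):

```python
# Approximate the area under the learning curve (AULC) from periodic
# evaluation scores via the trapezoidal rule, normalized by the training
# horizon so the result is on the same scale as a single episodic return.

def aulc(timesteps, scores):
    """Trapezoidal area under (timesteps, scores), divided by the total span."""
    area = 0.0
    for i in range(1, len(timesteps)):
        area += 0.5 * (scores[i - 1] + scores[i]) * (timesteps[i] - timesteps[i - 1])
    return area / (timesteps[-1] - timesteps[0])

# A flat learning curve has AULC equal to its constant score.
print(aulc([0, 100, 200], [500, 500, 500]))  # 500.0
```

With this normalization, a method that reaches high scores earlier accumulates a larger AULC, which is why the metric rewards sample efficiency and not just final performance.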
E Search Space Complexity Analysis
The search space size is calculated as the size of the input-element space multiplied by the size of the loss-operator space.

For input elements, the search space is a pair of binary masks (one selecting the source and one selecting the target), each of length up to (3k + 3) when the length of an interaction data sequence, i.e., the horizon, is limited to k steps. In our case, we set the maximum horizon length k_max = 10. We count the candidates separately for each possible horizon length k. When the horizon is k, the interaction sequence (s_t, a_t, r_t, ..., s_{t+k}) has length (3k + 3). The source mask therefore has 2^{3k+3} options, and there are likewise 2^{3k+3} distinct target masks, giving 2^{6k+6} combinations for a fixed horizon k. As our maximum horizon is 10, we enumerate k from 1 to 10, resulting in Σ_{i=1}^{10} 2^{6i+6} mask pairs. For the loss operator, we can see from Table 8 that there are 5 different similarity measures, each with or without negative samples, resulting in 5 × 2 = 10 different loss operators.
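The count above can be reproduced in a few lines; this sketch simply evaluates the sum and multiplies by the 10 loss operators:

```python
# For horizon k, source and target masks each have 2^(3k+3) settings,
# i.e. 2^(6k+6) mask pairs; sum over k = 1..10 and multiply by the
# 10 loss operators (5 similarity measures, with/without negative samples).

mask_pairs = sum(2 ** (6 * k + 6) for k in range(1, 11))
total = mask_pairs * 10  # 10 loss operators
print(f"{total:.2e}")  # about 7.5e+20
```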
In total, the size of the entire search space is (Σ_{i=1}^{10} 2^{6i+6}) × 10 ≈ 7.5 × 10^20.

F Top-performing Auxiliary Losses

We introduce all the top-performing auxiliary losses found during evolution in this section. Note that MSE is chosen (details are given in Appendix D.1) as the loss operator for all the auxiliary losses reported below. The source seq_source and target seq_target of the auxiliary loss of A2-winner are:
{s_{t+1}, a_{t+1}, a_{t+2}, a_{t+3}} → {r_t, r_{t+1}, s_{t+2}, s_{t+3}},
where A2-winner is the third-best candidate of stage 4 in Cheetah-Run (Image).
The source seq_source and target seq_target of the auxiliary loss of A2-winner-v are:
{s_t, a_t, a_{t+1}, s_{t+2}, a_{t+2}, a_{t+3}, r_{t+3}, a_{t+4}, r_{t+4}, a_{t+5}, a_{t+7}, s_{t+8}, a_{t+8}, r_{t+8}} → {s_{t+1}, s_{t+3}, a_{t+4}, s_{t+6}, s_{t+9}},
where A2-winner-v is the fourth-best candidate of stage 4 in Cheetah-Run (Vector).
These two losses are chosen because they are the best-performing loss functions during cross-validation.
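A searched loss specification like the A2-winner one above is just a list of (element kind, time offset) pairs applied to a trajectory segment. The sketch below (our own illustrative code, not the authors') decodes such a specification into concrete source and target element lists:

```python
# Decode a searched auxiliary-loss spec into source/target element lists.
# A2-winner: source {s_{t+1}, a_{t+1}, a_{t+2}, a_{t+3}}
#         -> target {r_t, r_{t+1}, s_{t+2}, s_{t+3}}

def gather(trajectory, spec):
    """trajectory: dict keyed by (kind, offset); spec: list of (kind, offset)."""
    return [trajectory[key] for key in spec]

A2_WINNER_SOURCE = [("s", 1), ("a", 1), ("a", 2), ("a", 3)]
A2_WINNER_TARGET = [("r", 0), ("r", 1), ("s", 2), ("s", 3)]

# Toy trajectory where each element is just a label string.
traj = {(kind, off): f"{kind}_t+{off}" for kind in "sar" for off in range(4)}
print(gather(traj, A2_WINNER_SOURCE))  # ['s_t+1', 'a_t+1', 'a_t+2', 'a_t+3']
```

In the real system the gathered source elements are encoded and used to predict the (encoded) target elements under the chosen loss operator.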
F.2 During Evolution
We report all the top-5 auxiliary loss candidates during evolution in this section.

Stage-1

{r_t, s_{t+1}, a_{t+1}, r_{t+1}, a_{t+2}, r_{t+2}, a_{t+3}, r_{t+3}} → {s_t, a_t, s_{t+2}, s_{t+3}, s_{t+4}}
{s_t, a_t, r_t} → {s_{t+1}}
{s_t, a_t, a_{t+1}, r_{t+2}} → {s_t, a_t, s_{t+1}, a_{t+1}, r_{t+1}, s_{t+2}, r_{t+2}, s_{t+3}}
{s_t, r_t, a_{t+1}, a_{t+2}, a_{t+3}, r_{t+3}, r_{t+4}, a_{t+5}, r_{t+5}, s_{t+6}, s_{t+7}} → {s_t, a_t, s_{t+1}, s_{t+2}, r_{t+2}, r_{t+3}, s_{t+4}, r_{t+5}, s_{t+6}, a_{t+6}, s_{t+7}}
{s_t, a_t, s_{t+1}, a_{t+1}, s_{t+2}, r_{t+2}} → {s_t, s_{t+1}, r_{t+1}, s_{t+2}, r_{t+2}, s_{t+3}}
Stage-2
{s_t, a_{t+1}, r_{t+2}, s_{t+4}, r_{t+4}} → {s_{t+2}, a_{t+3}, r_{t+3}, a_{t+4}, s_{t+5}}
{s_t, a_t, a_{t+2}, r_{t+2}} → {s_t, r_t, s_{t+1}, s_{t+2}, r_{t+2}}
{a_t, r_t, s_{t+1}, r_{t+1}, s_{t+2}, a_{t+2}, r_{t+2}, a_{t+3}, a_{t+4}} → {s_{t+1}, s_{t+2}, s_{t+3}, a_{t+3}, s_{t+4}}
{s_t, a_t, r_t, a_{t+1}, r_{t+1}, a_{t+2}, r_{t+2}, a_{t+3}, a_{t+4}} → {s_t, s_{t+1}, s_{t+2}, s_{t+4}, s_{t+5}}
{r_t, s_{t+1}, r_{t+1}} → {s_t, a_t, a_{t+1}, s_{t+2}}
Stage-3
{s_t, a_t, a_{t+2}, r_{t+2}} → {s_t, s_{t+1}, s_{t+2}, r_{t+2}}
{s_t, r_t, a_{t+1}, a_{t+3}, r_{t+3}, r_{t+4}, a_{t+5}, r_{t+5}, s_{t+6}, a_{t+6}, s_{t+7}} → {s_t, a_t, s_{t+1}, s_{t+2}, r_{t+2}, r_{t+3}, s_{t+4}, r_{t+5}, s_{t+6}, s_{t+7}, a_{t+7}}
{s_t, a_t, a_{t+1}, r_{t+1}, a_{t+2}, r_{t+2}, a_{t+3}, s_{t+4}, a_{t+4}} → {s_{t+1}, s_{t+2}, s_{t+4}, s_{t+5}}
{s_t, a_t, a_{t+1}, r_{t+1}, r_{t+2}, s_{t+3}, a_{t+3}, r_{t+4}} → {s_{t+1}, s_{t+2}, r_{t+3}, s_{t+4}, a_{t+4}, s_{t+5}}
{r_t, s_{t+1}} → {s_t, a_t, a_{t+1}, s_{t+2}}
Stage-4
{s_t, s_{t+1}, a_{t+2}, r_{t+2}, s_{t+3}, s_{t+4}} → {a_{t+1}, s_{t+2}, r_{t+2}, s_{t+4}, a_{t+4}, s_{t+5}}
{s_t, a_t, a_{t+1}, r_{t+1}, r_{t+2}, s_{t+3}, a_{t+3}} → {s_{t+1}, s_{t+2}, r_{t+3}, s_{t+4}}
{s_t} → {s_t, r_t, s_{t+1}, r_{t+1}, s_{t+2}, a_{t+2}, r_{t+2}}
{s_t, r_t, a_{t+1}, s_{t+2}, a_{t+2}, r_{t+2}, a_{t+3}, r_{t+3}, a_{t+4}} → {s_t, a_t, s_{t+1}, r_{t+1}, r_{t+3}, s_{t+4}, a_{t+4}, s_{t+5}}
{r_t, s_{t+1}, a_{t+1}} → {s_t, a_t, a_{t+1}, s_{t+2}}

Stage-5

* {a_t, r_t, s_{t+1}, a_{t+1}, r_{t+1}, a_{t+2}, r_{t+2}, a_{t+3}, r_{t+3}} → {r_t, s_{t+1}, a_{t+1}, s_{t+2}, s_{t+4}}
{s_t, a_{t+1}, r_{t+2}, a_{t+3}, s_{t+4}, r_{t+4}} → {s_{t+1}, s_{t+2}, a_{t+3}, s_{t+4}, a_{t+4}, s_{t+5}}
† {s_{t+1}, a_{t+1}, a_{t+2}, a_{t+3}} → {r_t, r_{t+1}, s_{t+2}, s_{t+3}}
{s_t} → {s_t, r_t, s_{t+1}, r_{t+1}, s_{t+2}}
{s_t} → {s_t, r_t, s_{t+1}, r_{t+1}, s_{t+2}, r_{t+2}}
Stage-6
{a_{t+1}, r_{t+1}, s_{t+2}, r_{t+2}, a_{t+3}, a_{t+4}} → {r_t, s_{t+1}, r_{t+1}, r_{t+3}, s_{t+4}, a_{t+4}, s_{t+5}}
{s_t, a_{t+1}, a_{t+3}, s_{t+4}, r_{t+4}} → {s_{t+1}, s_{t+2}, a_{t+3}, s_{t+4}, a_{t+4}, s_{t+5}}
{s_t, a_{t+1}, r_{t+1}, s_{t+2}, a_{t+2}, r_{t+2}, s_{t+3}, a_{t+3}, r_{t+3}} → {a_t, s_{t+2}, r_{t+2}, a_{t+3}, s_{t+4}, a_{t+4}, s_{t+5}}
{s_t, a_{t+1}, r_{t+2}, a_{t+3}, s_{t+4}, r_{t+4}} → {s_{t+1}, s_{t+2}, a_{t+3}, s_{t+4}, a_{t+4}, s_{t+5}, a_{t+5}}
{s_t, a_{t+1}, a_{t+2}, r_{t+2}, s_{t+3}, a_{t+3}, s_{t+4}, r_{t+4}} → {s_{t+1}, a_{t+1}, s_{t+2}, r_{t+2}, s_{t+4}, a_{t+4}, r_{t+4}, s_{t+5}}
Stage-7
{s_t, a_t, r_t, a_{t+1}, r_{t+2}, a_{t+3}, s_{t+4}} → {a_t, r_t, s_{t+2}, r_{t+3}, s_{t+4}, a_{t+4}, r_{t+4}, s_{t+5}}
{s_t, r_{t+2}, a_{t+3}, s_{t+4}} → {a_t, s_{t+1}, s_{t+2}, r_{t+2}, a_{t+3}, r_{t+3}, s_{t+4}, a_{t+4}, r_{t+4}, s_{t+5}}
{s_{t+1}, a_{t+2}, r_{t+2}, s_{t+3}, a_{t+3}, r_{t+3}, s_{t+4}} → {r_t, a_{t+1}, r_{t+1}, s_{t+2}, r_{t+2}, s_{t+4}, a_{t+4}, s_{t+5}}
{s_t, a_{t+1}, s_{t+2}, a_{t+2}, r_{t+2}, s_{t+3}, a_{t+3}, r_{t+3}, a_{t+4}} → {a_t, s_{t+1}, s_{t+2}, r_{t+2}, s_{t+3}, r_{t+3}, s_{t+4}, a_{t+4}, s_{t+5}}
{s_t, a_{t+1}, r_{t+2}, a_{t+3}, s_{t+4}, r_{t+4}} → {s_{t+1}, s_{t+2}, r_{t+2}, a_{t+3}, s_{t+4}, a_{t+4}, s_{t+5}, a_{t+5}}

*: Used for cross-validation. †: A2-winner.
Figure 1: Overview of A2LS. A2LS contains an inner loop (left) and an outer loop (right). The inner loop performs an RL training procedure with searched auxiliary loss functions. The outer loop searches auxiliary loss functions using an evolutionary algorithm to select the better auxiliary losses.
Figure 3: Four types of mutation strategy for evolution. We represent both the source and the target of the input elements as a pair of binary masks, where each bit of the binary mask indicates selected (green block, 1) or not selected (white block, 0).
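The binary-mask encoding lends itself to very simple mutation operators. The sketch below is illustrative only (the two operators shown, a bit flip and a one-point crossover, are common evolutionary-search moves and are not claimed to be the paper's exact four strategies):

```python
# Masks are lists of 0/1 bits over the (3k+3) elements of a trajectory segment.

def flip_bit(mask, i):
    """Toggle whether element i is selected (a 'replace'-style mutation)."""
    return mask[:i] + [1 - mask[i]] + mask[i + 1:]

def crossover(mask_a, mask_b, cut):
    """Splice two parent masks at a cut point."""
    return mask_a[:cut] + mask_b[cut:]

source_mask = [1, 0, 0, 1, 0, 1]
print(flip_bit(source_mask, 1))            # [1, 1, 0, 1, 0, 1]
print(crossover(source_mask, [0] * 6, 3))  # [1, 0, 0, 0, 0, 0]
```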
Figure 6: Comparison of A2-winner with different depths of convolutional encoder in image-based DMC environments.
Figure 7: Comparison of A2-winner and baselines in partially observable vector-based DMC environments.
Figure 9: Comparison of evolution with and without pruning by performance histogram.

Figure 10: Learning curves of A2-winner and CURL on 12 DMC environments. Shadow represents the standard deviation over five random seeds. The curves are uniformly smoothed for visual display.
Figure 11: Learning curves of top-10 loss candidates selected with different metrics.
Figure 12: Comparison of learning curves of A2LS and RAD in image-based DMC environments.

Figure 13: Comparison of learning curves of A2LS and RAD in vector-based DMControl environments.

Figure 14: Network architecture of the 1-layer DenseMLP state encoder.
Figure 16: Left: Learning curves of A2-winner and SAC on vector-based Humanoid-Stand. Right: Critic loss landscape of A2-winner (upper right) and SAC (lower right) at 250k, 500k, 750k and 1000k time steps, trained on vector-based Humanoid-Stand. The first row shows 3D surface plots, and the second row shows heatmap plots of loss landscapes.
(The total search space size is ≈ 7.5 × 10^20; see Appendix E.)
Table 1: Typical solutions with auxiliary losses and their common elements.

Auxiliary Loss               | Operator | Horizon | Input Elements: Source | Input Elements: Target
Forward dynamics [35, 40, 6] | MSE      | 1       | {s_t, a_t}             | {s_{t+1}}
Inverse dynamics             | MSE      | 1       | {a_t, s_{t+1}}         | {s_t}
Reward prediction            |          |         |                        |
Table 2: Episodic rewards (mean & standard deviation over 10 seeds) on DMC500K (500K time steps) and DMC100K (100K time steps). Note that the optimal score of DMC is 1000 for all environments. The baseline methods are PlaNet [16], Dreamer [15], SAC+AE [51], SLAC [26], and image-based SAC [14]. Performance values of all baselines are taken from [23], except for Image SAC. Learning curves of all 12 DMC environments are included in Appendix D.2.

500K Steps Scores   | A2-winner | CURL §    | PlaNet §  | Dreamer § | SAC+AE §  | SLACv1 §  | Image SAC
Cheetah-Run †       | 613 ± 39  | 518 ± 28  | 305 ± 131 | 570 ± 253 | 550 ± 34  | 640 ± 19  | 99 ± 28
Reacher-Easy †      | 938 ± 46  | 929 ± 44  | 210 ± 390 | 793 ± 164 | 627 ± 58  | -         | 312 ± 132
Walker-Walk †       | 917 ± 18  | 902 ± 43  | 351 ± 58  | 897 ± 49  | 847 ± 48  | 842 ± 51  | 76 ± 44
Finger-Spin *       | 983 ± 4   | 926 ± 45  | 561 ± 284 | 796 ± 183 | 884 ± 128 | 673 ± 92  | 282 ± 102
Cartpole-Swingup *  | 864 ± 19  | 841 ± 45  | 475 ± 71  | 762 ± 27  | 735 ± 63  | -         | 344 ± 104
Ball in cup-Catch * | 970 ± 8   | 959 ± 27  | 460 ± 380 | 897 ± 87  | 794 ± 58  | 852 ± 71  | 200 ± 114

100K Steps Scores   | A2-winner | CURL §    | PlaNet §  | Dreamer § | SAC+AE §  | SLACv1 §  | Image SAC
Cheetah-Run †       | 449 ± 34  | 299 ± 48  | 138 ± 88  | 235 ± 137 | 267 ± 24  | 319 ± 56  | 128 ± 12
Reacher-Easy †      | 778 ± 164 | 538 ± 223 | 20 ± 50   | 314 ± 155 | 274 ± 14  | -         | 277 ± 69
Walker-Walk †       | 510 ± 151 | 403 ± 24  | 224 ± 48  | 277 ± 12  | 394 ± 22  | 361 ± 73  | 127 ± 28
Finger-Spin *       | 872 ± 27  | 767 ± 56  | 136 ± 216 | 341 ± 70  | 740 ± 64  | 693 ± 141 | 160 ± 138
Cartpole-Swingup *  | 815 ± 66  | 582 ± 146 | 297 ± 39  | 326 ± 27  | 311 ± 11  | -         | 243 ± 19
Ball in cup-Catch * | 862 ± 167 | 769 ± 43  | 0 ± 0     | 246 ± 174 | 391 ± 82  | 512 ± 110 | 100 ± 90

†: Training environments. *: Unseen test environments. §: Results reported in [23].
Table 3: Mean and median scores (normalized by human score and random score) achieved by A2LS and baselines on 26 Atari games benchmarked at 100k time-steps (Atari100k).

Metric              | A2-winner | CURL  | Eff. Rainbow | DrQ [22] | Random | Human
Mean Human-Norm'd   | 0.568     | 0.381 | 0.285        | 0.357    | 0.000  | 1.000
Median Human-Norm'd | 0.317     | 0.175 | 0.161        | 0.268    | 0.000  | 1.000
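The "normalized by human score and random score" values above follow what we assume is the standard Atari100k convention, (agent − random) / (human − random), so random play maps to 0.0 and human play to 1.0. A minimal sketch:

```python
# Human-normalized score: 0.0 = random play, 1.0 = human play.
def human_normalized(score, random_score, human_score):
    return (score - random_score) / (human_score - random_score)

# An agent scoring halfway between random and human gets 0.5.
print(human_normalized(600.0, 200.0, 1000.0))  # 0.5
```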
Generalize to totally different benchmark domains. To further verify the generalizability of A2-winner on benchmark domains entirely different from DMC tasks, we conduct experiments on the Atari 2600 games [1], where we take Efficient Rainbow [44] as the base RL algorithm and add A2-winner to obtain a better state representation. Results are shown in Table 3.
Table 4: Episodic rewards (mean & standard deviation over 10 seeds) on DMC100K (easy tasks) and DMC1000K (difficult tasks) with vector inputs. †: Training environments. *: Unseen test environments.

100K Steps Scores         | A2-winner | A2-winner-v | SAC-Identity | SAC       | CURL
Cheetah-Run †             | 529 ± 76  | 472 ± 30    | 237 ± 27     | 172 ± 29  | 190 ± 32
Finger-Spin *             | 790 ± 128 | 837 ± 52    | 805 ± 32     | 785 ± 106 | 712 ± 83
Finger-Turn hard *        | 272 ± 149 | 218 ± 117   | 347 ± 150    | 174 ± 94  | 43 ± 42
Cartpole-Swingup *        | 866 ± 24  | 877 ± 5     | 873 ± 10     | 866 ± 7   | 854 ± 17
Cartpole-Swingup sparse * | 634 ± 226 | 695 ± 147   | 455 ± 359    | 627 ± 307 | 446 ± 196
Reacher-Easy *            | 818 ± 211 | 934 ± 38    | 697 ± 192    | 874 ± 87  | 749 ± 183
Walker-Stand *            | 935 ± 32  | 948 ± 7     | 940 ± 10     | 862 ± 196 | 767 ± 104
Walker-Walk *             | 932 ± 39  | 906 ± 78    | 873 ± 89     | 925 ± 22  | 852 ± 64
Walker-Run *              | 616 ± 52  | 564 ± 45    | 559 ± 34     | 403 ± 43  | 289 ± 61
Ball in cup-Catch *       | 964 ± 7   | 965 ± 7     | 954 ± 12     | 962 ± 13  | 941 ± 32
Fish-Upright *            | 586 ± 128 | 498 ± 88    | 471 ± 62     | 400 ± 62  | 295 ± 117
Hopper-Stand *            | 177 ± 257 | 311 ± 177   | 14 ± 16      | 26 ± 40   | 6 ± 3

1,000K Steps Scores       | A2-winner-v | A2-winner | SAC-Identity | SAC       | CURL
Quadruped-Run †           | 863 ± 50    | 838 ± 58  | 345 ± 157    | 707 ± 148 | 497 ± 128
Hopper-Hop †              | 213 ± 31    | 278 ± 106 | 121 ± 51     | 134 ± 93  | 60 ± 22
Pendulum-Swingup *        | 200 ± 322   | 579 ± 410 | 506 ± 374    | 379 ± 391 | 363 ± 366
Humanoid-Stand *          | 329 ± 35    | 286 ± 15  | 9 ± 2        | 7 ± 1     | 7 ± 1
Humanoid-Walk *           | 311 ± 36    | 299 ± 55  | 16 ± 28      | 2 ± 0     | 2 ± 0
Humanoid-Run *            | 75 ± 37     | 88 ± 2    | 1 ± 0        | 1 ± 0     | 1 ± 0
Table 5: Statistical analysis of auxiliary loss functions. The number reported is the difference in expected RL score between the auxiliary losses that have a pattern and those that do not. The corresponding p-value from the t-test is also reported. Positive numbers indicate that the pattern is beneficial; if the performance gain is statistically significant, the number is marked with an asterisk, indicating it is very likely to be helpful. Negative numbers indicate that the pattern is detrimental.

The score difference between average performances w/ and w/o typical patterns (w/ − w/o):

Environment            | Forward dynamics | Inverse dynamics | Reward prediction | Action inference | State reconstruction
Cheetah-Run (Image)    | +1.28            | −3.51            | −31.16 **         | −75.95 **        | +42.44 **
Reacher-Easy (Image)   | +28.25 *         | +8.36            | +37.80 **         | +3.35            | +70.72 **
Walker-Walk (Image)    | +22.20           | −48.59 **        | −8.11             | +29.86 *         | +13.93
Cheetah-Run (Vector)   | +94.18 **        | −23.66 **        | −33.28 **         | −109.33 **       | −50.15 **
Hopper-Hop (Vector)    | +15.50 **        | −16.47 **        | −11.30 *          | −32.10 **        | −25.67 **
Quadruped-Run (Vector) | −28.07           | −18.19           | −114.23 **        | −105.37 **       | −82.06 **

*: p-value < 0.05. **: p-value < 0.01.

The score difference between two sets varying the number of elements in source and target:

Environment            | State, n_target > n_source | Action, n_target > n_source | Reward, n_target > n_source
Cheetah-Run (Image)    | +80.09 **                  | +13.62                      | +3.33
Reacher-Easy (Image)   | +1.98                      | −12.72                      | +65.66 **
Walker-Walk (Image)    | +73.56 **                  | +42.22 *                    | −41.90 *
Cheetah-Run (Vector)   | +188.06 **                 | −102.62 **                  | −93.94 **
Hopper-Hop (Vector)    | +19.80 **                  | −29.70 **                   | −5.03
Quadruped-Run (Vector) | +75.17 **                  | −4.31                       | −46.60 *

*: p-value < 0.05. **: p-value < 0.01.
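The per-pattern analysis behind Table 5 amounts to a two-sample comparison: split loss candidates by whether they contain a pattern, then compare mean scores. The helper below is an illustrative sketch (names are ours; it reports the mean difference plus a Welch t statistic, whereas the paper reports t-test p-values):

```python
# Two-sample comparison of candidate scores with vs. without a pattern.
from statistics import mean, variance

def pattern_effect(with_scores, without_scores):
    d = mean(with_scores) - mean(without_scores)
    se2 = variance(with_scores) / len(with_scores) + variance(without_scores) / len(without_scores)
    return d, d / se2 ** 0.5  # (score difference, Welch t statistic)

diff, t = pattern_effect([520, 540, 560], [420, 400, 440])
print(round(diff, 1), round(t, 2))  # 120.0 7.35
```

A large positive difference with a large |t| corresponds to the double-starred entries in the table.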
search for A2-winner-v while there is no additional cost to directly use A2-winner. It is a trade-off
between lower computational cost and better performance.
[1] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253-279, 2013.
[2] Johan Bjorck, Carla P Gomes, and Kilian Q Weinberger. Towards deeper deep reinforcement learning. arXiv preprint arXiv:2106.01151, 2021.
[3] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597-1607. PMLR, 2020.
[4] John D Co-Reyes, Yingjie Miao, Daiyi Peng, Esteban Real, Quoc V Le, Sergey Levine, Honglak Lee, and Aleksandra Faust. Evolving reinforcement learning algorithms. In International Conference on Learning Representations, 2020.
[5] Xiaoliang Dai, Alvin Wan, Peizhao Zhang, Bichen Wu, Zijian He, Zhen Wei, Kan Chen, Yuandong Tian, Matthew Yu, Peter Vajda, and Joseph E. Gonzalez. FBNetV3: Joint architecture-recipe search using predictor pretraining, 2020.
[6] Tim De Bruin, Jens Kober, Karl Tuyls, and Robert Babuška. Integrating state representation learning into deep reinforcement learning. IEEE Robotics and Automation Letters, 3(3):1394-1401, 2018.
[7] Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Vlad Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. IMPALA: Scalable distributed deep-RL with importance weighted actor-learner architectures. In International Conference on Machine Learning, pages 1407-1416. PMLR, 2018.
[8] Yuchen Fang, Kan Ren, Weiqing Liu, Dong Zhou, Weinan Zhang, Jiang Bian, Yong Yu, and Tie-Yan Liu. Universal trading for order execution with oracle policy distillation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 107-115, 2021.
[9] Aleksandra Faust, Anthony Francis, and Dar Mehta. Evolving rewards to automate reinforcement learning. arXiv preprint arXiv:1905.07628, 2019.
[10] Jörg K. H. Franke, Gregor Köhler, André Biedenkapp, and Frank Hutter. Sample-efficient automated deep reinforcement learning. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021.
[11] Sina Ghiassian, Banafsheh Rafiee, Yat Long Lo, and Adam White. Improving performance in reinforcement learning by breaking generalization in neural networks. arXiv preprint arXiv:2003.07417, 2020.
[12] Behrooz Ghorbani, Shankar Krishnan, and Ying Xiao. An investigation into neural net optimization via hessian eigenvalue density. In International Conference on Machine Learning, pages 2232-2241. PMLR, 2019.
[13] Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 3389-3396. IEEE, 2017.
[14] Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905, 2018.
[15] Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603, 2019.
[16] Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In International Conference on Machine Learning, pages 2555-2565. PMLR, 2019.
A Algorithm

Algorithm 1 Automated Auxiliary Loss Search
1: Initialization: Randomly generate (bootstrapping) P auxiliary loss functions {L_p}^P_{stage-1} and P parameterized policies {π_{ω_p}}^P_{stage-1};
2: for i = 1, 2, ..., N do
3:    Optimize policies {π_{ω_p}}^P_{stage-i} with RL loss L_RL and the corresponding auxiliary losses {L_p}^P_{stage-i}.
4:    Evaluate the performance (AULC scores) of each RL agent {E_p}^P_{stage-i} and select the top T candidates {L_t}^T_{stage-i}.
5:    Apply mutations and the loss rejection check (introduced in Section 3.2.2) to the top T candidates {L_t}^T_{stage-i} to generate new auxiliary loss candidates {L_p}^P_{stage-(i+1)}.
6: end for
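Algorithm 1's outer loop can be sketched compactly. The code below is illustrative only (the names `evolve` and `enable_next` are ours; in the real system `evaluate` trains an RL agent with the candidate loss and returns its AULC score, and `mutate` applies the four mutation strategies plus the loss rejection check). The toy mutation deterministically enables one more input element, so with elitist selection the fitness provably reaches the maximum:

```python
import random

def evolve(population, evaluate, mutate, top_t, stages, seed=0):
    rng = random.Random(seed)
    for _ in range(stages):
        ranked = sorted(population, key=evaluate, reverse=True)
        parents = ranked[:top_t]  # keep top-T candidates (elitism)
        children = [mutate(rng.choice(parents), rng) for _ in range(len(population) - top_t)]
        population = parents + children
    return max(population, key=evaluate)

# Toy fitness: number of selected input elements in an 8-bit mask;
# the toy "mutation" enables the first unselected element.
def enable_next(mask, rng):
    return mask[: mask.index(0)] + [1] + mask[mask.index(0) + 1:] if 0 in mask else mask

best = evolve([[0] * 8 for _ in range(10)], evaluate=sum, mutate=enable_next, top_t=3, stages=20)
print(sum(best))  # 8
```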
Table 6: Hyper-parameters used in image-based environments.

Hyper-parameter                   | During Evolution                               | Final Evaluation of A2-winner
Random crop                       | False for SAC-wo-aug; True for others          | True
Observation rendering             | (84, 84) for SAC-wo-aug; (100, 100) for others | (100, 100)
Observation downsampling          | (84, 84)                                       | (84, 84)
Replay buffer size                | 100000                                         | 100000
Initial steps                     | 1000                                           | 1000
Stacked frames                    | 3                                              | 3
Action repeat                     | 4 (Cheetah-Run, Reacher-Easy); 2 (Walker-Walk) | 8 (Cartpole-Swingup); 2 (Walker-Walk, Finger-Spin); 4 (Others)
Hidden units (MLP)                | 1024                                           | 1024
Hidden units (Predictor MLP)      | 256                                            | 256
Evaluation episodes               | 10                                             | 10
Optimizer                         | Adam                                           | Adam
(β1, β2) for actor/critic/encoder | (.9, .999)                                     | (.9, .999)
(β1, β2) for entropy α            | (.5, .999)                                     | (.5, .999)
Learning rate for actor/critic    | 1e-3                                           | 2e-4 (Cheetah-Run); 1e-3 (Others)
Learning rate for encoder         | 1e-3                                           | 3e-3 (Cheetah-Run, Finger-Spin, Walker-Walk); 1e-3 (Others)
Learning rate for α               | 1e-4                                           | 1e-4
Batch size for RL loss            | 128                                            | 512
Batch size for auxiliary loss     | 128                                            | 128 (Walker-Walk); 256 (Cheetah-Run, Finger-Spin); 512 (Others)
Auxiliary loss multiplier λ       | 1                                              | 1
Q function EMA τ                  | 0.01                                           | 0.01
Critic target update freq         | 2                                              | 2
Convolutional layers              | 4                                              | 4
Number of filters                 | 32                                             | 32
Non-linearity                     | ReLU                                           | ReLU
Encoder EMA τ                     | 0.05                                           | 0.05
Latent dimension                  | 50                                             | 50
Discount γ                        | .99                                            | .99
Initial temperature               | 0.1                                            | 0.1
Table 7: Hyper-parameters used in vector-based environments.

Hyper-parameter                        | Value
Replay buffer size                     | 100000
Initial steps                          | 1000
Action repeat                          | 4
Hidden units (MLP)                     | 1024
Hidden units (Predictor MLP)           | 256
Evaluation episodes                    | 10
Optimizer                              | Adam
(β1, β2) for actor/critic/encoder      | (.9, .999)
(β1, β2) for entropy α                 | (.5, .999)
Learning rate for actor/critic/encoder | 2e-4 (Cheetah-Run); 1e-3 (Others)
Learning rate for α                    | 1e-4
Batch size                             | 512
Auxiliary loss multiplier λ            | 1
Q function EMA τ                       | 0.01
Critic target update freq              | 2
DenseMLP layers                        | 1
Non-linearity                          | ReLU
Encoder EMA τ                          | 0.05
Latent dimension of DenseMLP           | 40
Discount γ                             | .99
Initial temperature                    | 0.1
Table 8: Normalized episodic rewards (mean & standard deviation over 5 seeds) of the 3 environments used in evolution on image-based DMControl500K with different loss operators.

Loss operator        | Inner         | Bilinear      | Cosine        | MSE           | N-MSE
w/ negative samples  | 0.979 ± 0.344 | 0.953 ± 0.329 | 0.872 ± 0.412 | 0.124 ± 0.125 | 0.933 ± 0.360
w/o negative samples | 0.669 ± 0.311 | 0.707 ± 0.299 | 0.959 ± 0.225 | 1.000 ± 0.223 | 0.993 ± 0.229
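Three of the similarity measures in Table 8 can be sketched directly on plain Python lists; these are applied between predicted and target embeddings. (Bilinear similarity additionally involves a learned weight matrix, and we assume N-MSE denotes MSE on normalized embeddings; both are omitted here.)

```python
import math

def mse(x, y):
    """Mean squared error between two equal-length vectors."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def inner(x, y):
    """Inner (dot) product."""
    return sum(a * b for a, b in zip(x, y))

def cosine(x, y):
    """Cosine similarity: inner product of the normalized vectors."""
    return inner(x, y) / (math.sqrt(inner(x, x)) * math.sqrt(inner(y, y)))

x, y = [1.0, 0.0], [0.0, 1.0]
print(mse(x, y), inner(x, y), cosine(x, y))  # 1.0 0.0 0.0
```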
Table 9: Normalized episodic rewards of A2LS (mean & standard deviation over 5 seeds on 6 environments) on vector-based DMControl100K with different encoder architectures.

A2LS-MLP (1-layer) | A2LS-MLP (4-layer) | A2LS-DenseMLP (1-layer) | A2LS-DenseMLP (4-layer)
0.919 ± 0.217      | 0.544 ± 0.360      | 1.000 ± 0.129           | 0.813 ± 0.218
Table 10: Mean and median scores (normalized by human score and random score) achieved by A2LS and baselines on 26 Atari games benchmarked at 100k time-steps (Atari100k).

Metric              | A2-winner | CURL  | Eff. Rainbow | DrQ [22] | SimPLe | DER   | OTRainbow | SPR   | Random | Human
Mean Human-Norm'd   | 0.568     | 0.381 | 0.285        | 0.357    | 0.443  | 0.285 | 0.264     | 0.704 | 0.000  | 1.000
Median Human-Norm'd | 0.317     | 0.175 | 0.161        | 0.268    | 0.144  | 0.161 | 0.204     | 0.415 | 0.000  | 1.000

D.9 Comparing A2-winner with Advanced Human-designed Auxiliary Losses
Table 11: Average AULC scores of the population at each stage.

Environment  | stage-1 | stage-2 | stage-3 | stage-4 | stage-5 | stage-6 | stage-7 | SAC (baseline)
Cheetah-Run  | 191.75  | 252.51  | 258.09  | 284.53  | 349.52  | 351.51  | 352.57  | 285.82
Reacher-Easy | 674.87  | 782.75  | 812.61  | 823.04  | 810.15  | 811.88  | 827.19  | 637.60
Walker-Walk  | 599.38  | 633.75  | 716.18  | 702.49  | N/A     | N/A     | N/A     | 675.84
Table 12: Average AULC scores of the top-5 candidates of each stage.

Environment  | stage-1 | stage-2 | stage-3 | stage-4 | stage-5 | stage-6 | stage-7 | SAC (baseline)
Cheetah-Run  | 398.18  | 424.27  | 428.08  | 485.54  | 487.94  | 482.65  | 498.46  | 285.82
Reacher-Easy | 931.27  | 950.61  | 943.83  | 938.91  | 954.77  | 955.02  | 969.43  | 637.60
Walker-Walk  | 834.09  | 883.77  | 896.52  | 880.73  | N/A     | N/A     | N/A     | 675.84
Table 13: Top-5 candidates of each stage in the Cheetah-Run (Image) evolution process.

As for Cheetah-Run, we still use the average score agents achieve at 100k, 200k, 300k, 400k, and 500k time steps, since agents converge close to the optimal score within 500k time steps.
[17] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729-9738, 2020.
[18] Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
[19] Rein Houthooft, Yuhua Chen, Phillip Isola, Bradly C. Stadie, Filip Wolski, Jonathan Ho, and Pieter Abbeel. Evolved policy gradients. In Advances in Neural Information Processing Systems 31 (NeurIPS 2018), pages 5405-5414, 2018.
[20] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In 5th International Conference on Learning Representations (ICLR 2017). OpenReview.net, 2017.
[21] Louis Kirsch, Sjoerd van Steenkiste, and Jürgen Schmidhuber. Improving generalization in meta reinforcement learning using learned objectives. In 8th International Conference on Learning Representations (ICLR 2020). OpenReview.net, 2020.
[22] Ilya Kostrikov, Denis Yarats, and Rob Fergus. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. arXiv preprint arXiv:2004.13649, 2020.
[23] Michael Laskin, Aravind Srinivas, and Pieter Abbeel. CURL: Contrastive unsupervised representations for reinforcement learning. In International Conference on Machine Learning, pages 5639-5650. PMLR, 2020.
[24] Misha Laskin, Kimin Lee, Adam Stooke, Lerrel Pinto, Pieter Abbeel, and Aravind Srinivas. Reinforcement learning with augmented data. Advances in Neural Information Processing Systems, 33:19884-19895, 2020.
[25] Misha Laskin, Kimin Lee, Adam Stooke, Lerrel Pinto, Pieter Abbeel, and Aravind Srinivas. Reinforcement learning with augmented data. Advances in Neural Information Processing Systems, 33:19884-19895, 2020.
[26] Alex X Lee, Anusha Nagabandi, Pieter Abbeel, and Sergey Levine. Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model. arXiv preprint arXiv:1907.00953, 2019.
[27] Chuming Li, Xin Yuan, Chen Lin, Minghao Guo, Wei Wu, Junjie Yan, and Wanli Ouyang. AM-LFS: AutoML for loss function search. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8410-8419, 2019.
[28] Hao Li, Tianwen Fu, Jifeng Dai, Hongsheng Li, Gao Huang, and Xizhou Zhu. AutoLoss-Zero: Searching loss functions from scratch for generic tasks. arXiv preprint arXiv:2103.14026, 2021.
[29] Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. Advances in Neural Information Processing Systems, 31, 2018.
[30] M. Lindauer, K. Eggensperger, M. Feurer, A. Biedenkapp, J. Marben, P. Müller, and F. Hutter. BOAH: A tool suite for multi-fidelity Bayesian optimization & analysis of hyperparameters. arXiv:1908.06756 [cs.LG].
[31] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
[32] Jelle Munk, Jens Kober, and Robert Babuška. Learning state representation for deep actor-critic control. In 2016 IEEE 55th Conference on Decision and Control (CDC), pages 4667-4673. IEEE, 2016.
[33] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
[34] Kei Ota, Devesh K Jha, and Asako Kanezaki. Training larger networks for deep reinforcement learning. arXiv preprint arXiv:2102.07920, 2021.
[35] Kei Ota, Tomoaki Oiki, Devesh Jha, Toshisada Mariyama, and Daniel Nikovski. Can increasing input dimensionality improve deep reinforcement learning? In International Conference on Machine Learning, pages 7424-7433. PMLR, 2020.
[36] Supratik Paul, Vitaly Kurin, and Shimon Whiteson. Fast efficient hyperparameter tuning for policy gradients. arXiv preprint arXiv:1902.06583, 2019.
[37] Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. Regularized evolution for image classifier architecture search. Proceedings of the AAAI Conference on Artificial Intelligence, 33:4780-4789, July 2019.
[38] Frederic Runge, Danny Stoll, Stefan Falkner, and Frank Hutter. Learning to design RNA. In 7th International Conference on Learning Representations (ICLR 2019). OpenReview.net, 2019.
[39] Max Schwarzer, Ankesh Anand, Rishab Goel, R Devon Hjelm, Aaron Courville, and Philip Bachman. Data-efficient reinforcement learning with self-predictive representations. In International Conference on Learning Representations, 2020.
[40] Evan Shelhamer, Parsa Mahmoudieh, Max Argus, and Trevor Darrell. Loss is its own reward: Self-supervision for reinforcement learning. arXiv preprint arXiv:1612.07307, 2016.
[41] Bradly C Stadie, Sergey Levine, and Pieter Abbeel. Incentivizing exploration in reinforcement learning with deep predictive models. arXiv preprint arXiv:1507.00814, 2015.
[42] Adam Stooke, Kimin Lee, Pieter Abbeel, and Michael Laskin. Decoupling representation learning from reinforcement learning. In International Conference on Machine Learning, pages 9870-9879. PMLR, 2021.
[43] Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, et al. DeepMind Control Suite. arXiv preprint arXiv:1801.00690, 2018.
[44] Hado P van Hasselt, Matteo Hessel, and John Aslanides. When to use parametric models in reinforcement learning? Advances in Neural Information Processing Systems, 32, 2019.
[45] Vivek Veeriah, Matteo Hessel, Zhongwen Xu, Janarthanan Rajendran, Richard L. Lewis, Junhyuk Oh, Hado van Hasselt, David Silver, and Satinder Singh. Discovery of useful questions as auxiliary tasks. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), pages 9306-9317, 2019.
The shape of learning curves: a review. Tom Viering, Marco Loog, arXiv:2103.10948arXiv preprintTom Viering and Marco Loog. The shape of learning curves: a review. arXiv preprint arXiv:2103.10948, 2021.
Grandmaster level in starcraft ii using multi-agent reinforcement learning. Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, H David, Richard Choi, Timo Powell, Petko Ewalds, Georgiev, Nature. 5757782Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Jun- young Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. Nature, 575(7782):350-354, 2019.
Loss function search for face recognition. Xiaobo Wang, Shuo Wang, Cheng Chi, Shifeng Zhang, Tao Mei, International Conference on Machine Learning. PMLRXiaobo Wang, Shuo Wang, Cheng Chi, Shifeng Zhang, and Tao Mei. Loss function search for face recognition. In International Conference on Machine Learning, pages 10029-10038. PMLR, 2020.
Meta-gradient reinforcement learning with an objective discovered online. Zhongwen Xu, Hado Philip Van Hasselt, Matteo Hessel, Junhyuk Oh, Satinder Singh, David Silver, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020. Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin2020Zhongwen Xu, Hado Philip van Hasselt, Matteo Hessel, Junhyuk Oh, Satinder Singh, and David Silver. Meta-gradient reinforcement learning with an objective discovered online. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
Guan Yang, Minghuan Liu, Weijun Hong, Weinan Zhang, Fei Fang, Guangjun Zeng, Yue Lin, arXiv:2203.16406Perfectdou: Dominating doudizhu with perfect information distillation. arXiv preprintGuan Yang, Minghuan Liu, Weijun Hong, Weinan Zhang, Fei Fang, Guangjun Zeng, and Yue Lin. Perfectdou: Dominating doudizhu with perfect information distillation. arXiv preprint arXiv:2203.16406, 2022.
Improving sample efficiency in model-free reinforcement learning from images. Denis Yarats, Amy Zhang, Ilya Kostrikov, Brandon Amos, Joelle Pineau, Rob Fergus, arXiv:1910.01741arXiv preprintDenis Yarats, Amy Zhang, Ilya Kostrikov, Brandon Amos, Joelle Pineau, and Rob Fergus. Improving sample efficiency in model-free reinforcement learning from images. arXiv preprint arXiv:1910.01741, 2019.
Nas-bench-101: Towards reproducible neural architecture search. Chris Ying, Aaron Klein, Esteban Real, Eric Christiansen, Kevin Murphy, Frank Hutter, Chris Ying, Aaron Klein, Esteban Real, Eric Christiansen, Kevin Murphy, and Frank Hutter. Nas-bench-101: Towards reproducible neural architecture search, 2019.
A self-tuning actor-critic algorithm. Tom Zahavy, Zhongwen Xu, Vivek Veeriah, Matteo Hessel, Junhyuk Oh, David Hado Van Hasselt, Satinder Silver, Singh, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020. Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin2020Tom Zahavy, Zhongwen Xu, Vivek Veeriah, Matteo Hessel, Junhyuk Oh, Hado van Hasselt, David Silver, and Satinder Singh. A self-tuning actor-critic algorithm. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Ad- vances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
[Figure: learning-curve panels for Cheetah-Run (Pixel), Reacher-Easy (Pixel), Walker-Walk (Pixel), Cheetah-Run (State), Hopper-Hop (State), Quadruped-Run (State)]
Table 15: Top-5 candidates of each stage in the Walker-Walk (Image) evolution process.
Table 16: Top-5 candidates of each stage in the Cheetah-Run (Vector) evolution process.
Table 17: Top-5 candidates of each stage in the Hopper-Hop (Vector) evolution process.
Table 18: Top-5 candidates of each stage in the Quadruped-Run (Vector) evolution process.
Delta-gravity and Dark Energy
29 Jan 2012 January 31, 2012
J Alfaro [email protected]
Pontificia Universidad Católica de Chile
Av. Vicuña Mackenna 4860SantiagoChile
We present a model of the gravitational field based on two symmetric tensors. The equations of motion of test particles are derived: massive particles do not follow geodesics, but massless particle trajectories are null geodesics of an effective metric. Outside matter, the predictions of the model coincide exactly with General Relativity, so all classical tests are satisfied. In Cosmology, we get accelerated expansion without a cosmological constant.
General Relativity (GR) works very well at macroscopic scales [1]. Its quantization has proved to be difficult, though. It is non-renormalizable, which prevents its unification with the other forces of Nature. Trying to make sense of quantum GR is the main physical motivation of string theories [2]. Moreover, recent discoveries in Cosmology [3,4] have revealed that most of the matter content is in the form of unknown matter (dark matter, DM), and that the dynamics of the expansion of the Universe is governed by a mysterious component that accelerates the expansion (dark energy, DE). Although GR is able to accommodate both DM and DE, the interpretation of the dark sector in terms of fundamental theories of elementary particles is problematic [5]. Although some candidates exist that could play the role of DM, none have been detected yet. Also, an alternative explanation based on the modification of the dynamics for small accelerations cannot be ruled out [6].
In GR, DE can be explained if a small cosmological constant (Λ) is present. At the later stages of the evolution of the Universe, Λ will dominate the expansion, explaining the acceleration. Such a small Λ is very difficult to generate in Quantum Field Theory (QFT) models, because in these models Λ is the vacuum energy, which is usually very large.
One of the most important questions in Cosmology and cosmic structure formation is to understand the nature of dark energy in the context of a fundamental physical theory [17].
In recent years there have been various proposals to explain the observed acceleration of the universe. They involve the inclusion of some additional field, as in quintessence, chameleon, vector dark energy or massive gravity; the addition of higher-order terms to the Einstein-Hilbert action, as in f(R) theories and Gauss-Bonnet terms; and the modification of gravity on large scales by the introduction of extra dimensions. For a review, see [7].
Less widely explored, but interesting, possibilities are the search for nontrivial ultraviolet fixed points in gravity (asymptotic safety [9]) and the notion of induced gravity [10]. The first possibility uses exact renormalization-group techniques [11], as well as lattice and numerical techniques such as Lorentzian triangulation analysis [12]. Induced gravity proposes that gravitation is a residual force produced by other interactions.
In a recent paper [13], a two-dimensional field theory model explores the emergence of geometry by the spontaneous symmetry breaking of a larger symmetry where the metric is absent. Previous work in this direction can be found in [14], [15] and [16]. In this paper, we wish to present a model of gravitation that is as close as possible to classical GR, but could make sense at the quantum level. The main observation is that GR is finite on shell at one loop [18]. In [20,19] we presented a type of gauge theories, δ gauge theories (DGT). The main properties of DGT are: 1) The classical equations of motion are satisfied in the full quantum theory. 2) They live at one loop. 3) They are obtained through the extension of the original symmetry of the model, introducing an extra symmetry that we call δ symmetry, since it is formally obtained as the variation of the original symmetry. When we apply this prescription to GR we obtain δ gravity. Quantization of δ gravity is discussed in [21].
The impact of dark energy on cosmological observations can be expressed in terms of a fluid equation of state p = w(R)ρ, which is to be determined studying its influence on the large-scale structure and dynamics of the Universe.
In this paper we follow the same approach. So we will not include the matter dynamics, except by demanding that the energy-momentum tensor of the matter fluid is covariantly conserved. This is required in order to respect the symmetries of the model.
The main properties of this model at the classical level are: a) It agrees with GR outside the sources, with adequate boundary conditions. In particular, the causal structure of delta gravity in vacuum is the same as in General Relativity, so all standard tests are satisfied automatically. b) When we study the evolution of the Universe, it predicts acceleration without a cosmological constant or additional scalar fields. The Universe ends in a Big Rip, similar to the scenario considered in [23]. c) The scale factor agrees with standard cosmology at early times and shows acceleration only at late times. Therefore we expect that density perturbations should not have large corrections.
It should be remarked that δ-gravity is not a metric model of gravity because massive particles do not move on geodesics. Only massless particles move on null geodesics of a linear combination of both tensor fields.
It was noticed in [20] that the Hamiltonian of delta models is not bounded from below. Phantom cosmological models [22], [23] also have this property. Although it is not clear whether this problem will persist in a diffeomorphism-invariant model such as delta gravity, we mention some ways out of the difficulty at the end.
Definition of Delta gravity In this section we define the action as well as the symmetries of the model and derive the equations of motion.
We use the metric convention of [8]. The action of δ gravity is:
\[
S(g,\tilde g,\lambda)=\int d^dx\,\sqrt{-g}\left(-\frac{1}{2\kappa}R+L_M\right)+\kappa_2\int d^dx\,\sqrt{-g}\left(R_{\mu\nu}-\frac12 g_{\mu\nu}R+\kappa T_{\mu\nu}\right)\tilde g^{\mu\nu}+\kappa_2\kappa\int d^dx\,\sqrt{-g}\left(\lambda_{\mu;\nu}+\lambda_{\nu;\mu}\right)T^{\mu\nu}\tag{1}
\]
Here κ = 8πG/c⁴, κ₂ is an arbitrary constant, and
\[
T^{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\,L_M\right)}{\delta g_{\mu\nu}}
\]
is the energy-momentum tensor of matter. R_{μν} is the Ricci tensor and R the curvature scalar of g_{μν}; g̃^{μν} is a two-contravariant tensor under general coordinate transformations.
The action (1) is obtained by applying the prescription contained in [20,19]. That is, we add to the action of general relativity its own variation, and consider the variation δg_{μν} = g̃_{μν} as a new field. Similarly, the symmetries written below are obtained as the variation of the infinitesimal general coordinate transformations, where the variation of the infinitesimal parameter, δξ₀^ρ = ξ₁^ρ, is the infinitesimal parameter of the new transformation δ. The last term in (1) is needed to impose the condition T^{μν}_{;ν} = 0 as an equation of motion, in order to implement the δ symmetry (2) off shell. This term is not needed in vacuum.
Action (1) is invariant under the following transformations (δ):
\[
\begin{aligned}
\delta g_{\mu\nu}&=g_{\mu\rho}\xi^{\rho}_{0,\nu}+g_{\nu\rho}\xi^{\rho}_{0,\mu}+g_{\mu\nu,\rho}\xi^{\rho}_{0}=\xi_{0\mu;\nu}+\xi_{0\nu;\mu}\\
\delta\tilde g_{\mu\nu}(x)&=\xi_{1\mu;\nu}+\xi_{1\nu;\mu}+\tilde g_{\mu\rho}\xi^{\rho}_{0,\nu}+\tilde g_{\nu\rho}\xi^{\rho}_{0,\mu}+\tilde g_{\mu\nu,\rho}\xi^{\rho}_{0}\\
\delta\lambda_{\mu}&=-\xi_{1\mu}+\lambda_{\rho}\xi^{\rho}_{0,\mu}+\lambda_{\mu,\rho}\xi^{\rho}_{0}
\end{aligned}\tag{2}
\]
From now on we will fix the gauge λ_μ = 0. This gauge preserves general coordinate transformations but completely fixes the extra symmetry with parameter ξ_{1μ}.
Equations of motion. Varying g_{μν} we get:
\[
S^{\gamma\sigma}+\frac12\left(R\,\tilde g^{\gamma\sigma}-g_{\mu\nu}\tilde g^{\mu\nu}R^{\gamma\sigma}\right)-\frac12 g^{\gamma\sigma}\frac{1}{\sqrt{-g}}\left(\sqrt{-g}\,\nabla_{\nu}\tilde g^{\mu\nu}\right)_{,\mu}+\frac14 g^{\gamma\sigma}\frac{1}{\sqrt{-g}}\left(\sqrt{-g}\,g^{\alpha\beta}\nabla_{\beta}\left(g_{\mu\nu}\tilde g^{\mu\nu}\right)\right)_{,\alpha}=\kappa\,\frac{\delta T_{\mu\nu}}{\delta g_{\gamma\sigma}}\tilde g^{\mu\nu}\tag{3}
\]
where S^{γσ} = (U^{σβγρ} + U^{γβσρ} − U^{σγβρ})_{;ρβ} and U^{αβγρ} = ½ g^{γρ}(g̃^{βα} − ½ g^{αβ} g_{μν}g̃^{μν}). Varying g̃^{μν} we get Einstein's equation:
\[
R^{\mu\nu}-\frac12 g^{\mu\nu}R+\kappa T^{\mu\nu}=0\tag{4}
\]
Varying λ_μ we get T^{μν}_{;ν} = 0. Covariant derivatives, as well as the raising and lowering of indices, are defined using g_{μν}. Notice that outside the sources (T^{μν} = 0), a solution of (3) is g̃_{μν} = λ g_{μν} for a constant λ, since g_{μν;ρ} = 0 and R_{μν} = 0. We will have g̃_{μν} = g_{μν}, assuming that both fields satisfy the same boundary conditions far from the sources. But other solutions exist in vacuum; a simple case is presented in equation (48) of [21]. That solution is interesting because any finite-size body will look point-like when watched from far away. Studying the motion of massive and massless test particles, using the equations derived below, we can see that the parameter β produces an additional gravitational force. Further study is needed to ascertain whether this additional gravitational force can be used to understand dark matter.
The equation for g̃_{μν} is of second order in the derivatives.
Particle motion in the gravitational field. We are aware of the presence of the gravitational field through its effects on test particles. For this reason, here we discuss the coupling of a test particle to a background gravitational field, such that the action of the particle is invariant under (2).
In δ gravity we postulate the following action for a test particle:
\[
S_p=-m\int dt\,\sqrt{-g_{\mu\nu}\dot x^{\mu}\dot x^{\nu}}+\kappa_2'\int d^n y\,\sqrt{-g}\,T^{\mu\nu}\left(\tilde g_{\mu\nu}+\lambda_{\mu;\nu}+\lambda_{\nu;\mu}\right)
\]
where T^{μν} is the energy-momentum tensor of the test particle:
\[
T^{\mu\nu}(y)=\frac{m}{2\sqrt{-g}}\int dt\,\frac{\dot x^{\mu}\dot x^{\nu}}{\sqrt{-g_{\alpha\beta}\dot x^{\alpha}\dot x^{\beta}}}\,\delta(y-x)
\]
and κ'₂ = κ₂κ is a dimensionless constant. That is:
\[
S_p=m\int dt\,\frac{\left(g_{\mu\nu}+\frac{\kappa_2'}{2}\bar g_{\mu\nu}\right)\dot x^{\mu}\dot x^{\nu}}{\sqrt{-g_{\alpha\beta}\dot x^{\alpha}\dot x^{\beta}}}\tag{5}
\]
where ḡ_{μν} = g̃_{μν} + λ_{μ;ν} + λ_{ν;μ}. Notice that S_p is invariant under (2) and under t-reparametrizations. From now on we work in the gauge λ_μ = 0. Since far from the sources we must have free particles in Minkowski space, i.e. g_{μν} ∼ η_{μν}, g̃_{μν} ∼ η_{μν}, it follows that we are describing the motion of a particle of mass m' = m(1 + κ'₂/2). Since in vacuum g̃_{μν} = g_{μν}, the equation of motion for test particles is the same as Einstein's. Moreover, the equation of motion is independent of the mass of the particle.
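As a sanity check on the flat-space limit, the sketch below evaluates the integrand of the test-particle action in the reconstructed form of (5), S_p = m∫dt (g_{μν} + (κ'₂/2)ḡ_{μν})ẋ^μẋ^ν/√(−g_{αβ}ẋ^αẋ^β), with g_{μν} = ḡ_{μν} = η_{μν}, and confirms it is (1 + κ'₂/2) times the free relativistic particle integrand, i.e. the effective mass is m' = m(1 + κ'₂/2). The numeric values are illustrative only.

```python
import math

ETA = [-1.0, 1.0, 1.0, 1.0]  # Minkowski metric diag(-1, 1, 1, 1)

def quad(metric, xdot):
    # contraction g_{mu nu} xdot^mu xdot^nu for a diagonal metric
    return sum(g * v * v for g, v in zip(metric, xdot))

def integrand(m, kp2, g, gbar, xdot):
    # Lagrangian density of the test-particle action (reconstructed form of (5))
    num = quad(g, xdot) + 0.5 * kp2 * quad(gbar, xdot)
    return m * num / math.sqrt(-quad(g, xdot))

m, kp2 = 1.0, 1e-5
xdot = [1.0, 0.3, 0.1, 0.0]                 # timelike four-velocity
free = -m * math.sqrt(-quad(ETA, xdot))     # free particle: -m sqrt(-eta x' x')
ratio = integrand(m, kp2, ETA, ETA, xdot) / free
print(ratio)  # equals 1 + kp2/2
```

The ratio is exactly 1 + κ'₂/2, independent of the chosen four-velocity.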
In order to include massless particles, we prefer to use the action [24]:
\[
L=\frac12\int dt\left[v m^{2}-v^{-1}\left(g_{\mu\nu}+\kappa_2'\bar g_{\mu\nu}\right)\dot x^{\mu}\dot x^{\nu}+\frac{m^{2}+v^{-2}\left(g_{\mu\nu}+\kappa_2'\bar g_{\mu\nu}\right)\dot x^{\mu}\dot x^{\nu}}{2v^{-3}\,g_{\alpha\beta}\dot x^{\alpha}\dot x^{\beta}}\left(m^{2}+v^{-2}g_{\lambda\rho}\dot x^{\lambda}\dot x^{\rho}\right)\right]\tag{6}
\]
This action is invariant under reparametrizations:
\[
x'(t')=x(t);\qquad dt'\,v'(t')=dt\,v(t);\qquad t'=t-\varepsilon(t)\tag{7}
\]
The equation of motion for v is:
\[
v=\frac{\sqrt{-g_{\mu\nu}\dot x^{\mu}\dot x^{\nu}}}{m}\tag{8}
\]
Replacing (8) into (6), we get back (5). Let us consider first the massive case. Using (7) we can fix the gauge v = 1. Introducing m dt = dτ, we get the action:
\[
L_1=\frac{m}{2}\int d\tau\left[1-\left(g_{\mu\nu}+\kappa_2'\bar g_{\mu\nu}\right)\dot x^{\mu}\dot x^{\nu}+\frac{1+\left(g_{\mu\nu}+\kappa_2'\bar g_{\mu\nu}\right)\dot x^{\mu}\dot x^{\nu}}{2\,g_{\alpha\beta}\dot x^{\alpha}\dot x^{\beta}}\left(1+g_{\lambda\rho}\dot x^{\lambda}\dot x^{\rho}\right)\right]\tag{9}
\]
plus the constraint obtained from the equation of motion for v:
\[
g_{\mu\nu}\dot x^{\mu}\dot x^{\nu}=-1\tag{10}
\]
From L₁ the equation of motion for massive particles is derived. We define 𝐠_{μν} = g_{μν} + (κ'₂/2) ḡ_{μν}. Then:
\[
\frac{d}{d\tau}\left(\dot x^{\mu}\dot x^{\nu}\mathbf{g}_{\mu\nu}\,\dot x^{\beta}g_{\alpha\beta}+2\dot x^{\beta}\mathbf{g}_{\alpha\beta}\right)-\frac12\,\dot x^{\mu}\dot x^{\nu}\mathbf{g}_{\mu\nu}\,\dot x^{\beta}\dot x^{\gamma}g_{\beta\gamma,\alpha}-\dot x^{\mu}\dot x^{\nu}\mathbf{g}_{\mu\nu,\alpha}=0\tag{11}
\]
We will discuss the motion of massive particles elsewhere. The action for massless particles is:
\[
L_0=-\frac14\int dt\,v^{-1}\left(g_{\mu\nu}+\kappa_2'\bar g_{\mu\nu}\right)\dot x^{\mu}\dot x^{\nu}\tag{12}
\]
In the gauge v = 1, we get:
\[
L_0=-\frac14\int dt\left(g_{\mu\nu}+\kappa_2'\bar g_{\mu\nu}\right)\dot x^{\mu}\dot x^{\nu}\tag{13}
\]
plus the equation of motion for v evaluated at v = 1: (g_{μν} + κ'₂ḡ_{μν})ẋ^μẋ^ν = 0. So the massless particle moves on a null geodesic of the combination g_{μν} + κ'₂ḡ_{μν}.
Distances and time intervals. In this section, we define the measurement of time and distances in the model.
In GR the geodesic equation preserves the proper time of the particle along the trajectory. Equation (11) satisfies the same property: along the trajectory, ẋ^μẋ^ν g_{μν} is constant. Therefore we define proper time using the original metric g_{μν}:
\[
d\tau=\sqrt{-g_{\mu\nu}dx^{\mu}dx^{\nu}}=\sqrt{-g_{00}}\,dx^{0}\qquad(dx^{i}=0)\tag{14}
\]
Following [25], we consider the motion of light rays along infinitesimally near trajectories, together with (14), to get the three-dimensional metric:
\[
dl^{2}=\gamma_{ij}dx^{i}dx^{j},\qquad \gamma_{ij}=\frac{g_{00}}{\mathbf{g}_{00}}\left(\mathbf{g}_{ij}-\frac{\mathbf{g}_{0i}\mathbf{g}_{0j}}{\mathbf{g}_{00}}\right)\tag{15}
\]
where 𝐠_{μν} = g_{μν} + κ'₂ḡ_{μν} is the effective metric seen by light rays.
That is, we measure proper time using the metric g_{μν}, but the space geometry is determined by both metrics. In this model massive particles do not move on geodesics of a four-dimensional metric; only massless particles move on null geodesics of g_{μν} + κ'₂ḡ_{μν}. So delta gravity is not a metric theory.
The Newtonian limit. The motion of a non-relativistic particle in a weak static gravitational field is obtained using g_{μν} = diag(−1 − 2Uǫ, 1 − 2Uǫ, 1 − 2Uǫ, 1 − 2Uǫ), which solves Einstein's equations to first order in ǫ if ∇²U = ½κρ. The solution for g̃_{μν} is g̃_{μν} = diag(ǫŨ, 1 + ǫ(Ũ − 2U), 1 + ǫ(Ũ − 2U), 1 + ǫ(Ũ − 2U)). Solving (3) to first order in ǫ we get ∇²Ũ = ½κρ. To recover the Minkowski metric far from the sources, ρ → 0, we must require there U → 0, Ũ → −ǫ⁻¹. Equation (11) implies d²x^i/dt² = −φ_{,i} with φ = U − κ'₂(2U + Ũ). The Newtonian potential satisfies ∇²φ = (κ/2)(1 − 3κ'₂)ρ, with |κ'₂| ≪ 1.
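The coefficient bookkeeping above can be checked by linearity. A minimal sketch, assuming the potential combination φ = U − κ'₂(2U + Ũ) as read off here, with ∇²U = ∇²Ũ = κρ/2: the Laplacians combine to ∇²φ = (κ/2)(1 − 3κ'₂)ρ, i.e. an effective Newton constant rescaled by (1 − 3κ'₂).

```python
kappa_rho = 1.7   # stands in for the combination kappa*rho at a point
kp2 = 1e-5        # kappa'_2

lap_U = 0.5 * kappa_rho    # Laplacian of U
lap_Ut = 0.5 * kappa_rho   # Laplacian of U-tilde (same source equation)
# phi = U - kp2*(2U + U-tilde): Laplacians combine linearly
lap_phi = lap_U - kp2 * (2 * lap_U + lap_Ut)
effective = 0.5 * (1 - 3 * kp2) * kappa_rho
print(lap_phi, effective)  # the two agree
```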
The whole effect is a small redefinition of Newton constant.
Gravitational redshift experiments can be used to put bounds on κ'₂. According to (14), the shift in frequency of a source located at x₁, compared to the same source located at x₂, due to the change in gravitational potential is
\[
\frac{\nu_2-\nu_1}{\nu_1}=\phi_N(x_2)-\phi_N(x_1)
\]
where φ_N is the usual Newtonian potential, computed with κ as Newton constant. From [26] we get Δν/ν = (1 + (2.5 ± 70) × 10⁻⁶)(ϕ_S − ϕ_E + ...), where ϕ_S is the gravitational potential at the spacecraft position, ϕ_E is the gravitational potential on Earth, and the ellipsis accounts for additional effects not related to the gravitational potential. We can ascribe the uncertainty of the experiment to κ'₂, to get the bound |κ'₂| < 24 × 10⁻⁶. This bound is conservative because the Newton constant itself has a larger relative error [27]: G = (6.67428 ± 0.00067) × 10⁻¹¹ m³ kg⁻¹ s⁻².
In our description of the evolution of the Universe, the value of κ ′ 2 is not important, so we will keep it arbitrary for the time being.
Friedmann-Robertson-Walker (FRW) metric. This is the main section of the paper. We discuss the equations of motion for the Universe described by the FRW metric. We use spatial curvature equal to zero, in agreement with cosmological observations. In this paper we deal only with a perfect fluid, since rotational and translational invariance implies that the energy-momentum tensor of the Universe has this form. The energy-momentum tensor of a perfect fluid is [8]:
\[
T^{\mu\nu}=p\,g^{\mu\nu}+(p+\rho)U^{\mu}U^{\nu},\qquad g_{\mu\nu}U^{\mu}U^{\nu}=-1\tag{16}
\]
Then:
\[
\frac{\delta T_{\mu\nu}}{\delta g_{\gamma\sigma}}\tilde g^{\mu\nu}=p\,\tilde g_{\gamma\sigma}+\frac12(p+\rho)\left(U_{\gamma}U^{\nu}\tilde g_{\sigma\nu}+U_{\sigma}U^{\nu}\tilde g_{\gamma\nu}\right)\tag{17}
\]
In this case, assuming a flat three-dimensional metric:
\[
-ds^{2}=dt^{2}-R(t)^{2}\left(dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta\,d\phi^{2}\right)
\]
\[
-d\tilde s^{2}=\tilde A(t)\,dt^{2}-\tilde B(t)\left(dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta\,d\phi^{2}\right)
\]
Using (11) and (14), we can check that these are co-moving coordinates and that the proper time interval dτ for a co-moving clock is just dt, so t is the time measured in the rest frame of a co-moving clock. Equations (3) and (17) give:
\[
-\dot R\dot{\tilde B}-\frac12\kappa p R\tilde B+\frac12 R^{-1}\dot R^{2}\tilde B-\frac16\kappa\rho R^{3}\tilde A+\frac32 R\dot R^{2}\tilde A=0
\]
\[
-\kappa p\tilde B-2\ddot{\tilde B}-R^{-2}\dot R^{2}\tilde B+2R^{-1}\ddot R\tilde B+2R^{-1}\dot R\dot{\tilde B}+\kappa\rho R^{2}\tilde A+\dot R^{2}\tilde A+2R\dot R\dot{\tilde A}+2R\ddot R\tilde A=0\tag{18}
\]
Einstein's equations are:
\[
3\frac{\dot R^{2}}{R^{2}}=\kappa\rho,\qquad 2R\ddot R+\dot R^{2}=-\kappa R^{2}p
\]
We use the equation of state p = wρ to get, for w ≠ −1:
\[
R=R_{0}\,t^{\frac{2}{3(1+w)}},\qquad \tilde A=3wl^{2}\,t^{\frac{w-1}{w+1}},\qquad \tilde B=R_{0}^{2}l^{2}\,t^{b},\qquad b=\frac{4}{3w+3}+\frac{w-1}{w+1}\tag{19}
\]
l² is a free parameter.
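The power-law exponents in (19) can be verified directly. A small sketch, assuming flat FRW with p = wρ and ρ ∝ R^{−3(1+w)} from energy conservation: writing R = R₀t^c with c = 2/(3(1+w)), both Friedmann equations reduce to algebraic identities in the exponents, and the Ã and B̃ terms in the field equations carry the same power of t (namely c + b = 3c + a = 1).

```python
def exponents(w):
    # exponents of the power-law solution (19)
    c = 2.0 / (3.0 * (1.0 + w))                         # R ~ t^c
    a = (w - 1.0) / (w + 1.0)                           # A-tilde ~ t^a
    b = 4.0 / (3.0 * w + 3.0) + (w - 1.0) / (w + 1.0)   # B-tilde ~ t^b
    return c, a, b

checks = []
for w in (0.01, 0.1, 1.0 / 3.0, 1.0):
    c, a, b = exponents(w)
    # Friedmann: 3(c/t)^2 = kappa*rho with rho ~ t^{-3(1+w)c} requires 3(1+w)c = 2
    friedmann = 3.0 * (1.0 + w) * c - 2.0
    # acceleration eq: 2c(c-1) + c^2 = -3w c^2  <=>  c(3(1+w)c - 2) = 0
    accel = 2.0 * c * (c - 1.0) + c * c + 3.0 * w * c * c
    # A-tilde and B-tilde terms share one power of t: c + b = 3c + a (= 1)
    powers = (c + b) - (3.0 * c + a)
    checks.append((friedmann, accel, powers, c + b))
print(checks)  # all residuals vanish, and c + b = 1 for every w
```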
Red Shift. To make the usual connection between redshift and the scale factor, we consider light waves traveling to r = 0 from r = r₁, along the r direction with fixed θ, φ. Photons move on a null geodesic of the effective metric:
\[
0=-\left(1+\kappa_2'\tilde A\right)dt^{2}+\left(R^{2}+\kappa_2'\tilde B\right)\left(dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta\,d\phi^{2}\right)\tag{20}
\]
So,
\[
\int_{t_{1}}^{t_{0}}dt\,\sqrt{\frac{1+\kappa_2'\tilde A(t)}{R^{2}(t)+\kappa_2'\tilde B(t)}}=r_{1}\tag{21}
\]
A typical galaxy will have fixed r₁, θ₁, φ₁. If a second wave crest is emitted at t = t₁ + δt₁ from r = r₁, it will reach r = 0 at t₀ + δt₀, where
\[
\int_{t_{1}+\delta t_{1}}^{t_{0}+\delta t_{0}}dt\,\sqrt{\frac{1+\kappa_2'\tilde A(t)}{R^{2}(t)+\kappa_2'\tilde B(t)}}=r_{1}
\]
Therefore, for δt₁, δt₀ small, which is appropriate for light waves, we have:
\[
\delta t_{0}\,\sqrt{\frac{1+\kappa_2'\tilde A}{R^{2}+\kappa_2'\tilde B}}\,(t_{0})=\delta t_{1}\,\sqrt{\frac{1+\kappa_2'\tilde A}{R^{2}+\kappa_2'\tilde B}}\,(t_{1})\tag{22}
\]
Introduce:
\[
\tilde R(t)=\sqrt{\frac{R^{2}+\kappa_2'\tilde B}{1+\kappa_2'\tilde A}}\,(t)
\]
We get δt₀/δt₁ = R̃(t₀)/R̃(t₁). A crucial point is that, according to equation (14), δt measures the change in proper time. That is, ν₁/ν₀ = R̃(t₀)/R̃(t₁), where ν₀ is the light frequency detected at r = 0 corresponding to a source emitting at frequency ν₁. In terms of the redshift parameter z, defined as the fractional increase of the wavelength λ:
\[
z=\frac{\tilde R(t_{0})}{\tilde R(t_{1})}-1=\frac{\lambda_{0}-\lambda_{1}}{\lambda_{1}}\tag{23}
\]
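The crest-spacing argument can be reproduced numerically. A sketch, assuming the late-time form R̃(t) = t^{2/3}√a/√(a − t) (the w → 0 limit of the scaling solution derived later, with an illustrative value a = 2.21 in units of t₀): two crests emitted Δt₁ apart at t₁ arrive at r = 0 separated by Δt₀, and the ratio Δt₀/Δt₁ agrees with R̃(t₀)/R̃(t₁) = 1 + z.

```python
import math

A = 2.21  # illustrative value of the parameter a (units of t0)

def R_tilde(t):
    # late-time modified scale factor, w -> 0 limit
    return t ** (2.0 / 3.0) * math.sqrt(A) / math.sqrt(A - t)

def comoving(ta, tb, n=4000):
    # r = integral_{ta}^{tb} dt / R_tilde(t), trapezoid rule
    h = (tb - ta) / n
    s = 0.5 / R_tilde(ta) + 0.5 / R_tilde(tb)
    s += sum(1.0 / R_tilde(ta + i * h) for i in range(1, n))
    return s * h

t0, t1, dt1 = 1.0, 0.5, 1e-4
r1 = comoving(t1, t0)          # comoving distance of the source

# second crest, emitted at t1 + dt1: find its arrival time by bisection
lo, hi = t0, t0 + 10.0 * dt1
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if comoving(t1 + dt1, mid) < r1:
        lo = mid
    else:
        hi = mid
dt0 = 0.5 * (lo + hi) - t0

z = R_tilde(t0) / R_tilde(t1) - 1.0
print(dt0 / dt1, 1.0 + z)      # the two numbers agree to O(dt1)
```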
We see that R̃ replaces the usual scale factor R in the computation of z.
Luminosity distance. Let us consider a mirror of radius b that is receiving light from a distant source. The photons that reach the mirror are inside a cone of half-angle ε with origin at the source. Let us compute ε. The light path of rays coming from a far away source at x⃗₁ is given by x⃗(ρ) = ρn̂ + x⃗₁, where ρ > 0 is a parameter and n̂ is the direction of the light ray. The path reaches us at x⃗ = 0 for ρ = |x⃗₁| = r₁, so n̂ = −x̂₁ + ε⃗. Since n̂ and x̂₁ have modulus 1, ε = |ε⃗| ≪ 1 is precisely the angle between −x⃗₁ and n̂ at the source. The impact parameter is the proper distance of the path from the origin when ρ = |x⃗₁|, and the proper distance is determined by the three-dimensional metric (15). That is, b = R̃(t₀) r₁ ε, i.e. ε = b/(R̃(t₀) r₁).
Then the solid angle of the cone is πε² = A/(r₁²R̃(t₀)²), where A = πb² is the proper area of the mirror. The fraction of all isotropically emitted photons that reach the mirror is f = A/(4πr₁²R̃(t₀)²). Each photon carries an energy hν₁ at the source and hν₀ at the mirror, and photons emitted at intervals δt₁ arrive at intervals δt₀, with ν₁/ν₀ = δt₀/δt₁ = R̃(t₀)/R̃(t₁). Therefore the power received at the mirror is P₀ = L (R̃(t₁)²/R̃(t₀)²) f, where L is the luminosity of the source. The apparent luminosity is
\[
l=\frac{P_{0}}{A}=L\,\frac{\tilde R(t_{1})^{2}}{\tilde R(t_{0})^{2}}\,\frac{1}{4\pi r_{1}^{2}\tilde R(t_{0})^{2}}
\]
In Euclidean space the luminosity decreases with distance d according to l = L/(4πd²). This permits us to define the luminosity distance:
\[
d_{L}=\sqrt{\frac{L}{4\pi l}}=\frac{\tilde R(t_{0})^{2}\,r_{1}}{\tilde R(t_{1})}
\]
Using (21) we can write this in terms of the redshift:
\[
d_{L}=(1+z)\int_{0}^{z}\frac{dz'}{\tilde H(z')},\qquad \tilde H=\frac{\dot{\tilde R}}{\tilde R}\tag{24}
\]
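Equation (24) can be cross-checked numerically: with z = R̃(t₀)/R̃(t₁) − 1 one has dz/H̃(z) = −R̃(t₀) dt/R̃(t), so (1 + z)∫₀^z dz′/H̃ must reproduce R̃(t₀)² r₁/R̃(t₁). The sketch below does the z-space integral the hard way (inverting t(z) by bisection and differentiating R̃ numerically) and compares it with the direct formula, using the illustrative late-time form R̃(t) = t^{2/3}√a/√(a − t) with a = 2.21.

```python
import math

A = 2.21              # illustrative parameter a (units of t0)
t0, t1 = 1.0, 0.4

def R_tilde(t):
    return t ** (2.0 / 3.0) * math.sqrt(A) / math.sqrt(A - t)

def trapz(f, a, b, n=2000):
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

z_max = R_tilde(t0) / R_tilde(t1) - 1.0

# direct definition: d_L = R_tilde(t0)^2 r1 / R_tilde(t1)
r1 = trapz(lambda t: 1.0 / R_tilde(t), t1, t0)
dL_direct = R_tilde(t0) ** 2 * r1 / R_tilde(t1)

def t_of_z(zv):
    # invert z(t), which decreases monotonically in t on (0, t0]
    lo, hi = 1e-4, t0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if R_tilde(t0) / R_tilde(mid) - 1.0 > zv:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def H_tilde(t, h=1e-6):
    # H = R_tilde'/R_tilde via central difference
    return (R_tilde(t + h) - R_tilde(t - h)) / (2.0 * h) / R_tilde(t)

# eq. (24): d_L = (1+z) integral_0^z dz'/H(z')
dL_24 = (1.0 + z_max) * trapz(lambda zv: 1.0 / H_tilde(t_of_z(zv)), 0.0, z_max, n=400)
print(dL_direct, dL_24)  # the two evaluations agree
```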
Supernova Ia data. The supernova Ia data give m (apparent or effective magnitude) as a function of z. This is related to the distance d_L by m = M + 5 log₁₀(d_L/10 pc). Here M is common to all supernovae, and m changes with d_L alone.
We compare δ gravity to General Relativity (GR) with a cosmological constant:
\[
H^{2}=H_{0}^{2}\left(\Omega_{m}(1+z)^{3}+(1-\Omega_{m})\right),\qquad \Omega_{\Lambda}=1-\Omega_{m}
\]
Notice that Ã = 0 for w = 0 in (19). So it seems that we cannot fit the supernova data. However, w = 0 is not the only component of the Universe: the massless particles that decoupled earlier still remain. It means that the true w satisfies 0 < w < 1/3, but is very close to w = 0. So we fit the data with w = 0.1, 0.01, 0.001 and see how sensitive the predictions are to the value of w.
Using data from Essence [28], we notice that the $\chi^2$ test changes very little across the chosen sequence of $w$'s. Each fit determines the best $l_2$ for a given $w$. In this way we see that $l_2$ scales like $l_2 \sim a^{3w}$, with $a$ independent of $w$. As an approximation to the limit $w = 0$, we get:
$$\tilde R(t) = R(t)\,\frac{\sqrt a}{\sqrt{a - t}} \tag{25}$$
$l_2^{1/(3w)}$ renormalizes the derivative of $\tilde R$ at $t = 0$. It is not divergent, because for $t \to 0$, $w \to 1/3$. Here $a$ is a free parameter determined by the best fit to the data.
Of course, the complete model must include the contribution of normal matter ($w = 0$) plus relativistic matter ($w = 1/3$). But at later times the data should tend to (25).
Let us fit the data to the simple scaling model (25). We get:
General Relativity: $\Omega_m = 0.22 \pm 0.03$, $M = 43.29 \pm 0.03$, $\chi^2$ (per point) $= 1.0328$;
Delta Gravity: $a = 2.21 \pm 0.12$, $M = 43.45 \pm 0.06$, $\chi^2$ (per point) $= 1.0327$.
δ-gravity with non-relativistic (NR) matter alone gives a fit to the data as good as GR with NR matter plus a cosmological constant.
According to the fit to the data, a Big Rip will happen at $t = 2.21049$ in units of $t_0$ (today). It is a scenario similar to the one in [23].
Finally, we want to point out that since for $t \to 0$ we have $w \to 1/3$, then $\tilde R(t) = R(t)$. Therefore the accelerated expansion is slower than (25) when we include both matter and radiation in the model.
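For intuition about the best-fit scaling model (25), the ratio $\tilde R(t)/R(t) = \sqrt a/\sqrt{a-t}$ can be tabulated. This small sketch (ours; the best-fit value $a = 2.21$ is taken from the text) shows the ratio equal to 1 at $t = 0$ and diverging as $t$ approaches the Big Rip time $t = a$, in units of $t_0$.

```python
import math

def delta_factor(t, a=2.21):
    # R~(t)/R(t) = sqrt(a)/sqrt(a - t) from the scaling model (25); diverges at t = a (Big Rip)
    return math.sqrt(a) / math.sqrt(a - t)

# The ratio equals 1 at t = 0 and blows up near t = a = 2.21
for t in (0.0, 1.0, 2.0, 2.2):
    print(t, delta_factor(t))
```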
Conclusions and Open Problems
Delta gravity agrees with General Relativity when $T_{\mu\nu} = 0$, imposing the same boundary conditions for both tensor fields. In particular, the causal structure of delta gravity in vacuum is the same as in General Relativity, since in this case the action (5) is proportional to the geodesic action in GR.
We recover the Newtonian approximation. In a homogeneous and isotropic universe, we get accelerated expansion without a cosmological constant or additional scalar fields.
The computation of PPN (post-Newtonian) parameters is in progress, but we do not expect large departures from General Relativity, because the Newtonian limit is the right one, as explained in Section 6. Moreover, interstellar space has very small matter densities, so δ-gravity must give the General Relativity values for the PPN parameters (see the comments after equation (4)). Additionally, notice that all $\tilde g$ contributions are multiplied by the small parameter $\kappa'_2$, of the order of $10^{-5}$ or less, so they are much suppressed in the solar system. Stellar evolution will not be changed from its Newtonian description, unless the density of matter becomes very large. Even at the densities of white dwarfs the Poisson equation for the gravitational potential suffices (see, for instance, [8], chapter 11.3); δ-gravity implies it, as shown in Section 6. The higher densities present in neutron stars may provide new tests of δ-gravity, since there we have to use the full non-linear Einstein equations and the corresponding δ-gravity equations. But for the inner regions of massive stars, data is very scarce.
Notice that equation (19) implies that $\tilde R = R$ at the beginning of the Universe, when $w = 1/3$, corresponding to ultrarelativistic matter. That is, the accelerated expansion started at a later time, which is needed if we want to recover the observational data of density perturbations and growth of structures in the Universe. An earlier acceleration of the expansion would prevent the growth of density perturbations.
Work is in progress to compute the growth of density perturbations, the anisotropies in the CMBR, BAO, WL and the evolution of massive stars. The comparison of these calculations with the considerable amount of astronomical data that will be available in the near future will be a very stringent test of the present gravitational model.
It was noticed in [20] that the Hamiltonian of delta models is not bounded from below. Phantom cosmological models [22], [23] also have this property. Although it is not clear whether this problem will subsist in a diffeomorphism-invariant model such as delta gravity, we want to mention some ways out of the difficulty. a) Delta gravity is a gauge theory; moreover, it is diffeomorphism invariant. Thus the canonical Hamiltonian vanishes identically. It may be possible to truncate the Hilbert space, using the BRST formalism, to define a model with a Hamiltonian bounded from below. This is a difficult task that goes far beyond the present paper, but should be pursued in future work. b) In a supersymmetric model we have $H = Q^2$, where $H$ is the Hamiltonian and $Q$ is the hermitian supersymmetry charge. Thus the Hamiltonian is bounded from below. So, we expect that a delta supergravity model has a Hamiltonian bounded from below.
Acknowledgements. The work of JA is partially supported by VRAID/DID/46/2010 and Fondecyt 1110378. He wants to thank R. Avila and P. González for several useful remarks. The author acknowledges interesting conversations with L. Infante, G. Palma, M. Bañados and A. Clocchiatti. In particular, JA wants to thank A. Clocchiatti for pointing out the data in [28]. Finally, JA wants to thank J. Gamboa for a careful reading of the manuscript.
References
[1] C. M. Will, The Confrontation between General Relativity and Experiment, Living Rev. Relativity 9 (2006), http://www.livingreviews.org/lrr-2006-3; S. G. Turyshev, Annual Review of Nuclear and Particle Science 58, 207-248 (2008).
[2] For a modern review of string models see: M. B. Green, J. H. Schwarz and E. Witten, Superstring Theory, vols. 1, 2, Cambridge University Press, 1987; J. Polchinski, String Theory, vols. 1, 2, Cambridge University Press, 1998.
[3] For a review of Dark Matter and its detection, see S. Weinberg, Cosmology, Oxford University Press, 2008; D. Hooper and Baltz, Annu. Rev. Nucl. Part. Sci. 58, 293-314 (2008).
[4] A. G. Riess et al. (Supernova Search Team), Astron. J. 116, 1009 (1998); S. Perlmutter et al. (Supernova Cosmology Project), Astrophys. J. 517, 565 (1999). For a recent review, see R. R. Caldwell and M. Kamionkowski, The Physics of Cosmic Acceleration, astro-ph 0903.0866.
[5] J. A. Frieman, M. S. Turner and D. Huterer, Dark energy and the accelerating Universe, Annu. Rev. Astron. Astrophys. 46, 385-432 (2008).
[6] M. Milgrom, A modification of the Newtonian dynamics as a possible alternative to the hidden mass hypothesis, ApJ 270, 365 (1983); J. Bekenstein, Relativistic gravitation theory for the MOND paradigm, Phys. Rev. D 70, 083509 (2004).
[7] See, for instance, S. Tsujikawa, Lect. Notes Phys. 800, 99-145 (2010).
[8] S. Weinberg, Gravitation and Cosmology, Wiley, New York, 1972.
[9] S. Weinberg, in General Relativity: An Einstein centenary survey, edited by S. W. Hawking and W. Israel, Cambridge University Press, 1979, chapter 16, p. 790.
[10] Ya. B. Zeldovich, JETP Lett. 6, 316 (1967); A. Sakharov, Sov. Phys. Dokl. 12, 1040 (1968); O. Klein, Phys. Scr. 9, 69 (1974); S. Adler, Rev. Mod. Phys. 54, 729 (1982).
[11] D. F. Litim, Phys. Rev. Lett. 92, 201301 (2004); AIP Conf. Proc. 841, 322 (2006); e-Print: arXiv:0810.367; A. Codello, R. Percacci and C. Rahmede, Annals Phys. 324, 414-469 (2009); M. Reuter and F. Saueressig, Lectures given at the First Quantum Geometry and Quantum Gravity School, Zakopane, Poland (2007), arXiv:0708.1317.
[12] J. Ambjorn, J. Jurkiewicz and R. Loll, Phys. Rev. Lett. 85, 924-927 (2000).
[13] J. Alfaro, D. Espriu and D. Puigdomenech, Phys. Rev. D 82, 045018 (2010).
[14] C. J. Isham, A. Salam and J. A. Strathdee, Annals Phys. 62, 98-119 (1971).
[15] A. B. Borisov and V. I. Ogievetsky, Theor. Math. Phys. 21, 1179 (1975); E. A. Ivanov and V. I. Ogievetsky, Lett. Math. Phys. 1, 309-313 (1976).
[16] D. Amati and J. Russo, Phys. Lett. B 248, 44 (1990); J. Russo, Phys. Lett. B 254, 61 (1991); A. Hebecker and C. Wetterich, Phys. Lett. B 574, 269-275 (2003); C. Wetterich, Phys. Rev. D 70, 105004 (2004).
[17] See, for instance: A. Albrecht et al., 2006, astro-ph/0609591; J. A. Peacock et al., 2006, astro-ph/0610906.
[18] G. 't Hooft and M. Veltman, Ann. Inst. Henri Poincaré 20, 69 (1974).
[19] J. Alfaro, bv gauge theories, hep-th/9702060.
[20] J. Alfaro and P. Labraña, Phys. Rev. D 65, 045002 (2002).
[21] J. Alfaro, P. Gonzalez and R. Avila, Class. Quant. Grav. 28, 215020 (2011).
[22] R. R. Caldwell, Physics Letters B 545, 23-29 (2002).
[23] R. R. Caldwell, M. Kamionkowski and N. N. Weinberg, Phys. Rev. Lett. 91, 071301 (2003).
[24] W. Siegel, Fields, hep-th/9912205v3, page 193.
[25] L. Landau and E. M. Lifshitz, The Classical Theory of Fields, Pergamon Press, 1980.
[26] R. F. C. Vessot et al., Phys. Rev. Lett. 45, 2081 (1980).
[27] P. J. Mohr, B. N. Taylor and D. B. Newell, Rev. Mod. Phys. 80, 633-730.
[28] W. M. Wood-Vasey et al., Astrophys. J. 666, 694-715 (2007).
PRODUCTS OF MENGER SPACES: A COMBINATORIAL APPROACH

Piotr Szewczak
Boaz Tsaban

18 May 2016 (arXiv:1603.03361v3 [math.GN])
We construct Menger subsets of the real line whose product is not Menger in the plane. In contrast to earlier constructions, our approach is purely combinatorial. The set theoretic hypothesis used in our construction is far milder than earlier ones, and holds in all but the most exotic models of real set theory. On the other hand, we establish productive properties for versions of Menger's property parameterized by filters and semifilters. In particular, the Continuum Hypothesis implies that every productively Menger set of real numbers is productively Hurewicz, and each ultrafilter version of Menger's property is strictly between Menger's and Hurewicz's classic properties. We include a number of open problems emerging from this study.
in the real line, or even just metrizable examples [23,Problem 6.7]. This problem, proposed by Scheepers long ago, resisted tremendous efforts thus far.
For brevity, sets of real numbers are called here real sets. Assuming the Continuum Hypothesis, there are two Luzin sets whose product is not Menger [12,Theorem 3.7]. An uncountable real set is Luzin if its intersection with every meager (Baire first category) set is countable. An uncountable real set X is concentrated if it has a countable subset D such that the set X \ U is countable for every open set U containing D. Every Luzin set is concentrated, and every concentrated set has Menger's property. This approach extends to obtain similar examples using a set theoretic hypothesis about the meager sets that is weaker than the Continuum Hypothesis [20,Theorem 49]. Later methods [27,Theorem 9.1] were combined with reasoning on meager sets to obtain examples using another portion of the Continuum Hypothesis [19,Theorem 3.3]. Here, we introduce a purely combinatorial approach to this problem. We obtain examples using hypotheses milder than earlier ones, as well as examples using hypotheses that are incompatible with the Continuum Hypothesis. To this end, we introduce the key notion of bi-d-unbounded set, and determine the limits on its possible existence. We extend these results to variations of Menger's property parameterized by filters and semifilters. For a semifilter S, we introduce the notion of S-scale. These scales provably exist, and capture a number of distinct special cases used in earlier works.
The second part of the paper, beginning with Section 5, establishes provably productive properties among semifilter-parameterized Menger properties. If S is an ultrafilter, then every S-scale gives rise to a productively S-Menger space. We deduce that each of these variations of Menger's property is strictly between Hurewicz's and Menger's classic properties.
The last section includes a discussion of related results and open problems suggested by our results.
Products of Menger spaces
Towards a combinatorial treatment of the questions discussed here, we identify the Cantor space {0, 1} N with the family P(N) of all subsets of the set N. Since the Cantor space is homeomorphic to Cantor's real set, every subspace of the space P(N) is considered as a real set.
The space P(N) splits into two important subspaces: the family of infinite subsets of N, denoted [N] ∞ , and the family of finite subsets of N, denoted [N] <∞ . We identify every set a ∈ [N] ∞ with its increasing enumeration, an element of the Baire space N N . Thus, for a natural number n, a(n) is the n-th element in the increasing enumeration of the set a. This way, we have [N] ∞ ⊆ N N , and the topology of the space [N] ∞ (a subspace of the Cantor space P(N)) coincides with the subspace topology induced by N N . This explains some of the elementary assertions made here; moreover, notions defined here for [N] ∞ are often adaptations of classic notions for N N . Depending on the interpretation, points of the space [N] ∞ are referred to as sets or functions.
For functions a, b ∈ [N] ∞ , we write a ≤ b if a(n) ≤ b(n) for all natural numbers n, and a ≤ * b if a(n) ≤ b(n) for almost all natural numbers n, that is, the set of exceptions { n : b(n) < a(n) } is finite. We follow the convention that bounded means has an upper bound in the ambient superset.
Definition 2.1. Let κ be an infinite cardinal number. A set X ⊆ [N] ∞ with |X| ≥ κ is κ-unbounded if
the cardinality of every ≤-bounded subset of the set X is smaller than κ.
Remark 2.2. For cardinal numbers κ of uncountable cofinality, which will be the case in the present paper, the notion of κ-unbounded defined here is equivalent to its variation using the relation ≤ * instead of ≤. This is not the case for cardinal numbers of countable cofinality.
Let κ be an infinite cardinal number. A topological space X with |X| ≥ κ is κ-concentrated on a countable set D ⊆ X if |X \ U| < κ for all open sets U containing D.

Lemma 2.3. Let κ be an infinite cardinal number. A set X ⊆ [N]∞ is κ-unbounded if and only if the real set X ∪ [N]<∞ is κ-concentrated on the set [N]<∞.

Proof. (⇒) Every compact set K ⊆ [N]∞ is ≤-bounded, and thus |(X ∪ [N]<∞) ∩ K| = |X ∩ K| < κ. (⇐) For each bound b ∈ [N]∞, the set K := { a ∈ [N]∞ : a ≤ b } is compact. Thus, the set U := P(N) \ K is an open set containing [N]<∞, and we have |X \ U| < κ.

A set X ⊆ [N]∞ is dominating if for each function a ∈ [N]∞ there is a function x ∈ X such that a ≤* x. Let d be the minimal cardinality of a dominating subset of [N]∞.

Definition 2.5. For functions a, b ∈ [N]∞, we write a ≤∞ b if b ≮* a, that is, if a(n) ≤ b(n) for infinitely many natural numbers n. For a set X ⊆ [N]∞ and a function b ∈ [N]∞, we write X ≤∞ b if x ≤∞ b for each function x ∈ X.
This convention applies to all binary relations.
There are, provably, d-unbounded sets and cf(d)-unbounded sets:
Let { d α : α < d } be a dominating set. For each ordinal number α < d, take a function x α ∈ [N] ∞ such that { d β , x β : β < α } < ∞ x α . Then the set { x α : α < d } is d-unbounded. Taking a cofinal subset I ⊆ d of cardinality cf(d), we obtain the cf(d)-unbounded set { x α : α ∈ I }. Lemma 2.6. For sets a, b ∈ P(N), let a ⊎ b := (2a) ∪ (2b + 1) = { 2k : k ∈ a } ∪ { 2k + 1 : k ∈ b }.
Then:
(1) For each set a ∈ [N] ∞ and each natural number n, we have (a ⊎ a)(2n) = 2a(n) + 1.
(2) For all sets a, b, c, d ∈ [N]∞ with a ≤ b and c ≤ d, we have a ⊎ c ≤ b ⊎ d.

Claim 2.8. The set { a ⊎ d′ : a ∈ A, d ∈ I_a } is d-unbounded.

Proof. Let b ∈ [N]∞. Define b′(n) := b(2n) for all natural numbers n. Let K := { a ∈ A : a ≤ b′ }. Then |K| < κ. Let a ∈ A \ K and d ∈ I_a.
There is a natural number n such that
b(2n) = b ′ (n) < a(n) ≤ 2a(n) + 1 = (a ⊎ a)(2n) ≤ (a ⊎ d ′ )(2n),
and thus a ⊎ d′ ≰ b. Therefore,
|{ a ⊎ d ′ : a ∈ A, d ∈ I a , a ⊎ d ′ ≤ b }| ≤ |{ a ⊎ d ′ : a ∈ K, d ∈ I a }| < d.
By Lemma 2.3, the real set
Y := { a ⊎ d ′ : a ∈ A, d ∈ I a } ∪ [N] <∞
is d-concentrated on the set [N] <∞ . In particular, the set Y is Menger.
For sets a, b ∈ P(N), let a ⊕ b denote the symmetric difference of the sets a and b. With respect to the operator ⊕, the space P(N) is a topological group.
Claim 2.9. The set (2X) ⊕ Y is a dominating subset of [N] ∞ . Proof. For all sets a, b, c ∈ P(N), we have (2a) ⊕ (b ⊎ c) = (a ⊕ b) ⊎ c ⊇ 2c + 1. It follows that (2X) ⊕ Y ⊆ [N] ∞ .
For each function d in the dominating set D we started with, let a ∈ A be a function such that d ∈ I a . As a ∈ X and a ⊎ d ′ ∈ Y , we have
2a ⊕ (a ⊎ d ′ ) = (a ⊕ a) ⊎ d ′ = ∅ ⊎ d ′ = 2d ′ + 1 ∈ (2X) ⊕ Y. Since d ≤ d ′ ≤ 2d ′ + 1 for all functions d ∈ D, the set (2X) ⊕ Y is dominating. In summary, the set (2X) ⊕ Y is a continuous image of the planar set X × Y in [N] ∞ that is dominating. It follows that the space X × Y is not Menger.
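The set algebra used in Claim 2.9 is finitary and easy to machine-check. The following sketch (ours, not from the paper) verifies the identity (2a) ⊕ (b ⊎ c) = (a ⊕ b) ⊎ c, and its special case (2a) ⊕ (a ⊎ c) = 2c + 1, on small finite sets; for infinite sets the same computation applies element-wise.

```python
def shuffle(a, b):
    # a ⊎ b := (2a) ∪ (2b + 1): a goes to the evens, b to the odds (Lemma 2.6)
    return {2 * k for k in a} | {2 * k + 1 for k in b}

a, b, c = {1, 4, 6}, {2, 3}, {5, 7}
# (2a) ⊕ (b ⊎ c) = (a ⊕ b) ⊎ c, where ⊕ is symmetric difference (^ on Python sets)
assert ({2 * k for k in a} ^ shuffle(b, c)) == shuffle(a ^ b, c)
# Special case b = a: (2a) ⊕ (a ⊎ c) = ∅ ⊎ c = 2c + 1
assert ({2 * k for k in a} ^ shuffle(a, c)) == {2 * k + 1 for k in c}
```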
Let X be a real set of cardinality smaller than d. Then the set X is trivially Menger: the topology used is irrelevant, as long as we restrict attention to countable covers. In particular, all finite powers of the real set X are Menger, even for countable Borel covers; a strong property [20]. The most important application of Theorem 2.7 appears in the next section.
Bi-d-unbounded sets
For a set a ∈ P(N), let a^c := N \ a. Let [N]∞,∞ := { a ∈ [N]∞ : a^c ∈ [N]∞ }, the family of infinite co-infinite subsets of N.

Definition 3.1. Let κ be an infinite cardinal number. A set X ⊆ [N]∞,∞ is bi-κ-unbounded if the sets X and { x^c : x ∈ X } ⊆ [N]∞ are both κ-unbounded.

Theorem 3.2. Let κ ∈ {cf(d), d}, and let X ⊆ [N]∞,∞ be a bi-κ-unbounded set. Then:
(1) The real set X ∪ [N]<∞ is κ-concentrated. In particular, it is Menger.
(2) There is a d-concentrated real set Y such that the planar set (X ∪ [N]<∞) × Y is not Menger.

Proof. (1) By Corollary 2.4.
(2) The continuous image { x^c : x ∈ X ∪ [N]<∞ } of the real set X ∪ [N]<∞ in P(N) is a κ-unbounded subset of [N]∞. Apply Theorem 2.7.
The existence of bi-d-unbounded sets and bi-cf(d)-unbounded sets is a mild hypothesis.
Theorem 3.3. The following assertions are equivalent:
(1) d ≤ r.
(2) There are bi-d-unbounded sets in [N]∞.
(3) There are bi-cf(d)-unbounded sets in [N]∞.

Proof. (1) ⇒ (2), (3):
We use the following lemma, to which we provide a short, direct proof.
Lemma 3.4 (Mejía [16]). Let X ⊆ [N]∞. If |X| < min{d, r}, then there is an element b ∈ [N]∞,∞ such that X ≤∞ b and X ≤∞ b^c.

Proof. For a set x ∈ [N]∞ with 1 ∉ x, define a function x̂ ∈ [N]∞ by x̂(1) := x(1), and x̂(n + 1) := x(x̂(n)) for each natural number n.
We may assume that 1 ∉ x for all sets x ∈ X. Since |X| < d, there is a function a ∈ [N]∞ such that the sets I_x := { n : |[a(n), a(n + 1)) ∩ x̂| ≥ 2 } are infinite for all sets x ∈ X [7, Theorem 2.10]. Since |X| < r, there is a set r ∈ [N]∞ that reaps the family { I_x : x ∈ X }. Define b := ⋃_{n ∈ r} [a(n), a(n + 1)).
Fix a set x ∈ X. Let n be a member of the infinite set r ∩ I_x. There are at least two elements in the set [a(n), a(n + 1)) ∩ x̂; let x̂(i) be the minimal one. Then x(x̂(i)) = x̂(i + 1) ∈ [a(n), a(n + 1)). Since n ∈ r, the set b^c ∩ [a(n), a(n + 1)) is empty, and thus a(n + 1) ≤ b^c(x̂(i)). It follows that x(x̂(i)) < b^c(x̂(i)). Similarly, every number n ∈ I_x \ r produces a number i such that x(x̂(i)) < b(x̂(i)).

Let { d_α : α < d } ⊆ [N]∞ be a dominating set. By Lemma 3.4, for each ordinal number α < d, there is a set x_α ∈ [N]∞,∞ such that { d_β, x_β : β < α } <∞ x_α, x_α^c. Then the set { x_α : α < d } is bi-d-unbounded. Let I be a cofinal subset of the cardinal number d, of cardinality cf(d). Then the set { x_α : α ∈ I } is bi-cf(d)-unbounded.

(2) ⇒ (1): We may assume that the cardinal number d is regular. Indeed, it is known that if r < d then d is regular. Thus, if d is singular, then d ≤ r, and we are done. Let X ⊆ [N]∞ be a bi-d-unbounded set. Let A ⊆ [N]∞ be a family with |A| < d.
We prove that the family A is reapable. We may assume that for each set a ∈ A and each finite set s, we have a \ s ∈ A.
Since the set X is bi-d-unbounded, the set
a∈A { x ∈ X : x ≤ * a or x ≤ * a c }
is a union of less than d sets, each of cardinality smaller than d. Thus, there is an element r ∈ X that is not included in that set, that is, such that A <∞ r, r^c. The set r reaps the family A: Fix a set a ∈ A. Assume that the set a ∩ r is finite. Then the set a′ := a \ r is in A, and thus a′ <∞ r^c. But a′ ⊆ r^c, and thus r^c ≤ a′; a contradiction. For the same reason, the set a \ r is infinite, too.
(3) ⇒ (1): If the cardinal number d is regular, then the previously established implication applies. And if it is singular, then as explained in the previous implication, we have d ≤ r.
In either case, we are done.
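The reaping notion used in the proof above is about infinite sets, but a finite-prefix sanity check conveys the idea. In this sketch (ours, not from the paper), a set r "splits" a when both a ∩ r and a \ r are nonempty below a cutoff, a finite proxy for both parts being infinite.

```python
def splits(r, a, cutoff=100):
    # Finite-prefix proxy for reaping: both a ∩ r and a \ r meet [0, cutoff)
    inside = [x for x in a if x < cutoff and x in r]
    outside = [x for x in a if x < cutoff and x not in r]
    return bool(inside) and bool(outside)

evens = set(range(0, 100, 2))
multiples_of_3 = set(range(0, 100, 3))
primes = {n for n in range(2, 100) if all(n % p for p in range(2, n))}
# The even numbers split both families below the cutoff
assert splits(evens, multiples_of_3) and splits(evens, primes)
# No set splits itself: the "outside" part is empty
assert not splits(evens, evens)
```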
A topological space X is Rothberger if for each sequence U 1 , U 2 , . . . of open covers of X, there are elements U 1 ∈ U 1 , U 2 ∈ U 2 , . . . with X ⊆ n U n .
Every real set of cardinality smaller than cov(M) is Rothberger [12, Theorem 4.2], and therefore so is every cov(M)-concentrated real set. Since cov(M) ≤ r [7, Theorem 5.19], we obtain the following result.

Filter-Menger spaces

For sets a, b ∈ [N]∞, we write a ⊆* b if the set a \ b is finite. A semifilter [4] is a set S ⊆ [N]∞ such that, for each set s ∈ S and each set b ∈ [N]∞ with s ⊆* b, we have b ∈ S. Important examples of semifilters include the maximal semifilter [N]∞, the minimal semifilter cF of all cofinite sets, and every nonprincipal ultrafilter on N.

Let S be a semifilter.
For functions a, b ∈ [N]∞, let [a ≤ b] := { n : a(n) ≤ b(n) }. We write a ≤S b if [a ≤ b] ∈ S. Let b(S) be the minimal cardinality of a ≤S-unbounded subset of [N]∞. For a semifilter S, let S+ := { a ∈ [N]∞ : a^c ∉ S }. For all sets a ∈ S and b ∈ S+, the intersection a ∩ b is infinite. For functions a, b ∈ [N]∞, we have that a ≰S b if and only if b <S+ a.
The κ-unbounded sets presented in the previous sections are instances of the following notion, which generalizes the earlier notion of b(S)-scale [27, Definition 2.8].
Definition 4.1. Let S be a semifilter. A set X ⊆ [N]∞ with |X| ≥ b(S) is an S-scale if, for each function b ∈ [N]∞, there is a function c ∈ [N]∞ such that b ≤S+ c ≤S x for all but less than b(S) functions x ∈ X.

S-scales always exist.

Proof. Let { b_α : α < b(S) } ⊆ [N]∞ be a ≤S-unbounded set. For each ordinal number α < b(S), there is a function x_α ∈ [N]∞ such that { b_β, x_β : β < α } <S x_α. The set { x_α : α < b(S) } is an S-scale. Indeed, fix a function b ∈ [N]∞. There is an ordinal number β < b(S) such that b_β ≰S b, and thus b ≤S+ b_β. For each ordinal number α > β, we have b_β ≤S x_α.

Let S be a semifilter, and b, c, x ∈ [N]∞ be functions satisfying b ≤S+ c ≤S x. Then the set [b ≤ x] contains the intersection [b ≤ c] ∩ [c ≤ x]
of an element of S + and an element of S.
In particular, we have b ≤∞ x.

Let S be a semifilter. A topological space X is S-Menger if for each sequence U_1, U_2, . . . of open covers of X, there are finite sets F_1 ⊆ U_1, F_2 ⊆ U_2, . . . such that { n : x ∈ ⋃F_n } ∈ S for all points x ∈ X. A topological space is Menger if and only if it is [N]∞-Menger.
For the filter cF of cofinite sets, the property cF-Menger is the classic Hurewicz property [11]. Thus, for every semifilter S, we have the following implications:

Hurewicz =⇒ S-Menger =⇒ Menger.

By filter we mean a semifilter closed under finite intersections. If F is a filter, then a ∩ b ∈ F+ for all sets a ∈ F and b ∈ F+. And if F is an ultrafilter, then F+ = F.

A function Ψ from a topological space X into [N]∞ is upper continuous if the sets { x ∈ X : Ψ(x)(n) ≤ m } are open for all natural numbers n and m.

The following theorem implies that this assertion cannot be established for spaces that are not Hurewicz.

Theorem 4.7. Assume that b = d. Let S be a semifilter. The following assertions are equivalent:
(1) The semifilter S is nonmeager.
(2) There are an S-scale X ⊆ [N]∞ and a d-concentrated real set Y such that the planar set (X ∪ [N]<∞) × Y is not Menger.

Proof. (1) ⇒ (2): Let { d_α : α < d } be a dominating subset of [N]∞. Fix an ordinal number α < d. Since b = d, there is a function b ∈ [N]∞ such that { d_β, x_β : β < α } <* b. The set { x ∈ [N]∞,∞ : b ≤∞ x^c } is comeager. Since the semifilter S is nonmeager, the set { x ∈ [N]∞,∞ : b ≤S x } is
Let P be a property of topological spaces. A real set X is productively P if for each topological space Y with the property P, the product space X × Y has the property P. The question whether productively P implies productively Q, for P and Q covering properties among those studied here, has a long history. The remainder of this paragraph assumes that d = ℵ_1. Aurichi and Tall [3] improved several earlier results by proving that every productively Lindelöf space is Hurewicz. It was later shown that every productively Lindelöf space is productively Hurewicz and productively Menger [18, Theorem 8.2]. Thus, productively Lindelöf implies productively Menger, and the following theorem shows that productively Menger suffices to imply productively Hurewicz.

By (1), there is a d-concentrated real set Y such that the planar set Z × Y is not Menger. Since the set Z × Y is an upper continuous image of the product space X × H × Y, the latter space is not Menger, too. As the space H is Hurewicz and hereditarily Lindelöf and the set Y is d-concentrated, the product space H × Y is Menger [28, Theorem 4.6]. In summary, the product of the real set X and the Menger, hereditarily Lindelöf space H × Y is not Menger.
Some special hypothesis is necessary for Theorem 4.8: the union of fewer than g Menger real sets is Menger [29, 26]. Assume that b < g. Then any unbounded real set X ⊆ [N]∞ of cardinality b is productively Menger but not Hurewicz.
Productive real sets
In this section, we establish preservation of some properties under products. We begin with a generalization of an earlier result [18, Lemma 6.3] to general topological spaces. The earlier proof [18, Lemma 6.3] does not apply in this general setting; we provide an alternative proof.

P([1, a_y(k))) × {y} ⊆ U^n_m. Let a_y(k + 1) be the minimal natural number such that a_y(k + 1) ≥ m and P([a_y(k), a_y(k + 1))^c) × {y} ⊆ U^n_m. Since our open covers are ascending, we have P([a_y(k), a_y(k + 1))^c) × {y} ⊆ U^n_{a_y(k+1)}. Notice that the number a_y(k + 1) is minimal with this property. As the set P([a_y(k), a_y(k + 1))^c) × {y} is compact, there is an open neighborhood V^y_{k+1} ⊆ V^y_k of the point y such that P([a_y(k), a_y(k + 1))^c) × V^y_{k+1} ⊆ U^n_{a_y(k+1)}. Define Φ(y)(n) := a_y(n + 1). For each point y′ ∈ V^y_{n+1}, we have y′ ∈ V^y_k for all k = 1, . . . , n + 1. The sequence a_{y′}(1), . . . , a_{y′}(n + 1) is bounded by the sequence a_y(1), . . . , a_y(n + 1), coordinate-wise: For k = 1, we have a_{y′}(1) = a_y(1) = 1. Assume that a_{y′}(k) ≤ a_y(k). Then

P([a_{y′}(k), a_y(k + 1))^c) × {y′} ⊆ P([a_y(k), a_y(k + 1))^c) × V^y_{k+1} ⊆ U^n_{a_y(k+1)},

and, by the minimality of the number a_{y′}(k + 1), we have a_{y′}(k + 1) ≤ a_y(k + 1), too. In summary, we have Φ(y′)(n) = a_{y′}(n + 1) ≤ a_y(n + 1) ≤ Φ(y)(n) for all points y′ ∈ V^y_{n+1}. This shows that the function Φ is upper continuous.

Fix a point y ∈ Y and a natural number n. Let x ∈ [N]∞ be an element with Φ(y)(n) ≤ x(n). As a_y(n + 1) = Φ(y)(n) ≤ x(n), there is a natural number k ≤ n such that x ∩ [a_y(k), a_y(k + 1)) = ∅. Thus, (x, y) ∈ P([a_y(k), a_y(k + 1))^c) × {y} ⊆ U^n_{a_y(k+1)} ⊆ U^n_{Φ(y)(n)}, and therefore Ψ(x, y)(n) ≤ Φ(y)(n).
For filters, we obtain a productive version of Proposition 4.5.
Theorem 5.2. Let F be a filter and X ⊆ [N]∞ be an F-scale. For each F-Menger space Y, every upper continuous image of the product space (X ∪ [N]<∞) × Y in [N]∞ is ≤F_a-bounded for some set a ∈ F+.

Proof. Let Ψ : (X ∪ [N]<∞) × Y → [N]∞ be an upper continuous function. Let Φ : Y → [N]∞ be as in the Productive Two Worlds Lemma (Lemma 5.1). Since the space Y is F-Menger, there is a function b ∈ [N]∞ such that Φ[Y] ≤F b. As the set X is an F-scale, there is a function c ∈ [N]∞ such that b ≤F+ c ≤F x for all but less than b(F) elements of X. Let a := [b ≤ c], an element of the semifilter F+. Then the cardinality of the set Z := { x ∈ X : b ≰F_a x } is smaller than b(F). Fix a pair (x, y) ∈ (X \ Z) × Y. Then b ≤F_a x and Φ(y) ≤F b. Since F is a filter, we have [Φ(y) ≤ x] ⊇ [Φ(y) ≤ b] ∩ [b ≤ x] ∈ F_a. This shows that Ψ[(X \ Z) × Y] ≤F_a b.

Let z ∈ Z ∪ [N]<∞. Since the set {z} × Y is F-Menger, Ψ[{z} × Y] ≤F c_z for some function c_z ∈ [N]∞. Since |Z ∪ [N]<∞| < b(F), there is a function c ∈ [N]∞ such that { c_z : z ∈ Z ∪ [N]<∞ } ≤F c. As F is a filter, we have Ψ[(Z ∪ [N]<∞) × Y] ≤F c, and therefore Ψ[(X ∪ [N]<∞) × Y] ≤F_a { max{b(n), c(n)} : n ∈ N }.

Theorem 5.3. Let F be a filter and X ⊆ [N]∞ be an F-scale.
(1) For each F-Menger space Y, the product space (X ∪ [N]<∞) × Y is F+-Menger.
(2) If F is an ultrafilter, then the real set X ∪ [N]<∞ is productively F-Menger.
Proof. Every product of a metrizable Lindelöf space and a hereditarily Lindelöf space is Lindelöf. Apply Theorem 5.2.
The following theorem was previously known for b-scales, a special kind of cF-scales [18, Theorem 6.5]. This theorem and the subsequent one improve upon earlier results [27].

Proof. Let Y be a hereditarily Lindelöf, Hurewicz space. Since the space Y is hereditarily Lindelöf, the product space (X ∪ [N]<∞) × Y is Lindelöf. Let Φ : X × Y → [N]∞ be an upper continuous function, and let Ψ : Y → [N]∞ be as in the Productive Two Worlds Lemma (Lemma 5.1). Define b̂(k) := b(h(n + 1)) for all natural numbers n and for k ∈ [h(n), h(n + 1)). Then Ψ[Y] ≤* b ≤ b̂. Since the set X is an S-scale, there is a function c ∈ [N]∞ such that b̂ ≤S+ c, and all but less than b functions x ∈ X belong to the set X̃ := { x ∈ X : c ≤S x }.

Claim 5.5. The set Φ[(X ∪ [N]<∞) × Y] is ≤*-bounded.

Proof. Fix a function x ∈ X̃. Then [c ≤ x] ∈ S, and thus the set [c ≤ x] ∩ [h(n), h(n + 1)) is nonempty for almost all natural numbers n. Let

d := { n ∈ N : [b̂ ≤ c] ∩ [h(n − 1), h(n)) ≠ ∅ }.

Then, for almost all natural numbers n ∈ d, there are natural numbers l ∈ [b̂ ≤ c] ∩ [h(n − 1), h(n)) and m ∈ [c ≤ x] ∩ [h(n), h(n + 1)), and we have b(h(n + 1)) = b̂(l) ≤ c(l) ≤ c(m) ≤ x(m) ≤ x(h(n + 1)). Thus, b(k) ≤ x(k) for almost all natural numbers k ∈ e := { h(n + 1) : n ∈ d }. Let y ∈ Y. Since Ψ[Y] ≤* b, for almost all natural numbers k ∈ e we have Ψ(y)(k) ≤ b(k) ≤ x(k), and thus Φ(x, y)(k) ≤ Ψ(y)(k) ≤ b(k). Hence, the set Φ[X̃ × Y] is ≤*-bounded.

The following assertions are equivalent, for every filter F:
(1) There is a cofinal F-scale.
(2) b(F) = b(F+).
Proof. (1) ⇒ (2): Since F is a filter, we have F ⊆ F + , and thus b(F ) ≤ b(F + ). Apply Proposition 6.2(3). A superfilter (also called grille or coideal) is a semifilter S such that a∪b ∈ S implies a ∈ S or b ∈ S. A semifilter S is a superfilter if and only if the semifilter S + is a filter. Equivalently, a superfilter is a union of a family of ultrafilters. For example, the set [N] ∞ = cF + is a superfilter.
(2) ⇒ (1): Let { b_α : α < b(F) } ⊆ [N]^∞ be a ≤_{F^+}-unbounded set; such a set is ≤_F-cofinal. For each ordinal number α < b(F), let x_α ∈ [N]^∞ be such that { b_β, x_β : β < α } ≤_F x_α. As
Proposition 6.6. Let S be a superfilter. A set X ⊆ [N] ∞ is a cofinal S-scale if and only if it is an S-scale.
Proof (⇐). Let F := S + . If b ≤ S + c ≤ S x, then b ≤ F c ≤ F + x, and since F is a filter, we have b ≤ F + x, that is, b ≤ S x.
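For reference, the comparisons used in this proof unwind as follows. This is a restatement of the standard notation assumed from the earlier sections of the paper, not new material:

```latex
% Standard notation (assumed, as in the earlier sections of the paper):
% for functions b, x \in [N]^\infty and a semifilter S,
\[
  [b \le x] := \{\, n \in \mathbb{N} : b(n) \le x(n) \,\},
  \qquad
  b \le_S x \ :\Longleftrightarrow\ [b \le x] \in S .
\]
```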
The proof of the following assertion is similar to that of Proposition 4.5.

Let U be an ultrafilter, and X ⊆ [N]^∞ be a U-scale. By Proposition 6.6, the set X is in fact a cofinal U-scale. Using Proposition 6.7, we obtain an alternative derivation of Corollary 4.6. Similarly, Theorem 6.4 generalizes Theorem 5.3(2). Theorem 6.4 cannot be extended to all semifilters, and not even to all superfilters: By Theorems 3.2-3.3, the hypothesis d ≤ r implies that Theorem 6.4 does not hold for the superfilter [N]^∞. The latter assertion also follows from the following theorem that is, in fact, established by the proof of Theorem 4.7.
Theorem 6.8. Assume that b = d. Let S be a semifilter. The following assertions are equivalent:
(1) The semifilter S is nonmeager.

Proof of (1). By Theorem 6.8(3) and Theorem 6.4, using that products of Hurewicz sets and d-concentrated real sets are Menger [28, Theorem 4.6].
Comments and open problems
We restrict attention to real sets throughout this section, except for the last subsection. The Menger productivity problem, whether Menger's property is consistently preserved by finite products, remains open. The hypothesis d ≤ r provides two Menger sets whose product is not Menger (Theorems 2.7 and 3.2). It is well known that this immediately provides a Menger set whose square is not Menger. Indeed, assume that X and Y are Menger real sets such that the planar set X × Y is not Menger. The set X ∪ Y is Menger. We may assume that X ⊆ (0, 1) and Y ⊆ (2, 3). Then the set X × Y is a closed subset of the square (X ∪ Y)². Menger's property is hereditary for closed subsets.

7.1. A combinatorial characterization of the cardinal number min{r, d}. Aubrey [1] proved that min{d, u} ≤ r. Since r ≤ u, the hypothesis d ≤ r in Theorem 3.3 is equivalent to the hypothesis d ≤ u.
Initially, we proved Theorem 3.3 using a stronger hypothesis.
Definition 7.1. Let bidi be the minimal cardinality of a set X ⊆ [N]^∞ such that there is no set b ∈ [N]^{∞,∞} with X ≤^∞ b, b^c.
We observed that max{b, cov(M)} ≤ bidi ≤ min{r, d}, and needed that bidi = d to carry out our construction. It is immediate that bidi ≤ d, and the argument in the proof of the implication (2) ⇒ (1) in Theorem 3.3 shows that bidi ≤ r. Answering our question, Mejía pointed out to us that, by a result of Kamburelis and Węglorz, our upper bound on the cardinal number bidi is tight [16] (see Lemma 3.4). We thus have the following characterization of min{r, d}.

Proposition 7.2. bidi = min{r, d}.
7.2. Which κ-unbounded sets are not productively Menger? There are, in ZFC, b-unbounded sets (e.g., by Proposition 4.2). Every union of less than max{b, g} Menger sets is Menger [29, 26]. Since the hypothesis b < g is consistent, Theorem 2.7 and Corollary 2.11 do not extend to the case κ = b, or to any cardinal number that is consistently smaller than max{b, g}.
For a κ-unbounded set, which we may assume to have cardinality κ, the present proof of Theorem 2.7 requires a partition d = ⋃_{α<κ} I_α such that for each set J ⊆ κ with |J| < d, we have |⋃_{α∈J} I_α| < d. It is not difficult to see that this implies that κ ∈ {cf(d), d}.

To this end, it is important to find mild hypotheses implying that there are two Hurewicz sets whose product is not Hurewicz. In the notation of Section 4, Menger's property is [N]^∞-Menger, and Hurewicz's property is cF-Menger. By Theorems 3.2 and 3.3, [N]^∞-scales need not be productively [N]^∞-Menger. In contrast, by Theorem 5.4, cF-scales are productively cF-Menger. Thus, modern constructions of Hurewicz sets do not help in regard to Problem 7.4. The classic example of a Hurewicz set (if not counting σ-compact sets, which are productively Hurewicz) is a Sierpiński set [24]. If b = cov(N) = cof(N), then there is a b-Sierpiński set (which is Hurewicz) whose square is not Hurewicz [20, Theorem 43]. No more general constructions are known.
Problem 7.5. Does the Continuum Hypothesis imply that no Sierpiński set is productively Hurewicz? Productively Menger?
By Theorem 4.8, if b = d, then every productively Menger set is productively Hurewicz. By the discussion following that theorem, if b < g then there are productively Menger sets that are not Hurewicz. Thus meager semifilters do not have the property in Problem 7.7. By Theorem 6.4 and Proposition 6.6, ultrafilters are also not in that category. But the full semifilter [N] ∞ is in this category, by Theorem 6.8.
A b-scale [24] is a particularly simple kind of a cF-scale, for cF the filter of cofinite sets.

7.4. Strictly unbounded sets. Say that a set X ⊆ [N]^∞ is strictly unbounded if for every set A ⊆ [N]^∞ of cardinality smaller than d there is a function x ∈ X such that A ≤^∞ x. Let X ⊆ [N]^∞ be a strictly unbounded set. By the argument in the proof of Theorem 4.8(1), the set X contains a d-unbounded set. By Theorem 2.7, there is a d-concentrated set Y such that the planar set X × Y is not Menger. If b = d, then every unbounded set in [N]^∞ is strictly unbounded. We thus obtain a generalization of Theorem 4.8.
The construction in Theorem 3.3 that provides a Menger set that is not productively Menger provides, in fact, a Menger strictly unbounded set.
A negative answer to the second item of the following problem implies a negative solution for the Menger productivity problem.

Theorem 5.4 provides a positive answer for hereditarily Lindelöf spaces Y. But this restriction is only needed for deducing that the product space (X ∪ [N]^{<∞}) × Y is Lindelöf (and similarly for the other results in Section 4). A positive solution for the following problem would suffice.

Problem 7.11. Let X be a real set of cardinality smaller than b, and Y be a Hurewicz space. Is the product space X × Y necessarily Lindelöf?
Theorem 2.10. Assume that cf(d) < d. There are real sets X and Y such that |X| < d and the set Y is d-concentrated, but the planar set X × Y is not Menger.

Proof. By the discussion preceding Lemma 2.6, there are cf(d)-unbounded sets in [N]^∞. Apply Theorem 2.7 to any of these sets.

Let κ be a cardinal number. A real set of cardinality at least κ is κ-Luzin if the cardinalities of its intersections with meager sets are all smaller than κ. Let cov(M) be the minimal cardinality of a cover of the real line by meager sets, and cof(M) be the minimal cardinality of a cofinal family of meager real sets. The hypothesis cov(M) = cof(M) implies that there are cov(M)-Luzin sets whose product is not Menger [20, Theorem 49]. Since cov(M) ≤ d, every cov(M)-Luzin set is d-concentrated, and thus Menger. In general, cov(M) ≤ d ≤ cof(M), and thus the following corollary implies (using the same hypothesis) that for every cov(M)-Luzin set L there is a d-concentrated real set Y such that the planar set L × Y is not Menger.
Corollary 2.11. Let κ ∈ {cf(d), d}. For each κ-Luzin set L, there is a d-concentrated real set Y such that the planar set L × Y is not Menger. In particular, if ℵ₁ = cf(d), then this is the case for every Luzin set.

Proof. By applying a homeomorphism, we may assume that L ⊆ [N]^∞. Every κ-Luzin subset of [N]^∞ is κ-unbounded. Apply Theorem 2.7.
A set r ∈ [N]^∞ reaps a family A ⊆ [N]^∞ if, for each set a ∈ A, both sets a ∩ r and a \ r are infinite. Let r be the minimal cardinality of a family A ⊆ [N]^∞ that no set r reaps. For natural numbers n < m, let [n, m) := {n, n + 1, . . . , m − 1}.

Theorem 3.3. The following assertions are equivalent:
Corollary 3.5. Assume that cov(M) = d. Then there are two Rothberger real sets whose product is not Menger.
Proposition 4.2 ([27, Lemma 2.9]). For each semifilter S there is an S-scale.
Proposition 4.3. Let S be a semifilter. Every S-scale is a b(S)-unbounded subset of [N]^∞, and thus its union with the set [N]^{<∞} is b(S)-concentrated. In particular, no union of an S-scale and [N]^{<∞} is σ-compact [24, Lemma 1.6].
A function Ψ : X → [N]^∞ is upper continuous if the sets { x ∈ X : Ψ(x)(n) ≤ m } are open for all natural numbers n and m. In particular, continuous functions are upper continuous. By earlier methods [18, Theorem 7.3], we have the following result.
Proposition 4.4. Let X be a topological space, and S be a semifilter. The following assertions are equivalent:

(1) The space X is S-Menger.
(2) The space X is Lindelöf, and every upper continuous image of X in [N]^∞ is ≤_S-bounded.

For especially nice classes of spaces, such as Lindelöf zero-dimensional spaces or real sets, upper continuous can be replaced by continuous in Proposition 4.4. In general, however, this is not the case: The properties considered here are hereditary for closed subsets. Consider the planar set X := ((R \ Q) × [0, 1]) ∪ (R × {1}) ⊆ R². This set is not Menger, since the non-Menger set (R \ Q) × {0} (homeomorphic to [N]^∞) is closed in X. Since the set X is connected, every continuous image of X in [N]^∞ is a singleton.

For a set a ∈ S^+, let S_a := { c ∈ [N]^∞ : ∃s ∈ S, s ∩ a ⊆* c }, the semifilter generated by the sets { s ∩ a : s ∈ S }. The following observation generalizes an earlier result [27, Theorem 2.14].

Proposition 4.5. Let S be a semifilter, and X ⊆ [N]^∞ be an S-scale. Every upper continuous image of the real set X ∪ [N]^{<∞} in [N]^∞ is ≤_{S_a}-bounded for some set a ∈ S^+.

Proof. Let Ψ : X ∪ [N]^{<∞} → [N]^∞ be an upper continuous function. We use the forthcoming Lemma 5.1, in the case Y = {0}. This special case was, implicitly, established by Bartoszyński and Shelah [5, Lemma 2]. This lemma provides a function b ∈ [N]^∞ such that Ψ(x)(n) ≤ b(n) for all functions x ∈ X and all natural numbers n with b(n) ≤ x(n). Since the set X is an S-scale, there is a function c ∈ [N]^∞ such that b ≤_{S^+} c ≤_S x for all but less than b(S) functions x ∈ X. For these points x, we have that Ψ(x)(n) ≤ b(n) for all natural numbers n ∈ [b ≤ c] ∩ [c ≤ x]. Take a := [b ≤ c]. The image of the remaining points of the set X ∪ [N]^{<∞} is ≤_S-bounded by some member b′ ∈ [N]^∞. Then any function d ∈ [N]^∞ with b, b′ ≤ d is a bound as required.
Corollary 4.6. For every filter F, the union of every F-scale and [N]^{<∞} is F^+-Menger, and if F is an ultrafilter, this union is F-Menger.

Let b be the minimal cardinality of a ≤*-unbounded subset of [N]^∞.
Theorem 4.7. Assume that b = d. Let S be a semifilter. The following assertions are equivalent:
nonmeager [27, Corollary 3.4]. Thus, there is a set x_α ∈ [N]^{∞,∞} such that b ≤_S x_α and b ≤^∞ x_α^c. Then the set X := { x_α : α < d } is an S-scale, and it is bi-d-unbounded. Apply Theorem 2.7.

(2) ⇒ (1): Let S be a meager semifilter, and X ⊆ [N]^∞ be an S-scale. By Theorem 5.4 below, the real set X ∪ [N]^{<∞} is, in particular, Hurewicz. Products of Hurewicz sets and d-concentrated real sets are Menger [28, Theorem 4.6].

The real set X ∪ [N]^{<∞} in Theorem 4.7 is not Hurewicz since its image under the function x → x^c is unbounded in [N]^∞. The existence of non-Hurewicz sets of this form follows from a weaker hypothesis [27, Theorem 3.9], but without the non-productive property. The product of every Hurewicz real set and every d-concentrated real set is Menger [28, Theorem 4.6].
Theorem 4.8. Assume that b = d.
Lemma 5.1 (Productive Two Worlds Lemma). Let X be a subset of [N]^∞, Y be an arbitrary space, and Ψ : (X ∪ [N]^{<∞}) × Y → [N]^∞ be an upper continuous function. There is an upper continuous function Φ : Y → [N]^∞ such that, for all points x ∈ X and y ∈ Y, and all natural numbers n: If Φ(y)(n) ≤ x(n), then Ψ(x, y)(n) ≤ Φ(y)(n).

Proof. For natural numbers n and m, let U^n_m := Ψ^{-1}[{ a ∈ [N]^∞ : a(n) ≤ m }]. For each natural number n, the family { U^n_m : m ∈ N } is an ascending open cover of the product space (X ∪ [N]^{<∞}) × Y. By enlarging the sets U^n_m, we may assume that they are open in the larger space P(N) × Y. Fix a point y ∈ Y and a natural number n. Set a_y(1) := 1 and V^y_1 := Y. For a natural number k, let m be the minimal natural number with P([
lary 4.4], asserting that the corresponding properties hold in all finite powers.

A semifilter S is meager if and only if there is a function h ∈ [N]^∞ such that for each set s ∈ S, the set s ∩ [h(n), h(n + 1)) is nonempty for almost all natural numbers n [21, Theorem 21]. For meager semifilters S, we have b(S) = b [27, Corollary 2.27], and S-Menger is equivalent to Hurewicz [27, Theorem 2.32]. The following theorem generalizes an earlier result [27, Theorem 2.28], using a similar proof.

Theorem 5.4. Let S be a meager semifilter, and X ⊆ [N]^∞ be an S-scale. Then, in the realm of hereditarily Lindelöf spaces, the real set X ∪ [N]^{<∞} is productively Hurewicz.
an upper continuous function, and Ψ : Y → [N]^∞ be the upper continuous function provided by the Productive Two Worlds Lemma (Lemma 5.1). Since the space Y is Hurewicz, its image Ψ[Y] is ≤*-bounded by some function b ∈ [N]^∞. Let h ∈ [N]^∞ be a witness for the semifilter S being meager. Define a function b̃ ∈ N^N by b̃(k) := b(h(n + 2)) for all natural numbers n and for k ∈ [h(n), h(n + 1)). Then Ψ[Y] ≤* b ≤ b̃.
on an infinite set, and thus [10, Fact 3.4] ≤*-bounded. As |(X \ X̃) ∪ [N]^{<∞}| < b and the space Y is Hurewicz, the image Φ[((X \ X̃) ∪ [N]^{<∞}) × Y] is a union of less than b sets that are ≤*-bounded, and is thus ≤*-bounded. Thus, the entire image Φ[(X ∪ [N]^{<∞}) × Y] is ≤*-bounded.

6. Cofinal S-scales

For a semifilter S, the following special type of S-scale is a natural generalization of the earlier notion of cofinal b(S)-scale [27, Definition 2.22].

Definition 6.1. Let S be a semifilter. A set X ⊆ [N]^∞ with |X| ≥ b(S) is a cofinal S-scale if for each function b ∈ [N]^∞, we have b ≤_S x for all but less than b(S) functions x ∈ X.

For example, a set X ⊆ [N]^∞ is a cofinal [N]^∞-scale if and only if the set X is d-unbounded. Thus, for some semifilters S, cofinal S-scales provably exist. But this is not always the case.
Proposition 6.2. Let S be a semifilter. (1) Cofinal S-scales are ≤_{S^+}-unbounded. (2) Every subset, of cardinality b(S), of a cofinal S-scale is a cofinal S-scale. (3) If there is a cofinal S-scale, then b(S^+) ≤ b(S). (4) If b(S) = d, then there is a cofinal S-scale [27, Lemma 2.23].

Corollary 6.3. Let F be a filter. The following assertions are equivalent: (1) There is a cofinal F-scale. (2) b(F) = b(F^+).
F is a filter, the relation ≤ F is transitive, and thus the set{ x α : α < b(F ) } is a cofinal F -scale.In particular, since b(cF) = b and b(cF + ) = b([N] ∞ ) = d, there are cofinal cF-scales if and only if b = d.The proof of the following theorem is similar to that of Theorems 5.2-5.3(1).
Proposition 6.7. Let S be a semifilter. For each cofinal S-scale X, the real set X ∪ [N]^{<∞} is S-Menger.
(2) There are an S-scale X ⊆ [N]^∞ and a d-concentrated real set Y such that the planar set (X ∪ [N]^{<∞}) × Y is not Menger. (3) There are a cofinal S-scale X ⊆ [N]^∞ and a d-concentrated real set Y such that the planar set (X ∪ [N]^{<∞}) × Y is not Menger.

A related result of Repovš and Zdomskyy [19, Theorem 3.3] asserts that, if b = d, then there are ultrafilters U₁ and U₂, a (cofinal) U₁-scale X₁, and a (cofinal) U₂-scale X₂, such that the planar set (X₁ ∪ [N]^{<∞}) × (X₂ ∪ [N]^{<∞}) is not Menger.

Theorem 6.9. Assume that b = d. For every nonmeager filter F: (1) In the realm of hereditarily Lindelöf spaces, there is a productively F-Menger space that is not Hurewicz and not productively Menger. (2) The property F-Menger is strictly between Hurewicz and Menger.
Problem 7.6. Assume the Continuum Hypothesis. Do the classes of productively Menger and productively Hurewicz sets coincide?

Recall that for meager semifilters S, being S-Menger is equivalent to being Hurewicz [27, Theorem 2.32]. By Theorem 5.4, in this case, for each S-scale X, the set X ∪ [N]^{<∞} is productively S-Menger.

Problem 7.7. Assume the Continuum Hypothesis. For which semifilters S is there an S-scale X such that the set X ∪ [N]^{<∞} is not productively S-Menger?
Problem 7.8. Let X ⊆ [N]^∞ be a b-scale. Is the set X ∪ [N]^{<∞} necessarily productively Menger?

If u < g, then every d-concentrated set (in particular, every union of an S-scale, for some semifilter S, and the set [N]^{<∞}) is productively Menger [18, Theorem 4.7].
(1) Is it consistent that r < d and there are strictly unbounded Menger sets? (2) Is it consistent that there are no strictly unbounded Menger sets?

7.5. General spaces. Let S be a semifilter. Restricting the definition of S-Menger spaces to countable open covers, we obtain the definition of countably S-Menger spaces. This makes it possible to eliminate the adjective "hereditarily Lindelöf" in most of our theorems. For general Hurewicz spaces, the following problem remains open, even for the so-called b-scales [24, Definition 2.8].

Problem 7.10. Let cF be the semifilter of cofinite sets, X ⊆ [N]^∞ be a cF-scale, and Y be a Hurewicz space. Is the product space (X ∪ [N]^{<∞}) × Y necessarily Hurewicz?
A classic argument of Lawrence [15] implies that, for each cardinal number κ, the existence of a κ-concentrated real set is equivalent to the existence of a κ-unbounded set in [N]^∞. Essentially, this is due to the following fact.

Lemma 2.3. Let κ be a cardinal number, and X ⊆ [N]^∞ be a set with |X| ≥ κ. The set X is κ-unbounded if and only if the real set X ∪ [N]^{<∞} is κ-concentrated on [N]^{<∞}.

Proof. (⇒) Let U ⊆ P(N) be an open set containing the set [N]^{<∞}. The set K := P(N) \ U is a closed, and thus compact, subset of P(N). Since U ⊇ [N]^{<∞}, we have K ⊆ [N]^∞. Since compact subsets of [N]^∞ are ≤-bounded and the set X is κ-unbounded, we have
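The two notions in Lemma 2.3 can be written out explicitly. The following is one standard formulation, stated here as an assumption since the paper's own definitions appear in an earlier section:

```latex
\[
  X \subseteq [\mathbb{N}]^{\infty} \text{ is } \kappa\text{-unbounded}
  \ :\Longleftrightarrow\
  |X| \ge \kappa \ \text{ and } \
  |\{\, x \in X : x \le^{*} b \,\}| < \kappa
  \ \text{ for each } b \in [\mathbb{N}]^{\infty};
\]
\[
  Y \text{ is } \kappa\text{-concentrated on } D
  \ :\Longleftrightarrow\
  |Y \setminus U| < \kappa \ \text{ for every open set } U \supseteq D .
\]
```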
Let d be the minimal cardinality of a dominating set in [N]^∞. Much information about the cardinal number d, and about other ones defined below, is available [7]. Every real set of cardinality smaller than d is Menger, and no dominating subset of [N]^∞ is Menger [12, Theorem 4.4]. The former assertion implies that every d-concentrated real set is Menger.¹

Corollary 2.4. For each d-unbounded set X ⊆ [N]^∞, the real set X ∪ [N]^{<∞} is Menger.
Theorem 2.7. Let κ ∈ {cf(d), d}, and X ⊆ [N]^∞ be a set containing a κ-unbounded set. There is a d-concentrated real set Y such that the planar set X × Y is not Menger.

Proof. Let A ⊆ X be a κ-unbounded set. By moving to a subset of A, we may assume that |A| = κ. Let D ⊆ [N]^∞ be a dominating set of cardinality d. Decompose D = ⋃_{a∈A} I_a such that |⋃_{a∈B} I_a| < d for all sets B ⊆ A of cardinality smaller than κ. (If κ = d, we can take every set I_a to be a singleton). Fix elements a ∈ A and d ∈ I_a. Take a function d′ ∈ [N]^∞ such that a, d ≤ d′. Consider the set { a ⊎ d′ : a ∈ A, d ∈ I_a }. Its cardinality is at most d, and since its projection on the odd coordinates is dominating, its cardinality is exactly d.
(1) For every unbounded set X ⊆ [N]^∞, there is a d-concentrated real set Y such that the planar set X × Y is not Menger. (2) In the realm of hereditarily Lindelöf spaces: If a real set X is productively Menger, then it is productively Hurewicz.

Proof. (1) Let { d_α : α < d } be a dominating set in [N]^∞. Since b = d, for each ordinal number α < d the set { d_β : β < α } is bounded, and thus there is a function x_α ∈ X such that { d_β, x_β : β < α } <^∞ x_α. Then the subset { x_α : α < d } of the set X is d-unbounded, and Theorem 2.7 applies.

(2) Assume that there is a Hurewicz hereditarily Lindelöf space H such that the product space X × H is not Hurewicz. Then there is an unbounded upper continuous image Z of the space X × H in [N]^∞ [18, Theorem 7.3].
Theorem 6.4. Let F be a filter and X ⊆ [N]^∞ be a cofinal F-scale. Then, in the realm of hereditarily Lindelöf spaces, the real set X ∪ [N]^{<∞} is productively F-Menger.

Corollary 6.5. Let X ⊆ [N]^∞ be a cofinal cF-scale. Then the real set X ∪ [N]^{<∞} is productively F-Menger for all filters F.

Proof. Since the existence of a cofinal cF-scale implies b = d, a cofinal cF-scale is a cofinal F-scale for all filters F. Apply Theorem 6.4.
Problem 7.3. Assume that κ is a cardinal number with cf(d) < κ < d. Let X be a κ-unbounded set in [N]^∞. Is there necessarily a d-concentrated set Y such that the planar set X × Y is not Menger?

7.3. Products of Hurewicz sets. The following problem is intriguing.

Problem 7.4 (Scheepers [23, Problem 6.7]). Is it consistent that every product of two Hurewicz sets is Hurewicz?
Moreover, d-concentrated sets have the stronger selective property S 1 (Γ, O)[6,24].
In the notation of Section 4, fix an ultrafilter U with pseudobase P of cardinality r [7, Theorem 9.9], and take a ≤ U -dominating set D of cardinality b(U ), a regular cardinal number. Then the set { f • p : f ∈ D, p ∈ P } is dominating, and thus d ≤ b(U )(≤ d).
Semifilters are normally denoted by calligraphic letters. Here, we view them as sets of points in, and thus subspaces of, the Cantor space P(N). Thus, we use the standard typefaces, as we do for arbitrary points and sets in topological spaces.
Acknowledgments. We are indebted to Diego Alejandro Mejía for his Lemma 3.4. We also thank Will Brian and Ashutosh Kumar for their answers to questions we had during this study [8, 14], and Heike Mildenberger for information about the hypothesis r < d. The research of the first named author was supported by Etiuda 2, Polish National Science Center, UMO-2014/12/T/ST1/00627.
References

[1] J. Aubrey, Combinatorics for the dominating and unsplitting numbers, Journal of Symbolic Logic 69 (2004), 482-498.
[2] L. Aurichi, D-spaces, topological games, and selection principles, Topology Proceedings 36 (2010), 107-122.
[3] L. Aurichi, F. Tall, Lindelöf spaces which are indestructible, productive, or D, Topology and its Applications 159 (2012), 331-340.
[4] T. Banakh, L. Zdomskyy, The Coherence of Semifilters: a Survey, in: Selection Principles and Covering Properties in Topology (L. Kočinac, ed.), Quaderni di Matematica 18, Seconda Università di Napoli, Caserta, 2006, 53-99.
[5] T. Bartoszyński, S. Shelah, Continuous images of sets of reals, Topology and its Applications 116 (2001), 243-253.
[6] T. Bartoszyński, B. Tsaban, Hereditary topological diagonalizations and the Menger-Hurewicz Conjectures, Proceedings of the American Mathematical Society 134 (2006), 605-615.
[7] A. Blass, Combinatorial cardinal characteristics of the continuum, in: Handbook of Set Theory (M. Foreman, A. Kanamori, eds.), Springer, 2010, 395-489.
[8] W. Brian, Answer to A property of the Frechet filter and every ultrafilter, MathOverflow, 2015. http://mathoverflow.net/questions/201171
[9] D. Chodounsky, D. Repovš, L. Zdomskyy, Mathias forcing and combinatorial covering properties of filters, Journal of Symbolic Logic 80 (2015), 1398-1410.
[10] E. van Douwen, The integers and topology, in: Handbook of Set Theoretic Topology (K. Kunen, J. Vaughan, eds.), North-Holland, Amsterdam, 1984, 111-167.
[11] W. Hurewicz, Über eine Verallgemeinerung des Borelschen Theorems, Mathematische Zeitschrift 24 (1925), 401-421.
[12] W. Just, A. Miller, M. Scheepers, P. Szeptycki, The combinatorics of open covers II, Topology and its Applications 73 (1996), 241-266.
[13] A. Kamburelis, B. Węglorz, Splittings, Archive for Mathematical Logic 35 (1996), 263-277.
[14] A. Kumar, Answer to A classic cardinal characteristic of the continuum in disguise?, MathOverflow, 2015. http://mathoverflow.net/questions/201170
[15] L. Lawrence, The influence of a small cardinal on the product of a Lindelöf space and the irrationals, Proceedings of the American Mathematical Society 110 (1990), 535-542.
[16] D. Mejía, Answer for Bidi: A new cardinal characteristic of the continuum?, MathOverflow, 2015. http://mathoverflow.net/questions/206348
[17] K. Menger, Einige Überdeckungssätze der Punktmengenlehre, Sitzungsberichte der Wiener Akademie 133 (1924), 421-444.
[18] A. Miller, B. Tsaban, L. Zdomskyy, Selective covering properties of product spaces, Annals of Pure and Applied Logic 165 (2014), 1034-1057.
[19] D. Repovš, L. Zdomskyy, On M-separability of countable spaces and function spaces, Topology and its Applications 157 (2010), 2538-2541.
[20] M. Scheepers, B. Tsaban, The combinatorics of Borel covers, Topology and its Applications 121 (2002), 357-382.
[21] M. Talagrand, Compacts de fonctiones mesurables et filtres non mesurables, Studia Mathematica 67 (1980), 13-43.
[22] S. Todorčević, Aronszajn orderings, Publications de l'Institut Mathematique 57 (1995), 29-46.
[23] B. Tsaban, Selection principles and special sets of reals, in: Open Problems in Topology II (E. Pearl, ed.), Elsevier B.V., 2007, 91-108.
[24] B. Tsaban, Menger's and Hurewicz's Problems: Solutions from "The Book" and refinements, in: Set Theory and its Applications (L. Babinkostova, A. Caicedo, S. Geschke, M. Scheepers, eds.), Contemporary Mathematics 533 (2011), 211-226.
[25] B. Tsaban, Algebra in the Stone-Čech compactification, selections, and additive Ramsey theory, submitted for publication.
[26] B. Tsaban, L. Zdomskyy, Combinatorial images of sets of reals and semifilter trichotomy, Journal of Symbolic Logic 73 (2008), 1278-1288.
[27] B. Tsaban, L. Zdomskyy, Scales, fields, and a problem of Hurewicz, Journal of the European Mathematical Society 10 (2008), 837-866.
[28] B. Tsaban, L. Zdomskyy, Additivity of the Gerlits-Nagy property and concentrated sets, Proceedings of the American Mathematical Society 142 (2014), 2881-2890.
[29] L. Zdomskyy, A semifilter approach to selection principles, Commentationes Mathematicae Universitatis Carolinae 46 (2005), 525-539.

Boaz Tsaban, Department of Mathematics, Bar-Ilan University, Ramat Gan 5290002, Israel. E-mail address: [email protected]. URL: http://math.biu.ac.il/~tsaban
Optimizing Drone Delivery in Smart Cities
Babar Shahzaad
School of Computer Science
The University of Sydney
Australia
Balsam Alkouz
School of Computer Science
The University of Sydney
Australia
Jermaine Janszen
School of Computer Science
The University of Sydney
Australia
Athman Bouguettaya
School of Computer Science
The University of Sydney
Australia
We propose a novel context-aware drone delivery framework for optimizing package delivery through skyway networks in smart cities. We reformulate the problem of finding an optimal drone service delivery pathway as a more congruent and elegant drone delivery service composition problem. In this respect, we propose a novel line-of-sight heuristic-based context-aware composition algorithm that selects and composes near-optimal drone delivery services. We conducted an extensive experiment using a real dataset to show the robustness of our proposed approach.
UNMANNED AERIAL VEHICLES (UAVS) are gaining increasing attention as means for service provisioning in smart cities [1]. A drone is a popular type of UAV that offers potential benefits in smart city applications [2]. Drones are increasingly becoming pervasive in their use, including surveillance, agriculture, and delivery of goods [3]. During the COVID-19 pandemic, drones have been widely used for monitoring social distancing, aerial spraying, and delivery of goods. Several countries have used drones for safe and contactless deliveries during pandemic lockdowns [4]. Moreover, companies such as Amazon and Google have massively increased their investment in drones for delivery services [5]. Drone delivery services are typically targeted at consumers and suppliers of goods and services (e.g., retailers, pharmaceutical and medical suppliers, and transport services).
The service paradigm is congruent with the concept of drone service delivery [6]. It provides an elegant mechanism to define and model the drone's functional and non-functional aspects as a drone delivery service. In this respect, the functional property depicts the delivery of a package from one node to another while traversing a skyway network [7]. A drone's maximum payload weight, battery capacity, and flying range represent the non-functional (aka Quality of Service (QoS)) aspects. A skyway network is defined as a set of skyway segments that directly connect two nodes representing take-off and landing stations [8]. Take-off and landing stations (aka network nodes) are typically from and to building rooftops. Each network node acts as a delivery target and a recharging station. In this regard, an atomic drone delivery service is characterized by the transportation of a package using a drone along a skyway segment operating under a set of constraints.
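The skyway model above can be sketched in code. This is an illustrative sketch under our own assumptions: the class and field names (`DroneService`, `SkywayNetwork`, the particular QoS attributes) are hypothetical and not taken from the paper's framework.

```python
from dataclasses import dataclass

@dataclass
class DroneService:
    """An atomic drone delivery service: one flight along a skyway segment."""
    source: str
    target: str
    distance_km: float   # segment length (functional aspect)
    speed_kmh: float     # cruise speed (a QoS attribute)

    @property
    def flight_time_h(self) -> float:
        return self.distance_km / self.speed_kmh

class SkywayNetwork:
    """Skyway network: nodes are rooftop recharging stations,
    edges are atomic drone delivery services along skyway segments."""
    def __init__(self):
        self.services = {}   # (source, target) -> DroneService

    def add_segment(self, src, dst, distance_km, speed_kmh):
        # A segment may be traversed in either direction.
        for a, b in ((src, dst), (dst, src)):
            self.services[(a, b)] = DroneService(a, b, distance_km, speed_kmh)

    def neighbours(self, node):
        return [t for (s, t) in self.services if s == node]

net = SkywayNetwork()
net.add_segment("A", "B", 10.0, 50.0)
net.add_segment("B", "C", 15.0, 50.0)
print(net.neighbours("B"))                      # stations reachable from B
print(net.services[("A", "B")].flight_time_h)   # 10 km at 50 km/h -> 0.2 h
```

Each `DroneService` here carries only the attributes needed for the timing discussion; a fuller model would also record payload limits and battery state.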
Delivery drones are typically constrained by their battery capacity and flying range [9]. For example, the DJI M200 V2 drone can only fly as far as 32 km when fully charged. Several solutions have been proposed to address these constraints, such as battery swapping [10], solar-powered batteries, and the use of additional batteries [11]. However, these solutions either require a precise landing of drones, availability of spare batteries specific to drones, or are highly dependent on the weather conditions. The recharging of drones at intermediate stations is an applicable solution in the context of utilizing building rooftops as recharging stations [12]. In this regard, a drone may need to be recharged multiple times to cover long-distance areas. In service terms, this would translate into a service composition to deliver goods. An optimal drone delivery service composition is the process of selecting and composing the temporally optimal drone delivery services from a given source location to a delivery destination [2]. In this context, temporally optimal refers to leading towards the destination faster. Figure 1 provides a depiction of a drone service composition scenario.

Figure 1: Drone Service Composition Scenario
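One common way to realize a temporally optimal composition is a shortest-time search over the skyway graph, where each intermediate stop adds a recharging delay. The sketch below uses Dijkstra's algorithm for illustration; the station names, times, and the fixed recharge delay are made-up values, and this is not presented as the paper's own composition algorithm.

```python
import heapq

def compose(services, recharge_h, src, dst):
    """services: dict (u, v) -> flight time in hours (skyway segments).
    recharge_h: fixed recharging delay at each intermediate station.
    Returns (total_time, [stations]) for the fastest composition."""
    graph = {}
    for (u, v), t in services.items():
        graph.setdefault(u, []).append((v, t))
    best = {src: 0.0}
    queue = [(0.0, src, [src])]
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == dst:
            return time, path
        if time > best.get(node, float("inf")):
            continue
        for nxt, t in graph.get(node, []):
            # every stop except the destination incurs a recharge delay
            cost = time + t + (recharge_h if nxt != dst else 0.0)
            if cost < best.get(nxt, float("inf")):
                best[nxt] = cost
                heapq.heappush(queue, (cost, nxt, path + [nxt]))
    return float("inf"), []

services = {("S", "A"): 0.3, ("A", "D"): 0.3,   # two short hops, one recharge
            ("S", "D"): 0.9}                     # one direct, slower hop
print(compose(services, 0.2, "S", "D"))          # the two-hop plan wins
```

With a 0.2 h recharge, the two-hop plan costs 0.3 + 0.2 + 0.3 = 0.8 h and beats the 0.9 h direct segment, illustrating why composition, not single-segment choice, drives delivery time.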
Drone service composition broadly involves two types of constraints: (1) internal/inherent constraints and (2) external/environmental constraints [13]. The internal/inherent constraints include limited payload weight, flying range, and limited battery capacity of a drone. The external/environmental constraints include recharging pad availability, congestion at stations, and wind conditions. Existing approaches mainly focus on the last-mile delivery services [14] or delivery in deterministic environments [15]. In these approaches, assumptions are simplified to ignore the impact of the constraints mentioned above on drone delivery plans. In contrast, we make no such simplifying assumptions in our work. We consider the internal and external constraints of drone-based provisioning of delivery.
We propose a context-aware drone delivery framework for effectively provisioning delivery services in smart cities. In this paper, context-awareness refers to the capability of leveraging internal/inherent and external/environmental constraint information to tailor a near-optimal drone service composition. We develop a Line-of-Sight (LOS) heuristic composition algorithm that selects and composes the best drone delivery services. The LOS heuristic uses a straight line from the drone's current location to the target location to determine the next optimal node to visit. In this respect, the best drone service is a service that guarantees package delivery in a minimum time from its start to end location. Using a real dataset, we conduct experiments to demonstrate the effectiveness of our LOS heuristic approach.
Related Work
Existing drone delivery planning and scheduling approaches can be divided into point-to-point and multi-point delivery approaches [2]. In point-to-point delivery approaches, the deliveries are made in a limited geographical area. In such cases, a drone takes off from the warehouse, delivers the package directly to the customer, and returns to the warehouse. A job assignment problem is studied to dimension and control a fleet of drones in a drone delivery system [16]. Two policies are proposed using queuing theory to solve the job assignment problem. The first policy uses the customer's location to select the jobs. The second policy uses the arrival time of the customer's request to select the jobs. Simulation experiments show that the second policy is more effective in optimizing the delivery time for low loads and performs well for high loads.
A drone service framework is proposed to provide delivery services [6]. Scheduling, route planning, and composition are the fundamental components of the proposed framework. The scheduling generates itineraries for drones in a network. A route-planning algorithm is proposed focusing on the selection of the optimal route. The drone services are composed at each station using a drone service composition algorithm. The proposed framework does not consider the LOS drone flying regulations and drones' recharging requirements.
Our previous studies are the first to model the multi-point drone deliveries using the service paradigm [15], [7]. A Drone-as-a-Service (DaaS) framework is presented for package delivery using drones [15]. This paper was the first to model a drone's capabilities as a DaaS. This study aims to select and compose the right set of DaaS for faster package delivery. A heuristic-based algorithm is presented for DaaS selection and composition. The proposed composition approach considers the environment to be static and ignores the recharging constraints, which is not realistic in practical scenarios.
A dynamic top-k service composition approach is proposed for drone delivery services [7]. Two drone service models are presented considering no-congestion and congestion conditions. The no-congestion drone service model ignores the congestion conditions and computes the initial top-k compositions. This initial plan is updated after incorporating congestion conditions using the congestion model. The impact of wind conditions is not addressed in the proposed approach. To the best of our knowledge, none of the existing approaches consider the effects of the relative wind (i.e., headwind, tailwind, and crosswind) on the drone's energy consumption. This paper is the first attempt to propose a context-aware drone delivery framework that considers the congestion conditions at recharging stations and the effect of relative wind on the drone's energy consumption.
Context-Aware Drone Delivery Framework
Multi-Drone Skyway Network
This section describes our multi-drone skyway network, where multiple drones are assumed to operate to deliver packages to respective delivery targets. We focus solely on using drones to perform deliveries through a skyway network in a given geographical area. A skyway network enables the real-world deployment of drone delivery systems in integrated airspace. We construct a multi-drone skyway network by linking predefined skyway segments between any two nodes within the line of sight. Each node has a landing pad on a residential/commercial rooftop. Our proposed skyway network uses a city's existing infrastructure where wireless recharging pads can easily and cheaply be fitted on building rooftops.
We formally define a drone delivery service and request for drone delivery as follows.
Definition 1: We define a Drone Delivery Service (DDS) as a tuple < DDS.id, DDS.f, DDS.q >, where
• DDS.id represents a unique ID,
• DDS.f is the delivery function of a drone D to transport a package over a skyway segment,
• DDS.q is a set of QoS attributes of a drone delivery service.
Definition 2: A drone delivery service request is a tuple < ζ, ξ, rt s , w >, where
• ζ represents the source (i.e., warehouse), • ξ represents the destination (i.e., customer location), • rt s represents the start time of the request, • w represents the weight of the package to be delivered.
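The two definitions above can be sketched as plain record types; the field types and the example values below are illustrative assumptions rather than part of the formal model.

```python
from dataclasses import dataclass, field

@dataclass
class DDS:
    """Drone Delivery Service over a single skyway segment (Definition 1)."""
    id: str                                  # DDS.id: unique service identifier
    segment: tuple                           # DDS.f: (take-off node, landing node) served by drone D
    qos: dict = field(default_factory=dict)  # DDS.q: QoS attributes, e.g. battery, payload capacity

@dataclass
class DeliveryRequest:
    """Consumer request for a drone delivery (Definition 2)."""
    source: str        # ζ: warehouse node
    destination: str   # ξ: customer node
    start_time: float  # rt_s: start time of the request
    weight: float      # w: weight of the package

req = DeliveryRequest(source="A", destination="F", start_time=0.0, weight=0.5)
print(req.destination)   # F
```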
Problem Formulation
Given a drone-based delivery service request from a consumer, we formulate the drone delivery problem as the composition of the best DDS to form a skyway path from source ζ to destination ξ. The aforementioned internal and external constraints pose significant challenges to the composition of drone delivery services. First, the right drone service must be selected to serve the delivery request. This selection process involves the consideration of both internal and external constraints for efficient service delivery. Second, subsequent drone services must be selected if a single drone service does not satisfy the delivery request due to battery limitations or line-of-sight drone flying regulations. Third, congestion conditions at recharging stations must be avoided, and a wind-favored skyway path selected for faster service delivery. The objective of this problem formulation is to select and compose the drone services with the overall shortest delivery time from source to destination while considering all the aforementioned constraints and challenges.
Line-of-Sight Heuristic Composition
We propose a LOS heuristic composition algorithm using an adapted A* algorithm to compose near-optimal services from source to destination. In Algorithm 1, a consumer-invoked service request includes the source node, destination node, and package weight. At the source node, the algorithm computes the drone's probability P(A) of arriving at the destination straightaway without stopping at intermediate stations. P(A) is computed using the path with the shortest distance to the destination. If the drone can fly this path and reach the destination directly before battery depletion, given the wind and payload constraints, then the value of P(A) is 1 and the drone goes directly to the destination using this path. We assume a global wind direction θ and speed affecting all the skyway network segments. As shown in Equation 1, we compute the relative wind impact on the drone (α) by subtracting the drone's orientation (β) from the global wind direction (θ). We convert any α value to the ranges (0° to 180°) and (0° to −180°) (Lines 9-13). Since the drone travels in a line of sight, we compute the drone's orientation using the positions of the two nodes connecting the segment (Equation 2). Wind impact on a drone is typically modeled in three ways: headwind, tailwind, and crosswind [17]. The real dataset, which is described in the Real Dataset section, captures the effect of these types on the drone's energy consumption. However, because drones fly in a skyway network, the orientation of the drone may differ from the energy-favorable wind direction. Thus, our formula captures the effect of the relative wind direction, which combines the orientation of the drone β and the global wind direction θ. We compute the arriving probability using the shortest path from source to destination in terms of distance only. The probability considers the payload and wind constraints at each segment.
If the drone cannot reach the destination straightaway, the drone goes to the best neighboring recharging node (Lines 21-24).
α = θ − β    (1)
β = atan2(y2 − y1, x2 − x1)    (2)
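Equations (1) and (2), together with the wrap-around applied in Lines 9-13 of Algorithm 1, can be sketched as:

```python
import math

def segment_orientation(x1, y1, x2, y2):
    """β (Eq. 2): drone orientation along a line-of-sight segment, in degrees."""
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

def relative_wind(theta, beta):
    """α (Eq. 1): relative wind angle, wrapped into the (-180, 180] degree range."""
    alpha = theta - beta
    if alpha > 180:
        alpha -= 360
    elif alpha < -180:
        alpha += 360
    return alpha

# A drone flying due east (β = 0°) under a global wind direction of 90°:
beta = segment_orientation(0, 0, 1, 0)   # 0.0
print(relative_wind(90, beta))           # 90.0 → crosswind
print(relative_wind(-170, 20))           # 170 (after wrapping)
```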
The best node is a node with the least travel and transit times. Transit time consists of the drone's recharging time rt and waiting time wt if a recharging pad is occupied. Each segment's energy consumption rate and travel time are extracted from the dataset described in the Real Dataset section based on the segment's length, package weight, and wind conditions. When selecting the best neighbor, the euclidean distance and euclidean travel time ett between each neighbor and the final destination are considered as a heuristic that adds to the travel time and transit time factors. Including the euclidean travel time factor ensures that a neighbor in the direction of the destination is selected and that the algorithm converges. When the drone travels to the best neighbor, the delivery time is updated with the travel time of the segment and the transit time. The algorithm again checks if the drone can reach the destination directly. The process continues until the drone successfully reaches its destination. Because the proposed algorithm is a modified version of A*, the worst-case complexity is O(|E|) = O(b^d). However, the heuristic function has a significant impact on the practical performance of the search because it allows the algorithm to prune away many of the b^d nodes that an uninformed search would expand. Furthermore, because our algorithm can skip nodes when applicable, the complexity is typically reduced.

Algorithm 1 LOS Heuristic Composition Algorithm
Input: D, R, θ
Output: dt
1: dt = 0
2: while D is not at destination do
3:     path_to_destination = Dijkstra(current, destination)
4:     distance_to_destination = 0
5:     for every segment s_i in path_to_destination do
6:         distance_to_destination += distance(s_i)
7:         βs_i = atan2(sy_{i+1} − sy_i, sx_{i+1} − sx_i)
8:         αs_i = θ − βs_i
9:         if αs_i > 180 then
10:            αs_i −= 360
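The neighbor-selection step of the heuristic (Lines 21-22 of Algorithm 1) can be sketched in a few lines; the candidate nodes and their travel (tt), recharging (rt), waiting (wt), and euclidean (ett) times below are made-up illustrative values, not figures from the real dataset.

```python
# One selection step of the LOS heuristic: pick the neighbor minimising
# travel time + recharging time + waiting time + euclidean time-to-goal (ett).
def best_neighbor(neighbors):
    """neighbors: dict node -> (tt, rt, wt, ett); returns the min-score node."""
    return min(neighbors, key=lambda n: sum(neighbors[n]))

# Hypothetical costs (minutes) for three candidate nodes:
candidates = {
    "B": (4.0, 10.0, 0.0, 12.0),   # short hop but long recharge
    "C": (6.0, 5.0, 2.0, 9.0),     # wind-favoured and closer to the goal
    "D": (5.0, 5.0, 8.0, 9.0),     # congested recharging pad
}
print(best_neighbor(candidates))   # prints C (lowest total: 22.0 vs 26.0 and 27.0)
```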
Real Dataset
We use a real dataset collected on various drone parameters [13]. An indoor drone testbed is set up using a 3D printed model of Sydney's CBD to construct a skyway network (Figure 2a). A drone would typically traverse building rooftops equipped with recharging pads to serve the delivery request. We use HTC Vive base stations to assist the drone in locating its precise position during the flight. The base stations are fitted at diagonal corners of the lab. We use a Fanco Premium Pedestal fan to generate wind speeds in different directions. In real-world scenarios, various wind effects affect urban environments, such as venturi, downdraught, downwind eddy, and counter-current effects [18]. However, at any given time, the drone is primarily exposed to wind from one direction, i.e., headwind, tailwind, or crosswind. Therefore, we investigate the impact of wind direction on a drone scale rather than an urban scale, i.e., the major wind hitting the drone at any point in time. Furthermore, a city would typically include one weather base station that reports global wind speed and direction for practical reasons. It is not always possible to obtain microclimate data at the urban or building level. As a result, our proposed solution models wind impact on a drone by comparing the global wind direction to the drone's orientation. We construct a real miniature skyway network using the building rooftops of Sydney's CBD (Figure 2b). The network consists of 36 nodes and seven no-flight zones following aviation regulations for safety and privacy purposes. The collected dataset records the impact of wind conditions and different payloads on the drone's energy consumption over a set of trajectory patterns. The trajectory patterns include hovering, linear, rectangular, and triangular flight paths. This dataset contains various attributes of a drone during each flight, including XY positions, altitude, energy consumption, and wind speed and direction.
The data are stored in the form of CSV files 1 .
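Each flight log can be parsed with the standard csv module. The column names below are assumptions for illustration only; the actual schema of the released CSV files may differ.

```python
import csv
import io

# Hypothetical excerpt of a flight log; real column names may differ.
raw = io.StringIO(
    "x,y,altitude,energy,wind_speed,wind_direction\n"
    "0.0,0.0,1.2,0.8,3.5,90\n"
    "0.5,0.0,1.2,1.1,3.5,90\n"
)
rows = list(csv.DictReader(raw))
total_energy = sum(float(r["energy"]) for r in rows)
print(round(total_energy, 2))   # 1.9 for this made-up excerpt
```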
Experiments and Results
We assess our proposed LOS heuristic composition algorithm compared to flagship algorithms (Dijkstra, Floyd-Warshall, Bellman-Ford), Modified Bellman-Ford (Energy-based Bellman-Ford), and Exhaustive Search. The weight of the edges in the flagship algorithms is set to be the sum of the travel time on the edge, the recharging time at the next node, and the waiting time caused by preoccupied recharging pads at the next node. The weights of the edges in the Energy-based Bellman-Ford algorithm represent the energy consumed at each edge. We evaluate the algorithms regarding the controlling attributes in a context-aware composition. The controlling attributes include the wind status, carried payload, and recharging pad's availability. The experiments are run on the aforementioned real miniature skyway network. The skyway segments have been scaled up by 50 to mimic the need for recharging stations. We generate 2000 consumer requests with random sources, destinations, and package payloads. We assume that the drone's flight speed is 30 cm/s at the smaller scale and the time it takes to fully charge an empty battery is 40 minutes 2 . The drones are assumed to be operating under global wind conditions, i.e., speed and direction. The experiments were carried out on a MacBook Pro, Apple M1 Chip, 8 cores, 16 GB memory, and 1 TB SSD.

Average Delivery Time. In the first experiment, we evaluate the delivery time of the generated requests. The requests are grouped by the number of nodes between the source nodes and the destination nodes of the requests using Dijkstra's algorithm. The x-axis in Figures 3a and 3d represents the number of intermediate nodes. The proposed LOS heuristic algorithm outperforms all algorithms except the Exhaustive Search algorithm, as illustrated in Figure 3a. Our proposed algorithm performs well because of its ability to skip nodes when no recharging is required, resulting in shorter delivery times. On the contrary, the other flagship algorithms select paths that may reduce the distance traveled but visit nodes with longer transit times. All flagship algorithms compute the same paths and delivery times as they behave similarly except when negative edges exist, which is not the case in our scenario. For the Energy-based Bellman-Ford algorithm, the delivery time is still higher than the proposed LOS heuristic but lower than the three flagship algorithms at lower node counts only. The energy-based modified algorithm prefers edges that result in lower recharging times. Lower recharging time refers to edges that are shorter and wind-favored. Therefore, when the number of intermediate nodes between the source and destination is small, the significance of waiting times at each node is not high, resulting in lower delivery times. With more nodes, the waiting time at each node factors in, increasing the overall delivery time.
In the exhaustive search algorithm, the top 100 shortest paths in terms of distance only are generated. The path with the least delivery time considering all the constraints is selected. Since the network is very dense, i.e., highly connected, generating all possible paths between source and destination is not feasible. Therefore, top k paths are generated. We arbitrarily chose a value of 100 because a lower value of k resulted in unsuccessful compositions. An unsuccessful composition is one in which a drone's battery is depleted before reaching the next node under the current wind and payload conditions. As shown in Figure 3a, the exhaustive search outperforms all the algorithms regarding delivery times as it computes all the possible compositions. However, this performance comes with a high execution time cost, making it infeasible for larger network sizes.
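The exhaustive baseline described above can be sketched with the standard library alone: enumerate simple paths, keep the k distance-shortest ones, and evaluate a delivery-time function on each. The toy network and the congestion penalty in `dt` are illustrative assumptions, not the paper's actual cost model.

```python
def simple_paths(graph, src, dst, path=None):
    """Yield all simple paths in an adjacency dict {node: {neighbor: distance}}."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nbr in graph.get(src, {}):
        if nbr not in path:
            yield from simple_paths(graph, nbr, dst, path)

def exhaustive_search(graph, src, dst, delivery_time, k=100):
    """Evaluate delivery_time(path) on the k distance-shortest simple paths."""
    dist = lambda p: sum(graph[a][b] for a, b in zip(p, p[1:]))
    top_k = sorted(simple_paths(graph, src, dst), key=dist)[:k]
    return min(top_k, key=delivery_time)

# Toy skyway network; the delivery-time function penalises the congested node "B".
g = {"S": {"A": 1, "B": 1}, "A": {"T": 2}, "B": {"T": 1}, "T": {}}
dt = lambda p: sum(g[a][b] for a, b in zip(p, p[1:])) + (30 if "B" in p else 0)
print(exhaustive_search(g, "S", "T", dt))   # ['S', 'A', 'T']
```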
Relative Wind Effect on Energy Consumption. We analyze the effect of the relative wind on energy consumption. We first group the requests by distance, considering the first two groups only, i.e., requests with a total distance traveled between 0-15 km and 15-30 km. Then, within each group, we group the requests with paths that have an average relative wind within 45° steps between −180° and 180°. As shown in Figure 3b, when the drone travels with the wind direction (0°) or against the wind direction (−180° and 180°), it consumes the least amount of energy. Therefore, if most of the segments composed in a path are in the south or north, the energy consumption is the least. The translational lift increases with a headwind due to the relative airflow increase, resulting in less energy consumption [19]. Similarly, the energy consumption is at its lowest with a tailwind because the wind helps reduce the drag forces on the drone. When the relative wind is a crosswind, the drone consumes more energy than in other directions. We conclude that if two possible paths are available between the source and destination, the path where the drone is oriented to travel with or against the wind is best in terms of energy consumption. This results in shorter recharge and overall delivery times. As shown in Figure 3c, path 2 consumes less energy (190%) than path 1 (273.5%), assuming both travel the same distance. This difference in energy consumption is because the drone following path 1 travels a larger distance against the global wind direction than path 2. Furthermore, we observe that the flagship methods slightly outperform the LOS heuristic method in terms of energy consumption, as shown in Figure 3d. This performance of the flagship methods is due to selecting paths with shorter distances, where all the energy consumption occurs. The modified Energy-based Bellman-Ford consumes slightly less energy than the flagship methods, as it favors paths with less energy consumption.
The exhaustive search method outperforms all methods because it selects paths that are both short in distance and wind-favored.
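The 45° binning behind the analysis in Figure 3b can be reproduced in a few lines; the (relative wind, energy) observations below are invented for illustration.

```python
from collections import defaultdict

def wind_bin(alpha, step=45):
    """Map a relative wind angle in [-180, 180] to the lower edge of its 45-degree bin."""
    return max(-180, min(135, step * (alpha // step)))

def mean_energy_by_bin(samples):
    """samples: (avg relative wind, energy) pairs -> {bin lower edge: mean energy}."""
    groups = defaultdict(list)
    for alpha, energy in samples:
        groups[wind_bin(alpha)].append(energy)
    return {b: sum(v) / len(v) for b, v in sorted(groups.items())}

# Illustrative (relative wind angle, energy consumption) observations:
obs = [(-170, 180), (5, 175), (20, 185), (95, 260), (100, 270)]
print(mean_energy_by_bin(obs))   # one mean energy value per occupied 45-degree bin
```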
Conclusion
We present a novel context-aware drone delivery framework using the service paradigm. We model the service as a segment in a skyway network served by a drone. Therefore, a skyway path from source to destination is a service composition consisting of multiple segments served by a drone. We conduct experiments using a real dataset to evaluate our proposed algorithm compared to flagship algorithms and an exhaustive search algorithm. We observe that our proposed algorithm delivers the packages faster than other algorithms while maintaining a similar trend of energy consumption. In the future, we plan to investigate the impact of other environmental factors such as temperature and precipitation on a drone's energy consumption and delivery time.
Figure 2: 3D Model of Sydney's CBD and Skyway Network. (a) Indoor Drone Testbed of Sydney's CBD; (b) Skyway Network with No-Flight Zones.

Figure 3: Average Delivery Time and Energy Consumption with Relative Wind.

1 shorturl.at/cGLSV
11:        else if αs_i < −180 then
12:            αs_i += 360
13:        end if
14:    end for
15:    compute energy consumption for D based on R's package weight, distance_to_destination, and α
16:    if P(A) = 1 then
17:        D can reach the destination without intermediate nodes
18:        D travels to the destination
19:        dt += travel time
20:    else
21:        find nearest neighbor nodes
22:        select best neighboring node: min(tt + rt + wt + ett)
23:        D travels to the neighboring node
24:        dt += tt + rt + wt
25:    end if
26: end while
27: return dt
Babar Shahzaad is a Research Fellow in the School of Information Systems at Queensland University of Technology (QUT). He has completed his Ph.D. in Computer Science from The University of Sydney. He has published in top-ranked conferences and journals, including IEEE ICWS, ICSOC, IEEE IoT, and FGCS. His research interests include the Industrial Internet of Things, ICN/NDN applications for the Internet of Things (IoT), Service Computing, and Drone-based Delivery Services in Smart Cities. Contact him at [email protected]

Balsam Alkouz is a Ph.D. student in the School of Computer Science at the University of Sydney. She received her bachelor's degree in IT Multimedia and her master's degree in Computer Science from the University of Sharjah, United Arab Emirates, in 2016 and 2018 respectively. She worked as a Research Assistant in the Data Mining and Multimedia Research Group at the University of Sharjah. Her research focuses on IoT, Service Computing, and Data Mining. Contact her at [email protected]

Jermaine Janszen is a Software Engineer at WiseTech Global. He completed his Honours degree under the supervision of Prof. Athman Bouguettaya at the University of Sydney. His interests focus on Drone-based Services in smart cities. Contact him at [email protected]

Athman Bouguettaya is a Professor in the School of Computer Science at the University of Sydney. He received his Ph.D. in Computer Science from the University of Colorado at Boulder (USA) in 1992. He is or has been on the editorial boards of several journals, including the IEEE Transactions on Services Computing, ACM Transactions on Internet Technology, the International Journal on Next Generation Computing, and the VLDB Journal. He is a Fellow of the IEEE and a Distinguished Scientist of the ACM. He is a member of the Academia Europaea (MAE). Contact him at [email protected]
https://www.bitcraze.io/support/f-a-q/
Acknowledgment
This research was partly made possible by the LE220100078 and DP220101823 grants from the Australian Research Council. The statements made herein are solely the responsibility of the authors.
[1] F. F. Mueller and A. Schmidt, "Drones ripe for pervasive use," IEEE Pervasive Computing, pp. 21-23, 2017.
[2] B. Alkouz, B. Shahzaad, and A. Bouguettaya, "Service-based drone delivery," in IEEE CIC, 2021, pp. 68-76.
[3] G. Chmaj and H. Selvaraj, "Distributed processing applications for UAV/drones: A survey," in Progress in Systems Engineering, 2015, pp. 449-454.
[4] A. Kumar et al., "A drone-based networked system and methods for combating coronavirus disease (COVID-19) pandemic," Future Generation Computer Systems, pp. 1-19, 2021.
[5] J.-P. Aurambout, K. Gkoumas, and B. Ciuffo, "Last mile delivery by drones: an estimation of viable market potential and access to citizens across european cities," European Transport Research Review, pp. 1-21, 2019.
[6] A. Hamdi et al., "Drone-as-a-service composition under uncertainty," IEEE Transactions on Services Computing, pp. 2685-2698, 2022.
[7] B. Shahzaad and A. Bouguettaya, "Top-k dynamic service composition in skyway networks," in ICSOC, 2021, pp. 479-495.
[8] W. Lee et al., "Package delivery using autonomous drones in skyways," in Proc. UbiComp/ISWC, 2021, pp. 48-50.
[9] R. D'Andrea, "Guest editorial can drones deliver?" IEEE Transactions on Automation Science and Engineering, pp. 647-648, 2014.
[10] T. Cokyasar, W. Dong, M. Jin, and İ. Ö. Verbas, "Designing a drone delivery network with automated battery swapping machines," Computers & Operations Research, vol. 129, p. 105177, 2021.
[11] B. Galkin, J. Kibilda, and L. A. DaSilva, "UAVs as mobile infrastructure: Addressing battery lifetime," IEEE Communications Magazine, pp. 132-137, 2019.
[12] H. Huang and A. V. Savkin, "Deployment of charging stations for drone delivery assisted by public transportation vehicles," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 9, pp. 15043-15054, 2021.
[13] J. Janszen et al., "Constraint-aware trajectory for drone delivery services," in ICSOC, 2022, pp. 306-310.
[14] G. Brunner et al., "The urban last mile problem: Autonomous drone delivery to your balcony," in ICUAS, 2019, pp. 1005-1012.
[15] B. Shahzaad et al., "Composing drone-as-a-service (DaaS) for delivery," in IEEE ICWS, 2019, pp. 28-32.
[16] P. Grippa et al., "Drone delivery systems: Job assignment and dimensioning," Autonomous Robots, pp. 261-274, 2019.
[17] T. Chu et al., "Simulation and characterization of wind impacts on sUAS flight performance for crash scene reconstruction," Drones, pp. 1-23, 2021.
[18] G. Kang, J.-J. Kim, and W. Choi, "Computational fluid dynamics simulation of tree effects on pedestrian wind comfort in an urban area," Sustainable Cities and Society, pp. 1-17, 2020.
[19] A. Thibbotuwawa et al., "Energy consumption in unmanned aerial vehicles: A review of energy consumption models and their relation to the UAV routing," in ISAT, 2018, pp. 173-184.
How to Make Privacy Policies both GDPR-Compliant and Usable
Karen Renaud [email protected]
School of Design and Informatics, Abertay University, Dundee, United Kingdom

Lynsay A Shepherd [email protected]
School of Design and Informatics, Abertay University, Dundee, United Kingdom
It is important for organisations to ensure that their privacy policies are General Data Protection Regulation (GDPR) compliant, and this has to be done by the May 2018 deadline. However, it is also important for these policies to be designed with the needs of the human recipient in mind. We carried out an investigation to find out how best to achieve this.We commenced by synthesising the GDPR requirements into a checklist-type format. We then derived a list of usability design guidelines for privacy notifications from the research literature. We augmented the recommendations with other findings reported in the research literature, in order to confirm the guidelines. We conclude by providing a usable and GDPR-compliant privacy policy template for the benefit of policy writers.
I. INTRODUCTION
Those who surf the web risk having their privacy violated. They need to be informed about what personal data websites are collecting so that they can choose to patronise those who do not violate their privacy, or opt out of the use of their information. In other contexts there is evidence that people do respond to warnings [1], [2], with confirmation from a study in the privacy context [3]. Yet it is non-trivial to design effective privacy policies [4].
Obar and Oeldorf-Hirsch [5] found that 74% of the 543 people in their study did not even read the privacy policy. Where websites force users to read and agree to their policies (e.g. Google), they often become discouraged and overwhelmed because the text is overly long or incomprehensible [6]. Computer users often receive too many privacy advisements [3], [7], [8], and sometimes do not know what actions to take as a consequence of policy information [9].
The Web Content Accessibility Guidelines 1 require notifications to be perceivable, operable, understandable and robust [10]. The evidence from investigations into privacy policy examples suggests that they do not demonstrate these qualities [6]. This diminishes the efficacy of policy notifications, and leaves users vulnerable to unknowingly carrying out actions that will compromise their privacy.
The advent of GDPR adds another level of complexity to the design of privacy policies. Guidance provided by the Information Commissioner's Office [11] stresses the importance of communicating the necessary privacy information to
The problem is perhaps that traditional usability guidelines cannot necessarily be used "as-is" in the privacy context because usability testing is usually related to primary task completion. Privacy, on the other hand, is seldom the end user's primary task [13], [14]. That being so, the display of a privacy policy can interrupt the user's pursuit of their primary goal and is thus often perceived to be a nuisance [15]. We need bespoke guidelines to inform policy design in the privacy context.
Waldman [12, p. 8] reports that their review of 191 privacy policies convinced them that "today's privacy policies are not designed with readability, comprehension, and access in mind". This justifies the need for explicit usability guidelines to be provided to web privacy policy writers.
Our work seeks to inform policy writers, with guidance that is specifically tailored towards browser-based privacy policies that are both usable and GDPR-compliant.
We first detail the context of our investigations in Section II then summarise the GDPR legislation requirements in Section III. We carried out a systematic literature review of design guidelines for designing usable privacy policies (Section IV).
To make our guidelines as helpful as possible, we decided to convey the spirit of the guidelines in the form of a privacy policy template. This conveys the "how" of privacy policy design, rather than the "what", as encapsulated in a linear set of policy design guidelines. The paper provides a template pattern for a policy that is both usable and GDPR compliant (Section V), before concluding in Sections VI and VII.

II. PREAMBLE

Wogalter and Mayhorn [16] explain that warnings (policy items) are a type of risk communication. Wogalter [17] explains that warnings have two purposes, to: (1) communicate information, and (2) reduce unwise behaviours. To achieve these aims, the policies have to be designed carefully.
To understand how humans process communications, we need to look at how researchers have modeled this.
Wogalter, DeJoy, and Laughery [18] developed the C-HIP model in the context of warning research. Their model builds on initial human communication models proposed by Shannon [19] and Lasswell [20]. Wogalter et al.'s model can be considered somewhat unrealistic because, unlike Shannon's, it does not include a noise component. In a world of noisy communication such a model cannot be complete. Cranor [21] proposed a human-in-the-loop framework which is more comprehensive and reflects the factors impacting communications in the context of security notifications.
It is important to realise that security and privacy are fundamentally different concepts. Skinner et al. [22] argue that a secure information system does not necessarily imply that privacy will be preserved in the system. Gritzalis and Lambrinoudakis [23], and Bambauer [24, p. 667] make similar arguments. As an example, they refer to a company that collects customer information and stores it in an encrypted format. This ensures that the information is secured. Yet the same company may sell the information to another company, thereby violating the owner's privacy.
Privacy and security, being clearly distinct concepts, require different models of notification design. This means that we cannot merely use the security communication processing models to inform the design of privacy policies. In the absence of a published privacy communication model, we plan to use the GDPR legislation to structure our privacy policy design guidelines.
III. GDPR LEGISLATION
The introduction of the GDPR is said to be "the most important change in data privacy regulation in 20 years" [25]. The legislation will come into force on the 25th May 2018, and replaces the existing Data Protection Directive 95/46/EC. Organisations that fail to comply will be subject to significant fines. The main GDPR requirements are that customers must be informed about (numbering is ours):
GDPR1: Specify Data Being Collected: Customers should be aware of the information that is collected about them. Furthermore, businesses should document the information that is collected, which links into the accountability required by GDPR [11].
GDPR2: Justification For Data Collection: Organisations must explain their rights to collect data [11], but they should also justify to themselves exactly why they need to collect such information [26].
GDPR3: How Data Will Be Processed: The organisation must inform the customer of the lawful rights it has to process personal data [11].
GDPR outlines the ways in which processing is deemed legal (one of the following must apply): the customer has given consent for this to be done for a specific purpose; it is used to form a contract with a customer; the data controller is complying with a legal obligation; it is used to protect the interests of a person; it is required for a task involving the public interest; or it is required for a legitimate purpose by the controller (provided rights and freedoms are not violated) [27]. Moreover, the person has the right to opt out of the processing of their data by an algorithm, or any other profiling.
Under Article 9 of the legislation, there is a special category of data, deemed 'sensitive data', which requires further protection. This information can include details of an individual's health, political views, religion, etc. A lawful basis for processing such information must be given (these bases are outlined in the previous paragraph), and a separate basis must be provided for processing special category data [28]. Examples of reasons for processing such data include: it may be necessary for reasons of public health, or it may be necessary for the progression of legal claims [28].
GDPR4: How Long Data Will Be Retained: GDPR dictates that data should be held for the minimum amount of time, and organisations must state how long data is retained [11], [29].
GDPR5: Who Can Be Contacted to Have Data Removed or Produced: People have the right to have all their data, both that provided to the company and that observed by its systems: (1) forgotten, and (2) provided to them. To facilitate this, contact details must be provided in the policy [30], [31]. Within the organisation, someone must take responsibility for the stored and processed data. Customers should also be informed who the Data Protection Officer (controller) is, and how to get in touch with them, should they have an access request [11]. Customers should also be provided with a timescale within which subject access requests will be handled by the organisation [11].
GDPR6: Communication of Privacy Information: Documentation on the legislation notes that it "requires the information to be provided in concise, easy to understand and clear language" [11].
We now present a GDPR-compliant policy template in Figure 1.
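As a rough aid for policy writers, the six requirements above can also be treated as a machine-checkable checklist. The Python sketch below is our own illustration (the abbreviated section texts echo the template; none of the names are prescribed by the GDPR itself):

```python
from dataclasses import dataclass

# GDPR requirements as numbered in this paper (GDPR1-GDPR6).
REQUIREMENTS = {
    1: "data being collected",
    2: "justification for collection",
    3: "how data will be processed",
    4: "how long data is retained",
    5: "contact for removal/production",
    6: "clear, concise language",
}

@dataclass
class PolicySection:
    text: str
    covers: frozenset  # GDPR requirement numbers this section addresses

def uncovered(sections):
    """Return the GDPR requirement numbers that no section addresses."""
    covered = set()
    for s in sections:
        covered |= s.covers
    return sorted(set(REQUIREMENTS) - covered)

draft = [
    PolicySection("We may keep your name, home address, email and phone number",
                  frozenset({1, 6})),
    PolicySection("Opt in to personalised services and 3rd-party adverts",
                  frozenset({2, 3, 6})),
    PolicySection("Questions? Email [email protected]",
                  frozenset({5, 6})),
]
print(uncovered(draft))  # -> [4]: the retention period is still missing
```

A draft policy that leaves any requirement uncovered, such as the missing retention statement above, can be flagged before publication.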
A. Assessing Current State of Play
To take a snapshot of the current situation, roughly four months before the GDPR deadline, we proceeded to assess the privacy policies of some UK-based websites. We carried out this assessment on the 25th January 2018.
In order to choose the UK websites to assess, we consulted Alexa to obtain the top 10 most-used websites in the UK 2 .
The first step is to be able to locate the policy easily. Langhorne [32] reported, in 2014, that many organisations did not provide a handy link to their privacy policies from the landing page. It is likely that the upcoming GDPR legislation will mandate provision of such links. All of the websites we examined did indeed include a link to their privacy policy from their main page, which was a positive development.

Secondly, we checked the extent to which the websites' privacy policies satisfied GDPR requirements. To provide a measure of understanding (GDPR6), we use the Gunning Fog Index score³. This index is an indication of the number of years of schooling someone would need to be able to understand the text. If someone needs more than a high school education to understand the policy (more than 13 years), we conclude that it fails GDPR6 in terms of understandability. Table I presents our findings. We also provide the number of words in total, as well as the number of complicated words (with 3 or more syllables), to give an idea of the effort a user would have to expend if they wanted to read and understand the entire policy. The data is depicted in Figure 2.

[Figure 1 (GDPR-Compliant Policy Template) box text: "If you sign up to use this website's services, we may retain personal information about you, such as your name, home address, email, and phone number" (GDPR1,6); "You can opt into this website using your personal Information to provide personalised services online, and personalised 3rd party advertising. Opt in here" (GDPR2,6); "Aggregated data is analysed to inform internal decision making. Personal Information is used to deliver services" (GDPR3,6); "Your personal Information will be deleted if you do not use this website for …. Order information will be retained as long as the law requires" (GDPR4,6); "If you have any questions or comments about this privacy policy, or the data collected, email [email protected]" (GDPR5,6).]
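The Gunning Fog Index itself is straightforward to compute. The following Python sketch is our own illustration, using a simplified syllable heuristic rather than the exact counter behind gunning-fog-index.com, so its scores will differ slightly from those reported in Table I:

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: vowel groups, minus a trailing silent 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and n > 1:
        n -= 1
    return max(n, 1)

def gunning_fog(text: str) -> float:
    """GFI = 0.4 * (average sentence length + percentage of complex words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / max(len(sentences), 1)
                  + 100 * len(complex_words) / max(len(words), 1))

def fails_gdpr6(text: str, threshold: float = 13.0) -> bool:
    """Flag text needing more than a high-school education (GFI > 13)."""
    return gunning_fog(text) > threshold
```

On this heuristic, short plain sentences score low, while dense polysyllabic prose quickly exceeds the 13-year threshold used in our assessment.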
Only one of these policies met the requirements of the GDPR legislation on the 28th January 2018. There is still time left for the others to revise their policies and they will probably do so, most being large companies with substantial web development resources at their disposal. Yet smaller companies would probably benefit from some guidance in this respect.
In the next section we consider what the research literature says about how to design privacy policies.
IV. USABILITY GUIDELINES
We decided to focus on browser privacy policies firstly because of the popularity of web applications [33] such as email, claimed to be the most popular application in use [34]. Video streaming [35], which runs within browsers, is also very popular. The second reason is that browsers run on all devices, ranging from Desktops to Smartphones. We thus felt that our guidelines could be most useful to developers if we focused on guidelines for browser application policy writers.
We carried out a systematic literature review in order to gather best practice from the research literature in this respect.
A. Systematic Literature Review
The literature search was carried out in January 2018 as follows:
Databases: ACM, Springer, Web of Science, Scopus, IEEE, and then Google Scholar to identify publications that did not appear in the databases.
Keywords: 'design guidelines' and 'browser' and 'privacy' and ('feedback' or 'warnings' or 'notification' or 'alert'). A separate search was conducted using the phrase 'privacy policy design'.

TABLE II. PAPERS FROM THE LITERATURE SEARCH

Scopus | 0 | 0 | 0
ACM | 3 | 2 | 1
Springer | 145 | 139 | 6
Web of Science | 0 | 0 | 0
Google Scholar | 61 | 42 | 19
IEEE | 73 | 70 | 3
Total | | | 29

We analysed the guidelines using Thematic Analysis [36]. This approach supports pinpointing, examining, and recording themes that emerge from the papers. We commenced by familiarising ourselves with the papers. We then generated initial codes and searched for themes as we collated these codes. We then reviewed the themes, defining and naming them. Finally, we assigned them to the applicable GDPR category, as detailed in Section III.
B. Results
GDPR1: Ensure that the sensitivity of the data is communicated to the user [37]. This need is confirmed by [38].
GDPR2: Some researchers advise that providing justifications for privacy policies potentially reduces the end-user's trust in the system [39], [40], [41], [42]. Volkamer et al. [43] advise that the potential consequences of a risk be conveyed to the user, along with potential recommendations. GDPR mandates that this information be provided so we should focus on fostering trust in the presence of such justifications.
GDPR3: -
GDPR4: -
GDPR5: It is important to ensure that the user can contact someone in the organisation [44], [45]. Contact details should be conspicuously placed [46].
GDPR6: In this section we first present the themes that emerged from our analysis. We then cite supporting research from other publications. The themes fell naturally into two meta categories: (1) content of the policies, and (2) delivery of the policies. We report these separately.
Content Guidelines:
The overarching admonition should be that human attention is a finite resource [47], [45] that should not be taken for granted or squandered, and privacy policies "should empower users to make informed decisions about their online behavior" [48].
(a) Modality -Murphy-Hill & Murphy [49], [50] suggest that pictures be used to ease communication because users prefer this [51]. Others have advocated visualising privacy policy statements, making them more usable [45].
On the other hand, Goldberg [44] suggests that text should be used exclusively to maximise accessibility. Anderson et al. [52], [53] suggest the use of polymorphism in warning notifications to reduce habituation.
Supporting Research: Other researchers argue for the power of a multi-modality image and text message in enhancing communication [54], [55], [56], [57].
(b) Make it Personal -Vasalou [58] says policy items should give recipients "space for interpretation", so that they can understand how it applies particularly to them [59].
The personalisation of policies should be considered [51], [60], [61], [62].
Supporting Research: Elman et al. [63] argue that personalisation, by whatever means, is extremely important in enhancing understanding. Schaub et al. [64] say privacy policy notices should be "relevant" to the person. Needham [65] also argues for the importance of personalisation. Yet policy display is somewhat different from other kinds of personalisation opportunities: people view the policy before they have divulged any information that could be used to personalise the communication. That being so, one way of personalising a generic document such as a policy, and especially of helping people to see that it applies to them, could be to use personal pronouns like "you" and "your". This should help people to consider the personal ramifications of the policy.
Another way is to provide examples that people can identify with [66], but this will take up valuable space and needs detailed investigation to assess viability.
(c) Give Control to the User -It is important for the user to retain a level of control [46], [58], [67] by allowing them to exercise control over disclosure [47]. Schaub et al. [68] distinguish between three levels of user control: (1) blocking, (2) non-blocking, and (3) decoupled. A designer has to decide whether the user has to acknowledge the policy notification (blocking) or not (non-blocking), whether they can defer their response (decoupled), or whether the option's actions will expire [49].
Users should be provided with the option to respond to a risk they have been notified about, and helped to visualise potential consequences [60], [69].
Supporting Research: Other research emphasises the need to allow people to control disclosure [38], [64]. Yet Waldman [12] reports that, of the 191 policies they surveyed in 2016, only 9 provided users with noticeable opt-out buttons. Moreover, they discovered that a little more than half of these only allowed users to opt out of marketing, but not out of profiling. GDPR mandates that users should be allowed to opt out of the latter. Yet Adjerid et al. [70] point out that merely allowing people to opt out, without carefully considering how the information about such consent is framed, does not necessarily help them to make better privacy choices.
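Schaub et al.'s three control levels, discussed under guideline (c) above, can be made concrete in code. The short Python sketch below is our own illustration, with class and method names invented for the example:

```python
from dataclasses import dataclass
from enum import Enum, auto

class ControlLevel(Enum):
    """Levels of user control over a privacy notice (after Schaub et al. [68])."""
    BLOCKING = auto()      # user must acknowledge before continuing
    NON_BLOCKING = auto()  # notice is shown, but interaction continues
    DECOUPLED = auto()     # user may defer their response until later

@dataclass
class PrivacyNotice:
    text: str
    control: ControlLevel

    def must_acknowledge_now(self) -> bool:
        # Only blocking notices interrupt the user's primary task.
        return self.control is ControlLevel.BLOCKING

    def can_defer_response(self) -> bool:
        return self.control is ControlLevel.DECOUPLED

consent = PrivacyNotice("We profile your browsing. Opt out here.",
                        ControlLevel.BLOCKING)
reminder = PrivacyNotice("Review our updated policy.",
                         ControlLevel.DECOUPLED)
```

A blocking notice suits consent that GDPR requires before processing starts; a decoupled notice suits lower-stakes updates that should not interrupt the primary task.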
(d) Trust -Trust should be deliberately built and maintained [49], [12] by framing the privacy policy very carefully [40]. Indeed, when people read privacy policies, it impacts on their trust of the website [71], so it is important to get it right.
It is crucial for people to trust a website if they are to make use of it [72]. Broutsou and Fitsilis [73] review the literature on trust and report a number of studies that show that the level of trust is positively related to the intention to carry out an online transaction.
Supporting Research: Other research suggests that users require reassurance that information is kept securely [38], [45] and recommend including a Privacy Seal [74], [30], [31], [72]. Policy writers should also provide a telephone number (not only an email address) and make other channels of communication clear [30], [31]. Finally, the policy should explain how these privacy assurances will be enforced [75], [76].
(e) Overview & Link -Lin [59] suggests highlighting the most important information. We should only present essential details about the risk [60], [61], with links to more information should they want it [43]. In providing policy-based notifications, a balance must be found between brevity and comprehensiveness [13].
Supporting Research: Researchers confirm the need to provide an overview first and then links to more information [31], [74].

(f) Maximise Understandability -This is emphasised by a number of researchers [14], [49], [50], [77], [69], [78], [12], as is the importance of consistency [10], [49].
Unclear notifications are more likely to be ignored, and consideration should be given to the exact meanings of words used [60]. Concrete explanations should be provided [79], [80] and explanations should be simple [59], avoiding acronyms and jargon with only meaningful terminology being used [13], [14], [77], [2]. Semantically distinct information should be separated [14], [58]. Text should be presented in short, simple sentences, devoid of complex grammatical structures [81], [82], [83], [84]. Longer warning notifications performed poorly in user testing [85].
Some users may have low numeracy levels, so other mechanisms for communicating risk should be sought. In choosing these, it should be borne in mind that users may have different understandings of visuals [60].
Supporting Research: Other authors confirm the importance of maximising ease of use [72], [86], [64].
In terms of understandability, it must be noted that existing work confirms that shorter notifications are most effective at communicating with users. The challenge, in providing enough information to foster understanding, while being brief, is highlighted [87].
Delivery Guidelines:
(i) Timing & Location -Many of the recommendations that fall into this category are related to the delivery of pop-up type alerts and notifications, both in terms of time and space. There is a focus on displaying these only when they merit interrupting the user's task [50], [88], avoiding irritation [43] and preventing habituation [49], [88].
Privacy policies, unlike these kinds of alerts, are either viewed when the person deliberately clicks on a link, or is forced to read the policy and consent to it. Hence time and space are less applicable in this context.
(iii) Appearance -Kelley [14] provides a number of recommendations: (1) the notification should be surrounded by a box to clearly demarcate it; (2) provide a title to assist speedy recognition. It is important to be careful with colour use so as not to disadvantage those with colour deficiencies [44]. A neutral grey colour can be used for the background of notifications, as it is unlikely to annoy the user [43].
C. Reprise
It is clear from the previous discussion that much attention has been given to guidelines to ensure that GDPR6 is satisfied. GDPR3 and GDPR4 requirements were not addressed in the literature we gathered, while GDPR1 and GDPR5 did not receive much attention. GDPR2 is an area ripe for focused attention, because many of the current guidelines conflict with the GDPR requirements.
We could simply provide the list of content-related guidelines based on the principles derived in the previous section. However, designers have difficulty benefiting from these kinds of flat lists of guidelines [89], [90]. We therefore plan to produce a template to demonstrate the impact of these guidelines. Waldman [12] discovered that a demarcated structure for policies made them more palatable to users.
V. USABLE AND GDPR-COMPLIANT PRIVACY POLICY TEMPLATE
In this section we consider how to implement the content guidelines from the literature as described in the previous section. The delivery guidelines will not be considered because they have a great deal to do with the context and nature of the website and cannot be provided in a context-neutral fashion.
In providing an example GDPR-compliant template, we formulated text to deliver the content for a fictional Company X, as advised by the GDPR requirements and content guidelines. We measured the understandability of the text using the Gunning Fog Index test.
Some of the content guidelines are relatively easy to satisfy, more or less in a binary fashion (e.g. overview & link). Guidelines (d) (trust) and (f) (understandability) require a more nuanced approach.
GDPR6(d) Trust: To address trust issues we decided to include an image to foster and inspire trust, proposing the use of a Privacy Seal for this purpose, especially since this has been widely advised [74], [30], [31], [72]. Moreover, we include icons in each subsection to demarcate them and improve accessibility.
GDPR6(f) Maximise Understandability: To maximise the understandability required by GDPR6, we simplified the text to need less than a high school education to understand, and included a small icon to bookmark different sections.
The years of compulsory schooling a person receives depends on the country they are from. For example, in the UK, children attend school from the ages of 5 to 18, but are free to leave at the age of 16, meaning they can receive between 11 and 13 years of schooling. In contrast, considering other EU countries, Swedish children start school at the age of 7 and can leave at 16, meaning they may receive only 9 years of schooling.
Research presented in this paper was conducted by an English-speaking, UK-based institution, therefore the assumption was made that people typically have between 11 and 13 years of schooling. Table III provides the GFI of the text provided to address all the GDPR requirements as understandably as possible.

An exemplar GDPR-compliant and usable privacy policy was derived from the template shown in Figure 1 and is shown in Figure 3. Company X, the company this privacy policy was tailored for, only uses its customers' information to detect global trends, and this is reflected in the middle box. This box, in particular, would reflect the purposes any particular organisation intends to use the customer's data for. The box on the right would also reflect a specific company's deletion policy; Company X only keeps data for 1 month, while others may keep it for 2 years. It is important that the actual policy is reflected here, so that the policy satisfies GDPR requirements.
VI. FUTURE WORK
The incoming GDPR legislation requires websites to obtain consent from their customers/users for any data collection to take place. This will inevitably lead to a veritable avalanche of consent requests as the GDPR deadline approaches. It is possible, as Schermer et al. [91] argue, that people will become desensitised by all these requests and will start consenting without being fully aware of what they are consenting to. Adjerid et al. [40] also argue that a myopic focus on transparency enhancement will not necessarily lead to improved and informed consent, especially when sites frame information differently. It would be very interesting to explore these apparent conundrums.
We proposed the use of a privacy seal to foster trust. A more detailed investigation is required in order to determine whether this is the most effective image to use. Some researchers found that privacy seals did enhance trust [92] but there is also evidence that users often misinterpret their message [93].
[Figure 3 (COMPANY X PRIVACY POLICY) box text, with Privacy Seal: "If you sign up to use this website's services, we may keep personal information about you. This will include your name, home address, email, and phone number."; "We would like to use your information to provide better services to you, and adverts from 3rd parties. Opt in here"; "We would like to collect all order information to help us to predict global trends. Opt in here"; "Order information is kept to meet legal requirements. Your personal Information will be deleted if you do not use this website for a year"; "Your data is stored safely and securely. If we do lose your data we will be fined by the Information Commissioner."; "If you have any questions or comments about this privacy policy, or the data collected, email [email protected] or phone +44 … View the full privacy policy here".]
VII. CONCLUSION
We publish this work to provide guidance to designers and developers who need to incorporate privacy policies into their systems. Our final template draws on the GDPR legislation and the research literature on usable design. We welcome feedback, particularly from those working in industry, to help us to refine and improve this template, to help it deliver maximum value.
Fig. 1. GDPR-Compliant Policy Template. Each section provides a link to more comprehensive information.

Fig. 3. Usable GDPR-Compliant Privacy Policy Example.
³ http://gunning-fog-index.com/

TABLE I. TOP ALEXA WEBSITES AND GDPR REQUIREMENTS. STARRED WEBSITES ARE GDPR COMPLIANT. (GFI = GUNNING FOG INDEX; • = SATISFIES; ⊗ = DOES NOT SATISFY)

Website | GDPR1 | GDPR2 | GDPR3 | GDPR4 | GDPR5 | GFI (GDPR6) | Words | 3+ Syllable Words
Google.co.uk | • | • | • | ⊗ | ⊗ | 15.21 | 2831 | 487
YouTube | • | • | • | ⊗ | ⊗ | | |
Google.com | • | • | • | ⊗ | ⊗ | | |
Facebook | • | • | • | ⊗ | ⊗ | 13.71 | 2697 | 416
Reddit | • | • | • | ⊗ | ⊗ | 13.86 | 2680 | 423
Amazon.co.uk | • | • | • | ⊗ | • | 12.21 | 3059 | 581
BBC * | • | • | • | • | • | 11.34 | 5187 | 608
Wikipedia | • | • | • | • | ⊗ | 13.74 | 445 | 91
eBay | • | • | • | ⊗ | ⊗ | 17.97 | 5260 | 994
Twitter | • | • | • | ⊗ | ⊗ | 13.51 | 3793 | 586

Fig. 2. Word Lengths and % of Complicated Words (3+ syllables)
TABLE III. TEMPLATE TEXT GUNNING FOG INDEX

Guideline | GFI | Text Used
GDPR1 | 8.457 | If you sign up to use this website's services, we may keep personal information about you. This will include your name, home address, email, and phone number
GDPR2, GDPR6(c) | 10.30 | This website will use your information to provide better services to you, and adverts from 3rd parties. Opt out here
GDPR3 | 5.822 | We would like to collect all order information to help us to predict global trends. Opt in here
GDPR4 | 9.73 | Order information is kept to meet legal requirements. Your personal Information will be deleted if you do not use this website for a month
GDPR5 | 11.40 | If you have any questions or comments about this privacy policy, or the data collected, email ...
GDPR6(d) | 11.67 | Your data is stored safely and securely. If we do lose your data we will be fined by the Information Commissioner
² https://www.alexa.com/topsites/countries/GB (Alexa uses web traffic analysis to produce lists of the most popular websites in countries worldwide)
ACKNOWLEDGEMENTS

We thank Andrew Phillips for his feedback on an earlier draft of this paper.
REFERENCES

[1] E. P. Cox III, M. S. Wogalter, S. L. Stokes, and E. J. Tipton Murff, "Do product warnings increase safe behavior? A meta-analysis," Journal of Public Policy & Marketing, pp. 195-204, 1997.
[2] M. Silic, J. Barlow, and D. Ormond, "Warning! A comprehensive model of the effects of digital information security warning messages," in The 2015 Dewald Roode Workshop on Information Systems Security Research. Newark, Delaware, USA: IFIP, 2015.
[3] S. Egelman, L. F. Cranor, and J. Hong, "You've been warned: an empirical study of the effectiveness of web browser phishing warnings," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2008, pp. 1065-1074.
[4] R. LaRose and N. J. Rifon, "Promoting i-safety: effects of privacy warnings and privacy seals on risk assessment and online privacy behavior," Journal of Consumer Affairs, vol. 41, no. 1, pp. 127-149, 2007.
[5] J. A. Obar and A. Oeldorf-Hirsch, "The biggest lie on the internet: Ignoring the privacy policies and terms of service policies of social networking services," in TPRC 44: The 44th Research Conference on Communication, Information and Internet Policy. September 30-October 1, George Mason University School of Law, 2016.
[6] J. R. Reidenberg, T. Breaux, L. F. Cranor, B. French, A. Grannis, J. T. Graves, F. Liu, A. McDonald, T. B. Norton, and R. Ramanath, "Disagreeable privacy policies: Mismatches between meaning and users' understanding," Berkeley Tech. LJ, vol. 30, p. 39, 2015.
[7] S. Kim and M. S. Wogalter, "Habituation, dishabituation, and recovery effects in visual warnings," in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 53, no. 20. Sage Publications Sage CA: Los Angeles, CA, 2009, pp. 1612-1616.
[8] B. Anderson, T. Vance, B. Kirwan, D. Eargle, and S. Howard, "Users aren't (necessarily) lazy: using neuroIS to explain habituation to security warnings," in Thirty Fifth International Conference on Information Systems, Auckland, 2014.
[9] U. Shankar and C. Karlof, "Doppelganger: Better browser privacy without the bother," in Proceedings of the 13th ACM Conference on Computer and Communications Security. ACM, 2006, pp. 154-167.
[10] L. D. A. Almeida and M. C. C. Baranauskas, "Merging technical guidelines for accessible web content with universal design principles," Instituto De Computação, Universidade Estadual de Campinas, Tech. Rep. IC-10-020, June 2010.
[11] Information Commissioner's Office, "Preparing for the General Data Protection Regulation (GDPR) - 12 Steps to Take Now," 2018, https://ico.org.uk/media/1624219/preparing-for-the-gdpr-12-steps.pdf (Accessed April 2018).
[12] A. E. Waldman, "Privacy, Notice and Design," 2016, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2780305 (Accessed April 2018).
[13] R. Balebako, J. Jung, W. Lu, L. F. Cranor, and C. Nguyen, "Little brother's watching you: Raising awareness of data leaks on smartphones," in Proceedings of the Ninth Symposium on Usable Privacy and Security. ACM, 2013, p. 12.
[14] P. G. Kelley, "Designing a privacy label: assisting consumer understanding of online privacy practices," in CHI'09 Extended Abstracts on Human Factors in Computing Systems. ACM, 2009, pp. 3347-3352.
[15] E. Albrechtsen, "A qualitative study of users' view on information security," Computers & Security, vol. 26, no. 4, pp. 276-289, 2007.
[16] M. Wogalter and C. Mayhorn, "Warning design," in Information Design: Research and Practice, A. Black, P. Luna, O. Lund, and S. Walker, Eds., 2017, ch. 20.
[17] M. S. Wogalter, "Factors Influencing the Effectiveness of Warnings," Visual Information for Everyday Use: Design and Research Perspectives, pp. 93-110, 1999.
[18] M. S. Wogalter, D. M. DeJoy, and K. R. Laughery, "Organizing theoretical framework: a consolidated communication-human information processing (C-HIP) model," Warnings and Risk Communication, pp. 15-23, 1999.
[19] C. E. Shannon, "A mathematical theory of communication," ACM SIGMOBILE Mobile Computing and Communications Review, vol. 5, no. 1, pp. 3-55, 2001.
[20] H. D. Lasswell, "The Structure and Function of Communication in Society," The Communication of Ideas, vol. 37, pp. 215-228, 1948.
[21] L. F. Cranor, "A framework for reasoning about the human in the loop," UPSEC, vol. 8, no. 2008, pp. 1-15, 2008.
[22] G. Skinner, S. Han, and E. Chang, "A framework of privacy shield in organizational information systems," in International Conference on Mobile Business (ICMB 2005). IEEE, 2005, pp. 647-650.
[23] S. Gritzalis and C. Lambrinoudakis, "Privacy in the digital world," in Encyclopedia of Internet Technologies and Applications. IGI Global, 2008, pp. 411-417.
[24] D. E. Bambauer, "Privacy versus Security," J. Crim. L. & Criminology, vol. 103, p. 667, 2013.
[25] EU Parliament, "Home Page of EU GDPR," 2018, https://www.eugdpr.org/ (Accessed April 2018).
[26] A. Cormack, "GDPR: What's your justification?" 2017, https://community.jisc.ac.uk/blogs/regulatory-developments/article/gdpr-whats-your-justification (Accessed April 2018).
[27] Intersoft Consulting, "Art. 6 GDPR Lawfulness of processing," 2016, https://gdpr-info.eu/art-6-gdpr/ (Accessed April 2018).
[28] Information Commissioner's Office, "Special Category Data," 2018, https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/lawful-basis-for-processing/special-category-data/?q=best+practice (Accessed April 2018).
GDPR Data Retention Quick Guide. Data Protection NetworkData Protection Network, "GDPR Data Retention Quick Guide," 2017, https://www.dpnetwork.org.uk/gdpr-data-retention-guide/.
Exploring efforts to engender on-line trust. P Durkan, M Durkin, J Gillen, International Journal of Entrepreneurial Behavior & Research. 93P. Durkan, M. Durkin, and J. Gillen, "Exploring efforts to engender on-line trust," International Journal of Entrepreneurial Behavior & Research, vol. 9, no. 3, pp. 93-110, 2003.
All you need is trust -an analysis of trust measures communicated by cloud providers. J Gantner, L Demetz, R Maier, OTM Confederated International Conferences. On the Move to Meaningful Internet Systems. SpringerJ. Gantner, L. Demetz, and R. Maier, "All you need is trust -an analysis of trust measures communicated by cloud providers," in OTM Confederated International Conferences. On the Move to Meaningful Internet Systems. Springer, 2015, pp. 557-574.
Web privacy policies in higher education: How are content and design used to provide notice (or a lack thereof) to users?. A L Langhorne, in International Conference on Human Aspects of Information Security, Privacy, and Trust. SpringerA. L. Langhorne, "Web privacy policies in higher education: How are content and design used to provide notice (or a lack thereof) to users?" in International Conference on Human Aspects of Information Security, Privacy, and Trust. Springer, 2014, pp. 422-432.
Single Page Web Applications. M S Mikowski, J C Powell, Manning PublicationsShelter Island, NYM. S. Mikowski and J. C. Powell, Single Page Web Applications. Shelter Island, NY: Manning Publications, 2013.
Graphical browsing of email data: An empirical investigation. S Alharbi, D Rigas, Fifth International Conference on Information Technology: New Generations. ITNG. IEEE. S. Alharbi and D. Rigas, "Graphical browsing of email data: An em- pirical investigation," in Fifth International Conference on Information Technology: New Generations. ITNG. IEEE, 2008, pp. 495-499.
Mobile broadband services and the availability of instant access to cyberspace. A Kellerman, Environment and Planning A. 4212A. Kellerman, "Mobile broadband services and the availability of instant access to cyberspace," Environment and Planning A, vol. 42, no. 12, pp. 2990-3005, 2010.
Introduction to thematic analysis. G Guest, N Macqueen, E Namey, Applied Thematic Analysis. 12G. Guest, N. MacQueen, and E. Namey, "Introduction to thematic analysis," Applied Thematic Analysis, vol. 12, 2012.
Aligning privacy and usability: Designing a privacy-aware mobile application that people can use. S Nafra, Vienna University of Economics and BusinessMaster's thesisS. Nafra, "Aligning privacy and usability: Designing a privacy-aware mobile application that people can use," Master's thesis, Vienna Univer- sity of Economics and Business, 2014.
Beyond concerna privacy-trust-behavioral intention model of electronic commerce. C Liu, J T Marchewka, J Lu, C.-S Yu, Information & Management. 422C. Liu, J. T. Marchewka, J. Lu, and C.-S. Yu, "Beyond concern - a privacy-trust-behavioral intention model of electronic commerce," Information & Management, vol. 42, no. 2, pp. 289-304, 2005.
How Privacy Policy Affects Sign-Ups -Surprising Data From 4 A/B Tests. M Aagaard, ContentVerve. com. M. Aagaard, "How Privacy Policy Affects Sign-Ups -Surprising Data From 4 A/B Tests," ContentVerve. com, 2013.
Sleights of privacy: Framing, disclosures, and the limits of transparency. I Adjerid, A Acquisti, L Brandimarte, G Loewenstein, Proceedings of the Ninth Symposium on Usable Privacy and Security. the Ninth Symposium on Usable Privacy and SecurityACM9I. Adjerid, A. Acquisti, L. Brandimarte, and G. Loewenstein, "Sleights of privacy: Framing, disclosures, and the limits of transparency," in Proceedings of the Ninth Symposium on Usable Privacy and Security. ACM, 2013, p. 9.
A user-tailored approach to privacy decision support. B P Knijnenburg, IrvineInformation and Computer Sciences, University of CaliforniaPh.D. dissertationB. P. Knijnenburg, "A user-tailored approach to privacy decision sup- port," Ph.D. dissertation, Information and Computer Sciences, University of California, Irvine, 2015.
What's wrong with online privacy policies. I Pollach, Communications of the ACM. 50I. Pollach, "What's wrong with online privacy policies?" Communica- tions of the ACM, vol. 50, no. 9, pp. 103-108, 2007.
Design and Field Evaluation of PassSec: Raising and Sustaining Web Surfer Risk Awareness. M Volkamer, K Renaud, G Canova, B Reinheimer, K Braun, Trust and Trustworthy Computing -8th International Conference. Heraklion, GreeceM. Volkamer, K. Renaud, G. Canova, B. Reinheimer, and K. Braun, "Design and Field Evaluation of PassSec: Raising and Sustaining Web Surfer Risk Awareness," in Trust and Trustworthy Computing -8th International Conference, Heraklion, Greece, August 24-26, 2015, pp. 104-122.
State of Texas Municipal Web Sites: A Description of Website Attributes and Features of Municipalities with Populations Between. J S Goldberg, 50Texas State UniversityMaster's thesis, Public AdministrationJ. S. Goldberg, "State of Texas Municipal Web Sites: A Description of Website Attributes and Features of Municipalities with Populations Between 50,000-125,000," Master's thesis, Public Administration, Texas State University, 2009.
A usability study on the privacy policy visualization model. T Albalawi, K Ghazinour, IEEE 14th Intl Conf on Dependable, Autonomic and Secure Computing. Auckland, New ZealandT. Albalawi and K. Ghazinour, "A usability study on the privacy policy visualization model," in IEEE 14th Intl Conf on Dependable, Autonomic and Secure Computing, Auckland, New Zealand, August 8-12, 2016, pp. 578-585.
HCI Guidelines PRIME (Privacy and Identity Management for Europe) EU Project Report. John Sören PettersonJohn Sören Petterson (Ed.), "HCI Guidelines PRIME (Privacy and Identity Management for Europe) EU Project Report," 2008, http://www.fidis.net/fileadmin/public/events/dg insfo/prime.dg infso. presentation.pdf (Accessed April 2018).
Privacy agents in the IoT: considerations on how to balance agent autonomy and user control in privacy decisions. J H Colnago, Universidade Federal de São CarlosPh.D. dissertationJ. H. Colnago, "Privacy agents in the IoT: considerations on how to balance agent autonomy and user control in privacy decisions," Ph.D. dissertation, Universidade Federal de São Carlos, 2016.
Web privacy policies in higher education: How are content and design used to provide notice (or a lack thereof) to users?. A L Langhorne, in Human Aspects of Information Security, Privacy, and Trust Heraklion. Crete, GreeceA. L. Langhorne, "Web privacy policies in higher education: How are content and design used to provide notice (or a lack thereof) to users?" in Human Aspects of Information Security, Privacy, and Trust Heraklion, Crete, Greece, June 22-27, 2014, pp. 422-432. [Online].
. 10.1007/978-3-319-07620-1_37Available: https://doi.org/10.1007/978-3-319-07620-1 37
Recommendation delivery. E Murphy-Hill, G C Murphy, Recommendation Systems in Software Engineering. SpringerE. Murphy-Hill and G. C. Murphy, "Recommendation delivery," in Recommendation Systems in Software Engineering. Springer, 2014, pp. 223-242.
User acceptance of mobile notifications. T Westermann, GermanyInstitute of Software Engineering and Theoretical Computer Science, Berlin Institute of Technology BerlinPh.D. dissertationT. Westermann, "User acceptance of mobile notifications," Ph.D. dis- sertation, Institute of Software Engineering and Theoretical Computer Science, Berlin Institute of Technology Berlin, Germany, 2017.
Interface design elements for antiphishing systems. Y Chen, F Zahedi, A Abbasi, Proceedings of the 6th International Conference on Service-oriented Perspectives in Design Science Research. the 6th International Conference on Service-oriented Perspectives in Design Science ResearchBerlin, HeidelbergSpringer-VerlagY. Chen, F. Zahedi, and A. Abbasi, "Interface design elements for anti- phishing systems," in Proceedings of the 6th International Conference on Service-oriented Perspectives in Design Science Research. Berlin, Heidelberg: Springer-Verlag, 2011, pp. 253-265.
How polymorphic warnings reduce habituation in the brain: Insights from an fMRI study. B B Anderson, C B Kirwan, J L Jenkins, D Eargle, S Howard, A Vance, Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. the 33rd Annual ACM Conference on Human Factors in Computing SystemsACMB. B. Anderson, C. B. Kirwan, J. L. Jenkins, D. Eargle, S. Howard, and A. Vance, "How polymorphic warnings reduce habituation in the brain: Insights from an fMRI study," in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2015, pp. 2883-2892.
From warning to wallpaper: Why the brain habituates to security warnings and what can be done about it. B B Anderson, A Vance, C B Kirwan, J L Jenkins, D Eargle, Journal of Management Information Systems. 333B. B. Anderson, A. Vance, C. B. Kirwan, J. L. Jenkins, and D. Eargle, "From warning to wallpaper: Why the brain habituates to security warnings and what can be done about it," Journal of Management Information Systems, vol. 33, no. 3, pp. 713-743, 2016.
The effect of website design dimensions on initial trust: a synthesis of the empirical literature. F P Karimov, M Brengman, L Van Hove, Journal of Electronic Commerce Research. 124272F. P. Karimov, M. Brengman, and L. Van Hove, "The effect of website design dimensions on initial trust: a synthesis of the empirical literature," Journal of Electronic Commerce Research, vol. 12, no. 4, p. 272, 2011.
What local consumers want most from local business websites. R Merchant, Accessed 31/1/18R. Merchant, "What local consumers want most from local business websites," 2017, https://www.brightlocal.com/2014/02/06/ what-local-consumers-want-most-from-local-business-websites/ April 11 (Accessed 31/1/18).
Visual persuasion: The role of images in advertising. P Messaris, London: SageP. Messaris, Visual persuasion: The role of images in advertising. London: Sage, 1997.
The role of images in framing news stories. P Messaris, L Abraham, Framing public life: Perspectives on Media and our Understanding of the Social World. New JerseyP. Messaris and L. Abraham, "The role of images in framing news stories," in Framing public life: Perspectives on Media and our Under- standing of the Social World, New Jersey, 2001, pp. 215-226.
Understanding engagement with the privacy domain through design research. A Vasalou, A.-M Oostveen, C Bowers, R Beale, Journal of the Association for Information Science and Technology. 666A. Vasalou, A.-M. Oostveen, C. Bowers, and R. Beale, "Understanding engagement with the privacy domain through design research," Journal of the Association for Information Science and Technology, vol. 66, no. 6, pp. 1263-1273, 2015.
Understanding and capturing people's mobile app privacy preferences. J Lin, Carnegie Mellon UniversityPh.D. dissertationJ. Lin, "Understanding and capturing people's mobile app privacy preferences," Ph.D. dissertation, Carnegie Mellon University, 2013.
Effective communication of cyber security risks. J R Nurse, 7th International Scientific Conference on Security and Protection of Information. SPI 2013J. R. Nurse, "Effective communication of cyber security risks," in 7th International Scientific Conference on Security and Protection of Information (SPI 2013), 2013.
Guidelines for usable cybersecurity: Past and present. J R Nurse, S Creese, M Goldsmith, K Lamberts, The 3rd International Workshop on Cyberspace Safety and Security. NSSThe 5th International Conference on Network and System SecurityJ. R. Nurse, S. Creese, M. Goldsmith, and K. Lamberts, "Guidelines for usable cybersecurity: Past and present," in The 3rd International Workshop on Cyberspace Safety and Security (CSS 2011) at The 5th International Conference on Network and System Security (NSS 2011).
. IEEE. IEEE, 2011.
You Want Me To Do What? A Design Study of Two-Factor Authentication Messages. E M Redmiles, E Liu, M L Mazurek, Thirteenth Symposium on Usable Privacy and Security. E. M. Redmiles, E. Liu, and M. L. Mazurek, "You Want Me To Do What? A Design Study of Two-Factor Authentication Messages," in Thirteenth Symposium on Usable Privacy and Security (SOUPS 2017).
Aphasia: Awareness, advocacy, and activism. R J Elman, J Ogar, S H Elman, Aphasiology. 145-6R. J. Elman, J. Ogar, and S. H. Elman, "Aphasia: Awareness, advocacy, and activism," Aphasiology, vol. 14, no. 5-6, pp. 455-459, 2000.
Designing effective privacy notices and controls. F Schaub, R Balebako, L F Cranor, IEEE Internet Computing. 213F. Schaub, R. Balebako, and L. F. Cranor, "Designing effective privacy notices and controls," IEEE Internet Computing, vol. 21, no. 3, pp. 70- 77, May 2017.
Personalising public services: Understanding the personalisation narrative. C Needham, Policy PressBristol, UKC. Needham, Personalising public services: Understanding the person- alisation narrative. Bristol, UK: Policy Press, 2011.
Personalising learning through the use of technology. C Robinson, J Sebba, Computers & Education. 543C. Robinson and J. Sebba, "Personalising learning through the use of technology," Computers & Education, vol. 54, no. 3, pp. 767-775, 2010.
A value sensitive design investigation of privacy enhancing tools in web browsers. H Xu, R E Crossler, F Bélanger, Decision Support Systems. 541H. Xu, R. E. Crossler, and F. BéLanger, "A value sensitive design investigation of privacy enhancing tools in web browsers," Decision Support Systems, vol. 54, no. 1, pp. 424-433, 2012.
A design space for effective privacy notices. F Schaub, R Balebako, A L Durity, L F Cranor, Eleventh Symposium On Usable Privacy and Security (SOUPS 2015). USENIX Association. F. Schaub, R. Balebako, A. L. Durity, and L. F. Cranor, "A design space for effective privacy notices," in Eleventh Symposium On Usable Privacy and Security (SOUPS 2015). USENIX Association, 2015, pp. 1-17.
Probing the design space of usable privacy policies: A qualitative exploration of a reimagined privacy policy. R Jones, N Sailaja, L Kerlin, Proceedings BHCI. BHCIR. Jones, N. Sailaja, and L. Kerlin, "Probing the design space of usable privacy policies: A qualitative exploration of a reimagined privacy policy," in Proceedings BHCI, 2017.
Framing and the malleability of privacy choices. I Adjerid, A Acquisti, G Loewenstein, Proceedings of the 13th Workshop on the Economics of Information Security. the 13th Workshop on the Economics of Information SecurityI. Adjerid, A. Acquisti, and G. Loewenstein, "Framing and the mal- leability of privacy choices," in Proceedings of the 13th Workshop on the Economics of Information Security, 2014.
Formal versus informal privacy contracts: Comparing the impact of privacy notices and norms on consumer trust online. K Martin, K. Martin, "Formal versus informal privacy contracts: Compar- ing the impact of privacy notices and norms on consumer trust online." 2015, https://www.law.uchicago.edu/files/file/martin formal versus informal privacy contracts.pdf (Accessed April 2018).
Understanding Consumers' Trust in Internet Financial Sales Platform: Evidence from "Yuebao. S Sun, T Wang, L Chen, M Wang, Pacific Asia Conference on Information Systems (PACIS). 199S. Sun, T. Wang, L. Chen, and M. Wang, "Understanding Consumers' Trust in Internet Financial Sales Platform: Evidence from "Yuebao"," in Pacific Asia Conference on Information Systems (PACIS), 2014, p. 199.
Online Trust in the Greek context: The influence of perceived companys reputation on consumers trust and the effects of trust on intention for online transactions. A Broutsou, P Fitsilis, the Proceedings of the Management of International Business and Economic Systems (MIBES-ESDO) 2012 International Conference. GreeceSchool of Management and Economics, TEI of LarissaA. Broutsou and P. Fitsilis, "Online Trust in the Greek context: The influence of perceived companys reputation on consumers trust and the effects of trust on intention for online transactions," in the Proceedings of the Management of International Business and Economic Systems (MIBES-ESDO) 2012 International Conference, School of Management and Economics, TEI of Larissa, Greece, 2012.
Security and human computer interfaces. J Johnston, J H Eloff, L Labuschagne, Computers & Security. 228J. Johnston, J. H. Eloff, and L. Labuschagne, "Security and human computer interfaces," Computers & Security, vol. 22, no. 8, pp. 675-684, 2003.
The effect of online privacy policy on consumer privacy concern and trust. K.-W Wu, S Y Huang, D C Yen, I Popova, Computers in Human Behavior. 283K.-W. Wu, S. Y. Huang, D. C. Yen, and I. Popova, "The effect of online privacy policy on consumer privacy concern and trust," Computers in Human Behavior, vol. 28, no. 3, pp. 889-897, 2012.
Privacy design patterns and anti-patterns. N Doty, M Gupta, Trustbusters Workshop at the Symposium on Usable Privacy and Security. N. Doty and M. Gupta, "Privacy design patterns and anti-patterns," in Trustbusters Workshop at the Symposium on Usable Privacy and Security, 2013.
Evaluating effectiveness of mobile browser security warnings. R Shah, K Patil, ICTACT Journal on Communication Technology. 73R. Shah and K. Patil, "Evaluating effectiveness of mobile browser security warnings," ICTACT Journal on Communication Technology, vol. 7, no. 3, pp. 1373-1378, 2016.
End User Comprehension of Privacy Policy Representations. S Kununka, N Mehandjiev, P Sampaio, K Vassilopoulou, International Symposium on End User Development. SpringerS. Kununka, N. Mehandjiev, P. Sampaio, and K. Vassilopoulou, "End User Comprehension of Privacy Policy Representations," in Interna- tional Symposium on End User Development. Springer, 2017, pp. 135-149.
Design guidelines for effective recommender system interfaces based on a usability criteria conceptual model: results from a college student population. A A Ozok, Q Fan, A F Norcio, Behaviour & Information Technology. 291A. A. Ozok, Q. Fan, and A. F. Norcio, "Design guidelines for effective recommender system interfaces based on a usability criteria conceptual model: results from a college student population," Behaviour & Infor- mation Technology, vol. 29, no. 1, pp. 57-83, 2010.
Mental Models of Construction Workers for Safety-Sign Representation. A W Y Ng, A H S Chan, Journal of Construction Engineering Management. 1432A. W. Y. Ng and A. H. S. Chan, "Mental Models of Construction Workers for Safety-Sign Representation," Journal of Construction En- gineering Management, vol. 143, no. 2, 2017.
Sorry, I Don't Get It: An Analysis of Warning Message Texts. M Harbach, S Fahl, P Yakovleva, M Smith, Proceedings of the 2013 International Conference on Financial Cryptography and Data Security (FC13), Workshop on Usable Security, ser. Lecture Notes in Computer Science. the 2013 International Conference on Financial Cryptography and Data Security (FC13), Workshop on Usable Security, ser. Lecture Notes in Computer ScienceM. Harbach, S. Fahl, P. Yakovleva, and M. Smith, "Sorry, I Don't Get It: An Analysis of Warning Message Texts," in Proceedings of the 2013 International Conference on Financial Cryptography and Data Security (FC13), Workshop on Usable Security, ser. Lecture Notes in Computer Science, 2013.
Towards measuring warning readability. M Harbach, S Fahl, T Muders, M Smith, http:/doi.acm.org/10.1145/2382196.2382301Proceedings of the 2012 ACM Conference on Computer and Communications Security, ser. CCS '12. the 2012 ACM Conference on Computer and Communications Security, ser. CCS '12New York, NY, USAACMM. Harbach, S. Fahl, T. Muders, and M. Smith, "Towards measuring warning readability," in Proceedings of the 2012 ACM Conference on Computer and Communications Security, ser. CCS '12. New York, NY, USA: ACM, 2012, pp. 989-991. [Online]. Available: http://doi.acm.org/10.1145/2382196.2382301
On the usability of user interfaces for secure website authentication in browsers. M Pala, Y Wang, Proceedings of the 6th European Conference on Public Key Infrastructures, Services and Applications, ser. EuroPKI'09. the 6th European Conference on Public Key Infrastructures, Services and Applications, ser. EuroPKI'09Berlin, HeidelbergSpringer-VerlagM. Pala and Y. Wang, "On the usability of user interfaces for secure website authentication in browsers," in Proceedings of the 6th European Conference on Public Key Infrastructures, Services and Applications, ser. EuroPKI'09. Berlin, Heidelberg: Springer-Verlag, 2010, pp. 239- 254.
Defending the weakest link: phishing websites detection by analysing user behaviours. X Dong, J A Clark, J L Jacob, Telecommunication Systems. 452-3X. Dong, J. A. Clark, and J. L. Jacob, "Defending the weakest link: phishing websites detection by analysing user behaviours," Telecommu- nication Systems, vol. 45, no. 2-3, pp. 215-226, 2010.
Improving computer security dialogs. C Bravo-Lillo, L F Cranor, J S Downs, S Komanduri, M Sleeper, Human-Computer Interaction -INTERACT 2011 -13th IFIP TC 13 International Conference. Lisbon, PortugalProceedings, Part IVC. Bravo-Lillo, L. F. Cranor, J. S. Downs, S. Komanduri, and M. Sleeper, "Improving computer security dialogs," in Human-Computer Interaction -INTERACT 2011 -13th IFIP TC 13 International Conference, Lisbon, Portugal, September 5-9, 2011, Proceedings, Part IV, 2011, pp. 18-35.
Trust and the perception of security. S Hertefelt, S. D'Hertefelt, "Trust and the perception of security," 2000, 3 January http://users.skynet.be/fa250900/research/report20000103shd. htm (Accessed 30/1/2018).
Improving SSL Warnings: Comprehension and Adherence. A P Felt, A Ainslie, R W Reeder, S Consolvo, S Thyagaraja, A Bettes, H Harris, J Grimes, Proceedings of the Conference on Human Factors and Computing Systems. the Conference on Human Factors and Computing SystemsA. P. Felt, A. Ainslie, R. W. Reeder, S. Consolvo, S. Thyagaraja, A. Bettes, H. Harris, and J. Grimes, "Improving SSL Warnings: Compre- hension and Adherence," in Proceedings of the Conference on Human Factors and Computing Systems, 2015.
Alice in Warningland: A Large-Scale Field Study of Browser Security Warning Effectiveness. D Akhawe, A P Felt, USENIX security symposium. 13D. Akhawe and A. P. Felt, "Alice in Warningland: A Large-Scale Field Study of Browser Security Warning Effectiveness," in USENIX security symposium, vol. 13, 2013.
Demarcating Mobile Phone Interface Design Guidelines to Expedite Selection. K Renaud, J Van Biljon, South African Computing Journal. 293K. Renaud and J. van Biljon, "Demarcating Mobile Phone Interface Design Guidelines to Expedite Selection ," South African Computing Journal, vol. 29, no. 3, 2017.
The value of consent: Discussions with designers of ubiquitous computing systems. E Luger, T Rodden, IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOM Workshops. IEEEE. Luger and T. Rodden, "The value of consent: Discussions with designers of ubiquitous computing systems," in IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOM Workshops). IEEE, 2014, pp. 388-393.
The crisis of consent: How stronger legal protection may lead to weaker consent in data protection. B W Schermer, B Custers, S Van Der Hof, Ethics and Information Technology. 162B. W. Schermer, B. Custers, and S. van der Hof, "The crisis of consent: How stronger legal protection may lead to weaker consent in data protection," Ethics and Information Technology, vol. 16, no. 2, pp. 171- 182, 2014.
Your privacy is sealed: Effects of web privacy seals on trust and personal disclosures. N J Rifon, R Larose, S Choi, Journal of Consumer Affairs. 392N. J. Rifon, R. LaRose, and S. Choi, "Your privacy is sealed: Effects of web privacy seals on trust and personal disclosures," Journal of Consumer Affairs, vol. 39, no. 2, pp. 339-362, 2005.
Your privacy is assured -of being disturbed: websites with and without privacy seals. R Larose, N Rifon, New Media & Society. 86R. LaRose and N. Rifon, "Your privacy is assured -of being disturbed: websites with and without privacy seals," New Media & Society, vol. 8, no. 6, pp. 1009-1029, 2006.
| zyda_arxiv-1399000 |
THE REGION OF TRIGGERED STAR FORMATION W40: OBSERVATIONS AND MODEL
27 Mar 2015
L E Pirogov
Institute of Applied Physics
Russian Academy of Sciences
Nizhni Novgorod, Russia
Lobachevsky State University of Nizhni Novgorod
Nizhni Novgorod, Russia
A "collect and collapse" model of triggered star formation is used to estimate the parameters of a ring-like structure consisting of a sequence of low-mass clumps in the W40 region. The model parameters are close to the observed ones if the density of the cloud into which the HII zone is expanding is fairly high (≳ 10^5 cm^{-3}) and the luminosity of the driving source exceeds the previous estimate. Probable reasons for the scatter of the observed parameters of the clumps are discussed. * Electronic address: [email protected]
INTRODUCTION
Early stages of the star formation process are far from fully understood, in spite of the increasing amount of observational data available. This is true, in particular, for regions of high-mass star formation, which are rarer and more distant than regions of low-mass star formation, evolve more quickly, and spend a considerable part of their evolution inside the parent cloud (e.g. [1]). As massive stars evolve, they affect the parent cloud through their stellar winds, massive outflows, strong UV radiation, and the expansion of their HII zones. These factors change the physical conditions and chemical composition of the cloud, and the expanding HII zones "sweep out" gas towards the periphery, causing compression and possibly triggering the formation of a new generation of stars.
The W40 region (Sharpless 64) [2,3] contains a large blister-type HII zone that lies at the edge of an extended molecular cloud (TGU 279-P7 [4]) in the Aquila Rift complex. According to recent estimates, the distance to the central sources of the HII zone is ∼ 500 pc [5]. This puts W40 among the nearest regions of high-mass star formation. The difference between the CO and hydrogen recombination line velocities indicates that the HII zone lies at the front edge of the cloud, closer to the observer [6]. The HII region expands into the molecular cloud to the west. Molecular gas is observed at two different velocities, which probably correspond to gas located behind and in front of the HII zone [7]. The W40 region has been studied at various wavelengths from the radio to the X-ray (see the review [8] and [5, 9-13]).
Continuum observations of dust emission [11,12] have shown that the dust to the west of the HII zone is concentrated in clumps forming a ring. The morphology of the ionized gas [12,13] shows that there is another, compact HII zone near the main one. The expansion of this zone could lead to the formation of the ring-like structure via the "collect and collapse" mechanism.
This model was proposed in [14] to describe the process of triggered high-mass star formation.
Although there are more than a few examples where this mechanism could be at work (e.g. [15][16][17][18][19]), there have been few quantitative comparisons between observational data and the model. This may be related to the fact that the model is fairly simple compared to real objects with inhomogeneous density structure. In the W40 region, which is located much closer than most of the HII zones where the collect and collapse mechanism could take place, the observed ring-like structure is more distinct and compact and is much closer to the probable driving source of the HII region. We present here a quantitative comparison of the physical parameters derived from observations with the model predictions, in order to determine whether the collect and collapse model can predict the parameters of the structure found in W40.
THE RING OF DUST CLUMPS
A 1.2 mm dust continuum emission map of W40 taken from [12] is given in the figure.
The dust is concentrated in a chain of clumps forming a ring. The data of [11] show that the western branch of the ring extends to the northeast, making the ring-like structure more distinct. Some of the clumps are associated with Class 0 and Class I sources [11], indicating that low-mass star formation has started in the ring. The near-IR sources [5] and compact 3.6 cm VLA radio sources [9] are also shown in the figure (left panel).
Most of the radio and IR sources are grouped near the driving source of the main HII zone (IRS 1A South), which is located to the east of the ring and appears to be a massive O9.5 star [5,13]. The IRS 5 infrared source, which is a B1 star and the probable source of the neighboring compact HII zone [12,13], is also marked in the figure. Since IRS 5 is located within the region bounded by the ring, but is shifted relative to its geometrical center, it is probable that the ring formed due to the expansion of the compact zone around this source.
The ring is apparently not oriented face-on, and its south-eastern part is closer to the observer [12].
The formation of the ring-like structure from a sequence of fragments (clumps) suggests the action of the collect and collapse mechanism of triggered star formation [14]. This mechanism is treated analytically in [20], and the results of that paper have been confirmed by model calculations [21]. According to the collect and collapse mechanism, when an HII region expands into a molecular cloud, a layer of enhanced density forms between the shock front and the ionization front. Due to instability, the layer splits into uniformly spaced clumps, which can become the sites of formation of the next generation of stars. Such ring-like structures, usually consisting of massive fragments (M ∼ 10−100 M⊙), are observed at the edges of some HII zones (e.g. [15,16,18]). However, in the case of W40, the masses of the clumps in the ring (0.4−8 M⊙ [12]) are considerably lower than is usually observed in such structures.
ANALYTICAL MODEL
The evolution of the layer formed while an HII zone expands into a medium consisting of neutral gas of uniform density is considered in [20]. Analytical expressions have been derived for the time at which fragmentation in the layer occurs (t_frag), the layer radius (R_frag) and column density of the layer (N_frag) at this time, the mean mass of the fragments (M_frag), and the initial separation between fragments (2r_frag). They depend on the sound speed in the medium (a), on the density of the atomic gas (n), and on the luminosity of the central source of ionizing radiation (L). The expressions of [20] are given below, where a_0.2 = a/(0.2 km/s), L_49 = L/(10^49 photon/s), n_3 = n/(10^3 cm^−3):
t_frag ∼ 1.6 Myr a_0.2^(7/11) L_49^(−1/11) n_3^(−5/11),   (1)
R_frag ∼ 5.8 pc a_0.2^(4/11) L_49^(1/11) n_3^(−6/11),   (2)
N_frag ∼ 6 × 10^21 cm^−2 a_0.2^(4/11) L_49^(1/11) n_3^(5/11),   (3)
M_frag ∼ 23 M_⊙ a_0.2^(40/11) L_49^(−1/11) n_3^(−5/11),   (4)
2r_frag ∼ 0.8 pc a_0.2^(18/11) L_49^(−1/11) n_3^(−5/11).   (5)
Note that, when the gas is in molecular form, the atomic gas density is twice the molecular gas density [21]. We neglected this factor in our approximate estimates derived from (1)-(5).
It is clear from the above equations that, when the HII zone made by a star emitting 10^49 photon/s (a main-sequence O6 star [22]) expands into a cold cloud with moderate density (T_kin ∼ 10 − 15 K, n ∼ 10^3 cm^−3), a layer of fragments with masses ∼ 25 − 40 M_⊙ forms at a distance of ∼ 6 pc after ∼ 1.6 Myr. The initial separation between the centers of the fragments is ∼ 1 pc. Such fragments could become sites of formation of a new generation of stars, including massive ones. If an HII zone expands into a medium of higher density, the layer of enhanced density will form on shorter timescales and closer to the star, and will split into fragments with smaller sizes and masses.
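The estimates above can be reproduced directly from the scalings (1)-(5); the following Python sketch is our own illustration, with a_0.2 = 1 and a_0.2 ≈ 1.15 bracketing the quoted temperature range T_kin ∼ 10−15 K:

```python
# Evaluate the collect-and-collapse scalings (1)-(5) of Whitworth et al. [20].
# Inputs are dimensionless: a02 = a/(0.2 km/s), l49 = L/(1e49 photon/s),
# n3 = n/(1e3 cm^-3).

def whitworth_fragmentation(a02, l49, n3):
    """Return (t_frag [Myr], R_frag [pc], N_frag [1e21 cm^-2],
    M_frag [Msun], 2r_frag [pc]) from equations (1)-(5)."""
    t_frag = 1.6 * a02**(7/11) * l49**(-1/11) * n3**(-5/11)
    r_frag = 5.8 * a02**(4/11) * l49**(1/11) * n3**(-6/11)
    n_col  = 6.0 * a02**(4/11) * l49**(1/11) * n3**(5/11)
    m_frag = 23.0 * a02**(40/11) * l49**(-1/11) * n3**(-5/11)
    sep    = 0.8 * a02**(18/11) * l49**(-1/11) * n3**(-5/11)
    return t_frag, r_frag, n_col, m_frag, sep

# O6 star (L = 1e49 photon/s) in a cloud with n = 1e3 cm^-3:
for a02 in (1.0, 1.15):
    t, r, nc, m, s = whitworth_fragmentation(a02, 1.0, 1.0)
    print(f"a02={a02}: t={t:.1f} Myr, R={r:.1f} pc, M={m:.0f} Msun, 2r={s:.1f} pc")
```

With these parameters M_frag runs from about 23 to 38 M_⊙, in line with the quoted ∼ 25 − 40 M_⊙.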
The luminosity of IRS 5, the source that has probably formed the ring-like structure in W40, is ∼ 2.4 × 10^45 photon/s, which corresponds to a B1V star [13]; IRS 5 is classified as a B1 star in [5]. However, this luminosity could be appreciably underestimated, as could the luminosity of the main HII zone (see the discussion in [13]). We used two luminosities in the calculations: one close to the estimate of [13] and the second an order of magnitude higher.
The results of comparing the model calculations data with the observed parameters are given in the following section.
COMPARISON OF THE MODEL AND THE OBSERVED CLUMP PARAMETERS
The dense core of the molecular cloud near the HII zone in W40 has a structure consisting of several components, including the clumpy ring and gas that has not condensed into clumps [12]. The western clumps 6-9 are associated with the N2H+ and NH3 molecular line emission, and the ratios of the HCN(1-0) hyperfine components imply moderate optical depth [12]. The HCN(1-0) emission is apparently associated with regions of high density and not with enhanced column density of the gas. The spatial distributions of the different CO isotopic line intensities [23] correlate weakly with the ring structure and may trace a more diffuse envelope around the ring. The clumpy ring probably formed due to the expansion of the HII zone associated with IRS 5 into the parent dense cloud. In this case the gas emitting in the HCO+ and HCN lines may represent material that has not experienced the expansion of the HII zone, since it is more distant from the source.
The density of the surrounding medium for which the model estimates are more or less close to the observed parameters is given for each luminosity.
The kinetic temperature value was taken to be close to kinetic and/or dust temperature estimates for the western clumps [11,12]. The eastern clumps 1-3 associated with Class I sources [11] have higher dust temperatures, and are influenced by the ionization front of the main HII zone, which probably changes their physical characteristics. The ranges of physical parameters obtained from the observations [12] are given in the table for comparison. The parameters of the eastern clumps 1-3 are excluded from consideration.
The size of the HII zone around IRS 5 has been approximately estimated in [13] as the diameter of the ring of Class 0/I sources, ∼ 0.4 pc. However, this ring is probably not oriented face-on, with its eastern part closer to the observer and with different distances from the probable central source IRS 5 to the boundary of the ring in different directions (see Figure). The maximum distance between IRS 5 and the inner boundary of the ring in the southern direction is ∼ 0.4 pc [12], which is twice the radius estimated in [13]. The average of these two radius estimates, 0.3 pc, is given in the table as the observed radius of the ring, R_frag.
For comparison with the time at which the gas in the layer becomes gravitationally unstable and splits into fragments (t_frag), the corresponding dynamical age of the HII zone (t_dyn) is given in the table. This was calculated from the standard expression for the time dependence of the radius of an HII zone [24] for a given luminosity and density:
R(t) = R_s (1 + (7 c_II t)/(4 R_s))^(4/7),   (6)
where R_s is the Strömgren radius, which depends on the source luminosity and the density of the medium (e.g. [13]), and c_II is the sound speed of the ionized gas, taken to be 11 km/s [13]. The condition t_dyn ≳ t_frag is satisfied in the calculations.
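The dynamical age can be sketched numerically by inverting (6); the Strömgren radius formula with a case-B recombination coefficient α_B ≈ 2.6 × 10^−13 cm^3 s^−1 and the evaluation radius 0.3 pc are our own assumptions, not values stated in the text:

```python
# Dynamical age of an HII region from the Spitzer expansion law (6):
#   t = (4 R_s / 7 c_II) * ((R/R_s)^(7/4) - 1),
# with the Stromgren radius R_s = (3 Q / (4 pi n^2 alpha_B))^(1/3).
import math

PC = 3.086e18       # cm per parsec
MYR = 3.156e13      # s per Myr
ALPHA_B = 2.6e-13   # cm^3 s^-1, assumed case-B recombination coefficient

def t_dyn(Q, n, R_pc, c_ii=11e5):
    """Time (Myr) for an HII region driven by Q photon/s in a medium of
    density n cm^-3 to expand to radius R_pc (pc); c_ii in cm/s."""
    r_s = (3.0 * Q / (4.0 * math.pi * n**2 * ALPHA_B)) ** (1.0 / 3.0)  # cm
    r = R_pc * PC
    return (4.0 * r_s / (7.0 * c_ii)) * ((r / r_s) ** (7.0 / 4.0) - 1.0) / MYR

# The two model cases of the table, evaluated at the observed ring radius 0.3 pc:
print(t_dyn(3e45, 1.0e5, 0.3))  # ~0.63 Myr
print(t_dyn(3e46, 1.5e5, 0.3))  # ~0.43 Myr
```

Under these assumptions the results are close to the t_dyn entries of the table (0.65 and 0.45 Myr), suggesting that the tabulated ages correspond to the time needed to reach the observed radius.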
The ranges of the peak column densities and masses calculated from the dust continuum observations are given in the table as the observed values of N_frag and M_frag. The standard gas-to-dust mass ratio of 100 [12] was adopted. Variations of these values are most likely related to density variations in the gas into which the HII zone is expanding. Variations of the gas-to-dust mass ratio in the clumps (Section 5) could also be a possible reason.
DISCUSSION
The structure of the W40 dense core probably formed due to the influence on the neutral material of two HII zones whose main driving sources are IRS 1A South and IRS 5 (Figure, left panel). There is a cluster of sources near IRS 1A South; two of these sources (IRS 2B, IRS 3A) are massive B4 and B3 stars, respectively [5]. The main HII region is bounded by dense molecular gas to the west and is more extended in the eastern direction [13]. Its emission overlaps with the emission of several compact HII zones (Figure, left panel; Fig. 10 from [13]). The HII zone around IRS 5 is located to the northwest of the main zone, has a more or less circular morphology, and is probably a distinct region. The morphology of the ionized gas shows that, even taking into account possible projection effects, it is not possible to explain the formation of the ring by the influence of the main HII zone, since its main driving source, IRS 1A, is located to the east of the ring. Alternative mechanisms for triggered star formation, such as radiation-driven implosion, are unlikely to be able to explain the formation of the observed ring-like structure with regularly spaced clumps.
As model calculations show (e.g. [25]) radiation-driven implosion can form globules of elongated (cometary or pillar-like) form that are associated with already existing inhomogeneities in the medium. Star formation occurs at their front edges (facing the star). Moreover, this mechanism predicts the existence of velocity gradients inside the globules [26], which are not observed in the clumps forming the ring-like structure in W40. Nevertheless, radiation-driven implosion could take place at the outer boundary of the eastern branch of the ring, where the morphologies of the ionized gas (according to observations with high spatial resolution), dust and dense molecular gas are similar. The line-of-sight velocities of the gas associated with the eastern clumps differ from the velocities of the remaining gas in the region, and this gas is contracting [12]. Therefore, the collect and collapse mechanism seems to be the most probable reason for the formation of the ring if the luminosity of IRS 5 is higher than the estimate of [13] and the medium where ionization front propagates has a high density.
The observed scatter of the clump masses, column densities, and separation could be associated with density variations in the medium around IRS 5. The scatter in the clump separations could be connected with both density variations and projection effects.
Note that the clump masses and column densities were calculated in [12] using dust continuum observations, with a gas-to-dust mass ratio taken to be 100. It cannot be ruled out that this ratio varies in the dense gas near an HII zone. For example, the HCN, HCO+, CS and 13CO molecular emission is detected inside the region bounded by the ring, where the dust emission is considerably reduced [11,12,23]. It is shown in [27] that spatial fluctuations of the dust density in turbulent clouds can occur independently of gas-density fluctuations on scales comparable with the sizes of protostellar cores.
It is possible that the level of turbulence of the medium increases as the shock propagates, leading to spatial fluctuations in the gas-to-dust abundance ratio. The observations show that the widths of the N2H+, NH3 and H13CO+ lines associated with the western clumps are half the widths of the CS, HCN and HCO+ lines [12]. In addition to optical depth effects, this could be related to different levels of turbulence in the gas that emits effectively in these lines. Thus, it is possible that the gas with a higher level of turbulence emitting in the CS, HCN and HCO+ lines, which is spatially uncorrelated with the clumps, could have a lower dust abundance. The hypothesis that there may be a relation between turbulence and spatial fluctuations of the dust abundance close to HII zones needs to be studied further, using both independent observational estimates of gas and dust abundances and model calculations.
CONCLUSION
We have carried out a comparison of the observed parameters of a ring-like structure consisting of low-mass clumps in W40 with estimates produced using the collect and collapse model [14,20]. The physical parameters of the clumps were taken from [12]. This comparison shows that parameters such as the radius of the ring, the masses and gas column densities of the clumps, and the separation between clumps are close to the model estimates if the density of the cloud into which the HII zone expands is fairly high (≳ 10^5 cm^−3) and the luminosity of the source driving the HII zone exceeds the previous estimate [13]. Thus, W40 can be considered an example of the realization of the collect and collapse mechanism at a relatively small distance from the source in a high-density medium. The observed scatter of the physical parameters of the clumps could be associated with density inhomogeneities in the medium where the ionization front propagates, and also with projection effects and turbulence of the medium that results in fluctuations of the gas-to-dust mass ratio.
Near-IR sources are shown as small squares [5]. Compact VLA radio sources [9] are shown as larger circles in the left panel. Class 0 and Class I sources [11] are marked by smaller circles and triangles, respectively (left panel). The driving source of the main HII zone, IRS 1A South, and the source of the separate HII zone, IRS 5 [5], are shown as stars.
The western clumps are associated with the N2H+ and NH3 molecular line emission regions, while the CS emission is enhanced towards the eastern clumps 1-3 (Figure, right panel). The mean densities in the clumps derived from the continuum observations are ∼ 10^5 − 10^6 cm^−3, while the densities calculated from the molecular line data are higher. The gas densities of the extended regions that do not correlate with the ring and are observed mainly in the HCO+(1-0) and HCN(1-0) lines (Figure, right panel) should also be fairly high (the critical densities for the excitation of these transitions are ∼ 10^5 − 10^6 cm^−3).
The observed separations between fragments (2r_frag) lie in the range indicated in the table, due to probable density variations and projection effects. It is possible that the separations between the western clumps 7, 8 and 9 depend on these factors to a lesser extent. The observed clump sizes (0.02-0.11 pc) and masses are typical for low-mass star forming regions, but there are several indications that the formation of stars with masses higher than a solar mass is occurring in the eastern part of the ring [12]. A comparison between the observed parameters of the clumpy ring and the two sets of model parameters (see the table) shows that the model with the higher luminosity fits the observations better. According to the model estimates, in both cases the density of the medium into which the HII zone expands considerably exceeds the value usually adopted as a standard for neutral gas surrounding HII zones (∼ 10^3 cm^−3). Better agreement between the model estimates of the masses and the separations of the fragments can be achieved if a lower temperature is adopted for the medium (∼ 10 K); the density of the medium is not appreciably changed in this case. The small excess of the observed radius and column density of the layer over the model estimates (see the table) could be connected with the evolution of these parameters during the time that has passed since the onset of fragmentation in the layer.
Figure 1. Maps of dust, ionized and molecular gas emission in the W40 region according to the data from [12]. The coordinates of the central position (the 13CO emission peak [23]) are: R.A.(J2000) = 18h 31m 15.75s, Dec.(J2000) = −02° 06′ 49.3″. The clumpy ring is shown in greyscale. Distinct clumps are denoted by numbers (left panel). The spatial distribution of the ionized gas is shown as contours in the left panel. The HCN(1-0) (dashed contours), NH3(1,1) (dark contours) and CS(5-4) (light contours) molecular line maps are shown in the right panel. Near-IR sources are shown as small squares [5].
The table presents parameters calculated from (1)-(5) for a_0.2 ≈ 1.15 (T_kin = 15 K, m = 2.33 m_H) and for two luminosities, one close to the estimate of [13] and the other an order of magnitude higher.
Table 1. Observed and model parameters of the clumps

Parameter             | Observed value [12] | Model, L = 3·10^45 photon/s | Model, L = 3·10^46 photon/s
n (cm^−3)             |                     | 10^5                        | 1.5·10^5
t_dyn (Myr)           |                     | 0.65                        | 0.45
t_frag (Myr)          |                     | 0.44                        | 0.3
R_frag (pc)           | ∼ 0.3               | 0.24                        | 0.23
N_frag (10^22 cm^−2)  | 4−11                | 2.4                         | 3.6
M_frag (M_⊙)          | 2−6                 | 9.8                         | 6.6
2r_frag (pc)          | ∼ 0.1−0.2           | 0.27                        | 0.18
ACKNOWLEDGMENTS

We thank D. Z. Wiebe for his question concerning quantitative estimates of the parameters of the collect and collapse model for W40, which inspired this study. We also thank the referee for valuable comments and questions that led to a recalculation of the model parameters and a significant makeover of the paper. This work was partially supported by
[1] H. Zinnecker and H. W. Yorke, Ann. Rev. Astron. and Astrophys. 45, 481 (2007)
[2] S. Sharpless, Astrophys. J. Suppl. 4, 257 (1959)
[3] G. Westerhout, Bull. Astron. Inst. Netherlands 14, 215 (1958)
[4] K. Dobashi, H. Uehara, R. Kandori, T. Sakurai, M. Kaiden, T. Umemoto, F. Sato, Publ. Astron. Soc. Jap. 57, S1 (2005)
[5] R. Y. Shuping, W. D. Vacca, M. Kassis and K. C. Yu, Astron. J. 144, 116 (2012)
[6] M. Zeilik and C. J. Lada, Astrophys. J. 222, 896 (1978)
[7] J. P. Vallée, Astron. and Astrophys. 178, 237 (1987)
[8] S. A. Rodney and B. Reipurth, Handbook of Star Forming Regions II, 683 (2008)
[9] L. F. Rodríguez, S. A. Rodney, B. Reipurth, Astron. J. 140, 968 (2010)
[10] M. A. Kuhn, K. V. Getman, E. D. Feigelson, B. Reipurth, S. A. Rodney, G. P. Garmire, Astrophys. J. 725, 2485 (2010)
[11] A. J. Maury, P. André, A. Men'shchikov, V. Könyves, S. Bontemps, Astron. and Astrophys. 535, A77 (2011)
[12] L. Pirogov, D. K. Ojha, M. Thomasson, Y.-F. Wu, I. Zinchenko, Mon. Not. RAS 436, 3186 (2013)
[13] K. K. Mallick, M. S. N. Kumar, D. K. Ojha, R. Bachiller, M. R. Samal, L. Pirogov, Astrophys. J. 779, 113 (2013)
[14] B. G. Elmegreen and C. J. Lada, Astrophys. J. 214, 725 (1977)
[15] L. Deharveng, B. Lefloch, A. Zavagno, J. Caplan, A. P. Whitworth, D. Nadeau, S. Martín, Astron. and Astrophys. 408, L25 (2003)
[16] L. Deharveng, B. Lefloch, S. Kurtz, D. Nadeau, M. Pomarès, J. Caplan, A. Zavagno, Astron. and Astrophys. 482, 585 (2008)
[17] L. Deharveng, F. Schuller, L. D. Anderson, A. Zavagno, F. Wyrowski, K. M. Menten, L. Bronfman, L. Testi, C. M. Walmsley, M. Wienen, Astron. and Astrophys. 523, A6 (2010)
[18] H. Ohlendorf, T. Preibisch, B. Gaczkowski, T. Ratzka, J. Ngoumou, V. Roccatagliata, R. Grellmann, Astron. and Astrophys. 552, A14 (2013)
[19] M. R. Samal, A. Zavagno, L. Deharveng, S. Molinari, D. K. Ojha, D. Paradis, J. Tigél, A. K. Pandey, D. Russeil, Astron. and Astrophys. 566, A122 (2014)
[20] A. P. Whitworth, A. S. Bhattal, S. J. Chapman, M. J. Disney, J. A. Turner, Mon. Not. RAS 268, 291 (1994)
[21] J. E. Dale, I. A. Bonnell, A. P. Whitworth, Mon. Not. RAS 375, 1291 (2007)
[22] N. Panagia, Astron. J. 78, 929 (1973)
[23] L. Zhu, Y.-F. Wu and Y. Wei, Chin. J. Astron. and Astrophys. 6, 61 (2006)
[24] L. Spitzer, Physical Processes in the Interstellar Medium (Wiley, New York, 1978; Mir, Moscow, 1981)
[25] T. G. Bisbas, R. Wünsch, A. P. Whitworth, D. A. Hubber, S. Walch, Astrophys. J. 736, 142 (2011)
[26] B. Lefloch, B. Lazareff, Astron. and Astrophys. 289, 559 (1994)
[27] P. F. Hopkins, Astrophys. J. 797, 59 (2014)
On the Connection between Residual Distribution Schemes and Flux Reconstruction
June 3, 2021
R. Abgrall
Institute of Mathematics
University of Zurich
Switzerland
E. Le Mélédo
Institute of Mathematics
University of Zurich
Switzerland
P. Öffner
Institute of Mathematics
University of Zurich
Switzerland
In this short paper, we consider the connection between the Residual Distribution (RD) schemes and the Flux Reconstruction (FR) approach. We demonstrate that flux reconstruction can be recast into the RD framework and vice versa. Because of this close connection, we are able to apply known results from RD schemes to FR methods. In this context we propose a first demonstration of entropy stability for the FR schemes under consideration and show how to construct entropy stable numerical schemes based on our FR methods. At the same time, we do not restrict the mesh to tensor-product structures or triangular elements, but rather allow general polygons. The key to our analysis is a proper choice of the correction functions, for which we present an approach here.
Introduction
We are interested in the approximation of mainly non-linear hyperbolic problems like the Euler equations or the MHD equations. The construction of high order methods for these problems is widely studied in current research, with the aim of finding good ways to build preferable schemes. All of these methods have in common that they are based on either a finite difference (FD) or a finite element (FE) approach. In recent years, great efforts have been made to transform numerical schemes from one framework to another and to use techniques which originate in a different framework. The summation-by-parts (SBP) operators are a good example to mention here. SBP operators originate in the FD framework [19] and lead to an ansatz to prove stability in a way similar to the continuous analysis (see [31,12,15] and the references therein). In [13] the author transfers the technique to a Discontinuous Galerkin (DG) spectral element method (DGSEM) using the nodes of Lobatto-Legendre quadrature, and in [21,22,23,24,25,26] SBP operators are applied to the Correction Procedure via Reconstruction (CPR) or Flux Reconstruction (FR) method to extend stability proofs in a more general framework. In this paper, we also deal with the reinterpretation or transformation of two classes of numerical methods, namely the Residual Distribution (RD) schemes and the FR methods. Both lead to general frameworks which contain several numerical schemes. FR creates a unifying framework for several high-order methods such as DG, spectral difference (SD) and spectral volume (SV) methods; these connections are already pointed out in [17] and in the references therein. Since the early work of Roe, Deconinck and Struijs [29,30], the RD schemes have been further developed in a series of papers, e.g. [1,7,11,28]. A connection between RD and DG is explained in [8] and, because of the close relation of both RD and FR to DG, it seems natural to study the link between RD and FR.
Besides accuracy and robustness, another desirable property of numerical schemes is entropy stability. Recently, efforts have been made to construct numerical methods enjoying entropy stability. So far, mainly linear stability has been investigated in the context of FR, see e.g. [9,10,18,36,37,34,38]. By embedding FR into the RD framework we are able to follow the steps of [3] and construct FR schemes that are also entropy conservative. The key is a proper choice of the correction functions, which is discussed in this paper. Another beautiful consequence of our abstract approach is that we do not need to restrict our mesh in two dimensions to triangles or tensor-product structures: our approach is valid for general polygons, extending the current results. The paper is organized as follows. In the next section we briefly recall the main idea of RD schemes and FR schemes and introduce the notation that will be used later in this work. After that we explain the flux reconstruction approach and formulate the schemes in the RD context; here the definition of the correction functions in FR is essential, and two conditions guaranteeing conservation are derived. Furthermore, in section 4, we transfer theoretical convergence and stability results from RD to our FR schemes, and simultaneously make some preparations for discussing entropy stability. In section 5 we follow the steps of [3] and construct an entropy conservative/stable numerical scheme based on our FR approach. By bringing our investigations together we are able to derive explicit conditions on the correction functions so that the resulting FR methods are naturally entropy conservative. We then summarize everything and conclude on the admissibility criteria for the correction functions. In the appendix we further extend the investigation of entropy stability and correction functions.
Entropy stability is usually associated with the condition of Tadmor on the numerical flux. Following the study of [3,4,5], where similar conditions for RD schemes are proposed, and making use of the link between RD and FR, we are able to derive conditions on FR schemes that guarantee entropy stability. It is shown that the correction functions have to fulfill an inequality which is derived from an entropy inequality. However, while this rather theoretical condition can be checked, it does not yet lead to a construction method for entropy conservative/stable schemes. Nevertheless, as it may be interesting for future research, we detail it in the appendix for the sake of completeness.
Residual Distribution Schemes and Flux Reconstruction
Residual Distribution Schemes -Basic Formulation
We follow the ideas and notations for the RD methods of [2,3,4,5,6]. As already described in [4], for example, the Discontinuous Galerkin method (DG) can be interpreted as an RD scheme. Thus, by the close connection between DG and FR (see [16]), such an interpretation is also possible for FR schemes.
In this paper we are considering the steady state problem
div f(u) = Σ_{j=1}^{d} ∂f_j/∂x_j (u) = 0 for x ∈ Ω ⊂ R^d,   (1)
together with the boundary condition
(∇_u f(u) · n(x))^− (u − u_b) = 0 on ∂Ω,   (2)
where n(x) is the outward normal vector at x ∈ ∂Ω and u_b is a regular enough function. An extension of (1) to unsteady problems is straightforward by following the steps of [6]. The flux function is given by f_j = (f_{1,j}, …, f_{p,j})^T, and u = (u_1, …, u_p)^T ∈ D ⊂ R^p is the vector of conserved variables.
Later on, we will focus on the entropy U, which fulfils the condition
∇_u U · ∇_u f_j = ∇_u g_j, ∀ j = 1, …, d,   (3)
where g = (g_1, …, g_d), g_j ∈ C^1(Ω), is called the entropy flux function. If u is smooth, then the additional conservation relation
div g(u) = Σ_{j=1}^{d} ∂g_j/∂x_j (u) = 0   (4)
holds. If u is a weak entropy solution of (1)-(2), then div g(u) ≤ 0 holds in the sense of distributions. We split Ω into a partition of elements K and approximate our solution in each element by a polynomial of degree k. The numerical solution is denoted u^h. Therefore, the numerical solution lies in the space
V^h = ⊕_K { u^h ∈ L^2(K)^d : u^h|_K ∈ P^k(K) }.
In addition, we take {φ_σ}_{σ∈K} as a set of basis functions for the space P^k(K), where K is a set of degrees of freedom (linear forms acting on the set P^k(K)) which will be used in every element to express u^h. Considering all the elements covering Ω, the set of degrees of freedom is denoted by S and a generic degree of freedom by σ. Furthermore, for any K we have
Σ_{σ∈K} φ_σ(x) = 1 for all x ∈ K.
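On each element, such a basis {φ_σ} can be realized concretely as Lagrange polynomials; the following sketch builds them for k = 2 on the reference triangle by inverting a monomial Vandermonde matrix (the point set is an illustrative choice of ours, not one mandated by the scheme):

```python
# Lagrange basis on the reference triangle {r,s >= 0, r+s <= 1} for k = 2
# (six degrees of freedom), via the inverse of a monomial Vandermonde matrix.
import numpy as np

k = 2
pts = np.array([[0, 0], [1, 0], [0, 1],
                [0.5, 0], [0.5, 0.5], [0, 0.5]], dtype=float)  # P2 nodes

def monomials(r, s):
    # the 6 monomials r^i s^j with i + j <= k
    return np.array([r**i * s**j for i in range(k + 1) for j in range(k + 1 - i)])

V = np.array([monomials(r, s) for r, s in pts])  # Vandermonde matrix, 6 x 6
C = np.linalg.inv(V)                             # column i: coefficients of phi_i

def phi(i, r, s):
    return monomials(r, s) @ C[:, i]

# cardinal property phi_i(pts_j) = delta_ij ...
err = max(abs(phi(i, *pts[j]) - (1.0 if i == j else 0.0))
          for i in range(6) for j in range(6))
# ... and the partition of unity, sum_i phi_i(x) = 1, at an arbitrary point:
pu = abs(sum(phi(i, 0.3, 0.2) for i in range(6)) - 1.0)
print(err, pu)   # both ≈ 0
```

The partition of unity holds automatically because the constant function 1 lies in P^k(K) and is interpolated exactly by the basis.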
The key of the RD schemes is to define residuals Φ_σ^K(u^h) on every element K, satisfying element-wise the following conservation relation:
Σ_{σ∈K} Φ_σ^K(u^h) = ∮_{∂K} f̂(u^h, u^{h,−}) dγ,   (5)
where u^{h,−} is the approximate solution on the other side of the local edge/face of K, f̂ is a consistent numerical flux, i.e. f̂(u, u) = f(u) · n, and the integral over ∂K is evaluated by a numerical quadrature rule. Simultaneously, we have to consider the residuals on the boundary elements Γ. For any degree of freedom belonging to the boundary Γ, we assume that Φ_σ^Γ(u^h) fulfils the conservation relation
Σ_{σ∈Γ} Φ_σ^Γ(u^h) = ∫_Γ ( f̂(u^h, u_b) − f(u^h) · n ) dγ.   (6)
The discretisation of (1)-(2) is given by the following formula: for any σ ∈ S,
Σ_{K⊂Ω, σ∈K} Φ_σ^K(u^h) + Σ_{Γ⊂∂Ω, σ∈Γ} Φ_σ^Γ(u^h) = 0.   (7)
We are able to embed the discretisation (7) into several numerical methods, like finite element or DG methods, depending on the solution space V^h and the definition of the residuals; see [4] for details. Here, we recall it briefly for the DG scheme before extending it to the FR methods. A weak formulation of DG reads: find u^h ∈ V^h such that for any v^h ∈ V^h,
a(u^h, v^h) := Σ_{K⊂Ω} ( −∫_K ∇v^h · f(u^h) dx + ∮_{∂K} v^h · f̂(u^h, u^{h,−}) dγ ) + Σ_{Γ⊂∂Ω} ∫_Γ v^h · ( f̂(u^h, u_b) − f(u^h) · n ) dγ = 0.   (8)
Here we have defined u^{h,−} = u^h on the boundary faces and used the fact that the expression ∇v^h · f(u^h) means
∇v^h · f(u^h) = Σ_{j=1}^{d} ∂v^h/∂x_j · f_j(u^h).   (9)
The strong version of DG is obtained by applying integration by parts (summation by parts in the discrete sense) another time. We obtain the corresponding RD scheme's residuals by comparing (7) and (8). For the inner elements, we get
Φ_σ^{K,DG}(u^h) = −∫_K ∇φ_σ · f(u^h) dx + ∮_{∂K} φ_σ f̂(u^h, u^{h,−}) dγ.   (10)
The boundary residuals are given by
Φ_σ^{Γ,DG}(u^h) = ∫_Γ φ_σ ( f̂(u^h, u_b) − f(u^h) · n ) dγ.   (11)
Note that the expressions (10) and (11) satisfy the conservation relations (5) and (6). In view of later use, we define the average of the left and right states of a quantity a on the boundary of K, and the jump of any function ω, as follows:
{a} := (a_K + a_{K,−}) / 2,   [ω] := ω|_{K−} − ω|_K.   (12)
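The conservation relation (5) for residuals of the form (10) is easy to check numerically. The following one-dimensional sketch is our own illustration, with a P^1 element, a Burgers flux and a local Lax-Friedrichs numerical flux as arbitrary choices:

```python
# Verify the conservation relation (5) for DG residuals of the form (10)
# on a 1D element K = [0, 1] with a P1 basis and Burgers flux f(u) = u^2/2.
import numpy as np

phis  = [lambda x: 1.0 - x, lambda x: x]   # P1 basis, sums to 1
dphis = [-1.0, 1.0]                        # their (constant) derivatives

f = lambda u: 0.5 * u**2
def fhat(ul, ur):                          # local Lax-Friedrichs flux
    lam = max(abs(ul), abs(ur))
    return 0.5 * (f(ul) + f(ur)) - 0.5 * lam * (ur - ul)

u0, u1 = 0.3, 1.1                          # nodal values inside K
uLout, uRout = -0.2, 0.7                   # traces from the neighbouring elements
uh = lambda x: u0 * phis[0](x) + u1 * phis[1](x)

xq, wq = np.polynomial.legendre.leggauss(4)    # quadrature, mapped to [0, 1]
xq, wq = 0.5 * (xq + 1.0), 0.5 * wq

# Phi_sigma = -int_K phi'_sigma f(u^h) dx + phi_sigma(1) fhat_R - phi_sigma(0) fhat_L
fL, fR = fhat(uLout, u0), fhat(u1, uRout)
res = [-dphis[s] * np.sum(wq * f(uh(xq))) + phis[s](1.0) * fR - phis[s](0.0) * fL
       for s in range(2)]

# The sum of the residuals equals the boundary flux balance, as in (5):
print(abs(sum(res) - (fR - fL)))           # ≈ 0 up to round-off
```

Since Σ_σ φ_σ = 1 and hence Σ_σ ∇φ_σ = 0, the volume terms cancel in the sum and only the boundary numerical fluxes survive, which is exactly (5).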
Flux Reconstruction Approach on Triangles
In our research we focus on two-dimensional problems and target the use of general polygonal meshes. Therefore, we start by introducing the FR approach directly on triangles, following the explanations of [9]. For a detailed introduction to FR we strongly recommend the review article [17] and the references therein. Instead of considering our steady state problem (1), in this subsection we focus on a two-dimensional scalar conservation law
∂_t u + div(f) = 0   (13)
on an arbitrary domain Ω. The flux is now f = (f(u), g(u)) and div is the divergence in the space variables x, y. The domain is split into N non-overlapping elements Ω_k such that Ω = ∪_{k=1}^{N} Ω_k. We use here a conforming triangulation of Ω. Note that a quadrangulation would also be possible. Both u and f are approximated by polynomials in every element Ω_k and their total approximation in Ω is given by
u^σ = ⊕_{k=1}^{N} u^σ_k,   f^σ = ⊕_{k=1}^{N} f^σ_k,
where σ represents the DOF in the FR context, i.e. the solution points at which the polynomials are evaluated. Instead of doing every calculation in each element Ω_k, a reference element Ω_s is chosen. Each element Ω_k is mapped to the reference element Ω_s and all calculations are done in Ω_s. The initial equation (13) can be transformed to the following governing equation in the reference domain:
∂_t ũ + div(f̃) = 0 ⟺ ∂_t ũ + ∇_{rs} · f̃ = 0,   (14)
where div is now the divergence in the variables r, s of the computational space; to clarify the notation we write ∇_{rs}· until the end of this section. Here ũ, f̃ are the transformed u and f. They can be directly calculated using the element mapping; we suppress this dependence in the following to simplify the notation, see [9] for details. Let P^p(Ω_s) denote the space of polynomials of degree at most p on Ω_s and let R^p(Γ_s) be the polynomial space on the edges given by
R^p(Γ_s) = { v ∈ L^2(Γ_s) : v|_{Γ_f} ∈ P^p(Γ_f), ∀ Γ_f },
where Γ_f stands for the edge f of the reference element Ω_s. The approximation ũ^σ of the solution u within the reference element Ω_s is done through a multi-dimensional polynomial of degree p, using the values of ũ at N_p = (p + 1)(p + 2)/2 solution points. The solution approximation then reads
ũ^σ(r, s, t) = Σ_{i=1}^{N_p} ũ^σ_i l_i(r, s),
where ũ^σ_i is the value of ũ at the solution point i and l_i(r, s) is the multidimensional Lagrange polynomial associated with the solution point i in the reference element Ω_s. We now detail the main idea of FR. A simple approach is to also approximate the flux function f by a polynomial f̃^σ = (f̃^σ, g̃^σ). To build this approximation, a first polynomial decomposition of f̃^σ is set as
f̃^{σ,D} = Σ_{i=1}^{N_p} f̃^σ_i l_i(r, s),   g̃^{σ,D} = Σ_{i=1}^{N_p} g̃^σ_i l_i(r, s),   (15)
where the coefficients of these polynomials, respectively denoted byf σ i andg σ i , are again evaluated at the solution points. Sincef σ,D is always discontinuous at the boundary, it is called discontinuous flux. To overcome / reduce that problem a further termf σ,C is then added tof σ,D .f σ,C is set to work directly on the boundaries of each element and correctsf σ,D such that information of two neighboring elements interacts and properties like conservation still hold in the discretisation. We obtain for the approximation of the flux;
f̃^σ = f̃^{σ,D} + f̃^{σ,C}.
This gives Flux Reconstruction its name. The selection/definition of these correction functions is essential. We now detail a possible construction for our special case. On each edge of the triangle, a set of N_fp = p + 1 flux points is defined. These flux points are used to couple the solution between neighboring elements. The correction function is then constructed as follows:
f̃^{σ,C} = Σ_{f=1}^{3} Σ_{j=1}^{N_fp} ( (f · n)^{σK}_{f,j} − f^{σ,D} · n_{f,j} ) h_{f,j}(r, s). (16)
The indices f, j refer to a quantity at the flux point j of face f; in our case 1 ≤ f ≤ 3 and 1 ≤ j ≤ N_fp. The term f^{σ,D} · n_{f,j} is the normal component of the transformed discontinuous flux at the flux point f, j, whereas (f · n)^{σK}_{f,j} is a normal transformed numerical flux computed at the flux point f, j. We compute it by evaluating the multiply defined values of u^σ at each flux point: u^{σ,−} denotes the value of u^σ computed in the current element and u^{σ,+} its value computed using the information of the adjacent element that shares the same flux point. Evaluating u^{σ,−} and u^{σ,+} at each flux point and computing f̂(u^{σ,−}, u^{σ,+}) couples the two neighboring elements and the information between them. Finally, h_{f,j}(r, s) has to be explained. This is a vector correction function associated with the flux point f, j that lies in the Raviart-Thomas space RT_p(Ω_s) of order p. Thus h fulfills the following two properties
∇_rs · h_{f,j} ∈ P_p(Ω_s),   h_{f,j} · n ∈ R_p(Γ_s), (17)
and also has to satisfy
h_{f,j}(r_{f₂,j₂}) · n_{f₂,j₂} = 1 if f = f₂ and j = j₂, and 0 if f ≠ f₂ or j ≠ j₂. (18)
Because of (18) it follows that
f̃^{σ,C}(r_{f,j}) · n_{f,j} = (f · n)^{σK}_{f,j} − f^{σ,D} · n_{f,j} =: α_{f,j}. We also get f̃^σ(r_{f,j}) · n_{f,j} = (f · n)^{σK}_{f,j}
at each flux point f, j. Combining these results, the approximate solution values of problem (13) can be updated at the solution points via
dũ_i/dt = −( ∇_rs · f̃^σ )|_{r_i} = −( ∇_rs · f̃^{σ,D} )|_{r_i} − ( ∇_rs · f̃^{σ,C} )|_{r_i}
= −Σ_{k=1}^{N_p} f̃^σ_k ∂l_k/∂r (r_i) − Σ_{k=1}^{N_p} g̃^σ_k ∂l_k/∂s (r_i) − Σ_{f=1}^{3} Σ_{j=1}^{N_fp} ( (f · n)^{σK}_{f,j} − f^{σ,D} · n_{f,j} ) ∇_rs · h_{f,j}(r_i)
= −Σ_{k=1}^{N_p} f̃^σ_k ∂l_k/∂r (r_i) − Σ_{k=1}^{N_p} g̃^σ_k ∂l_k/∂s (r_i) − Σ_{f=1}^{3} Σ_{j=1}^{N_fp} α_{f,j} ∇_rs · h_{f,j}(r_i).
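Structurally, this update is a pair of matrix-vector products plus a per-face correction sum. A schematic sketch of that structure (our own notation; the differentiation matrices D_r, D_s and the precomputed divergences of h_{f,j} are assumptions for illustration, not data from the paper):

```python
import numpy as np

def fr_rhs(ft, gt, alpha, Dr, Ds, div_h):
    """Schematic evaluation of the semi-discrete FR update above.

    ft, gt : (Np,)        transformed flux components at the solution points
    alpha  : (3, Nfp)     normal flux differences alpha_{f,j} at the flux points
    Dr, Ds : (Np, Np)     differentiation matrices, Dr[i, k] = dl_k/dr (r_i)
    div_h  : (3, Nfp, Np) div h_{f,j} evaluated at the solution points
    Returns d u~_i / dt at the Np solution points.
    """
    rhs = -(Dr @ ft) - (Ds @ gt)                  # discontinuous-flux part
    rhs -= np.einsum('fj,fji->i', alpha, div_h)   # interface correction part
    return rhs

# Shape check with dummy data: zero derivative matrices, unit corrections.
Np, Nfp = 6, 3
out = fr_rhs(np.ones(Np), np.ones(Np), np.ones((3, Nfp)),
             np.zeros((Np, Np)), np.zeros((Np, Np)), np.ones((3, Nfp, Np)))
print(out)   # each entry: -(3 faces * 3 flux points) = -9
```

In an actual solver, `fr_rhs` would be wrapped in a Runge-Kutta loop, with `alpha` recomputed from the numerical flux at every stage.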
Defining an FR scheme thus reduces to selecting the distributions of flux points and solution points, as well as the form of the correction functions. This choice leads to several numerical methods with different properties.
In [9] special attention is paid to conservation and linear stability, which restricts the set of correction functions further, but we do not go into details here. Finally, we want to mention that some of the most famous schemes are embedded in this framework by an appropriate choice of correction functions and point distributions. To give a concrete example, the nodal Discontinuous Galerkin Spectral Element Method of Gassner et al. [14,13] can be named.
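For illustration, in one dimension the correction function recovering nodal DG is known from the FR literature to be Huynh's g_DG, a combination of Legendre polynomials; this 1D example is ours and not part of the 2D construction above. A quick check of its endpoint conditions — the left correction function carries the full flux jump at the left interface and vanishes at the right one:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def g_dg_left(k, x):
    """Left DG correction function g_L = (-1)^k / 2 * (P_k - P_{k+1})."""
    Pk = leg.Legendre.basis(k)(x)
    Pk1 = leg.Legendre.basis(k + 1)(x)
    return 0.5 * (-1.0) ** k * (Pk - Pk1)

for k in range(1, 6):
    # g_L(-1) = 1 and g_L(+1) = 0 for every polynomial degree k
    print(k, round(g_dg_left(k, -1.0), 12), round(g_dg_left(k, 1.0), 12))
```

The endpoint values follow from P_k(±1) = (±1)^k, which is why the alternating sign in front makes the left value exactly 1.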
Connection between Flux Reconstruction and Residual Distribution

Instead of using a variational or integral form as in DG, FR schemes discretise the differential form of (1) directly, as described in subsection 2.2. The flux function is approximated by a polynomial of degree k + 1, denoted by f^h. The discretisation of our underlying problem (1) reads

div( f^h + α∇ψ ) = 0, (19)

where α∇ψ is our correction function with the scaling term α = f̂ − f^h · n. We change the notation from subsection 2.2 on purpose, to clarify that we are now dealing with the general case; the triangle setting is recovered by setting ∇ψ := h. We get

div( f^h + ( f̂ − f^h · n ) ∇ψ ) = 0. (20)
We derive conditions on ∇ψ so that this approach fits in the RD framework and our methods have the desirable properties of conservation and stability. First, let us focus on FR schemes. The main idea of the FR schemes is that the numerical flux at the boundaries is corrected by functions in such a manner that the information of two neighboring elements interacts and properties like conservation also hold in the discretisation. Let us consider our discretisation (20). If we apply a Galerkin approach in every element K, we obtain that for any v^h ∈ V^h the relation
∫_K v^h · div( f^h + ( f̂(u^h, u^{h,−}) − f^h · n ) ∇ψ ) dx = 0 (21)
has to be fulfilled. Using the Gauss theorem in the above equation yields
−∫_K ∇v^h · ( f^h + α∇ψ ) dx + ∫_{∂K} v^h · ( f^h · n + ( f̂ − f^h · n ) ∇ψ · n ) dγ = 0. (22)
To guarantee conservation we demand that the flux over the element boundaries should be expressed only by the numerical flux of elements sharing this boundary. Therefore, we require that
f^h · n + ( f̂(u^h, u^{h,−}) − f^h · n ) ∇ψ · n = f̂(u^h, u^{h,−}), which implies ∇ψ · n ≡ 1 (23)
on the boundary. The relation (23) yields a first property of our correction function ∇ψ.
Remark 3.1. The condition (23) can be further weakened. Since we use quadrature rules to evaluate the integrals (21) or (22), (23) only has to be fulfilled at the quadrature points. We can guarantee property (23) in two dimensions (d = 2) by using functions ∇ψ lying in the lowest order Raviart-Thomas space [27], up to some scaling. The relation (23) is then automatically fulfilled as a basic property of this function space. In [9] the authors already considered the Raviart-Thomas elements, focusing on triangles. We consider the more general case of polygons.
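As an illustration of condition (23) (our own example, not from [9]): on any triangle the field w(x) = (x − c)/ρ, with c the incenter and ρ the inradius, has unit normal component on every edge, because the distance from the incenter to each edge is exactly ρ. Fields of this kind belong to the lowest order Raviart-Thomas space up to scaling.

```python
import numpy as np

# Triangle vertices, listed counter-clockwise; any triangle works.
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

# Edge lengths opposite each vertex, incenter and inradius.
a = np.linalg.norm(V[1] - V[2])
b = np.linalg.norm(V[2] - V[0])
c = np.linalg.norm(V[0] - V[1])
perimeter = a + b + c
incenter = (a * V[0] + b * V[1] + c * V[2]) / perimeter
area = 0.5 * abs((V[1, 0] - V[0, 0]) * (V[2, 1] - V[0, 1])
                 - (V[1, 1] - V[0, 1]) * (V[2, 0] - V[0, 0]))
inradius = 2.0 * area / perimeter

normal_components = []
for i in range(3):
    p, q = V[i], V[(i + 1) % 3]
    t = (q - p) / np.linalg.norm(q - p)
    n = np.array([t[1], -t[0]])          # outward unit normal (CCW vertices)
    mid = 0.5 * (p + q)
    w = (mid - incenter) / inradius      # candidate correction field w(x)
    normal_components.append(w @ n)

print(normal_components)                  # 1.0 on every edge
```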
To demonstrate the connection between FR and RD and to build a numerical scheme, we have to apply quadrature formulas to evaluate the continuous integrals.
From Flux Reconstruction to Residual Distribution Schemes
In this part of the paper we show the connection between Flux Reconstruction and Residual Distribution schemes. The key is a proper definition of the residuals. Comparing the formulation of the residuals (8)-(11) from section 2.1 with the formulations (21)-(22), one notices that the equations share a similar structure. By passing from integrals to quadrature formulas and splitting v^h along {φ_σ}_{σ∈S}, we can define the residuals in the following manner:
Φ^{K,FR}_σ(u^h) := ∫_K φ_σ · div( f^h + ( f̂(u^h, u^{h,−}) − f^h · n ) ∇ψ ) dx. (24)
Another approach is to use the Gauss formula (integration by parts/summation by parts), leading to
Φ^{K,FR}_σ(u^h) := −∫_K ∇φ_σ · f^h dx + ∫_{∂K} φ_σ ( α ∇ψ · n + f^h · n ) dγ − ∫_K ∇φ_σ · α∇ψ dx
(23)= −∫_K ∇φ_σ · f^h dx + ∫_{∂K} φ_σ f̂(u^h, u^{h,−}) dγ + r_σ, with r_σ := −∫_K ∇φ_σ · α∇ψ dx. (25)
Recalling property (23), the boundary residual reads
Φ^{Γ,FR}_σ(u^h) = ∫_Γ φ_σ ( f̂(u^h, u^b) − f^h · n ) dγ. (26)
Comparing the residuals (24) and (25) with the residuals (10) of the DG scheme, we can write
Φ^{K,FR}_σ(u^h) = Φ^{K,DG}_σ(u^h) + r_σ. (27)
Furthermore, the conservation relation (5) directly provides a second property on ∇ψ, explicitly
Σ_{σ∈K} r_σ = −Σ_{σ∈K} ∫_K ∇φ_σ · α∇ψ dx = 0. (28)
If we apply the residuals (24)-(26) to our underlying steady state problem (1)-(2), we are able to write our model problem in the form of (7). For any σ ∈ S, it reads
Σ_{K⊂Ω, σ∈K} Φ^{K,FR}_σ(u^h) + Σ_{Γ⊂∂Ω, σ∈Γ} Φ^{Γ,FR}_σ(u^h) = 0. (29)
With the definitions of the residuals (24)-(27) and the discretisation (29), Flux Reconstruction is embedded within the RD framework. By ensuring that conditions (23) and (28) hold, the conservation relation (5) for our residuals is guaranteed, and we are now able to use the theoretical results of RD [2,4,5,3,6] for the FR schemes under consideration. Naturally, the conservation properties of Flux Reconstruction schemes also hold, and stability results transfer from the RD framework to FR as well.
Remark 3.2. If we are considering a two dimensional problem (1), our approach does not restrict the splitting of the domain Ω to a specific geometric structure like triangles or rectangles. The results are valid more generally for all polygons. This approach then extends the results of [9,17,20,35] on FR to general grids.
Before we focus on entropy stability of our FR methods, we briefly recall some well-known results on RD schemes from [3,4,5] and references therein.
Transformation Results to Flux Reconstruction
As described, inter alia, in [4] for the RD schemes, a generalization of the classical Lax-Wendroff theorem is valid. It transfers naturally to our FR formulation in RD form (24)-(27).
Theorem 4.1 (Theorem 2.2 of [7]). Assume the family of meshes T = (T_h) is shape regular. We assume that the residuals {Φ^K_σ}_{σ∈K}, for K an element or a boundary element of T_h, satisfy: • For any M ∈ R₊, there exists a constant C, depending only on the family of meshes T_h and on M, such that for any
u^h ∈ V^h with ‖u^h‖_∞ ≤ M we have ‖Φ^K_σ(u^h|_K)‖ ≤ C Σ_{σ,σ'∈K} ‖u^h_σ − u^h_{σ'}‖.
• The conservation relations (5) and (6) hold.
If there exists a constant C_max such that the solutions of the scheme (7) (or (29)) satisfy ‖u^h‖_∞ ≤ C_max, and a function v ∈ L²(Ω)^d such that (u^h)_h, or at least a sub-sequence, converges to v in L²(Ω)^d, then v is a weak solution of (1).
In view of the proof, the following relation is essential; it can be derived from the conservation relations (5) and (6). For any v^h ∈ V^h written as v^h = Σ_{σ∈S} v_σ φ_σ:
0 = −∫_Ω ∇v^h · f(u^h) dx + ∫_{∂Ω} v^h ( f̂(u^h, u^b) − f(u^h) · n ) dγ + Σ_{e∈E_h} ∫_e {v^h} f̂(u^h, u^{h,−}) dγ + Σ_{K∈Ω} (1/#K) Σ_{σ,σ'∈K} (v_σ − v_{σ'}) ( Φ^K_σ(u^h) − Φ^{K,DG}_σ(u^h) ) + Σ_{Γ∈∂Ω} (1/#Γ) Σ_{σ,σ'∈Γ} (v_σ − v_{σ'}) ( Φ^Γ_σ(u^h) − Φ^{Γ,DG}_σ(u^h) ), (30)
with Φ^{•,DG}_σ as in (10) and (11). A consequence of (30) is the following entropy inequality: Proposition 4.2 (Proposition 3.2 from [4]). Let (U, g) be an entropy-entropy flux pair for (1) and ĝ a numerical entropy flux consistent with g · n. Assume that the residuals satisfy:
Σ_{σ∈K} ⟨∇_u U(u_σ), Φ^K_σ⟩ ≥ ∫_{∂K} ĝ(u^h, u^{h,−}) dγ for any element K,
Σ_{σ∈e} ⟨∇_u U(u_σ), Φ^e_σ⟩ ≥ ∫_e ( ĝ(u^h, u^b) − g(u^h) · n ) dγ for any boundary edge e. (31)
Under the assumptions of Theorem 4.1, the limit weak solution satisfies the following entropy inequality:
For any τ ∈ C¹(Ω), τ ≥ 0: −∫_Ω ∇τ · g(u) dx + ∫_{∂Ω⁻} τ g(u^b) · n dγ ≤ 0.
Theorem 4.1 and Proposition 4.2 ensure entropy stability; assumption (31) is therefore essential. In [3], property (31) is analysed further and compared with Tadmor's theory of entropy conservative/stable numerical flux functions [32,33]. We can transfer this investigation, yielding a further condition on the correction functions: one can show that they have to satisfy a certain inequality to guarantee entropy stability. However, this is more of a theoretical condition, and so far we do not see how it helps to construct entropy conservative/stable FR schemes. For completeness and future perspectives we develop it in the appendix. Another approach is followed in the next section.
Entropy Conservative/Stable Flux Reconstruction Schemes
We show in this section how to construct an entropy conservative scheme starting from our FR schemes Φ^{K,FR}_σ as defined in (25) and (27). From now on v represents the entropy variable ∇_u U(u). Note that since the entropy is strictly convex, the mapping u → v(u) is one-to-one. Here, we concentrate only on the inner elements; for a detailed analysis and the study of the boundary we strongly recommend [5]. We give further conditions on our correction functions in (27) to obtain an entropy conservative/stable numerical FR scheme and present a way to construct entropy stable FR schemes. This is done here for the first time, whereas [9,35,36] derive only linear stability. Now, let ĝ be the associated numerical entropy flux for the entropy variable v, consistent with g · n. As described in [5], the entropy condition can be formulated in the RD setting for every inner element by the following formula:
Σ_{σ∈K} ⟨v_σ, Φ^{CS}_σ⟩ = ∫_{∂K} ĝ(v^h, v^{h,−}) dγ. (32)
To construct a new scheme Φ CS σ which fulfils the condition (32), we consider
Φ^{CS}_σ = Φ^{K,FR}_σ + τ̃_σ (27)= Φ^{K,DG}_σ + r_σ + τ̃_σ, (33)
with some τ̃_σ still to be built. In order to guarantee the conservation property, we require

Σ_{σ∈K} ( Φ^{K,FR}_σ + τ̃_σ ) = ∫_{∂K} f̂(v^h, v^{h,−}) dγ, (34)

which implies

Σ_{σ∈K} τ̃_σ = 0. (35)
The question is now how to construct τ̃_σ under this constraint. For (32) to hold, we need
Σ_{σ∈K} ⟨v_σ, τ̃_σ⟩ = ∫_{∂K} ĝ(v^h, v^{h,−}) dγ − Σ_{σ∈K} ⟨v_σ, Φ^{K,FR}_σ⟩ =: E. (36)
Thus, a solution to (35)-(36) is given by:
τ̃_σ = α_σ (v_σ − v̄),   α_σ = E / Σ_{σ'∈K} (v_{σ'} − v̄)²,   v̄ = (1/#K) Σ_{σ'∈K} v_{σ'}. (37)
This can be seen by the following short calculation (note that α_σ ≡ α is in fact independent of σ):
Σ_{σ∈K} τ̃_σ = Σ_{σ∈K} α (v_σ − v̄) = α ( Σ_{σ∈K} v_σ − #K v̄ ) (37)= 0,
E = Σ_{σ∈K} ⟨v_σ, τ̃_σ⟩ = Σ_{σ∈K} α ⟨v_σ, v_σ − v̄⟩ = α Σ_{σ∈K} ( ⟨v_σ, v_σ⟩ − ⟨v_σ, v̄⟩ ) = α Σ_{σ∈K} ( ⟨v_σ, v_σ⟩ − 2⟨v_σ, v̄⟩ + ⟨v̄, v̄⟩ ) = α Σ_{σ∈K} (v_σ − v̄)².
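A quick numerical sanity check of (37), in the scalar case with random data of our choosing: the correction must satisfy the conservation constraint (35) and reproduce the prescribed entropy defect E in (36).

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=6)            # nodal entropy variables v_sigma
E = 0.37                          # prescribed entropy defect
vbar = v.mean()                   # cell average of v
alpha = E / np.sum((v - vbar) ** 2)
tau = alpha * (v - vbar)          # correction (37)

print(tau.sum())                  # conservation (35): 0 up to round-off
print(v @ tau)                    # entropy condition (36): recovers E
```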
The scheme (33) is entropy conservative by construction, but one can wonder about its accuracy. We use the second approach presented in [3], since it preserves the accuracy. The entropy flux g(u) ∈ R^d and the normal flux f fulfill the following relation:
g = ⟨v, f⟩ − θ with ∇_v θ = f. (38)
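A worked instance of (38) (our example: the 1D Burgers equation, not treated in the paper): with entropy U(u) = u²/2 the entropy variable is v = u, the flux is f = u²/2, the entropy flux is g = u³/3, and the potential θ = ⟨v, f⟩ − g = u³/6 indeed satisfies ∇_v θ = f.

```python
import numpy as np

u = np.linspace(-2.0, 2.0, 9)
v = u                    # entropy variable for U(u) = u^2 / 2
f = u ** 2 / 2           # Burgers flux
g = u ** 3 / 3           # entropy flux
theta = v * f - g        # potential (38): theta = u^3 / 6

dtheta_dv = v ** 2 / 2   # exact derivative of theta = v^3 / 6 w.r.t. v
print(np.max(np.abs(theta - u ** 3 / 6)))   # theta is u^3 / 6
print(np.max(np.abs(dtheta_dv - f)))        # grad_v theta = f
```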
The crucial point is the error E, which has to be approximated as accurately as possible. For simplicity we consider
Ẽ := −E = −∫_{∂K} ĝ(v^h, v^{h,−}) dγ + Σ_{σ∈K} ⟨v_σ, Φ^{K,FR}_σ⟩ = −∫_{∂K} ĝ(v^h, v^{h,−}) dγ + Σ_{σ∈K} ⟨v_σ, Φ^{K,DG}_σ + r_σ⟩. (39)
The entropy error Ẽ for DG was already investigated in [3]; we follow the analogous steps, except for the additional terms r_σ. The numerical entropy flux ĝ will be defined later on. We hide the dependence of f^h on v^h in the following to simplify the notation. We further need the following two relations. Using (9) we have
∇v^h · f^h = div ⟨v^h, f^h⟩ − v^h · ∇f^h, (40)
which, written in components, reads
Σ_{j=1}^{d} Σ_{i=1}^{p} (∂v^h_i/∂x_j) f_{i,j} = Σ_{j=1}^{d} ∂/∂x_j ( Σ_{i=1}^{p} v^h_i f_{i,j} ) − Σ_{j=1}^{d} Σ_{i=1}^{p} v^h_i (∂f_{i,j}/∂x_j).
We also get from (3) and (4):

v(v^h) · ∇f(v^h) = Σ_{j=1}^{d} v(v^h) · ∂f_j(v^h)/∂x_j = Σ_{j=1}^{d} ∂g_j(v^h)/∂x_j = div g(v^h). (41)
Here, v(v^h) is used to emphasize that condition (3) only holds for the entropy variable v; we cannot directly assume that it holds for its interpolation. Therefore, we have to use the flux itself in this context and not the interpolated one. Since we use the approximated flux function f^h in our FR schemes, the investigation differs slightly from [3]; however, this does not change the main result. With (40), (41) and Gauss's theorem we are able to rewrite (39) as
Ẽ (40)= −∫_K div⟨v^h, f^h⟩ dx + ∫_K v^h · ∇f^h dx + ∫_{∂K} ( ⟨v^h, f̂(v^h, v^{h,−})⟩ − ĝ(v^h, v^{h,−}) ) dγ + Σ_{σ∈K} ⟨v_σ, r_σ⟩
= ( −∫_K div⟨v^h, f^h⟩ dx + ∫_K div⟨v^h, f^h⟩ dx ) =: SUR_1
 − ∫_K div⟨v^h, f^h⟩ dx
 + ∫_K ( v^h − v(v^h) ) · ∇f(v^h) dx =: SUR_2
 + ∫_K v^h · ∇( f^h − f(v^h) ) dx =: SUR_3
 + ∫_K v(v^h) · ∇f(v^h) dx
 − ∫_K ∇v^h · (α∇ψ) dx =: CO
 + ∫_{∂K} ( ⟨v^h, f̂(v^h, v^{h,−})⟩ − ĝ(v^h, v^{h,−}) ) dγ
Gauss & (41): = SUR_1 + SUR_2 + SUR_3 + CO + ( −∫_{∂K} ⟨v^h, f(v^h) · n⟩ dγ + ∫_{∂K} ⟨v^h, f(v^h) · n⟩ dγ ) =: BO
 − ∫_{∂K} ⟨v^h, f(v^h) · n⟩ dγ + ∫_{∂K} g(v^h) · n dγ + ∫_{∂K} ( ⟨v^h, f̂(v^h, v^{h,−})⟩ − ĝ(v^h, v^{h,−}) ) dγ
= SUR_1 + SUR_2 + SUR_3 + CO + BO + ∫_{∂K} ( −⟨v^h, f(v^h) · n⟩ + ⟨v^h, f̂(v^h, v^{h,−})⟩ + g(v^h) · n − ĝ(v^h, v^{h,−}) ) dγ,
where in the pairs labeled SUR_1 and BO the first integral is evaluated by quadrature and the second exactly, so that these terms collect the quadrature errors.
Assuming a smooth exact solution, we can use a quadrature formula of order k for the first three volume terms (SUR_1, SUR_2, SUR_3) and obtain
SUR_1 = O(h^{k+d+1}), SUR_2 = O(h^{k+d+1}), SUR_3 = O(h^{k+d+1}). (42)
For the boundary term BO, we get BO = O(h^{k+d+1}) using a quadrature formula of order k + 1. The last term remains to be investigated. For the numerical entropy flux we use here
ĝ(v^h, v^{h,−}) = ⟨{v^h}, f̂(v^h, v^{h,−})⟩ − θ({v^h}). (43)
Applying (38) and (43) to the last term yields, after some calculations,
−⟨v^h, f(v^h) · n⟩ + ⟨v^h, f̂(v^h, v^{h,−})⟩ + g(v^h) · n − ĝ(v^h, v^{h,−}) = O(h^{2(k+1)}),
and in total
∫_{∂K} ( −⟨v^h, f(v^h) · n⟩ + ⟨v^h, f̂(v^h, v^{h,−})⟩ + g(v^h) · n − ĝ(v^h, v^{h,−}) ) dγ = O(h^{d+2k+1})
with a suitable quadrature formula. We now consider the last element term, CO, which has to approximate zero at least as O(h^{k+d+1}). Indeed, if we assume that a quadrature of order k for the volume integrals leads to CO = O(h^{d+k+1}), then we can merge the errors and retrieve
Ẽ = Σ_{σ∈K} ⟨v_σ, Φ_σ⟩ − ∫_{∂K} ĝ(v^h, v^{h,−}) dγ = O(h^{d+k+1}), (44)
provided that a quadrature formula of order k for the volume integrals and of order k + 1 for the boundary integrals is applied. This last equality furnishes a new condition on admissible correction functions and/or their quadrature rule, and leads to an extension of Proposition 3.3.
Remark 5.1. To obtain an entropy stable scheme, i.e. to have
Σ_{σ∈K} ⟨v_σ, Φ^{ST}_σ⟩ ≥ ∫_{∂K} ĝ(v^h, v^{h,−}) dγ, (45)
we can combine the above approach and add an additional term to Φ CS σ . More explicitly we can set:
Φ^{ST}_σ = Φ^{CS}_σ + Ψ_σ,
where Ψ_σ satisfies Σ_{σ∈K} Ψ_σ = 0 and Σ_{σ∈K} ⟨v_σ, Ψ_σ⟩ ≥ 0. In [3], two expressions for Ψ can be found: one contains jumps, the other streamline terms.
However, we are interested here in entropy conservative/stable FR schemes; the inequality (45) should therefore already hold for FR schemes of type (25). Let E_DG be the entropy error (36) of DG. Assuming
Σ_{σ∈K} ⟨v_σ, r_σ⟩ ≥ E_DG, (46)
we get automatically
Σ_{σ∈K} ⟨v_σ, Φ^{K,FR}_σ⟩ − ∫_{∂K} ĝ(v^h, v^{h,−}) dγ = Σ_{σ∈K} ⟨v_σ, r_σ⟩ − ( ∫_{∂K} ĝ(v^h, v^{h,−}) dγ − Σ_{σ∈K} ⟨v_σ, Φ^{K,DG}_σ⟩ ) ≥ 0, where the bracketed term is E_DG.
Thus, we are now able to present a way to build correction functions such that (32) is naturally fulfilled. To simplify the notation we include the scaling term α in our correction functions and write from now on ∇ψ := α∇ψ. Conditions (23) and (28) then transfer to
∇ψ · n = f̂(u^h, u^{h,−}) − f^h · n, (23*)
Σ_{σ∈K} r_σ = −Σ_{σ∈K} ∫_K ∇φ_σ · ∇ψ dx = 0. (28*)
For the construction of an entropy conservative scheme, we first add the term τ̃ to our FR scheme; a solution of (37) is then obtained. A way to determine a conservation-preserving τ̃ is to read its formulation directly from the ansatz (37), used together with the above-mentioned conditions, especially (28*). On the one hand we have
r_σ = ∫_K ∇φ_σ · ∇ψ dx,
but on the other hand we also require the entropy condition (32). In order to fulfill this last condition, we may write r_σ as
r_σ = α_σ(E) (v_σ − v̄),
where α σ (E) is defined as in (37) and where we emphasized the dependence on the entropy error E. Bringing the two relations together we define our correction function by solving the following discrete Neumann problem.
∫_K ∇φ_σ · ∇ψ dx = α_σ(E) (v_σ − v̄),   ∇ψ · n = f̂(u^h, u^{h,−}) − f^h · n. (47)
Equation (32) is then fulfilled and we naturally obtain entropy conservation for our FR schemes. We can add jump or streamline terms to the schemes, as described in Remark 5.1 or, more explicitly, in [3], to obtain entropy stability.
Summary
In this short paper we demonstrated the connection between Residual Distribution schemes and the Flux Reconstruction approach. We saw that FR schemes can be written as RD schemes. This link enables us to transfer the well-known results about RD schemes into the FR framework. The crucial point is to derive suitable correction functions in FR. From our previous analysis we can formulate the following main result.
Theorem 6.1. Consider the steady state problem (1) together with condition (2), discretised by the FR approach (20), div(f^h + α∇ψ) = 0, with α = f̂ − f^h · n. Assume further that the correction functions α∇ψ fulfill the following two conditions:
• ∇ψ · n ≡ 1, (23)
• Σ_{σ∈K} r_σ = −Σ_{σ∈K} ∫_K ∇φ_σ · α∇ψ dx = 0. (28)
Then, our FR schemes are conservative and we are able to recast them into the RD framework. All the results of section 4 apply automatically to our FR schemes. If we further choose our correction functions so that
Σ_{σ∈K} ⟨v_σ, r_σ⟩ ≥ E_DG, (46)
where E_DG is the entropy error (36) of DG, then our FR scheme is additionally entropy stable. Finally, if the correction functions are determined by solving the discrete Neumann problem (47), the resulting FR scheme is entropy conservative.
A naturally arising question is how to select these correction functions so that the conditions are fulfilled. As explained in Remark 3.1, we search for our correction functions ψ in the Raviart-Thomas space. Solving the discrete Neumann problem (47) then selects entropy conservative FR schemes.
In [36], Vincent et al. already described and developed FR schemes on triangular grids. Their main difficulty was the description of the correction functions. They used the Raviart-Thomas space and proved several conditions on their schemes, focusing on linear stability. In their analysis/construction they used a direct representation with flux points and solution points on triangles, whereas we apply an abstract approach. Our advantage is that we do not use the geometrical structure of the grid; therefore, our results are valid for general polygons (in two space dimensions) and include the schemes of [36]. This paper, however, deals with the interpretation/transformation of FR schemes into the RD framework, allowing the use of theoretical results specific to RD in the context of FR. We consider especially entropy stability and derive conditions linked to the selection of the correction functions. We further give an idea of how to build entropy conservative FR schemes. In a forthcoming paper we construct these FR schemes for general polygons and test them numerically against some benchmarks.
Appendix
Entropy Stability-An Approach in the Sense of Tadmor
In this appendix we follow the steps from [3] and take a closer look at property (31) for our FR residuals (24)-(26). For simplicity we assume in the following that K is a fixed triangle; the results are nevertheless extendible to general polytopes with degrees of freedom on the boundary of K. We may consider a triangulation T_K of K whose vertices are exactly the elements of S. We denote by f̂_{σ,σ'} the flux between two DOFs σ and σ', and by n_{σ,σ'} the normal vector on the direct edge between σ and σ' (see Figures 1 and 2).
In [3] it is shown that we can split the residual in the following way:
Φ^K_σ = Σ_{edges [σ,σ']} f̂_{σ,σ'} + f̂^b_σ, (48) with f̂^b_σ = ∫_{∂K} φ_σ f̂(u^h, u^{h,−}) dγ.
Further properties and a detailed analysis can be found in [3]. As an additional example we derive the flux f̂_{σ,σ'} for FR. Example 7.1 (Flux Reconstruction schemes in the P₁ case). The residuals are simply
Φ^{K,FR}_σ(u^h) = −∫_K ∇φ_σ · ( f^h + α∇ψ ) dx + ∫_{∂K} φ_σ f̂(u^h, u^{h,−}) dγ.
The flux between two DOFs σ and σ' is given by
f̂_{σ,σ'}(u^h, u^{h,−}) = ∫_{∂K} ( φ_σ − φ_{σ'} ) f̂(u^h, u^{h,−}) dγ − ∫_K ∇( φ_σ − φ_{σ'} ) · ( f^h + α∇ψ ) dx,
where the equality ∇(φ_σ − φ_{σ'}) = n_{σσ'} / |K| comes from the geometry (see Figures 1 and 2).

Figure 2: Representation of the control volume associated to DOF 1.

We obtain
f̂_{σ,σ'}(u^h, u^{h,−}) = ∫_{∂K} ( φ_σ − φ_{σ'} ) f̂(u^h, u^{h,−}) dγ − ( ∫_K ( f^h + α∇ψ ) dx ) · n_{σσ'} / |K|. (49)
Remark 7.2. If the same quadrature formula is used on each edge, then ∫_{∂K} (φ_σ − φ_{σ'}) dγ = 0, and by introducing the cell average ū^h of u^h and the average f̄^h of the flux f^h, we can rewrite the flux as
f̂_{σ,σ'}(u^h, u^{h,−}) = ∫_{∂K} ( φ_σ − φ_{σ'} ) ( f̂(u^h, u^{h,−}) − f̄^h · n ) dγ − ( ∫_K ( f^h + α∇ψ ) dx ) · n_{σσ'} / |K|,
and the first term can be interpreted as a dissipation.
In Tadmor's work [32,33] on entropy stability, sufficient conditions are derived for schemes to be entropy stable/conservative. In particular, it is shown that the numerical entropy flux needs to fulfill a certain dissipation inequality. We deduce here equivalent conditions for our setting, paying attention to the correction function, which plays an important role. Using equation (48) we can write our FR residuals as
Φ^{K,FR}_σ = Σ_{edges [σ,σ']} f̂^{FR}_{σ,σ'} + f̂^{b,FR}_σ, (50) with f̂^{b,FR}_σ = ∫_{∂K} φ_σ f̂(u^h, u^{h,−}) dγ.
From Proposition 4.2 we know that condition (31) is sufficient to guarantee entropy stability. Let us now insert (50) into it. We obtain
Σ_{σ∈K} ⟨v_σ, Φ^{K,FR}_σ⟩ = Σ_{σ∈K} ⟨v_σ, Σ_{[σ,σ']} f̂^{FR}_{σ,σ'} + f̂^{b,FR}_σ⟩ = Σ_{σ∈K} ⟨v_σ, Σ_{[σ,σ']} f̂^{FR}_{σ,σ'}⟩ + Σ_{σ∈K} ⟨v_σ, ∫_{∂K} φ_σ f̂(v^h, v^{h,−}) dγ⟩ = (1/2) Σ_{σ>σ'} ⟨v_σ − v_{σ'}, f̂^{FR}_{σ,σ'}⟩ + ∫_{∂K} ⟨v^h, f̂(v^h, v^{h,−})⟩ dγ. (51)
We recall that we want to tune the correction function such that the obtained scheme is stable. This reduces here to requiring
Σ_{σ∈K} ⟨v_σ, Φ^{K,FR}_σ⟩ ≥ ∫_{∂K} ĝ(v^h, v^{h,−}) dγ. (52)
We now derive a condition that guarantees (52). Introduce the potential Θ^h_K in K by
Θ^h_K := Σ_{σ∈K} θ_σ φ_σ with θ_σ = ⟨v_σ, f(v_σ)⟩ − g(v_σ),
where we used the fact that the entropy flux g is related to the flux f by g = ⟨v, f⟩ − Θ. Defining the numerical entropy flux ĝ in this spirit yields the following expression:
ĝ(v^h, v^{h,−}) := ⟨{v^h}, f̂(v^h, v^{h,−})⟩ − {Θ^h_K} · n.
Together with (12) we may rewrite (51) as:
(1/2) Σ_{σ>σ'} ⟨v_σ − v_{σ'}, f̂^{FR}_{σ,σ'}⟩ + ∫_{∂K} ⟨v^h, f̂(v^h, v^{h,−})⟩ dγ ≥ ∫_{∂K} ( ⟨{v^h}, f̂(v^h, v^{h,−})⟩ − {Θ^h_K} · n ) dγ
⟺ (1/2) Σ_{σ>σ'} ⟨v_σ − v_{σ'}, f̂^{FR}_{σ,σ'}⟩ + ∫_{∂K} ⟨v^h, f̂(v^h, v^{h,−})⟩ dγ + ∫_{∂K} Θ^h_K · n dγ + (1/2) ∫_{∂K} [Θ^h_K] · n dγ ≥ ∫_{∂K} ⟨{v^h}, f̂(v^h, v^{h,−})⟩ dγ
⟺ (1/2) Σ_{σ>σ'} ⟨v_σ − v_{σ'}, f̂^{FR}_{σ,σ'}⟩ + ∫_{∂K} Θ^h_K · n dγ + (1/2) ∫_{∂K} [Θ^h_K] · n dγ − (1/2) ∫_{∂K} ⟨[v^h], f̂(v^h, v^{h,−})⟩ dγ ≥ 0.
Rearranging the last inequality we get:
( (1/2) Σ_{σ>σ'} ⟨v_σ − v_{σ'}, f̂^{FR}_{σ,σ'}⟩ + ∫_{∂K} Θ^h_K · n dγ ) − (1/2) ∫_{∂K} ( ⟨[v^h], f̂(v^h, v^{h,−})⟩ − [Θ^h_K] · n ) dγ ≥ 0, (53)
where we denote the first bracket by C_K and set B_∂K := (1/2) ∫_{∂K} ( ⟨[v^h], f̂(v^h, v^{h,−})⟩ − [Θ^h_K] · n ) dγ, so that (53) reads C_K − B_∂K ≥ 0.
Taking a closer look at inequality (53), we may split this condition for entropy stability into two parts: one acting on the boundary of K, denoted B_∂K, and one acting in K, denoted C_K.
Let us draw a connection to the work of [32]. We recall that a numerical flux f̂ is entropy stable in the sense of Tadmor if
⟨[v^h], f̂(v^h, v^{h,−})⟩ − [Θ^h_K] · n ≤ 0 (54)
holds. Thus, combining (54) and the use of an entropy stable numerical flux in the sense of Tadmor, we can guarantee that B_∂K ≤ 0. It remains to consider C_K. Let us study its first term. We have:
(1/2) Σ_{σ>σ'} ⟨v_σ − v_{σ'}, f̂^{FR}_{σ,σ'}⟩ = (1/2) Σ_{σ>σ'} ( ⟨v_σ − v_{σ'}, f̂^{FR}_{σ,σ'}⟩ − (θ_σ − θ_{σ'}) · n_{σ,σ'} ) + (1/2) Σ_{σ>σ'} (θ_σ − θ_{σ'}) · n_{σ,σ'}.
Furthermore, we have

(1/2) Σ_{σ>σ'} (θ_σ − θ_{σ'}) · n_{σ,σ'} = Σ_{σ∈K} θ_σ · N_σ, (55)

Σ_{σ∈K} θ_σ · N_σ = −Σ_{σ∈K} ∫_{∂K} φ_σ θ_σ · n dγ = −∫_{∂K} Θ^h_K · n dγ. (56)
By (56) we get for C K :
(1/2) Σ_{σ>σ'} ( ⟨v_σ − v_{σ'}, f̂^{FR}_{σ,σ'}⟩ − (θ_σ − θ_{σ'}) · n_{σ,σ'} ) + ∫_{∂K} Θ^h_K · n dγ − ∫_{∂K} Θ^h_K · n dγ ≥ 0.
Thus we finally get:
(1/2) Σ_{σ>σ'} ( ⟨v_σ − v_{σ'}, f̂^{FR}_{σ,σ'}⟩ − (θ_σ − θ_{σ'}) · n_{σ,σ'} ) ≥ 0. (57)
This condition is analogous to Tadmor's entropy stability condition. Therefore, for an entropy stable flux function f̂^{FR}_{σ,σ'}, entropy stability is guaranteed. Note, however, that f̂^{FR}_{σ,σ'} depends on the correction functions; we show this for Example 7.1. Example 7.3. Inserting (49) into (57) yields the condition
(1/2) Σ_{σ>σ'} ( ⟨v_σ − v_{σ'}, f̂^{FR}_{σ,σ'}⟩ − (θ_σ − θ_{σ'}) · n_{σ,σ'} )
= (1/2) Σ_{σ>σ'} ( ⟨v_σ − v_{σ'}, ∫_{∂K} (φ_σ − φ_{σ'}) f̂(u^h, u^{h,−}) dγ − ( ∫_K f^h dx ) · n_{σσ'} / |K| ⟩ − (θ_σ − θ_{σ'}) · n_{σ,σ'} )
 − (1/2) Σ_{σ>σ'} ⟨v_σ − v_{σ'}, ( ∫_K α∇ψ dx ) · n_{σσ'} / |K| ⟩ ≥ 0.
The first line of the last expression is again the same as for DG, whereas the last line gives special conditions on the correction functions, which can be tuned so that this inequality holds. Thus, the first step is the selection of α∇ψ such that
(1/2) Σ_{σ>σ'} ⟨v_σ − v_{σ'}, ( ∫_K α∇ψ dx ) · n_{σσ'} / |K| ⟩ ≤ 0.
In the above example we have seen that the correction function has a direct influence on entropy stability. We can now further restrict our correction functions so that the inequality (57) holds. A detailed analysis, as well as more examples of such restrictions, can be found in [5].
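Condition (57) mirrors Tadmor's framework, where entropy conservation is the equality case ⟨[v], f̂⟩ = [θ]. A concrete check for the 1D Burgers flux (our example; the entropy conservative flux below is Tadmor's classical one, not constructed in this paper):

```python
import numpy as np

def fhat_ec(a, b):
    """Tadmor's entropy conservative flux for Burgers, f(u) = u^2 / 2."""
    return (a * a + a * b + b * b) / 6.0

def theta(u):
    """Entropy potential for Burgers with U = u^2 / 2: theta = u^3 / 6."""
    return u ** 3 / 6.0

rng = np.random.default_rng(1)
a, b = rng.normal(size=(2, 100))           # random left/right states
lhs = (b - a) * fhat_ec(a, b)              # [v] * fhat  (here v = u)
rhs = theta(b) - theta(a)                  # [theta]
print(np.max(np.abs(lhs - rhs)))           # identity holds up to round-off
```

The identity follows algebraically from (b − a)(a² + ab + b²) = b³ − a³; an entropy stable flux replaces the equality by "≤", in line with (54).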
Remark 7.4 (Extension of Theorem 6.1). Our investigation yields an additional condition, so that we can extend Theorem 6.1 by the following: if we further choose our correction functions so that f̂^{FR}_{σ,σ'} is entropy stable in the sense of Tadmor, i.e. satisfies (57), then our FR scheme is additionally entropy stable.
Figure 1: Notations for the schemes. On the left: definition of the control volume for the degree of freedom σ; the vertex σ plays the role of the vertex 1 in the left picture for the triangle K. The control volume C_σ associated to σ = 1 is shown in green on the right. The vectors n_ij are normal to the internal edges, scaled by the corresponding edge length.
Here n_{σ,σ'} follows the orientation convention: it vanishes if σ and σ' are not on the same edge, carries the factor +1 if [σ, σ'] is an edge and σ → σ' is direct, and −1 if [σ, σ'] is an edge and σ → σ' is indirect. Additionally, N_σ fulfills f̂^b_σ = f(u^h) · N_σ and we can write N_σ := −∫_{∂K} φ_σ n dγ. Putting this into (55) yields (56).
We neglect the dependence on the mapping here again.
Here, we neglect the fact that in our description of DG we did not approximate the flux function by a polynomial.
Here, the expression ⟨v, f⟩ := v^T · f, where the flux is interpreted as a p × d matrix and the product is a d-row vector.
Here, we assume that the quadrature has sufficient order or accuracy such that also the discrete version of the theorem is fulfilled.
Here, we consider directly the entropy variable v, similarly to [32], instead of u. Furthermore, we deal with an oriented graph: given two vertices σ and σ' of this graph, we write σ > σ' for a direct edge and shorten the notation by Σ_{σ>σ'} := Σ_{σ∈K} Σ_{σ'∈K | σ>σ'}.
Acknowledgements

The second and third authors have been funded by the SNF project (Number 175784), "Solving advection dominated problems with high order schemes with polygonal meshes: application to compressible and incompressible flow problems".
References

R. Abgrall. Toward the ultimate conservative scheme: following the quest. Journal of Computational Physics, 167(2):277-315, 2001.

R. Abgrall. Residual distribution schemes: current status and future trends. Computers & Fluids, 35(7):641-669, 2006.

R. Abgrall. A general framework to construct schemes satisfying additional conservation relations. Application to entropy conservation and entropy dissipative schemes. 2017.

R. Abgrall. Some remarks about conservation and entropy stability for residual distribution schemes. arXiv preprint arXiv:1708.03108, 2017.

R. Abgrall. Some remarks about conservation for residual distribution schemes. Computational Methods in Applied Mathematics, 2017.

R. Abgrall, P. Bacigaluppi, and S. Tokareva. A high-order nonconservative approach for hyperbolic equations in fluid dynamics. Computers & Fluids, 2017.

R. Abgrall and P. L. Roe. High order fluctuation schemes on triangular meshes. Journal of Scientific Computing, 19(1-3):3-36, 2003.

R. Abgrall and C.-W. Shu. Development of residual distribution schemes for the discontinuous Galerkin method: the scalar case with linear elements. Communications in Computational Physics, 5(2-4):376-390, 2009.

P. Castonguay, P. E. Vincent, and A. Jameson. A new class of high-order energy stable flux reconstruction schemes for triangular elements. Journal of Scientific Computing, 51(1):224-256, 2012.

P. Castonguay, D. Williams, P. E. Vincent, and A. Jameson. Energy stable flux reconstruction schemes for advection-diffusion problems. Computer Methods in Applied Mechanics and Engineering, 267:400-417, 2013.

H. Deconinck, K. Sermeus, and R. Abgrall. Status of multidimensional upwind residual distribution schemes and applications in aeronautics. In Fluids 2000 Conference and Exhibit, page 2328, 2000.

D. C. Del Rey Fernández, J. E. Hicken, and D. W. Zingg. Review of summation-by-parts operators with simultaneous approximation terms for the numerical solution of partial differential equations. Computers & Fluids, 95:171-196, 2014.

G. J. Gassner. A skew-symmetric discontinuous Galerkin spectral element discretization and its relation to SBP-SAT finite difference methods. SIAM Journal on Scientific Computing, 35(3):A1233-A1253, 2013.

G. J. Gassner and D. A. Kopriva. A comparison of the dispersion and dissipation errors of Gauss and Gauss-Lobatto discontinuous Galerkin spectral element methods. SIAM Journal on Scientific Computing, 33(5):2560-2579, 2011.

J. E. Hicken, D. C. Del Rey Fernández, and D. W. Zingg. Multidimensional summation-by-parts operators: general theory and application to simplex elements. SIAM Journal on Scientific Computing, 38(4):A1935-A1958, 2016.

H. Huynh. A flux reconstruction approach to high-order schemes including discontinuous Galerkin methods. AIAA paper 2007-4079, 2007.

H. Huynh, Z. J. Wang, and P. E. Vincent. High-order methods for computational fluid dynamics: a brief review of compact differential formulations on unstructured grids. Computers & Fluids, 98:209-220, 2014.
On the non-linear stability of flux reconstruction schemes. A Jameson, P E Vincent, P Castonguay, Journal of Scientific Computing. 502A. Jameson, P. E. Vincent, and P. Castonguay. On the non-linear stability of flux reconstruction schemes. Journal of Scientific Computing, 50(2):434-445, 2012.
Finite element and finite difference methods for hyperbolic partial differential equations. Mathematical aspects of finite elements in partial differential equations. H.-O Kreiss, G Scherer, H.-O. Kreiss and G. Scherer. Finite element and finite difference methods for hyperbolic partial differ- ential equations. Mathematical aspects of finite elements in partial differential equations, (33):195-212, 1974.
Dealiasing techniques for high-order spectral element methods on regular and irregular grids. G Mengaldo, D De Grazia, D Moxey, P E Vincent, S Sherwin, Journal of Computational Physics. 299G. Mengaldo, D. De Grazia, D. Moxey, P. E. Vincent, and S. Sherwin. Dealiasing techniques for high-order spectral element methods on regular and irregular grids. Journal of Computational Physics, 299:56-81, 2015.
Error boundedness of correction procedure via reconstruction / flux reconstruction. P Öffner, arXiv:1806.01575arXiv preprintSubmittedP.Öffner. Error boundedness of correction procedure via reconstruction / flux reconstruction. arXiv preprint arXiv:1806.01575, 2018. Submitted.
Stability of correction procedure via reconstruction with summation-by-parts operators for burgers' equation using a polynomial chaos approach. P Öffner, J Glaubitz, H Ranocha, arXiv:1703.03561arXiv preprintSubmittedP.Öffner, J. Glaubitz, and H. Ranocha. Stability of correction procedure via reconstruction with summation-by-parts operators for burgers' equation using a polynomial chaos approach. arXiv preprint arXiv:1703.03561, 2017. Submitted.
Generalised summation-by-parts operators and variable coefficients. H Ranocha, Journal of Computational Physics. 362H. Ranocha. Generalised summation-by-parts operators and variable coefficients. Journal of Computa- tional Physics, 362:20-48, 02 2018.
Stability of artificial dissipation and modal filtering for flux reconstruction schemes using summation-by-parts operators. H Ranocha, J Glaubitz, P Öffner, T Sonar, Applied Numerical Mathematics. 128H. Ranocha, J. Glaubitz, P.Öffner, and T. Sonar. Stability of artificial dissipation and modal filtering for flux reconstruction schemes using summation-by-parts operators. Applied Numerical Mathematics, 128:1-23, 02 2018.
Summation-by-parts operators for correction procedure via reconstruction. H Ranocha, P Öffner, T Sonar, Journal of Computational Physics. 311H. Ranocha, P.Öffner, and T. Sonar. Summation-by-parts operators for correction procedure via reconstruction. Journal of Computational Physics, 311:299-328, 2016.
Extended skew-symmetric form for summation-by-parts operators and varying Jacobians. H Ranocha, P Öffner, T Sonar, Journal of Computational Physics. 342H. Ranocha, P.Öffner, and T. Sonar. Extended skew-symmetric form for summation-by-parts operators and varying Jacobians. Journal of Computational Physics, 342:13-28, 2017.
A mixed finite element method for 2-nd order elliptic problems. P.-A Raviart, J.-M Thomas, Mathematical aspects of finite element methods. SpringerP.-A. Raviart and J.-M. Thomas. A mixed finite element method for 2-nd order elliptic problems. In Mathematical aspects of finite element methods, pages 292-315. Springer, 1977.
Application of conservative residual distribution schemes to the solution of the shallow water equations on unstructured meshes. M Ricchiuto, R Abgrall, H Deconinck, Journal of Computational Physics. 2221M. Ricchiuto, R. Abgrall, and H. Deconinck. Application of conservative residual distribution schemes to the solution of the shallow water equations on unstructured meshes. Journal of Computational Physics, 222(1):287-331, 2007.
Characteristic-based schemes for the euler equations. Annual review of fluid mechanics. P L Roe, 18P. L. Roe. Characteristic-based schemes for the euler equations. Annual review of fluid mechanics, 18(1):337-365, 1986.
Fluctuation splitting schemes for the 2D Euler equations. R Struijs, H Deconinck, P Roe, In In its Computational Fluid Dynamics. 94R. Struijs, H. Deconinck, and P. Roe. Fluctuation splitting schemes for the 2D Euler equations. In In its Computational Fluid Dynamics 94 p (SEE N91-32426 24-34), 1991.
Review of summation-by-parts schemes for initial-boundary-value problems. M Svärd, J Nordström, Journal of Computational Physics. 268M. Svärd and J. Nordström. Review of summation-by-parts schemes for initial-boundary-value problems. Journal of Computational Physics, 268:17-38, 2014.
The numerical viscosity of entropy stable schemes for systems of conservation laws. E Tadmor, I. Mathematics of Computation. 49179E. Tadmor. The numerical viscosity of entropy stable schemes for systems of conservation laws. I. Mathematics of Computation, 49(179):91-103, 1987.
Entropy stability theory for difference approximations of nonlinear conservation laws and related time-dependent problems. E Tadmor, Acta Numerica. 12E. Tadmor. Entropy stability theory for difference approximations of nonlinear conservation laws and related time-dependent problems. Acta Numerica, 12:451-512, 2003.
On the properties of energy stable flux reconstruction schemes for implicit large eddy simulation. B Vermeire, P Vincent, Journal of Computational Physics. 327B. Vermeire and P. Vincent. On the properties of energy stable flux reconstruction schemes for implicit large eddy simulation. Journal of Computational Physics, 327:368-388, 2016.
A new class of high-order energy stable flux reconstruction schemes. P E Vincent, P Castonguay, A Jameson, Journal of Scientific Computing. 471P. E. Vincent, P. Castonguay, and A. Jameson. A new class of high-order energy stable flux reconstruc- tion schemes. Journal of Scientific Computing, 47(1):50-72, 2011.
An extended range of stablesymmetric-conservative flux reconstruction correction functions. P E Vincent, A M Farrington, F D Witherden, A Jameson, Computer Methods in Applied Mechanics and Engineering. 296P. E. Vincent, A. M. Farrington, F. D. Witherden, and A. Jameson. An extended range of stable- symmetric-conservative flux reconstruction correction functions. Computer Methods in Applied Me- chanics and Engineering, 296:248-272, 2015.
A review of flux reconstruction or correction procedure via reconstruction method for the Navier-Stokes equations. Z J Wang, H Huynh, Mechanical Engineering Reviews. 31Z. J. Wang and H. Huynh. A review of flux reconstruction or correction procedure via reconstruction method for the Navier-Stokes equations. Mechanical Engineering Reviews, 3(1):15-00475, 2016.
Energy stable flux reconstruction schemes for advection-diffusion problems on tetrahedra. D Williams, A Jameson, Journal of Scientific Computing. 593D. Williams and A. Jameson. Energy stable flux reconstruction schemes for advection-diffusion problems on tetrahedra. Journal of Scientific Computing, 59(3):721-759, 2014.
| zyda_arxiv-1412000 |
Neural Volumetric Blendshapes: Computationally Efficient Physics-Based Facial Blendshapes
January 23, 2023
Nicolas Wagner
Ulrich Schwanecke
Mario Botsch
Computationally weak systems and demanding graphical applications are still mostly dependent on linear blendshape models for facial animations. Artifacts such as self-intersections, loss of volume, or missing soft tissue elasticity are typically avoided through comprehensively designed blendshape rigs. However, hundreds of blendshapes have to be manually created or scanned for high-quality animations, which is very costly to scale to many characters. Non-linear physics-based animation systems provide an alternative approach that avoids most artifacts by construction. Nonetheless, they are cumbersome to implement and require immense computational effort at runtime. We propose neural volumetric blendshapes, a realtime approach for consumer-grade CPUs that combines the advantages of physics-based simulations with handling that is as effortless and fast as that of linear blendshapes. To this end, we present a neural network that efficiently approximates volumetric simulations and generalizes across human identities as well as facial expressions. In the minimal setting, it only requires a single neutral face mesh as input. Along with the design of the network, we introduce a pipeline for the challenging creation of anatomically and physically plausible training data. Part of the pipeline is a novel layered head model that densely positions the biomechanical anatomy within a skin surface while avoiding intersections. The fidelity of all parts of the data generation pipeline as well as the accuracy and efficiency of the network are evaluated in this work. Upon publication, the trained models and associated code will be released.
Introduction
At present, research in the field of head avatars and facial animation is mainly concerned with obtaining photorealistic results through neural networks [13,24,4], which can be operated on computationally rich systems, require comprehensive per-person training data, and need time-consuming individualization. What currently falls short, however, is the inclusion of less capable hardware setups and efficient training pipelines that avoid extensive data collection. For this, various adaptations of linear blendshape models paired with deformation transfer [54,9] and example-based facial rigging [38] are still the usual means in production. Although linear facial models have been intensively researched and improved over the past decades, there are still known shortcomings like physically implausible distortions, loss of volume, anatomically impossible expressions, missing volumetric elasticity, or self-intersections. Physics-based simulations have been proposed that overcome most artifacts of linear blendshapes [27,26,18,16], but they are usually laborious to handle and computationally expensive. Hybrid models that try to combine the best of both worlds are either not sophisticated enough in the quality of the simulated physical properties [6] or still too inefficient to be used on slower devices [26].

Figure 1: a) Exemplary results of our Neural Volumetric Blendshapes (brown) compared to linear blendshapes (blue). Among others, volume preservation, the ability to create more detailed wrinkles, and avoided self-intersections result in more realistic and anatomically plausible facial animations. b) An example fit of the novel layered head model. The model encapsulates the skin, the skull, and the muscles with wraps for which we present a data-driven fitting algorithm. The space between the wraps can be canonically tetrahedralized.
A promising approach in the latter category is physics-based volumetric blendshapes [26]. These can be animated with physical plausibility, anatomical constraints can be taken into account, self-intersections can be prevented, and the control is identical to linear blendshapes. Although the level of detail is slightly sacrificed in comparison to other physics simulations [18,27], volumetric blendshapes still only run at low frame rates. We improve on this approach with neural volumetric blendshapes, which approximate the involved calculations of physical and anatomical constraints with an efficient and lightweight neural network. Thereby, realtime inference of physics-based non-linear blendshapes on consumer-grade hardware becomes possible with only a slight computational overhead compared to linear blendshapes. The principal challenge we solve in this work is the creation of training data for neural volumetric blendshapes that reflects the previously discussed animation advantages and facilitates a generalization across different human identities. Thus, in contrast to other recent work that tries to approximate physics-based facial simulations by neural networks [53,55,16], we avoid time-consuming individualizations for a straightforward deployment. In addition, our facial animation method does not require sequences of optical scans or manually crafted facial animations. Instead, the network trained in this work induces an anatomically plausible deformation transfer such that our system is instantly applicable to neutral head surfaces or on top of arbitrary linear blendshape rigs.
The aforementioned data generation pipeline is unique in that, to the best of our knowledge, there are no other comprehensive datasets that relate a broad range of head shapes in diverse facial expressions to the underlying anatomical and physical characteristics. To curate such a dataset for the first time, we bring together multiple data sources such as CT data to reflect the anatomy of heads, 3D reconstructions of images in the wild to collect diverse head shapes, or facial expressions in the form of recorded blendshape weights from dyadic conversational situations. The result is a dataset of heads with neutral and non-neutral expressions represented by a novel standardized layered head model, relating skin surface displacements to the underlying biomechanical volumetric deformations and transformations of muscles, skull bones, and soft tissue.
The key novelties and contributions we present in this paper can be summarized as follows:
• A novel layered head model (LHM) representing the skin surface as well as the entire biomechanical anatomy.
• A data-driven procedure for fitting the LHM to neutral skin surfaces.
• An inverse physics-based simulation which fits the LHM to skin surfaces of non-neutral facial expressions.
• A novel neural network design that approximates physics-based simulations to efficiently implement neural volumetric blendshapes.
• A pipeline for creating training data of the neural network that among other parts includes the LHM and the associated fitting procedures.
2 Related Work
Personalized Anatomical Models
Algorithms that create personalized anatomical models can essentially be distinguished according to two paradigms: heuristic-based and data-driven. Considering heuristic-based approaches, Anatomy Transfer [2] applies a space warp to a template anatomical structure to fit a target skin surface. The skull and other bones are only deformed by an affine transformation. A similar idea is proposed by Gilles et al. [23]. While they also implement a statistical validation of bone shapes, the statistics are collected from artificially deformed bones. In [26,30], an inverse physics simulation was used to reconstruct anatomical structures from multiple 3D expression scans. Saito et al. [45] simulate the growth of soft tissue, muscles, and bones. A musculoskeletal biomechanical model is fitted from sparse measurements in [48] but not qualitatively evaluated. There are only a few data-driven approaches because combined data sets of surface scans and MRI, CT, or DXA images are hard to obtain for various reasons (e.g. data privacy or unnecessary radiation exposure). The recent work OSSO [32] predicts full body skeletons from 2000 DXA images that do not carry precise 3D information. Further, bones are positioned within a body by predicting only three anchor points per bone group, without avoiding intersections between skin and skull. A model that prevents skin-skull intersections and also considers muscles is based on fitting encapsulating wraps instead of the anatomy itself [34]. However, the wraps are positioned not by an accurate algorithm based on medical imaging but by a BMI (body mass index) regressor [41]. A much more accurate, pure face model was developed by Achenbach et al. [1]. Here, CT scans are combined with optical scans by a multilinear model (MLM) which can map from skulls to faces and vice versa. As before, no self-intersections are prevented and only bones are fitted.
Building on the data from [1] and following the idea of a layered body model [34], we create a statistical layered head model including musculature that avoids self-intersections.
Physics-Based Facial Animation
A variety of techniques for animating faces have been developed in the past [25,12,56,43]. Data-driven models [36,26,37], which have recently been significantly improved by deep learning [57,21,13,20,51,4], are certainly dominant. Due to their simplicity and speed, linear blendshapes [36] are still most commonly used in demanding applications and whenever no computationally rich hardware is available. Physics-based models have been developed for a long time [50] and avoid artifacts like implausible contortions and self-intersections, but due to their complexity and computational effort, they are rarely used. Hybrid approaches add surface-based physics to linear blendshapes for more detailed facial expressions [6,7,16]. However, by construction they cannot model volumetric effects.
The pioneering work of Sifakis et al. [50] is the first fully physics-based facial animation. The simulation is conducted on a personalized tetrahedron mesh, which can only be of a limited resolution due to a necessary dense optimization problem. With Phace [27], this problem was overcome by an improved physics simulation. An art-directed muscle model [17,5,18] additionally represents muscles as B-splines and allows control of expressions via trajectories of spline control points. A solely inverse model for determining the physical properties of faces was proposed in [29]. Neural soft-tissue dynamics [46,14] extend the SMPL (Skinned Multi-Person Linear Model) proposed in [40] with secondary motion. Recently, [53,55,16] adapted neural soft-body dynamics to learn the physical properties of a particular person. However, these approaches must be retrained for new objects and are slow in inference. With volumetric blendshapes [26], a hybrid approach has been presented that combines the structure of linear blendshapes with physical and anatomical plausibility. We extend this work to neural volumetric blendshapes to make physical plausibility realtime capable while maintaining the control structure of standard linear blendshapes.

Figure 2: All components of the layered head model template $\mathcal{T}$: skin $S_T$, skin wrap $\hat S_T$, muscles $M_T$, muscle wrap $\hat M_T$, skull $B_T$, and skull wrap $\hat B_T$. Each wrap is an abstract and simplified representation of a more complex structure.
Method
The foundation of our neural volumetric blendshapes approach (Sections 3.2 and 3.3) is a novel layered head representation (Section 3.1). Starting from there, we design a physics-based facial animation model (Section 3.4) and distill it into a defining dataset (Section 3.5). With this dataset, a neural network that makes the animation model realtime capable is trained.
Layered Head Model
We represent a head $H = \rho_H(\mathcal{T})$ with neutral expression through a component-wise transformation (see Section 3.5 for details) of a layered head model (LHM) template
$$\mathcal{T} = \big(S_T,\, B_T,\, M_T,\, \hat S_T,\, \hat B_T,\, \hat M_T\big) \tag{1}$$
that consists of six triangle meshes. $S_T$ describes the skin surface including the eyes, the mouth cavity, and the tongue, $B_T$ the surface of all skull bones and teeth, and $M_T$ the surface of all muscles and the cartilages of the ears and nose. $\hat S_T$ is the skin layer, i.e. a closed wrap enveloping $S_T$, $\hat B_T$ the skull layer that envelopes $B_T$, and $\hat M_T$ the muscle layer that envelopes $M_T$. Other anatomical structures are omitted for simplicity. The template structures $S_T$, $B_T$, and $M_T$ were designed by an experienced digital artist. The skin, skull, and muscle layers $\hat S_T$, $\hat B_T$, and $\hat M_T$ have the same triangulation and were generated by shrink-wrapping a sphere as close as possible to the corresponding surfaces without intersections. The complete template is shown in Figure 2.
The skull and muscle layers are further massaged such that the quads of all prisms that can be spanned between corresponding faces of the skin, muscle, and skull layer are as rectangular as possible while preserving the original geometries. To that end, we determine for the skull layer
$$\arg\min_{X}\; w_{\mathrm{rect}}\, E_{\mathrm{rect}}\big(X, \hat S_T\big) + w_{\mathrm{dist}}\, E_{\mathrm{dist}}\big(X, \hat B_T\big) \tag{2}$$
within the efficient projective dynamics [11] optimization framework, initializing with $X = \hat B_T$. Here, $E_{\mathrm{dist}}$ is the two-sided Hausdorff distance to the non-massaged shape and
$$E_{\mathrm{rect}}\big(X, \hat S_T\big) = \sum_{(x_i^0,\, x_i^1) \in X} \big(90^\circ - \angle\, x_i^0 x_i^1 \hat s_i^1\big)^2 + \big(90^\circ - \angle\, x_i^1 x_i^0 \hat s_i^0\big)^2 + \big(90^\circ - \angle\, \hat s_i^0 \hat s_i^1 x_i^1\big)^2 + \big(90^\circ - \angle\, \hat s_i^1 \hat s_i^0 x_i^0\big)^2 \tag{3}$$
induces the rectangular prism shapes. After the optimization we set $\hat B_T = X$. The same optimization is run for the muscle layer.
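Eq. (3) penalizes the deviation of the four interior angles of each prism quad from 90°. The following is a minimal numpy sketch for a single quad; the helper names and the sample points are illustrative assumptions, not part of the paper's implementation.

```python
import numpy as np

def angle_deg(apex, a, b):
    """Angle at `apex` between the rays apex->a and apex->b, in degrees."""
    u, v = a - apex, b - apex
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def rect_energy(x0, x1, s0, s1):
    """Squared deviations from 90 deg of the four interior angles of the
    quad (x0, x1, s1, s0), following the structure of Eq. (3)."""
    return ((90.0 - angle_deg(x1, x0, s1)) ** 2 +
            (90.0 - angle_deg(x0, x1, s0)) ** 2 +
            (90.0 - angle_deg(s1, s0, x1)) ** 2 +
            (90.0 - angle_deg(s0, s1, x0)) ** 2)

# a perfect rectangle has (numerically) zero energy
x0, x1 = np.array([0., 0., 0.]), np.array([1., 0., 0.])
s0, s1 = np.array([0., 1., 0.]), np.array([1., 1., 0.])
print(rect_energy(x0, x1, s0, s1))  # ≈ 0.0
```

Summing this term over all prism quads of the layer gives the full $E_{\mathrm{rect}}$; sheared prisms produce large positive energies and are pulled toward rectangular shapes by the optimizer.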
The wrapping layers of the LHM offer at least two significant advantages. On the one hand, they provide a simplified representation of the skin surface, the musculature, and the skull, which we exploit in Section 3.5 for determining the LHM template transformation $\rho_H$. On the other hand, they can be used for a topologically and semantically consistent tetrahedralization of the head volume by canonically splitting the prisms between the layers into tetrahedra.
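The canonical prism-to-tetrahedra split can be sketched as follows. The diagonal pattern below is one standard choice, not necessarily the authors' exact one; a conforming mesh additionally needs consistent diagonals across neighboring prisms.

```python
import numpy as np

def split_prism(bottom, top):
    """Split a triangular prism into 3 tetrahedra. `bottom` and `top` are
    index triples of corresponding triangle corners on the two layers."""
    b0, b1, b2 = bottom
    t0, t1, t2 = top
    return [(b0, b1, b2, t0), (b1, b2, t0, t1), (b2, t0, t1, t2)]

def tet_volume(p0, p1, p2, p3):
    return abs(np.dot(np.cross(p1 - p0, p2 - p0), p3 - p0)) / 6.0

# sanity check: the 3 tets exactly fill a unit-height prism
verts = {0: np.array([0., 0., 0.]), 1: np.array([1., 0., 0.]),
         2: np.array([0., 1., 0.]), 3: np.array([0., 0., 1.]),
         4: np.array([1., 0., 1.]), 5: np.array([0., 1., 1.])}
tets = split_prism((0, 1, 2), (3, 4, 5))
vol = sum(tet_volume(*(verts[i] for i in t)) for t in tets)
print(vol)  # ≈ 0.5 (triangle area 0.5 times height 1)
```

Because all six corners are prism vertices and the prism is convex, each tetrahedron lies inside the prism, and the volumes summing to the prism volume confirms a non-overlapping partition.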
More precisely, the construction of the LHM also defines a soft tissue tetrahedron mesh $\mathcal{S}_T$ (i.e. between the skin and the muscle layer) and a muscle tissue tetrahedron mesh $\mathcal{M}_T$ (i.e. between the muscle and the skull layer). The massage of the muscle and skull layers ensures nicely shaped tets with minimal shearing. Further, $\mathcal{S}_T$ can be fine-tuned by removing all vertices and connected tets outside of $S_T$, adding the vertices of $S_T$, and connecting them via Delaunay tetrahedralization. The complexities of all template components are given in Table 1.
Linear & Volumetric Blendshapes
Building on the LHM representation, we can now introduce neural volumetric blendshapes. For this, the classical concept of linear blendshapes is reviewed first. Thereupon, we derive volumetric blendshapes and the involved physics-based simulations that are in general not real time capable. Finally, we introduce neural volumetric blendshapes (Section 3.3) as an efficient and fast alternative.
A linear blendshape model for a head $H$ consists of a set of expression surfaces
$$\big\{S_H^i\big\}_{i=1}^{n} \tag{4}$$
and determines an unknown facial expression $S_H^{\mathrm{exp}}$ as the linear interpolation
$$S_H^{\mathrm{exp}} = \sum_i w_{\mathrm{exp}}^i\, S_H^i. \tag{5}$$
The $w_{\mathrm{exp}}^i$ are the blending weights and determine the share of each blendshape in the expression.
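Eqs. (4) and (5) amount to a weighted sum of vertex arrays. A minimal numpy sketch with toy sizes and random stand-in data (real rigs use thousands of vertices and dozens of shapes):

```python
import numpy as np

rng = np.random.default_rng(42)
n_shapes, n_verts = 3, 4
shapes = rng.normal(size=(n_shapes, n_verts, 3))   # {S_H^i}, Eq. (4)

def blend(shapes, w):
    """Linear interpolation of Eq. (5): S_H^exp = sum_i w_i * S_H^i."""
    return np.einsum("i,ivk->vk", w, shapes)

# putting all weight on one blendshape reproduces it exactly
assert np.allclose(blend(shapes, np.array([1.0, 0.0, 0.0])), shapes[0])

# a convex combination of the shapes
S_exp = blend(shapes, np.array([0.5, 0.3, 0.2]))
print(S_exp.shape)  # (4, 3)
```

In practice, rigs often store the shapes as per-vertex offsets from the neutral face; that delta formulation is algebraically equivalent for convex weights.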
The corresponding set of volumetric blendshapes can be defined as
$$\big\{V_H^i\big\}_{i=1}^{n} = \big\{\big(\nabla \mathcal{S}_H^i,\, \nabla \mathcal{M}_H^i,\, B_H^i\big)\big\}_{i=1}^{n}, \tag{6}$$
where $\nabla \mathcal{S}_H^i$ and $\nabla \mathcal{M}_H^i$ describe soft and muscle tissue deformations as vectors of $3 \times 3$ deformation gradients, and $B_H^i$ describes the skull after rigid motion of jaw and cranium. An anatomically plausible inverse physics model $\phi^\dagger$ (see Section 3.4 for details) relates the linear rig to the volumetric rig as
$$V_H^i = \phi^\dagger\big(S_H^i,\, S_H,\, M_H,\, B_H\big). \tag{7}$$
In words, φ † determines the biomechanical volumetric deformations of H that cause the skin surface of a facial expression. Vice versa, a forward physics model φ acts as a left inverse to φ † and maps volumetric deformations back to surface deformations as
$$\tilde S_H^i = \phi\big(V_H^i\big). \tag{8}$$
In reality, $S_H^i$ is often not an anatomically legal facial expression with respect to $\phi^\dagger$. As a consequence, $\phi^\dagger$ is built such that $\tilde S_H^i$ is an anatomically reachable expression which, in the Euclidean sense, is close but not necessarily equal to $S_H^i$. The major advantage of a volumetric blendshape model is that unknown facial expressions calculated as
$$\tilde S_H^{\mathrm{exp}} = \phi\Big(\bigoplus_i w_{\mathrm{exp}}^i \otimes V_H^i\Big) \tag{9}$$
can be shaped anatomically more plausible and thus more realistic, provided a suitable choice of inter- and extrapolation functions $\oplus$ and $\otimes$. At this point, the essential requirement on $\oplus$ and $\otimes$ is the biologically necessary volume preservation of the deformation gradients. The common approach would be to separately interpolate the stretch and the rotation components of the deformation gradients [26,49] and the positions of the skull bones. Considering the stretch components, volume-preserving interpolation methods [3,28] have been proposed. Considering the rotation components, quaternion interpolation satisfies the volume preservation by construction. However, to the best of our knowledge, it is yet to be discussed how both components should be extrapolated for facial animations if the blending weights do not form a convex combination. In this work, we therefore use a novel hybrid approach that calculates facial expressions as
$$\tilde S_H^{\mathrm{exp}} = \phi\Big(\phi^\dagger\Big(\sum_i w_{\mathrm{exp}}^i S_H^i,\; S_H,\, M_H,\, B_H\Big)\Big). \tag{10}$$
This way, the inter- and extrapolation capabilities of linear surface blendshapes can be used while maintaining the advantages of anatomical plausibility through $\phi^\dagger$.
Neural Volumetric Blendshapes
Regardless of the construction of $\oplus$ and $\otimes$, the calculation of facial expressions is in general not realtime-capable for high-resolution volumetric blendshapes and sophisticated physics models $\phi$ and $\phi^\dagger$. We therefore present the neural volumetric blendshape model that, like the linear blendshape model (4), consists of a set of (not necessarily anatomically plausible) expression surfaces $\{S_H^i\}_{i=1}^n$. However, unknown anatomically plausible expressions are formed by a neural network $f$ that is trained such that
$$\tilde S_H^{\mathrm{exp}} - \sum_i w_{\mathrm{exp}}^i S_H^i \;\approx\; f\Big(\sum_i w_{\mathrm{exp}}^i S_H^i - S_H,\; S_H\Big). \tag{11}$$
Considering Equation (10), which is approximated by $f$, it seems reasonable to use the same inputs and only learn the evaluation of $\phi(\phi^\dagger(\cdot))$. However, the accompanying tetrahedron meshes are significantly higher-dimensional than the corresponding surface meshes and $f$ would be severely slowed down in inference. We therefore expect $f$ to implicitly learn the linking between linear and volumetric blendshapes as shown in Equation (7) as well as the fitting of the LHM. Since we demonstrate in Section 3.5 how the LHM can be fitted given only the neutral surface, and the linking does not need any further inputs, the neutral surface carries sufficient information to omit the tetrahedron meshes. For the second input, the difference vector, the blending weights or the linear interpolation result could alternatively be inserted. Inputting only the weights, however, would considerably limit the flexibility of $f$ because it does not allow the underlying surface blendshapes to be changed after training. On the other hand, inputting the linear interpolation result exhibits disadvantages at training time: the target deltas are mostly Gaussian distributed and can therefore be learned more easily by a neural network [39]. In the same spirit, the output of $f$ is chosen to be the difference to the anatomically plausible expression.
We evaluated alternatives to fully connected networks such as set transformers [35], convolutional networks on geometry images, graph neural networks [47], or implicit architectures [42], but all have exhibited substantially slower inference speeds while reaching a similar accuracy.
Our design of $f$ is not only fast, i.e., it runs in only 8 ms per frame on a consumer-grade Intel i5 12600K, but it is also straightforward to deploy. A single neutral surface of a head, to which deformation transfer [9] can be applied, is sufficient. Building on this, an anatomically plausible animation can be performed with $f$ as shown in Figure 3. Further, no volumetric information has to be processed in complex simulation frameworks. Thereby, and because only simple fully connected layers are used, the model is portable and easy to use on many different (computationally weak) devices.
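As a rough illustration of such a lightweight fully connected network, the forward pass below consumes the flattened linear delta and neutral surface and outputs per-vertex corrections, mirroring Eq. (11). The layer sizes, initialization, and depth are illustrative assumptions, not the authors' architecture, and the weights here are untrained.

```python
import numpy as np

def mlp(x, params):
    """Forward pass of a plain fully connected net with ReLU activations."""
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = params[-1]
    return x @ W + b

rng = np.random.default_rng(0)
n_verts = 64                      # toy resolution; real heads are larger
d_in = 2 * 3 * n_verts            # flattened (linear delta, neutral surface)
d_out = 3 * n_verts               # per-vertex correction offsets
layers = [d_in, 128, 128, d_out]
params = [(rng.normal(scale=0.05, size=(m, n)), np.zeros(n))
          for m, n in zip(layers[:-1], layers[1:])]

delta_lin = rng.normal(size=3 * n_verts)   # sum_i w_i S_H^i - S_H
neutral = rng.normal(size=3 * n_verts)     # S_H
correction = mlp(np.concatenate([delta_lin, neutral]), params)
S_exp = neutral + delta_lin + correction   # Eq. (11) rearranged
print(S_exp.shape)  # (192,)
```

Because the net is a handful of dense matrix products, a per-frame cost in the millisecond range on a desktop CPU is plausible at these dimensionalities.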
Next, the construction of $\phi$ and $\phi^\dagger$ is described; subsequently, a pipeline for creating a dataset to learn $f$ is laid out.
Physics-Based Simulations
We realize the anatomically plausible inverse physics $\phi^\dagger$ and the left inverse $\phi$ as projective dynamics energies $E_{\phi^\dagger}$ and $E_\phi$, respectively. Starting with $\phi^\dagger$, separate terms for soft tissue, muscle tissue, the skin, the skull, and auxiliary components are applied. Considering the soft tissue $\mathcal{S}_H$, we closely follow the model of [27] and impose
$$E_{\mathcal{S}_H} = w_{\mathrm{vol}} \sum_{t \in \mathcal{S}_H} E_{\mathrm{vol}}(t) + w_{\mathrm{str}} \sum_{t \in \mathcal{S}_H} E_{\mathrm{str}}(t), \tag{12}$$
which for each tetrahedron $t$ penalizes changes of volume
$$E_{\mathrm{vol}}(t) = \big(\det(F(t)) - 1\big)^2 \tag{13}$$
and strain
$$E_{\mathrm{str}}(t) = \min_{R \in SO(3)} \|F(t) - R\|_F^2. \tag{14}$$
$F(t)$ denotes the deformation gradient of a tetrahedron $t$, $R \in SO(3)$ the optimal rotation, and $\|\cdot\|_F^2$ the squared Frobenius norm.
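Eqs. (13) and (14) can be evaluated per tetrahedron as follows. The closest rotation in Eq. (14) is the standard orthogonal Procrustes solution via SVD (with a reflection fix to stay in $SO(3)$); this is a sketch under that standard choice, not the authors' implementation.

```python
import numpy as np

def deformation_gradient(rest, deformed):
    """F(t) maps the rest-pose edge vectors of a tetrahedron (4x3 vertex
    arrays) to the deformed ones."""
    Dr = (rest[1:] - rest[0]).T
    Dd = (deformed[1:] - deformed[0]).T
    return Dd @ np.linalg.inv(Dr)

def e_vol(F):
    """Eq. (13): squared deviation of the volume change from 1."""
    return (np.linalg.det(F) - 1.0) ** 2

def e_str(F):
    """Eq. (14): squared Frobenius distance to the closest rotation."""
    U, _, Vt = np.linalg.svd(F)
    if np.linalg.det(U @ Vt) < 0.0:   # avoid reflections, keep R in SO(3)
        U[:, -1] *= -1.0
    R = U @ Vt
    return np.linalg.norm(F - R, "fro") ** 2

rest = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
stretched = rest * np.array([2.0, 1.0, 1.0])   # double the x extent
F = deformation_gradient(rest, stretched)      # = diag(2, 1, 1)
print(e_vol(F), e_str(F))                      # both ≈ 1.0
```

A pure rotation yields zero for both terms, so rigid motion is not penalized, while the stretch above is charged by both the volume and the strain energy.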
To reflect the biological structure of the skin, we additionally formulate a dedicated strain energy
$$E_{S_H} = \sum_{t \in S_H} E_{\mathrm{str}}(t) \tag{15}$$
on each triangle $t$ of the skin. For the muscle tets $\mathcal{M}_H$, we follow the arguments of [29] that capturing fiber directions for tetrahedralized muscles is in general too restrictive. Hence, only a volume-preservation term
$$E_{\mathcal{M}_H} = w_{\mathrm{vol}} \sum_{t \in \mathcal{M}_H} E_{\mathrm{vol}}(t) \tag{16}$$
is applied. The weight $w_{\mathrm{vol}}$ is set sufficiently high such that the volume preservation is almost a hard constraint.
The skull is not tetrahedralized as it is assumed to be non-deformable even though it is rigidly movable. The non-deformability of the skull is represented by
$$E_{B_H} = \sum_{t \in B_H} E_{\mathrm{str}}(t) + \sum_{x \in B_H} E_{\mathrm{curv}}(x, B_H), \tag{17}$$
i.e. a strain $E_{\mathrm{str}}$ on the triangles and a mean curvature regularization
$$E_{\mathrm{curv}}(x, B_H) = A_x\, \|\Delta x - R\, \Delta b_x\|^2 \tag{18}$$
on the vertices of the skull. The matrix $R \in SO(3)$ denotes the optimal rotation keeping the vertex Laplacian $\Delta x$ as close as possible to its initial value $\Delta b_x$. The vertex Laplacian is discretized using the cotangent weights and the Voronoi areas $A_x$ [10]. We do not model the non-deformability as a rigidity constraint due to the significantly higher computational burden. Like for the volume constraints, the non-deformability of the skull is weighted to be an almost hard constraint.
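The paper does not spell out how the optimal $R$ in Eq. (18) is obtained. The sketch below assumes the simplest reading, a per-vertex rotation aligning the rest-pose Laplacian with the current one, so that only changes in Laplacian magnitude (i.e. in mean curvature) are penalized; function names and sample values are hypothetical.

```python
import numpy as np

def align_rotation(src, dst):
    """Rotation matrix turning the direction of `src` into that of `dst`
    (Rodrigues' formula; assumes the vectors are not anti-parallel)."""
    u = src / np.linalg.norm(src)
    v = dst / np.linalg.norm(dst)
    k = np.cross(u, v)
    c = np.dot(u, v)
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def e_curv(lap_x, lap_b, area):
    """Eq. (18) under the stated assumption on R: residual is the change
    in Laplacian magnitude, weighted by the Voronoi area A_x."""
    R = align_rotation(lap_b, lap_x)
    return area * np.linalg.norm(lap_x - R @ lap_b) ** 2

lap_b = np.array([0.0, 0.0, 0.3])        # rest-pose vertex Laplacian
lap_x = np.array([0.3, 0.0, 0.0])        # rotated, same magnitude
print(e_curv(lap_x, lap_b, area=1.0))    # ≈ 0: a pure rotation is free
```

The cotangent Laplacians and Voronoi areas themselves would come from the mesh discretization [10] and are taken as given here.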
To connect the muscle tets as well as the eyes to the skull, connecting tets are introduced. For the muscle tets, each skull vertex connects to the closest three vertices in $\mathcal{M}_H$ to form a connecting tet. For the eyes, connecting tets are formed by connecting each eye vertex to the three closest vertices in $B_H$. On these connecting tets, the energy $E_{\mathrm{con}}$ with the same constraints as in Equation (12) is imposed as almost hard constraints. By this design, the jaw and the cranium are moved independently of each other through muscle activations, while the eyes remain rigid and move only with the cranium.
Finally, the energy

E_inv = Σ_{x ∈ S_H} E_tar(x, S_H^exp)    (19)

of soft Dirichlet constraints

E_tar(x, S_H^exp) = ‖x − s_x‖²,    (20)

is added, attracting each vertex x of the skin surface S_H to the corresponding vertex s_x from the target expression S_H^exp. As previously mentioned, the expression S_H^exp might be anatomically implausible. As a countermeasure, we can impose a maximum strain by balancing w_str with w_tar. Thus, together with the almost hard constraints and by the construction of projective dynamics, φ† always results in a plausible expression S̃_H^exp close to the target. The weighted sum of the aforementioned energies gives the total energy

E_{φ†} = w_{S_H} E_{S_H} + w_{M_H} E_{M_H} + w_{B_H} E_{B_H} + w_{Ŝ_H} E_{Ŝ_H} + w_con E_con + w_inv E_inv    (21)
of the backward model φ†. The forward model φ is considerably simpler in structure and is realized as the energy

E_φ = Σ_{t ∈ S_H} E_dg(t, ∇S_H^exp) + Σ_{t ∈ M_H} E_dg(t, ∇M_H^exp) + w_tar Σ_{x ∈ B_H} E_tar(x, B_H^exp) + w_con E_con.    (22)

The deformation-gradient energy

E_dg(t, ∇T) = ‖F(t) − R T_t‖²_F    (23)

is similar to Equation (14) and attracts the deformation of each tetrahedron t to a corresponding target deformation gradient T_t ∈ ∇T. For both models, we resolve self-intersections between colliding lips or teeth in a subsequent projective dynamics update. In this second update, colliding vertices are resolved as in [33]. The distinctive feature here is that no collision gaps can occur after resolving the self-intersections.
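The optimal rotation R appearing in Equations (14), (18), and (23) is the classic orthogonal Procrustes solution, computable via an SVD; the sketch below assumes the standard shape-matching choice of R, which the text does not spell out:

```python
import numpy as np

def best_rotation(A):
    # Rotation R in SO(3) maximizing tr(R^T A), i.e. the closest rotation
    # to A in the Frobenius norm (orthogonal Procrustes via SVD).
    U, _, Vt = np.linalg.svd(A)
    if np.linalg.det(U @ Vt) < 0.0:   # flip one axis to avoid a reflection
        U[:, -1] *= -1.0
    return U @ Vt

def e_dg(F, T):
    # Equation (23)-style energy ||F - R T||_F^2, with R the rotation that
    # best aligns the target gradient T with F (a standard shape-matching
    # choice; assumed here, not quoted from the paper).
    R = best_rotation(F @ T.T)
    return np.linalg.norm(F - R @ T, ord='fro') ** 2
```

When F is itself a pure rotation and T is the identity, the energy vanishes, as the optimal R recovers F exactly.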
Generation of Training Data
To approximate Equation (10) with f, a defining training dataset is required in the first place. By the construction of f, this dataset must consist of instances that relate diverse facial expressions created through linear blendshapes to the corresponding anatomically plausible surfaces. It must also cover a variety of distinct head shapes so that f becomes as generally applicable as φ and φ†.
In the following, we describe a pipeline for creating instances of such a dataset, which can roughly be divided into two high-level steps. First, in order to evaluate Equation (10) on a reasonable domain, it is necessary to model a head H from an extensive head model and determine the corresponding neutral soft and muscle tissue tetrahedron meshes S_H and M_H. Second, S_H has to be deformed to an expression S_H^exp and mapped to the anatomically plausible S̃_H^exp. A more detailed algorithmic description is given in Algorithm 1.
Sampling Head Shapes We start the first part of the pipeline by randomly drawing a neutral skin surface S H from DECA [20], one of the most comprehensive high-resolution face models currently available. More specifically, we randomly draw an image from the Flickr-Faces-HQ [31] dataset and let DECA determine the corresponding neutral head shape. Further, a precomputed mapping is applied to adapt the DECA topology to our template.
Fitting the LHM Next, the template LHM_T is aligned with the skin surface S_H by finding ρ_H, which maps each of the template components individually. For this, we rely on a hybrid approach that is largely data-driven but also based on heuristics that ensure anatomic plausibility and avoid intersections.
As the first of the remaining five template meshes, we fit the skin layer by setting

Ŝ_H = rbf_{S_T → S_H}(Ŝ_T).    (24)

The RBF function is a space warp based on triharmonic radial basis functions [8]; it is computed to displace the template skin surface S_T onto the target S_H and is then applied to the template skin layer. By construction, the skin layer is warped semantically consistently and stays close to the targeted skin surface. Next, we fit the skull layer B̂_H by invoking a linear regressor D that predicts the distances from the vertices of Ŝ_H to the corresponding vertices of B̂_H, and by subsequently minimizing with projective dynamics

arg min_X  w_rect E_rect(X, Ŝ_T) + w_dist2 E_dist2(X, Ŝ_H, D(Ŝ_H)) + w_curv E_curv(X, B̂_T).    (25)
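The triharmonic RBF space warp of Equation (24) can be sketched in a simplified form; the full method of [8] additionally carries an affine/polynomial term, which is omitted here:

```python
import numpy as np

def fit_rbf_warp(src, dst):
    # Triharmonic kernel phi(r) = r^3 in 3D; solves for per-center weights
    # so that the warp maps each source landmark onto its target.
    # Simplified sketch without the affine term of the full method.
    r = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
    K = r ** 3
    W = np.linalg.solve(K + 1e-9 * np.eye(len(src)), dst - src)
    return src, W

def apply_rbf_warp(points, centers, W):
    # Evaluate the warp at arbitrary points (e.g. the template skin layer).
    r = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return points + (r ** 3) @ W
```

Because the weights are solved from the landmark constraints, the warp reproduces the targets exactly at the landmarks and interpolates smoothly in between, which is why the warped skin layer sticks close to the targeted skin surface.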
Here,

E_dist2(X, Ŝ_H, D(Ŝ_H)) = Σ_{x ∈ X} (‖x − s_x‖² − d_x²)²    (26)

ensures that for each vertex x ∈ X the predicted distance d_x ∈ D(Ŝ_H) is adhered to. Apart from E_dist2, the same regularizing terms as in Equations (3) and (18) are used. The optimization is initialized with X = Ŝ_H − D(Ŝ_H) · n(Ŝ_H), where n(Ŝ_H) are area-weighted vertex normals. D is trained on the dataset of [22] (SKULLS), which relates MRI skull measurements to skin surface scans. To ease the learning task, we learn the regressor between PCAs of the skin layers and the skin-to-skull-layer distances. Predicted lengths are set to a minimum value if they fall below a threshold, thus avoiding skin-skull intersections and numerical issues in downstream physics-based simulations. Figure 4a visualizes the training of the linear regressor.
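The regression between PCA coefficients, including the minimum-distance clamping, might look as follows; the dimensions, the number of components k, and all names are illustrative, not the paper's implementation:

```python
import numpy as np

def pca_fit(X, k):
    # PCA via SVD: returns the mean and the top-k principal directions.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def fit_distance_regressor(skins, dists, k=3):
    # Linear map between PCA coefficients of the skin layers and PCA
    # coefficients of the skin-to-skull-layer distances.
    mu_s, Ps = pca_fit(skins, k)
    mu_d, Pd = pca_fit(dists, k)
    A = (skins - mu_s) @ Ps.T          # skin PCA coefficients
    B = (dists - mu_d) @ Pd.T          # distance PCA coefficients
    M, *_ = np.linalg.lstsq(A, B, rcond=None)
    return mu_s, Ps, mu_d, Pd, M

def predict_distances(skin, model, d_min=1e-3):
    mu_s, Ps, mu_d, Pd, M = model
    d = mu_d + (((skin - mu_s) @ Ps.T) @ M) @ Pd
    return np.maximum(d, d_min)        # clamp: avoids skin-skull intersections
```

Working in the two PCA subspaces turns the vertex-wise regression into a small, well-conditioned least-squares problem, which is exactly what makes the learning task on only 43 SKULLS instances feasible.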
The muscle layer M̂_H is fitted by positioning its vertices at the same absolute distances between the corresponding skin and skull layer vertices as in the template, passing on only a small relative amount w_rel of the distance changes compared to the template. This approach assumes that the muscle mass in the facial area is only moderately affected by body weight and skull size.
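One possible reading of this placement heuristic, with gap_template, d_template, and w_rel as our own illustrative names:

```python
import numpy as np

def place_muscle_vertex(skin_v, skull_v, gap_template, d_template, w_rel=0.1):
    # Places a muscle-layer vertex on the segment from the skin-layer vertex
    # towards the skull-layer vertex. The template's absolute skin-to-muscle
    # distance d_template is kept, and only a fraction w_rel of the change in
    # the skin-skull gap is passed on.
    direction = skull_v - skin_v
    gap = np.linalg.norm(direction)
    d = d_template + w_rel * (gap - gap_template)
    return skin_v + direction / gap * d
```

With w_rel = 0, the muscle layer would keep the template's skin-to-muscle distance exactly; larger w_rel lets fatter or larger heads shift the muscle layer proportionally.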
The skull mesh is placed by setting

B_H = rbf_{B̂_T → B̂_H}(B_T).    (27)

The properties of the RBF space warp ensure that the skull mesh remains within the skull layer if the layer is of sufficient resolution. The muscle mesh could be placed in a similar fashion but is not needed any further in our pipeline. Finally, the soft and muscle tissue tetrahedron meshes S_H and M_H can be constructed as described in Section 3.1. On average, the complete fitting pipeline takes only about 500 ms for one instance on an Intel i5 12600K processor. Figure 4b visualizes the overall fitting process.
Experiments
Before demonstrating the accuracy and efficiency (Section 4.3) of neural volumetric blendshapes, we first evaluate the fitting precision of the LHM (Section 4.1) as well as the quality of the proposed physical models (Section 4.2).
LHM Fitting
The fitting of the LHM is mainly composed of the data-driven positioning of the skull layer and the subsequent heuristic fitting of the muscle layer. We evaluate the crucial fitting of the skull layer on the SKULLS [22] dataset. Since this dataset consists of only 43 instances, a leave-one-out validation is performed in which the vertex-wise L2 errors are measured. The results are compared to the multilinear model (MLM) as originally used for SKULLS in [1].

Figure 5: The per-vertex mean L2 error of our LHM fitting algorithm in a leave-one-out validation on SKULLS [22]. Larger errors only appear in the back part of the face and have only a minor impact on facial animations.

Both models cannot achieve a medical-grade positioning, with errors between approximately 2 mm and 4 mm. The MLM achieves a higher precision, with a mean error of 1.98 mm, than our approach, which mispositions the skull by 3.83 mm on average. However, the MLM cannot prevent collisions that might crash physics-based simulations. Also, our fitting algorithm produces large errors only in regions that are of less importance for facial simulations, as can be seen in Figure 5. The errors are predominantly distributed in the back area of the skull, since here the rectangular constraints of our fitting procedure can presumably no longer be aligned well with the skin layer. The following section demonstrates in downstream physics-based simulations that the prediction quality in the frontal face region is adequate for detailed facial animations. Figure 6 displays fitting examples.
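The leave-one-out protocol can be sketched generically; the `fit` callable stands in for the skull-layer fitting procedure and is an assumption of this sketch:

```python
import numpy as np

def leave_one_out_errors(instances, fit):
    # Per-vertex mean L2 error over a leave-one-out validation.
    # `instances` is a list of (skin, skull_gt) pairs; `fit(train, skin)`
    # is any fitting procedure returning a predicted skull layer with the
    # same vertex count as skull_gt (signature is illustrative).
    per_instance = []
    for i, (skin, skull_gt) in enumerate(instances):
        train = instances[:i] + instances[i + 1:]
        pred = fit(train, skin)
        per_instance.append(np.linalg.norm(pred - skull_gt, axis=1))
    return np.mean(per_instance, axis=0)  # mean over instances, per vertex
```

Averaging the per-vertex errors over all held-out instances yields exactly the kind of per-vertex heat map shown in Figure 5.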
Table 2: The weights used to implement φ, φ†, and the LHM fitting.

w_{S_H}  w_{Ŝ_H}  w_str  w_vol  w_con  w_inv
1        10       10²    10³    10³    10

w_{M_H}  w_{B_H}  w_curv  w_rect  w_dist  w_dist2
1        10       0.1     1.0     10.0    10.0
Physics-Based Simulation
To investigate the quality of φ† and φ, we perform an experiment on nearly 300 high-resolution optical facial expression scans from the proprietary 3DScanstore 2 dataset. First, we fit the LHM to each of the scans using φ† and store only the resulting muscle deformation gradients and bone movements. Then, in turn, we apply these to the neutral scans of the corresponding individuals through φ and simulate the soft tissue as φ† does. Only if our physics models and the preceding LHM fitting operate with reasonable precision do we expect a low L2 loss between the original and the simulated expression, as well as visually appealing results.

Across all expression scans, we observe an average L2 reconstruction error of only 1.6 mm. The reproduction quality of the physical models is also reflected in the visual results of Figure 7. Further, the accuracy of the LHM fitting investigated in the previous section is again underlined. Additionally, we inspected expression retargeting by applying the extracted muscle and bone transformations to other identities; visually appealing examples can likewise be found in Figure 7. The weights used to realize φ, φ†, and the LHM fitting in this and the following experiments are stated in Table 2.
Neural Volumetric Blendshapes
To train and evaluate f, we assemble a dataset of 50000 training and test instances using the pipeline from Section 3.5. For this, we sample 10000 different head shapes and compute 5 different expressions per head. For training, the Adam optimizer performs 200k update steps with a learning rate of 0.0001; the learning rate is linearly decreased to 0.00005 over the course of training, and a batch size of 128 is used. In total, these specifications result in an approximate training runtime of 8 hours on an NVIDIA A6000. The comparatively short training time is explained by the training data being less noisy than is usual for image-based deep learning models and by the efficient network design. Of the identities, 90% are used for training and 10% for testing.

First, we evaluate the efficiency of f. The initial runtime of φ and φ† in Equation (10) with 6 global projective dynamics updates as in [26] is about 950 ms on an AMD Threadripper Pro 3995WX workstation with 128 cores (implemented with ShapeOp [19]). Our implementation of f takes only 8 ms on a consumer-grade Intel i5 12600K (implemented with PyTorch, https://pytorch.org). Moreover, a forward pass on an NVIDIA RTX 3090 is calculated in less than one millisecond. Thus, neural volumetric blendshapes are suitable for real-time applications on weaker hardware setups and also offer advantages when many facial animations are to be executed in parallel on a GPU.
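The stated optimization schedule can be sketched as a per-step linear decay (the decay granularity is our assumption; the text only gives the endpoints):

```python
def learning_rate(step, total_steps=200_000, lr_start=1e-4, lr_end=5e-5):
    # Linear decay from 0.0001 to 0.00005 over 200k Adam steps (Section 4.3).
    t = min(step / total_steps, 1.0)
    return lr_start + t * (lr_end - lr_start)
```

Such a schedule would typically be handed to the optimizer once per update step, e.g. by setting the Adam learning rate to `learning_rate(step)` before each of the 200k batches of size 128.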
The time advantage alone is not useful if f is not an adequate approximation. We evaluate the approximation quality by measuring the mean L2 error on the test dataset. More precisely, we randomly generate 5 training-test splits of the generated dataset and report the average of the test losses after training on each training set. We achieve an average test error of only 0.13 mm, meaning that we can successfully achieve the approximation target of Equation (11) and that f generalizes well across head shapes and facial expressions. Another open question is the temporal consistency of f, which is inherent in physics-based simulations. To this end, we invite the reader to watch the attached video in the supplementary material. Finally, visual results of neural volumetric blendshapes as shown in Figure 8 exhibit the same improvements towards more realistic facial expressions as the much slower physics-based simulations before. These include the resolution of self-intersections, the simulation of wrinkles, the adherence to anatomical boundaries, and the realism-enhancing constraining of volume preservation. Close-up images of resolved self-intersections can be seen in Figure 9. The only other approach that, to the best of our knowledge, can animate detailed faces at this speed, requires only a neutral surface as input, and allows for semantically consistent control are linear blendshapes paired with a variant of deformation transfer [9, 39, 15]. As can be seen in Figure 8, neural volumetric blendshapes produce far more immersive facial animations.

Conclusion

In this work, we presented neural volumetric blendshapes, an efficient realization of physics-based facial animation even on consumer-grade hardware. Our approach yields more realistic results than the commonly used linear blendshapes, since artifacts such as volume loss, self-intersections, or anatomical flaws are avoided and effects like volumetric elasticity and wrinkles are added. It is also convenient to use, since existing surface-based blendshape rigs can be extended into anatomically and physically plausible animations with almost no integration overhead while keeping the actuation as before. Besides resolving self-collisions, the physical and anatomical constraints anchored in our approach lead to more realistic and immersive expressions while being computationally efficient. We aim to improve neural volumetric blendshapes in at least two directions. On the one hand, with an even more accurate anatomical model that represents, e.g., trachea and esophagus more precisely. On the other hand, recent results [44] show that contact deformations can also be learned efficiently. Since people touch their faces dozens of times a day [52], adding contact handling for more realistic gestures may improve the immersion significantly.

… S_H, B_H, M_H, Ŝ_H, B̂_H, M̂_H, a linear blendshape model consists of n surface blendshapes …

Figure 3: An overview of the neural volumetric blendshape model and how novel expressions are determined. For the fully connected layers (FC), the input size, the output size, and the activation function are stated. Note that f is designed as an adapter such that the part left of the dotted line can be amended.

Figure 4: a) Training scheme of the skin-to-skull-layer distance regressor D, which is used in the layered head model fitting. D is trained on the SKULLS dataset [22], which relates optical skin scans to MRI measurements. b) Procedural overview of the layered head model fitting algorithm. Starting from only a neutral head scan, the five other components are positioned. Blue strokes indicate inputs, green strokes outputs.

Figure 6: Exemplary fits of the LHM components skull wrap, muscle wrap, and skull. The skulls are fitted with a dense position regressor. Collisions between the skulls and skins are avoided, which increases the numerical stability of downstream physics-based simulations.

Figure 7: Exemplary results of the proposed physics-based facial animation models. In the top row, muscle deformation gradients have been extracted with φ† and reapplied using φ to the corresponding neutral head. In the bottom row, the deformation gradients are reapplied to a different identity.

Figure 8: Exemplary test results of our neural volumetric blendshapes for a variety of different head shapes.
Table 1: Description of the cardinality of each template LHM_T component. By subdividing the wrap meshes or the layer prisms, the resolution of the template tetrahedron meshes can easily be adjusted.

Mesh            S_T     Ŝ_T     M̂_T     B̂_T
#Vertices       21875   14572   16388   7826
#Faces / #Tets  42738   28856   32370   15648

Mesh            B_T     M_T     S_T     M_T
#Vertices       7826    7826    49852   —
#Faces / #Tets  15648   15648   123429  73681
The structure of Equation (11) enables us to implement f efficiently as fully connected networks. More precisely, both inputs to f, the neutral surface S_H and the vector of the differences to the linear interpolation result S_H^exp = Σ_i w_i^exp S_H^i, are tokenized with the help of respective encoders. Subsequently, the tokens are processed by a decoder that outputs the vertex-wise deformations from S_H^exp to the anatomically plausible expression S̃_H^exp. The inputs and outputs of f are justified as discussed next. Considering Equation …
Algorithm 1: Data Generation

Head Sampling and LHM Fitting
1. a) Draw a random image I_H of a head H from the Flickr-Faces-HQ dataset.
   b) Calculate the skin surface S_H = DECA(I_H) with neutral expression parameters.
   c) Find the LHM transformation ρ_H from S_H; build up the tetrahedron meshes S_H and M_H.

Expression Sampling and Simulation
2. a) Create ARKit blendshapes {S_H^i}_{i=1}^{52} from S_H with deformation transfer.
   b) Sample weights w_exp = {w_exp^i}_{i=1}^{52} from dyadic recordings and calculate S_H^exp = Σ_{i=1}^{52} w_exp^i S_H^i.
   c) Get S̃_H^exp from φ(φ†(S_H^exp, S_H, M_H, B_H)).

Sampling Expressions In the second part of the pipeline, the actual training instance (S_H^exp, S̃_H^exp) is created. Beginning with deformation transfer [9] to transfer ARKit 1 surface-based blendshapes to S_H, we create the expression S_H^exp by linearly blending the blendshapes with weights obtained from 8 dyadic conversations of around 10 minutes each, recorded with a custom iOS app.

Training Instance Finally, S̃_H^exp = φ(φ†(S_H^exp, S_H, M_H, B_H)) can be computed. Computing one training instance takes approximately 40 seconds on an AMD Threadripper Pro 3995WX.
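Algorithm 1 can be sketched as a driver with every stage injected as a callable; all function names below are placeholders, not the paper's API, and the blending is written in delta form (neutral plus weighted blendshape offsets):

```python
def generate_instance(draw_head, fit_lhm, make_blendshapes,
                      sample_weights, simulate):
    # One pass of Algorithm 1 with every stage injected as a callable.
    skin = draw_head()                              # 1a/1b: sample S_H
    soft, muscle, bones = fit_lhm(skin)             # 1c: fit the LHM
    shapes = make_blendshapes(skin)                 # 2a: ARKit blendshapes
    w = sample_weights()                            # 2b: dyadic weights
    expr = skin + sum(wi * (s - skin) for wi, s in zip(w, shapes))
    plausible = simulate(expr, soft, muscle, bones) # 2c: phi(phi_dagger(...))
    return expr, plausible
```

Structuring the pipeline this way makes each stage independently replaceable and testable, which matches the paper's modular use of DECA, deformation transfer, and the projective dynamics simulator.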
1 https://developer.apple.com/
2 https://www.3dscanstore.com
References

[1] Jascha Achenbach et al. "A multilinear model for bidirectional craniofacial reconstruction". In: Proceedings of the Eurographics Workshop on Visual Computing for Biology and Medicine. 2018, pp. 67-76.
[2] Dicko Ali-Hamadi et al. "Anatomy transfer". In: ACM Transactions on Graphics (TOG) 32.6 (2013), pp. 1-8.
[3] Vincent Arsigny et al. "Geometric means in a novel vector space structure on symmetric positive-definite matrices". In: SIAM Journal on Matrix Analysis and Applications 29.1 (2007), pp. 328-347.
[4] ShahRukh Athar et al. "RigNeRF: Fully Controllable Neural 3D Portraits". In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022, pp. 20364-20373.
[5] Michael Bao et al. "High-quality face capture using anatomical muscles". In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019, pp. 10802-10811.
Figure 9: Exemplary results of our collision resolution. Collisions are not only avoided but also simulated in a physically plausible manner.
[6] Vincent Barrielle, Nicolas Stoiber, and Cédric Cagniart. "Blendforces: A dynamic framework for facial animation". In: Computer Graphics Forum 35.2 (2016), pp. 341-352.
[7] Bernd Bickel et al. "Pose-Space Animation and Transfer of Facial Details". In: Symposium on Computer Animation. 2008, pp. 57-66.
[8] Mario Botsch and Leif Kobbelt. "Real-time shape editing using radial basis functions". In: Computer Graphics Forum 24.3 (2005), pp. 611-621.
[9] Mario Botsch et al. "Deformation transfer for detail-preserving surface editing". In: Vision, Modeling & Visualization. 2006, pp. 357-364.
[10] Mario Botsch et al. Polygon Mesh Processing. CRC Press, 2010.
[11] Sofien Bouaziz et al. "Projective dynamics: Fusing constraint projections for fast simulation". In: ACM Transactions on Graphics (TOG) 33.4 (2014), pp. 1-11.
[12] Derek Bradley et al. "High resolution passive facial performance capture". In: ACM SIGGRAPH 2010 Papers. 2010, pp. 1-10.
[13] Chen Cao et al. "Authentic volumetric avatars from a phone scan". In: ACM Transactions on Graphics (TOG) 41.4 (2022), pp. 1-19.
[14] Dan Casas and Miguel A. Otaduy. "Learning nonlinear soft-tissue dynamics for interactive avatars". In: Proceedings of the ACM on Computer Graphics and Interactive Techniques 1.1 (2018), pp. 1-15.
[15] Prashanth Chandran et al. "Local anatomically-constrained facial performance retargeting". In: ACM Transactions on Graphics (TOG) 41.4 (2022), pp. 1-14.
[16] Byungkuk Choi et al. "Animatomy: an Animator-centric, Anatomically Inspired System for 3D Facial Modeling, Animation and Transfer". In: SIGGRAPH Asia 2022 Conference Papers. 2022, pp. 1-9.
[17] Matthew Cong and Ronald Fedkiw. "Muscle-based facial retargeting with anatomical constraints". In: ACM SIGGRAPH 2019 Talks. 2019, pp. 1-2.
[18] Matthew Deying Cong. Art-directed muscle simulation for high-end facial animation. Stanford University, 2016.
[19] Mario Deuss et al. "ShapeOp — a robust and extensible geometric modelling paradigm". In: Modelling Behaviour. Springer, 2015, pp. 505-515.
[20] Yao Feng et al. "Learning an animatable detailed 3D face model from in-the-wild images". In: ACM Transactions on Graphics (TOG) 40.4 (2021), pp. 1-13.
[21] Stephan J. Garbin et al. "VolTeMorph: Realtime, Controllable and Generalisable Animation of Volumetric Representations". In: arXiv preprint arXiv:2208.00949 (2022).
[22] Thomas Gietzen et al. "A method for automatic forensic facial reconstruction based on dense statistics of soft tissue thickness". In: PloS One 14.1 (2019), e0210257.
[23] Benjamin Gilles, Lionel Reveret, and Dinesh K. Pai. "Creating and animating subject-specific anatomical models". In: Computer Graphics Forum 29.8 (2010), pp. 2340-2351.
[24] Philip-William Grassal et al. "Neural head avatars from monocular RGB videos". In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022, pp. 18653-18664.
[25] Alexandru Eugen Ichim, Sofien Bouaziz, and Mark Pauly. "Dynamic 3D avatar creation from hand-held video input". In: ACM Transactions on Graphics (TOG) 34.4 (2015), pp. 1-14.
[26] Alexandru Eugen Ichim et al. "Building and animating user-specific volumetric face rigs". In: Symposium on Computer Animation. 2016, pp. 107-117.
[27] Alexandru-Eugen Ichim et al. "Phace: Physics-based face modeling and animation". In: ACM Transactions on Graphics (TOG) 36.4 (2017), pp. 1-14.
[28] Sungkyu Jung, Armin Schwartzman, and David Groisser. "Scaling-rotation distance and interpolation of symmetric positive-definite matrices". In: SIAM Journal on Matrix Analysis and Applications 36.3 (2015), pp. 1180-1201.
[29] Petr Kadleček and Ladislav Kavan. "Building accurate physics-based face models from data". In: Proceedings of the ACM on Computer Graphics and Interactive Techniques 2.2 (2019), pp. 1-16.
[30] Petr Kadleček et al. "Reconstructing personalized anatomical models for physics-based body animation". In: ACM Transactions on Graphics (TOG) 35.6 (2016), pp. 1-13.
[31] Tero Karras, Samuli Laine, and Timo Aila. "A style-based generator architecture for generative adversarial networks". In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019, pp. 4401-4410.
[32] Marilyn Keller et al. "OSSO: Obtaining Skeletal Shape from Outside". In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022, pp. 20492-20501.
[33] Martin Komaritzan and Mario Botsch. "Projective skinning". In: Proceedings of the ACM on Computer Graphics and Interactive Techniques 1.1 (2018), pp. 1-19.
[34] Martin Komaritzan, Stephan Wenninger, and Mario Botsch. "Inside Humans: Creating a Simple Layered Anatomical Model from Human Surface Scans". In: Frontiers in Virtual Reality 2 (2021), p. 694244.
[35] Juho Lee et al. "Set transformer: A framework for attention-based permutation-invariant neural networks". In: International Conference on Machine Learning. PMLR. 2019, pp. 3744-3753.
[36] John P. Lewis et al. "Practice and theory of blendshape facial models". In: Eurographics (State of the Art Reports) 1.8 (2014), p. 2.
[37] John P. Lewis et al. "Reducing blendshape interference by selected motion attenuation". In: Proceedings of the 2005 Symposium on Interactive 3D Graphics and Games. 2005, pp. 25-29.
[38] Hao Li, Thibaut Weise, and Mark Pauly. "Example-based facial rigging". In: ACM Transactions on Graphics (TOG) 29.4 (2010), pp. 1-6.
[39] Jiaman Li et al. "Dynamic facial asset and rig generation from a single scan". In: ACM Transactions on Graphics (TOG) 39.6 (2020), 215:1.
[40] Matthew Loper et al. "SMPL: A Skinned Multi-Person Linear Model". In: ACM Transactions on Graphics (Proc. SIGGRAPH Asia) 34.6 (Oct. 2015), 248:1-248:16.
[41] Nadia Maalin et al. "Beyond BMI for self-estimates of body size and shape: A new method for developing stimuli correctly calibrated for body composition". In: Behavior Research Methods 53.3 (2021), pp. 1308-1321.
[42] Ben Mildenhall et al. "NeRF: Representing scenes as neural radiance fields for view synthesis". In: Communications of the ACM 65.1 (2021), pp. 99-106.
[43] Frederic I. Parke. "Control parameterization for facial animation". In: Computer Animation '91. Springer, 1991, pp. 3-14.
[44] Cristian Romero et al. "Contact-centric deformation learning". In: ACM Transactions on Graphics (TOG) 41.4 (2022), pp. 1-11.
[45] Shunsuke Saito, Zi-Ye Zhou, and Ladislav Kavan. "Computational bodybuilding: Anatomically-based modeling of human bodies". In: ACM Transactions on Graphics (TOG) 34.4 (2015), pp. 1-12.
[46] Igor Santesteban et al. "SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans". In: Computer Graphics Forum 39.2 (2020), pp. 65-75.
[47] Franco Scarselli et al. "The graph neural network model". In: IEEE Transactions on Neural Networks 20.1 (2008), pp. 61-80.
[48] Robert Schleicher et al. "BASH: Biomechanical Animated Skinned Human for Visualization of Kinematics and Muscle Activity". In: VISIGRAPP (1: GRAPP). 2021, pp. 25-36.
[49] Ken Shoemake and Tom Duff. "Matrix animation and polar decomposition". In: Graphics Interface 92. 1992, pp. 258-264.
[50] Eftychios Sifakis, Igor Neverov, and Ronald Fedkiw. "Automatic determination of facial muscle activations from sparse motion capture marker data". In: ACM SIGGRAPH 2005 Papers. 2005, pp. 417-425.
[51] Steven L. Song, Weiqi Shi, and Michael Reed. "Accurate face rig approximation with deep differential subspace reconstruction". In: ACM Transactions on Graphics (TOG) 39.4 (2020), 34:1.
[52] Jente L. Spille et al. "Stop touching your face! A systematic review of triggers, characteristics, regulatory functions and neuro-physiology of facial self touch". In: Neuroscience & Biobehavioral Reviews 128 (2021), pp. 102-116.
[53] Sangeetha Grama Srinivasan et al. "Learning active quasistatic physics-based models from data". In: ACM Transactions on Graphics (TOG) 40.4 (2021), pp. 1-14.
[54] Robert W. Sumner and Jovan Popović. "Deformation transfer for triangle meshes". In: ACM Transactions on Graphics (TOG) 23.3 (2004), pp. 399-405.
[55] Lingchen Yang et al. "Implicit neural representation for physics-driven actuated soft bodies". In: ACM Transactions on Graphics (TOG) 41.4 (2022), pp. 1-10.
[56] Li Zhang et al. "Spacetime faces: High-resolution capture for modeling and animation". In: Data-Driven 3D Facial Animation. Springer, 2008, pp. 248-276.
Im avatar: Implicit morphable head avatars from videos. Yufeng Zheng, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022Yufeng Zheng et al. "Im avatar: Implicit mor- phable head avatars from videos". In: Proceed- ings of the IEEE/CVF Conference on Com- puter Vision and Pattern Recognition. 2022, pp. 13545-13555.
| zyda_arxiv-1420000 |
Resonance refraction and neutrino oscillations
June 29, 2021
Alexei Y Smirnov
Max-Planck-Institut für Kernphysik
69117HeidelbergGermany
Abdus Salam International Centre for Theoretical Physics
Strada Costiera 1134014TriesteItaly
Victor B Valera
Niels Bohr International Academy
Niels Bohr Institute
University of Copenhagen
DK-2100CopenhagenDenmark
Abdus Salam International Centre for Theoretical Physics
Strada Costiera 1134014TriesteItaly
The refraction index and the matter potential depend on the neutrino energy, and this dependence has a resonance character associated with the production of the mediator in the s-channel. For light mediators and light particles of the medium (background), the resonance can be realized at energies accessible to laboratory experiments. We study the properties of the energy dependence of the potential for different C-asymmetries of the background. The interplay of the background potential and the vacuum term leads to (i) a bump in the oscillation probability in the resonance region, (ii) a dip related to the MSW resonance in the background, (iii) a substantial deviation of the effective ∆m² above the resonance from its low-energy value, etc. We also consider the generation of mixing in the background. Interactions with the background shift the energy of the usual MSW resonance and produce new MSW resonances. Searches for background effects allow us to put bounds on new interactions of neutrinos and on the properties of the background. We show that the explanation of the MiniBooNE excess as a bump due to resonance refraction is excluded.
1 Introduction
The Wolfenstein potential¹, which describes the matter effect on neutrino oscillations, does not depend on the neutrino energy [1-4]. This is a consequence of (i) the large mass of the mediators of the interactions, M_med, or low neutrino energies, so that the total energy in the CMS satisfies √s ≪ M_med (recall that originally the potentials were derived using four-fermion point-like interactions), and (ii) the C- (CP-) asymmetry of the background: in a C-symmetric medium the potentials vanish at lowest order.
In general (independently of the C-asymmetry), a substantial dependence of the potentials on energy should show up at energies √s ∼ M_med. Furthermore, the exchange of a mediator in the s-channel leads to a resonance character of this dependence [5]. We will call this phenomenon resonance refraction.
In the Standard Model the mediators of neutrino interactions are the W and Z⁰ bosons, as well as H⁰. The Z⁰ leads to resonance refraction in ν̄ν annihilation. At the resonance the potential is exactly zero, and it changes sign as the energy changes. Above the resonance energy the potential has a 1/E dependence, similar to the usual kinetic term related to the mass squared difference [5]. In principle, this refraction can be realized in the scattering of ultra-high-energy cosmic neutrinos on the relic neutrino background (E ≥ 10²¹ eV in the present epoch) [5]. The W-boson exchange produces resonance refraction in ν̄_e e scattering, i.e., in the Glashow resonance. For electrons at rest this requires a neutrino energy ∼ 6.4 PeV. We comment on the possibility of observational effects in sect. 3.8.
For light mediators and light scatterers (whose existence implies physics beyond the SM), resonance refraction can be realized at low energies accessible to existing experiments. Resonance refraction leads to an increase of the oscillation phase, which can dominate over the vacuum phase in the energy range around the resonance. This produces an enhancement of the oscillation effect, which would be negligible without resonance refraction. Such an enhancement was used in [6] to explain the low-energy excess of MiniBooNE events [7]. In that explanation the medium was composed of overdense relic neutrinos.
Potentials induced by a light mediator in a medium with light scatterers were computed recently in connection with the possible existence of a light dark sector and light dark matter [8-14]. Mediators and scatterers of different nature were explored: fermions, scalars, gauge bosons. Various bounds on the couplings of neutrinos with the new light sector were obtained [15-30].

¹ In what follows we will consider potentials which are related to the refraction index n as V = (n − 1)p, where p is the momentum of the neutrino.
In this paper we focus on the phenomenon of resonance refraction itself, presenting the results in a model-independent way. We study in detail the dependence of the resonant potentials on energy for different values of the C-asymmetry of the background. We consider the interplay of the resonance potentials with the usual vacuum (kinetic) term, as well as with the usual matter potential. New interesting features are realized, such as a shift of the usual MSW resonances, an increase or decrease of the effective mass squared difference with energy, etc. We identify signatures of resonance refraction and outline possible observable effects. As an illustration, we apply our results to the MiniBooNE excess and show that the explanation of [6] is excluded. In general, applications can include explanations of some energy-localized anomalies. In the absence of anomalies, bounds can be established on the background parameters (densities, characteristics of scatterers) and on the neutrino couplings.
The paper is organized as follows. In sect. 2 we introduce the interactions of neutrinos with the new light sector; we compute the potentials due to these interactions and study the resonances in these potentials. In sect. 3 we discuss the effects of the interplay of the background potential with the vacuum (kinetic) term and with the usual matter potential. We consider possible observational effects and applications of the results, in particular to an explanation of the MiniBooNE excess, in sect. 4. Conclusions follow in sect. 5.
2 Potentials and resonances

2.1 Neutrino interactions with new light sector
In this paper we focus on the phenomenon of resonance refraction itself, and present our results in a general and universal form, valid for different mediators and background particles. We consider the simplest (minimal) light sector composed of a new scalar φ (which can be real or complex) with mass m_φ and a fermion χ with mass m_χ. We comment on some extensions of this sector later.
Interactions of the SM neutrino mass states ν_iL (i = 1, 2, 3) with these new particles are described by

L_NSI ⊃ g_i χ̄ ν_iL φ* + h.c.  (1)
φ may acquire a VEV, thus contributing to the neutrino mass. Then, for a single χ, only one neutrino (a combination of the ν_i) acquires mass from the VEV of φ. We assume that some other sources of χ and neutrino masses exist, e.g. the see-saw mechanism, so that χ and all the ν_i acquire different masses, and in general these masses are not related to the g_i. The couplings (1) were considered in various contexts before [16, 19-21, 23, 24, 27, 29]. For light new particles, m_φ, m_χ ≲ 1 GeV, a number of generic bounds were obtained. The bounds are based on possible transitions ν → χ + φ.
Notice that refraction is induced by elastic forward scattering and is proportional to g²/M²_med. Therefore it does not disappear in the limit g → 0, provided that M_med decreases in the same way as g. This allows us to avoid most of the bounds based on inelastic processes, for which σ ∝ (g²/q²)², and the transfer momentum squared q² is restricted from below by the condition of observability.
The laboratory bounds on g are rather weak: g ≲ 10⁻³ for masses m_φ < m_K (the K-meson mass). They follow, in particular, from the additional contribution to the decay K → µχφ. Much stronger bounds follow from cosmology (BBN, CMB data, structure formation) and astrophysics (star cooling, supernova dynamics and SN1987A neutrino observations). They give the bound

g ≲ 10⁻⁷.  (2)
Elastic forward scattering due to the interactions (1) produces effective potentials V_i for the neutrino mass states in a medium. There are two possibilities even in the simplest case of (1): (i) φ plays the role of the mediator while the χ form the background, and vice versa, (ii) χ is the mediator while the φ form the background.
2.2 Potentials in the fermionic background
We consider first the case of a strong hierarchy of couplings, g₃ ≫ g₂, g₁, so that ν₃ couples with the background while the interactions of the others can be neglected. In this case the interactions, and consequently the potentials, are diagonal in the mass basis. We will discuss the couplings of all three neutrinos later, in sect. 3.7; there we also comment on the case of three χ_j.
We consider a background composed of fermions χ and antifermions χ̄ with number densities n_χ and n̄_χ, respectively. The C-asymmetry of the background can be defined as

ε ≡ (n_χ − n̄_χ)/(n_χ + n̄_χ).  (3)
The mediator is the scalar φ, and the diagrams of neutrino scattering on χ (left) and χ̄ (right) are shown in Figure 1. For m_φ > m_ν, m_χ the right diagram, with the s-channel exchange, produces a resonance.
To obtain the potential, we integrate the matrix element of the process over the momentum of the particle χ with the distribution function F_χ(k). The latter is normalized as

∫ d³k F_χ(k) = n_χ,  (4)
and similarly for χ̄. The left (u-channel) and right (s-channel) diagrams in Fig. 1 give, correspondingly, the potentials

V_ui = ∫ d³k F_χ(k) ⟨ν_{i,p} χ_k | g_i† ν̄_i P_R χ [1/(q² − m_φ²)] g_i χ̄ P_L ν_i | ν_{i,p} χ_k⟩,  (5)

V_si = ∫ d³k F_χ̄(k) ⟨ν_{i,p} χ̄_k | g_i† ν̄_i P_R χ [1/(q² − m_φ² + i m_φ Γ_φ)] g_i χ̄ P_L ν_i | ν_{i,p} χ̄_k⟩.  (6)
Here in the propagator we have added the term with the total width of φ. In vacuum

Γ_φ⁰ = (g_i²/8π) m_φ ≈ (g²/8π) m_φ  (7)
(from φ → νχ), where g ≡ g₃ and we take g₁ = g₂ = 0. In a medium the term with Γ is modified (see below).
We assume first that the background particles are at rest, which is valid for cold gases like dark matter (DM) or the relic neutrinos of the cosmological neutrino background (CνB). Then F_χ(k) = n δ(k), and the integrals in (5) and (6) give the total potential

V_B ≡ V_u + V_s = (|g|²/2) [ n_χ/(2E_ν m_χ + m_φ²) + n̄_χ (2E_ν m_χ − m_φ²)/((2E_ν m_χ − m_φ²)² + (m_φ Γ_φ)²) ].  (8)
This expression differs from the expression for the potential in [6], but coincides with that in [11]. We obtain a similar result for moving χ, with the only substitution m_χ → E_χ, if the angular distribution is isotropic. This is important for a degenerate gas with large overdensity, when the Fermi momentum p_f ≳ m_χ.
The second term in (8) has a resonance dependence on energy (a pole of the propagator), with the resonance energy

E_R ≡ (m_φ² − m_χ² − m_ν²)/(2m_χ) ≈ m_φ²/(2m_χ).  (9)
At E_R the contribution V_s is exactly zero, and it changes sign as the energy changes. The amplitude of scattering becomes purely imaginary, which corresponds to the production of an on-shell φ. In terms of the resonance energy the potential (8) can be rewritten as

V_B = [|g|²(n_χ + n̄_χ)/(8m_χ)] [ (E − E_R)(1 − ε)/((E − E_R)² + (ξE_R)²) + (1 + ε)/(E + E_R) ],  (10)
where

ξ ≡ Γ_φ/m_φ,  (11)

and in vacuum

ξ₀ ≡ g²/(8π).  (12)

Let us introduce the dimensionless parameter

y ≡ E/E_R.  (13)
In terms of y the expression for the potential (10) becomes

V_B = (1/2) V_0 [ (1 − ε)(y − 1)/((y − 1)² + ξ²) + (1 + ε)/(y + 1) ],  (14)

where

V_0 ≡ (g²/2m_φ²)(n_χ + n̄_χ).  (15)
In this way we can disentangle the dependencies of the potential on the relevant physical quantities: V_0 depends on the parameters of the mediator, g and m_φ, and on the total density of scatterers in the background. It has the form of the standard matter potential at low energies, with G_φ = g²/2m_φ². The parameter ξ is proportional to the coupling constant squared, while the mass of χ enters via E_R. V_0 is introduced in such a way that for y → 0 we have V_B → εV_0, and consequently, for ε = ±1: V_B = ±V_0, thus reproducing the standard Wolfenstein-like potential.
2.3 Potentials in the bosonic background
For a background of scalar particles with a fermionic mediator the potential is similar to the one computed above. At lowest order in g², and up to a factor of 2, the potential has the same expression as in (8) with the substitutions m_φ ↔ m_χ, n_χ → n_φ, Γ_φ → Γ_χ. Thus,

V_φ ≈ 2V_χ(m_φ → m_χ, m_χ → m_φ, n_χ → n_φ, Γ_φ → Γ_χ).  (16)
The resonance is realized if m_χ > m_ν + m_φ, and the resonance energy equals

E_R ≈ m_χ²/(2m_φ).  (17)

In terms of the resonance energy the potential can be written in exactly the same form as in (14), with

V_0^φ = (g²/2m_χ²)(n_φ + n̄_φ),  (18)

and

ε_φ ≡ (n_φ − n̄_φ)/(n_φ + n̄_φ).  (19)
The difference from the fermionic-background case may appear at higher orders in g², due to the fermionic nature of the mediator χ. Now the amplitude of scattering is proportional to q̸ = p̸ + k̸:

A = p̸ Σ_ν + k̸ Σ_χ.

The first term contributes to the renormalization of the neutrino wave function, ν' = (1 + Σ_ν/2)ν_L, while the second one generates the potential: for a background at rest, γ⁰ m_φ Σ_χ = γ⁰ V. Renormalization leads to a change of the potential, V' = (1 + Σ_ν*/2) V (1 + Σ_ν/2) ≈ V(1 + Σ_ν) (as well as of the usual kinetic term) [11]. The correction is of the order g². At this order one should also take into account loop corrections to the external neutrino lines. All these corrections have the same nature and can be described by tree-level diagrams with multiple scattering on the background: ν + φ* → χ → ν + φ*, ν + φ* → χ, ... Alternatively, this can be treated as a resummation of self-energy loop diagrams. The higher-order corrections will not change the general properties (energy dependence) of the potentials. At lowest order the properties of the resonances in the scalar and fermion backgrounds are the same. The difference appears in applications and implications for theory.
2.4 Resonance, energy smearing, coherence
At the resonance, y = 1, the s-component of the potential (14) is zero for any asymmetry, V_s = 0, and only the non-resonant component contributes. The potential has extrema at y = 1 ± ξ:

|V_max| = (V_0/4) [ (1 − ε)/ξ + 1 + ε ] ≈ V_0 (1 − ε)/(4ξ).  (20)
So, at the resonance the enhancement is given by the inverse coupling constant squared. The energy interval between the two extrema equals 2ξE_R. At these points the ratio of the resonant to the non-resonant part equals

V_s/V_u = (1/ξ)(1 − ε)/(1 + ε).  (21)

The zero of the total potential is shifted with respect to y = 1 by the non-resonant contribution:

y_0 = 1 − (ξ²/2)(1 + ε)/(1 − ε).
The width of the peak at half height, V_s(y_{1/2}) = 0.5 V_s^max, equals

|y_{1/2} − 1| = (2 + √3)ξ ≈ 3.73ξ.  (22)
For the values of the couplings (2), ξ ∼ ξ₀ < 10⁻¹⁵, the characteristics of the resonance in (20)-(22) (the width and the enhancement at the peak) have no physical sense. One should take into account (i) smearing of the peaks due to the integration over the momentum distribution of the background χ, which differs from a δ-function, (ii) averaging over the uncertainty in the neutrino energy, (iii) the density correction to the width of φ, and (iv) damping due to resonance absorption.
Let σ_y be the scale of smearing in the variable y. The smearing leads to a decrease of the heights of the peaks and to their widening. If σ_y ≫ ξ, we can neglect ξ² in (14). Then the height of the peak after averaging can be estimated as

|V_max| = V_B(1 + σ_y) = V_0 (1 − ε)/(2σ_y).  (23)

So the enhancement factor is given by 1/σ_y. The maxima shift to |y − 1| ≈ σ_y/2. Let us consider the possible origins of σ_y.
The quantity σ_y can be the width of the F_χ(k) distribution. Recall that in deriving the potential (10) we assumed that the background particles are at rest, k_χ = 0. (This can still be a possibility for a condensate of scalar DM.) For fermions F(k) is not a δ-function, but a distribution with a finite width. In Eq. (8) one should use (even for an isotropic background) E_χ = √(m_χ² + k_χ²). Near the resonance

σ_y ≈ ΔE_χ/E_χ,

and for a non-relativistic background E_χ ≈ m_χ + k²/2m_χ, so that ΔE_χ ≈ Δ(k²)/2m_χ. For a thermal background with temperature T we can take Δk² = (3T)², and therefore

σ_y = Δ(k²)/(2m_χ²) ≈ 9T²/(2m_χ²).  (24)

If T = 1.945 K and m_χ = 0.05 eV, we obtain an enhancement 1/σ_y ∼ 10⁴.
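A back-of-the-envelope check of this estimate (a sketch; k_B is the standard Boltzmann constant, the other values are those quoted in the text):

```python
k_B = 8.617e-5                            # Boltzmann constant in eV/K
T = 1.945 * k_B                           # background temperature T = 1.945 K, in eV
m_chi = 0.05                              # background fermion mass in eV
sigma_y = 9 * T ** 2 / (2 * m_chi ** 2)   # Eq. (24)
enhancement = 1 / sigma_y                 # of order 10^4, as quoted in the text
```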
Further smearing of the energy dependence of the potential is due to the neutrino energy uncertainty σ_E in the oscillation setup. In this case

σ_y = σ_E/E_R.
For a very narrow resonance one needs to take into account the medium corrections to the φ-propagator. The main correction is given by the loop diagram φ → ν + χ* → φ, with the χ propagator in a finite-density medium. This medium correction corresponds to the scattering of φ on particles of the medium via a neutrino as mediator: φ + χ → ν → φ + χ. So, the whole process consists of the transitions ν + χ* → φ, φ + χ → ν, ν + χ* → φ, φ → χ + ν.
These transitions can be treated as the induced decay of φ in the medium. The polarization operator equals

Π = g² n_χ/(4m_χ),

which should be compared with m_φ Γ_φ⁰ = g² m_φ²/8π. Therefore the width can be written as

Γ_φ = Γ_φ⁰ [1 + 2πn_χ/(m_φ² m_χ)].  (25)
The ratio of the polarization operator and m_φ² (the denominator outside the resonance),

β ≡ Π/m_φ² = g² n_χ/(4m_φ² m_χ),  (26)

can be considered as the expansion parameter of the perturbation theory.
Refraction implies coherence: zero momentum transfer by the neutrinos and, consequently, an unchanged state of the medium |M⟩: ⟨M|M⟩ ≈ 1. In the resonance region (in the s-channel) a ν interacting with a χ at some point x produces a nearly on-shell φ, which propagates for some distance and then decays back into ν and χ. So, the particle of the medium reappears at a different space-time point x'. The coherence condition then requires ⟨χ(x')|χ(x)⟩ ≈ 1. That is, the corresponding wave functions of χ before and after the scattering should nearly coincide.
The time of propagation of φ between production and annihilation is determined by the decay rate, τ_φ = 1/Γ. Taking into account the Lorentz factor γ = E_φ/m_φ, we find the distance of propagation of φ in the rest frame of the background:

d ≈ cτ_φ E_φ/m_φ = 2πE_φ/(|g|² m_φ²).  (27)

For light background particles the total energy of the mediator is E_φ = E + m_χ ≈ E_ν. Using the resonance condition, m_φ² = 2Em_χ, we can rewrite (27) as

d = π/(|g|² m_χ) ≈ 6.2 · 10⁹ cm (|g|/10⁻⁷)⁻² (m_χ/1 eV)⁻¹.  (28)
d should be smaller than the uncertainty in the position (localization) of the background particle, Δx ≈ 1/Δp_χ. This gives the coherence condition

d < 1/Δp_χ,  (29)

which imposes the upper bound on the uncertainty

Δp_χ ≲ 3 (|g|/10⁻⁷)² (m_χ/1 eV) × 10⁻¹⁵ eV.  (30)
However, for a given neutrino energy most of the particles of the background are not exactly at the resonance, and the produced φ will be off the mass shell. The virtuality can be estimated as

Δq ∼ √(s − m_φ²) = √(E_R k²/m_χ) ∼ (m_φ/m_χ) k.

Consequently, the typical distance of travel is d ∼ m_χ/(m_φ k). The scale of localization is about 1/n_χ^{1/3}, so the condition for coherence can be written as

m_χ/(m_φ k) ≲ 1/n_χ^{1/3}.  (31)
2.5 Properties of resonance and total potential
Outside the resonance, |y − 1| ≫ ξ, neglecting ξ we obtain from (14)

V_B(y, ε) = V_0 (y − ε)/(y² − 1).  (32)

For y = 0:

V_B = εV_0,

so that for a symmetric background V_B = 0. Above the resonance, y ≫ 1, independently of ε,

V ≈ V_0/y.
Thus, at E ≫ E_R the potential takes the form of the standard vacuum contribution with a 1/E dependence. Therefore, in principle, the standard neutrino oscillations can be reproduced (even for massless neutrinos) provided that

(n_χ/4m_χ) Δ|g|² ≈ Δm².  (33)

(See the recent discussion in [32], [13].)
For particular values of ε we have the following dependence on y (see Fig. 2).

• ε = −1 corresponds to a pure χ̄ background, and consequently only the resonance contribution exists:

V_B(y, −1) = V_0/(y − 1).  (34)

At y = 0: V(0, −1) = −V_0; the potential then decreases with increase of y. With increase of ε the low-energy part of the potential (y < 1) shifts up.
• ε = 0 corresponds to a symmetric background. The potential equals

V_B(y, 0) = V_0 y/(y² − 1).  (35)

V = 0 at y = 0, and V_B(y, 0) decreases linearly below the resonance: V(y, 0) ≈ −V_0 y.
• ε > 0: according to (32), for y < ε the potential V_B has positive values; it vanishes at y = ε and then becomes negative.
• ε = 1 corresponds to a pure χ background, and the resonance is absent:

V_B(y, 1) = V_0/(y + 1)  (36)

describes the asymptotic curve with V_B/V_0 = 1 at y = 0. V_B(y)/V_0 decreases monotonically from 1 at y = 0 to 0 as y → ∞, and at y = 1 the ratio equals 0.5. (In Fig. 2 the vacuum kinetic term V_vac/V_0 is also shown as a function of y.)
For ε < 1 the dependence of the potential on y has two branches. In the low-energy branch, y < 1, the ratio V_B/V_0 decreases from ε at y = 0 down to −(1 − ε)/4ξ at y ≈ 1 − ξ, if there is no smearing, see Eq. (20). In the high-energy branch, y > 1, we have V_B/V_0 > 0, and it decreases from V_B/V_0 ∼ (1 − ε)/4ξ at y = 1 + ξ down to zero as y → ∞ (without smearing). The two branches are connected in the range y = 1 ± ξ. The largest effect of the background is for ε = −1. With increase of ε both branches approach the non-resonance curve (36) everywhere apart from the region around y = 1:

y = ε ÷ [1/2 + (1/2)√(1 + 4(1 − ε))] ≈ ε ÷ (2 − ε).
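The limiting shapes (34)-(36) follow from Eq. (32); a short consistency check (a sketch; names and sample points are ours):

```python
def V_B(y, eps, V0=1.0):
    # Eq. (32): potential outside the resonance (xi neglected)
    return V0 * (y - eps) / (y ** 2 - 1)

y = 3.0
v_anti = V_B(y, -1.0)   # Eq. (34): V0/(y - 1), pure anti-fermion background
v_symm = V_B(y, 0.0)    # Eq. (35): V0*y/(y^2 - 1), symmetric background
v_ferm = V_B(y, 1.0)    # Eq. (36): V0/(y + 1), pure fermion background, no resonance
```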
3 Resonance refraction and oscillations
3.1 Background versus vacuum contributions
Let us consider the interplay of the background potential V_B with the kinetic term ("vacuum potential"):

V_vac(E) ≡ Δm²/2E = V_R^vac/y,   V_R^vac ≡ Δm²/(2E_R).  (37)
We can neglect the usual matter effect if the refraction resonance energy is much smaller than the usual MSW resonance energy. In general, V_vac can be positive or negative, depending on the mass ordering (the sign of Δm²). The sign is relevant, since now we have two contributions to the phase. In the model where the g_i correlate with the masses, the potentials V_vac and V_0 correlate too, having the same sign. If both V_0 and V_vac are positive, the potential V_vac(y) crosses V_B(y) at y > 1 provided that V_R^vac/V_0 > 1/2.
To compare the two contributions we consider the ratio

κ(y) ≡ V_B/V_vac = r y(y − ε)/(y² − 1),  (38)

where, according to (9) and (15),

r ≡ V_0/V_R^vac = g²(n_χ + n̄_χ)/(2m_χ Δm²).  (39)
The parameter r determines the relative strength of the background effect. Notice that r depends on the mass of the particles of the background, but not on the mass of the mediator. More importantly, r determines the ratio of the potentials for y → ∞.
The two contributions to the phase are equal (for r ≠ 1) at

y_eq = [1/(2(1 − r))] [ −εr + √(ε²r² + 4(1 − r)) ].  (40)

This equation gives y_eq = 1/(1 − r) for ε = −1, and y_eq = 1/√(1 − r) for ε = 0. With decrease of r, as well as with increase of ε, the value of y_eq approaches 1. For the non-resonance case (ε = 1), y_eq = 1/(r − 1), and the equality is realized when r > 2.
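Eq. (40) can be verified against the definition (38); a minimal sketch (function names and sample parameters are ours):

```python
import math

def kappa(y, eps, r):
    # Eq. (38): ratio of the background to the vacuum contribution
    return r * y * (y - eps) / (y ** 2 - 1)

def y_eq(eps, r):
    # Eq. (40): positive root of kappa(y) = 1, i.e. (1 - r) y^2 + r*eps*y - 1 = 0
    return (-eps * r + math.sqrt(eps ** 2 * r ** 2 + 4 * (1 - r))) / (2 * (1 - r))

y1 = y_eq(-1.0, 0.5)    # expected 1/(1 - r) = 2
y2 = y_eq(0.0, 0.5)     # expected 1/sqrt(1 - r)
```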
For the low-energy branch, y < 1, an interesting feature is the cancellation of the two contributions, V_B = −V_vac, which corresponds to the MSW resonance on the background. If r ≠ −1 this happens at

y_c = [1/(2(1 + r))] [ εr + √(ε²r² + 4(1 + r)) ],  (41)

so that y_c = 1/(1 + r) for ε = −1, and y_c = 1/√(1 + r) for ε = 0. With decrease of r, as well as for ε → 1, the cancellation point approaches 1.
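The cancellation point (41) can be checked in the same way (a sketch of ours):

```python
import math

def kappa(y, eps, r):
    return r * y * (y - eps) / (y ** 2 - 1)   # Eq. (38)

def y_c(eps, r):
    # Eq. (41): root of kappa(y) = -1, i.e. (1 + r) y^2 - r*eps*y - 1 = 0
    return (eps * r + math.sqrt(eps ** 2 * r ** 2 + 4 * (1 + r))) / (2 * (1 + r))

yc1 = y_c(-1.0, 0.5)    # expected 1/(1 + r) = 2/3
yc2 = y_c(0.0, 0.5)     # expected 1/sqrt(1 + r)
```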
The sum of the two contributions,

V_sum ≡ V_vac + V_B = V_vac [1 + κ(y)],  (42)

in units of V_0 as a function of y, is shown in Fig. 3. It has the following features. In the high-energy branch V_sum increases from V_vac(1 + r) at y → ∞ to V_0(1 − ε)/4ξ at y = 1 + ξ (in the absence of smearing). The two contributions become equal at y_eq, Eq. (40). In the low-energy branch V_sum/V_0 decreases from (V_vac + εV_0)/V_0 at y → 0 down to −(1 − ε)/4ξ at y = 1 − ξ. It crosses zero at y = y_c. Correspondingly, the modulus |V_sum| increases with y at y > y_c, up to ∼ V_0/ξ.

Thus, the background contribution substantially distorts the dependence of the potential (and consequently of the phase) on y in the resonance region y ∼ 1, i.e., in the interval y_c ÷ y_eq. This region shrinks with increase of r and ε. The maximal distortion effect is at ε = −1.
3.2 Effective mass splitting
The effect of the background can be treated as a modification of the mass squared difference, which depends on the neutrino energy:

Δm²_eff(y) = Δm² [1 + κ(y)],  (43)

so that V_sum = Δm²_eff(y)/2E. The ratio of the effective splitting in the background, Δm²_eff, and in vacuum, Δm², equals

R_Δ ≡ Δm²_eff/Δm² = V_sum/V_vac = Φ_tot/Φ_vac,  (44)

R_Δ(y) = 1 + r y(y − ε)/(y² − 1).  (45)
For y → 0 the correction disappears:

R_Δ(y) = 1 + rεy.  (46)

At high energies, with increase of y, the ratio converges to the constant value

R_Δ(y) = 1 + r,  (47)

independently of ε. Thus, the key consequence of the interaction with the background is that the Δm² extracted from data above the refraction resonance differs from the Δm² extracted from low-energy data.
In Fig. 4 we show the dependence of the ratio (45) on y for different values of r. Here the important point is y_s, at which the correction to the modulus of the effective mass squared difference changes sign. It is determined by

|R_Δ| = 1,  (48)

or, according to (43), by V_B(y)/V_vac(y) = −2. The solution of the corresponding equation gives

y_s = [1/(2(2 + r))] [ εr + √(ε²r² + 8(2 + r)) ].  (49)
For ε = 0, we find y_s = √(2/(2 + r)); consequently, for r = 1.5 it equals y_s ≈ 0.76, and for ε = −1: y_s = 2/(2 + r) ≈ 0.57. In the interval y = 0 ÷ y_s the background diminishes the splitting, Δm²_eff < Δm², and consequently the oscillation phase. For y > y_s: Δm²_eff > Δm², and the phase increases. With decrease of r the correction decreases, and the benchmark energies y_c, y_s and y_eq approach 1.
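Under the quadratic reconstruction of Eq. (49) (the root of R_Δ = −1), consistency with Eq. (45) can be checked numerically (a sketch; the parameter choice is ours):

```python
import math

def R_delta(y, eps, r):
    # Eq. (45): effective-to-vacuum mass-splitting ratio
    return 1 + r * y * (y - eps) / (y ** 2 - 1)

def y_s(eps, r):
    # root of R_delta = -1 (i.e. |R_delta| = 1): (2 + r) y^2 - r*eps*y - 2 = 0
    return (eps * r + math.sqrt(eps ** 2 * r ** 2 + 8 * (2 + r))) / (2 * (2 + r))

ys0 = y_s(0.0, 1.5)     # expected sqrt(2/(2 + r)) ~ 0.76 for r = 1.5
```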
3.3 Negative κ
In the previous consideration we assumed that Δm² is positive or, more precisely, that Δm² and ΔV_B = V_2^B − V_1^B are positive simultaneously. That is, the potentials follow the hierarchy of masses, which is automatically satisfied if both differences are given by g_2² − g_1². As a consequence, κ ≥ 0 and r ≥ 0. If, however, the neutrinos have other sources of masses apart from the VEV of φ, the signs and values of Δm² and ΔV_B are independent. In this connection, let us consider the case of negative κ and r.
Figure 4: The effective mass splitting Δm²_eff/Δm² as a function of y for different values of r. We take ε = 0. Also shown are the lines π/(2LV_vac(y)), which correspond to two different values of the baseline, labelled by π/(2LV_R^vac) (numbers at the lines). Crossings of these lines with Δm²_eff/Δm² give the points where the total phase Φ = π/2 (see text for explanations).

Above the resonance the quantities V_B and V_vac have opposite signs. Therefore:
1. The cancellation point y_c (the MSW resonance on the background) is above the refraction resonance: y_c > 1.

2. The ratio R_Δ increases from 1 at y → 0 to a maximum at y ≈ 1 − ξ (no smearing).

3. The dip is above the resonance peak.

4. In the asymptotics, y → ∞, we have R_Δ → 1 − |r| < 1. So one expects a smaller value of Δm²_eff in comparison to the vacuum value: Δm²_eff = (1 − |r|)Δm².
3.4 Phases and probabilities
In the case of a diagonal matrix of potentials in the neutrino mass basis, the background potential modifies neutrino oscillations via an extra contribution to the oscillation phase:

Φ = Φ_vac + Φ_B = (V_vac + V_B)L,  (50)

while the mixing is unchanged. Thus, for two-neutrino mixing the ν_α → ν_β transition probability equals

P(ν_α → ν_β)(L, E) = sin² 2θ sin²(0.5Φ).  (51)
We assume here a constant density of background particles.
Since the phase Φ enters the observables (the probability) as cos Φ or sin² Φ/2, the change of sign of V at the resonance does not lead to a suppression after integration over energy. (Notice that this is valid for the 2ν case and without the matter effect. In the 3ν case we have interference of different channels with different frequencies, and those terms are not even with respect to V.)
Observational effects of the background depend on the baseline of the experiment. In Fig. 4 we show the lines

π/(2V_vac L) = π/(2Φ_vac) = πy/(2V_R^vac L).  (52)

For fixed L these lines give the inverse of the vacuum oscillation phase as a function of y. With increase of L the slope decreases. In Fig. 4 the left (right) line corresponds to the short (long) baseline.
The total oscillation phase equals

Φ = R_Δ(y) Φ_vac.

Therefore at the crossings of π/(2Φ_vac(y)) and R_Δ(y), i.e., where

|R_Δ(y)| = π/(2V_vac L),  (53)

we have Φ(y_cross) = π/2, and hence sin²(0.5Φ(y_cross)) = 0.5.
There are four crossings: a low-energy one, y_l; the crossings y_−, y_+ with the left and right branches of the resonance peak; as well as one at the resonance, y ≈ 1. The equation (53) for the crossings can be written as

y² − 1 + ry(y − ε) = ± [π/(2V_R^vac L)] y(y² − 1).  (54)

For the parts of the line R_Δ(y) that lie above the crossings the phase is large, Φ > π/2, while for the parts below the crossings the phase is small, Φ < π/2.
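A crossing y_+ on the high-energy branch can be located by bisection (a sketch; r and ε match the values used for Fig. 5, while Φ_R ≡ V_R^vac L is our illustrative choice):

```python
import math

eps, r, Phi_R = 0.0, 1.6, 0.3          # Phi_R = V_R^vac * L (illustrative)

def Phi(y):
    R = 1 + r * y * (y - eps) / (y ** 2 - 1)   # Eq. (45)
    return (Phi_R / y) * R                      # total phase: Phi = R_Delta * Phi_vac

lo, hi = 1.05, 5.0                     # Phi decreases through pi/2 on this branch
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if Phi(mid) > math.pi / 2:
        lo = mid
    else:
        hi = mid
y_plus = 0.5 * (lo + hi)               # crossing where sin^2(Phi/2) = 1/2
```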
In Fig. 5 we show the oscillatory factor sin²(0.5Φ) as a function of y for three different values of the baseline, with smearing over energy performed.
The crossings determine four intervals of y with different observational features.

• y < y_l: an oscillatory curve with increasing period as y → 0. At y → 0 the oscillations in the background nearly coincide with the vacuum oscillations.

• y_l < y < y_−: the oscillation dip. Here Φ < π/2; the background suppresses the phase.

• y_− < y < y_+: the resonance interval. The phase is large, Φ > π/2; in the central resonance region Φ ≫ 1.

• y > y_+: the tail at high energies, Φ < π/2; the phase decreases with increase of y.

With decrease of L: y_l → 0, while y_−, y_+ → 1. Thus the dip widens, whereas the resonance region becomes narrower.
3.5 Bump: number of events
The characteristic relevant for observations is not the width of the peak, but the energy range where the background effect is bigger than the standard oscillation effect. It is determined by the tails of the resonance, where |V_B| ≪ |V_B^max|.

Figure 5: We take r = 1.6 and ε = 0. The dotted lines correspond to the oscillatory factors for pure vacuum oscillations (r = 0).

According to (32),

Φ_B = Φ_0 (y − ε)/(y² − 1),  (55)

with

Φ_0 ≡ V_0 L.  (56)
As a criterion for a strong effect we can use sin² Φ_B/2 = 1/2, which gives, according to (55),

y ≈ 1 ± (Φ_0/π)(1 − ε).  (57)

Therefore the region of strong effect has the width

Δy = (2Φ_0/π)(1 − ε).  (58)

This region decreases with increase of ε.
I = ∫_{y_min}^{y_max} dy sin²[0.5 Φ_0 (y − ε)/(y² − 1)], (59)
where y_max and y_min are determined by the conditions Φ_B(y_max) = Φ_B(y_min) = π/2, so that sin²(0.5Φ_B) = 1/2. We can approximate the oscillatory factor by its average value, sin²(Φ/2) ≈ 0.5. Then
I = 0.5 ∆y = (Φ_0/π)(1 − ε) = (1 − ε) V_0 L/π, (60)
according to (55), and this result is valid for Φ_0/π ≪ 1.
More precise computation of the integral (59) for any interval of y can be done in the following way. Let us introduce δ_y (which depends on Φ_0) such that in the range |y − 1| < δ_y the phase is very big, Φ_0/(2δ_y) ≫ 1, and consequently the sine oscillates very fast (δ_y ≪ ∆y). Then the integral I can be split into three parts, with integration over y in the intervals [1 − δ_y, 1 + δ_y], [0, 1 − δ_y] and [1 + δ_y, ∞].
In the first (central) interval the integrand can be approximated by 1/2, and consequently
I = δ_y + ∫_0^{1−δ_y} dy sin²[0.5 Φ_0 (y − ε)/(y² − 1)] + ∫_{1+δ_y}^{∞} dy sin²[0.5 Φ_0 (y − ε)/(y² − 1)]. (61)
The tail integrals can be computed numerically as follows.
In the central (resonance) region, −2Φ_0/π < (y − 1) < 2Φ_0/π, we can substitute the integrand sin²(Φ/2) by 1/2. Outside the resonance region, 0 < y < 1 − 2Φ_0/π (lower region) and y > 1 + 2Φ_0/π (upper region), the sine squared can be approximated by
(1/2) (2Φ_0/π)² (y − ε)²/(y² − 1)², (62)
normalized in such a way that at the borders it equals 1/2. Then for small Φ_0/π the high and the low energy tails give
I_h ≈ (Φ_0/π) (1 − ε)/2, I_l ≈ (Φ_0/π) [(1 − ε)/2] (1 − 2Φ_0/π),
and the sum equals
I_tail = I_h + I_l ≈ (2Φ_0/π) (1 − ε)/2.
The ratio of the tail and the resonance (60) contributions equals
I_tail/I_c ≈ 1 − O(Φ_0/π), (63)
and it depends on the phase weakly: with the increase of Φ_0 the ratio decreases. The contribution from the resonance width (22) is negligible.
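As a cross-check of the estimates (59)-(60), the integral can be evaluated by brute force for an illustrative phase Φ_0 = π/20 and ε = 0 (hypothetical numbers chosen for the sketch):

```python
import numpy as np

# Brute-force evaluation of the integral (59) for illustrative parameters.
Phi0, eps = np.pi / 20, 0.0
a = np.pi / (2 * Phi0)          # = 10 for this choice of Phi0

# limits where |Phi_B| = pi/2, i.e. sin^2(Phi_B/2) = 1/2 (roots of a*y^2 -+ y - a = 0)
ymax = (1 + np.sqrt(1 + 4 * a**2)) / (2 * a)
ymin = (-1 + np.sqrt(1 + 4 * a**2)) / (2 * a)

def integrand(y):
    # sin^2(Phi_B/2) with Phi_B from eq. (55)
    return np.sin(0.5 * Phi0 * (y - eps) / (y**2 - 1.0)) ** 2

# sample densely on both sides of the pole at y = 1
I = 0.0
for lo, hi in ((ymin, 1 - 1e-6), (1 + 1e-6, ymax)):
    y = np.linspace(lo, hi, 200001)
    I += np.mean(integrand(y)) * (hi - lo)

print(f"width = {ymax - ymin:.3f}, numeric I = {I:.3f}, estimate Phi0/pi = {Phi0/np.pi:.3f}")
```

The width of the integration region reproduces (58) exactly (∆y = 2Φ_0/π = 0.1), and the integral comes out of the order of the average-value estimate Φ_0/π = 0.05, somewhat above it because sin² exceeds 1/2 over much of the interval; this is why the more careful splitting (61) is introduced.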
Adding usual matter effect
The matter potential V_e = √2 G_F n_e does not depend on energy in the range we are considering. The equality V_e ≈ V_vac determines the MSW resonance energy E_MSW. Since in this setup the mixing is not changed by the background, the MSW resonance condition has the usual form:
∆m²_eff(E) cos 2θ/(2E) = V_e. (64)
There are three possibilities depending on the relative values of V_e and V_R^vac.
I. V_e < V_R^vac: In this case the refraction resonance is below the MSW resonance, E_R^B < E_MSW (see Fig. 3). There are three crossings of V_vac(y) with V_e in the neutrino channel:
(i) Standard MSW resonance. It is shifted to higher energies due to the background contribution.
The resonance energy with the background correction can be found from Eq. (64). The expression simplifies in the case y_MSW ≫ 1, when we can take the asymptotic value ∆m²_eff(E) ≈ ∆m²(1 + r). As a result,
E_MSW = E_MSW,0 (1 + r), (65)
where E_MSW,0 is the standard resonance energy without the background:
E_MSW,0 = cos 2θ ∆m²/(2V_e).
The shift of MSW resonance can be used to search for the background effect.
(ii) New crossing near refraction resonance, y ≈ 1.
(iii) New crossing with the low energy branch of V tot .
In theν− channel there are two crossings: (i) near the refraction resonance; (ii) with low energy branch of V sum .
In the crossing points the mixing in medium (matter plus background) becomes maximal.
If V_e ≪ V_R^vac, E_R^B ≪ E_MSW, at low energies and in short baseline experiments the effects of the four new crossings become unobservable, because at these crossings Φ ≪ 1.
II. V_e > V_R^vac: In this case the refraction resonance is above the MSW resonance: E_R^B > E_MSW. Depending on ε, the shift of the MSW resonance can be to higher or lower energies.
As before there are two new crossings in the ν−channel and two new crossings in theν−channel.
In the ν channel one crossing is near the refraction resonance, and another one is in the high energy branch; the energy of the latter can be substantially larger than y = 1. In the ν̄ channel the two crossings are near the refraction resonance, in the low energy branch.
III. The case of V e ≈ V vac R is of special interest: the standard MSW resonance coincides with the refraction resonance, while two new resonances (at y > 1 and at y < 1) can be far from the refraction resonance y = 1.
Generation of mixing in the background
In the previous consideration the matrix of potentials had only one entry, and so it was diagonal in the mass eigenstate basis. If the couplings of the other mass states with the background are not neglected, the transition ν_1 χ̄ → ν_2 χ̄ generates a non-diagonal element of the matrix of potentials which is proportional to g_1 g_2^*. In the 2ν case the total Hamiltonian becomes
H_B = ( 0, α V_B ; α^* V_B, V_vac + V_B ) = V_vac ( 0, ακ ; α^* κ, 1 + κ ), (66)
(the matrix is written row by row), where
α ≡ g_1 g_2^*/(|g_2|² − |g_1|²), V_B = V_B(|g|² → |g_2|² − |g_1|²),
and V_B(|g|²) is the background potential discussed in the previous sections; κ is defined in (38).
Notice that the resonance energies are different for the different neutrino mass states ν_i:
E_2R − E_1R = ∆m²/(2m_χ),
but this difference is still much smaller than the scale of smearing due to the motion of the scatterers. Therefore we can neglect the dependence of the potentials on the neutrino masses, and the only relevant dependence on the type of neutrino is in the coupling constants.
Diagonalization of the matrix (66) gives the difference of the eigenvalues,
R_∆ = √[(1 + κ)² + (2ακ)²], (67)
and the mixing angle of the mass states,
sin² 2θ_B = (2ακ)²/[(1 + κ)² + (2ακ)²]. (68)
The flavor mixing angle becomes
θ_f = θ + θ_B. (69)
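Equations (67) and (68) are straightforward to verify against a direct numerical diagonalization. A small sketch with illustrative real values of α and κ (in general α can be complex):

```python
import numpy as np

# Check eqs. (67)-(68): eigenvalue splitting and mixing of the Hamiltonian (66),
# written in units of V_vac.  alpha and kappa are illustrative numbers.
alpha, kappa = 0.3, 0.8
H = np.array([[0.0,           alpha * kappa],
              [alpha * kappa, 1.0 + kappa]])

ev = np.linalg.eigvalsh(H)                 # ascending eigenvalues
R_num = ev[1] - ev[0]
R_ana = np.sqrt((1 + kappa)**2 + (2 * alpha * kappa)**2)                     # eq. (67)

s2_ana = (2 * alpha * kappa)**2 / ((1 + kappa)**2 + (2 * alpha * kappa)**2)  # eq. (68)
theta_B = 0.5 * np.arctan2(2 * alpha * kappa, 1 + kappa)
s2_num = np.sin(2 * theta_B)**2

print(R_num, R_ana, s2_num, s2_ana)
```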
Let us consider different limits and benchmark points.
1. y → 0: R_∆ → 1 and θ_B → 0. The background effect is negligible.
2. y → y_c: the cancellation point (V_B = −V_vac) becomes the energy of the MSW resonance on the background. Here the mixing is maximal, sin² 2θ_B = 1, and the splitting is non-zero: R_∆ = 2α. The transition probability equals P ≈ sin²(αΦ_vac).
3. In the peak, y ≈ 1: V_B ≫ V_vac, the ratio equals
R_∆ = (V_B/V_vac) √(1 + 4α²) = κ √(1 + 4α²),
and the angle is
sin² 2θ_B = 4α²/(1 + 4α²).
The transition probability equals P = sin² 2(θ + θ_B) sin²(Φ_vac R_∆/2).
For small α, in comparison with the no-mixing case the modifications of P are small. The most significant change is in the cancellation region.
Finally, let us comment on the case of three different fermions χ_j, one per generation. If the VEV of φ is the only source of neutrino mass, then the couplings are diagonal in the mass basis. Furthermore, the transition ν_i χ̄_i → ν_j χ̄_j will not form a potential, since the final χ̄_j differs from the initial χ̄_i, the two being orthogonal to each other. In this case the matrix of potentials is diagonal, and the difference of the diagonal elements, V = V_i − V_j ∝ |g_i|² − |g_j|², will enter the expressions considered above.
Resonance refraction and Glashow resonance
In the Standard Model resonance refraction is realized at the Glashow resonance, that is, in ν̄_e − e scattering with the W boson as the mediator. The resonance energy equals E_R = m_W²/(2m_e) ≈ 6.4 PeV. The dependence of the potential on the neutrino energy is described by Eq. (14) with ε = −1, V_0 = √2 G_F n_e and ξ = 3g_W²/(16π).
At low energies the potential coincides with the Wolfenstein potential. The difference from what we have discussed before is that the coupling is large, g_W ∼ O(1). Therefore the width of the resonance is not negligible, the enhancement is not extremely strong, and the smearing effect is weaker. The maxima
|V_max| = V_0 m_W/Γ_W = V_0 · 16π/(3g_W²)
are achieved at E = E_R(1 ± Γ_W/m_W).
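The quoted numbers can be checked with standard electroweak inputs (PDG-like values, assumed here): ξ = 3g_W²/(16π) indeed comes out numerically close to Γ_W/m_W, so the relative half-width of the refraction resonance matches the physical W width.

```python
import math

# Numerical check of the Glashow-resonance numbers quoted in the text.
# Inputs are PDG-like values (assumed): G_F in GeV^-2, masses and width in GeV.
G_F   = 1.166e-5
m_W   = 80.4
m_e   = 0.511e-3
Gam_W = 2.085

E_R  = m_W**2 / (2 * m_e) / 1e6          # resonance energy in PeV
g_W2 = 8 * m_W**2 * G_F / math.sqrt(2)   # SU(2) coupling squared from G_F
xi   = 3 * g_W2 / (16 * math.pi)         # relative half-width parameter

print(f"E_R = {E_R:.1f} PeV, xi = {xi:.3f}, Gamma_W/m_W = {Gam_W/m_W:.3f}")
```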
In the resonance region the vacuum contribution ∆m²/2E is negligible: r⁻¹ ∼ 10⁻⁶. Vacuum mixing is strongly suppressed. Furthermore, damping due to absorption can be substantial.
The refraction length in the resonance can be reduced by a factor of 20 in comparison with the Wolfenstein length, being of the order of 300 km. However, the existence of observable effects at the Earth is questionable.
1. Oscillation effects with the usual ∆m² and θ are negligible. The refraction index is still very close to 1, so that bending and refraction effects are negligible too.
2. One can explore possible effects in astrophysical objects, the sources of high energy neutrinos.
3. Mixing of active neutrinos with sterile neutrinos of mass ∼10² eV can be considered. In this case ∆m²/(2E_R) ∼ V_e and the mixing can be enhanced in matter.
Applications to specific experiments
Signatures and implications
Recall that the oscillatory pattern in terms of the universal variables R_∆(y) and y depends on (i) r, the relative strength of the interactions with the background (39); (ii) ε, the charge asymmetry of the background; (iii) the baseline L. Thus, observing the oscillatory pattern at a given L, one can determine ε and r (which is a combination of the fundamental parameters and the density of the background (39)) or put bounds on these parameters.
Observable effects of the background vanish completely if r → 0; however, they do not disappear when ε → 1. At ε = 1 the resonance is absent, the cancellation point is at y_c = 1, and
R_∆(y) = 1 + r y/(y + 1).
So the corrections increase with y: at y = 1 the ratio equals R_∆ = 1 + r/2, and for y → ∞, R_∆ = 1 + r.
For large energies the background effects are determined by r, and the dependence on ε is weak.
To some extent the effects of ε and r on the oscillatory pattern are correlated, and there is a certain degeneracy. However, variations of the pattern with r can be much more substantial than those with ε. The effect of ε is restricted by its minimal value −1.
The presence of the resonance bump testifies for ε ≠ 1. The value of ε determines the benchmark energies. With ε → −1 the region of distortions in the resonance interval becomes wider. Measuring the oscillatory pattern in different energy ranges allows to disentangle the effects of r and ε. Let us summarize the signatures of interactions with the background. For κ > 0 they include:
• deviation of the oscillatory pattern from sin²(A/y) in the low energy interval;
• oscillation dip at y < 1, with zero at y_c;
• increase of the probability at y → 1;
• bump at y ∼ 1;
• tail at y > 1.2, which corresponds to a larger ∆m²_eff than at low energies.
In the presence of usual matter we have, in addition, the shift of the MSW resonance and the new crossings around E_R^B. For κ < 0 the dip is at higher energies, and in asymptotics the effective ∆m²_eff is smaller than at low energies.
Thus, to search for the effects at fixed L one can consider different energy intervals. For a given neutrino beam one can use different L, e.g., the results from near and far detectors.
MiniBooNE excess and resonance refraction
The low energy excess of events reported by the MiniBooNE collaboration [7] could be a manifestation of resonance refraction [6]. The background is composed of the overdense relic neutrinos. In this case m_χ = 0.05 eV and ε ≈ 0.
The best fit of the MiniBooNE data is obtained for the values of the parameters
E_R^B = (320 − 340) MeV, Y ≡ g²(n_χ + n̄_χ)/(8m_χ) ≥ 10⁻³ eV². (71)
Then
m_φ = √(2 m_χ E_R^B) = 5.8 keV. (72)
Notice that the mediator and background particles are light enough and therefore the astrophysical bounds on g are applicable (see sect. 2.1).
From (71) we obtain
V_B^0 = 2Y/E_R^B = 5.9 · 10⁻¹² eV, V_R^vac = 3.7 · 10⁻¹² eV, (73)
and correspondingly r = 4Y/∆m² = 1.59 and y_c = 0.62. Thus, in the resonance region and above it the background potential dominates. The usual matter potential is very small: V_e = 2 · 10⁻¹³ eV.
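These benchmark numbers are easy to reproduce. A sketch in eV units with illustrative central values E_R^B = 335 MeV and ∆m² = 2.5·10⁻³ eV²; the relation y_c = (1 + r)^(−1/2) used below follows from setting R_∆(y_c) = 0 for ε = 0:

```python
import numpy as np

# Reproducing the MiniBooNE benchmark numbers (71)-(73); all inputs in eV units.
# Illustrative central values (assumed): E_R = 335 MeV, Y = 1e-3 eV^2, Dm2 = 2.5e-3 eV^2.
m_chi = 0.05          # eV
E_R   = 335e6         # eV
Y     = 1e-3          # eV^2
Dm2   = 2.5e-3        # eV^2

m_phi   = np.sqrt(2 * m_chi * E_R)   # eq. (72)
V_B0    = 2 * Y / E_R                # eq. (73)
V_vac_R = Dm2 / (2 * E_R)
r       = 4 * Y / Dm2                # = V_B0 / V_vac_R
y_c     = 1 / np.sqrt(1 + r)         # cancellation point for eps = 0 (derived assumption)

print(f"m_phi = {m_phi/1e3:.1f} keV, V_B0 = {V_B0:.2e} eV, "
      f"V_vac_R = {V_vac_R:.2e} eV, r = {r:.2f}, y_c = {y_c:.2f}")
```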
The MiniBooNE baseline L_MB = 541 m corresponds to
1/L_MB = 3.1 · 10⁻¹⁰ eV ≫ V_R^vac, V_B^0,
which means that the phase is very small, Φ ≪ 1, everywhere except for a narrow region close to y = 1. The resonance peak is smeared by the energy resolution.
Let us show that this solution is excluded because of the strong dependence of the effective ∆m²_eff on energy (y). In Fig. 6 we show ∆m²_eff as a function of energy for E_R = 320 MeV, ε = 0 and different values of r. At low energies, y ≪ 1, ∆m²_eff ≈ ∆m² (as in vacuum), while above the resonance
∆m²_eff ≈ [1 + r y²/(y² − 1)] ∆m². (74)
According to this equation, for y = 2 and y = 3, which correspond to E = 680 MeV and 1020 MeV, the enhancement of ∆m²_eff is given by the factors 3.12 and 2.79, respectively. In asymptotics, y → ∞, it converges to 2.59. Fig. 6 also shows the results of measurements of the "atmospheric" ∆m² ≈ ∆m²_31 at different energies. At the lowest energies, E = (2 − 5) MeV (y ∼ 10⁻²), the data on ∆m²_ee ≈ ∆m²_31 are provided by the reactor experiments [33–35]. Here the background effect can be neglected. The T2K experiment [36, 37] measures ∆m²_32 at (0.3 − 1.3) GeV, which is slightly above the resonance. At higher energies (essentially in asymptotics) the data are given by NOvA [38] and then by MINOS and MINOS+ [39].
At even higher energies IceCube DeepCore [40] and ANTARES [41] give information on ∆m 2 32 .
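The enhancement factors quoted above follow directly from eq. (74). A minimal sketch for r = 1.59 and ε = 0:

```python
# Enhancement of the effective mass splitting above the resonance, eq. (74),
# for the MiniBooNE best-fit strength r = 1.59 (eps = 0).
r = 1.59

def enhancement(y):
    return 1 + r * y**2 / (y**2 - 1)

print(round(enhancement(2), 2))   # -> 3.12
print(round(enhancement(3), 2))   # -> 2.79
print(round(1 + r, 2))            # asymptotic value -> 2.59
```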
The main conclusion is that within the experimental error bars ∆m²_eff does not depend on energy over 4 orders of magnitude. This puts a strong bound on the strength of the interaction with the background:
r ≲ 0.01, (75)
which certainly excludes the value r > 1.6 required by the MiniBooNE explanation.
A similar result can be obtained for negative r. In this case, above the resonance the predicted values of ∆m²_eff are below the experimental points.
The same consideration, with the same conclusion, applies to the case of a bosonic background and a fermionic mediator. In particular, Fig. 6 remains unchanged. The only difference is that the potential is 2 times larger, which can be accounted for by the renormalization g → √2 g. The latter could have some implications for the particle physics model but not for the exclusion.
Bounds on the background effects
We have obtained the upper bound (75) on the strength of the background effects r for E_R ∼ 320 MeV. According to Fig. 6, a similar bound can be established in the interval of E_R from 10 MeV to 10 GeV. For E_R < 1 MeV no distortion is expected in the region of observations (i.e., at E > 1 MeV), while for E_R > 10 GeV the effect of the background in the observable region becomes much smaller than the vacuum effect, and it decreases with the decrease of energy.
The strength r (39) can be written as
r = 2V_0 E_R/∆m². (76)
This means that for given E_R and r the potential is restricted by
V_0 = r ∆m²/(2E_R). (77)
The largest value of E_R for which a given bound on r exists gives the strongest restriction on V_0. Therefore, according to (77),
r(E_R = 0.32 GeV) = r(E_R = 3 GeV) · (0.32 GeV/3 GeV) ≈ 10⁻³. (78)
Thus, consideration at higher energies allows to strengthen the bound on r.
For background particles at rest the strength factor can be written as
r = g² n_χ/(2 m_χ ∆m²). (79)
Is the bound on r that we obtained from resonance refraction substantial, or are there other, stronger bounds? One such bound on the system comes from the contribution of χ to the dark matter in the Universe:
ρ_χ = E_χ n_χ ≥ m_χ n_χ. (80)
For a given value of m_χ this gives the number density of χ which composes the fraction ρ_χ/ρ_DM of the local dark matter:
n_χ ∼ (ρ_DM/m_χ)(ρ_χ/ρ_DM). (81)
Inserting this expression into (79) and taking for the local energy density of DM ρ_DM = 0.4 GeV/cm³, we obtain the strength factor
r = 2.6 · 10⁻⁷ (g/10⁻³)² (0.05 eV/m_χ)² (ρ_χ/ρ_DM). (82)
For g satisfying the bound (2), ρ_χ = ρ_DM and m_χ = 0.05 eV, Eq. (82) gives r = 2.6 · 10⁻¹⁴, which is much below the refraction bound. For these values of the parameters n_χ = 8 · 10⁹ cm⁻³. r can be enhanced if we take a smaller mass of χ and g = 10⁻³, which satisfies the laboratory bounds but requires a more complicated cosmological evolution that allows to avoid the BBN and CMB bounds. Then r = 10⁻³ can be obtained for m_χ = 8 · 10⁻⁴ eV. The corresponding number density of χ equals n_χ = 5 · 10¹¹ cm⁻³.
This consideration is valid for a bosonic background, with the change of subscripts χ ↔ φ in Eqs. (81)-(82). For a fermionic background additional restrictions follow from the Pauli principle. Indeed, the density indicated above gives the Fermi momentum of the degenerate gas p_F = (6π² n_χ)^{1/3} = 1.3 eV. That is, E_χ ≈ p_F ≫ m_χ, and therefore we deal here with a strongly degenerate fermion gas.
Consequently, in all the considerations above we should substitute m_χ → E_χ ∼ p_F = (6π² n_χ)^{1/3}. In particular, E_R = m_φ²/(2E_χ) and
r = g² n_χ/(2∆m² E_χ) = g² n_χ^{2/3}/[2(6π²)^{1/3} ∆m²]. (83)
Using the expression for the energy density in χ,
ρ_χ = E_χ n_χ = (6π²)^{1/3} n_χ^{4/3}, (84)
we obtain
r = g² √ρ_χ/(2√6 π ∆m²). (85)
Numerically this gives
r = 4.7 · 10⁻⁸ (g/10⁻³)² √(ρ_χ/ρ_DM).
Thus, r is determined by the coupling constant and the fraction of the DM in χ, and it does not depend on m_χ. The value r ≤ 4.7 · 10⁻⁸ is much smaller than the sensitivity to the resonance refraction effects of experiments at laboratory energies.
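The numerical coefficients 2.6·10⁻⁷ and 4.7·10⁻⁸ can be reproduced in natural units. A sketch assuming ρ_DM = 0.4 GeV/cm³ and ∆m² = 2.4·10⁻³ eV²:

```python
import numpy as np

# Numerical check of the strength factors (82) and (85).  Natural units:
# 1 cm^-1 = 1.9733e-5 eV, so 1 cm^-3 = (1.9733e-5)^3 eV^3.
cm_inv = 1.9733e-5                      # eV
rho_DM = 0.4e9 * cm_inv**3              # 0.4 GeV/cm^3 in eV^4
Dm2    = 2.4e-3                         # eV^2 (assumed atmospheric splitting)
g      = 1e-3

# non-degenerate case, eq. (79) with n_chi = rho_DM / m_chi, cf. eq. (82)
m_chi = 0.05                            # eV
n_chi = rho_DM / m_chi
r_82  = g**2 * n_chi / (2 * m_chi * Dm2)

# strongly degenerate fermionic background, eq. (85) with rho_chi = rho_DM
r_85 = g**2 * np.sqrt(rho_DM) / (2 * np.sqrt(6) * np.pi * Dm2)

print(f"n_chi = {n_chi/cm_inv**3:.1e} cm^-3, r_82 = {r_82:.1e}, r_85 = {r_85:.1e}")
```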
Conclusions
6. The effects of the background can be considered as a modification of the effective ∆m²_eff(y) with a peculiar dependence on energy.
7. As an example we applied our results to the MiniBooNE excess interpreted as a bump produced by the refraction resonance. We showed that this interpretation is excluded because of the strong difference of ∆m²_eff expected at high energies (T2K, NOvA, MINOS, MINOS+, IceCube, ANTARES) and at low energies (reactor experiments), in contrast to observations. We obtained a bound on the relative strength of neutrino interactions with the background: r < (0.001 − 0.01).
Figure 1: Feynman diagrams for the scattering of neutrinos on a background composed of fermions χ (left) and antifermions χ̄ (right).
Figure 2: The dependence of the potential V_B/V_0 on energy, y = E/E_R, for different values of ε.
which is realized for short-baseline experiments, such as reactor neutrino experiments, LSND and MiniBooNE, and low energy LBL experiments, e.g., T2K.
Figure 3: The dependence of the total potential, (V_B + V_vac)/V_0, on energy y for different values of ε. The horizontal lines correspond to the usual matter potential V_e/V_0 for neutrinos and antineutrinos. Crossings of these lines with (V_B + V_vac)/V_0 show the positions of the MSW resonances in the neutrino (empty boxes) and antineutrino (empty circles) channels.
Figure 5: The oscillatory factor as a function of energy y for three different values of the baseline L. We take r = 1.6 and ε = 0. The dotted lines correspond to the oscillatory factors for pure vacuum oscillations (r = 0).
For small Φ_0 the background effect is small everywhere except for the resonance region. For instance, if Φ_0 = π/20, then ∆y = 0.2 (ε = −1). In the resonance region, E = E_R(1 ± 0.1), we have sin²Φ ≥ 0.5, while outside the resonance sin²Φ ≈ sin²Φ_0 = 0.024. Let us consider the total contribution from the resonance interval. Here the number of events is
4. In the refraction resonance, y = 1 (V_B = 0): R_∆ = 1 and θ_B = 0.
5. In asymptotics, y → ∞: V_B/V_vac → r. Correspondingly,
R_∆ = √[(1 + r)² + (2αr)²], and sin² 2θ_B = (2αr)²/[(1 + r)² + (2αr)²].
Figure 6: The effective mass squared difference as a function of the neutrino energy for different values of r. The curves are normalized at E → 0 to the value of ∆m²_32 from the global fit of all the data. Explanation of the MiniBooNE requires r > 1.6. Shown are the values of ∆m²_32, ∆m²_31 and ∆m²_ee extracted from experiments at different energies.
1. In general, the medium potential is a function of the neutrino energy, and this function depends on the C-asymmetry of the background. The energy dependence of V_B may have a resonance character related to the exchange of an (on shell) mediator of interactions. The resonance is realized at √s = M_med, and for light mediators and light scatterers (which requires an extension of the Standard Model) the resonance refraction can occur at energies available at laboratories.
2. The relative correction to the vacuum (kinetic) term from the background vanishes at low energies; it can dominate in the resonance and above it. At high energies the correction converges to a constant. The interplay of the energy dependent potential V_B(y) and the vacuum contribution V_vac(y) has several important features: cancellation of the contributions, which corresponds to the MSW resonance on the background (when mixing in the background is introduced); above the resonance, V_B(y) gives a correction to V_vac(y) which does not disappear in the asymptotics E → ∞.
3. The background can produce mixing of the mass states, that is, a non-diagonal matrix of potentials in the mass basis. For small mixing a substantial effect on oscillations appears in the region around the cancellation point (the MSW resonance on a background).
4. For long-baseline experiments the usual matter effect should be added. The interaction with the background shifts the energy of the MSW resonance (which provides an important signature) and leads to the appearance of new resonances around E_R^B.
5. Signatures of refraction on the background include: (i) deviation of the oscillatory pattern in energy from sin²(A/E), (ii) a dip of the oscillation probability below or above the resonance, (iii) a bump in the resonance region, (iv) an additional contribution to V_vac(y) above the refraction resonance which does not disappear in the asymptotics.
As an option, several new fermions χ_j can be introduced. Notice that the χ_i themselves can be 4-component Dirac particles, which implies more degrees of freedom. χ_R can be the left antineutrino, so that neutrinos are Majorana particles. The coupling can be generated via mixing of the singlet scalar field φ with the Higgs boson doublet (Higgs portal) [31]. Alternatively, φ can couple to a RH singlet (sterile) neutrino, which in turn couples (mixes) with the active neutrinos (lepton and Higgs doublets), that is, via the RH neutrino portal. In the Majorana case the singlet φ should mix with the neutral component of the Higgs triplet.
References
[1] L. Wolfenstein, "Neutrino oscillations in matter", Phys. Rev. D 17 (1978) 2369–2374.
[2] R. Opher, "Coherent scattering of cosmic neutrinos", Astron. Astrophys. 37 (1974) 135–137.
[3] V. D. Barger et al., "Matter Effects on Three-Neutrino Oscillations", Phys. Rev. D 22 (1980) 2718.
[4] P. Langacker, J. P. Leveille and J. Sheiman, "On the Detection of Cosmological Neutrinos by Coherent Scattering", Phys. Rev. D 27 (1983) 1228.
[5] C. Lunardini and A. Yu. Smirnov, "The minimum width condition for neutrino conversion in matter", Nucl. Phys. B 583 (2000) 260–290.
[6] J. Asaadi et al., "New light Higgs boson and short-baseline neutrino anomalies", Phys. Rev. D 97 (2018) 075021.
[7] MiniBooNE Collaboration, "Updated MiniBooNE Neutrino Oscillation Results with Increased Data and New Background Studies" (2020), arXiv:2006.16883 [hep-ex].
[8] J. F. Nieves and S. Sahu, "Neutrino effective potential in a fermion and scalar background", Phys. Rev. D 98 (2018) 063003, arXiv:1808.01629 [hep-ph].
[9] J. F. Nieves and S. Sahu, "Neutrino damping in a fermion and scalar background", Phys. Rev. D 99 (2019) 095013, arXiv:1812.05672 [hep-ph].
[10] S.-F. Ge and S. J. Parke, "Scalar Nonstandard Interactions in Neutrino Oscillation", Phys. Rev. Lett. 122 (2019) 211801, arXiv:1812.08376 [hep-ph].
[11] K.-Y. Choi, E. J. Chun and J. Kim, "Neutrino Oscillations in Dark Matter", Phys. Dark Univ. 30 (2020) 100606, arXiv:1909.10478 [hep-ph].
[12] K. S. Babu, G. Chauhan and P. S. B. Dev, "Neutrino nonstandard interactions via light scalars in the Earth, Sun, supernovae, and the early Universe", Phys. Rev. D 101 (2020) 095029, arXiv:1912.13488 [hep-ph].
[13] K.-Y. Choi, E. J. Chun and J. Kim, "Dispersion of neutrinos in a medium" (2020), arXiv:2012.09474 [hep-ph].
[14] S.-F. Ge and H. Murayama, "Apparent CPT Violation in Neutrino Oscillation from Dark Non-Standard Interactions" (2019), arXiv:1904.02518 [hep-ph].
[15] A. Berlin, "Neutrino Oscillations as a Probe of Light Scalar Dark Matter", Phys. Rev. Lett. 117 (2016) 231801, arXiv:1608.01307 [hep-ph].
[16] W. Rodejohann, X.-J. Xu and C. E. Yaguna, "Distinguishing between Dirac and Majorana neutrinos in the presence of general interactions", JHEP 05 (2017) 024, arXiv:1702.05721 [hep-ph].
[17] M. Lindner et al., "Neutrino-electron scattering: general constraints on Z′ and dark photon models", JHEP 05 (2018) 098, arXiv:1803.00060 [hep-ph].
[18] G. Arcadi et al., "New Physics Probes: Atomic Parity Violation, Polarized Electron Scattering and Neutrino-Nucleus Coherent Scattering" (2019), arXiv:1906.04755 [hep-ph].
[19] M. Lindner, W. Rodejohann and X.-J. Xu, "Coherent Neutrino-Nucleus Scattering and new Neutrino Interactions", JHEP 03 (2017) 097, arXiv:1612.04150 [hep-ph].
[20] Y. Farzan et al., "Probing neutrino coupling to a light scalar with coherent neutrino scattering", JHEP 05 (2018) 066, arXiv:1802.05171 [hep-ph].
[21] V. Brdar, W. Rodejohann and X.-J. Xu, "Producing a new Fermion in Coherent Elastic Neutrino-Nucleus Scattering: from Neutrino Mass to Dark Matter", JHEP 12 (2018) 024, arXiv:1810.03626 [hep-ph].
[22] J. D. Bjorken et al., "New Fixed-Target Experiments to Search for Dark Gauge Forces", Phys. Rev. D 80 (2009) 075018, arXiv:0906.0580 [hep-ph].
[23] B. Batell, M. Pospelov and A. Ritz, "Exploring Portals to a Hidden Sector Through Fixed Targets", Phys. Rev. D 80 (2009) 095024, arXiv:0906.5614 [hep-ph].
[24] R. Essig et al., "Discovering New Light States at Neutrino Experiments", Phys. Rev. D 82 (2010) 113008, arXiv:1008.0636 [hep-ph].
[25] J. P. Lees et al., "Search for a Dark Photon in e⁺e⁻ Collisions at BaBar", Phys. Rev. Lett. 113 (2014) 201801, arXiv:1406.2980 [hep-ex].
[26] J. P. Lees et al., "Search for a muonic dark force at BABAR", Phys. Rev. D 94 (2016) 011102, arXiv:1606.03501 [hep-ex].
[27] R. Harnik, J. Kopp and P. A. N. Machado, "Exploring nu Signals in Dark Matter Detectors", JCAP 1207 (2012) 026, arXiv:1202.6073 [hep-ph].
[28] E. G. Adelberger et al., "Particle Physics Implications of a Recent Test of the Gravitational Inverse Square Law", Phys. Rev. Lett. 98 (2007) 131104, arXiv:hep-ph/0611223.
[29] S. Schlamminger et al., "Test of the equivalence principle using a rotating torsion balance", Phys. Rev. Lett. 100 (2008) 041101, arXiv:0712.0607 [gr-qc].
[30] K.-Y. Choi, J. Kim and C. Rott, "Constraining dark matter-neutrino interactions with IceCube-170922A", Phys. Rev. D 99 (2019) 083018, arXiv:1903.03302 [astro-ph.CO].
[31] N. Khan, "Neutrino mass and the Higgs portal dark matter in the ESSFSM", Adv. High Energy Phys. 2018 (2018) 4809682, arXiv:1707.07300 [hep-ph].
[32] A. Y. Smirnov, "Neutrino and the Dark side of the Universe", talk at the 3rd World Summit on Exploring the Dark Side of the Universe (2020), https://indico.cern.ch/event/801461/contributions/3728174/.
[33] D. Adey et al., "Measurement of the Electron Antineutrino Oscillation with 1958 Days of Operation at Daya Bay", Phys. Rev. Lett. 121 (2018) 241805, arXiv:1809.02261 [hep-ex].
[34] G. Bak et al., "Measurement of Reactor Antineutrino Oscillation Amplitude and Frequency at RENO", Phys. Rev. Lett. 121 (2018) 201801, arXiv:1806.00248 [hep-ex].
[35] H. de Kerret et al., "Double Chooz θ13 measurement via total neutron capture detection", Nature Phys. 16 (2020) 558–564, arXiv:1901.09445 [hep-ex].
[36] K. Abe et al., "Measurement of the charged-current electron (anti-)neutrino inclusive cross-sections at the T2K off-axis near detector ND280", JHEP 10 (2020) 114, arXiv:2002.11986 [hep-ex].
[37] K. Abe et al., "Constraint on the matter-antimatter symmetry-violating phase in neutrino oscillations", Nature 580 (2020) 339–344 [Erratum: Nature 583 (2020) E16], arXiv:1910.03887 [hep-ex].
First Measurement of Neutrino Oscillation Parameters using Neutrinos and Antineutrinos by NOvA. M A Acero, 10.1103/PhysRevLett.123.151803arXiv:1906.04907Phys. Rev. Lett. 123151803hep-exM. A. Acero et al. "First Measurement of Neutrino Oscillation Parameters using Neutrinos and Antineutrinos by NOvA". In: Phys. Rev. Lett. 123.15 (2019), p. 151803. doi: 10.1103/ PhysRevLett.123.151803. arXiv: 1906.04907 [hep-ex].
Precision Constraints for Three-Flavor Neutrino Oscillations from the Full MINOS+ and MINOS Dataset. P Adamson, 10.1103/PhysRevLett.125.131802arXiv:2006.15208Phys. Rev. Lett. 125131802hep-exP. Adamson et al. "Precision Constraints for Three-Flavor Neutrino Oscillations from the Full MINOS+ and MINOS Dataset". In: Phys. Rev. Lett. 125.13 (2020), p. 131802. doi: 10.1103/PhysRevLett.125.131802. arXiv: 2006.15208 [hep-ex].
Measurement of Atmospheric Tau Neutrino Appearance with IceCube DeepCore. M G Aartsen, 10.1103/PhysRevD.99.032007arXiv:1901.05366Phys. Rev. D. 99332007hep-exM. G. Aartsen et al. "Measurement of Atmospheric Tau Neutrino Appearance with IceCube DeepCore". In: Phys. Rev. D 99.3 (2019), p. 032007. doi: 10.1103/PhysRevD.99.032007. arXiv: 1901.05366 [hep-ex].
Measuring the atmospheric neutrino oscillation parameters and constraining the 3+1 neutrino model with ten years of ANTARES data. A Albert, 10.1007/JHEP06(2019)113arXiv:1812.08650JHEP 06. 113hep-exA. Albert et al. "Measuring the atmospheric neutrino oscillation parameters and constraining the 3+1 neutrino model with ten years of ANTARES data". In: JHEP 06 (2019), p. 113. doi: 10.1007/JHEP06(2019)113. arXiv: 1812.08650 [hep-ex].
| zyda_arxiv-1424000 |
ON THE DYNAMICS OF A NON-LOCAL PARABOLIC EQUATION ARISING FROM THE GIERER-MEINHARDT SYSTEM
6 Mar 2017
Nikos I Kavallaris
Takashi Suzuki
The purpose of the current paper is to contribute to the comprehension of the dynamics of the shadow system of an activator-inhibitor system known as the Gierer-Meinhardt model. Shadow systems are intended to work as an intermediate step between single equations and reaction-diffusion systems. In the case where the inhibitor's response to the activator's growth is rather weak, the shadow system of the Gierer-Meinhardt model reduces to a single, though non-local, equation whose dynamics will be investigated. We mainly focus on the derivation of blow-up results for this non-local equation, which can be seen as instability patterns of the shadow system. In particular, a diffusion-driven instability (DDI), or Turing instability, in the neighbourhood of a constant stationary solution, which is destabilised via diffusion-driven blow-up, is obtained. The latter actually indicates the formation of some unstable patterns, whilst some stability results of global-in-time solutions towards non-constant steady states guarantee the occurrence of some stable patterns.
Introduction
As early as 1952, A. Turing in his seminal paper [27] attempted, by using reaction-diffusion systems, to model the phenomenon of morphogenesis, the regeneration of tissue structures in hydra, an animal of a few millimeters in length made up of approximately 100,000 cells. Further observations on the morphogenesis in hydra led to the assumption of the existence of two chemical substances (morphogens), a slowly diffusing (short-range) activator and a rapidly diffusing (long-range) inhibitor. A. Turing was the first to indicate that although diffusion has a smoothing and trivializing effect on a single chemical, in the case of the interaction of two or more chemicals different diffusion rates could force the uniform steady states of the corresponding reaction-diffusion systems to become unstable and lead to nonhomogeneous distributions of such reactants. Such a phenomenon is now known as diffusion-driven instability (DDI), or Turing instability.
Exploring Turing's idea further, A. Gierer and H. Meinhardt, [2], proposed in 1972 the following activator-inhibitor system, known since then as the Gierer-Meinhardt system, to model the regeneration phenomenon of hydra located in a domain Ω ⊂ R^N, N ≥ 1:

u_t = ǫ²∆u − u + u^p/v^q, in Ω × (0, T), (1.1)

τv_t = D∆v − v + u^r/v^s, in Ω × (0, T), (1.2)

∂u/∂ν = ∂v/∂ν = 0, on ∂Ω × (0, T), (1.3)

u(x, 0) = u₀(x) > 0, v(x, 0) = v₀(x) > 0, in Ω, (1.4)

where ν denotes the unit outer normal vector to ∂Ω whilst u and v stand for the concentrations of the activator and the inhibitor respectively. System (1.1)-(1.4) intends to provide a thorough explanation of symmetry breaking as well as of de novo pattern formation by virtue of the coupling of a local activation and a long-range inhibition process. The inserted nonlinearities describe the fact that the activator promotes the differentiation process and stimulates its own production, whereas the inhibitor acts as a suppressant against the self-enhancing activator to prevent unlimited growth.
Here ǫ², D represent the diffusion coefficients, whereas the exponents, satisfying the conditions

p > 1, q, r > 0, and s > −1,

measure the morphogens' interactions. In particular, the dynamics of system (1.1)-(1.4) can be characterised by two numbers: the net self-activation index ρ ≡ (p − 1)/r and the net cross-inhibition index γ ≡ q/(s + 1). Indeed, ρ correlates the strength of self-activation of the activator with the cross-activation of the inhibitor. So, if ρ is large, then the net growth of the activator is large no matter the inhibitor's growth. On the other hand, γ measures how strongly the inhibitor suppresses the production of the activator and that of itself. Now if γ is large then the production of the activator is strongly suppressed by the inhibitor. Finally, the parameter τ quantifies the inhibitor's response against the activator's growth.
Guided by biological interpretation as well as by mathematical reasons, it is usually assumed that the parameters p, q, r, s satisfy the condition

ρ ≡ (p − 1)/r < q/(s + 1) ≡ γ,

or equivalently

p − rγ < 1. (1.5)

Condition (1.5) is called a Turing condition, whilst the reverse inequality

p − rγ > 1 (1.6)

will be referred to as an anti-Turing condition. The Turing condition guarantees, [21], that the spatially homogeneous equilibrium (u, v) = (1, 1) of the corresponding kinetic (ODE) system
du/dt = −u + u^p/v^q, τ dv/dt = −v + u^r/v^s, (1.7)
is stable provided τ < (s + 1)/(p − 1). Nevertheless, once diffusion terms are introduced, with ǫ² ≪ D, and under (1.5), then (u, v) = (1, 1) becomes unstable and bifurcation occurs; see also [21]. Therefore, diffusion-driven instability (DDI) takes place, which leads to pattern formation and thus explains the phenomenon of morphogenesis.
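The stability assertion for the kinetic system (1.7) can be illustrated numerically. In the sketch below, the parameter values p = 2, q = 1, r = 2, s = 0, τ = 1/2 are illustrative choices satisfying both the Turing condition (1.5) and τ < (s + 1)/(p − 1); a forward-Euler integration of (1.7) from a perturbation of the equilibrium (1, 1) is carried out.

```python
# Forward-Euler integration of the kinetic system (1.7):
#   du/dt = -u + u^p / v^q,   tau dv/dt = -v + u^r / v^s.
# Illustrative parameters satisfying the Turing condition (1.5),
# (p - 1)/r = 1/2 < q/(s + 1) = 1, and tau = 1/2 < (s + 1)/(p - 1) = 1.
p, q, r, s, tau = 2.0, 1.0, 2.0, 0.0, 0.5

u, v = 1.1, 0.95          # small perturbation of the equilibrium (1, 1)
dt = 1e-3
for _ in range(50_000):   # integrate up to t = 50
    du = -u + u ** p / v ** q
    dv = (-v + u ** r / v ** s) / tau
    u, v = u + dt * du, v + dt * dv

print(abs(u - 1.0) < 1e-3 and abs(v - 1.0) < 1e-3)  # perturbation has decayed
```

For this parameter choice the linearization at (1, 1) has complex eigenvalues with negative real part, so the perturbation decays through damped oscillations.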
Apart from its vital biological importance, system (1.1)-(1.4) also has interesting mathematical features and emerging singularities. As such, it has attracted a lot of attention from the field of mathematical analysis. Subjects of interest include the existence of global-in-time solutions, which was first investigated in [29] and then studied more thoroughly in [16,19]. The author in [6] proved that under the condition (p − 1)/r < 1 a global-in-time solution exists, which is an almost optimal result, also taking into consideration the results in [21]. Furthermore, [7] contains an investigation of the asymptotic behaviour of the solution of (1.1)-(1.4). In particular the authors showed that if τ = (s + 1)/(p − 1), s > 0, and
2√(d₁d₂)/(d₁ + d₂) ≥ (s + 1)(p − 1)/(sp), d₁ = ǫ², d₂ = τ⁻¹D,
then the global-in-time solution of (1.1)-(1.4) approaches uniformly a spatially homogeneous solution, which is always periodic in time unless it is a constant one. The occurrence of finite-time blow-up, which actually means unlimited growth for the activator, was first established in [16] and later in [8,13,30], whereas the case of a non-diffusing activator exhibiting finite-time blow-up is also investigated in [8]. The existence and stability of spiky stationary solutions is thoroughly studied in the survey paper [28].

As specified above, in the case of the Gierer-Meinhardt system the inhibitor diffuses much faster than the activator, i.e. ǫ² ≪ D, and thus the system (1.1)-(1.4) can be fairly approximated by its shadow system when D ≫ 1. The concept of a shadow system was introduced by Keener, [10], to describe the qualitative behaviour of reaction-diffusion systems when one of the diffusion coefficients is very large. Such a system is formed by a reaction-diffusion equation coupled with an ordinary differential equation (ODE) with non-local effects, and it actually contains all the essential dynamics of the original reaction-diffusion system. In particular, if there is a compact attractor for the shadow system then the original reaction-diffusion system has a compact attractor too; see also [3].
In the following we provide a formal derivation of the shadow system of the Gierer-Meinhardt system (1.1)-(1.4). A rigorous proof can be found in [17,18], where it is also shown that the convergence of the original reaction-diffusion system towards its shadow system is valid locally in time, except for an initial layer. Now, dividing (1.2) by D and letting D ↑ +∞ for any fixed t ∈ (0, T), then due to the boundary condition (1.3) v becomes spatially homogeneous, i.e. v(x, t) = ξ(t). Next, integrating the resulting equation over Ω we finally derive that u(x, t), ξ(t) satisfy the shadow system:
u_t = ǫ²∆u − u + u^p/ξ^q, in Ω × (0, T), (1.8)

τξ_t = −ξ + (1/ξ^s) ⨍_Ω u^r dx, in (0, T), (1.9)

∂u/∂ν = 0, on ∂Ω × (0, T), (1.10)

u(x, 0) = u₀(x) > 0 in Ω, ξ(0) = v̄₀ = (1/|Ω|) ∫_Ω v₀, (1.11)

where ⨍_Ω u^r dx ≡ (1/|Ω|) ∫_Ω u^r dx.
Note that (1.8)-(1.11) is non-local due to the presence of the integral term in (1.9). Since the convergence towards (1.8)-(1.11) holds only locally in time, there might be discrepancies between the global-in-time dynamics of (1.1)-(1.4) and those of (1.8)-(1.11) for some range of the involved parameters p, q, r, s; this has also been indicated in [12,15]. On the other hand, there are ranges of the involved parameters where the two systems have exactly the same long-time behaviour [12, Theorem 1], and thus it is worth investigating the shadow system (1.8)-(1.11), which is simpler compared to the full system (1.1)-(1.4), so that we can capture some of the features of (1.1)-(1.4).
Henceforth, we focus on the case τ = 0, i.e. when the inhibitor's response to the activator's growth is quite weak. We will investigate the dynamics of (1.8)-(1.11) for τ > 0 in a forthcoming paper. For τ = 0 the second equation (1.9) is solved as
ξ(t) = (⨍_Ω u^r(x, t) dx)^{1/(s+1)},

and thus the shadow system reduces to the following non-local problem
u_t = ∆u − u + u^p/(⨍_Ω u^r dx)^γ, in Ω × (0, T), (1.12)

∂u/∂ν = 0, on ∂Ω × (0, T), (1.13)

u(x, 0) = u₀(x) > 0, in Ω, (1.14)
where for simplicity we have set ǫ = 1.
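Indeed, substituting the above expression for ξ into the nonlinearity of (1.8) recovers (1.12), with γ = q/(s + 1) as defined in the Introduction:

```latex
\frac{u^p}{\xi^q}
= \frac{u^p}{\left(\fint_\Omega u^r\,dx\right)^{q/(s+1)}}
= \frac{u^p}{\left(\fint_\Omega u^r\,dx\right)^{\gamma}}.
```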
The rest of the current work is devoted to the study of problem (1.12)-(1.14), whose mathematical structure is intriguing. In particular, due to the presence of the non-local term and the monotonicity of its nonlinearity, problem (1.12)-(1.14) does not admit a maximum principle, [23], and so alternatives to comparison techniques should be employed to investigate its long-time behaviour. Some global-in-time existence and blow-up results for problem (1.12)-(1.14) were presented in [15], whereas some slowly moving spike solutions were constructed in [5]. In the current paper, we provide novel global-in-time and blow-up results, extending further the mathematical analysis provided in [15], as well as describing the form of the destabilization patterns developed due to the phenomenon of DDI.
In addition, the investigation of the non-local problem (1.12)-(1.14) is also attractive from the biological point of view. Specifically, it will reveal under which circumstances the dynamics of the interaction of the two morphogens (activator and inhibitor) can be controlled by governing only the dynamics of the activator itself.
The rest of the manuscript is composed of nine sections. In the next section we provide the main notation as well as some preliminary results used throughout the manuscript. Our main results, which include the existence of global-in-time and blowing-up solutions of (1.12)-(1.14), are presented in section 3. Section 4 contains the proof of a lower estimate of any solution of (1.12)-(1.14), which actually guarantees its well-posedness. In section 5 we treat the special case r = p + 1, whence problem (1.12)-(1.14) has a variational structure, and under the Turing condition we prove that global-in-time solutions converge towards steady states through Turing patterns. Section 6 contains a global-in-time existence result for (1.12)-(1.14) analogous to the one in [6]. In section 7 we derive proper estimates of L^ℓ-norms, ℓ > 1, of the solution u(x, t), which either lead to global-in-time existence or to finite-time blow-up. A DDI result, which is actually exhibited in the form of diffusion-driven blow-up for peaky initial data, is proven in section 8. Finally, section 9 investigates the blow-up rate as well as the blow-up profile of the derived blowing-up solutions, and section 10 summarizes the main conclusions of the current work.
Preliminaries
Throughout the manuscript, by ‖h‖_ℓ, ℓ > 0, we denote the L^ℓ-norm of a function h, defined as

‖h‖_ℓ := (⨍_Ω |h|^ℓ dx)^{1/ℓ}, 0 < ℓ < ∞,

whereas ‖h‖_{H¹} stands for the norm of the Sobolev space H¹(Ω), defined as

‖h‖_{H¹} := (⨍_Ω (|h|² + |∇h|²) dx)^{1/2}.
Moreover, if ∆ is the Laplace operator associated with Neumann boundary conditions, then by e^{t∆} we denote its semigroup. Then the well-known estimate, [29], holds:

‖e^{t∆}h‖_q ≤ C max{1, t^{−(N/2)(1/ℓ − 1/q)}} ‖h‖_ℓ, 1 ≤ ℓ ≤ q ≤ ∞. (2.1)
Note that under condition (1.5) the solution of the spatially homogeneous part

du/dt = −u + u^{p−rγ}, u(0) = ū₀ > 0, (2.2)
never exhibits blow-up, since the nonlinearity is sublinear, and its unique stationary state u = 1 is asymptotically stable. Below, by using linear stability analysis, we show that under condition (1.5) the steady state u = 1 can nevertheless be destabilised by diffusion. Indeed, the linearized problem of (1.12)-(1.14) around u = 1 is given by
φ_t = ∆φ + (p − 1)φ − rγ ⨍_Ω φ, in Ω × (0, T), ∂φ/∂ν = 0, on ∂Ω,
and can be written in the form of an evolution equation in X = L²(Ω) as

dφ/dt = −Aφ.
Here the generator A is the self-adjoint operator associated with the bilinear form (see Kato [9])

a(φ, w) = ⨍_Ω (∇φ · ∇w + (1 − p)φw) dx + rγ (⨍_Ω φ)(⨍_Ω w), φ, w ∈ V = H¹(Ω).

Now for φ = w we derive

a(φ, φ) = ‖∇φ‖₂² + (1 − p) ⨍_Ω φ² + rγ (⨍_Ω φ)²
= Σ_{j=1}^∞ μ_j² |(φ, ϕ_j)|² + Σ_{j=1}^∞ (1 − p)|(φ, ϕ_j)|² + rγ |(φ, ϕ₁)|²
= (1 − p + rγ)|(φ, ϕ₁)|² + Σ_{j=2}^∞ (μ_j² + 1 − p)|(φ, ϕ_j)|²,
where 0 = μ₁² < μ₂² ≤ · · · → ∞ denote the eigenvalues of −∆ associated with the Neumann boundary condition, and ϕ_j is the corresponding j-th eigenfunction, normalized by ‖ϕ_j‖₂ = 1. Note that under the Turing condition (1.5), linearized instability of the steady-state solution u = 1 arises if and only if μ₂² < p − 1. The latter suggests that under condition (1.5) a Turing instability phenomenon should be anticipated; in particular, as shown in Theorem 3.7, this Turing instability is exhibited in the form of a diffusion-driven blow-up.
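The sign structure of a(φ, φ) can be illustrated numerically on the model interval Ω = (0, π), where μ₂² = 1. In the sketch below, the values p = 3, r = 2, γ = 3/2 are illustrative choices with p − rγ = 0 < 1 (so (1.5) holds) and μ₂² = 1 < p − 1 = 2; the quadratic form is then positive in the constant direction ϕ₁ but negative on the second eigenmode ϕ₂ = cos x, in accordance with the instability criterion μ₂² < p − 1.

```python
import math

# Midpoint-rule evaluation of the quadratic form on Ω = (0, π):
#   a(φ, φ) = ⨍|φ'|² + (1 - p) ⨍φ² + rγ (⨍φ)²,   ⨍ = average over Ω.
def quad_form(phi, dphi, p, r, gamma, n=5000):
    xs = [(i + 0.5) * math.pi / n for i in range(n)]
    avg = lambda g: sum(g(x) for x in xs) / n
    return (avg(lambda x: dphi(x) ** 2)
            + (1 - p) * avg(lambda x: phi(x) ** 2)
            + r * gamma * avg(phi) ** 2)

p, r, gamma = 3.0, 2.0, 1.5  # illustrative: p - r*gamma = 0 < 1, p - 1 = 2 > mu_2^2 = 1

const = quad_form(lambda x: 1.0, lambda x: 0.0, p, r, gamma)      # mode phi_1
mode2 = quad_form(math.cos, lambda x: -math.sin(x), p, r, gamma)  # mode phi_2, mu_2^2 = 1
print(const > 0, mode2 < 0)  # constant direction stable, second mode unstable
```

Here a(ϕ₁, ϕ₁) = 1 − p + rγ = 1 > 0 while a(ϕ₂, ϕ₂) = μ₂² + 1 − p = −1/2 after normalization, matching the expansion above.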
Main Results
In the current section we present our main results and we prove them in the following sections.
The first observation regarding the solution of (1.12)-(1.14) is that it never quenches in finite time. Indeed, the following holds
Proposition 3.1. For each T > 0 there exists C_T > 0 such that

u(x, t) ≥ C_T in Ω × [0, T). (3.1)
Proof. By the maximum principle and comparison theorems we obtain that u(x, t) > 0 and u(x, t) ≥ ũ(t), where ũ = ũ(t) solves

dũ/dt = −ũ in (0, T), ũ(0) = ũ₀ ≡ inf_Ω u₀(x) > 0.

Therefore we obtain (3.1) with C_T = ũ₀ e^{−T}.
Due to Proposition 3.1 the following alternatives are left: blow-up in finite time, indicated by T < +∞; blow-up in infinite time; quenching in infinite time; and a global-in-time compact orbit in C(Ω). In fact, by (3.1) and parabolic regularity, the existence time of the classical solution u = u(·, t) of (1.12)-(1.14) is estimated from below by some T̂ = T̂(‖u₀‖_∞) (see, e.g., [29]). Then there holds

T < +∞ ⇒ lim_{t↑T} ‖u(·, t)‖_∞ = +∞, (3.2)
as in (8.25) of [26]. Now finite-time blow-up actually arises under the conditions of the following proposition.
Proposition 3.2. Assume that p ≥ r and the Turing condition (1.5) holds; then

ū ≥ min{1, ū₀}. (3.3)

Whereas if the anti-Turing condition (1.6) holds and ū₀ > 1, then finite-time blow-up occurs, i.e. T < +∞.
Proof. Since p > 1 and p ≥ r, there is r ≤ μ ≤ p satisfying μ ≥ 1. Then we obtain

⨍_Ω u^p ≥ (⨍_Ω u^μ)^{p/μ}, (⨍_Ω u^r)^γ ≤ (⨍_Ω u^μ)^{rγ/μ},

via Hölder's inequality, and hence

dū/dt = −ū + ⨍_Ω u^p/(⨍_Ω u^r)^γ ≥ −ū + (⨍_Ω u^μ)^{(p−rγ)/μ} ≥ −ū + ū^{p−rγ}, (3.4)

where the last step uses Jensen's inequality, since μ ≥ 1.
If p − rγ < 1, then the differential inequality (3.4) implies (3.3) by comparison. Whilst, in the complementary case p − rγ > 1, again by virtue of (3.4) we derive that ū blows up in finite time provided ū₀ > 1, and hence u does so.
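The two regimes in the proof can be illustrated on the comparison ODE du/dt = −u + u^k, with k playing the role of p − rγ; the exponents and initial values below are illustrative. For k = 2 and u(0) = 2 the exact blow-up time is ln 2, which the forward-Euler sketch reproduces.

```python
import math

# Forward-Euler integration of the comparison ODE  du/dt = -u + u^k,
# where k plays the role of p - r*gamma in (3.4); values are illustrative.
def euler(k, u0, dt, t_max=30.0, cap=1e8):
    u, t = u0, 0.0
    while t < t_max and u < cap:
        u += dt * (-u + u ** k)
        t += dt
    return u, t

# Sublinear case k = 1/2 < 1: no blow-up, u(t) -> 1 from either side.
u_hi, _ = euler(0.5, 9.0, dt=1e-3)
u_lo, _ = euler(0.5, 0.04, dt=1e-3)
print(abs(u_hi - 1) < 1e-2, abs(u_lo - 1) < 1e-2)

# Superlinear case k = 2 > 1 with u(0) = 2 > 1: blow-up at t* = ln 2.
_, t_blow = euler(2.0, 2.0, dt=1e-4)
print(abs(t_blow - math.log(2.0)) < 0.05)
```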
Remark 3.1. Proposition 3.2 illustrates that in this case the qualitative behaviour of the full system (1.1)-(1.4) and that of the non-local problem (1.12)-(1.14) is quite different. It should be pointed out that under the anti-Turing condition (1.6) the full system does not exhibit any instability, whilst an instability emerges when the Turing condition (1.5) is imposed.
Here by quenching in infinite time we mean T = +∞ and

lim inf_{t↑+∞} ‖u(·, t)‖_r = 0 for some r > 1. (3.5)

We note that property (3.5) arises neither for the original system (1.1)-(1.4) nor for the shadow system (1.8)-(1.9), as can be concluded from the classification of the homogeneous orbits given in [21]. Therefore, our first main result, see Theorem 3.1 below, which concerns the exclusion of infinite-time quenching for the solutions of (1.12)-(1.14), is in agreement with what is observed in systems (1.1)-(1.4) and (1.8)-(1.9).
Henceforth, C and c denote large and small positive constants independent of t, respectively.
Theorem 3.1. There is δ₀ > 0 such that any 0 < δ ≤ δ₀ admits the estimate

⨍_Ω u^{−δ} ≤ C for any t > 0, (3.6)

where the constant C is independent of t.

Remark 3.2. Owing to (3.6), the lower bound

⨍_Ω u^r dx ≥ c > 0 for any t > 0, (3.7)

follows by Jensen's inequality, taking δ ≤ r, where again c is independent of time t. Consequently, relation (3.7) guarantees that the nonlinear term of the non-local equation (1.12) stays away from zero, and therefore the solution u is bounded away from zero as well.
Remark 3.3. Remark 3.2 is interpreted in the biological context as follows: the activator can never be diminished.
Next we focus on the special case r = p + 1. In this case problem (1.12)-(1.14) admits a variational structure, which is not the case for the original system (1.1)-(1.4). In particular, for r = p + 1 problem (1.12)-(1.14) has a Lyapunov functional of the form

J(u) = (1/2)(‖∇u‖₂² + ‖u‖₂²) − (1/((p + 1)(1 − γ))) (⨍_Ω u^{p+1} dx)^{1−γ},
since along any solution trajectory there holds

(d/dt) J(u(t)) = −‖u_t‖₂² ≤ 0. (3.8)
Note also that in this case the Turing condition (1.5) reduces to

γ > (p − 1)/(p + 1). (3.9)
Next, under the anti-Turing condition and via the double-well potential method (see [25,11]), we obtain the following:
Theorem 3.2. Let r = p + 1 and γ < min{1, (p − 1)/(p + 1)}. If J(u₀) ≤ 0 then finite-time blow-up occurs, i.e. T < +∞.
Remark 3.4. Theorem 3.2 can be interpreted in the biological context as follows: if the activator's initial concentration is large and its suppression by the inhibitor is rather small (since 0 < γ < 1) then naturally the activator's growth becomes unlimited.
On the other hand, under the Turing condition (3.9) we derive the following:
Theorem 3.3. Let N ≥ 3 and r = p + 1. If (p − 1)/(p + 1) < γ < 1 and 1 < p < (N + 2)/(N − 2), then a global-in-time solution exists, i.e. T = +∞ and

sup_{(0,T)} ‖u(·, t)‖_∞ ≤ C. (3.10)
Remark 3.5. Theorems 3.2 and 3.3 indicate that in case where r = p + 1 then there is a discrepancy between the behaviour of the full system (1.1)-(1.4) and those of the non-local problem (1.12)-(1.14). Indeed, under the anti-Turing condition the full system does not exhibit any instability, whilst an instability occurs when Turing condition (3.9) holds.
The Turing condition (3.9) further implies that the solution orbit of problem (1.12)-(1.14) is compact in C(Ω) and the ω-limit set

ω(u₀) = {u* ∈ C(Ω) | ∃ t_k ↑ +∞ s.t. lim_{k→∞} ‖u(·, t_k) − u*‖_∞ = 0}
of this orbit is nonempty, connected, compact, and lies in the set of stationary solutions, which are defined as the solutions of the following problem
−∆u* + u* = u*^p/(⨍_Ω u*^{p+1})^γ, u* > 0 in Ω, ∂u*/∂ν = 0 on ∂Ω. (3.11)
Concerning (3.11), the existence of stable spiky stationary solutions is known (see the survey paper by Wei [28]), and thus the formation of Turing patterns converging to these spiky solutions is guaranteed as long as (3.9) holds. In the following, global-in-time existence of the solution is obtained via a priori estimates of some L^ℓ-norms of the solution u(x, t). These a priori estimates hold in a parameter range which implies the condition (p − 1)/r < 1, which, as mentioned earlier, guarantees the global-in-time existence of the solution to the original model (1.1)-(1.4).
Theorem 3.4. If (p − 1)/r < min{1, 2/N, (1/2)(1 − 1/r)} and 0 < γ < 1, then problem (1.12)-(1.14) has a global-in-time solution, i.e. (3.10) holds.
Remark 3.6. The result of Theorem 3.4 is in agreement with the global-in-time existence result obtained in [6] for the full system, and so in that case (1.1)-(1.4) and (1.12)-(1.14) share the same dynamics.

Now we consider the following L^ℓ-norms, ℓ > 0, of the solution u(x, t):

ζ(t) = ⨍_Ω u^r dx, z(t) = ⨍_Ω u^{p−1+r} dx, w(t) = ⨍_Ω u^{−p+1+r} dx. (3.12)
By choosing proper initial data and using phase plane analysis, we can actually derive estimates of ζ(t), z(t) and w(t), see section 7, identifying also some invariant regions in the plane. In particular, our results can be expressed as follows:
Theorem 3.5. Let 0 < γ < 1, r ≤ 1, and (p − 1)/r > 1. Assume, furthermore, that either (1) w(0) < ζ(0)^{1−γ}, or (2) (p − 1)/r ≥ 2 and w(0) < 1. Then finite-time blow-up occurs for problem (1.12)-(1.14), i.e. T < +∞.

In section 2 it has already been shown, through linear stability analysis, that under the Turing condition (1.5) the stable solution ū = 1 of (2.2) destabilises as a solution of (1.12)-(1.14). The next result shows that the preceding DDI phenomenon is realised in the form of diffusion-induced blow-up.
Theorem 3.7. Let N ≥ 3, max{1, N/(N − 2)} ≤ r ≤ p and 2/N < (p − 1)/r < γ; then there is a family of radially symmetric blowing-up solutions corresponding to a family of spiky initial data.
Proof of Theorem 3.1

Setting χ = u^{1/α}, with α ≠ 0 a constant to be determined, a direct computation shows that χ satisfies

αχ_t = α∆χ + 4α(α − 1)|∇χ^{1/2}|² − χ + f, in Ω × (0, T), (4.1)

∂χ/∂ν = 0, on ∂Ω × (0, T), (4.2)

χ(x, 0) = u₀^{1/α}(x), in Ω, (4.3)

with

f = u^{p−1+1/α}/(⨍_Ω u^r)^γ. (4.4)
Averaging (4.1) over Ω, we obtain

α (d/dt) ⨍_Ω χ + 4α(1 − α) ⨍_Ω |∇χ^{1/2}|² + ⨍_Ω χ = ⨍_Ω f (4.5)
and hence

(d/dt) ⨍_Ω χ + 4(1 − α) ⨍_Ω |∇χ^{1/2}|² + (1/α) ⨍_Ω χ ≤ 0 (4.6)
for α < 0, since also f > 0. Letting now δ = −1/α we have

(d/dt) ⨍_Ω χ + 4(1 + δ⁻¹) ⨍_Ω |∇χ^{1/2}|² ≤ δ ⨍_Ω χ.
Since the Poincaré-Wirtinger inequality reads ‖∇w‖₂² ≥ μ₂² ‖w − ⨍_Ω w‖₂² for any w ∈ H¹(Ω), where μ₂² is the second eigenvalue of the Laplace operator associated with Neumann boundary conditions, applying it with w = χ^{1/2} in (4.6) entails

(d/dt) ⨍_Ω χ + c ⨍_Ω χ ≤ 0 (4.7)
for 0 < δ ≪ 1. The differential inequality (4.7) implies that ⨍_Ω χ(t) ≤ C < ∞ for any t > 0, and thus (3.6) follows from the fact that χ = u^{−δ}.
Proof of Theorems 3.2 and 3.3
Throughout the current section, we consider r = p + 1.
Proof of Theorem 3.2. Since J(u₀) ≤ 0, via the dissipation relation (3.8) we derive J(u(t)) ≤ 0 for any 0 < t < T. We also have

(d/dt) ‖u‖₂² = −2I(u),

where

I(u) = ‖∇u‖₂² + ‖u‖₂² − ⨍_Ω u^{p+1}/(⨍_Ω u^r)^γ
= ‖∇u‖₂² + ‖u‖₂² − (⨍_Ω u^{p+1})^{1−γ}
= 2J(u) + (2/((p + 1)(1 − γ)) − 1)(⨍_Ω u^{p+1})^{1−γ}
≤ −(1 − 2/((p + 1)(1 − γ))) ‖u‖_{p+1}^{(p+1)(1−γ)}.
Since 0 < γ < min{1, (p − 1)/(p + 1)}, there holds (p + 1)(1 − γ) > 2, and thus by virtue of Hölder's inequality we can find α > 0 such that

(d/dt) ‖u‖₂² ≥ c ‖u‖₂^{2+α}, (5.1)
since also p > 1. Now (5.1) entails that ‖u‖₂² blows up in finite time, since u₀(x) > 0, and thus u exhibits finite-time blow-up as well.
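For completeness, setting y(t) = ‖u(t)‖₂², inequality (5.1) integrates to an explicit upper bound for the blow-up time:

```latex
y' \ge c\, y^{1+\alpha/2}
\quad\Longrightarrow\quad
-\frac{2}{\alpha}\,\frac{d}{dt}\Big(y^{-\alpha/2}\Big) \ge c
\quad\Longrightarrow\quad
y(t)^{-\alpha/2} \le y(0)^{-\alpha/2} - \frac{c\alpha}{2}\,t,
```

so the right-hand side vanishes no later than t = (2/(cα)) ‖u₀‖₂^{−α}, and hence T ≤ (2/(cα)) ‖u₀‖₂^{−α}.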
Proof of Theorem 3.3. In this case we have 0 < γ < 1 and (p + 1)(1 − γ) < 2. Dissipation relation (3.8) suggests

(1/2)(‖∇u‖₂² + ‖u‖₂²) ≤ J(u₀) + (1/((p + 1)(1 − γ))) (⨍_Ω u^{p+1})^{1−γ}. (5.2)
Furthermore, Sobolev's and Young's inequalities entail

(⨍_Ω u^{p+1})^{1−γ} = ‖u‖_{p+1}^{(p+1)(1−γ)} ≤ (1/4) ‖u‖²_{H¹} + C, (5.3)

since 1 < p < (N + 2)/(N − 2) and (p + 1)(1 − γ) < 2. Combining (5.2) with (5.3) we deduce sup_{(0,T)} ‖u(·, t)‖_{H¹} ≤ C, and then (3.10) follows by a standard bootstrap argument.

Remark 5.1. In the case where γ = (p − 1)/(p + 1) and 1 < p < (N + 2)/(N − 2), we always have T = +∞, whilst infinite-time blow-up, i.e. lim_{t↑+∞} ‖u(·, t)‖_∞ = +∞, may occur. In fact, by the proof of Theorem 3.3 we have ‖u(·, t)‖_{H¹} ≤ C(1 + t) for 0 < t < T, and then by virtue of the Sobolev imbedding we obtain ‖u(·, t)‖_∞ ≤ C_T, which entails T = +∞ by parabolic regularity. Furthermore, in the case where J(u₀) < 0 we derive

(d/dt) ‖u‖₂² ≥ −2J(u₀) > 0,

by the proof of Theorem 3.2, and then it follows that lim_{t↑∞} ‖u(·, t)‖₂ = +∞. The latter implies that lim_{t↑+∞} ‖u(·, t)‖_∞ = +∞, and thus infinite-time blow-up occurs in that case.
Proof of Theorem 3.4
We assume (p − 1)/r < min{1, 2/N, (1/2)(1 − 1/r)} and 0 < γ < 1. We also consider N ≥ 2, since the complementary case N = 1 is simpler.
Since p > 1, the above assumption implies (p − 1)/r < 2/N and r > p. Then there holds

0 < 1/(r − p + 1) < min{1, (1/(p − 1)) · (2/(N − 2)), 1/(1 − p + rγ)},
since also 0 < γ < 1.
Choosing 1/(r − p + 1) < α < min{1, (1/(p − 1)) · (2/(N − 2)), 1/(1 − p + rγ)}, we have

max{(N − 2)/N, 1/(αr)} < 1/(−α + 1 + αp),

and hence there is β > 0 such that

max{(N − 2)/N, 1/(αr)} < 1/β < 1/(−α + 1 + αp) < 2, (6.1)

which also satisfies

β/(αr) < 1 < β/(−α + 1 + αp). (6.2)
Note that for f defined by (4.4) there holds

⨍_Ω f = ⨍_Ω u^{p−1+1/α}/(⨍_Ω u^r)^γ = ⨍_Ω χ^{−α+1+αp}/(⨍_Ω χ^{αr})^γ.
By virtue of (6.2) and Jensen's inequality, (⨍_Ω χ^{αr})^γ ≥ (⨍_Ω χ^β)^{αrγ/β}, so that

⨍_Ω f ≤ (⨍_Ω χ^β)^{(1−σ)/β}

with 0 < σ = α(1 − p + rγ) < 1, recalling (p − 1)/r < γ and α < 1/(1 − p + rγ). Now since 1 < 2β < 2N/(N − 2) holds due to (6.1), Sobolev's and Young's inequalities entail

⨍_Ω χ^{−α+1+αp} (⨍_Ω χ^{αr})^{−γ} ≤ (⨍_Ω χ^β)^{(1−σ)/β} = ‖χ^{1/2}‖_{2β}^{2(1−σ)} ≤ (1/2) ‖χ^{1/2}‖²_{H¹} + C,

and consequently, from (4.5),

(d/dt) ⨍_Ω χ + c ‖χ^{1/2}‖²_{H¹} ≤ C, 0 < t < T, (6.3)
using also 0 < α < 1; in particular,

⨍_Ω χ ≤ C, for any 0 < t < T.
Since 1/α can be chosen to be close to r − p + 1, we have

‖u(·, t)‖_q ≤ C_q, 0 < t < T, for any 1 ≤ q < r − p + 1, (6.4)

taking into account that χ = u^{1/α}.
Since (p − 1)/r < (1/2)(1 − 1/r) implies (r − p + 1)/p > 1, if there is a > 1 such that

‖u(·, t)‖_q ≤ C_q, 0 < t < T, for any 1 ≤ q < a(r − p + 1), (6.5)

then by virtue of the semigroup estimate (2.1) inequality (6.5) can be extended to larger q as long as (N/2)(1/ℓ − 1/q) < 1. Therefore, we obtain

‖u(·, t)‖_q ≤ C_q, 0 < t < T, for any 1 ≤ q < a₁(r − p + 1), (6.6)

for a₁ > 0 defined by
1/a₁ = 1/a − (2/N) · ((r − p + 1)/p), (6.7)
as long as the right-hand side of (6.7) is positive; otherwise (6.6) holds with q = ∞. Iterating this procedure we eventually obtain (3.10), and the proof is complete.
Proof of Theorems 3.5 and 3.6
Let (ζ, z, w) = (ζ(t), z(t), w(t)) be defined by (3.12); then, by virtue of Hölder's inequality, we have

wz ≥ ζ². (7.1)
Proof of Theorem 3.5. We first consider r ≤ 1 < (p − 1)/r and 0 < γ < 1. Since r ≤ 1, (4.5) for α = 1/r yields

(1/r)(dζ/dt) = (4/r)(1/r − 1) ⨍_Ω |∇u^{r/2}|² − ζ + z/ζ^γ for 0 < t < T, (7.2)
and taking (7.1) into account we derive

(1/r)(dζ/dt) ≥ −ζ + ζ^{2−γ}/w = (ζ/w)(−w + ζ^{1−γ}) for 0 < t < T. (7.3)

Furthermore, since (p − 1)/r > 1, (4.5) for α = 1/(−p + 1 + r) reads

α(dw/dt) = 4α(α − 1) ⨍_Ω |∇u^{1/(2α)}|² − w + ζ^{1−γ}, for 0 < t < T, (7.4)

which, since α = 1/(−p + 1 + r) < 0, implies

α(dw/dt) ≥ −w + ζ^{1−γ}, for 0 < t < T,
or equivalently

(1/(p − 1 − r))(dw/dt) ≤ w − ζ^{1−γ}, for 0 < t < T. (7.5)
The condition 0 < γ < 1 entails that the curve

Γ₁ : w = ζ^{1−γ}, ζ > 0, (7.6)

is concave in the wζ-plane, with its endpoint at the origin (0, 0). Relations (7.3) and (7.5) imply that the region R = {(ζ, w) | w < ζ^{1−γ}} is invariant for the system (7.2), (7.4); i.e. if (ζ(0), w(0)) ∈ R then (ζ(t), w(t)) ∈ R for any t > 0. Furthermore, ζ = ζ(t) and w = w(t) are increasing and decreasing on R, respectively. In the case w(0) < ζ(0)^{1−γ}, then

dw/dt < 0, dζ/dt > 0, for 0 ≤ t < T,

and thus,
1/w − 1/ζ^{1−γ} ≥ 1/w(0) − 1/ζ(0)^{1−γ} ≡ c₀ > 0, for 0 ≤ t < T.
Therefore, by virtue of (7.3),

(1/r)(dζ/dt) ≥ −ζ + ζ^{2−γ}/w = ζ^{2−γ}(1/w − 1/ζ^{1−γ}) ≥ c₀ ζ^{2−γ}, 0 ≤ t < T. (7.7)
Since 2 − γ > 1, (7.7) implies that ζ(t) blows up at a finite time

t₁ ≤ t̂₁ ≡ ζ(0)^{γ−1}/((1 − γ)c₀r),
and using the inequality

ζ(t) = ⨍_Ω u^r dx ≤ ‖u(·, t)‖_∞^r,
we conclude that u(x, t) blows up in finite time T ≤ t₁ as well.
We now consider the second case, when (p − 1)/r ≥ 2 and thus q = (p − 1 − r)/r ≥ 1. Then by virtue of Jensen's inequality

⨍_Ω u^r · (⨍_Ω (u^{−r})^q)^{1/q} ≥ ⨍_Ω u^r · ⨍_Ω u^{−r} ≥ 1,

and thus ζ^{1/r} ≥ w^{−1/(p−1−r)}, which entails

w ≥ ζ^{−(p−1−r)/r} = ζ^{1−(p−1)/r}. (7.8)
In addition, the inequality (p − 1)/r ≥ 2 implies that the curve

Γ₂ : w = ζ^{1−(p−1)/r}, ζ > 0,
is convex and approaches +∞ and 0 as ζ ↓ 0 and ζ ↑ +∞, respectively. The crossing point of Γ₁ and Γ₂ is (ζ, w) = (1, 1), and therefore w(0) < 1 combined with (7.8) implies w(0) < ζ(0)^{1−γ}. Consequently, the second case is reduced to the first one, and again the occurrence of finite-time blow-up is established.
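The bound t̂₁ is sharp for the saturated version of (7.7): the ODE dζ/dt = rc₀ζ^{2−γ} reaches +∞ exactly at t = t̂₁. A quick forward-Euler check, with the illustrative values γ = 1/2, r = c₀ = ζ(0) = 1, for which t̂₁ = 2:

```python
# Forward-Euler integration of the saturated inequality (7.7),
#   dζ/dt = r c0 ζ^{2-γ},
# whose exact solution satisfies ζ^{γ-1}(t) = ζ(0)^{γ-1} - (1-γ) r c0 t
# and hence blows up at t̂₁ = ζ(0)^{γ-1} / ((1-γ) c0 r).
gamma, r, c0 = 0.5, 1.0, 1.0      # illustrative values: t̂₁ = 2
zeta, t, dt = 1.0, 0.0, 1e-4
while zeta < 1e8:                  # track ζ until it exceeds a large cap
    zeta += dt * r * c0 * zeta ** (2.0 - gamma)
    t += dt
print(abs(t - 2.0) < 0.05)         # numerical blow-up time agrees with t̂₁
```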
Remark 7.1. The existence of the invariant region R = {(ζ, w) | w < ζ 1−γ } for the system (7.2), (7.4) entails that if the consumption of the activator cannot be suppressed initially then this can lead to its unlimited growth.
Proof of Theorem 3.6. We first note that under the assumption r ≥ 1 relation (7.2) entails

(1/r)(dζ/dt) ≤ −ζ + z/ζ^γ, for 0 ≤ t < T. (7.9)
Furthermore, since α = 1/(p − 1 − r) < 0 results from (p − 1)/r < 1, then (7.4) implies

α(dw/dt) ≥ −w + ζ^{1−γ}, for 0 ≤ t < T,

or equivalently

(1/(−p + 1 + r))(dw/dt) ≤ w − ζ^{1−γ}, for 0 ≤ t < T. (7.10)
We claim that the assumption ζ(0)^{1+γ} > z(0) yields ζ(t)^{1+γ} > z(t) for any 0 ≤ t < T. Indeed, let us assume that there exists t₀ > 0 such that ζ(t)^{1+γ} > z(t) for 0 ≤ t < t₀, and ζ(t₀)^{1+γ} = z(t₀).
Then we obtain dζ/dt < 0, for 0 ≤ t < t_0, and w(t_0) ≥ ζ(t_0)^{1−γ}, (7.11) by virtue of (7.9) and (7.1). On the other hand, w(0) < ζ(0)^{1−γ}, due to (7.10), entails
dw/dt < 0, for 0 ≤ t < t_0.
Consequently, since also γ > 1, the curve (w(t), ζ(t)), for 0 ≤ t ≤ t_0, remains in the region w < ζ^{1−γ}, and hence w(t_0) < ζ(t_0)^{1−γ}, which contradicts the second inequality of (7.11).
Thus it follows that dζ/dt < 0, dw/dt < 0, for 0 ≤ t < T, and in particular we have ‖u(·, t)‖_{p−1+r} ≤ C, for 0 ≤ t < T.
Since r ≥ 1 implies (p−1+r)/p ≥ 1, we obtain (3.10) by the same bootstrap argument used at the end of the previous section.
Proof of Theorem 3.7
In the current section we restrict ourselves to the radial case Ω = B(0, 1), and we also consider N ≥ 3. Then the solution of (1.12)-(1.14) is radially symmetric, that is, u(x, t) = u(ρ, t) for 0 ≤ ρ = |x| < 1.
We regard, as in [4], spiky initial data of the form
u_0(ρ) = λ φ_δ(ρ), (8.1)
with 0 < λ ≪ 1 and
φ_δ(ρ) = ρ^{−a} for δ ≤ ρ ≤ 1, and φ_δ(ρ) = δ^{−a}(1 + a/2) − (a/2) δ^{−(a+2)} ρ^2 for 0 ≤ ρ < δ, (8.2)
for a = 2/(p−1) and 0 < δ < 1. It can be easily checked that u_0(ρ) is decreasing, i.e. u_0′(ρ) < 0, and thus max_{ρ∈[0,1]} u_0(ρ) = u_0(0). Furthermore, due to the maximum principle, u(ρ, t) is radially decreasing too, i.e. u_ρ(ρ, t) < 0. Now having specified the form of the considered initial data, Theorem 3.7 can be rewritten as follows:
Theorem 8.1. Let N ≥ 3, 1 ≤ r ≤ p, p > N/(N−2) and 2/N < (p−1)/r < γ.
Then there is λ 0 > 0 with the following property: any 0 < λ ≤ λ 0 admits 0 < δ 0 = δ 0 (λ) < 1 such that any solution of problem (1.12)-(1.14) with initial data of the form (8.1) and 0 < δ ≤ δ 0 blows up in finite time, i.e. T < +∞.
We note that Theorem 8.1 for r = 1 is nothing but Proposition 3.3 in [12], which was proven using a series of auxiliary results and inspired by an approach introduced in [1,4]. Therefore, in order to prove Theorem 8.1, we briefly follow the arguments presented in [12], providing modifications where necessary.
The next lemma is elementary and so its proof is omitted. Furthermore, there holds that
d ≡ inf 0<δ<1 1 2α 1 rγ p 1 2φ δ rγ p µ > 0. (8.8)
The following auxiliary result provides a key inequality satisfied by the initial data u 0 = u 0 (|x|) defined by (8.1). Indeed, we have
Lemma 8.2. If p > N/(N−2) and (p−1)/r < γ, there exists λ_0 = λ_0(d) > 0 such that for any 0 < λ ≤ λ_0 there holds ∆u_0 + d λ^{−rγ} u_0^p ≥ 2 u_0^p. (8.9)
Proof. Note that inequality (8.9) is equivalent to
∆φ_δ + d λ^{−rγ+p−1} φ_δ^p ≥ 2 λ^{p−1} φ_δ^p,
which is reduced to d λ^{−rγ+p−1} ≥ Na + 2 λ^{p−1} due to (8.3). Then the result follows since (p−1)/r < γ.
Henceforth we fix 0 < λ ≤ λ 0 = λ 0 (d) so that (8.9) is satisfied. Given 0 < δ < 1, let T δ > 0 be the maximal existence time of the solution to (1.12)-(1.14) with initial data of the form (8.1).
In order to get rid of the linear dissipative term −u, we introduce z = e^t u, which then satisfies
z_t = ∆z + K(t) z^p, in Q ≡ Ω × (0, T_δ), (8.10)
∂z/∂ν = 0, on ∂Ω × (0, T_δ), (8.11) z(x, 0) = u_0(|x|), in Ω, (8.12) where
K(t) = e^{(1+rγ−p)t} / (⨍_Ω z^r)^γ, and we finally derive the following estimate:
z̄(t) ≥ z̄(0) = ⨍_Ω u_0. (8.16)
Another helpful estimate of z is given by the following lemma.
Lemma 8.3. There holds that
ρ^N z(ρ, t) ≤ z̄(t) in (0, 1) × (0, T_δ), (8.17) and z_ρ(3/4, t) ≤ −c, 0 ≤ t < T_δ, (8.18)
for any 0 < δ < 1.
Proof. Set w = ρ^{N−1} z_ρ; then w satisfies
H[w] = 0, in (0, 1) × (0, T_δ), w(0, t) = w(1, t) = 0, for t ∈ (0, T_δ), w(ρ, 0) < 0, for 0 < ρ < 1, where H[w] ≡ w_t − w_{ρρ} + ((N−1)/ρ) w_ρ − p K(t) z^{p−1} w.
The maximum principle now implies w ≤ 0, and hence z_ρ ≤ 0 in (0, 1) × (0, T_δ). Then inequality (8.17) follows since ρ^N z(ρ, t) = z(ρ, t) ∫_0^ρ N s^{N−1} ds ≤ ∫_0^1 N z(s, t) s^{N−1} ds = ⨍_Ω z = z̄(t). Once w ≤ 0 is proven, we have
w_t − w_{ρρ} + ((N−1)/ρ) w_ρ = p K(t) z^{p−1} w ≤ 0 in (0, 1) × (0, T_δ), w(1/2, t) ≤ 0, w(1, t) ≤ 0, for t ∈ (0, T_δ), w(ρ, 0) = ρ^{N−1} u_0′(ρ) ≤ −c, for 1/2 < ρ < 1,
which entails w ≤ −c in (1/2, 1) × (0, T_δ), and finally (8.18) holds.
Lemma 8.4. Given ε > 0 and 1 < q < p, then ψ defined as
ψ := ρ^{N−1} z_ρ + ε ρ^N z^q / z̄^{γ+1}, (8.19) satisfies
H[ψ] ≤ −(2qε z^{q−1} / z̄^{γ+1}) ψ + (ε ρ^N z^q / z̄^{2(γ+1)}) [ 2qε z^{q−1} − (γ+1) z̄^{γ−rγ} ⨍_Ω z^p − (p−q) z^{p−1} z̄^{γ+1−rγ} ] in (0, 1) × (0, T_δ). (8.20)
The proof of Lemma 8.4 follows the same steps as the proof of inequality (28) in [12], which holds for r = 1, and thus it is omitted.
Observe that when p > N/(N−2) there is 1 < q < p such that N > 2p/(q−1), and thus the following quantities
A_1 ≡ sup_{0<δ<1} (1/ū_0^µ) ⨍_Ω u_0^p = λ^{µ−p} α_1, A_2 ≡ inf_{0<δ<1} (1/ū_0^µ) ⨍_Ω u_0^p = λ^{µ−p} α_2 (8.21)
are finite due to (8.7). The following result, which is a modification of Lemma 3.3 in [12] for r = 1, provides a key estimate of the L^p-norm of z in terms of A_1 and A_2; since it is a core result for the proof of Theorem 8.1, we sketch its proof shortly.
Proposition 8.1. There exist 0 < δ 0 < 1 and 0 < t 0 ≤ 1 independent of any 0 < δ ≤ δ 0 , such that the following estimate is satisfied
(1/2) A_2 z̄^µ ≤ ⨍_Ω z^p dx ≤ 2 A_1 z̄^µ, (8.22)
for any 0 < t < min{t 0 , T δ }.
The proof of the above proposition requires some auxiliary results shown below. Take 0 < t_0(δ) < T_δ to be the maximal time for which inequality (8.22) holds true in 0 < t < t_0(δ); then we have
(1/2) A_2 z̄^µ ≤ ⨍_Ω z^p ≤ 2 A_1 z̄^µ, for 0 < t < t_0(δ). (8.23)
We consider the case t_0(δ) ≤ 1, since otherwise there is nothing to prove. Now the first auxiliary result states:
Lemma 8.5. There exists 0 < t_1 < 1 such that
z̄(t) ≤ 2ū_0, 0 < t < min{t_1, t_0(δ)}, (8.24)
for any 0 < δ < 1.
Proof. Since r ≥ 1 and t_0(δ) ≤ 1, it follows that
dz̄/dt ≤ 2 A_1 e^{1+rγ−p} z̄^{µ−rγ}, 0 < t < t_0(δ),
taking also into account relations (8.13) and (8.15). Setting C_1 = 2 A_1 e^{1+rγ−p}, we obtain z̄(t) ≤ (ū_0^{1+rγ−µ} − C_1(µ−rγ−1) t)^{−1/(µ−rγ−1)} by (8.5). Therefore, (8.24) holds for any 0 < t < min{t_1, t_0(δ)}, where t_1 is estimated by
t_1 ≤ min{ (1 − 2^{1+rγ−µ}) ū_0^{1+rγ−µ} / (C_1(µ−rγ−1)), 1 },
and it is independent of any 0 < δ < 1.
Another fruitful estimate is provided by the next auxiliary result.
Lemma 8.6. There exist 0 < δ_0 < 1 and 0 < ρ_0 < 3/4 such that for any 0 < δ ≤ δ_0 the following estimate holds:
(1/|Ω|) ∫_{B(0,ρ_0)} z^p ≤ (A_2/8) z̄^µ, for 0 < t < min{t_1, t_0(δ)}. (8.25)
Proof. First observe that ū_0 ≤ z̄(t) ≤ 2ū_0, for 0 < t < min{t_1, t_0(δ)}, (8.26) follows from (8.16) and (8.24). Then, ⨍_Ω z^p is controlled by (8.22) for 0 < t < min{t_1, t_0(δ)}.
Since p > q, Young's inequality guarantees that the second term of the right-hand side in (8.20) is negative for 0 < t < min{t_1, t_0(δ)}, uniformly in 0 < δ < 1, provided that 0 < ε ≤ ε_0 for some 0 < ε_0 ≪ 1. Thus
H[ψ] ≤ −(2qε z^{q−1} / z̄^{γ+1}) ψ in (0, 1) × (0, min{t_1, t_0(δ)}). (8.27)
Due to (8.17) and (8.26), we also have
ψ = ρ^{N−1} z_ρ + ε ρ^N z^q / z̄^{γ+1} ≤ ρ^{N−1} z_ρ + ε ρ^{N(1−q)} z̄^{q−γ−1} ≤ ρ^{N−1} z_ρ + C ε ρ^{N(1−q)} in (0, 1) × (0, min{t_1, t_0(δ)}), which, for 0 < ε ≤ ε_0, entails ψ(3/4, t) < 0, 0 < t < min{t_1, t_0(δ)}, (8.28)
by (8.18), provided that 0 < ε_0 ≪ 1. Additionally, (8.19) for t = 0 gives
ψ(ρ, 0) = ρ^{N−1} λ φ_δ′(ρ) + ε λ^{q−γ−1} ρ^N φ_δ^q / φ̄_δ^{γ+1}. (8.29)
Now if 0 ≤ ρ < δ and ε is chosen small enough and independent of 0 < δ < δ_0, then the right-hand side of (8.29) is estimated as follows:
ρ^N λ ( −a δ^{−a−2} + ε λ^{q−γ−2} φ_δ^q / φ̄_δ^{γ+1} ) ≈ ρ^N λ ( −a δ^{−a−2} + ε λ^{q−γ−2} δ^{−aq} ) < 0, since φ_δ^q / φ̄_δ^{γ+1} ≍ δ^{−aq}, δ ↓ 0, uniformly in 0 ≤ ρ < δ,
which holds by (8.2) and (8.4) for m = 1, taking also into account that a + 2 = ap > aq.
On the other hand, if δ ≤ ρ ≤ 1 then we obtain
ψ(ρ, 0) = ρ^{N−1} ( −aλ ρ^{−a−1} + ε λ^{q−γ−1} ρ^{−aq+1} / φ̄_δ^{γ+1} ), (8.30)
by using again (8.4) for m = 1. Since a + 2 = ap > aq implies −a − 1 < −aq + 1, we derive
ψ(ρ, 0) < 0, δ ≤ ρ ≤ 3/4,
for any 0 < δ ≤ δ_0 and 0 < ε ≤ ε_0, provided ε_0 is chosen sufficiently small. Consequently we deduce
ψ(ρ, 0) < 0, 0 ≤ ρ ≤ 3/4, (8.31)
for any 0 < δ ≤ δ_0 and 0 < ε ≤ ε_0, provided 0 < ε_0 ≪ 1.
By using (8.35) and (8.36), the comparison principle yields that the solution z of (8.10)-(8.12) satisfies z ≥ z̃ in Q_0 ≡ Ω × (0, min{t_0, T_δ}), (8.37) where z̃ = z̃(x, t) solves the following:
z̃_t = ∆z̃ + D z̃^p, in Q_0, (8.38) ∂z̃/∂ν = 0, on ∂Ω × (0, min{t_0, T_δ}),
h_t = ∆h + p(p−1) z̃^{p−2} |∇z̃|^2 + D p z̃^{p−1} h ≥ ∆h + D p z̃^{p−1} h in Q_0, and h(x, 0) = ∆z̃(x, 0) + D z̃^p(x, 0) − z̃^p(x, 0) = ∆u_0 + (D − 1) u_0^p ≥ u_0^p > 0,
in Ω, with boundary condition ∂h/∂ν = 0 on ∂Ω × (0, min{t_0, T_δ}).
Then the maximum principle entails that h > 0 in Q_0, that is,
z̃_t > z̃^p in Q_0. (8.41) Inequality (8.41) implies
z(0, t) ≥ (z_0(0)^{1−p} − (p−1) t)^{−1/(p−1)} = ( (δ^a / (λ(1 + a/2)))^{p−1} − (p−1) t )^{−1/(p−1)}
for 0 < t < min{t_0, T_δ}, and therefore,
min{t_0, T_δ} < (1/(p−1)) (δ^a / (λ(1 + a/2)))^{p−1}. (8.42)
For 0 < δ ≪ 1, the right-hand side of (8.42) is less than t_0, and then T_δ < +∞ follows. Furthermore, by (8.42), T_δ → 0 as δ → 0, and the proof is complete. An alternative way to prove single-point blow-up is by virtue of the following estimate:
⨍_Ω z^p dx = (1/|B_1(0)|) ∫_0^1 ρ^{N−1} z^p dρ ≤ C, for 0 < t ≤ T_δ, (8.43)
which holds due to (8.22) and (8.24), taking 0 < δ ≪ 1 small enough such that T δ ≤ t 0 .
Then, since z = z(ρ, t) is radially decreasing, (8.43) implies that S = {0}.
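The blow-up time bound (8.42) admits a simple consistency check: since z_0(0) = u_0(0) = λ(1 + a/2)δ^{−a}, the ODE lower bound z_t > z^p forces divergence of z(0, t) no later than t* = z_0(0)^{1−p}/(p − 1), which is exactly the right-hand side of (8.42). A small numeric verification with illustrative parameters (not from the paper):

```python
# Check that z0(0)**(1-p)/(p-1) equals the RHS of (8.42),
# (1/(p-1)) * (delta**a / (lam*(1+a/2)))**(p-1).
# Illustrative parameters (not from the paper).
p, lam, delta = 3.0, 0.05, 0.1
a = 2.0 / (p - 1.0)

z0_origin = lam * (1 + a / 2) * delta ** -a               # z(0,0) = u_0(0)
t_star = z0_origin ** (1 - p) / (p - 1)                   # blow-up bound from z_t > z**p
rhs_842 = (delta ** a / (lam * (1 + a / 2))) ** (p - 1) / (p - 1)  # RHS of (8.42)

print(abs(t_star - rhs_842) < 1e-9)   # True: the two expressions agree
```

As δ ↓ 0 this bound shrinks like δ^{a(p−1)} = δ^2, matching the claim that T_δ → 0.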
9. Blow-up rate and blow-up pattern
One of our purposes in the current section is to determine the blow-up rate of the diffusion-driven blowing up solution provided by Theorem 8.1. We also intend to identify its blow-up pattern (profile) and thus reveal the formed patterns anticipated in this DDI event.
Theorem 9.1. Let N ≥ 3, max{r, N/(N−2)} < p < (N+2)/(N−2) and 2/N < (p−1)/r < γ. Then the blow-up rate of the diffusion-induced blowing-up solution of Theorem 8.1 is determined as follows:
‖u(·, t)‖_∞ ≈ (T_max − t)^{−1/(p−1)}, t ↑ T_max, (9.1)
where T max stands for the blow-up time.
Proof. We first note that
0 < K(t) = e^{(1+rγ−p)t} / (⨍_Ω z^r)^γ ≤ C < ∞, (9.2)
by virtue of (8.43), and in view of Hölder's inequality since p > r. Consider now Φ satisfying
Φ_t = ∆Φ + C Φ^p, in Ω × (0, T_max), ∂Φ/∂ν = 0, on ∂Ω × (0, T_max), Φ(x, 0) = z_0(x), in Ω;
then via comparison z ≤ Φ in Ω × (0, T_max).
Yet it is known, see [23,Theorem 44.6], that
|Φ(x, t)| ≤ C_η |x|^{−2/(p−1)−η} for η > 0,
when x ∈ Ω, 0 < t < T max , and thus
|z(x, t)| ≤ C_η |x|^{−2/(p−1)−η} for (x, t) ∈ Ω × (0, T_max), (9.3)
which by virtue of (9.2), (9.3) and using also standard parabolic estimates entails that
z ∈ BUC^σ({ρ_0 < |x| < 1 − ρ_0} × (T_max/2, T_max)) (9.4)
for some σ ∈ (0, 1) and each 0 < ρ_0 < 1, where BUC^σ(M) denotes in general the Banach space of all bounded and uniformly σ-Hölder continuous functions h : M ⊂ R^N → R; see also [23]. Consequently, (9.4) implies that lim_{t→T_max} z(x, t) exists and is finite for all x ∈ B_1(0) \ {0}.
Recalling that 2p/(p−1) < N (or equivalently p > N/(N−2), N > 2), then by using (9.2), (9.3) and in view of the dominated convergence theorem we derive
lim_{t→T_max} K(t) = ω ∈ (0, +∞). (9.5)
Applying now Theorem 44.3(ii) in [23], taking also into account (9.5), we can find a constant C_u > 0 such that
‖z(·, t)‖_∞ ≤ C_u (T_max − t)^{−1/(p−1)} in (0, T_max). (9.6)
On the other hand, setting N(t) := ‖z(·, t)‖_∞ = z(0, t), then N(t) is differentiable for almost every t ∈ (0, T_δ), in view of [1], and it also satisfies dN/dt ≤ K(t) N^p(t).
Now since K(t) ∈ C([0, T_max)) is bounded in any time interval [0, t], t < T_max, then upon integration we obtain
‖z(·, t)‖_∞ ≥ C_l (T_max − t)^{−1/(p−1)} in (0, T_max), (9.7)
for some positive constant C_l.
Since z(x, t) = e^t u(x, t), then by virtue of (9.6) and (9.7) we obtain
C_l (T_max − t)^{−1/(p−1)} ≤ ‖u(·, t)‖_∞ ≤ C_u (T_max − t)^{−1/(p−1)} for t ∈ (0, T_max),
where now C_l, C_u depend on T_max, which actually leads to (9.1).
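The two-sided bound above is the hallmark of type-I blow-up: the ODE dN/dt = K N^p with constant K has exact solution N(t) = ((p−1)K(T_max − t))^{−1/(p−1)}, for which the rescaled quantity N(t)(T_max − t)^{1/(p−1)} is constant in t. A quick numeric illustration (parameters illustrative):

```python
# Exact solution of dN/dt = K * N**p blowing up at T_max:
#   N(t) = ((p-1) * K * (T_max - t)) ** (-1/(p-1)).
# Illustrative parameters (not from the paper).
p, K, T_max = 3.0, 2.0, 1.0

def N(t):
    return ((p - 1) * K * (T_max - t)) ** (-1.0 / (p - 1))

# N(t) * (T_max - t)**(1/(p-1)) should be constant in t,
# which is precisely the type-I rate (9.1).
vals = [N(t) * (T_max - t) ** (1.0 / (p - 1)) for t in (0.1, 0.5, 0.9, 0.99)]
print(max(vals) - min(vals))   # ~0
```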
Remark 9.1. Condition (9.1) implies that the diffusion-induced blow-up of Theorem 8.1 is of type I, i.e. the blow-up mechanism is controlled by the ODE part of (1.12).
In contrast, for the finite-time blow-up furnished by Proposition 3.2 and Theorems 3.2 and 3.5 we cannot derive a blow-up rate as in (9.1), since the blow-up of some L^ℓ-norm, ℓ ≥ 1, in each of these cases entails that
K(t) = e^{(1−p)t} / (⨍_Ω u^r)^γ → 0 as t → T_max,
and thus the approach of Theorem 9.1 fails. This might be an indication that in the preceding cases finite-time blow-up is rather of type II.
Remark 9.2. First observe that (9.3) provides a rough form of the blow-up pattern for z, and thus for u as well. Nonetheless, due to (9.2), the non-local problem (8.10)-(8.12) can be treated as the corresponding local one, for which the following more accurate asymptotic blow-up profile, [20], is available: lim_{t→T_max} z(|x|, t) ∼ C |log |x|| / |x|^2 for |x| ≪ 1.
Therefore, using again that z = e^t u, we derive a similar asymptotic blow-up profile for the diffusion-induced blowing-up solution u. This actually reveals the form of the developed patterns which are induced as a result of the DDI, and it will be numerically verified in a forthcoming paper.
Conclusions
The main purpose of the current manuscript is to unveil under which circumstances the dynamics of the interaction of the two morphogens (activator and inhibitor), described by the Gierer-Meinhardt system (1.1)-(1.4), can be controlled by governing only the dynamics of the activator itself, given by the non-local problem (1.12)-(1.14). We derive some global-in-time existence as well as blow-up results, both in finite and in infinite time, for (1.12)-(1.14). Global-in-time existence results guarantee the controlled growth of the activator as described by (1.12)-(1.14), whereas finite-time and infinite-time blow-up results are relevant to the activator's unlimited growth. We discovered that there are cases, see Proposition 3.2 and Theorems 3.2 and 3.3, where there is a serious discrepancy between the dynamics of the full system (1.1)-(1.4) and those of the non-local problem (1.12)-(1.14). On the other hand, under other circumstances, see Theorems 3.4-3.7, both (1.1)-(1.4) and (1.12)-(1.14) exhibit the same long-time dynamics. In particular, in Theorems 3.5 and 3.6 we show that the occurrence of some invariant regions for an associated dynamical system is vital in order to control the dynamics of the non-local problem (1.12)-(1.14) and thus the activator's growth. In addition, we prove that under a Turing condition a DDI occurs, which is exhibited in the form of a diffusion-driven blow-up. The resulting destabilization enables the formation of some patterns, as anticipated in a Turing instability case. The form of the observed patterns is completely described via the study of the blow-up profile, see Remark 9.2. Consequently, in that case the pattern formation for the activator's concentration can be efficiently predicted and prescribed by the dynamics of the non-local problem (1.12)-(1.14).
Remark 3.7. Since (p−1)/r > 1 > γ is assumed, Theorem 3.5 is associated with finite-time blow-up under an anti-Turing condition and is in agreement with the blow-up result [16, Theorem 2]. This is actually an indication that, under the condition (p−1)/r > 1 > γ, the qualitative behaviour of the full system (1.1)-(1.4) and that of the non-local problem (1.12)-(1.14) is quite similar.
Remark 3.8. The biological interpretation of Theorem 3.5 is as follows: a large initial concentration of the activator combined with a small net cross-inhibition index can lead to its unlimited growth.
Theorem 3.6. Let γ > 1, r ≥ 1 and (p−1)/r < 1. Assume further that w(0) < ζ(0)^{1−γ} and ζ(0)^{1+γ} > z(0). Then problem (1.12)-(1.14) has a global-in-time solution, i.e. (3.10) holds.
Remark 3.9. Theorem 3.6, on the contrary, deals with the case of global-in-time existence under a Turing condition, and it is also in agreement with Jiang's result in [6]. Consequently, under the assumptions of Theorem 3.6, both the full system (1.1)-(1.4) and the non-local problem (1.12)-(1.14) exhibit the same long-time behaviour.
Remark 3.10. Usually DDI phenomena are connected with pattern formation. The same happens in the case of the diffusion-induced blow-up provided by Theorem 3.7. The form of the destabilising patterns is determined in Section 9.
(p + 1)(1 − γ) < 2. Combining now (5.2) with (5.3) we derive the estimate ‖u(·, t)‖_{H^1} ≤ C for 0 < t < T. (5.4) Now u satisfies u_t = ∆u − u + a(t)u^p, in Ω × (0, ... (⨍_Ω u^{p+1})^γ ≤ C < +∞, due to (3.7). Then, letting A be −∆ + 1 with the homogeneous Neumann boundary condition, we use u(·, t) = e^{−tA} u_0 + ∫_0^t e^{−(t−s)A} a(s) u(·, s)^p ds to apply a standard bootstrap argument. In fact, (3.10) follows from (5.4), 1 < p < (N+2)/(N−2), and the proof of [26, Lemma 8.1].
Lemma 8.1. The function φ_δ defined by (8.2) satisfies the following: (i) There holds that ∆φ_δ ≥ −Na φ_δ^p (8.3) in the weak sense for any 0 < δ < 1. (ii) If m > 0 and N > ma, ... Lemma 8.1 can be used to obtain some further useful estimates. Indeed, if we consider µ > 1 + rγ (8.
(8.13)
It is clear that u blows up in finite time if and only if z does so.
∫_0^1 N z(s, t) s^{N−1} ds = ⨍_Ω z = z̄(t).
Remark 8.2. The blowing-up solution u obtained in Theorem 8.1 exhibits a single-point blow-up at the origin ρ = 0. Recalling that z = e^t u, we obtain the occurrence of single-point blow-up for u in view of Remark 8.1.
Acknowledgments. This work was supported by JSPS Grant-in-Aid Scientific Research (A) 26247013 and the Core-to-Core project. Part of the current work was inspired and initiated when the first author was visiting the Department of System Innovation of Osaka University. He would like to express his gratitude for the warm hospitality. The authors would also like to thank the anonymous reviewers for their stimulating comments, which substantially improved the form of the manuscript.
Combining (8.27), (8.28) and (8.31) we end up with ψ = ρ^{N−1} z_ρ + ε ρ^N z^q / z̄^{γ+1} ≤ 0 in (0, 3/4) × (0, min{t_1, t_0(δ)}), which implies
Remark 8.1. It is worth noting that relation (8.32) implies that if z(ρ, t) blows up then this can only happen at the origin ρ = 0; that is, only a single-point blow-up is possible. In particular, if we define S to be the blow-up set of z, then S = {0} in the case z blows up in finite time.
Next we prove the key estimate (8.22) using essentially Lemmata 8.5 and 8.6.
Proof of Proposition 8.1. By virtue of (8.5) and since (p−1)/r < γ, there holds that ℓ = µ/p > 1. We can easily see that θ = z/z̄^ℓ satisfies in Ω. Therefore, by the standard parabolic regularity, see De Giorgi-Nash-Moser estimates in [14, pages 144-145], there is 0 < t_2 ≤ t_1, independent of 0 < δ ≤ δ_0, such that sup_{0<t<min{t_2,t_0(δ)}} ‖θ^p(·, t) − θ^p(·, 0)‖_{L^1(Ω\B(0,ρ_0))} ≤ (A_2/8)|Ω|, which implies, for 0 < t < min{t_2, t_0(δ)}, (8.33) for any 0 < δ ≤ δ_0. Inequalities (8.25) and (8.33) entail, for 0 < t < min{t_2, t_0(δ)} and 0 < δ ≤ δ_0, and hence, taking into account that Therefore, if we take t_0(δ) ≤ t_2 then it follows that, and by a continuity argument we deduce that, for some η > 0, which contradicts the definition of t_0(δ). Consequently, we obtain t_2 < t_0(δ) for any 0 < δ ≤ δ_0, and the proof is complete with t_0 = t_2.
Now we have all the ingredients to proceed to the proof of the main result of this section.
Proof of Theorem 8.1. Since t_0 ≤ t_1 in (8.24), we have, by virtue of (8.8) and (8.21),
Since 0 < λ ≤ λ_0(d), inequality (8.9) applies to derive ∆u_0 + D u_0^p ≥ 2 u_0^p (8.36) for any 0 < δ ≤ δ_0.
A. Friedman & J.B. McLeod, Blow-up of positive solutions of semilinear heat equations, Indiana Univ. Math. J. 34 (1985) 425-447.
A. Gierer & H. Meinhardt, A theory of biological pattern formation, Kybernetik (Berlin) 12 (1972) 30-39.
J.K. Hale & K. Sakamoto, Shadow systems and attractors in reaction-diffusion equations, Appl. Analysis 32 (1989) 287-303.
B. Hu & H-M. Yin, Semilinear parabolic equations with prescribed energy, Rend. Circ. Mat. Palermo 44 (1995) 479-505.
D. Iron & M. Ward, A metastable spike solution for a nonlocal reaction-diffusion model, SIAM J. Appl. Math. 60(3) (2000) 778-802.
H. Jiang, Global existence of solutions of an activator-inhibitor system, Discrete Contin. Dyn. Syst. 14 (2006) 737-751.
G. Karali, T. Suzuki & Y. Yamada, Global-in-time behavior of the solution to a Gierer-Meinhardt system, Discrete Contin. Dyn. Syst. 33 (2013) 2885-2900.
G. Karch, K. Suzuki & J. Zienkiewicz, Finite-time blowup of solutions to some activator-inhibitor systems, Discrete Contin. Dyn. Syst. 36(9) (2016) 4997-5010.
T. Kato, Perturbation Theory for Linear Operators, Springer, Berlin, 1966.
J. Keener, Activators and inhibitors in pattern formation, Stud. Appl. Math. 59 (1978) 1-23.
H.A. Levine, Some nonexistence and instability theorems for formally parabolic equations of the form Pu_t = −Au + F(u), Arch. Rational Mech. Anal. 51 (1973) 371-386.
F. Li & W.-M. Ni, On the global existence and finite time blow-up of shadow systems, J. Differential Equations 247 (2009) 1762-1776.
F. Li, R. Peng & X. Song, Global existence and finite time blow-up of solutions of a Gierer-Meinhardt system, J. Differential Equations, in press.
G.M. Lieberman, Second Order Parabolic Differential Equations, World Scientific Publishing Co., Inc., River Edge, NJ, 1996.
F. Li & N.K. Yip, Finite time blow-up of parabolic systems with nonlocal terms, Indiana Univ. Math. J. 63(3) (2014) 783-829.
M. Li, S. Chen & Y. Qin, Boundedness and blow up for the general activator-inhibitor model, Acta Math. Appl. Sinica 11 (1995) 59-68.
A. Marciniak-Czochra & A. Mikelić, Shadow limit using renormalization group method and center manifold method, Vietnam J. Math. 45 (2017) 103-125.
A. Marciniak-Czochra, S. Härting, G. Karch & K. Suzuki, Dynamical spike solutions in a nonlocal model of pattern formation, preprint.
K. Masuda & K. Takahashi, Reaction-diffusion systems in the Gierer-Meinhardt theory of biological pattern formation, Japan J. Appl. Math. 4 (1987) 47-58.
F. Merle & H. Zaag, Refined uniform estimates at blow-up and applications for nonlinear heat equations, Geom. Funct. Anal. 8(6) (1998) 1043-1085.
W.-M. Ni, K. Suzuki & I. Takagi, The dynamics of a kinetic activator-inhibitor system, J. Differential Equations 229 (2006) 426-465.
W.-M. Ni, The Mathematics of Diffusion, CBMS-NSF Series, SIAM, 2011.
P. Quittner & Ph. Souplet, Superlinear Parabolic Problems. Blow-up, Global Existence and Steady States, Birkhäuser Verlag, Basel, 2007.
F. Rothe, Global Solution of Reaction-Diffusion Systems, Lecture Notes in Math. 1072, Springer, Berlin, 1984.
D. Sattinger, On global solution of nonlinear hyperbolic equations, Arch. Rational Mech. Anal. 30 (1968) 148-172.
T. Suzuki & T. Senba, Applied Analysis, Mathematical Methods in Natural Science, Imperial College Press, London, 2012.
A.M. Turing, The chemical basis of morphogenesis, Phil. Trans. Roy. Soc. B 237 (1952) 37-72.
J. Wei, Existence and stability of spikes for the Gierer-Meinhardt system, in: Handbook of Differential Equations: Stationary Partial Differential Equations, Vol. V, 487-585, Elsevier/North-Holland, Amsterdam, 2008.
F. Rothe, Global Solutions of Reaction-Diffusion Equations, Lecture Notes in Mathematics 1072, Springer-Verlag, Berlin-Heidelberg-New York, 1984.
H. Zou, Finite-time blow-up and blow-up rates for the Gierer-Meinhardt system, Applicable Analysis 94(10) (2015) 2110-2132.
Department of Mathematics, University of Chester, Thornton Science Park, Pool Lane, Ince, Chester CH2 4NU, UK. E-mail address: [email protected]
Division of Mathematical Science, Department of System Innovation, Graduate School of Engineering Science, Osaka University, Machikaneyamacho 1-3, Toyonakashi, 560-8531, Japan. E-mail address: [email protected]
Will Admins Cope? Decentralized Moderation in the Fediverse
Ishaku Hassan Anaobi
Queen Mary University of London
Aravindh Raman
Telefonica Research
Ignacio Castro
Queen Mary University of London
Haris Bin Zia
Queen Mary University of London
Dami Ibosiola
Queen Mary University of London
Gareth Tyson
Queen Mary University of London
Hong Kong University of Science and Technology (GZ)
As an alternative to Twitter and other centralized social networks, the Fediverse is growing in popularity. The recent, and polemical, takeover of Twitter by Elon Musk has exacerbated this trend. The Fediverse includes a growing number of decentralized social networks, such as Pleroma or Mastodon, that share the same subscription protocol (ActivityPub). Each of these decentralized social networks is composed of independent instances that are run by different administrators. Users, however, can interact with other users across the Fediverse regardless of the instance they are signed up to. The Fediverse's growing user base creates key challenges for administrators, who may face an increasing moderation burden. In this paper, we explore how large that burden is, and whether there are solutions to alleviate it. Studying the overhead of moderation on administrators, we observe a diversity of administrator strategies, with evidence that administrators on larger instances struggle to find sufficient resources. We then propose a tool, WatchGen, to semi-automate the process.
Introduction
The Fediverse encompasses a group of increasingly popular platforms and technologies that seek to provide greater transparency and openness on the web [18,30,34,13]. Well-known Fediverse platforms include microblogging services (e.g. Pleroma [38], Mastodon [33]) and video sharing platforms (e.g. PeerTube [37]). The acquisition of Twitter by Elon Musk [11] has exacerbated this popularity, with a large migration of Twitter users to the Fediverse [8].
In Fediverse social networks, individuals or organisations can install, own, and manage their own independent servers, also known as instances [15,54]. For these instances to interact, they rely on federation [41], whereby instances interconnect in a peer-to-peer fashion to exchange posts. Note that this allows users to exchange content across platforms. This results in a physically decentralized model that is logically interconnected, where users can interact globally. Unfortunately, this creates challenges for instance administrators, as activities on one instance impact others via federation. For example, recent work has shown that hateful material generated on one instance can rapidly spread to others [53].
To overcome this, most Fediverse social network implementations have in-built federation policies. These policies enable administrators to create rules that ban or modify content matching certain criteria, e.g. banning content from a particular instance or associating it with warning tags. Although a powerful tool, this imposes an additional overhead on administrators [26,14,6]. Thus, we argue it is vital to better understand this process, and propose ways to improve it.
This paper examines administrator activities in the Fediverse. We focus on Pleroma, a federated microblogging platform with similar functionality to Twitter. We collect a large-scale dataset covering 10 months: this includes 1,740 instances, 133.8k users, 29.9m posts, associated metadata, and importantly, the policies set up by the administrators. We find that instances are often "understaffed", with the majority of instances having only a single administrator and recruiting no other moderators to assist, despite many having over 100K posts. This leads us to conjecture that some administrators may be overwhelmed. Indeed, we find that instance administrators often take many months before applying policies against other instances, even in cases where those instances exhibit clearly controversial traits (e.g. posting a large number of hate words).
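To make the policy mechanics concrete, the sketch below mimics how a SimplePolicy-style rule set could be evaluated against incoming federated posts. This is an illustrative re-implementation in Python, not Pleroma's actual (Elixir) code; the rule names, instance domains, and post fields are our own assumptions.

```python
# Hypothetical SimplePolicy-like evaluator (illustration only; Pleroma's real
# MRF policies are written in Elixir and configured per instance).
REJECT, TAG_NSFW, ACCEPT = "reject", "tag_nsfw", "accept"

# Per-instance rules set by the administrator (domains are made up).
rules = {
    "badplace.example": REJECT,     # drop all content from this instance
    "spicy.example": TAG_NSFW,      # federate, but force a warning tag
}

def apply_policy(post):
    """Return (action, possibly-modified post) for a federated post dict."""
    action = rules.get(post["instance"], ACCEPT)
    if action == TAG_NSFW:
        post = dict(post, sensitive=True)  # attach the warning tag
    return action, post

action, post = apply_policy({"instance": "spicy.example", "body": "hi"})
print(action, post["sensitive"])   # tag_nsfw True
```

The key design point is that the rules are evaluated locally on each receiving instance, so two instances can treat the same remote content differently.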
We therefore turn our attention to the policy configurations employed. We observe a growing number of instances enacting a wide range of policy types. Common are 'maintenance' policies, such as those which automatically delete older posts (ObjectAgePolicy), as well as those aimed at preventing the spread of certain content (e.g. HashtagPolicy, which flags up posts with certain hashtags). We further observe a range of bespoke policies created by administrators, via the SimplePolicy, which can be configured to trigger a range of actions based on certain rules (e.g. blocking all connections from certain instances). The laborious nature of this moderation work leads us to explore automated techniques to assist administrators. We build a set of models to predict administrator actions. We embed them in WatchGen, a tool that can propose a set of instances for administrators to focus their moderation efforts on. To the best of our knowledge, this is the first study of Fediverse administrators. We make the following observations:
3. Intuitive features, such as the number of mentions and frequent use of hate words, are good indicators that an instance will later have a policy applied against it (Section 6). This suggests that there are key traits that garner more attention by moderators. 4. We show that it is possible to predict (F1=0.77) which instances will have policies applied against them (Section 6) and design WatchGen, a tool that flags particular instances for administrators to pay special attention to.
Pleroma: Overview
Pleroma is a lightweight decentralized microblogging server implementation with user-facing functionality similar to that of Twitter. In contrast to a centralized social network, Pleroma is a federation of multiple independently operated servers (aka instances). Through these instances, users can register accounts and share posts (called statuses) with other users on the same instance, on other Pleroma instances, or on instances from other Fediverse platforms, most notably Mastodon.
Federation. We refer to users registered on the same instance as local, and users on different instances as remote. A user on one instance can follow another user on a separate instance. Note that a user registered on their local instance does not need to register with the remote instance to follow the remote user. When the user wants to follow a user on a remote instance, the local instance subscribes to the remote user on behalf of the local user using an underlying subscription protocol (ActivityPub [2]). This process of peering between instances in the Fediverse is referred to as federation.
The federated network includes instances from Pleroma and other platforms (e.g. Mastodon) that support the same subscription protocol (ActivityPub). Accordingly, Pleroma instances can federate and target their policies at non-Pleroma instances. The resulting network of federated instances is referred to as the Fediverse (with over 23k servers [16]).
Policies. Policies affect how instances federate with each other through different rule-action pairs. These allow certain actions to be executed when a post, user, or instance matches pre-specified criteria. For example, the SimplePolicy can perform a range of actions when a remote instance matches certain criteria such as rejecting connections. Note, there are numerous in-built policies, but tech-savvy administrators can also write their own bespoke policies.
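Conceptually, each policy pairs a matching rule with an action that is executed when a post or instance matches. The following minimal Python sketch illustrates this rule-action pattern only; Pleroma's actual MRF policies are Elixir modules, and the field names used here (`instance`, `text`) are assumptions for illustration:

```python
# Illustrative sketch of the rule-action pattern behind federation policies.
# NOT Pleroma's real implementation; field names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Policy:
    name: str
    rule: Callable[[dict], bool]               # does this post match?
    action: Callable[[dict], Optional[dict]]   # None means "reject"

def apply_policies(post: dict, policies: list[Policy]) -> Optional[dict]:
    """Run each enabled policy in order; a None result rejects the post."""
    for policy in policies:
        if policy.rule(post):
            post = policy.action(post)
            if post is None:
                return None  # rejected, stop processing
    return post

# Example: a SimplePolicy-style rule rejecting posts from a blocked instance.
blocked = {"example-blocked.instance"}
simple_policy = Policy(
    name="SimplePolicy(reject)",
    rule=lambda p: p["instance"] in blocked,
    action=lambda p: None,
)
```

Other actions (e.g. tagging media as sensitive) would return a modified copy of the post instead of `None`.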
Administrators. Instances are hosted and managed by specialized users called administrators. By default, the creator of an instance takes on the role of administrator; however, it is also possible to delegate such responsibilities to multiple others. Instance administrators are responsible for carrying out the day-to-day administrative tasks on their instances. These include managing the front-end, users, uploads, database, emoji packs, and administrative email tasks. The instance administrator is also responsible for accepting new user registrations and removing users where necessary. The administrator updates and backs up the instance, sets the terms of service, and retains the ability to shut down the instance. One essential responsibility of the instance administrator is the moderation of content (although they can also assign this role to other users called moderators). This can make instance administration a cumbersome task, and administrators a very important part of the Fediverse.
Data Collection
Instance & Administrator Dataset. Our measurement campaign covers 16th Dec 2020 – 19th Oct 2021. We first compile a list of Pleroma instances by crawling the directory of instances from distsn.org and the-federation.info. We then capture the list of instances that each Pleroma instance has ever federated with using each instance's Peers API. 1 Note, this includes both Pleroma and non-Pleroma instances. In total, we identify 9,981 instances, out of which 2,407 are Pleroma and the remainder are non-Pleroma (e.g. Mastodon).
We then collect metadata for each Pleroma instance every 4 hours via their public API. 2 We record the list of administrators and any delegated moderators. We also obtain the number of users on the instance, the number of posts, the enabled policies, the applied policies as well as the instances targeted by these policies, and other meta information.
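The snapshotting step can be sketched as below. This is a hedged illustration: the endpoint path and field names (`usage.users.total`, `usage.localPosts`, `metadata.federation.mrf_policies`, `metadata.staffAccounts`) follow the common NodeInfo 2.0 shape exposed by Pleroma, but the authoritative format is defined by the Pleroma API documentation.

```python
# Sketch of one metadata snapshot per instance (taken every 4 hours).
# Field names assume a NodeInfo-2.0-style payload; verify against the API docs.
import json
from urllib.request import urlopen

def fetch_nodeinfo(domain: str) -> dict:
    """Fetch public instance metadata (requires network access)."""
    with urlopen(f"https://{domain}/nodeinfo/2.0.json", timeout=10) as resp:
        return json.load(resp)

def extract_stats(nodeinfo: dict) -> dict:
    """Pull the fields we snapshot from a nodeinfo-like payload."""
    usage = nodeinfo.get("usage", {})
    meta = nodeinfo.get("metadata", {})
    return {
        "users": usage.get("users", {}).get("total", 0),
        "posts": usage.get("localPosts", 0),
        "policies": meta.get("federation", {}).get("mrf_policies", []),
        "staff": meta.get("staffAccounts", []),
    }
```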
From the 2,407 Pleroma instances, we are able to gather data from a total of 1,740 instances (72.28%).
For the remaining 667 instances: 65.1% have non-existent domains, 17.9% are not found (404 status code), 6.4% have private timelines (403), 4.5% result in Bad Gateway (502), 1.3% in Service Unavailable (503), and under 1% return Gone (410).
User Timelines. Users in Pleroma have three timelines: (i) a home timeline, with posts published by the accounts that the user follows (local and remote); (ii) a public timeline, with all the posts generated within the local instance; and (iii) the whole known network, with all posts that have been retrieved from remote instances that the local users follow. Note, the whole known network is not limited to remote posts that a particular user follows: it is the union of remote posts retrieved by all users on the instance. We use the public Timeline API 3 to gather post data from 819 instances (the remaining 912 instances have either no posts or unreachable public timelines).
Ethics. Our dataset covers Pleroma instances and their administrators. We exclusively focus on the policies that these administrators set, and do not investigate other aspects of administrator behavior (e.g. the posts they share). All data is available via public APIs. We emphasize that administrators, themselves, are the ones who control access to these APIs. Hence, the administrators covered in this paper consent for others to use this data. Further, the policies studied do not work on a per-user granularity and, thus, we cannot infer anything about individual users. All data is anonymized before usage, and it is stored within a secure silo.
Exploring Policy Configurations
Policy Footprint. We first quantify the presence of policies across instances. In total, we observe 49 unique policy types. From our 1.74k Pleroma instances, we retrieve policy information from 93.2% of instances (the remainder do not expose their policies). These cover 94.2% of the total users and 94.5% of all posts. Figure 1 shows the distribution of the top 15 policy types enabled by administrators across instances, along with the percentage of users signed up within those instances and the posts on them. We see a wide range of policies with diverse functionalities and varying coverage based on which metric is considered. For instance, whereas the ObjectAgePolicy (which performs an action on a post once it reaches a certain age) is installed on 74.8% of instances, this only covers 52.4% of the users. In contrast, the KeyWordPolicy (which performs an action on any posts containing a given keyword) covers 18. Note, while the TagPolicy allows tagging user posts as sensitive (default: nsfw), the HashtagPolicy allows the tagging of hashtags (e.g. nsfw sensitive). We find 54.6% and 34.3% of instances enabling these policies respectively. The other Pleroma default policy is the NoOpPolicy. This allows any content to be imported, and describes the default state of a new instance. Interestingly, we see administrators paying more attention to this policy: 89.7% of the instances have actively disabled it. 4 This suggests that administrators are aware and concerned about importing undesirable content.
Non-Default Policies. Non-default policies are those that instance administrators have to actively enable. Instances with these policies may indicate a more proactive administrator. We find 45 non-default policies during our data collection period.
The most powerful policy available is the SimplePolicy, enabled on 28.8% of instances. This policy allows administrators to apply a wide range of actions against specific instances (e.g. gab.com). The most impactful and common is the reject action. 5 56.9% of instances that enable the SimplePolicy employ the reject action. Interestingly, although we see only 28.8% of instances with the SimplePolicy enabled, its application affects 85.4% of users and 90.3% of the posts on the Pleroma platform. We see noteworthy instances amongst the top targets of this policy (e.g. kiwifarms.cc and anime.website), which are commonly understood to share controversial material. Interestingly, only 18.5% of instances with the SimplePolicy applied against them are from the Pleroma platform (most are from Mastodon [39]). This means that 81.5% of the recipients are federated instances outside of Pleroma.
Policy Growth. We next look at how the use of policies has changed over time. We conjecture that the longer administrators run their instances, the more experienced they become. As such, we expect to see greater application of policies. Here we focus on the 5 most popular policies as they account for 92.3% of the instances, 73.6% of users and 88.8% of the posts. For completeness, we include the sum of the other less popular policies too. Figure 2 presents the percentage of instances that activate each policy over time. Across our measurement period, we observe a growth of 40% in the total number of policies used. This suggests that the use of policies is becoming more common. 28.5% of these policies are introduced by new instances coming online, with newly installed default policies, e.g. ObjectAgePolicy, TagPolicy and HashtagPolicy. The remainder are instantiated by pre-existing instance administrators that update their policies, suggesting a relatively active subset of administrators.
We also inspect the growth on individual instances. Overall, 42% of instances add policies during our measurement period. Of these instances, 52.3% enable only one extra policy, and we see only a small minority (1.9%) enabling in excess of 5 new policies (e.g. chaos.is enables 13 and poa.st 12). A closer look at these instances shows they mostly add common policies. However, we also see a wide range of other less common policies (e.g. KeywordPolicy).
In contrast, the use of the SimplePolicy, with the most flexible range of moderation actions, has remained relatively stable. Actions under the SimplePolicy have instance-wide effect and can effectively control instance federation. Overall, we only see 28.8% of instances enabling this policy, without much growth across the measurement period (as seen in Figure 2). This could imply that administrators are unaware of this policy, do not have time to moderate their instances at this level, or find this policy too blunt (not fine-grained enough). The latter could lead to other issues, which administrators seek to avoid (e.g. collateral damage [22]). It is also worth noting that the SimplePolicy is one of the most complex, and administrators potentially shy away from these more labour-intensive policies. We argue that the diversity of policies could potentially overwhelm (volunteer) instance administrators (see Section 5). This suggests that they require further support to automate this process (see Section 6).
Characterising Administrators
Distribution of Administrators
Number of Administrators Per-Instance. We observe a total of 2,111 unique administrators from 1,633 instances (93.8% of 1.74k). 6 Figure 3 presents the distribution of the number of administrators per instance. Although a majority of instances (71.6%) are managed by a single administrator, we also see some instances with a larger number of administrators (e.g. rakket.app: 16 and poa.st: 13).
Administrator Workload. We next test if the number of administrators increases proportionately to the number of posts. We treat this as a rudimentary proxy for how much moderation must take place on an instance. Figure 4 presents the distribution of posts on instances vs. the number of administrators. Generally, we find that instances with more posts do have more administrators on average, e.g. instances with multiple administrators have more posts, with a ratio of 6:1. However, this is driven by a few instances (e.g. poa.st). Table 3 summarizes the top 10 instances that see the largest growth in administrators. Many of them are small instances with under 1000 users, and a proportionately small number of posts. This suggests that administrator growth does not necessarily occur on the instances that need it the most. To test if the number of administrators grow proportionately to the number of posts, Figure 5 plots the growth of administrators vs. the growth of posts on each individual instance during our data collection period. We see that a growth in posts on a given instance does not necessarily correspond to the recruitment of new administrators. In fact, only 6.9% of instances record a growth in administrators during this period. Overall, there is a weak correlation (Spearman coefficient of 0.19 for the number of posts vs. number of administrators). In total, we see a 60.3% increase in the number of posts, but just a 35.6% growth in administrators. Unsurprisingly, instances that grow their administrator pool do become more active. On average, instances with a growing number of administrators have 1.5x more policies than other instances. Specifically, looking at the policy with the most impact (reject), these instances apply it 1.8x more than others. Interestingly, instances with an increasing number of administrators also have 4x more policies applied against them.
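The weak rank correlation reported above (Spearman coefficient of 0.19) can be computed as follows; a small pure-Python Spearman implementation (using average ranks for ties), shown over illustrative data rather than our actual dataset:

```python
def rank(values):
    """1-based ranks, averaging the positions of tied values."""
    sorted_vals = sorted(values)
    return [sum(i + 1 for i, v in enumerate(sorted_vals) if v == x)
            / sorted_vals.count(x) for x in values]

def spearman(x, y):
    """Spearman's rho = Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

In practice one would pass the per-instance post counts as `x` and administrator counts as `y` (scipy's `spearmanr` computes the same statistic).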
Administrators' Response Lag
The previous section has shown that administrators face a growing moderation workload. To study this workload, we now look at how long it takes administrators to apply policies against particular instances. We focus on the SimplePolicy as this is clearly geared towards moderation, has instance-wide targeting, and lists the target instances. For each SimplePolicy against a given instance, we compute the lag between the date of the implementation of the policy and the date when the targeted instance was first federated with. This is a rudimentary proxy for how long it took an administrator to identify the problem. We temper our analysis with the fact that there could be many reasons for this delay, which we have limited vantage on.
Policy Creation Delay. Figure 6 presents the distribution of delays (as defined above). Note, we exclude the 55% of federations that occurred before the beginning of our data collection (as we cannot know their timestamp). We plot the delay distributions for applying policies against: (i) all instances; (ii) "controversial" instances with the most policies applied against them (top 10); and (iii) "benign" instances with the fewest policies against them (bottom 10). It takes administrators an average of 82.3 days to apply any form of policy against other instances. Although, on average, it takes more time for a policy to be applied on the "bottom 10" instances than the "top 10" instances (74.7 and 59.5 days respectively), we see that there is a noticeable lag (almost 3 months) between federation occurring and policies being imposed. This may suggest that administrators find it difficult to keep up with the need to rapidly identify instances that justify policy imposition.
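The lag computation itself reduces to a date difference; a minimal sketch with illustrative (hypothetical) dates:

```python
# Days between first federating with a target instance and the first
# policy applied against it. Dates below are illustrative, not real data.
from datetime import date

def policy_delay_days(first_federated: date, policy_applied: date) -> int:
    return (policy_applied - first_federated).days

delays = [
    policy_delay_days(date(2021, 1, 10), date(2021, 4, 2)),  # 82 days
    policy_delay_days(date(2021, 2, 1), date(2021, 2, 20)),  # 19 days
]
mean_delay = sum(delays) / len(delays)
```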
Delay for Controversial Instances. We next extract the top 10 instances that receive the most policies targeted against them. For each one, Figure 7 plots the distribution of delays (i.e. how long it takes other instances to impose a policy against them). In-line with expectations, we see that administrators take less time to apply policies against instances like gab.com, known for its right-wing stance (average of 19 days). However, we see much longer delays for other controversial instances that are less well-known (e.g. neckbeard.xyz), averaging up to 98.4 days. These instances are quite active, with significant growth in posts during our measurement period (e.g. neckbeard.xyz: 789.4k and kiwifarms.cc: 469.2k). With other instances such as anime.website posting "lolicon" (suggestive art depicting prepubescent females), one might expect swift policies; however, we see a very wide breadth of delays. The diverse nature of these administrator reactions indicates that any future automated moderation tools should be specialized to the preferences of individual administrators.
Administrators & Moderators
Moderation Delegation. As administrators are responsible for a wide range of activities, they can delegate the task of content moderation to select individuals. These accounts are referred to as moderators. Of our 1.74k instances, 47% (819) expose this information in our dataset. From these, only 12% (98) of instances have assigned the role of moderator to any other accounts. Of these, 73.5% (72) of the instances have the administrator also doubling as a moderator, while 29.6% (29) assign the entire moderator role to an account that is not the administrator. This implies that only 3.5% of instances have dedicated account(s) assigned the role of moderator.
Are moderators helpful? We conjecture that instances with dedicated moderators outside of their administrator team might be swifter in the application of policies. Figure 8 shows the percentage of instances that enable the 15 most popular policies (Figure 1). We present two bars for each policy: (i) instances with additional moderators (who are not an administrator); and (ii) instances without additional moderators. There is a broadly similar distribution across these two groups. However, we notice that instances without additional moderators have approximately 3x more of the NoOpPolicy configured. Recall, this is the default state of an instance and allows any content to be imported. This begins to suggest that instances with additional moderators do pay greater attention to policies.
We expand this analysis in Figure 9, where we show the number of SimplePolicy actions and the delay to apply a policy after federation (in days) for instances in the two groups. We use the SimplePolicy for this analysis as it is the only moderation policy with instance-wide targeting and a list of targeted instance domains. The plot shows that instances with moderators take less time (average 103 days) to impose a SimplePolicy after federation, compared to instances without dedicated moderators (average 111 days). The figure also shows a marked difference in the number of instances that apply the SimplePolicy. Only 38% of the instances with dedicated moderators apply no SimplePolicy actions, compared to 70% for those without. This confirms that instances with additional moderators are more proactively moderated.
Figure 9: CDF of the number of SimplePolicy actions per instance (X1-axis) and the lag (in days) for instances to impose a policy after federation (X2-axis). We separate instances into (i) those with dedicated moderators; and (ii) those without dedicated moderators.
WatchGen: Automating Moderation
Our results indicate that moderation is labor-intensive. We now explore techniques to assist administrators. We propose WatchGen, 7 a tool that recommends to administrators a "watchlist" of instances that may require federated moderation. This watchlist must be on a per-instance basis, as different administrators may have varying views on what is considered appropriate for the instance they manage. WatchGen helps administrators to more proactively identify instances requiring attention with regards to content moderation. We build WatchGen by compiling a large feature set for each instance, and experimenting with a number of classification models to flag instances that are more likely to require attention.
Feature Selection. We first extract features for each instance. These features include information about user (e.g. number of users) and administrator activities with respect to moderation (e.g. number of rejected instances). We also extract features from post content (e.g. number of hate words in posts). We experiment with a total of 38 features (see Table 5). Through extensive manual experimentation, we distil this down to the 16 most determinant features (highlighted in Table 5).
Model Training. Next, we train multiple machine learning models using the sklearn library, and GridSearchCV within 5-fold cross-validation to find the optimal hyper-parameter settings. We detail below the hyperparameters for each model.
Logistic Regression (LR
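The overall training setup (5-fold cross-validated grid search, evaluated on a held-out split) can be sketched with sklearn on synthetic stand-in data. The hyperparameter grid below is illustrative, not our exact grid, and the synthetic features stand in for the 16 features of Table 5:

```python
# Sketch of the model-selection pipeline: GridSearchCV + 5-fold CV.
# Synthetic data and a small illustrative grid; not the paper's exact setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Stand-in for the per-instance feature matrix (16 features).
X, y = make_classification(n_samples=400, n_features=16, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)  # 80:20 split

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [None, 10]},
    cv=5, scoring="f1",
)
grid.fit(X_train, y_train)
best_model = grid.best_estimator_
f1_on_test = grid.score(X_test, y_test)
```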
Generating a Global Watchlist
Task. We first assume a WatchGen central broker that compiles a global pool of training data, collected from all instances through their public APIs (similar to us in Section 3). We use this global pool of training data, with an 80:20 split, to predict if a given instance will be subject to any policy (by any other instance). We then produce a 'watchlist' of instances that may be worthy of attention.
To investigate how long it would take to garner sufficient data to train WatchGen, we also train several models on datasets covering increasing time windows. We first train on one month of data and increase the training dataset by one month at a time (up to 9 months). For our test dataset, we use the data remaining after the training snapshot.
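The expanding-window evaluation described above can be sketched in pure Python; the month labels below are placeholders for our monthly data buckets:

```python
# Expanding-window splits: train on the first k months, test on the rest.
def expanding_window_splits(months, min_train=1, max_train=9):
    """Yield (train_months, test_months) pairs for k = min_train..max_train."""
    for k in range(min_train, min(max_train, len(months) - 1) + 1):
        yield months[:k], months[k:]

months = [f"2021-{m:02d}" for m in range(1, 11)]  # 10 monthly buckets
splits = list(expanding_window_splits(months))
```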
Results. Table 1 summarizes the results with the global pool of training data (80:20 split), with Random Forest being the best performing model (f1=0.77). Recall that we also run experiments with training sets based on varying time windows. Figure 10 presents the f1 scores based on the size (duration) of the training set. We observe that it takes at least 5 months for a model to achieve its best score (e.g. Gradient Boosted Trees at month 5 and Random Forest at month 7). Note that the training sets are different from Table 1 and hence the scores differ.
Feature Importance. We next inspect which features are most important. This sheds insight into which characteristics are most related to triggering policies. We use the in-built functions for feature importance. Figure 11 presents the feature importance for the explainable models. We see that the top 3 features (transformed post, average number of mentions in a post, and number of posts on an instance) are all related to the number of posts on an instance. This suggests that the likelihood of an instance having a policy applied against it is closely related to the amount of content its users post. In other words, the more users and posts on an instance, the higher the probability of having a policy applied against it. This is expected as such instances are likely to attract more attention.

Table 1: WatchGen performance results when using the global training pool and the full feature set.
Features such as the number of mentions and hate words in the posts also play an important role. This is in-line with prior work that observed how mentions and quote retweets result in more attention [17]. To better understand the importance of these secondary metrics, we retrain the model without the two top features (number of posts and transformed posts). We show the results in Table 4. Confirming our prior assertion, we retain relatively good performance. For Random Forest, we attain an f1 of 0.62 (vs. 0.77 with the full feature set in Table 1). This confirms that these other factors play an important role in determining if an instance has a policy applied against it. In other words, in addition to the size of an instance, other features are required to obtain a fairly good prediction of instances being subject to any policy.
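Ranking features and retraining without the top ones, as described above, can be sketched with sklearn's in-built importances (synthetic stand-in data; the real feature set is that of Table 5):

```python
# Sketch: rank features by importance, then retrain without the top two.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=16, random_state=1)
rf = RandomForestClassifier(random_state=1).fit(X, y)

# Rank features by the model's in-built (normalized) importance scores.
importances = rf.feature_importances_
top_two = np.argsort(importances)[-2:]

# Retrain with the two most important features removed.
X_reduced = np.delete(X, top_two, axis=1)
rf_reduced = RandomForestClassifier(random_state=1).fit(X_reduced, y)
```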
Generating a Local Watchlist
Task. Our prior WatchGen models assume a central pool of training data, aggregated from all instances. This may be infeasible in practice due to the decentralized nature of the Fediverse. Hence, we next investigate how well our best model (Random Forest) performs when decentralizing the training process. For each instance, we extract its federated peers and exclusively build a local training set from their data (using the features highlighted in Table 5). For each pair of instances, we tag whether or not a directed policy is imposed, i.e. each instance only considers the policies it locally sees. Finally, each instance trains its own local model using the first 8 months of data (and tests on the last 2). This creates one independent model per instance. Based on this, WatchGen predicts whether a policy will be applied against the instance.
Results. Figure 12 presents the distribution of performance metrics per instance. As expected, we observe an overall performance drop compared to the prior task based on a global model. Instances attain an average f1 score of 0.55. This is largely due to the significant reduction in per-instance training data. That said, we observe a wide array of performances across the instances: 42.6% of instances achieve above 0.6 f1, with a tail of 8.3% attaining below 0.4 f1. We find that performance is impacted by the training set size. Instances that perform relatively well (>=0.6 f1) tend to be larger (i.e. more posts and users). For example, 65.4% of the best performing instances (>=0.6 f1) have a local post count of over 50k (e.g. neckbeard.xyz and freespeechextremist.com). In contrast, only 4.4% of instances that perform poorly (<0.6 f1) have over 50k posts (e.g. princess.cat and sleepy.cafe). This implies that as instances grow, their local performance will improve. The above experiments show that instances can use these locally trained models to generate a personalized watchlist of instances they peer with. Thus, we argue that these automatically compiled lists can help administrators pay attention to these instances.
Related Work
Social Network Studies. Extensive work has been carried out in the area of online social networks. However, most of these studies are on centralized social networks (e.g. Facebook and Twitter) [29,31,19,28,3,36]. A number of these look at the anatomy of social graphs [23] and moderation challenges [20]. Others look into areas ranging from the evolution of user activities to demographics [32,45]. In contrast to Pleroma, these social networking platforms tend to rely on central (commercial) administrators and moderators [48]. Recent works have examined the standardization of related protocols [27,35]. Bielenberg et al. analyzed the growth, topology and server reliability of Diaspora (a decentralized social network) [4]. Similarly, Zignani et al. studied the evolution of the Mastodon social graph [55]. Our work differs in that we focus on exploring administrator actions within the Fediverse.
Online Moderation. Prior work has investigated the roles that volunteer moderators play in platforms like Twitch [51]. Text-based content classification and filtering has been extensively studied too. This includes computational techniques to detect cyberbullying [12,49,10], anti-social posting [43,25,42,52], and hate speech [9,46,40,50,21,47,24]. These models have proven effective in reducing the workload of human moderators. For example, Cheng et al. [25] use random forest and logistic regression classifiers to predict whether a user will be banned, reducing the manual load on moderators. Similarly, Zia et al. [53] look at detecting the spread of toxic posts specifically in Pleroma (although not administrator reactions). In our prior work, we also studied the use of federation policies [22]. Here, we build on this, with a focus on the actions undertaken by administrators. We further propose WatchGen to assist administrators. To the best of our knowledge, this is the first large-scale study of administrator activities in the Fediverse. We hope that this can further contribute to the wider understanding of moderation in other platforms.
Conclusion and Discussion
We have studied instance administrators in a popular Fediverse platform, Pleroma. Although 66.9% of instances are still running on default policies, we observe an uptake of more sophisticated management functions. We find evidence that some administrators may become overwhelmed with the growing number of posts and users they must manage. For instance, it takes an average of 82.3 days for administrators to apply any policy against a newly federated instance. Another sign of the overhead is that just 3.5% of instances share the load across multiple moderators. This lack of moderators may come with challenges: instances with fewer moderators tend to employ less sophisticated policy strategies (e.g. 70% of them apply no SimplePolicy actions). To alleviate this, we have proposed WatchGen, a tool that identifies instances in need of closer attention. We show that WatchGen can predict which instances will later have a policy imposed (f1 = 0.77).
Our study opens up a number of lines of future work. First, we wish to expand our work to cover other Fediverse platforms, e.g. Mastodon or PeerTube. Second, we plan to experiment with alternate feature sets that can better identify instances that will later require policy attention. Through this we hope to improve WatchGen and pilot its deployment. Last, we want to perform a qualitative study to better understand the subjective opinions of administrators that underlie these trends. We conjecture that such qualitative insights might be invaluable for improving WatchGen.
Figure 1: The top 15 policies and the percentage of instances that use each policy (sorted by the percentage of instances). These cover 92.3% of all instances, 73.6% of users and 88.8% of the posts.

Default Policies. Default policies come auto-enabled with new installations. Prior to version 2.3.0 in March 2021, only the ObjectAgePolicy and NoOpPolicy were enabled by default. Since version 2.3.0, the TagPolicy and HashtagPolicy are also enabled with a new installation (or upgrade). 66.9% of instances only have these default policies running. Relying solely on default policies may indicate several things. For example, administrators may be unaware of management and moderation functionalities, unable to use them, or simply not have sufficient time. Alternatively, they may actively choose not to use them.
Figure 2: Time series showing the percentage of instances (Y-axis) that use the 5 most popular Pleroma policies. We include the sum of all the remaining policies as "Others".
Figure 3: Instances (%) by number of administrators.
Figure 4: Box plot of the number of posts per instance with different numbers of administrators.
Figure 5: Per-instance growth in the number of administrators (Y2-axis) and posts (Y1-axis). Individual instances are on the X-axis, sorted by the number of posts.

Figure 6: CDF showing the distribution of days from federation to moderation for all moderated instances. We also show results for the top 10 and bottom 10 instances, based on the number of policies applied against them.
Figure 7: Box plot showing the distribution of the number of days from federation to the imposition of policies for the top 10 instances with the most policies applied against them.
Figure 8: The percentage of instances that enable the top 15 most popular policies. We separate instances into two groups: (i) instances without additional moderators; and (ii) instances with additional moderators outside of the administrator set.
Figure 10: Time series of f1-scores for the Logistic Regression, Multi-Layer Perceptron, Random Forest and Gradient Boosted Trees models. Note that we exempt month 10 as this leaves insufficient test data.
Figure 11 :Figure 12 :
1112Feature importance for our explainable models. CDF of per-instance performance for Random Forest trained on data from local and federated instances. of studies have focused on the Fediverse or Decentralized Web applications. Raman et al. looked at the challenges in the Fediverse, with a particular focus on the infrastructure and resilience of Mastodon [39]. Trautwein et al. studied the Inter Planetary File System (IPFS), a decentralized storage solution [44]. Guidi et al. and Datta et al. studied the structure, data management, and privacy aspects of decentralized social networks
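The rolling temporal evaluation behind Figure 10 (train on all months up to t, test on month t+1) can be sketched without any ML library; the majority-class baseline and the hand-rolled F1 score here are our own stand-ins, not the paper's models:

```python
from collections import Counter

def f1_score(y_true, y_pred, positive=1):
    """Binary F1 for the class `positive`, computed from scratch."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def rolling_f1(by_month):
    """by_month: list of (features, labels) per month.

    For each month t >= 1, train on months [0, t) and score month t.
    Here the 'model' is a trivial majority-class predictor."""
    scores = []
    for t in range(1, len(by_month)):
        train_labels = [y for _, ys in by_month[:t] for y in ys]
        majority = Counter(train_labels).most_common(1)[0][0]
        _, test_labels = by_month[t]
        preds = [majority] * len(test_labels)
        scores.append(f1_score(test_labels, preds))
    return scores
```

In practice one would substitute the majority-class predictor with the paper's classifiers (Logistic Regression, Random Forest, etc.), keeping the same month-by-month split.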
Policy | Description | % Instances | % Users | % Posts | Growth in Inst. | % Growth in Inst.
ObjectAgePolicy | Applies action based on post age | 74.80 | 57.00 | 65.30 | 352 | 73.50%
TagPolicy | Applies policies to individual users based on tags | 58.50 | 39.40 | 31.30 | 509 | 707.40%
HashtagPolicy | List of hashtags to apply actions against | 36.40 | 16.20 | 21.20 | 479 | 15,833.00%
SimplePolicy | Wide range of actions applied against instances | 28.80 | 39.70 | 36.30 | 83 | 30.70%
NoOpPolicy | Default state of an instance | 11.50 | 5.90 | 3.70 | -98 | -63.70%
StealEmojiPolicy | List of hosts to steal emojis from | 7.00 | 6.10 | 5.40 | 29 | 80.50%
HellthreadPolicy | Performs action when a threshold of mentions is reached | 6.50 | 10.90 | 19.80 | 21 | 42.80%
AntiFollowbotPolicy | Stops bots from following users on the instance | 4.50 | 6.20 | 6.90 | 13 | 40.60%
MediaProxyWarmingPolicy | Crawls attachments using their MediaProxy URLs | 3.60 | 7.00 | 8.30 | 16 | 72.70%
KeywordPolicy | Matches a pattern in a post for an action to be taken | 23.00 | 19.40 | 10.00 | 9 | 36.00%
ForceBotUnlistedPolicy | Makes all bot posts disappear from public timelines | 2.70 | 7.00 | 5.50 | 27 | 675.00%
AntiLinkSpamPolicy | Rejects posts from likely spambots by rejecting posts from new users that contain links | 2.70 | 6.70 | 6.80 | 12 | 85.70%
ActivityExpirationPolicy | Sets a default expiration on all posts made by users of the local instance | 1.30 | 1.20 | 0.73 | 11 | 366.60%
EnsureRePrepended | Rewrites posts to ensure that replies to posts with subjects do not have an identical subject | 1.30 | 0.40 | 1.80 | 6 | 66.60%
NormalizeMarkup | Processes messages through an alternate pipeline | 0.90 | 4.20 | 1.40 | 6 | 150.00%
Table 2: The top 15 policies applied by administrators, with the percentage of instances applying each policy, the percentage of users and posts on the instances applying them, and their growth during our measurement period.

Feature | Description | Type | Value
 | Number of instances where private (DMs, followers-only) activities will not be sent | Count | 1k
mentions count | Number of mentions in user posts on an instance | Count | 24m
hate avg | Average number of hate words on an instance, from hatebase.org | Count | 1.5
url avg | Average number of URLs in user posts on an instance | Count | 0.2
hashtags avg | Average number of hashtags in user posts on an instance | Count | 0.3
mentions avg | Average number of mentions in user posts on an instance | Count | 0.8
hashtags count | Number of hashtags in user posts on an instance | Count | 7m
hate percent | Average percentage of hate words in a post, from hatebase.org | Percentage | 2.2%
url percent | Average percentage of URLs in user posts on an instance | Percentage | 8.4%
hashtags percent | Average percentage of hashtags in user posts on an instance | Percentage |

Table 5: Summary of all extracted features used for model training.
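The count- and average-style features in Table 5 can be derived from raw post text roughly as follows (our own sketch; the regexes and the one-word hate list are illustrative stand-ins, whereas the paper uses the hatebase.org lexicon):

```python
import re

# Illustrative patterns for URLs, hashtags and mentions in post text.
URL_RE = re.compile(r"https?://\S+")
HASHTAG_RE = re.compile(r"#\w+")
MENTION_RE = re.compile(r"@\w+")

def instance_features(posts, hate_words=frozenset({"hateword"})):
    """Compute per-instance counts and per-post averages from a list of posts.

    `hate_words` is a placeholder for the hatebase.org lexicon."""
    n = len(posts)
    urls = sum(len(URL_RE.findall(p)) for p in posts)
    hashtags = sum(len(HASHTAG_RE.findall(p)) for p in posts)
    mentions = sum(len(MENTION_RE.findall(p)) for p in posts)
    hate = sum(1 for p in posts for w in p.lower().split() if w in hate_words)
    return {
        "url_avg": urls / n,
        "hashtags_avg": hashtags / n,
        "mentions_avg": mentions / n,
        "hate_avg": hate / n,
        "hashtags_count": hashtags,
        "mentions_count": mentions,
    }
```

Aggregating these per instance yields the feature vectors that the classifiers in Section 5 are trained on.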
API endpoints queried per instance: instance.uri/api/v1/instance/peers and instance.uri/api/v1/instance/.
Note: this is overridden if a user has enabled any other policy.
This blocks all connections from a given instance.
The remaining instances do not publish their administrator(s) information.
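A minimal crawler over the two endpoints listed above might look as follows (our own sketch using only the standard library; the seed domain is a placeholder, and real instances may rate-limit or block such requests):

```python
import json
from urllib.request import urlopen

def peers_url(instance_uri: str) -> str:
    """URL of the federation-peer list of an instance."""
    return f"https://{instance_uri}/api/v1/instance/peers"

def metadata_url(instance_uri: str) -> str:
    """URL of the instance metadata endpoint."""
    return f"https://{instance_uri}/api/v1/instance/"

def fetch_json(url: str, timeout: float = 10.0):
    # Network call; will raise on unreachable hosts.
    with urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

def crawl(seed: str, max_instances: int = 100):
    """Breadth-first walk of the federation graph starting from `seed`."""
    seen, frontier = {seed}, [seed]
    while frontier and len(seen) < max_instances:
        uri = frontier.pop(0)
        try:
            for peer in fetch_json(peers_url(uri)):
                if peer not in seen:
                    seen.add(peer)
                    frontier.append(peer)
        except OSError:
            continue  # instance unreachable; skip it
    return seen
```

Starting from a handful of well-known seeds and iterating until the frontier is exhausted is the standard way such federation snapshots are collected.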
Acknowledgements
This research was supported by EPSRC grants EP/S033564/1, EP/W032473/1, UKRI DSNmod (REPHRAIN EP/V011189/1), and EU Horizon Framework grant agreement 101093006 (TaRDIS).
A. Datta, S. Buchegger, L.-H. Vu, T. Strufe, and K. Rzadca. Decentralized online social networks. In B. Furht (ed.), Handbook of Social Network Technologies and Applications, Springer, pages 349-378, 2010.
ActivityPub. https://www.w3.org/TR/activitypub/, 2018.
Y.-Y. Ahn, S. Han, H. Kwak, S. Moon, and H. Jeong. Analysis of topological characteristics of huge online social networking services. In Proceedings of the 16th International Conference on World Wide Web, pages 835-844, 2007.
A. Bielenberg, L. Helm, A. Gentilucci, D. Stefanescu, and H. Zhang. The growth of Diaspora: A decentralized online social network in the wild. In INFOCOM Workshops, 2012.
N. A. Arnold, B. Steer, I. Hafnaoui, H. A. Parada G., R. J. Mondragón, F. Cuadrado, and R. G. Clegg. Moving with the times: Investigating the alt-right network Gab with temporal interaction graphs. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2):1-17, 2021.
A. Rajadesingan, P. Resnick, and C. Budak. Quick, community-specific learning: How distinctive toxicity norms are maintained in political subreddits. In Proceedings of the 14th International AAAI Conference on Web and Social Media (ICWSM 2020), pages 557-568, 2020.
B. Guidi, M. Conti, A. Passarella, and L. Ricci. Managing social contents in decentralized online social networks: A survey. Online Social Networks and Media, 7:12-29, 2018.
H. Bin Zia, J. He, A. Raman, I. Castro, N. Sastry, and G. Tyson. Flocking to Mastodon: Tracking the great Twitter migration. arXiv, 2023.
P. Burnap and M. L. Williams. Hate speech, machine classification and statistical modelling of information flows on Twitter: Interpretation and communication for policy decision making. In Internet, Policy and Politics Conference, Oxford, United Kingdom, 2014.
C. Ziems, Y. Vigfusson, and F. Morstatter. Aggressive, repetitive, intentional, visible, and imbalanced: Refining representations for cyberbullying classification. In Proceedings of the 14th International AAAI Conference on Web and Social Media (ICWSM 2020), pages 808-819, 2020.
J. Cox. 30,000 new users signed up for Mastodon after Elon Musk bought Twitter. https://www.vice.com/en/article/n7npd7/30000-new-users-signed-up-for-mastodon-after-elon-musk-bought-twitter, 2022.
K. Dinakar, R. Reichart, and H. Lieberman. Modeling the detection of textual cyberbullying. In The Social Mobile Web, pages 11-17, 2011.
T. V. Doan, T. D. Pham, M. Oberprieler, and V. Bajpai. Measuring decentralized video streaming: A case study of DTube. In IFIP Networking 2020, pages 118-126, 2020.
E. Chandrasekharan, M. Samory, A. Srinivasan, and E. Gilbert. The bag of communities. In Advances in Neural Information Processing Systems, pages 3175-3187, 2017.
M. Farokhmanesh. A beginner's guide to Mastodon, the hot new open-source Twitter clone. https://www.theverge.com/2017/4/7/15183128/mastodon-open-source-twitter-clone-how-to-use, 2017.
K. Garimella, I. Weber, and M. De Choudhury. Quote RTs on Twitter: Usage of the new feature for political discourse. In Proceedings of the 8th ACM Conference on Web Science, pages 200-204, 2016.
B. Guidi, M. Conti, A. Passarella, and L. Ricci. Managing social contents in Decentralized Online Social Networks: A survey. Online Social Networks and Media, 2018.
H. Kwak, C. Lee, H. Park, and S. Moon. What is Twitter, a social network or a news media? In Proceedings of the 19th International Conference on World Wide Web (WWW '10), pages 591-600, 2010.
D. Ibosiola, I. Castro, G. Stringhini, S. Uhlig, and G. Tyson. Who watches the watchmen: Exploring complaints on the web. In The World Wide Web Conference, pages 729-738, 2019.
W. Iqbal, M. H. Arshad, G. Tyson, and I. Castro. Exploring crowdsourced content moderation through the lens of Reddit during COVID-19. In Proceedings of the 17th Asian Internet Engineering Conference, pages 26-35, 2022.
A. I. Hassan, A. Raman, I. Castro, H. B. Zia, E. De Cristofaro, N. Sastry, and G. Tyson. Exploring content moderation in the decentralised web: The Pleroma case. In CoNEXT 2021: Proceedings of the 17th International Conference on emerging Networking EXperiments and Technologies, pages 328-335, 2021.
J. Ugander, B. Karrer, L. Backstrom, and C. Marlow. The anatomy of the Facebook social graph. arXiv preprint arXiv:1111.4503, 2011.
R. T. Javed, M. E. Shuja, M. Usama, J. Qadir, W. Iqbal, G. Tyson, I. Castro, and K. Garimella. A first look at COVID-19 messages on WhatsApp in Pakistan. In 2020 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pages 118-125, IEEE, 2020.
J. Cheng, C. Danescu-Niculescu-Mizil, and J. Leskovec. Antisocial behavior in online discussion communities. In Proceedings of the 9th International Conference on Web and Social Media (ICWSM 2015), pages 61-70, 2015.
J.-K. Lou, K.-T. Chen, and C.-L. Lei. A collusion-resistant automation scheme for social moderation systems. In Conference on Human Factors in Computing Systems Proceedings, pages 1157-1162, February 2016.
P. Khare, M. Karan, S. McQuistin, C. Perkins, G. Tyson, M. Purver, P. Healey, and I. Castro. The web we weave: Untangling the social graph of the IETF. In Proceedings of the International AAAI Conference on Web and Social Media, volume 16, pages 500-511, 2022.
R. Kumar, J. Novak, and A. Tomkins. Structure and evolution of online social networks. In Link Mining: Models, Algorithms, and Applications, Springer, pages 337-357, 2010.
A. L. Traud, P. J. Mucha, and M. A. Porter. Social structure of Facebook networks. Physica A: Statistical Mechanics and its Applications, pages 4165-4180, 2012.
L. La Cava, S. Greco, and A. Tagarelli. Understanding the growth of the Fediverse through the lens of Mastodon. Applied Network Science, 6, 2021.
M. Cha, H. Haddadi, F. Benevenuto, and K. P. Gummadi. Measuring user influence in Twitter: The million follower fallacy. In Proceedings of the 5th International Conference on Web and Social Media (ICWSM '10), 2010.
L. Manikonda, Y. Hu, and S. Kambhampati. Analyzing user activities, demographics, social network structure and user-generated content on Instagram. arXiv preprint arXiv:1410.8099, 2014.
Mastodon. https://joinmastodon.org, 2016.
M. Zignani, C. Quadri, A. Galdeman, S. Gaito, and G. P. Rossi. Mastodon content warnings: Inappropriate contents in a microblogging platform. In Proceedings of the 13th International Conference on Web and Social Media (ICWSM 2019), pages 639-645, 2019.
S. McQuistin, M. Karan, P. Khare, C. Perkins, G. Tyson, M. Purver, P. Healey, W. Iqbal, J. Qadir, and I. Castro. Characterising the IETF through the lens of RFC deployment. In Proceedings of the 21st ACM Internet Measurement Conference, pages 137-149, 2021.
S. A. Myers, A. Sharma, P. Gupta, and J. Lin. Information network or social network? The structure of the Twitter follow graph. In Proceedings of the 23rd International Conference on World Wide Web, pages 493-498, 2014.
PeerTube. https://joinpeertube.org, 2018.
Pleroma. https://pleroma.social/, 2018.
A. Raman, S. Joglekar, E. De Cristofaro, N. Sastry, and G. Tyson. Challenges in the decentralised web: The Mastodon case. In ACM IMC, pages 217-229, October 2019.
A. H. Razavi, D. Inkpen, S. Uritsky, and S. Matwin. Offensive language detection using multi-level classification. In Canadian Conference on Artificial Intelligence, Springer, pages 16-27, 2010.
L. Schwittmann, C. Boelmann, M. Wander, and T. Weis. SoNet: Privacy and replication in federated online social networks. In Distributed Computing Systems Workshops, 2013.
S. O. Sood, E. F. Churchill, and J. Antin. Automatic identification of personal insults on social news sites. Journal of the American Society for Information Science and Technology, pages 270-285, 2012.
S. Chancellor, Z. Lin, and M. De Choudhury. "This post will just get taken down": Characterizing removed pro-eating disorder social media content. In 6th IEEE Consumer Communications and Networking Conference (CCNC 2009), pages 1157-1162, 2009.
D. Trautwein, A. Raman, G. Tyson, I. Castro, W. Scott, M. Schubotz, B. Gipp, and Y. Psaras. Design and evaluation of IPFS: A storage layer for the decentralized web. In Proceedings of the ACM SIGCOMM 2022 Conference, pages 739-752, 2022.
B. Viswanath, A. Mislove, M. Cha, and K. P. Gummadi. On the evolution of user interaction in Facebook. In Proceedings of the 2nd ACM Workshop on Online Social Networks, pages 37-42, 2010.
W. Warner and J. Hirschberg. Detecting hate speech on the world wide web. In Proceedings of the Second Workshop on Language in Social Media, Association for Computational Linguistics, pages 19-26, 2014.
Z. Waseem and D. Hovy. Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT), pages 88-93, 2016.
E. Wauters, V. Donoso, and E. Lievens. Optimizing transparency for users in social networking sites. info, 2014.
J.-M. Xu, B. Burchfiel, X. Zhu, and A. Bellmore. An examination of regret in bullying tweets. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT), pages 697-702, 2011.
Z. Xu and S. Zhu. Filtering offensive language in online communities using grammatical relations. In Proceedings of the Seventh Annual Collaboration, Electronic Messaging, Anti-Abuse and Spam Conference, pages 1-10, 2010.
D. Y. Wohn. Volunteer moderators in Twitch micro communities. Pages 1-13, 2013.
H. B. Zia, I. Castro, and G. Tyson. Racist or sexist meme? Classifying memes beyond hateful. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pages 215-219, 2021.
H. B. Zia, A. Raman, I. Castro, I. H. Anaobi, E. De Cristofaro, N. Sastry, and G. Tyson. Toxicity in the decentralized web and the potential for model sharing. ACM SIGMETRICS, 2022.
M. Zignani, S. Gaito, and G. P. Rossi. Follow the "Mastodon": Structure and evolution of a decentralized online social network. In ICWSM, 2018.
M. Zignani, S. Gaito, and G. P. Rossi. Follow the "Mastodon": Structure and evolution of a decentralized online social media. In ICWSM, pages 541-550, 2018.
Higgs production and decay with a fourth Standard-Model-like fermion generation

LHC Higgs Cross Section Working Group

A. Denner (Institut für Theoretische Physik und Astrophysik, Universität Würzburg, D-97074 Würzburg, Germany),
S. Dittmaier (Physikalisches Institut, Albert-Ludwigs-Universität Freiburg, D-79104 Freiburg, Germany),
A. Mück (Institut für Theoretische Teilchenphysik und Kosmologie, RWTH Aachen, D-52056 Aachen, Germany),
G. Passarino (Dipartimento di Fisica Teorica, Università di Torino, and INFN, Sezione di Torino, Italy),
M. Spira (Paul Scherrer Institut, Würenlingen und Villigen, CH-5232 Villigen PSI, Switzerland),
C. Sturm (Max-Planck-Institut für Physik (Werner-Heisenberg-Institut), D-80805 München, Germany),
S. Uccirati (Institut für Theoretische Physik und Astrophysik, Universität Würzburg, D-97074 Würzburg, Germany),
M. M. Weber (Max-Planck-Institut für Physik (Werner-Heisenberg-Institut), D-80805 München, Germany)

March 2012
State-of-the-art predictions for the Higgs-boson production cross section via gluon fusion and for all relevant Higgs-boson decay channels are presented in the presence of a fourth Standard-Model-like fermion generation. The qualitative features of the most important differences to the genuine Standard Model are pointed out, and the use of the available tools for the predictions is described. For a generic mass scale of 400−600 GeV in the fourth generation explicit numerical results for the cross section and decay widths are presented, revealing extremely large electroweak radiative corrections, e.g., to the cross section and the Higgs decay into WW or ZZ pairs, where they amount to about −50% or more. This signals the onset of a non-perturbative regime due to the large Yukawa couplings in the fourth generation. An estimate of the respective large theoretical uncertainties is presented as well.
Introduction
In recent years, intensive studies at the LHC have aimed at setting exclusion limits on an extension of the Standard Model (SM) with an additional fourth generation of heavy fermions. Besides direct searches for heavy quarks [1,2], Higgs production in gluon fusion (gg-fusion) is an important channel in this respect [3,4], as it is particularly sensitive to new coloured, heavy particles. Given the spectacular modification of the Higgs-boson cross section at hadron colliders, which can be tested easily with LHC data, a SM with a fourth generation of heavy fermions stimulates great interest.
So far, the experimental analysis has concentrated on models with ultra-heavy fourth-generation fermions, excluding the possibility that the Higgs boson decays to heavy neutrinos. Furthermore, in the literature [8] the two-loop electroweak corrections to gg-fusion have been included only under the assumption that they are dominated by light fermions. At the moment, however, the experimental strategy consists in computing the ratio of Higgs-production cross sections in the SM with a 4th generation of fermions (SM4) and the SM with 3 generations (SM3), R = σ(SM4)/σ(SM3), with HIGLU [9] while all next-to-leading-order (NLO) electroweak (EW) radiative corrections are switched off. The experimental situation is as follows: the search in all channels, updated for the International Europhysics Conference on High Energy Physics 2011 (HEP2011) and the XXV International Symposium on Lepton Photon Interactions at High Energies (LP11), requires Higgs-boson masses M H < 120 GeV or M H > 600 GeV (ATLAS and CMS ex-aequo [10]). At low M H , LHC limits are more stringent than Tevatron limits. However, in all existing analyses complete NLO EW corrections are not included. Therefore, changes of up to 10 GeV are expected in limits at the low end while changes of the order of 30 GeV are possible in the high-mass region [10].
Leading-order (LO) or NLO QCD predictions typically depend only weakly on the precise values of the masses of the heavy fermions and approach a constant value in the limit of very heavy fermion masses. In contrast, NLO EW corrections are enhanced by powers of the masses of the heavy fermions and thus induce a strong dependence of the results on these masses and a breakdown of perturbation theory for very heavy fermions.
While the complete electroweak corrections to Higgs production in SM4 at the LHC have already been calculated in Ref. [11], we present in this paper for the first time results for all relevant Higgs-boson decay channels including NLO electroweak corrections in SM4. For ultraheavy fermions the leading corrections can be obtained easily within an effective theory [12]. However, for heavy fermions with masses at the level of 500 GeV the asymptotic results are not precise enough and in particular for a heavy Higgs boson they are not valid. Including the complete NLO corrections, we discuss the corresponding predictions for various scenarios of heavy fermion masses and provide estimates of the theoretical uncertainties.
The paper is organized as follows: In Section 2 we define our general setup. In Section 3 we describe the calculation of the SM4 contributions to Higgs-boson production via gluon fusion and in Section 4 those for Higgs-boson decays into 4 fermions, fermion pairs, gluon pairs, photon pairs and photon plus Z boson. In Section 5 we present numerical results, and Section 6 contains our conclusions.
General setup
We study the extension of the SM that includes a 4th generation of heavy fermions, consisting of an up- and a down-type quark (t′, b′), a charged lepton (l′), and a massive neutrino (ν_l′). The 4th-generation fermions all have the same gauge couplings as their SM copies and equivalent Yukawa couplings proportional to their masses, but are assumed not to mix with the other three SM generations.
Experimentally, 4th-generation fermions are strongly constrained. Direct experimental searches from the Tevatron [5,6] and the LHC [1,2] yield lower limits, in particular on the masses of the heavy quarks:
    m_b′ > 361 GeV,    m_t′ > 450 GeV    at 95% CL.    (2.1)
Stringent bounds on the mass splittings of the heavy fermions result from electroweak precision data [13], more precisely from experimental constraints on the S and T parameters of Peskin and Takeuchi [14]. These constraints typically require mass splittings for the heavy quarks and leptons. Nevertheless, a mass-degenerate 4th family is not excluded either, if one allows for flavour mixing of the 4th-generation fermions [15]. While 4th-generation models can accommodate a heavier Higgs boson than the SM3, very large masses of a SM-like Higgs boson are not favoured [16].
Since the Yukawa couplings of the heavy fermions are proportional to their masses, perturbation theory breaks down for masses of the heavy fermions above ∼ 500 GeV [12]. In the presence of heavy fermions, non-perturbative analyses on the lattice push the allowed Higgs masses to larger values [17].
The main goal of this paper is to provide the electroweak corrections within SM4 for Higgs production and decay. Owing to screening (see Section 3), LO or NLO QCD predictions typically depend only weakly on the precise values of the masses of the heavy fermions. Therefore, experimental analyses used very heavy masses for the extra fermions in order to derive conservative limits. When complete NLO EW corrections are included, the situation changes dramatically. Since the NLO EW corrections are enhanced by powers of the masses of the heavy fermions, perturbation theory breaks down for fermion masses above ∼ 500 GeV and perturbative results become questionable. Therefore, we focus on 4th-generation masses between 400 and 550 GeV, i.e. values above the direct search bounds but small enough for perturbation theory to be still viable, and study different scenarios that are in agreement with electroweak precision tests. In detail, we consider scenarios that are consistent with the constraints derived in Ref. [18] (see in particular Figure 13). We choose
    m_t′ = 500 GeV,    m_l′ = 450 GeV    (2.2)
and consider three different mass splittings for the heavy quarks and leptons, each for three values of the Higgs-boson mass:

    M_H [GeV]              | 120         | 350         | 600
    m_t′ - m_b′ [GeV]      | -50, 0, +50 | -50, 0, +50 | -50, 0, +50
    m_ν_l′ - m_l′ [GeV]    | -                                          (2.3)

Moreover, we provide a scan over Higgs-boson masses from 100 GeV to 600 GeV for the scenario

    m_t′ = 500 GeV,  m_l′ = 450 GeV,  m_b′ = 450 GeV,  m_ν_l′ = 375 GeV,    (2.4)
which is a particular case of (2.2)/(2.3). Note that for this range of Higgs-boson masses, the decay of the Higgs boson into a pair of heavy fermions is kinematically not allowed in the scenarios considered above. In addition, we provide results for the extreme scenario
    m_b′ = m_l′ = m_ν_l′ = 600 GeV,
    m_t′ = m_b′ + (1 + (1/5) ln(M_H / 115 GeV)) × 50 GeV,    (2.5)
where the relation among the heavy fermion masses is used to avoid current exclusion limits from EW precision data (see Ref. [13]). This setup is at the border between the perturbative and the non-perturbative regime. It is as close as possible to the infinite 4th-generation case, which was used by ATLAS and CMS to get conservative exclusion limits, and in fact was employed to derive experimental limits on the Higgs-boson mass within SM4 [4].
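For orientation, the mass relation of Eq. (2.5) can be evaluated directly. The following sketch is our own illustration (the function name is not from the paper):

```python
import math

def mtprime(m_bprime_gev: float, m_h_gev: float) -> float:
    """Heavy-quark mass relation of scenario (2.5):
    m_t' = m_b' + [1 + (1/5) ln(M_H / 115 GeV)] * 50 GeV."""
    return m_bprime_gev + (1.0 + 0.2 * math.log(m_h_gev / 115.0)) * 50.0

# At M_H = 115 GeV the logarithm vanishes, so the t'-b' splitting is exactly 50 GeV.
print(mtprime(600.0, 115.0))  # -> 650.0
```

The splitting grows only logarithmically with M_H, so m_t' stays within a few tens of GeV of m_b' over the whole mass range considered.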
In the extreme scenario (2.5), we give results for Higgs masses between 100 GeV and 1 TeV for an on-shell Higgs boson. For Higgs masses above ∼ 500 GeV, the off-shellness of the Higgs boson becomes relevant, and finite-width effects and background contributions can become important. A treatment of these effects is very difficult and beyond the scope of the present paper. Attempts to describe these effects in the SM can be found in Refs. [19,20] and a discussion of the corresponding theoretical uncertainties in Ref. [21].
Higgs-boson production via gluon fusion
In the Standard Model with three fermion generations, Higgs-boson production via gluon fusion is determined at leading order essentially by the single one-loop diagram with a top quark running in the loop (the bottom-quark loop can be neglected in a first approximation). Despite the Yukawa coupling being proportional to the top-quark mass, the LO amplitude asymptotically approaches a constant at large m_t (screening). Moving from SM3 to SM4, the LO gg-fusion cross section for a light Higgs boson is then about nine times larger than the one of SM3, because three heavy fermions instead of one propagate in the loop [22].
The screening behaviour at leading order is preserved by QCD corrections [8,23]. Concerning the EW corrections, the leading behaviour for high values of the fourth-generation masses has long been known [24,25] (see also Ref. [26]) and shows an enhancement of radiative corrections proportional to the square of the (heavy) fermion masses. This enhancement is, however, accidentally spoiled in the quark sector in the presence of degenerate t′ and b′ quarks, while it still survives in the (heavy) lepton sector. Recently the complete two-loop EW corrections to Higgs-boson production through gg-fusion at the LHC in SM4 have been computed in Ref. [11] by extending the corresponding calculations of Refs. [27,28] in SM3. In Ref. [11] explicit results have been given in the scenario (2.5) of large fourth-generation masses; in this section we determine the complete two-loop EW corrections using the same methods, however, for different mass scenarios.
Let us start with the scan over Higgs-boson masses specified in Eq. (2.4) of Section 2. The relative EW two-loop corrections δ^(4)_EW due to the fourth generation are positive for a light Higgs-boson mass and start to become negative for Higgs-boson masses above 260 GeV. Figure 1 shows the behaviour of δ^(4)_EW as a function of M_H; the corresponding numbers are listed in Table 1.
In addition to these scenarios for the masses of the fourth generation of fermions, we have also performed a scan in the m_b'–m_ν_l' plane as given in Eq. (2.3) for fixed values of the masses m_t' = 500 GeV, m_l' = 450 GeV and for three values of the Higgs-boson mass M_H = 120, 350, 600 GeV. These results for the relative correction are listed in Table 2.
For the mass scenario of Eq. (2.5) the EW NLO corrections reach −100% just before the heavy-quark thresholds of the 4th generation, and also for the mass scenario (2.4) the EW NLO corrections become sizable when approaching the heavy-quark thresholds, making in both cases the use of the perturbative approach questionable. In the high-mass region we have no solid argument to estimate the remaining uncertainty and prefer to state that SM4 is in a fully nonperturbative regime which should be approached with extreme caution. For the low-mass region we can do no more than make educated guesses, based on the expected asymptotic behaviour for a heavy fourth generation. At EW NNLO there are diagrams with five Yukawa couplings; we can therefore expect an enhancement at 3 loops which goes as the fourth power of the heavy-fermion mass, unless some accidental screening occurs. Therefore, assuming a quartic leading 3-loop behaviour in the heavy-fermion mass m_f', we estimate the remaining uncertainty to be of the order of (α/π)² (m_f'/M_W)⁴ and thus ∼ 2% for the scenario (2.5) in the interval M_H = 100−600 GeV, and even less for the scenarios (2.2)/(2.3).

Table 1: Relative NLO EW corrections δ^(4)_EW [%] to the gg → H cross sections in SM4 as a function of M_H [GeV], for the mass scenario m_t' = 500 GeV, m_b' = 450 GeV, m_ν_l' = 375 GeV, m_l' = 450 GeV. The absolute numerical integration error is well below 0.01% for Higgs-boson masses below the tt-threshold and below 0.05% above it.

Having computed the EW corrections δ^(4)_EW, we should discuss some aspects of their inclusion in the production cross section σ(gg → H + X), i.e. their interplay with QCD corrections and the remaining theoretical uncertainty. The most accepted choice is given by
  σ_F = σ_LO (1 + δ_QCD)(1 + δ_EW),    (3.2)
which assumes complete factorization of QCD and EW corrections. The latter is based on the work of Ref. [29], where it is shown that, at zero Higgs momentum, exact factorization is violated but with a negligible numerical impact; the result of Ref. [29] can be understood in terms of soft-gluon dominance. The residual part beyond the soft-gluon-dominated part contributes up to 5−10% to the total inclusive cross section (for Higgs-boson masses up to 1 TeV).

Table 2: Relative NLO EW corrections δ^(4)_EW [%] to the gg → H cross sections in SM4 for three different values of the Higgs-boson mass M_H (120, 350, 600 GeV), with fixed values for the masses m_t' = 500 GeV, m_l' = 450 GeV and different values for the masses m_b', m_ν_l'. The absolute numerical integration error is well below 0.002% for M_H = 120 GeV and below 0.05% for the other Higgs-boson masses.

H → WW/ZZ → 4f

The additional corrections in SM4 arise from 4th-generation fermion loops in the HWW/HZZ vertices, the gauge-boson self-energies, and the renormalization constants. For the large 4th-generation masses of O(400−600 GeV) considered here, the 4th-generation Yukawa couplings are large, and the total corrections are dominated by the 4th-generation corrections. Numerically the NLO corrections amount to about −50% for the scenarios (2.2)/(2.3) and −85% for the extreme scenario (2.5) and depend only weakly on the Higgs-boson mass for not too large M_H. The corrections from the 4th generation are taken into account at NLO with their full mass dependence, but their behaviour for large masses can be approximated well by the dominant corrections in the heavy-fermion limit. In this limit the leading contribution can be absorbed into effective HWW/HZZ interactions in the G_µ renormalization scheme via the Lagrangian

  L_HVV = √2 G_µ H [ 2 M_W² W†_µ W^µ (1 + δ^tot_W) + M_Z² Z_µ Z^µ (1 + δ^tot_Z) ],    (4.1)
where W, Z, and H denote the fields of the W, Z, and Higgs bosons. The higher-order corrections are contained in the factors δ^tot_V, whose expansion up to two-loop order is given by
  δ^tot(1)_V = δ^(1)_u + δ^(1)_V,   δ^tot(2)_V = δ^(2)_u + δ^(2)_V + δ^(1)_u δ^(1)_V.    (4.2)
The one-loop expressions for a single SU(2) doublet of heavy fermions with masses m_A, m_B read [12]

  δ^(1)_u = N_c X_A [ (7/6)(1 + x) + x/(1 − x) ln x ],   δ^(1)_V = −2 N_c X_A (1 + x),    (4.3)

where x = m_B²/m_A², X_A = G_µ m_A²/(8√2 π²), and N_c = 3 or 1 for quarks or leptons, respectively. The results for the two-loop corrections δ^tot(2)_V can be found in Ref. [33] for the QCD corrections of O(α_s G_µ m_f'²) and in Ref. [25] for the EW corrections of O(G_µ² m_f'⁴). The corrected partial decay width Γ is then given by
  Γ_NLO ≈ Γ_LO (1 + δ^(1)_Γ + δ^(2)_Γ) = Γ_LO [ 1 + 2δ^tot(1)_V + (δ^tot(1)_V)² + 2δ^tot(2)_V ].    (4.4)
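As a numerical sketch (our own illustration, with an assumed value of G_µ), Eqs. (4.2)–(4.4) can be evaluated at one loop for the mass scenario (2.4); summing the quark and lepton doublets qualitatively reproduces the quoted correction of about −50%:

```python
import math

G_MU = 1.16637e-5  # Fermi constant in GeV^-2 (assumed input value)

def delta_one_loop(m_a, m_b, n_c):
    """delta_u^(1) + delta_V^(1) of Eq. (4.3), i.e. delta_tot^(1) of Eq. (4.2),
    for one heavy SU(2) doublet with masses (m_A, m_B)."""
    x = (m_b / m_a) ** 2
    x_a = G_MU * m_a ** 2 / (8.0 * math.sqrt(2.0) * math.pi ** 2)
    delta_u = n_c * x_a * (7.0 / 6.0 * (1.0 + x) + x / (1.0 - x) * math.log(x))
    delta_v = -2.0 * n_c * x_a * (1.0 + x)
    return delta_u + delta_v

# Scenario (2.4): quark doublet (t', b') and lepton doublet (l', nu_l').
d_quarks = delta_one_loop(500.0, 450.0, n_c=3)
d_leptons = delta_one_loop(450.0, 375.0, n_c=1)
d_tot1 = d_quarks + d_leptons

# Eq. (4.4) truncated at one loop: Gamma_NLO / Gamma_LO = 1 + 2 delta_tot^(1).
gamma_ratio = 1.0 + 2.0 * d_tot1   # roughly 0.5, i.e. a correction near -50%
```

Both doublets contribute negatively, the δ^(1)_V term dominating over δ^(1)_u.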
The size of the two-loop corrections δ^(2)_Γ is about +(6−9)% for the scenarios (2.2)/(2.3) and +15% for the extreme scenario (2.5), depending only very weakly on the Higgs mass. Due to the large one-loop corrections, Prophecy4f includes the two-loop QCD and EW corrections in the heavy-fermion limit in addition to the exact one-loop corrections. Although the asymptotic two-loop corrections are not directly applicable for a heavy Higgs boson, they can be viewed as a qualitative estimate of the two-loop effects. One should keep in mind that for a Higgs boson heavier than about 600 GeV many more uncertainties arise owing to the breakdown of perturbation theory.
The leading two-loop terms can be taken as an estimate of the error from unknown higher-order corrections. This implies an error relative to the LO of 7% for the scenarios (2.2)/(2.3) and 15% for the extreme scenario (2.5) on the partial width for all H → 4f decay channels. Assuming a scaling law of this error proportional to X_A², the uncertainty for general mass scenarios can be estimated to about 100 X_A² relative to the LO prediction. However, since the correction grows large and negative, the relative uncertainty on the corrected width gets enhanced to 100 X_A² / (1 − 64 X_A/3 + 100 X_A²), where the linear term in X_A parametrizes the leading one-loop correction. For the mass m_A in X_A either the weighted squared average m_A² = N_c (m_b'² + m_t'²) + m_l'² + m_ν_l'² or the maximal mass m_A = max(m_b', m_t', m_l', m_ν_l') should be used. For m_f' = 500 GeV and m_f' = 600 GeV this results roughly in an uncertainty of 14% and 50%, respectively, on the corrected H → 4f decay widths.
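The enhancement of the relative uncertainty can be checked numerically. A minimal sketch (our own, using a single mass m_A instead of the weighted average, and an assumed value of G_µ):

```python
import math

G_MU = 1.16637e-5  # Fermi constant in GeV^-2 (assumed input value)

def x_a(m_gev):
    """X_A = G_mu m_A^2 / (8 sqrt(2) pi^2) as defined below Eq. (4.3)."""
    return G_MU * m_gev ** 2 / (8.0 * math.sqrt(2.0) * math.pi ** 2)

def rel_uncertainty_h4f(m_gev):
    """100 X_A^2 / (1 - 64 X_A/3 + 100 X_A^2): estimated uncertainty of the
    NLO-corrected H -> 4f width, relative to the corrected width itself."""
    xa = x_a(m_gev)
    return 100.0 * xa ** 2 / (1.0 - 64.0 * xa / 3.0 + 100.0 * xa ** 2)

# With m_A = 500 GeV this lands near the quoted ~14%; the exact numbers
# depend on whether the weighted average or the maximal mass is used.
print(rel_uncertainty_h4f(500.0))
```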
H → ff
The decay widths for H → ff are calculated with HDECAY [34], which includes the approximate NLO and NNLO EW corrections for the decay channels into SM3 fermion pairs in the heavy-SM4-fermion limit according to Ref. [25] and mixed NNLO EW/QCD corrections according to Ref. [33]. These corrections originate from the wave-function renormalization of the Higgs boson and are thus universal for all fermion species. The leading one-loop part is given by δ^(1)_u of Eq. (4.3). Numerically the EW one-loop correction to the partial decay widths into fermion pairs amounts to about +25% or +40% for the scenarios (2.2)/(2.3) or (2.5), respectively, while the two-loop EW and QCD correction contributes an additional +5% or +20%. The corrections are assumed to factorize from whatever is included in HDECAY, since the approximate expressions emerge as corrections to the effective Lagrangian after integrating out the heavy fermion species. Thus, HDECAY multiplies the relative SM4 corrections with the full corrected SM3 result including QCD and approximate EW corrections. The scale of the strong coupling α_s has been identified with the average mass of the heavy quarks t', b' of the 4th generation.
The unknown higher-order corrections from heavy fermions can be estimated, as for the decay H → 4f above, from the size of the leading two-loop corrections. Since the corrections enhance the LO prediction, the uncertainty relative to the corrected width, which we estimate as 100 X_A² / (1 + 32 X_A/3 + 100 X_A²), is reduced, resulting in a theoretical uncertainty of the SM4 part of the full partial decay widths into fermion pairs of 5% and 10% for the scenarios (2.2)/(2.3) and (2.5), respectively. The uncertainties of the SM3 EW and QCD parts are negligible with respect to that.
H → gg, γγ, γZ
For the decay modes H → gg, γγ, γZ, HDECAY [34] is used as well.
For H → gg, HDECAY includes the NNNLO QCD corrections of the SM in the limit of a heavy top quark [23,[35][36][37], applied to the results including the heavy-quark loops. While at NNLO the exact QCD corrections in SM4 [8] are included in this limit, at NNNLO the relative SM3 corrections are added to the relative NNLO corrections and multiplied by the LO result including the additional quark loops. Since the failure of such an approximation is less than 1% at NNLO, we assume that at NNNLO it is negligible, i.e. much smaller than the residual QCD scale uncertainty of about 3%. In addition, the full NLO EW corrections of Section 3 have been included in factorized form, since the dominant part of the QCD corrections emerges from the gluonic contributions on top of the corrections to the effective Lagrangian in the limit of heavy quarks. Taking into account, besides the scale uncertainty, also the missing quark-mass dependence at NLO and beyond, the total theoretical uncertainty can be estimated at about 5%.
HDECAY [34] includes the full NLO QCD corrections to the decay mode H → γγ supplemented by the additional contributions of the 4th-generation quarks and charged leptons according to Refs. [23,38].
Extending the same techniques used for H → gg in Ref. [11], we have computed the exact amplitude for H → γγ up to NLO (two-loop level). For phenomenological reasons we restrict the analysis to the range M_H ≲ 150 GeV. The introduction of EW NLO corrections to this decay requires particular attention. If we write the amplitude as
  A = A_LO + X_W A_NLO + X_W² A_NNLO + …,   X_W = G_µ M_W² / (8√2 π²),    (4.5)
the usual way to include the NLO EW corrections is
  |A|² ≃ |A_LO|² + 2 X_W Re[A_NLO A†_LO] = |A_LO|² (1 + δ^(4)_EW),    (4.6)

with

  δ^(4)_EW = 2 X_W Re[A_NLO A†_LO] / |A_LO|².    (4.7)
From the explicit calculation it turns out that in all scenarios taken into consideration, δ^(4)_EW is negative and its absolute value is bigger than 1. Part of the problem is related to the fact that at LO the cancellation between the W and the fermion loops is stronger in SM4 than in SM3, so that the LO result is suppressed more, by about a factor of 2 at the level of the amplitude and thus by about a factor of 4 at the level of the decay width. Furthermore, the NLO corrections are strongly enhanced for ultra-heavy fermions in the 4th generation; assuming for instance the mass scenario of Eq. (2.5) for the heavy fermions and a Higgs-boson mass of 100 GeV, we get δ^(4)_EW = −319%. This clearly does not make sense, and one should always remember that a badly behaving series should not be used to derive limits on the parameters, i.e. on the heavy-fermion masses. The scenario (2.4) is even more subtle.
In such a situation, where the LO is suppressed, a proper estimate of |A|² must also include the next term in the expansion, i.e. X_W² |A_NLO|²:

  |A|² ≃ |A_LO + X_W A_NLO|² = |A_LO|² (1 + δ̃^(4)_EW),   with   δ̃^(4)_EW = |A_LO + X_W A_NLO|² / |A_LO|² − 1.    (4.8)
We define at the amplitude level the K-factor

  A_LO + X_W A_NLO = A_LO (1 − K_NLO).    (4.9)
K_NLO is a complex quantity, but the imaginary part of A_LO is small, and therefore the major part of the NLO correction comes from the real part of K_NLO, which is positive in both scenarios. The relation between δ̃^(4)_EW and K_NLO is:

  δ̃^(4)_EW = Re[K_NLO] (Re[K_NLO] − 2) + Im[K_NLO]².    (4.10)

In the scenarios considered, Re[K_NLO] is close to one; hence not only is A_LO small, but A = A_LO + X_W A_NLO is small as well (even smaller). Therefore it turns out that δ̃^(4)_EW is large (close to one in absolute value), and a description of the NLO corrections based solely on δ̃^(4)_EW could lead to the conclusion that perturbation theory breaks down. However, this conclusion would be too strong. The point is: (a) A_LO is accidentally small; (b) X_W A_NLO is large, as expected, but it is accidentally of the same order as A_LO and with opposite sign.
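The relation (4.10) follows algebraically from Eqs. (4.8) and (4.9). A quick numerical cross-check with illustrative amplitude values (chosen by us to mimic the accidental cancellation, not taken from the paper):

```python
# Check that Eq. (4.10) follows from Eqs. (4.8)-(4.9):
# with A_LO + X_W A_NLO = A_LO (1 - K), one has
# delta_tilde = |1 - K|^2 - 1 = Re[K](Re[K] - 2) + Im[K]^2.
a_lo = 0.3 - 0.05j          # illustrative LO amplitude (our choice)
xw_a_nlo = -0.27 + 0.02j    # NLO term accidentally of the same order, opposite sign

k = -xw_a_nlo / a_lo                                                # Eq. (4.9)
delta_from_def = abs(a_lo + xw_a_nlo) ** 2 / abs(a_lo) ** 2 - 1.0   # Eq. (4.8)
delta_from_k = k.real * (k.real - 2.0) + k.imag ** 2                # Eq. (4.10)
assert abs(delta_from_def - delta_from_k) < 1e-12
```

With these inputs δ̃^(4)_EW comes out close to −1, illustrating how a strongly cancelled LO amplitude drives the relative correction toward −100%.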
We are facing here the problem of dealing with accidentally small quantities, and it is hard to give expectations on the convergence of perturbation theory. In our opinion, for this process, the effect of including NLO EW corrections is thus better discussed in terms of shifted quantities:

  Ā_LO = A_LO + X_W A_NLO,   Ā_NLO = A_NNLO.    (4.11)
The idea is to use Ā_LO to define a two-loop corrected decay width

  Γ̄_LO = Γ_LO (1 + δ̃^(4)_EW) = Γ_LO |A_LO + X_W A_NLO|² / |A_LO|²,    (4.12)
which represents the best starting point of a perturbative expansion. In other words, the major part of the NLO corrections emerges from an effective Lagrangian in the heavy-particle limit; therefore we should consider them as corrections to the effective Feynman rules and thus to the amplitude.
To estimate the theoretical error from the missing higher-order corrections, we analyse the situation at NLO in more detail and try to guess the order of magnitude of Ā_NLO = A_NNLO. Assuming for simplicity m_b' = m_t' = m_Q and m_l' = m_ν_l' = m_L, the amplitude can be written as
  A = A_LO { 1 + X_W [ C_Q m_Q²/M_W² + C_L m_L²/M_W² + R ] + O(X_W²) },    (4.13)
where we have factorized out the leading behaviour in the heavy masses. The quantities C_Q,L and R depend on the masses, but approach constants for high fourth-generation masses. In the asymptotic region, M_H < 2M_W ≪ m_Q, m_L, we require R to be a constant and parametrize the C-functions as
  C_Q = −(192/5)(1 + c_Q τ),   C_L = −(32/3)(1 + c_L τ),    (4.14)
where c_Q,L are constants and τ = M_H²/(2M_W)². Note that for τ = R = 0 this is the leading two-loop behaviour predicted in Ref. [25] (see also Ref. [26] for the top-dependent contribution, which we hide here in R). By performing a fit to our exact result we obtain good agreement in the asymptotic region, showing that the additional corrections proportional to τ play a relevant role. For instance, with fermions of the 4th generation heavier than 300 GeV we find |fit/exact − 1| below 5% in the window M_H = 80−130 GeV.
Our educated guess for the error estimate is to use the absolute value of the NLO leading coefficient as the unknown coefficient in the NNLO one, assuming a leading behaviour of m_Q⁴, m_L⁴, i.e. no accidental cancellations:

  Ā_NLO = A_NNLO ∼ A_LO (C_Q + C_L) m_f'⁴ / M_W⁴,    (4.15)

where we put m_f' = max(m_t', m_b', m_ν_l', m_l') in the last term. In principle one should work at a fixed order in perturbation theory and estimate the corresponding theoretical uncertainty from the LO–NNLO interference (since |A_NLO|² is already part of |Ā_LO|²). However, the large cancellations in A_LO (less relevant in the conservative scenario) make this option unrealistic, and we prefer a more conservative estimate of the uncertainty, for which we take

  |A|² ∼ |Ā_LO|² ± 2 X_W² Re[Ā_NLO Ā†_LO] ∼ |Ā_LO|² ± 2 X_W² Re[A_LO Ā†_LO] |C_Q + C_L| m_f'⁴ / M_W⁴.    (4.16)

Given our setups, the difference between m_t', m_b', m_ν_l' and m_l' is irrelevant in estimating the uncertainty, which is now defined as

  Γ(H → γγ) = Γ̄_LO (1 ± δ_THU),   δ_THU = 2 X_W² Re[Ā†_LO A_LO] / |Ā_LO|² · |C_Q + C_L| m_f'⁴ / M_W⁴.    (4.17)
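To get a feeling for the size of δ_THU, Eq. (4.17) can be evaluated with the asymptotic coefficients of Eq. (4.14) at τ = 0 and, as a crude assumption of ours, the amplitude ratio Re[Ā†_LO A_LO]/|Ā_LO|² set to one; all inputs below are illustrative choices, not values from the paper:

```python
import math

G_MU, M_W = 1.16637e-5, 80.398  # Fermi constant [GeV^-2] and W mass [GeV] (assumed)

# X_W as defined in Eq. (4.5); asymptotic coefficients C_Q, C_L at tau = 0.
x_w = G_MU * M_W ** 2 / (8.0 * math.sqrt(2.0) * math.pi ** 2)
c_sum = abs(-192.0 / 5.0 - 32.0 / 3.0)

m_f = 500.0  # maximal 4th-generation mass in GeV (illustrative)
# Eq. (4.17) with the amplitude ratio replaced by 1 (crude assumption):
delta_thu = 2.0 * x_w ** 2 * c_sum * (m_f / M_W) ** 4
print(delta_thu)  # of order a few percent for m_f' = 500 GeV
```

The quartic mass dependence means the estimate grows by a factor of about two when m_f' is raised from 500 to 600 GeV.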
The results for the mass scenario (2.4) and for the setup of Eq. (2.5) are shown in Table 3 and in Table 4, respectively. In Table 5 we show the results at fixed M_H = 120 GeV for different masses in the fourth generation (columns: m_b' [GeV], m_ν_l' [GeV], Γ̄_LO [GeV], δ̃^(4)_EW). The insensitivity of the LO width Γ_LO with respect to the mass scale in the fourth generation reflects the screening property of the heavy-mass limit.

Table 5: NLO EW corrections to the H → γγ decay width according to Eq. (4.12) and estimate for the missing higher-order corrections (δ_THU) from Eq. (4.17). Here we have fixed m_t' = 500 GeV, m_l' = 450 GeV, and M_H = 120 GeV.

The values of δ̃^(4)_EW are given for completeness, but one should remember that the prediction is in terms of Γ̄_LO. In the mass scenario (2.4), the uncertainty from higher orders δ_THU is large for low values of M_H. In the extreme scenario of Eq. (2.5), above M_H = 145 GeV the credibility of our estimate for the effect of the NNLO corrections becomes more and more questionable, and the results, missing the complete NNLO term, cannot be trusted anymore. In any case perturbation theory becomes questionable for higher values of M_H.
It is worth noting that for H → VV (see Section 4.1) the situation is different. There is no accidentally small LO (at LO, SM3 = SM4 there), and the square of A_NLO is taken into account by the leading NNLO term taken from Ref. [25], which serves as our error estimate.
The decay mode H → γZ is treated at LO only, since the NLO QCD corrections within the SM3 are known to be small [39] and can thus safely be neglected. The EW corrections in SM3 as well as in SM4 are unknown. This implies a theoretical uncertainty of the order of 100% in the intermediate Higgs-boson mass range within SM4, since large cancellations between the W and fermion loops emerge at LO similar to the decay mode H → γγ.
Numerical results
The results for the Higgs-boson production cross section via gluon fusion have been obtained by including the NLO QCD corrections with full quark-mass dependence [23] and the NNLO QCD corrections in the limit of heavy quarks [8]. The full EW corrections [11] have been included in factorized form as discussed in Section 3. We use the MSTW2008NNLO parton density functions [40] with the strong coupling normalized to α s (M Z ) = 0.11707 at NNLO. The renormalization and factorization scales are chosen as µ R = µ F = M H /2.
In Table 6 we show results for the scenarios defined in (2.2)/(2.3) for the Higgs production cross section at √s = 8 TeV. For the specific scenario (2.4) we display the ratio between the SM4 and SM3 cross sections at 8 TeV in Fig. 2. The SM4 cross sections are enhanced by factors of 4−9 with respect to SM3. In the extreme scenario (2.5) we have studied the gluon-fusion cross section at √s = 7 TeV. Corresponding results are shown in Table 7, and the ratio to the SM cross section is plotted in Fig. 3. The enhancement is similar to that in the scenario shown in Fig. 2. For the gg-fusion cross section in SM4 the QCD uncertainties are about the same as in SM3.

The results for the Higgs branching fractions have been obtained in a similar way as those for the results in SM3 in Refs. [41,42]. While the partial widths for H → WW/ZZ have been computed with Prophecy4f, all other partial widths have been calculated with HDECAY. Then, the branching ratios and the total width have been calculated from these partial widths.
The results of the Higgs branching fractions for the scenarios defined in (2.2)/(2.3) are shown in Table 8 for the 2-fermion final states and in Table 9 for the 2-gauge-boson final states. In the latter table also the total Higgs width is given. Table 10 lists the branching fractions for the e + e − e + e − and e + e − µ + µ − final states as well as several combined channels. Apart from the sum of all 4-fermion final states (H → 4f ) the results for all-leptonic final states H → 4l with l = e, µ, τ, ν e , ν µ , ν τ , the results for all-hadronic final states H → 4q with q = u, d, c, s, b and the semi-leptonic final states H → 2l2q are shown. To compare with the pure SM3, Fig. 4 shows the ratios between the SM4 and SM3 branching fractions for the most important channels for the scenario (2.4). While the branching ratio into gluons is enhanced by a factor 5−15, BR(H → bb) is reduced for small M H but enhanced for M H > ∼ 150 GeV. The branching ratios into electroweak gauge-boson pairs are suppressed for small Higgs masses, and the one into photon pairs is reduced by 65 to 100% in the Higgs-mass range 100 GeV < M H < 150 GeV.
Results in the extreme scenario (2.5) for Higgs masses up to 1 TeV are shown in Table 11 for the 2-fermion final states, in Table 12 for the 2-gauge-boson final states and the total Higgs width, and in Table 13 for selected 4-fermion final states. The ratios between the SM4 and SM3 branching fractions for the most important channels are shown in Fig. 5. As compared to the scenario of Fig. 4, the enhancement and suppression effects are stronger (as they scale roughly with the square of the heavy fermion masses). While BR(H → γγ) is different in detail it is again suppressed by a factor 100.
The effect of the NLO EW corrections on the H → γγ decay width in the scenarios (2.2)/(2.3) is shown in Table 14. The branching ratio for H → γγ is strongly reduced in SM4 owing to cancellations between LO and NLO. In Table 15 we display the effect of the NLO EW corrections on the H → γγ decay width in the extreme scenario (2.5). While the branching ratio differs considerably from those in scenarios (2.2)/(2.3), a similarly strong reduction by a factor of 100 with respect to SM3 is observed. Thus, this branching ratio is completely irrelevant in SM4.

Table 8 data:
M_H/m_b'/m_ν_l' [GeV]   H → bb      H → τ+τ−    H → µ+µ−    H → ss      H → cc      H → tt
120/450/350   4.39·10^-1  4.77·10^-2  1.65·10^-4  1.87·10^-4  2.21·10^-2  0.00
120/450/375   4.39·10^-1  4.77·10^-2  1.66·10^-4  1.87·10^-4  2.22·10^-2  0.00
120/450/400   4.39·10^-1  4.77·10^-2  1.66·10^-4  1.87·10^-4  2.22·10^-2  0.00
120/500/350   4.45·10^-1  4.83·10^-2  1.68·10^-4  1.90·10^-4  2.24·10^-2  0.00
120/500/375   4.45·10^-1  4.84·10^-2  1.68·10^-4  1.90·10^-4  2.25·10^-2  0.00
120/500/400   4.45·10^-1  4.84·10^-2  1.68·10^-4  1.90·10^-4  2.25·10^-2  0.00
120/550/350   4.52·10^-1  4.91·10^-2  1.70·10^-4  1.93·10^-4  2.28·10^-2  0.00
120/550/375   4.52·10^-1  4.91·10^-2  1.70·10^-4  1.93·10^-4  2.28·10^-2  0.00
120/550/400   4.52·10^-1  4.92·10^-2  1.71·10^-4  1.93·10^-4  2.28·10^-2  0.00
350/450/350   7.25·10^-4  9.60·10^-5  3.33·10^-7  3.09·10^-7  3.64·10^-5  3.14·10^-2
350/450/375   7.32·10^-4  9.68·10^-5  3.36·10^-7  3.12·10^-7  3.68·10^-5  3.17·10^-2
350/450/400   7.39·10^-4  9.78·10^-5  3.39·10^-7  3.15·10^-7  3.71·10^-5  3.20·10^-2
350/500/350   7.72·10^-4  1.02·10^-4  3.54·10^-7  3.29·10^-7  3.88·10^-5  3.35·10^-2
350/500/375   7.79·10^-4  1.03·10^-4  3.57·10^-7  3.32·10^-7  3.92·10^-5  3.38·10^-2
350/500/400   7.87·10^-4  1.04·10^-4  3.61·10^-7  3.35·10^-7  3.95·10^-5  3.41·10^-2
350/550/350   8.36·10^-4  1.11·10^-4  3.83·10^-7  3.56·10^-7  4.20·10^-5  3.62·10^-2
350/550/375   8.44·10^-4  1.12·10^-4  3.87·10^-7  3.59·10^-7  4.24·10^-5  3.66·10^-2
350/550/400   8.53·10^-4  1.13·10^-4  3.91·10^-7  3.63·10^-7  4.28·10^-5  3.70·10^-2
600/450/300   1.24·10^-4  1.80·10^-5  6.25·10^-8  5.26·10^-8  6.20·10^-6  2.97·10^-1
600/450/350   1.22·10^-4  1.78·10^-5  6.17·10^-8  5.19·10^-8  6.12·10^-6  2.93·10^-1
600/450/400   1.23·10^-4  1.80·10^-5  6.24·10^-8  5.24·10^-8  6.18·10^-6  2.96·10^-1
600/500/300   1.29·10^-4  1.88·10^-5  6.51·10^-8  5.49·10^-8  6.47·10^-6  3.10·10^-1
600/500/350   1.27·10^-4  1.86·10^-5  6.43·10^-8  5.42·10^-8  6.39·10^-6  3.06·10^-1
600/500/400   1.29·10^-4  1.88·10^-5  6.50·10^-8  5.48·10^-8  6.46·10^-6  3.09·10^-1
600/550/300   1.36·10^-4  1.98·10^-5  6.87·10^-8  5.80·10^-8  6.84·10^-6  3.27·10^-1
600/550/350   1.35·10^-4  1.96·10^-5  6.78·10^-8  5.72·10^-8  6.75·10^-6  3.23·10^-1
600/550/400   1.36·10^-4  1.98·10^-5  6.85·10^-8  5.78·10^-8  6.82·10^-6  3.26·10^-1
Table 9 data:
M_H/m_b'/m_ν_l' [GeV]   H → gg      H → Zγ      H → WW      H → ZZ      Γ_H [GeV]
120/450/350   4.39·10^-1  4.54·10^-4  4.70·10^-2  5.14·10^-3  6.68·10^-3
120/450/375   4.39·10^-1  4.54·10^-4  4.66·10^-2  5.10·10^-3  6.69·10^-3
120/450/400   4.39·10^-1  4.53·10^-4  4.62·10^-2  5.05·10^-3  6.70·10^-3
120/500/350   4.34·10^-1  4.51·10^-4  4.47·10^-2  4.88·10^-3  6.74·10^-3
120/500/375   4.35·10^-1  4.50·10^-4  4.43·10^-2  4.84·10^-3  6.75·10^-3
120/500/400   4.35·10^-1  4.50·10^-4  4.38·10^-2  4.79·10^-3  6.76·10^-3
120/550/350   4.29·10^-1  4.45·10^-4  4.19·10^-2  4.55·10^-3  6.82·10^-3
120/550/375   4.29·10^-1  4.45·10^-4  4.15·10^-2  4.51·10^-3  6.83·10^-3
120/550/400   4.30·10^-1  4.44·10^-4  4.10·10^-2  4.46·10^-3  6.84·10^-3
350/450/350   6.91·10^-3  5.54·10^-5  6.62·10^-1  2.99·10^-1  9.72
350/450/375   6.97·10^-3  5.58·10^-5  6.62·10^-1  2.98·10^-1  9.65
350/450/400   7.04·10^-3  5.62·10^-5  6.61·10^-1  2.98·10^-1  9.58
350/500/350   7.17·10^-3  5.77·10^-5  6.60·10^-1  2.98·10^-1  9.33
350/500/375   7.24·10^-3  5.81·10^-5  6.60·10^-1  2.98·10^-1  9.26
350/500/400   7.31·10^-3  5.86·10^-5  6.59·10^-1  2.98·10^-1  9.19
350/550/350   7.54·10^-3  6.07·10^-5  6.58·10^-1  2.97·10^-1  8.87
350/550/375   7.62·10^-3  6.12·10^-5  6.58·10^-1  2.96·10^-1  8.80
350/550/400   7.70·10^-3  6.17·10^-5  6.58·10^-1  2.96·10^-1  8.73
600/450/300   2.27·10^-3  6.52·10^-6  4.71·10^-1  2.30·10^-1  8.96·10^1
600/450/350   2.32·10^-3  6.43·10^-6  4.73·10^-1  2.31·10^-1  9.09·10^1
600/450/400   2.36·10^-3  6.47·10^-6  4.71·10^-1  2.30·10^-1  9.03·10^1
600/500/300   2.27·10^-3  6.62·10^-6  4.62·10^-1  2.26·10^-1  8.78·10^1
600/500/350   2.31·10^-3  6.53·10^-6  4.65·10^-1  2.27·10^-1  8.92·10^1
600/500/400   2.35·10^-3  6.57·10^-6  4.63·10^-1  2.26·10^-1  8.86·10^1
600/550/300   2.29·10^-3  6.77·10^-6  4.51·10^-1  2.20·10^-1  8.57·10^1
600/550/350   2.34·10^-3  6.67·10^-6  4.54·10^-1  2.21·10^-1  8.70·10^1
600/550/400   2.38·10^-3  6.71·10^-6  4.51·10^-1  2.20·10^-1  8.64·10^1
Table 10 data:
M_H/m_b'/m_ν_l' [GeV]   H → 4e      H → 2e2µ    H → 4l      H → 4q      H → 2l2q    H → 4f
120/450/350   6.39·10^-6  1.14·10^-5  5.24·10^-3  2.38·10^-2  2.27·10^-2  5.17·10^-2
120/450/375   6.33·10^-6  1.13·10^-5  5.19·10^-3  2.36·10^-2  2.25·10^-2  5.12·10^-2
120/450/400   6.27·10^-6  1.12·10^-5  5.14·10^-3  2.33·10^-2  2.23·10^-2  5.08·10^-2
120/500/350   6.05·10^-6  1.08·10^-5  4.96·10^-3  2.26·10^-2  2.15·10^-2  4.91·10^-2
120/500/375   5.99·10^-6  1.07·10^-5  4.91·10^-3  2.24·10^-2  2.13·10^-2  4.87·10^-2
120/500/400   5.93·10^-6  1.06·10^-5  4.86·10^-3  2.22·10^-2  2.11·10^-2  4.82·10^-2
120/550/350   5.62·10^-6  1.00·10^-5  4.63·10^-3  2.12·10^-2  2.02·10^-2  4.60·10^-2
120/550/375   5.56·10^-6  9.93·10^-6  4.58·10^-3  2.10·10^-2  2.00·10^-2  4.56·10^-2
120/550/400   5.50·10^-6  9.82·10^-6  4.53·10^-3  2.08·10^-2  1.98·10^-2  4.51·10^-2
350/450/350   3.25·10^-4  6.52·10^-4  9.47·10^-2  4.52·10^-1  4.14·10^-1  9.61·10^-1
350/450/375   3.25·10^-4  6.52·10^-4  9.46·10^-2  4.52·10^-1  4.14·10^-1  9.60·10^-1
350/450/400   3.24·10^-4  6.51·10^-4  9.45·10^-2  4.52·10^-1  4.13·10^-1  9.60·10^-1
350/500/350   3.23·10^-4  6.48·10^-4  9.41·10^-2  4.52·10^-1  4.12·10^-1  9.58·10^-1
350/500/375   3.23·10^-4  6.48·10^-4  9.40·10^-2  4.52·10^-1  4.12·10^-1  9.58·10^-1
350/500/400   3.22·10^-4  6.47·10^-4  9.39·10^-2  4.52·10^-1  4.12·10^-1  9.58·10^-1
350/550/350   3.20·10^-4  6.43·10^-4  9.35·10^-2  4.51·10^-1  4.11·10^-1  9.55·10^-1
350/550/375   3.20·10^-4  6.42·10^-4  9.33·10^-2  4.51·10^-1  4.10·10^-1  9.55·10^-1
350/550/400   3.19·10^-4  6.41·10^-4  9.32·10^-2  4.51·10^-1  4.10·10^-1  9.54·10^-1
600/450/300   2.52·10^-4  5.05·10^-4  6.92·10^-2  3.29·10^-1  3.02·10^-1  7.00·10^-1
600/450/350   2.53·10^-4  5.07·10^-4  6.96·10^-2  3.31·10^-1  3.04·10^-1  7.04·10^-1
600/450/400   2.52·10^-4  5.05·10^-4  6.92·10^-2  3.30·10^-1  3.02·10^-1  7.01·10^-1
600/500/300   2.47·10^-4  4.95·10^-4  6.77·10^-2  3.24·10^-1  2.96·10^-1  6.88·10^-1
600/500/350   2.48·10^-4  4.97·10^-4  6.81·10^-2  3.26·10^-1  2.98·10^-1  6.92·10^-1
600/500/400   2.47·10^-4  4.94·10^-4  6.77·10^-2  3.24·10^-1  2.96·10^-1  6.88·10^-1
600/550/300   2.40·10^-4  4.81·10^-4  6.58·10^-2  3.16·10^-1  2.89·10^-1  6.71·10^-1
600/550/350   2.41·10^-4  4.83·10^-4  6.63·10^-2  3.18·10^-1  2.91·10^-1  6.75·10^-1
600/550/400   2.40·10^-4  4.80·10^-4  6.58·10^-2  3.17·10^-1  2.89·10^-1  6.71·10^-1

Table 11 data (extreme scenario; columns as in Table 8; rows labelled by M_H [GeV]):
…      2.02·10^-4  2.92·10^-5  1.01·10^-7  8.58·10^-8  1.01·10^-5  4.82·10^-1
700    1.52·10^-4  2.26·10^-5  7.82·10^-8  6.46·10^-8  7.61·10^-6  4.21·10^-1
800    1.18·10^-4  1.80·10^-5  6.24·10^-8  5.02·10^-8  5.91·10^-6  3.56·10^-1
1000   7.37·10^-5  1.17·10^-5  4.06·10^-8  3.…

Table 12 data (extreme scenario; columns as in Table 9; rows labelled by M_H [GeV]):
…      3.78·10^-1  1.35·10^-4  7.20·10^-3  5.83·10^-4  6.41·10^-3
120    4.10·10^-1  4.06·10^-4  2.27·10^-2  2.37·10^-3  7.49·10^-3
130    4.28·10^-1  8.51·10^-4  5.77·10^-2  7.12·10^-3  8.92·10^-3
140    4.20·10^-1  1.46·10^-3  1.29·10^-1  1.68·10^-2  1.11·10^-2
150    3.63·10^-1  2.13·10^-3  2.75·10^-1  3.09·10^-2  1.55·10^-2
160    1.62·10^-1  2.01·10^-3  6.80·10^-1  2.78·10^-2  4.10·10^-2
170    5.27·10^-2  8.96·10^-4  8.90·10^-1  2.02·10^-2  1.49·10^-1
180    3.85·10^-2  6.95·10^-4  8.82·10^-1  5.43·10^-2  2.37·10^-1
190    2.75·10^-2  5.07·10^-4  7.60·10^-1  1.97·10^-1  3.84·10^-1
200    2.33·10^-2  4.28·10^-4  7.18·10^-1  2.43·10^-1  5.21·10^-1
250    1.62·10^-2  2.48·10^-4  6.93·10^-1  2.86·10^-1  1.41
300    1.39·10^-2  1.58·10^-4  6.84·10^-1  3.00·10^-1  2.87
400    7.40·10^-3  3.63·10^-5  3.74·10^-1  1.68·10^-1  1.55·10^1
500    4.28·10^-3  1.41·10^-5  3.22·10^-1  1.51·10^-1  4.06·10^1
600    2.99·10^-3  8.21·10^-6  3.46·10^-1  1.69·10^-1  6.99·10^1
700    2.17·10^-3  5.46·10^-6  3.86·10^-1  1.91·10^-1  1.06·10^2
800    1.60·10^-3  3.88·10^-6  4.27·10^-1  2.15·10^-1  1.52·10^2
1000   8.08·10^-4  2.28·10^-6  5.02·10^-1  2.54·10^-1  2.88·10^2

Table 13 data (extreme scenario; columns as in Table 10; rows labelled by M_H [GeV]):
…      7.81·10^-7  1.27·10^-6  7.32·10^-4  3.53·10^-3  3.36·10^-3  7.62·10^-3
120    2.76·10^-6  4.99·10^-6  2.36·10^-3  1.17·10^-2  1.08·10^-2  2.48·10^-2
130    8.00·10^-6  1.48·10^-5  6.12·10^-3  3.05·10^-2  2.78·10^-2  6.44·10^-2
140    1.83·10^-5  3.51·10^-5  1.38·10^-2  6.88·10^-2  6.24·10^-2  1.45·10^-1
150    3.33·10^-5  6.42·10^-5  2.91·10^-2  1.45·10^-1  1.31·10^-1  3.05·10^-1
160    2.95·10^-5  5.75·10^-5  6.85·10^-2  3.31·10^-1  3.05·10^-1  7.04·10^-1
170    2.13·10^-5  4.17·10^-5  8.81·10^-2  4.29·10^-1  3.92·10^-1  9.08·10^-1
180    5.64·10^-5  1.11·10^-4  8.98·10^-2  4.44·10^-1  4.02·10^-1  9.36·10^-1
190    2.03·10^-4  4.04·10^-4  9.04·10^-2  4.57·10^-1  4.09·10^-1  9.56·10^-1
200    2.49·10^-4  4.99·10^-4  9.06·10^-2  4.63·10^-1  4.10·10^-1  9.64·10^-1
250    2.90·10^-4  5.86·10^-4  9.05·10^-2  4.71·10^-1  4.16·10^-1  9.78·10^-1
300    3.00·10^-4  6.10·10^-4  9.05·10^-2  4.74·10^-1  4.18·10^-1  9.82·10^-1
400    1.71·10^-4  3.42·10^-4  5.02·10^-2  2.61·10^-1  2.30·10^-1  5.41·10^-1
500    …
1.56 · 10 −4 3.13 · 10 −4 4.41 · 10 −2 2.27 · 10 −1 2.01 · 10 −1 4.73 · 10 −1 600
1.74 · 10 −4 3.51 · 10 −4 4.81 · 10 −2 2.48 · 10 −1 2.19 · 10 −1 5.15 · 10 −1 700
1.99 · 10 −4 4.00 · 10 −4 5.41 · 10 −2 2.77 · 10 −1 2.46 · 10 −1 5.77 · 10 −1 800 2.24 · 10 −4 4.52 · 10 −4 6.05 · 10 −2 3.08 · 10 −1 2.74 · 10 −1 6.42 · 10 −1 1000 2.70 · 10 −4 5.42 · 10 −4 7.21 · 10 −2 3.60 · 10 −1 3.23 · 10 −1 7.56 · 10 −1
Conclusions
Additional hypothetical heavy-fermion generations embedded in the Standard Model strongly affect the predictions for the production and decay of a Higgs boson. The Yukawa couplings of the heavy fermions grow very large, eventually jeopardizing the use of perturbation theory.
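As a rough numerical illustration of this growth, the standard tree-level relation y_f = √2 m_f / v, with vacuum expectation value v ≈ 246 GeV, can be evaluated at the mass scales considered in this work; the snippet below is an illustrative sketch, not part of the original analysis.

```python
import math

def yukawa(m_f, v=246.0):
    """Tree-level Yukawa coupling y_f = sqrt(2) * m_f / v (masses in GeV)."""
    return math.sqrt(2.0) * m_f / v

# Evaluate at the heavy-fermion mass scales discussed in the text.
for m in (450.0, 600.0):
    y = yukawa(m)
    # y**2 / (4*pi) approaching unity signals the breakdown of perturbation theory.
    print(f"m_f = {m:.0f} GeV: y_f = {y:.2f}, y_f^2/(4*pi) = {y * y / (4.0 * math.pi):.2f}")
```

For m_f = 600 GeV this gives y_f ≈ 3.45 and y_f²/(4π) ≈ 0.95, consistent with the statement that this mass scale lies at the border of the perturbative regime.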
In this article we have presented state-of-the-art predictions for the Higgs-boson production cross section via gluon fusion and for all relevant Higgs-boson decay channels including one additional heavy-fermion generation, in a variety of scenarios with a generic mass scale of 450 GeV as well as for an extreme scenario with a mass scale of 600 GeV, which lies at the border between perturbativity and non-perturbativity in the 4th-generation sector. The loop-induced transitions gg → H, H → gg, and H → γγ receive large lowest-order contributions, as has frequently been pointed out in the literature. Here we emphasize that, on top of this, the electroweak radiative corrections grow very large. They typically grow with powers of the heavy-fermion masses, eventually leading to a breakdown of perturbation theory. For Higgs production via gluon fusion and the Higgs decay into gluon pairs they are at the level of 10% for M_H < 600 GeV. For the important Higgs decays into WW or ZZ pairs we find corrections of the order of −40% and −60% or more for the adopted heavy-fermion mass scales of 450 GeV and 600 GeV, respectively; the onset of the non-perturbative regime is clearly visible in the latter case, where the electroweak one-loop corrections reach about −85%. The branching ratios into fermion pairs are enhanced by 30% and 60% for 4th-generation fermion masses of about 450 GeV and 600 GeV, respectively. The branching ratio for the decay into photon pairs is reduced by 65 to 100% in the Higgs-mass range 100 GeV < M_H < 150 GeV in all considered scenarios with a heavy 4th fermion generation, where the reduction factor, however, depends strongly on the Higgs and heavy-fermion masses. We also present estimates for the respective theoretical uncertainties, which are quite large (several tens of percent). Since the NLO EW corrections are enhanced by powers of the heavy-fermion masses, they depend strongly on the actual values of these masses.
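The interplay between enhanced fermionic widths, suppressed bosonic widths, and the common total width can be made explicit with a small sketch; the widths below are hypothetical order-of-magnitude placeholders, not the computed SM4 values.

```python
def branching_ratios(widths):
    """Branching ratios BR_i = Gamma_i / Gamma_total from partial widths."""
    total = sum(widths.values())
    return {channel: gamma / total for channel, gamma in widths.items()}

# Hypothetical partial widths in GeV for a light Higgs (placeholders only).
lo = {"bb": 2.3e-3, "WW": 7.0e-4, "gg": 3.0e-4, "gamgam": 9.0e-6}

# Schematic SM4-style shifts: fermionic and gluonic channels enhanced,
# WW suppressed, gamma-gamma strongly suppressed.
nlo = {"bb": lo["bb"] * 1.3, "WW": lo["WW"] * 0.6,
       "gg": lo["gg"] * 1.1, "gamgam": lo["gamgam"] * 0.2}

br_lo, br_nlo = branching_ratios(lo), branching_ratios(nlo)
for channel in lo:
    print(f"{channel}: BR {br_lo[channel]:.3e} -> {br_nlo[channel]:.3e}")
```

Because every BR_i shares the common total width, suppressing one channel automatically redistributes probability to the others, which is why branching ratios can shift substantially even for channels whose partial widths change only moderately.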
The presented results and error estimates, the qualitative description of the most important impact of heavy fermions, and the description of the available tools and calculations will certainly prove useful in upcoming refined analyses of LHC data on Higgs searches.
The relative two-loop EW corrections δ^(4)_EW with respect to the leading-order cross section σ^LO_SM4(gg → H) in SM4 are defined via the corrected cross section by

σ_SM4(gg → H) = σ^LO_SM4(gg → H) [1 + δ^(4)_EW].

Figure 1: Relative corrections in SM4 due to two-loop EW corrections to gg → H. The solid, red curve corresponds to the mass scenario m_t′ = 500 GeV, m_b′ = 450 GeV, m_ν_l′ = 375 GeV, m_l′ = 450 GeV, while the dashed, blue curve corresponds to the extreme scenario of Eq. (2.5).
The correction δ^(4)_EW in the extreme scenario of Eq. (2.5) (dashed, blue curve) can be considered as the upper bound of EW corrections in the perturbative regime. Some values of the solid, red curve of Fig. 1 are also listed in
Figure 2: Ratio of Higgs-boson production cross sections via gluon fusion in SM4 with respect to SM3, including NNLO QCD and NLO EW corrections, for m_t′ = 500 GeV, m_b′ = 450 GeV, m_l′ = 450 GeV, and m_ν_l′ = 375 GeV and √s = 8 TeV.
Figure 3: Ratio of Higgs-boson production cross sections via gluon fusion in SM4 with respect to SM3, including NNLO QCD and NLO EW corrections, for √s = 7 TeV in the extreme scenario.

Figure 4: Ratio of branching fractions in SM4 with respect to SM3 for the WW, ZZ, gg, bb, and γγ decay channels (γγ ratio multiplied by 100) as a function of M_H for scenario (2.4).
Figure 5: Ratio of branching fractions in SM4 with respect to SM3 for the WW, ZZ, gg, bb, and γγ decay channels (γγ ratio multiplied by 100) as a function of M_H in the extreme scenario (2.5).
Table 3: NLO EW corrections to the H → γγ decay width (mass scenario of Eq. (2.4)).
Table 4: NLO EW corrections to the H → γγ decay width (mass scenario of Eq. (2.5)).
Table 6: SM4 Higgs-boson production cross section via gluon fusion including NNLO QCD and NLO EW corrections, using MSTW2008NNLO PDFs, for √s = 8 TeV in the scenarios (2.2)/(2.3).
Table 7: SM4 Higgs-boson production cross section via gluon fusion including NNLO QCD and NLO EW corrections, using MSTW2008NNLO PDFs, for √s = 7 TeV in the extreme scenario.
Table 8: SM4 Higgs branching fractions for 2-fermion decay channels for the scenarios defined in (2.2)/(2.3).
Table 9: SM4 Higgs branching fractions for 2-gauge-boson decay channels and total Higgs width for the scenarios defined in (2.2)/(2.3).
Table 10: SM4 Higgs branching fractions for 4-fermion final states with l = e, µ, τ, ν_e, ν_µ, ν_τ and q = u, d, c, s, b for the scenarios defined in (2.2)/(2.3).
Table 11: SM4 Higgs branching fractions for 2-fermion decay channels in the extreme scenario (2.5). Columns: M_H [GeV], H → bb, H → τ+τ−, H → µ+µ−, H → ss, H → cc, H → tt. [The numerical entries of this table were garbled in extraction.]
Table 12: SM4 Higgs branching fractions for 2-gauge-boson decay channels and total Higgs width in the extreme scenario (2.5). Columns: M_H [GeV], H → gg, H → Zγ, H → WW, H → ZZ, Γ_H [GeV]. [The numerical entries of this table were garbled in extraction.]
Table 13: SM4 Higgs branching fractions for 4-fermion final states with l = e, µ, τ, ν_e, ν_µ, ν_τ and q = u, d, c, s, b in the extreme scenario (2.5). Columns: M_H [GeV], H → 4e, H → 2e2µ, H → 4l, H → 4q, H → 2l2q, H → 4f. [The numerical entries of this table were garbled in extraction.]
Table 14: Higgs branching fractions for the γγ decay channel without and with NLO EW corrections in the scenarios (2.2)/(2.3) (QCD corrections are always included).
Table 15: Higgs branching fractions for the γγ decay channel without and with NLO EW corrections in the extreme scenario (2.5) (QCD corrections are always included).

M_H [GeV]   w/o NLO EW     w/ NLO EW
100         1.31 · 10^−4   4.65 · 10^−5
110         1.72 · 10^−4   4.40 · 10^−5
120         2.26 · 10^−4   3.77 · 10^−5
130         2.95 · 10^−4   2.71 · 10^−5
140         3.81 · 10^−4   1.30 · 10^−5
150         4.74 · 10^−4   1.42 · 10^−6
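The reduction quoted in the conclusions can be reproduced directly from the Table 15 entries above; this sketch just evaluates 1 − BR_with/BR_without at each tabulated Higgs mass.

```python
# Table 15 values: M_H [GeV] -> (BR w/o NLO EW, BR w/ NLO EW).
table15 = {
    100: (1.31e-4, 4.65e-5),
    110: (1.72e-4, 4.40e-5),
    120: (2.26e-4, 3.77e-5),
    130: (2.95e-4, 2.71e-5),
    140: (3.81e-4, 1.30e-5),
    150: (4.74e-4, 1.42e-6),
}

for mh, (without_ew, with_ew) in table15.items():
    reduction = 100.0 * (1.0 - with_ew / without_ew)
    print(f"M_H = {mh} GeV: BR(H -> gamma gamma) reduced by {reduction:.0f}%")
```

The reduction runs from about 65% at M_H = 100 GeV to essentially 100% at M_H = 150 GeV, matching the range quoted in the conclusions.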
Results of similar searches at the Tevatron can be found in Refs. [5,6] and Ref. [7], respectively.
The correction δ^(4)_EW in this scenario is shown in Fig. 1 (solid, red curve). The vertical lines in the figure denote the location of the WW, ZZ, and tt thresholds. The NLO EW corrections due
Acknowledgements

This work is supported in part by the Gottfried Wilhelm Leibniz programme of the Deutsche Forschungsgemeinschaft (DFG) and by Ministero dell'Istruzione, dell'Università e della Ricerca (MIUR) under contract 2008H8F9RA 002. We gratefully acknowledge several discussions with P. Gambino, C. Mariotti, and R. Tanaka.
S. Chatrchyan et al. [CMS Collaboration], Phys. Lett. B701 (2011) 204-223 [arXiv:1102.4746 [hep-ex]].
M. M. H. Luk, arXiv:1110.3246 [hep-ex].
G. Aad et al. [ATLAS Collaboration], arXiv:1106.2748 [hep-ex].
T. Aaltonen et al. [CDF Collaboration], Phys. Rev. Lett. 107 (2011) 261801 [arXiv:1107.3875 [hep-ex]].
T. Aaltonen et al. [CDF Collaboration], Phys. Rev. Lett. 106 (2011) 141803 [arXiv:1101.5728 [hep-ex]].
T. Aaltonen et al. [CDF and D0 Collaborations], Phys. Rev. D82 (2010) 011102 [arXiv:1005.3216 [hep-ex]].
C. Anastasiou, S. Buehler, E. Furlan, F. Herzog and A. Lazopoulos, Phys. Lett. B702 (2011) 224-227 [arXiv:1103.3645 [hep-ph]];
C. Anastasiou, R. Boughezal and E. Furlan, JHEP 1006 (2010) 101 [arXiv:1003.4677 [hep-ph]].
M. Spira, arXiv:hep-ph/9510347 and Nucl. Instrum. Meth. A389 (1997) 357-360 [hep-ph/9610350].
Talk by A. David at "Implications of LHC results for TeV-scale physics", CERN, August 29, 2011, http://indico.cern.ch/conferenceOtherViews.py?view=standard&confId=141983.
G. Passarino, C. Sturm and S. Uccirati, Phys. Lett. B706 (2011) 195 [arXiv:1108.2025 [hep-ph]].
M. S. Chanowitz, M. A. Furman and I. Hinchliffe, Phys. Lett. B78 (1978) 285;
Nucl. Phys. B153 (1979) 402.
G. D. Kribs, T. Plehn, M. Spannowsky and T. M. P. Tait, Phys. Rev. D76 (2007) 075016 [arXiv:0706.3718 [hep-ph]];
M. Hashimoto, Phys. Rev. D81 (2010) 075023 [arXiv:1001.4335 [hep-ph]].
M. E. Peskin and T. Takeuchi, Phys. Rev. D46 (1992) 381.
O. Eberhardt, A. Lenz and J. Rohrwild, Phys. Rev. D82 (2010) 095006 [arXiv:1005.3505 [hep-ph]].
J. Erler and P. Langacker, Phys. Rev. Lett. 105 (2010) 031801 [arXiv:1003.3211 [hep-ph]].
P. Gerhold, K. Jansen and J. Kallarackal, JHEP 1101 (2011) 143 [arXiv:1011.1648 [hep-lat]].
M. Baak, M. Goebel, J. Haller, A. Hoecker, D. Ludwig, K. Moenig, M. Schott and J. Stelzer, arXiv:1107.0975 [hep-ph].
C. Anastasiou, S. Buehler, F. Herzog and A. Lazopoulos, JHEP 1112 (2011) 058 [arXiv:1107.0683 [hep-ph]].
S. Goria, G. Passarino and D. Rosco, arXiv:1112.5517 [hep-ph].
G. Passarino, http://personalpages.to.infn.it/∼giampier/CPHTO.html.
H. M. Georgi, S. L. Glashow, M. E. Machacek and D. V. Nanopoulos, Phys. Rev. Lett. 40 (1978) 692.
M. Spira, A. Djouadi, D. Graudenz and P. M. Zerwas, Nucl. Phys. B453 (1995) 17-82 [hep-ph/9504378].
A. Djouadi and P. Gambino, Phys. Rev. Lett. 73 (1994) 2528-2531 [hep-ph/9406432].
A. Djouadi, P. Gambino and B. A. Kniehl, Nucl. Phys. B523 (1998) 17-39 [hep-ph/9712330].
F. Fugel, B. A. Kniehl and M. Steinhauser, Nucl. Phys. B702 (2004) 333-345 [hep-ph/0405232].
S. Actis, G. Passarino, C. Sturm and S. Uccirati, Nucl. Phys. B811 (2009) 182-273 [arXiv:0809.3667 [hep-ph]].
S. Actis, G. Passarino, C. Sturm and S. Uccirati, Phys. Lett. B670 (2008) 12-17 [arXiv:0809.1301 [hep-ph]].
C. Anastasiou, R. Boughezal and F. Petriello, JHEP 0904 (2009) 003 [arXiv:0811.3458 [hep-ph]].
A. Bredenstein, A. Denner, S. Dittmaier, A. Mück and M. M. Weber, "Prophecy4f: A Monte Carlo generator for a proper description of the Higgs decay into 4 fermions", http://omnibus.uni-freiburg.de/∼sd565/programs/prophecy4f/prophecy4f.html, 2010.
A. Bredenstein, A. Denner, S. Dittmaier and M. M. Weber, Phys. Rev. D74 (2006) 013004 [hep-ph/0604011] and JHEP 0702 (2007) 080 [hep-ph/0611234].
A. Bredenstein, A. Denner, S. Dittmaier and M. M. Weber, arXiv:0708.4123 [hep-ph].
B. A. Kniehl, Phys. Rev. D53 (1996) 6477-6485 [hep-ph/9602304].
A. Djouadi, J. Kalinowski and M. Spira, Comput. Phys. Commun. 108 (1998) 56-74 [hep-ph/9704448];
M. Spira, Fortsch. Phys. 46 (1998) 203-284 [hep-ph/9705337];
A. Djouadi, J. Kalinowski, M. Mühlleitner and M. Spira, "An update of the program HDECAY", in J. M. Butterworth et al., arXiv:1003.1643 [hep-ph].
T. Inami, T. Kubota and Y. Okada, Z. Phys. C18 (1983) 69.
A. Djouadi, M. Spira and P. M. Zerwas, Phys. Lett. B264 (1991) 440-446.
K. G. Chetyrkin, B. A. Kniehl and M. Steinhauser, Phys. Rev. Lett. 79 (1997) 353-356 [hep-ph/9705240];
P. A. Baikov and K. G. Chetyrkin, Phys. Rev. Lett. 97 (2006) 061803 [hep-ph/0604194].
A. Djouadi, M. Spira, J. J. van der Bij and P. M. Zerwas, Phys. Lett. B257 (1991) 187-190;
A. Djouadi, M. Spira and P. M. Zerwas, Phys. Lett. B311 (1993) 255-260 [hep-ph/9305335];
K. Melnikov and O. I. Yakovlev, Phys. Lett. B312 (1993) 179 [hep-ph/9302281];
M. Inoue, R. Najima, T. Oka and J. Saito, Mod. Phys. Lett. A9 (1994) 1189.
M. Spira, A. Djouadi and P. M. Zerwas, Phys. Lett. B276 (1992) 350-353.
A. D. Martin, W. J. Stirling, R. S. Thorne and G. Watt, Eur. Phys. J. C63 (2009) 189-285 [arXiv:0901.0002 [hep-ph]].
S. Dittmaier et al. [LHC Higgs Cross Section Working Group], arXiv:1101.0593 [hep-ph].
A. Denner, S. Heinemeyer, I. Puljak, D. Rebuzzi and M. Spira, Eur. Phys. J. C71 (2011) 1753 [arXiv:1107.5909 [hep-ph]].
The "FIP Effect" and the Origins of Solar Energetic Particles and of the Solar Wind

Donald V. Reames ([email protected])
Institute for Physical Science and Technology, University of Maryland, College Park, MD 20742-2431, USA

Keywords: Solar energetic particles · Solar wind · Coronal mass ejections · Solar system abundances · Solar flares
We find that the element abundances in solar energetic particles (SEPs) and in the slow solar wind (SSW), relative to those in the photosphere, show different patterns as a function of the first ionization potential (FIP) of the elements. Generally, the SEP and SSW abundances reflect abundance samples of the solar corona, where low-FIP elements, ionized in the chromosphere, are more efficiently conveyed upward to the corona than high-FIP elements that are initially neutral atoms. Abundances of the elements, especially C, P, and S show a crossover from low to high FIP at ≈10 eV in the SEPs but ≈14 eV for the solar wind. Naively this seems to suggest cooler plasma from sunspots beneath active regions. More likely, if the ponderomotive force of Alfvén waves preferentially conveys low-FIP ions into the corona, the source plasma that eventually will be shock-accelerated as SEPs originates in magnetic structures where Alfvén waves resonate with the loop length on closed magnetic field lines. This concentrates FIP fractionation near the top of the chromosphere. Meanwhile, the source of the SSW may lie near the base of diverging open-field lines surrounding, but outside of, active regions, where such resonance does not exist, allowing fractionation throughout the chromosphere. We also find that energetic particles accelerated from the solar wind itself by shock waves at corotating interaction regions (CIRs), generally beyond 1 AU, confirm the FIP pattern of the solar wind.
Introduction
For many years it has been recognized that the average abundances of the elements in solar energetic particles (SEPs), relative to the corresponding abundances in the solar photosphere, have a characteristic dependence on the first ionization potential (FIP) of the elements (e.g. Webber 1977; Meyer 1985). The relative abundances of the elements with FIP < 10 eV (e.g. Mg, Si, Fe) are enhanced by a factor of about 4 relative to those with FIP > 10 eV (e.g. He, C, O, Ne). This "FIP effect" is understood as an ion-neutral fractionation that occurs as particles expand from the chromosphere up into the corona. The low-FIP elements are easily ionized at photospheric and chromospheric temperatures, but those with high FIP are often neutral atoms; the ions are convected upward by the action of Alfvén waves, for example (Laming 2009, 2015), but the neutral atoms are not. All elements become highly ionized on reaching the ≈1 MK corona, but the ionization time for He, with the highest FIP of 24 eV, is the longest.
Meyer recognized that the observed SEP abundances were influenced by two factors. The first was the FIP effect, which characterizes the abundances of the corona before acceleration; the second was a dependence on the mass-to-charge ratio A/Q of each ion during transport, after acceleration, which varied with time and from event to event, as was also clearly shown by Breneman and Stone (1985). The ions in large "gradual" SEP events are accelerated at shock waves driven out from the Sun by coronal mass ejections (CMEs; Kahler et al. 1984; Gosling 1993; Cliver, Kahler, and Reames 2004; Lee 2005; Zank, Li, and Verkhoglyadova 2007; Lee, Mewaldt, and Giacalone 2012; Rouillard et al. 2011, 2012; Desai and Giacalone 2016; Reames 2017a). The dependence on A/Q may result from rigidity-dependent scattering as the ions spread from the shock (e.g. Ng, Reames, and Tylka 2003; Reames 2016a). For example, Fe, with higher A/Q, scatters less than O, so Fe/O at constant velocity will be enhanced early in events and depleted later. Solar rotation can also turn this behavior into a dependence on solar longitude (e.g. Reames 2015). Spatial averaging should recover source abundances.
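The A/Q-dependent transport effect can be illustrated with a minimal one-dimensional diffusion sketch; the diffusion coefficients and distance below are hypothetical, chosen only to show that a species that scatters less (higher A/Q, e.g. Fe) is enhanced early and depleted late relative to one that scatters more (e.g. O).

```python
import math

def arrival_profile(t, kappa, distance=1.0):
    """Fluence-normalized arrival-time profile for 1D diffusion over a fixed distance."""
    a = distance * distance / (4.0 * kappa)
    return math.sqrt(a / math.pi) * t**-1.5 * math.exp(-a / t)

# Hypothetical diffusion coefficients: Fe (higher A/Q) scatters less than O.
KAPPA_O, KAPPA_FE = 1.0, 2.0

def fe_o_ratio(t):
    """Fe/O at time t relative to the common source abundance."""
    return arrival_profile(t, KAPPA_FE) / arrival_profile(t, KAPPA_O)

for t in (0.05, 0.1, 0.5, 2.0):
    print(f"t = {t:4.2f}: Fe/O relative to source = {fe_o_ratio(t):.2f}")
```

Because each profile is normalized to the same total fluence, the early enhancement is exactly compensated by a late depletion, which is why averaging over the whole event should recover the source abundances, as stated above.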
Over the years, our measurement statistics and the sample of SEP events have increased (e.g. Reames 1995, 2014) and measurements of the FIP effect in the solar wind have also improved (Bochsler 2009), where a weaker FIP effect is seen in the fast solar wind (FSW), which originates primarily in coronal holes, than in the slow solar wind (SSW), which is often associated with solar active regions. In addition, a property of SEP events, the under-abundance of the element He, has recently become better understood (Reames 2017b) as probable spatial variations in the source plasma. A correction of the He abundance brings the FIP effects of SEPs and the SSW into better agreement. SEP events with He/O ≈ 90 often come from shock acceleration of source plasma with a temperature T ≈ 3 MK. The seed population for these events is laced with residual 3He-rich, Fe-rich suprathermal ions from previous "impulsive" SEP events in solar active regions (Desai et al. 2003; Tylka et al. 2005; Reames, Cliver, and Kahler 2014; Reames 2013, 2016a, 2016b, 2017a, 2017b, 2018). The temperature of 3 MK is actually a property of the residual impulsive suprathermal ions. Events with suppressed He/O involve acceleration of ambient coronal source plasma of < 2 MK (Reames 2017b, 2018).
Do the SEPs and the SSW sample similar regions of the solar corona? How and why do their FIP patterns differ? Are spectroscopic measurements (e.g. Schmelz et al. 2012; Fludra and Schmelz 1999; Feldman and Widing 2007) of extreme ultraviolet (EUV) and X-ray spectral lines in flares helpful? There are also corotating interaction regions (CIRs) that are formed when FSW streams overtake slow wind, spawning shock acceleration primarily of FSW ions, generally out beyond 1 AU (e.g. Richardson 2004).
Do abundances of energetic ions from CIRs look like solar wind or like SEPs?
More specifically, one abundance difference that has prevented reconciling SEP and SSW FIP patterns for many years is that of well-measured C/O. Recent abundances are C/O = 0.68 ± 0.07 in both SSW and FSW (Bochsler 2009) and C/O = 0.420 ± 0.010 in SEPs (Reames 2014). None of the 70 individual SEP events studied have C/O > 0.5.
Earlier measurements of SEPs and of the solar wind show similar differences, as does the FSW (e.g. Gloeckler and Geiss 2007). If C and O are both high-FIP ions, why should their ratio be different from that in the photosphere or from each other? Lack of a convincing answer to this question has stalled our understanding for many years.
The FIP-Dependence of SEP and SSW Abundances
In Figure 1 we compare the FIP dependence of the ratio of SEP/SSW abundances in the lower panel, while in the upper panel we overlay the usual FIP patterns of the SEP and SSW abundances relative to the photospheric abundances of Caffau et al. (2011) and Lodders, Palme, and Gail (2009). The alternative photospheric abundances of Asplund et al. (2009) are compared for SEPs by Reames (2015), a choice that has no bearing on our results. In the upper panel we have chosen to multiply the SSW abundances by a factor of 1.2 to improve the relative abundances at high and low FIP, as seen also by the dashed line in the lower panel. These abundances and others used herein are shown in Table 1 (SEP abundances from Reames 1995, 2014, 2017a; CIR abundances from Reames, Richardson, and Barbier 1991 and Reames 1995; solar-wind abundances from Bochsler 2009). The remarkable feature of Figure 1 is that the discrepancy between the SEP and SSW abundances seems largely confined to the elements C, P, and S at intermediate values of FIP. It seems that the crossover from low to high FIP occurs at about 14 eV for the SSW but ≈10 eV for the SEPs. Thus the elements C, P, S, and even O (hence the factor of 1.2) fall between the two crossover energies.
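The two crossover energies can be made concrete with the measured first ionization potentials (standard tabulated values, in eV); the classification function below is only an illustrative sketch of which side each element falls on, not the fractionation model itself.

```python
# First ionization potentials in eV (standard tabulated values).
FIP = {"Mg": 7.65, "Fe": 7.87, "Si": 8.15, "S": 10.36, "P": 10.49,
       "C": 11.26, "O": 13.62, "N": 14.53, "Ne": 21.56, "He": 24.59}

def fip_side(element, crossover):
    """Classify an element as low-FIP (enhanced) or high-FIP for a given crossover energy."""
    return "low-FIP" if FIP[element] < crossover else "high-FIP"

# SEPs: crossover near 10 eV; SSW: crossover near 14 eV.
for el in ("C", "P", "S", "O"):
    print(f"{el}: SEP side = {fip_side(el, 10.0)}, SSW side = {fip_side(el, 14.0)}")
```

With the SEP crossover the intermediate elements C, P, and S sit on the unenhanced high-FIP side, while with the SSW crossover they (and O) sit on the enhanced low-FIP side, which is exactly the discrepancy visible in Figure 1.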
The FIP-Dependence of CIR and FSW Abundances
Corotating interaction regions (CIRs) are formed when FSW streams overtake and collide with SSW emitted in that direction earlier in the solar rotation. Two shock waves may be formed, generally beyond 1 AU; the forward shock propagates outward into the SSW and the reverse shock propagates sunward into the FSW. Ions are accelerated mainly from the FSW at the reverse shock, which is usually stronger (e.g. Richardson et al. 1993; Mason and Sanderson 1999; Richardson 2004). Using the data in Table 1, we compare the FIP effect of the CIR and FSW populations in Figure 2. Variations of He/O are known in the solar wind as functions of time and of solar-wind speed (Collier et al. 1996; Bochsler 2007; Rakowski and Laming 2012).
Nevertheless, C and S clearly behave like the low-FIP ions Mg, Si, and Fe, especially in the CIR population, unlike the behavior of the SEPs, where C and S behave like N, O, and Ne. There is some evidence that C/O increases with the speed of the high-speed stream (Richardson et al. 1993; Mason et al. 1997), but this could reflect changes in the seed population that may prefer residual suprathermal SEP ions at the weaker shocks at lower stream speeds. The SEP FIP pattern suggests a source that is cooler, in some sense, than that of the SSW, so that C, S, and P are less likely to be ionized and behave more like neutral atoms. In the model of Laming (2015, 2017), open field lines are out of resonance and produce the ponderomotive force farther down, where H is neutral, fractionation is easier, and neutral back-diffusion is less important. Here C, P, and S can be fractionated (see Table 4 of Laming 2015). For the SSW, the amplitude of the FIP bias depends upon the amplitude of slow-mode acoustic waves, as shown in Table 4 of Laming (2015).
Discussion
In Figure 3, the lower panel compares the FIP pattern of the SEPs with the closed-field Alfvén-wave model (from Table 3 of Laming 2015), while the upper panel compares the FIP patterns of both the SSW and the CIRs with the open-field model (from Table 4 of Laming 2015). The agreement in Figure 3 is generally good, but the theory seems a bit above the SEPs for C and S and below for Si. The SEP abundances, especially C/O, are very well established. Errors for the SSW are larger, but C and the high-FIP elements N, Ar, Ne, and He are consistently below the theory. While the CIR abundances are expected to be more like the FSW, rather than the SSW, most of the elements fit well, especially the transition elements C and P; Si and Fe fall a bit below theory, but Mg agrees well.
X-ray measurements of S, Ca, and Fe in flares seem to show a suppression of S relative to Ca and Fe (Schmelz et al. 2012; Fludra and Schmelz 1999). We should not be surprised that measurements in flares show the same FIP pattern as SEPs. It is most likely that the suppression of S is also related to measurements on closed magnetic loops (Laming 2015), but the measurements are surely related to flares and active regions. At a shock wave, ions accelerated from 30 keV amu⁻¹ to 3 MeV amu⁻¹, for example, have increased their magnetic rigidity and gyroradii by an order of magnitude, so that newly accelerated SEP ions may be able to escape weak trapping on high coronal loops. In addition, the "seed population" for shock acceleration of SEPs is important (see Desai et al. 2003; Tylka et al. 2005; Laming et al. 2013; Reames 2017a). Those SEP events that show higher He/O ratios and 3 MK source plasma temperatures (Reames 2017b) are associated with solar jets from active regions. Other gradual SEP events with source plasma temperatures of 1 – 2 MK (Reames 2016a, 2017a) may involve seed particles from ambient coronal material that was weakly bound on high coronal loops at 2 – 3 solar radii, where these SEPs are initially sampled (Reames 2009a, 2009b, 2017a). The shock waves begin to sample the corona there (Reames 2009a, 2009b; Cliver, Kahler, and Reames 2004) and mostly reaccelerate the
However, the present article is the first to characterize the FIP patterns as differences in the location of the crossover between high and low FIP, to discuss the relationship with CIR abundances, and to consider the theoretical connection to open and closed magnetic loops.
Why does C/O differ in SEPs and the SSW? C behaves as a high-FIP neutral atom in the closed loops in active regions that supply seed particles for SEPs. However, C is a transition element, partially ionized and partially enhanced during transit to the corona that contributes to the SSW.
SEPs are one of the most complete samples of coronal abundances that we have.
Studying them may provide insight on the origin of these and of other coronal samples as well.
Figure 1: The lower panel shows the direct ratio of the element abundances from SEPs to those of the SSW (Bochsler 2009), both normalized at O, as a function of FIP. The dashed line suggests an alternate normalization factor of 1.2. The upper panel shows the SEP/photospheric and 1.2 times the SSW/photospheric abundance ratios as a function of FIP. The curves are empirical and are only used to show the trend of the data.
Figure 2: The CIR/photospheric and FSW/photospheric abundances, normalized at O, are shown as a function of FIP. C and S behave like the low-FIP ions Mg, Si, and Fe, especially in the CIR population. With the exception of He, the FIP patterns in Figure 2 are in reasonable agreement.
Brooks, Ugarte, and Warren (2016; Abbo et al. 2016) have made a full-sun map of FIP bias based upon the S/Si abundance ratio. This map shows a large FIP bias in active regions. Based upon our foregoing analysis, S/Si should certainly show regions of appropriate FIP bias for SEPs, but certainly not for the SSW, since S and Si are both low-FIP elements for the solar wind. Thus S/Si should be unaltered in the source of the SSW. The S/Si map shows the source of the FIP pattern seen by SEPs to be in active regions.
The theory of Laming (2015) explains the FIP effect in terms of the ponderomotive force of Alfvén waves on ions in the chromosphere and the low corona. This force can differ on closed and open magnetic loops since on closed loops the wavelength can resonate with the loop length. The fractionation is concentrated at the top of the chromosphere where H is becoming ionized if the waves causing the ponderomotive force are in resonance with a coronal loop above (see Figure 8 of Laming 2015). In this case back diffusion of any small neutral fraction restricts fractionation, particularly of C, P, and S, which are less ionized than Fe, Mg, and Si.
Figure 3. The lower panel compares the FIP pattern of SEPs with the closed loop model of Laming (2015, Table 3), while the upper panel compares the FIP patterns of both the SSW and the CIRs with the open field model of Laming (2015, Table 4).

The FIP Effect and Origin of SEPs and the Solar Wind, D. V. Reames

Table 1. Photospheric, SEP, CIR, SSW, and FSW Abundances. 1 Lodders, Palme, and Gail (2009). * Caffau et al. (2011)
Many of these are the loops that may be closed for coronal plasma but open for 3 - 10 MeV amu⁻¹ SEP ions. SEP events with suppressed values of He/O and source plasma temperatures < 2 MK may involve shock acceleration of plasma from newly formed coronal loops with incomplete He ionization on the fringes of active regions. In any case, SEPs and the SSW must come from different regions of the corona overlying different FIP-dependent processes. Thus, SEPs, at least above a few MeV amu⁻¹, are not merely accelerated solar wind; they are a fundamentally different sample of the solar corona. Generally, in the large SEP events, the shock waves begin to sample the corona at 2 - 3 solar radii.
Acknowledgments: The author thanks Martin Laming for helpful discussions related to the theory included in this manuscript.

Disclosure of Potential Conflicts of Interest: The authors declare they have no conflicts of interest.
Abbo, L., Ofman, L., Antiochos, S.K., Hansteen, V.H., Harra, L., Ko, Y.-K., et al.: 2016, Slow solar wind: Observations and modeling, Space Sci. Rev. 201, 55, doi: 10.1007/s11214-016-0264-1
Asplund, M., Grevesse, N., Sauval, A.J., Scott, P.: 2009, The chemical composition of the sun, Ann. Rev. Astron. Astrophys. 47, 481, doi: 10.1146/annurev.astro.46.060407.145222
Bochsler, P.: 2007, Solar abundances of oxygen and neon derived from solar wind observations, Astron. Astrophys. 471, 315, doi: 10.1051/0004-6361:20077772
Bochsler, P.: 2009, Composition of matter in the heliosphere, Proc. IAU Sympos. 257, 17, doi: 10.1017/S1743921309029044
Breneman, H.H., Stone, E.C.: 1985, Solar coronal and photospheric abundances from solar energetic particle measurements, Astrophys. J. Lett. 299, L57, doi: 10.1086/184580
Brooks, D.H., Ugarte-Urra, I., Warren, H.P.: 2016, Full-Sun observations for identifying the source of the slow solar wind, Nature Comms. 6, 5947, doi: 10.1038/ncomms6947
Caffau, E., Ludwig, H.-G., Steffen, M., Freytag, B., Bonofacio, P.: 2011, Solar chemical abundances determined with a CO5BOLD 3D model atmosphere, Solar Phys. 268, 255, doi: 10.1007/s11207-010-9541-4
Cliver, E.W., Kahler, S.W., Reames, D.V.: 2004, Coronal shocks and solar energetic proton events, Astrophys. J. 605, 902, doi: 10.1086/382651
Collier, M.R., Hamilton, D.C., Gloeckler, G., Bochsler, P., Sheldon, R.B.: 1996, Neon-20, oxygen-16, and helium-4 densities, temperatures, and suprathermal tails in the solar wind determined with WIND/MASS, Geophys. Res. Lett. 23, 1191, doi: 10.1029/96GL00621
Desai, M.I., Giacalone, J.: 2016, Large gradual solar energetic particle events, Living Reviews of Solar Physics, doi: 10.1007/s41116-016-0002-5
Desai, M.I., Mason, G.M., Dwyer, J.R., Mazur, J.E., Gold, R.E., Krimigis, S.M., Smith, C.W., Skoug, R.M.: 2003, Evidence for a suprathermal seed population of heavy ions accelerated by interplanetary shocks near 1 AU, Astrophys. J. 588, 1149, doi: 10.1086/374310
Feldman, U., Widing, K.G.: 2007, Spectroscopic measurement of coronal compositions, Space Sci. Rev. 130, 115, doi: 10.1007/s11214-007-9157-7
Fludra, A., Schmelz, J.T.: 1999, The absolute coronal abundances of sulfur, calcium, and iron from Yohkoh-BCS flare spectra, Astron. Astrophys. 348, 286
Gloeckler, G., Geiss, J.: 2007, The composition of the solar wind in polar coronal holes, Space Sci. Rev. 130, 139, doi: 10.1007/s11214-007-9189-z
Gosling, J.T.: 1993, The solar flare myth, J. Geophys. Res. 98, 18937, doi: 10.1029/93JA01896
Kahler, S.W., Sheeley, N.R., Jr., Howard, R.A., Koomen, M.J., Michels, D.J., McGuire, R.E., von Rosenvinge, T.T., Reames, D.V.: 1984, Associations between coronal mass ejections and solar energetic proton events, J. Geophys. Res. 89, 9683, doi: 10.1029/JA089iA11p09683
Laming, J.M.: 2009, Non-WKB models of the first ionization potential effect: implications for solar coronal heating and the coronal helium and neon abundances, Astrophys. J. 695, 954, doi: 10.1088/0004-637X/695/2/954
Laming, J.M.: 2015, The FIP and inverse FIP effects in solar and stellar coronae, Living Reviews in Solar Physics 12, 2, doi: 10.1007/lrsp-2015-2
Laming, J.M.: 2017, The First Ionization Potential Effect from the Ponderomotive Force: On the Polarization and Coronal Origin of Alfvén Waves, Astrophys. J. Lett. 844, L153, doi: 10.3847/1538-4357/aa7cf1
Laming, J.M., Moses, J.D., Ko, Y.-K., Ng, C.K., Rakowski, C.E., Tylka, A.J.: 2013, On the remote detection of suprathermal ions in the solar corona and their role as seeds for solar energetic particle production, Astrophys. J. 770, 73, doi: 10.1088/0004-637X/770/1/73
Lee, M.A.: 2005, Coupled hydromagnetic wave excitation and ion acceleration at an evolving coronal/interplanetary shock, Astrophys. J. Suppl. 158, 38, doi: 10.1086/428753
Lee, M.A., Mewaldt, R.A., Giacalone, J.: 2012, Shock acceleration of ions in the heliosphere, Space Sci. Rev. 173, 247, doi: 10.1007/s11214-012-9932-y
Lodders, K., Palme, H., Gail, H.-P.: 2009, Abundances of the elements in the solar system, In: Trümper, J.E. (ed.) Landolt-Börnstein, New Series VI/4B, Springer, Berlin, Chap. 4.4, 560
Mason, G.M., Mazur, J.E., Dwyer, J.R., Reames, D.V., von Rosenvinge, T.T.: 1997, New spectral and abundance features of interplanetary heavy ions in corotating interaction regions, Astrophys. J. 486, 149, doi: 10.1086/310845
Mason, G.M., Sanderson, T.R.: 1999, CIR associated energetic particles in the inner and middle heliosphere, Space Sci. Rev. 89, 77, doi: 10.1023/A:1005216516443
Mewaldt, R.A., Cohen, C.M.S., Leske, R.A., Christian, E.R., Cummings, A.C., Stone, E.C., von Rosenvinge, T.T., Wiedenbeck, M.E.: 2002, Fractionation of solar energetic particles and solar wind according to first ionization potential, Advan. Space Res. 30, 79, doi: 10.1016/S0273-1177(02)00263-6
Meyer, J.-P.: 1985, The baseline composition of solar energetic particles, Astrophys. J. Suppl. 57, 151, doi: 10.1086/191000
Ng, C.K., Reames, D.V., Tylka, A.J.: 2003, Modeling shock-accelerated solar energetic particles coupled to interplanetary Alfvén waves, Astrophys. J. 591, 461, doi: 10.1086/375293
Rakowsky, C.E., Laming, J.M.: 2012, On the origin of the slow speed solar wind: helium abundance variations, Astrophys. J. 754, 65, doi: 10.1088/0004-637X/754/1/65
Reames, D.V.: 1995, Coronal abundances determined from energetic particles, Adv. Space Res. 15 (7), 41
Reames, D.V.: 2009a, Solar release times of energetic particles in ground-level events, Astrophys. J. 693, 812, doi: 10.1088/0004-637X/693/1/812
Reames, D.V.: 2009b, Solar energetic-particle release times in historic ground-level events, Astrophys. J. 706, 844, doi: 10.1088/0004-637X/706/1/844
Reames, D.V.: 2013, The two sources of solar energetic particles, Space Sci. Rev. 175, 53, doi: 10.1007/s11214-013-9958-9
Reames, D.V.: 2014, Element abundances in solar energetic particles and the solar corona, Solar Phys. 289, 977, doi: 10.1007/s11207-013-0350-4
Reames, D.V.: 2015, What are the sources of solar energetic particles? Element abundances and source plasma temperatures, Space Sci. Rev. 194, 303, doi: 10.1007/s11214-015-0210-7
Reames, D.V.: 2016a, Temperature of the source plasma in gradual solar energetic particle events, Solar Phys. 291, 911, doi: 10.1007/s11207-016-0854-9
Reames, D.V.: 2016b, The origin of element abundance variations in solar energetic particles, Solar Phys. 291, 2099, doi: 10.1007/s11207-016-0942-x
Reames, D.V.: 2017a, Solar Energetic Particles, Lecture Notes in Physics 932, Springer, Berlin, ISBN 978-3-319-50870-2, doi: 10.1007/978-3-319-50871-9
Reames, D.V.: 2017b, The abundance of helium in the source plasma of solar energetic particles, Solar Phys. 292, 156, doi: 10.1007/s11207-017-1173-5 (arXiv: 1708.05034)
Reames, D.V.: 2018, Abundances, ionization states, temperatures, and FIP in solar energetic particles, Space Sci. Rev., submitted (arXiv: 1709.00741)
Reames, D.V., Cliver, E.W., Kahler, S.W.: 2014, Abundance enhancements in impulsive solar energetic-particle events with associated coronal mass ejections, Solar Phys. 289, 3817, doi: 10.1007/s11207-014-0547-1
Reames, D.V., Richardson, I.G., Barbier, L.M.: 1991, On the differences in element abundances of energetic ions from corotating events and from large solar events, Astrophys. J. Lett. 382, L43, doi: 10.1086/186209
Richardson, I.G.: 2004, Energetic particles and corotating interaction regions in the solar wind, Space Sci. Rev. 111, 267, doi: 10.1023/B:SPAC.0000032689.52830.3e
Richardson, I.G., Barbier, L.M., Reames, D.V., von Rosenvinge, T.T.: 1993, Corotating MeV/amu ion enhancements at ≤ 1 AU from 1978 to 1986, J. Geophys. Res. 98, 13, doi: 10.1029/92JA01837
Rouillard, A.C., Odstrčil, D., Sheeley, N.R. Jr., Tylka, A.J., Vourlidas, A., Mason, G., Wu, C.-C., Savani, N.P., Wood, B.E., Ng, C.K., et al.: 2011, Interpreting the properties of solar energetic particle events by using combined imaging and modeling of interplanetary shocks, Astrophys. J. 735, 7, doi: 10.1088/0004-637X/735/1/7
Rouillard, A., Sheeley, N.R. Jr., Tylka, A., Vourlidas, A., Ng, C.K., Rakowski, C., Cohen, C.M.S., Mewaldt, R.A., Mason, G.M., Reames, D., et al.: 2012, The longitudinal properties of a solar energetic particle event investigated using modern solar imaging, Astrophys. J. 752, 44, doi: 10.1088/0004-637X/752/1/44
Schmelz, J.T., Reames, D.V., von Steiger, R., Basu, S.: 2012, Composition of the solar corona, solar wind, and solar energetic particles, Astrophys. J. 755, 33
Tylka, A.J., Cohen, C.M.S., Dietrich, W.F., Lee, M.A., Maclennan, C.G., Mewaldt, R.A., Ng, C.K., Reames, D.V.: 2005, Shock geometry, seed populations, and the origin of variable elemental composition at high energies in large gradual solar particle events, Astrophys. J. 625, 474, doi: 10.1086/429384
Webber, W.R.: 1975, Solar and galactic cosmic ray abundances - A comparison and some comments, Proc. 14th Int. Cos. Ray Conf. (Munich) 5, 1597
Zank, G.P., Li, G., Verkhoglyadova, O.: 2007, Particle acceleration at interplanetary shocks, Space Sci. Rev. 130, 255, doi: 10.1007/s11214-007-9214-2
Split Cycle: A New Condorcet Consistent Voting Method Independent of Clones and Immune to Spoilers
Wesley H Holliday [email protected]
Eric Pacuit [email protected]
University of California, Berkeley
University of Maryland
Version of March 2023. Forthcoming in Public Choice.
We propose a Condorcet consistent voting method that we call Split Cycle. Split Cycle belongs to the small family of known voting methods satisfying the anti-vote-splitting criterion of independence of clones. In this family, only Split Cycle satisfies a new criterion we call immunity to spoilers, which concerns adding candidates to elections, as well as the known criteria of positive involvement and negative involvement, which concern adding voters to elections. Thus, in contrast to other clone-independent methods, Split Cycle mitigates both "spoiler effects" and "strong no show paradoxes."
In this paper, we propose a Condorcet consistent voting method that we call Split Cycle, which has a number of attractive axiomatic properties. 1 Split Cycle responds to a concern well expressed by a 2004 letter to the Washington Post sent by a local organizer of the Green Party, as quoted by Miller (2019, p. 119):
[Electoral engineering] isn't rocket science. Why is it that we can put a man on the moon but can't come up with a way to elect our president that allows voters to vote for their favorite candidate, allows multiple candidates to run and present their issues and... [makes] the 'spoiler' problem... go away?
Starting with the problem of spoilers, Split Cycle satisfies not only the independence of clones criterion proposed by Tideman (1987) as an anti-vote-splitting criterion but also a new criterion we call immunity to spoilers that rules out spoiler effects not ruled out by independence of clones. What the Green Party organizer meant by a voting method that "allows voters to vote for their favorite candidate" is open to multiple interpretations; if it means a reasonable voting method that never provides an incentive for strategic voting, as Miller takes it to mean, then such a method is unavailable by well-known theorems on strategic voting (see Gibbard 1973, Satterthwaite 1973, Taylor 2005). More modestly, one may ask for a voting method such that at the very least, voters will never cause their favorite candidate to be defeated by going to the polls and expressing that their favorite candidate is their favorite. Understood this way, one is asking for a voting method that satisfies the criterion of positive involvement (Saari 1995). Split Cycle satisfies this criterion, as well as a number of other desirable criteria, including the Condorcet loser criterion, independence of Smith-dominated alternatives, negative involvement, non-negative responsiveness, reversal symmetry, and a criterion concerning the possibility of ties among winners that we call rejectability. In fact, Split Cycle can be distinguished from all voting methods we know of in any of the following three ways:
• Only Split Cycle satisfies independence of clones, positive involvement, and at least one of Condorcet consistency, non-negative responsiveness, and immunity to spoilers. 2
• Only Split Cycle satisfies independence of clones and negative involvement.
• Only Split Cycle satisfies independence of clones, immunity to spoilers, and rejectability.

Split Cycle is an example of a head-to-head (or pairwise) voting method. We compare each pair of candidates a and b in a head-to-head match. If more voters rank a above b than rank b above a, then a wins the head-to-head match and b loses the head-to-head match. If a wins against b, then the number of voters who rank a above b minus the number who rank b above a is a's margin of victory over b. If one candidate wins its matches against all other candidates, that candidate is the winner of the election. But there is a chance that every candidate will lose a match to some other candidate. 3 When this happens, there is a majority cycle: a list of candidates where each candidate wins against the next in the list, and the last candidate wins against the first. For example, candidates a, b, c form a majority cycle if a wins against b, b wins against c, and c wins against a. There can also be cycles involving more than three candidates.

1 After submitting this paper, we learned from Jobst Heitzig of his notion of the "immune set" discussed in a 2004 post on the Election-Methods mailing list (Heitzig 2004a), which is equivalent to the set of winners for Split Cycle after replacing 'stronger' with 'at least as strong' in Heitzig's definition in the post. See Remark 3.13 for further connections with Heitzig 2002. We subsequently learned from Markus Schulze of Steve Eppley's notion of the "Beatpath Criterion Method" in a 2000 post on the Election-Methods mailing list (Eppley 2000), which is defined analogously to Split Cycle except that it measures strength of majority preference using winning votes (the number of voters who rank x above y) whereas Split Cycle uses the margin of victory (the number of voters who rank x above y minus the number of voters who rank y above x). As far as we know, the Split Cycle voting method has not been studied in the research literature. In a companion paper, Holliday and Pacuit 2021a, we study Split Cycle as what is known as a collective choice rule in the social choice theory literature.

2 Among proposed non-Condorcet methods, we believe only Instant Runoff satisfies both independence of clones and positive involvement, but it fails the non-negative responsiveness criterion, which Split Cycle satisfies, as well as immunity to spoilers and negative involvement (see Appendix C.8).
Split Cycle deals with the problem of majority cycles as follows: 4

1. In each cycle, identify the head-to-head win(s) with the smallest margin of victory in that cycle.
2. After completing step 1 for all cycles, discard the identified wins. All remaining wins count as defeats of the losing candidates.
For example, if a wins against b by 1,000 votes, b wins against c by 2,000 votes, and c wins against a by 3,000 votes, then a's win against b is discarded. Candidate b's win against c counts as a defeat of c unless it appears in another cycle (involving some other candidates) with the smallest margin of victory in that cycle. The same applies to c's win against a. Crucially, after step 2, there is always an undefeated candidate (see Section 3.2). If there is only one, that candidate wins the election. If there is more than one, then a tiebreaker must be used (see Section 3.3).
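The two-step procedure above admits a compact equivalent formulation: a head-to-head win of a over b survives step 2 exactly when its margin exceeds the strength of the strongest majority path from b back to a (where a path's strength is its minimum margin); otherwise the win would be among the weakest in some cycle. The following Python sketch computes Split Cycle winners this way on the worked example; it is our own minimal illustration, not the authors' implementation, and the function names are ours.

```python
from itertools import product

def split_cycle_winners(margin):
    """Split Cycle winners from a margin matrix.

    margin[a][b] = (# voters ranking a above b) - (# ranking b above a).
    a defeats b iff margin[a][b] > 0 and margin[a][b] exceeds the
    strongest majority-path strength from b back to a, so the a-over-b
    win is never a weakest win in any cycle containing it.
    """
    cands = list(margin)
    # Widest-path (max-min) strengths over positive-margin edges,
    # computed with a Floyd-Warshall-style sweep (k varies outermost).
    strength = {a: {b: margin[a][b] if margin[a][b] > 0 else 0
                    for b in cands} for a in cands}
    for k, a, b in product(cands, cands, cands):
        if a != b:
            strength[a][b] = max(strength[a][b],
                                 min(strength[a][k], strength[k][b]))
    defeated = {b for a, b in product(cands, cands)
                if margin[a][b] > 0 and margin[a][b] > strength[b][a]}
    return sorted(c for c in cands if c not in defeated)

# The example from the text: a beats b by 1,000, b beats c by 2,000,
# and c beats a by 3,000; the a-over-b win is discarded, so b is
# the unique undefeated candidate.
margins = {'a': {'a': 0, 'b': 1000, 'c': -3000},
           'b': {'a': -1000, 'b': 0, 'c': 2000},
           'c': {'a': 3000, 'b': -2000, 'c': 0}}
print(split_cycle_winners(margins))  # → ['b']
```

On this profile the only cycle is a→b→c→a, its weakest win (a over b, margin 1,000) is discarded, and b is left undefeated, matching the text.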
In the rest of this introduction, we provide additional background to the benefits of Split Cycle: we spell out the problem of "spoiler effects" that the independence of clones and immunity to spoilers criteria mitigate (Section 1.1), followed by the "strong no show paradox" that the positive involvement and negative involvement criteria rule out (Section 1.2). We then provide a roadmap of the rest of the paper in Section 1.3.
The Problem of Spoilers
Let us begin with one of the most famous recent examples of a spoiler effect in a U.S. election.
Example 1.1. In the 2000 U.S. Presidential election in Florida, run using the Plurality voting method, George W. Bush, Al Gore, and Ralph Nader received the following votes:

Bush: 2,912,790
Gore: 2,912,253
Nader: 97,488
It is reasonable to assume that if Nader had dropped out before Election Day, then a sufficiently large number of his 97,488 voters would have voted for Gore so that Gore would have won the election (Magee 2003, Herron and Lewis 2007). It is also reasonable to assume that while Nader may have been the favorite of some voters who strategically voted for Gore, still many more voters preferred Gore to Nader than vice versa. So Nader would pose no direct threat to Gore in a two-person election, but by drawing enough votes away from Gore in the three-person election, he handed the election to Bush. Thus, Nader "spoiled" the election for Gore.
In elections where voters submit rankings of the candidates, rather than only indicating their favorite,
we can give precise content to the claim that one candidate spoiled the election for another. Now p has a majority of first place votes, so p is declared the Instant Runoff winner. Note, however, that in the three-person election, d was the Condorcet winner: a majority of voters (66) prefer d to p, and a majority of voters (63) prefer d to r. Yet the addition of r kicks d out of the winning spot and results in p being the Instant Runoff winner. Thus, r spoiled the election for d.
The independence of clones criterion cannot account for the sense in which r spoiled the election for d, because r is not a clone of any candidate. Moreover, Instant Runoff satisfies independence of clones. Thus, independence of clones does not address all spoiler effects. A more recent example occurred in the 2022 Special General Election for U.S. Representative in Alaska on August 16, 2022: had one of the Republicans, Palin, not been on the ballot, then (holding voter rankings fixed) the other Republican, Begich, would have won; moreover, a majority of voters ranked Begich above Palin; and yet with Palin included, Instant Runoff elected the Democrat in the race, making Palin a spoiler (see https://github.com/voting-tools/electionanalysis). Below we will propose a criterion of immunity to spoilers that accounts for cases like this and that of the Burlington mayoral election.
The cost of using a voting method that allows spoiler effects is not just that elections will actually be spoiled, as in Examples 1.1 and 1.2 (for discussion of the 2016 U.S. Presidential election, see Kurrild-Klitgaard 2018, Woon et al. 2020, and Potthoff and Munger 2021, and for examples outside of the U.S., see Kaminski 2015, § 20.3.2, and Feizi et al. 2020). Another cost is that potential candidates may be discouraged from entering close races in the first place on the grounds that they might be spoilers.
What kind of "spoiler effects" should we try to prevent? This question mixes the conceptual question of what a "spoiler" is and the normative question of what effects we should prevent. Note that here we are dealing only with spoiler effects in single-office elections, as matters are more complicated in multi-office elections (see, e.g., Kaminski 2018).
First consider an obviously flawed definition of a spoiler: b is a "spoiler" for a just in case a would win without b in the election, but when b joins, then b but not a wins. This is of course not the relevant notion, since spoilers are not winners. Thus, consider a second definition: b is a "spoiler" for a just in case a would win without b in the election, but when b joins, neither a nor b wins. It is clearly necessary, in order for b to be a spoiler for a, that neither a nor b wins after b joins, but is it sufficient? Whether or not it is sufficient according to the ordinary concept of a spoiler, we do not think that we should prevent all such effects. 5 Consider the following example, where the diagram on the right indicates that, e.g., the number of voters who prefer a to c is one greater than the number who prefer c to a:

5 Thus, we think it is too strong to require that a voting method satisfy the condition known as the Aïzerman property (Laslier 1997, p. 41) or weak superset property or α ⊆ (Brandt et al. 2018), which is equivalent to the condition that if a would win were no candidate from a set N in the election, then after the candidates in N (the "newcomers") join the election, if none of the candidates in N wins, then a still wins. For the same reason, we think it is too strong to require that a voting method satisfy the strong candidate stability property (studied for resolute voting methods in Dutta et al. 2001 and Ehlers and Weymark 2003 and generalized to irresolute methods in Eraslan and McLellan 2004 and Rodríguez-Álvarez 2006), which implies that if b would not win were b to join the election, then a would win with b in the election if and only if a would win without b in the election (cf. α in Brandt et al. 2018). The problem with these conditions is that they ignore the majority preference relations between a and the new candidates, which our condition of immunity to spoilers takes into account.
For this election, we agree with proponents of voting methods such as Minimax, Ranked Pairs, and Beat
Path (all defined in Appendix C) that c should be the winner. Everyone suffers a majority loss to someone, but while c suffers a slight majority loss to a, a suffers a larger majority loss to b, who suffers an even larger majority loss to c. The electorate is in a sense incoherent, and the fairest way to respond in this case is to elect c. 6 But if b had not been in the election, so we would not have had to account for the majority preferences for b over a and for c over b, then a would have been the appropriate winner in the two-person election. Since we agree with all of these verdicts, we do not think a voting method should prevent all effects of the kind described in the second definition.
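The verdict that c should win here can be illustrated with Minimax, one of the methods named above: the winner is the candidate whose worst head-to-head loss is smallest. Since the diagram is not reproduced in the text, the margins below are assumed for illustration (only a's margin of 1 over c is stated; the "larger" and "even larger" losses are taken to be 3 and 5).

```python
# Hypothetical margin graph matching the described cycle:
# a beats c slightly (by 1, as stated), b beats a by more (assumed 3),
# and c beats b by even more (assumed 5).
margins = {'a': {'a': 0, 'b': -3, 'c': 1},
           'b': {'a': 3, 'b': 0, 'c': -5},
           'c': {'a': -1, 'b': 5, 'c': 0}}

def minimax_winners(margin):
    """Candidates whose worst head-to-head loss margin is smallest."""
    worst = {a: max(margin[b][a] for b in margin) for a in margin}
    best = min(worst.values())
    return sorted(a for a in margin if worst[a] == best)

print(minimax_winners(margins))  # → ['c']
```

Candidate c's worst loss (by 1, to a) is smaller than a's worst loss (by 3, to b) and b's worst loss (by 5, to c), so c wins, as the text argues it should.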
Similar remarks apply to a third definition (from Wikipedia contributors 2020a): b is a "spoiler" for a just in case a would win without b in the election, and (most of) the voters who prefer b over c also prefer a over c, but when b joins, neither a nor b wins but rather c wins. Based on the example above, in which all voters who prefer b over c also prefer a over c, we do not think a voting method should prevent all such effects.
The problem with the definitions of spoiler effects above is that they ignore the voters' preferences for a vs. b. If a majority of voters prefer b to a, then b may legitimately make a a loser, even if b does not replace a as a winner. Thus, the only spoiler effects that we ought to rule out are those in which a is majority preferred to b. This leads to the idea that one ought to use a voting method with the following property:
• Immunity to spoilers: if a would win without b in the election, and more voters prefer a to b than prefer b to a, then it is not the case that when b joins the election, both a and b lose.
This captures Example 1.1 (in the imaginary version with ranked ballots), as a majority of voters prefer Gore to Nader. Unlike independence of clones, it also captures Example 1.2, as a majority of voters prefer the Democrat to the Republican in the Burlington election.
One way to avoid a spoiler effect of the kind identified in immunity to spoilers is that when a would win without b in the election, and a majority of voters prefer a to b, then when b joins the election, a loses but b wins. In this case, we say that b steals the election from a. It is hardly more desirable for b to steal the election from a than to spoil the election for a, so we propose the property of

• Immunity to stealers: if a would win without b in the election, and more voters prefer a to b than prefer b to a, then it is not the case that when b joins the election, a loses but b wins.
The combination of immunity to spoilers and immunity to stealers is equivalent to a criterion we call
• Stability for winners: if a would win without b in the election, and more voters prefer a to b than prefer b to a, then when b joins, a still wins.
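The logical relationship among these three criteria (an instance of stability for winners fails exactly when b either spoils or steals the election) can be sketched as simple predicates. This code is our own illustration, not from the paper; the function and argument names are hypothetical:

```python
def spoiled(wins_without_b, wins_with_b, a, b, margin_a_over_b):
    """b spoils the election for a: a would win without b, a majority
    prefers a to b, yet with b in the election neither a nor b wins."""
    return (a in wins_without_b and margin_a_over_b > 0
            and a not in wins_with_b and b not in wins_with_b)

def stolen(wins_without_b, wins_with_b, a, b, margin_a_over_b):
    """b steals the election from a: same hypothesis, but b wins and a loses."""
    return (a in wins_without_b and margin_a_over_b > 0
            and a not in wins_with_b and b in wins_with_b)

def stable_for_winner(wins_without_b, wins_with_b, a, b, margin_a_over_b):
    """Stability for winners: under the hypothesis, a must still win."""
    if a in wins_without_b and margin_a_over_b > 0:
        return a in wins_with_b
    return True  # the criterion imposes no constraint otherwise
```

Under the hypothesis (a wins without b and more voters prefer a to b), a fails to win with b present exactly when either `spoiled` or `stolen` holds, which is the claimed equivalence.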
This criterion can be seen as extending the idea of Condorcet consistency 7 to the variable-candidate setting: a candidate who would be a winner without the newcomers and is majority preferred to all the newcomers remains a winner after the addition of the newcomers. We will show that Split Cycle satisfies stability for winners and hence immunity to spoilers and stealers.

6 At least for a deterministic voting method. A voting method that outputs a probability distribution on the set of candidates (see Brandt 2017) could assign nonzero probabilities to each candidate in this example. But in this paper we do not consider probabilistic voting methods.

7 More accurately, the idea that any Condorcet winner ought to be at least tied for winning the election.
Note that we are not claiming that in a particular election using a particular voting method, if there is a candidate a such that for some b, a would win in the election without b, and more voters prefer a to b than prefer b to a, then a ought to win. For example, suppose that in an election using the method of dictatorship, a would have won without b in the election, i.e., a is the favorite candidate of the dictator besides b, and a majority of voters prefer a to b. It does not follow that a ought to win the election, as, e.g., there may be a Condorcet winner c who ought to win. Our claim is rather that one ought to use a voting method satisfying stability for winners, which rules out dictatorship as a candidate voting method in the first place.
Another qualification is that in the statement of the axioms above, by 'win' we really mean at least tie for the win. It is too much to require that these axioms hold even when we incorporate tiebreaking to select a single winner. For a simple example, suppose that in a two-candidate election, the same number of voters prefer a to c as prefer c to a, so there is a perfect tie; and suppose that with b included in the election, both a and c beat b head-to-head, but a's margin of victory over b is much larger than c's margin of victory over b. In this case, both a and c can argue that without b in the election, they would have won, and each of them beats b head-to-head, so a and c should both still win, thereby avoiding a spoiler effect with respect to b. Indeed, we will count both a and c as undefeated, according to Split Cycle. But if we need to select a single winner, we think it is reasonable to break the tie in a's favor when b is included in the election, as a has a larger majority victory over b than c does, thereby breaking the symmetry between a and c. Thus, while stability for winners should hold for the method of selecting the pre-tiebreaking winners, we should not require it of tiebreaking procedures. We propose weaker axioms on tiebreaking procedures in Section 4.
The Strong No Show Paradox
The term "no show paradox" was coined by Fishburn and Brams (1983) for violations of what is now called the negative involvement criterion (see Pérez 2001). This criterion states that if a candidate x is not among the winners in an initial election scenario, then if we add to that scenario some new voters who rank x as the (unique) last place candidate on their ballots, then the addition of those voters should not make x a winner.
2 3 1 3
a b c c
b c a b
c a b a
Candidate a receives the fewest first place votes, so a is eliminated in the first round. With a eliminated from the ballots, b receives a majority of first place votes and hence wins according to Instant Runoff. But now suppose that two additional voters with the ranking abc (so a is preferred to b and c, and b is preferred to c) make it to the election-their car does not break down-resulting in the following:
4 3 1 3
a b c c
b c a b
c a b a
Now candidate b receives the fewest first place votes, so b is eliminated in the first round. With b eliminated from the ballots, c receives a majority of first place votes and hence wins according to Instant Runoff. Thus, the addition of two voters who rank c last makes c the winner. This is a failure of negative involvement.
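This Instant Runoff computation can be replayed in a few lines of Python. The sketch below is our own and assumes the unique-plurality-loser elimination rule (no ties for fewest first-place votes arise in these two profiles):

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant Runoff: repeatedly eliminate the candidate with the fewest
    first-place votes among remaining candidates (assumed unique here).
    Each ballot is a ranking tuple, most preferred first."""
    remaining = set(ballots[0])
    while len(remaining) > 1:
        tops = Counter(next(c for c in b if c in remaining) for b in ballots)
        loser = min(remaining, key=lambda c: tops.get(c, 0))
        remaining.discard(loser)
    return remaining.pop()

# 2 voters: a b c, 3 voters: b c a, 1 voter: c a b, 3 voters: c b a
before = 2*[('a','b','c')] + 3*[('b','c','a')] + 1*[('c','a','b')] + 3*[('c','b','a')]
# two more voters ranking c last make it to the election:
after = before + 2*[('a','b','c')]

print(instant_runoff(before))  # b
print(instant_runoff(after))   # c
```

Adding two voters who rank c in last place turns c into the winner, reproducing the failure of negative involvement.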
The dual of the negative involvement criterion is the positive involvement criterion (again see Saari 1995, Pérez 2001). 8 This criterion states that if a candidate x is among the winners in an initial election scenario, then if we add to that scenario some new voters who rank x as the (unique) first place candidate on their ballots, then the addition of these new voters should not make x a loser. Moulin (1988) gives the following example of a failure of positive involvement for the Sequential Elimination voting method in which a faces b in the first round, and then the winner of the first round faces c. 9 In the initial election scenario, we have the following ballots:
2 2 1
a b c
b c a
c a b
In the first round, a beats b (3 voters prefer a to b, and only 2 prefer b to a), and then in the second, c beats a (3 voters prefer c to a, and only 2 prefer a to c). But now suppose two additional voters make it to the election with the ballot cba, so we have:
2 2 1 2
a b c c
b c a b
c a b a
Now in the first round, b beats a (4 voters prefer b to a, and only 3 prefer a to b), and then in the second, b
beats c (4 voters prefer b to c, and only 3 prefer c to b). Thus, adding voters whose favorite candidate is c turns c from being a winner to a loser. This is a failure of positive involvement.
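Moulin's example can likewise be checked computationally. The sketch below is our own (function names are ours); it computes pairwise margins from linear ballots and runs the fixed agenda in which a faces b and the survivor then faces c:

```python
def margin(ballots, x, y):
    """Voters preferring x to y minus voters preferring y to x (linear ballots)."""
    return sum(1 if b.index(x) < b.index(y) else -1 for b in ballots)

def sequential_elimination(ballots, agenda):
    """The first candidate on the agenda faces the next by majority vote;
    the majority winner survives to face the following candidate, and so on."""
    survivor = agenda[0]
    for challenger in agenda[1:]:
        if margin(ballots, challenger, survivor) > 0:
            survivor = challenger
    return survivor

# Moulin's profile: 2 voters a b c, 2 voters b c a, 1 voter c a b
initial = 2*[('a', 'b', 'c')] + 2*[('b', 'c', 'a')] + 1*[('c', 'a', 'b')]
# two additional voters whose unique favorite is c:
enlarged = initial + 2*[('c', 'b', 'a')]

print(sequential_elimination(initial, ['a', 'b', 'c']))   # c
print(sequential_elimination(enlarged, ['a', 'b', 'c']))  # b
```

The two new voters who rank c first turn c from a winner into a loser, reproducing the failure of positive involvement.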
What is wrong with voting methods that fail negative or positive involvement? Our objection to them is not that they incentivize a certain kind of strategic (non-)voting. All reasonable voting methods incentivize some kind or other of strategic voting (again see Taylor 2005). Suppose we have a group of voters who will definitely cast their ballots and vote sincerely, regardless of the electoral consequences. Thus, the voters in the previous example who rank c first will come to the polls and cast their ballots, resulting in c losing the election that otherwise c would have won. Since the voters do not stay home strategically, is the fact that the voting method fails positive involvement unproblematic? Not at all. The problem is that the voting method is responding in the wrong way to additional unequivocal support from a voter for a candidate (c is the voter's unique favorite). As an analogy, a voting method failing the non-negative responsiveness criterion (see Section 5.5.1) also means that it can incentivize a certain kind of strategic voting; but even for a group of always sincere voters, failing non-negative responsiveness is a flaw of a voting method because it means that the voting method is responding in the wrong way to voters purely improving a candidate's position relative to other candidates. The failure of positive or negative involvement is now sometimes called the "strong no show paradox." 10
The reason seems to be that Moulin (1988) changed the meaning of "no show paradox" to stand not for a violation of negative (or positive) involvement but rather for a violation of the participation criterion: if a candidate x is the winner in an initial election, then if we add to that scenario some new voters who rank x above y, then the addition of these new voters should not make y the winner. Crucially, it is not required here that x is at the top of the new voters' ballots or that y is at the bottom. In our view, this participation criterion is problematic. To see why (as made precise in Appendix B), note that if the new voters do not rank x at the top of their ballots and do not rank y at the bottom, then in the presence of majority cycles, new voters having ballots with the ranking x′ x y y′ increase the number of people who prefer x′ to x, which may result in x′ knocking x out of contention, and increase the number of people who prefer y to y′, which may result in y′ no longer knocking y out of contention. No wonder, then, that the winner may change from
x to y. In fact, remarkably, adding new voters who rank x above y may make y's new position vis-à-vis other candidates perfectly symmetrical to x's old position vis-à-vis other candidates, up to a renaming of the candidates (again see Appendix B). A certain kind of neutrality then requires that the winner changes from
x to y. Of course, a method not satisfying participation will incentivize some strategic non-voting, as the voters in question will have an incentive not to vote (sincerely). But again, all voting methods incentivize strategic behavior. Thus, we are not so troubled by results showing that all Condorcet consistent voting methods fail versions of participation 11 and therefore incentivize some strategic behavior. By contrast, we are troubled by failures of positive or negative involvement, as this shows that the method responds in the wrong way to unequivocal support for (resp. rejection of) a candidate.
Unlike well-known voting methods such as Instant Runoff, Ranked Pairs, and Beat Path, the method we propose in this paper, Split Cycle, satisfies positive and negative involvement. Hence it is not only immune to spoilers but also immune to the strong no show paradox. Figure 1 illustrates how solving the spoiler problem and the strong no show paradox leads uniquely to Split Cycle as opposed to standard voting methods.
Organization
The rest of the paper is organized as follows. In Section 2, we review some preliminary notions: profiles, margin graphs, and voting methods, as well as operations on profiles. In Section 3, we motivate and define our proposed voting method, Split Cycle. In Section 4, we discuss our new voting criteria concerning spoilers, stealers, and stability, as well as a stronger criterion based on Sen's (1971; 1993) choice-functional condition of expansion consistency. In Section 5, we test Split Cycle against a number of other criteria from the literature. Our axiomatic analysis of Split Cycle and other methods is summarized in Figure 2. We conclude in Section 6 with a brief summary and directions for further research on Split Cycle. Appendices A and B contain proofs deferred from the main text. Appendices C and D contain definitions of other voting methods in Figure 2 and data from simulations of Split Cycle and other voting methods, respectively.

10 Pérez (2001) calls violations of positive involvement the "positive strong no show paradox" and violations of negative involvement the "negative strong no show paradox." Felsenthal and Tideman (2013) and Felsenthal and Nurmi (2016) call them the "P-TOP" and "P-BOT" paradoxes, respectively.

11 Note that Moulin (1988) only proves participation failure for Condorcet consistent voting methods that are resolute, i.e., always pick a unique winner, which requires imposing an arbitrary tiebreaking rule that violates anonymity or neutrality (see Section 5.1.1). Since none of the standard Condorcet consistent voting methods are resolute, one may wonder about the significance of the fact that resolute Condorcet methods all fail participation. For discussion of the irresolute case, see Pérez 2001, Jimeno et al. 2009, and Sanver and Zwicker 2012.

Figure 1: An illustration of how solving the Spoiler Problem and the Strong No Show Paradox leads uniquely to Split Cycle as opposed to standard voting methods, some of which are displayed in gray (and defined in Appendix C). For each of the three "roads" in the diagram and each of the displayed voting methods, a voting method is shown on a road if and only if it satisfies all of the criteria that appear in the road lower in the diagram than the voting method. For example, Minimax satisfies Condorcet Winner and Positive Involvement, explaining its location on the middle road; it satisfies Negative Involvement but not Condorcet Loser, explaining its location on the right road; and it satisfies Immunity to Spoilers but not Independence of Clones, explaining its absence from the left road. All voting methods except Split Cycle are blocked from entering the overlap of the three roads, but some are blocked even earlier by other criteria, as indicated by the horizontal lines. For example, Instant Runoff is blocked by the Condorcet Winner criterion.

Remark 1.3. The Split Cycle voting method is currently in use at stablevoting.org, as described in Section 3.3. An implementation in Python of Split Cycle and other methods referenced in this paper is available.

Figure 2: Comparison of Split Cycle to standard voting methods in terms of selected voting criteria. A ✓ indicates that the criterion is satisfied, while − indicates that it is not. A qualified mark for Ranked Pairs indicates that it satisfies immunity to stealers in uniquely-weighted election profiles (Definition 2.4) but not in general. The * indicates that there are subtleties in how one must define Ranked Pairs to ensure full independence of clones (together with anonymity), as discussed in Remark 5.25.
For the Uncovered Set column, there are several definitions of the Uncovered Set that are equivalent for an odd number of voters with linear ballots but inequivalent in general; the † indicates that while one version of the Uncovered Set (Fishburn 1977) fails to satisfy independence of clones and expansion consistency for all profiles, other definitions satisfy both axioms for all profiles, and all definitions do so for profiles with an odd number of voters with linear ballots. The ‡ indicates that whether Instant Runoff satisfies independence of clones depends on how ties for the fewest first-place votes are handled. For proofs of these claims and those in the table about voting methods other than Split Cycle, see Appendix C.
[The body of the Figure 2 table did not survive extraction; only the row labels are recoverable. The criteria compared are: Immunity to Stealers (4.2), Stability for Winners (4.3), Expansion Consistency, γ (4.4), Anonymity and Neutrality (5.1.1), Reversal Symmetry (5.1.2), Pareto (5.2.1), Condorcet Winner (5.2.2), Condorcet Loser (5.2.2), Smith (5.2.3), ISDA (5.3.1), Independence of Clones (5.3.2), Rejectability (5.4.1), Resolvability (5.4.2), Non-negative Responsiveness (5.5.1), Positive Involvement (5.5.2), and Negative Involvement (5.5.2).]
Preliminaries
Profiles, Margin Graphs, and Voting Methods
Fix infinite sets V and X of voters and candidates, respectively. A given election will involve only finite subsets V ⊆ V and X ⊆ X , but we want no upper bound on the number of voters or candidates who may participate in elections. A binary relation P on X is asymmetric if for all x, y ∈ X, if xP y, then not yP x.
Let B(X) be the set of all asymmetric binary relations on X.
Definition 2.1. A profile is a pair (P, X(P)) where P : V (P) → B(X(P)) for some nonempty finite X(P) ⊆ X and nonempty finite V (P) ⊆ V. We conflate the profile with the function P. 12 We call X(P) and V (P) the sets of candidates in P and voters in P, respectively. We call P(i) voter i's ballot, and we write 'xP i y' for (x, y) ∈ P(i).
As usual, we take xP i y to mean that voter i strictly prefers candidate x to candidate y. It is standard to assume that P i satisfies additional constraints beyond asymmetry, such as transitivity and even negative transitivity (if not xP i y and not yP i z, then not xP i z). More generally, one may consider the following classes of profiles: P, the class of all profiles; A , the class of acyclic profiles, in which each voter's ballot is acyclic, meaning that there are no x 1 , . . . , x n ∈ X(P) with n > 1 such that for k ∈ {1, . . . , n − 1}, we have x k P i x k+1 , and x n = x 1 ; S , the class of strict weak order profiles, in which each voter's ballot is a strict weak order, meaning that it is asymmetric and negatively transitive (which together imply transitivity); L , the class of linear profiles, in which each voter's ballot is a linear order, meaning that it is transitive and for all x, y ∈ X(P) with x ≠ y, we have either xP i y or yP i x.
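These ballot classes can be checked mechanically. The following brute-force predicates are our own illustration (a ballot is given as a set of ordered strict-preference pairs over a candidate list X); they test the defining conditions directly:

```python
def is_acyclic(P, X):
    """No cycle x1 P x2 P ... P xn = x1 with n > 1 (depth-first search)."""
    def cycles_back(start, cur, seen):
        return any((cur, y) in P and
                   (y == start or (y not in seen and
                                   cycles_back(start, y, seen | {y})))
                   for y in X)
    return not any(cycles_back(x, x, {x}) for x in X)

def is_strict_weak_order(P, X):
    """Asymmetric and negatively transitive:
    not xPy and not yPz imply not xPz."""
    asym = all(not ((x, y) in P and (y, x) in P) for x in X for y in X)
    negtrans = all((x, z) not in P
                   for x in X for y in X for z in X
                   if (x, y) not in P and (y, z) not in P)
    return asym and negtrans

def is_linear(P, X):
    """Transitive and total on distinct candidates."""
    trans = all((x, z) in P
                for x in X for y in X for z in X
                if (x, y) in P and (y, z) in P)
    total = all((x, y) in P or (y, x) in P for x in X for y in X if x != y)
    return trans and total

# A ballot ranking only a (with b and c left unranked and mutually
# incomparable) is a strict weak order but not a linear order:
partial = {('a', 'b'), ('a', 'c')}
print(is_strict_weak_order(partial, ['a', 'b', 'c']))  # True
print(is_linear(partial, ['a', 'b', 'c']))             # False
```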
Proving that a voting method satisfies some property with respect to a larger class of profiles, like P or A , is stronger than proving that it satisfies the property with respect to a smaller class of profiles, like L . On the other hand, proving that a voting method does not satisfy some property with respect to a smaller class of profiles, like L , is stronger than proving that it does not satisfy the property with respect to a larger class of profiles, such as P or A .
Remark 2.2. By not requiring linear profiles, we can represent elections in which voters are not required to rank all the candidates up for election. Depending on the official interpretation of what it means to leave a candidate unranked, a ballot with unranked candidates could mean either that (i) all ranked candidates are strictly preferred to all unranked candidates, and there are no strict preferences between unranked candidates or that (ii) there are no strict preferences at all involving unranked candidates.
Next we define the notions of an abstract margin graph and the margin graph of a particular profile. Definition 2.3. A margin graph is a weighted directed graph M with positive integer weights whose edge relation is asymmetric. We say M has uniform parity if all weights of edges are even or all weights of edges are odd, and if there are two nodes with no edge between them, then all weights are even.
Two examples of margin graphs already appeared in Section 1.1.
Definition 2.4. Let P be a profile and a, b ∈ X(P). Then Margin_P(a, b) = |{i ∈ V(P) | aP i b}| − |{i ∈ V(P) | bP i a}|.
The margin graph of P, M(P), is the weighted directed graph whose set of nodes is X(P) with an edge from a to b weighted by Margin_P(a, b) when Margin_P(a, b) > 0, in which case we say that a is majority preferred to b. We write a →^α_P b if α = Margin_P(a, b) > 0, omitting the α when the size of the margin is not important and P when the profile in question is clear. We say that P is uniquely weighted if for all x, y, x′, y′ ∈ X(P), if x ≠ y, x′ ≠ y′, and (x, y) ≠ (x′, y′), then Margin_P(x, y) ≠ Margin_P(x′, y′).
We call the unweighted directed graph underlying M(P) the majority graph of P, denoted M (P), and we call the edge relation of M (P) the majority relation of P.
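Definition 2.4 translates directly into code. The sketch below is ours; representing each ballot as its set of strict-preference pairs (per Definition 2.1), it computes the margin graph as a dictionary of positively weighted edges:

```python
def preference_pairs(ranking):
    """Strict-preference pairs of a linear ballot given as a tuple."""
    return {(ranking[i], ranking[j])
            for i in range(len(ranking)) for j in range(i + 1, len(ranking))}

def margin_graph(ballots, candidates):
    """Margin graph as {(a, b): Margin(a, b)}, keeping only positive margins.
    Each ballot is a set of pairs, so non-linear ballots are also handled."""
    edges = {}
    for a in candidates:
        for b in candidates:
            if a != b:
                m = sum(((a, b) in v) - ((b, a) in v) for v in ballots)
                if m > 0:
                    edges[(a, b)] = m
    return edges

# A 5-voter profile: 2 voters a b c, 2 voters b c a, 1 voter c a b
ballots = [preference_pairs(r) for r in
           2*[('a', 'b', 'c')] + 2*[('b', 'c', 'a')] + 1*[('c', 'a', 'b')]]
print(margin_graph(ballots, ['a', 'b', 'c']))
# {('a', 'b'): 1, ('b', 'c'): 3, ('c', 'a'): 1}
```

The resulting margin graph here is a majority cycle a → b → c → a, with margins 1, 3, 1.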
The key fact about the relation between margin graphs and profiles is given by Debord's Theorem.
Theorem 2.5 (Debord 1987). For any margin graph M, there is a strict weak order profile P such that M is the margin graph of P; and if M has uniform parity, then there is a linear profile P such that M is the margin graph of P.
Finally, we define what we mean by a voting method for the purposes of this paper.
Definition 2.6. Given a set D of profiles, a voting method on D is a function F such that for all profiles P ∈ D, we have ∅ ≠ F(P) ⊆ X(P). We call F(P) the set of winners or winning set for P under F. We write dom(F) for the set D on which F is defined.
As usual, if F (P) contains multiple winners, we assume that some further tiebreaking process would then apply, though we do not fix the nature of this process (see Schwartz 1986, pp. 14-5 for further discussion).
Options include the use of a deterministic tiebreaking procedure, an even-chance lottery on F (P), a runoff election with the candidates in F (P) in which a different set of voters may participate, etc.
Operations on Profiles
Sometimes we will be interested in combining two profiles for the same set of candidates and disjoint sets of voters, for which we use the following notation.
Definition 2.7. Given profiles P and P′ such that X(P) = X(P′) and V(P) ∩ V(P′) = ∅, we define the profile P + P′ : V(P) ∪ V(P′) → B(X(P)) such that (P + P′)(i) = P(i) if i ∈ V(P) and (P + P′)(i) = P′(i) if i ∈ V(P′).
To add P to itself, we may take P + P * where P * is a copy of P with a disjoint set of voters. 13
We will also be interested in deleting some candidates from every ballot in a profile, as follows.
Definition 2.8. Given a profile P and nonempty Y ⊆ X(P), define the restricted profile P| Y to be the profile with X(P| Y ) = Y and V (P| Y ) = V (P) such that for each i ∈ V (P| Y ), P| Y (i) is the restriction of the relation P(i) to Y . As a special case, when |X(P)| > 1 and x ∈ X(P), let P −x = P| X(P)\{x} , i.e., the result of removing candidate x from each ballot.
13 I.e., X(P) = X(P*), V(P) ∩ V(P*) = ∅, and there is a bijection h : V(P) → V(P*) such that for all i ∈ V(P) and x, y ∈ X(P), we have xP i y if and only if xP* h(i) y.
Split Cycle
Three Main Ideas
The "Paradox of Voting" is the phenomenon that cycles may occur in the margin graph of a profile, e.g., a
is majority preferred to b, b is majority preferred to c, and c is majority preferred to a. Recall the formal definition of a cycle.
Definition 3.1. Given a directed graph G (e.g., a margin graph), a path in G is a sequence x 1 , . . . , x n of nodes from G such that n > 1 and for all i ∈ {1, . . . , n − 1}, we have x i → x i+1 , where → is the edge relation of the graph. A cycle in G is defined in the same way but requiring
x 1 = x n . The cycle is simple if for all distinct i, j ∈ {1, . . . , n}, x i = x j only if i, j ∈ {1, n} (i.e., all nodes are distinct except x 1 = x n ).
The voting method we propose in this paper, Split Cycle, provides a way of dealing with the problem of majority cycles. It is based on three main ideas:
1. Group incoherence raises the threshold for one candidate to defeat another, but not infinitely. By "group incoherence" we mean cycles in the majority relation. Consider the margin graph on the left again: Due to the group incoherence, the margin of 1 for a over c is not sufficient for a to defeat c. But if we raise the threshold for defeat to winning by more than 1, and we redraw the graph with an arrow from x to y if and only if Margin_P(x, y) > 1, as on the right, then the group is no longer incoherent at this threshold. Since the group is no longer incoherent with respect to the win by more than 1 threshold, we think it is reasonable to take c to defeat b and b to defeat a, leaving c as the winner. Thus, as suggested, group incoherence does not raise the threshold for b to defeat a infinitely but rather only enough to eliminate any incoherence in which b and a are involved. This shows that our proposal differs from the GETCHA and GOCHA methods (Section 5.2.3), which take all 3-cycles to result in three-way ties regardless of the margins. 14

2. Incoherence can be localized. Consider the following margin graph: It would be a mistake to think that the margin of 1 for a over d is not sufficient for a to defeat d, due to the incoherence involving a, b, and c, which is only eliminated by raising the threshold to win by more than 3.
For there is no incoherence with respect to d and the other candidates, all of whom are majority preferred to d, so they all defeat d. The lesson from this example is that when deciding whether the margin of a over d is sufficient for a to defeat d, we set the threshold in terms of the cycles (if any) involving a and d. This shows that our proposal differs from the Minimax method, which takes the winner in the example above to be the Condorcet loser d (see Definition 5.8).
3. Defeat is direct. On our view, for a candidate x to defeat a candidate y, so that y is not in the set of winners, x must have a positive margin over y. Consider the following margin graph (note that if there is no edge between two candidates, then the margin of each candidate over the other is 0): In this case, we think a should defeat f , but a should not defeat d. Some other voting methods, such as
Beat Path, commit one to a view that we find dubious: that even though a is not majority preferred to d, nonetheless a should kick d out of the set of winners because of the indirect path from a to f to e to d with margins of 4 at each step. By contrast, we adopt a direct pairwise perspective: for a to kick d out of the winning set, a must be majority preferred to d. We find it difficult to try to explain to d's supporters that although a was not majority preferred to d, nonetheless a kicks d out of the winning set because of a's relation to other candidates, f and e, neither of whom defeat d! 15 Of course reasonable definitions of defeat cannot fully satisfy the independence of irrelevant alternatives (IIA) criterion (Arrow 1963), 16 but in our view this seems too flagrant a violation of the idea behind IIA. We endorse the following weakening of IIA, known as weak IIA (Baigent 1987): if two profiles are alike with respect to how everyone votes on x vs. y, then it should not be possible that in one profile, x defeats y, while in the other, y defeats x (though it should be possible that in one, x defeats y, while in the other, neither x defeats y nor y defeats x, due to a cycle).
Let P be a profile whose margin graph is shown above, and let P be a profile just like P with respect to how everyone votes on a vs. d but in which all voters have either a followed by d or d followed by a at the top of their ballots, followed by the linear order bP i cP i eP i f . In P , since d is majority preferred to a by 2 and there are no cycles, surely d should defeat a, kicking a out of the winning set. Then it follows by weak IIA that in P, a does not defeat d. Thus, weak IIA is inconsistent with the indirect notion of defeat according to Beat Path. By contrast, it is satisfied by the direct notion of defeat we will define for Split Cycle. 17
Defining Split Cycle
To define Split Cycle, in line with our first idea above, we first measure the degree of incoherence of a cycle by the smallest margin occurring on an edge in the cycle-for if we raise our threshold above that margin, then we split the cycle, restoring coherence at the higher threshold as in the second graph in Section 3.1.
Definition 3.2. Let P be a profile and ρ a simple cycle in M(P). The splitting number of ρ, Split#_P(ρ), is the smallest margin between consecutive candidates in ρ (e.g., the splitting number of a →^3 b →^1 c →^5 a is 1). We omit the subscript for P when the profile is clear from context. Thus, for example, the splitting number of the cycle in the three-candidate margin graph in Section 3.1 is 1, while the splitting number of the cycle in the four-candidate margin graph in Section 3.1 is 3.
In line with our second idea that incoherence can be localized, when deciding whether a defeats b, we look at all and only the simple cycles containing a and b (not at the other cycles that do not contain a and b); and in line with our third idea about the directness of defeat, for a to defeat b, we require that the direct margin of a over b exceeds the splitting number of every simple cycle containing a and b, which means that that direct margin survives after we raise the threshold above those splitting numbers.
Definition 3.3. Let P be a profile and a, b ∈ X(P). Then a defeats b in P if Margin_P(a, b) > 0 and Margin_P(a, b) > Split#(ρ) for every simple cycle ρ in M(P) containing a and b.
A candidate b is undefeated in P if there is no candidate who defeats b.
Remark 3.4. Just as some sports have a win by 2 rule for defeat, Split Cycle says that for a to defeat b, a must win by more than n over b, where n is the smallest number such that there are no cycles involving a and b in the wins by more than n relation, defined by x W^n_P y if Margin_P(x, y) > n.
Finally, we can define the voting method we call Split Cycle:
Definition 3.5. For any profile P, the set of Split Cycle winners, SC(P), is the set of candidates who are undefeated in P.
As explained in Section 1, one can determine SC(P) in a simple two-step process (see Footnote 20 for a faster algorithm):
1. For each simple cycle, identify the edges with the smallest margin in that cycle.
2. After completing step 1 for all simple cycles, discard the identified edges. All remaining edges count as defeats.
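For small margin graphs, the two-step process can be implemented directly. The sketch below is our own; the margins in the examples are illustrative values consistent with the figures described in Section 3.1 (the text fixes only some of them):

```python
def simple_cycles(margins):
    """All simple cycles of a margin graph {(a, b): m}, each listed once,
    starting (and ending) at its smallest node."""
    nodes = sorted({x for edge in margins for x in edge})
    found = []
    def extend(path):
        for nxt in nodes:
            if (path[-1], nxt) in margins:
                if nxt == path[0] and len(path) > 1:
                    found.append(path + [nxt])
                elif nxt not in path and nxt > path[0]:
                    extend(path + [nxt])
    for start in nodes:
        extend([start])
    return found

def split_cycle(margins):
    """Step 1: in each simple cycle, mark the edges with the smallest margin.
    Step 2: discard all marked edges; the remaining edges are defeats.
    The winners are the undefeated candidates."""
    discarded = set()
    for cyc in simple_cycles(margins):
        edges = list(zip(cyc, cyc[1:]))
        smallest = min(margins[e] for e in edges)
        discarded |= {e for e in edges if margins[e] == smallest}
    defeats = set(margins) - discarded
    nodes = {x for edge in margins for x in edge}
    return {x for x in nodes if all(loser != x for (_, loser) in defeats)}

# Three-candidate example of Section 3.1 (a beats c by 1; the other two
# margins, here 3 and 5, are assumed values exceeding 1):
print(split_cycle({('a', 'c'): 1, ('c', 'b'): 5, ('b', 'a'): 3}))  # {'c'}

# Four-candidate example (assumed margins: the a, b, c cycle all at 3,
# and every cycle member beats d):
print(split_cycle({('a', 'b'): 3, ('b', 'c'): 3, ('c', 'a'): 3,
                   ('a', 'd'): 1, ('b', 'd'): 1, ('c', 'd'): 1}))
```

On the first graph the unique winner is c; on the second the winners are a, b, and c, matching Example 3.7 under these assumed margins.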
Remark 3.6. Since the only information Split Cycle uses about a profile P is its margin graph, we can also think of Split Cycle as assigning to each margin graph M a set SC(M) of winners.
Let us consider some examples of calculating the set of Split Cycle winners.
Example 3.7. The Split Cycle winners for the margin graphs illustrating our three main ideas in Section 3.1 are as follows: in the three-candidate example, the unique Split Cycle winner is c; in the four-candidate example, the Split Cycle winners are a, b, and c; and in the six-candidate example, the Split Cycle winners are all candidates except f . Let us now show that the set of Split Cycle winners is always nonempty.
Lemma 3.9. For a profile P, let the defeat graph of P be the directed graph whose set of nodes is X(P) with an edge from a to b when a defeats b in P. Then for any profile P, the defeat graph of P contains no cycles. Thus, SC(P) ≠ ∅.
Proof. Suppose there is a cycle a_1 D a_2 D . . . D a_n D a_1 in the defeat graph of P, which we may assume is simple (since if there is any cycle, there is a simple one). This yields a simple cycle ρ = a_1 →^{α_1} a_2 →^{α_2} · · · →^{α_{n−1}} a_n →^{α_n} a_1 in M(P) where each margin α_i is greater than the splitting number of any simple cycle containing a_i, a_{i+1 mod n} and hence greater than the splitting number of ρ itself, which is impossible.
Remark 3.10. Like defeat relations in sports tournaments, the Split Cycle defeat relation is not necessarily transitive: it may be, as in Example 3.8, that d defeated c, and c defeated b, while d is not among those who defeated b-nonetheless, b is not among the winners of the tournament, having been defeated by c. Acyclicity, as in Lemma 3.9, is sufficient for there always to be a nonempty set of winners-transitivity is not required.
That the Split Cycle defeat relation is acyclic but not necessarily transitive explains how it can satisfy weak IIA without contradicting Baigent's (1987) generalization of Arrow's impossibility theorem (cf. Campbell and Kelly 2000), which states that under Arrow's axioms but with IIA weakened to weak IIA, there must be a weak dictator (a voter i such that if i prefers x to y, then y does not defeat x socially). Baigent's theorem requires that the social defeat relation is not only acyclic but a strict weak order. 18 Implicit here is that we can view Split Cycle as a collective choice rule, i.e., a function mapping each profile P to a binary relation on X(P) (cf. Sen 2017, Ch. 2*), by taking the binary relation to be the defeat relation. This is the perspective on Split Cycle adopted in Holliday and Pacuit 2021a. However, in this paper we focus on Split Cycle as a voting method (as in Definition 2.6) that maps each profile to a set of winners.
Another useful lemma about Split Cycle is that if a candidate z is not a winner for a profile P, then there is some winner x and a path in the defeat graph of P from x to z.
Lemma 3.11. For any profile P and z ∈ X(P) \ SC(P), there is an x ∈ SC(P) and distinct y_1, . . . , y_n ∈ X(P) with y_1 = x and y_n = z such that y_1 D y_2 D . . . D y_{n−1} D y_n.

Proof. We first find w_1, . . . , w_n ∈ X(P) such that w_n D w_{n−1} D . . . D w_2 D w_1 and then relabel w_1, . . . , w_n as y_n, . . . , y_1, so that y_1 D y_2 D . . . D y_{n−1} D y_n. If z ∈ X(P) \ SC(P), then setting w_1 = z, there is a w_2 such that w_2 D z. If w_2 ∈ SC(P), then we are done with x = w_2; otherwise, there is a w_3 such that w_3 D w_2; and so on. Since X(P) is finite and there are no cycles in the defeat graph of P by Lemma 3.9, we eventually find the desired w_n ∈ SC(P).
Yet another useful lemma about Split Cycle is that to check whether a defeats b, it suffices to check the splitting number of just the simple cycles in which b immediately follows a, rather than all simple cycles containing a and b.
Lemma 3.12. Let P be a profile and a, b ∈ X(P). Then a defeats b in P if and only if Margin_P(a, b) > 0 and Margin_P(a, b) > Split#(ρ) for every simple cycle ρ in M(P) of the form a → b → x_1 → · · · → x_n → a.
Proof. Obviously if Margin_P(a, b) is greater than the splitting number of every simple cycle containing a and b, then it is greater than the splitting number of every simple cycle of the form a → b → x_1 → · · · → x_n → a. Conversely, assume Margin_P(a, b) is greater than the splitting number of every simple cycle of the form a → b → x_1 → · · · → x_n → a.
To show that Margin_P(a, b) is greater than the splitting number of every simple cycle containing a and b, let ρ be a simple cycle containing a and b whose splitting number is maximal among all such cycles. If ρ contains a → b, then we are done. So suppose ρ does not contain a → b. Without loss of generality, we may assume ρ is of the form b → x_1 → · · · → x_n → a → y_1 → · · · → y_m → b. Let ρ′ be a → b → x_1 → · · · → x_n → a. By our initial assumption, Margin_P(a, b) > Split#(ρ′), so the splitting number of ρ′ is not the margin of the a → b edge; hence one of the edges in ρ′ after the a → b edge has Split#(ρ′) as its margin. Since every edge in ρ′ after the a → b edge also occurs in ρ, and the splitting number is defined as a minimum over the edges of a cycle, we have Split#(ρ) ≤ Split#(ρ′), and therefore Margin_P(a, b) > Split#(ρ′) ≥ Split#(ρ). Since ρ has maximal splitting number among simple cycles containing a and b, it follows that Margin_P(a, b) is greater than the splitting number of every simple cycle containing a and b.
Remark 3.13. After submitting this paper, we learned from Markus Schulze that Lemma 3.12 relates Split Cycle to the notion of immunity to binary arguments in Heitzig 2002. In particular, Split Cycle (along with Beat Path and Ranked Pairs) satisfies all of Heitzig's axioms (Im_{M_α}) for 1/2 < α ≤ 1. Although when defining choice rules, Heitzig (2002, Lemma 2 and following) only defines rules based on his notion of strong immunity to binary arguments, 19 which includes Beat Path (in his notation, the rule that selects the common optimal elements of the chain {tr_S(M_α) | 1/2 < α ≤ 1}), not Split Cycle, it is natural in that setting to consider the Split Cycle rule formulated as in Lemma 3.12 as well. Heitzig's axioms (Im_{M_α}) are also closely related to the notion of a stack from Zavist and Tideman 1989, defined in Section 3.3.
It will facilitate reasoning about the defeat relation to introduce one more convenient piece of notation.
Definition 3.14. Let P be a profile and a, b ∈ X(P). The cycle number of a and b in P is

Cycle#_P(a, b) = max({0} ∪ {Split#(ρ) | ρ a simple cycle of the form a → b → x_1 → · · · → x_n → a}).
Then we can equivalently rewrite the definition of the defeat relation as follows.
Lemma 3.15. Let P be a profile and a, b ∈ X(P). Then a defeats b in P if and only if Margin_P(a, b) > Cycle#_P(a, b).
We will often apply Lemmas 3.12 and 3.15 in proofs without comment.
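As a concrete illustration of Definition 3.14 and Lemma 3.15 (a sketch, not code from the paper; the margin matrix is hypothetical), the cycle number can be computed directly by enumerating simple cycles with a DFS:

```python
# Hypothetical 3-candidate margin matrix with the majority cycle
# a -> b -> c -> a (margins 5, 3, 1).
MARGIN = {
    "a": {"a": 0, "b": 5, "c": -1},
    "b": {"a": -5, "b": 0, "c": 3},
    "c": {"a": 1, "b": -3, "c": 0},
}
CANDS = list(MARGIN)

def simple_paths(margin, cands, start, goal):
    """All simple paths start -> ... -> goal along positive-margin edges."""
    def dfs(node, path):
        if node == goal:
            yield path
            return
        for nxt in cands:
            if nxt not in path and margin[node][nxt] > 0:
                yield from dfs(nxt, path + [nxt])
    yield from dfs(start, [start])

def cycle_number(margin, cands, a, b):
    """Cycle#(a, b): the largest Split# of a simple cycle of the form
    a -> b -> x1 -> ... -> xn -> a, where Split# of a cycle is its
    smallest margin; 0 if there is no such cycle (Definition 3.14)."""
    if margin[a][b] <= 0:
        return 0  # no cycle in M(P) starts with the edge a -> b
    best = 0
    for path in simple_paths(margin, cands, b, a):
        margins = [margin[a][b]] + [margin[x][y] for x, y in zip(path, path[1:])]
        best = max(best, min(margins))
    return best

def defeats(margin, cands, a, b):
    """Lemma 3.15: a defeats b iff Margin(a, b) > Cycle#(a, b)."""
    return margin[a][b] > cycle_number(margin, cands, a, b)
```

In this cycle the splitting number is 1, so a defeats b (5 > 1) while c does not defeat a (1 > 1 fails): incoherence raises the threshold for defeat.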
Refinements of Split Cycle
Holliday and Pacuit 2021a argues that the Split Cycle defeat relation from Definition 3.3 provides the right notion of one candidate defeating another in a democratic election using ranked ballots. Let us say that a voting method F is a pre-tiebreaking voting method if one regards F(P) as the set of undefeated candidates and any further narrowing of F(P) as "tiebreaking." The political significance of this distinction is that if F(P) contains a single winner, then that winner may be viewed as having a stronger mandate from voters, as a result of a more unambiguous election, than a candidate who is among several undefeated candidates in F(P) but wins by some further tiebreaking process. In this paper, we propose Split Cycle as a pre-tiebreaking voting method.
Since there can be multiple undefeated candidates, the question arises of how to pick an ultimate winner from among the undefeated. In addition to the usual non-anonymous, non-neutral, or non-deterministic tiebreaking procedures (e.g., let the Chair decide among the undefeated, or use seniority to decide among the undefeated, or randomly choose an undefeated candidate), one can apply an anonymous, neutral, and deterministic tiebreaker before resorting to tiebreakers that violate one of these properties. Indeed, one can view voting methods that refine Split Cycle as deterministic tiebreakers. On this approach, an election result consists in an announcement of undefeated candidates according to Split Cycle and, in the event of multiple undefeated candidates, the announcement of a tiebreak winner. This is precisely how Split Cycle is used on the election website stablevoting.org, where the tiebreaking procedure is the recently proposed Stable Voting method (Holliday and Pacuit Forthcoming).

19 Compare Heitzig's notion of strong immunity to Schwartz's (1986) characterization of GOCHA in Lemma 5.14 below.
Other refinements of Split Cycle are the well-known Beat Path (Schulze 2011, 2022) and Ranked Pairs (Tideman 1987; Zavist and Tideman 1989) voting methods, as well as variants of Ranked Pairs such as the River method (Heitzig 2004b). These methods may pick different candidates from among the undefeated candidates according to Split Cycle, as shown in Example 4.6 below in the case of Ranked Pairs and Beat Path. All of these methods, including Stable Voting, satisfy the following property, which implies that non-anonymous, non-neutral, or non-deterministic tiebreaking is only needed in non-uniquely weighted profiles.
Definition 3.16. A voting method F is quasi-resolute if for every uniquely-weighted P ∈ dom(F ), |F (P)| = 1.
That Split Cycle is not quasi-resolute is shown by Example 4.6 in the next section.
According to Beat Path, a wins in P if for every other candidate b, the strongest path from a to b in M(P) is at least as strong as the strongest path from b to a in M(P), where the strength of a path is the smallest margin between consecutive candidates in the path. We can relate Split Cycle to Beat Path using the following lemma. 20
Lemma 3.17. Let P be a profile and a, b ∈ X(P). Then a defeats b in P if and only if Margin_P(a, b) > 0 and Margin_P(a, b) > the strength of the strongest path from b to a.

Proof. By Lemma 3.12, a defeats b if and only if Margin_P(a, b) is greater than 0 and the splitting number of every simple cycle of the form a → b → x_1 → · · · → x_n → a. But this is equivalent to Margin_P(a, b) being greater than 0 and the strength of every path of the form b → x_1 → · · · → x_n → a, which is equivalent to Margin_P(a, b) being greater than 0 and the strength of the strongest path from b to a.
Lemma 3.18. For any profile P, BP (P) ⊆ SC(P), where BP is the Beat Path method.
Proof. Suppose a ∉ SC(P), so there is a b ∈ X(P) such that b defeats a according to Split Cycle. Hence Margin_P(b, a) is greater than the strength of the strongest path from a to b by Lemma 3.17. Since b → a is a path from b to a, it follows that the strength of the strongest path from b to a is greater than the strength of the strongest path from a to b. Hence a ∉ BP(P).
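Lemma 3.18 can also be checked numerically. The sketch below (hypothetical helper names, not code from the paper; random skew-symmetric margin matrices stand in for margin graphs of actual linear profiles, which is justified by McGarvey/Debord-style realizability results) computes Beat Path and Split Cycle winners and confirms BP(P) ⊆ SC(P) on random samples:

```python
import random
from itertools import product

def strongest_path(margin, cands):
    """Strongest path strengths over positive-margin edges (strength =
    smallest margin on the path; 0 if there is no path)."""
    s = {x: {y: margin[x][y] if x != y and margin[x][y] > 0 else 0
             for y in cands} for x in cands}
    for k, i, j in product(cands, repeat=3):
        if i != j:
            s[i][j] = max(s[i][j], min(s[i][k], s[k][j]))
    return s

def sc_winners(margin, cands):
    """Split Cycle winners (undefeated candidates), via Lemma 3.17."""
    s = strongest_path(margin, cands)
    return {a for a in cands
            if not any(margin[b][a] > 0 and margin[b][a] > s[a][b]
                       for b in cands)}

def bp_winners(margin, cands):
    """Beat Path winners: a wins iff its strongest path to each b is at
    least as strong as b's strongest path back to a."""
    s = strongest_path(margin, cands)
    return {a for a in cands
            if all(s[a][b] >= s[b][a] for b in cands if b != a)}

def random_margin(cands, rng):
    """Random skew-symmetric odd margins (hypothetical stand-in for a
    linear profile with an odd number of voters)."""
    m = {x: {y: 0 for y in cands} for x in cands}
    for i, x in enumerate(cands):
        for y in cands[i + 1:]:
            v = rng.choice([1, 3, 5]) * rng.choice([-1, 1])
            m[x][y], m[y][x] = v, -v
    return m

rng = random.Random(0)
CANDS = ["a", "b", "c", "d"]
for _ in range(500):
    m = random_margin(CANDS, rng)
    assert bp_winners(m, CANDS) <= sc_winners(m, CANDS)  # Lemma 3.18
    assert sc_winners(m, CANDS)                          # Lemma 3.9
```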
We can prove an analogous lemma for Ranked Pairs. To compute the Ranked Pairs winners, given a linear order T of the edges of M(P), order the edges in M(P) from largest to smallest margin, breaking ties according to T. Considering each edge in turn, "lock in" the edge if adding the edge to the list of already locked-in edges does not create a cycle of locked-in edges. Then a ∈ RP(P) if there is some T such that after running the above algorithm with T, there is no locked-in edge pointing to a. We also make use of an alternative characterization of Ranked Pairs due to Zavist and Tideman (1989). Given a profile P, a linear order L on X(P) is a stack for P if for any a, b ∈ X(P), if aLb, then there are distinct x_1, . . . , x_n ∈ X(P) with x_1 = a and x_n = b such that x_i L x_{i+1} and Margin_P(x_i, x_{i+1}) ≥ Margin_P(b, a) for all i ∈ {1, . . . , n−1}.
Lemma 3.19 (Zavist and Tideman 1989). For any profile P and a ∈ X(P), we have a ∈ RP (P) if and only if a is the maximum element in some stack for P. 21
Lemma 3.20. For any profile P, RP (P) ⊆ SC(P), where RP is the Ranked Pairs method.
Proof. Suppose a ∉ SC(P), so there is some b ∈ X(P) such that Margin_P(b, a) > 0 and Margin_P(b, a) > Split#(ρ) for every simple cycle ρ containing b and a. Now suppose for contradiction that a ∈ RP(P). Then by Lemma 3.19, there are distinct y_1, . . . , y_m ∈ X(P) such that a → y_1 → · · · → y_m → b, where each margin α_i along this path satisfies α_i ≥ Margin_P(b, a). But then ρ := b → a → y_1 → · · · → y_m → b is a simple cycle containing b and a whose splitting number equals Margin_P(b, a), contradicting Margin_P(b, a) > Split#(ρ). Hence a ∉ RP(P).
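The lock-in algorithm for Ranked Pairs described above, and the inclusion RP(P) ⊆ SC(P) of Lemma 3.20, can likewise be sketched and spot-checked on random uniquely-weighted margin graphs (all names hypothetical; a sketch, not the paper's code):

```python
import random
from itertools import product

def strongest_path(margin, cands):
    s = {x: {y: margin[x][y] if x != y and margin[x][y] > 0 else 0
             for y in cands} for x in cands}
    for k, i, j in product(cands, repeat=3):
        if i != j:
            s[i][j] = max(s[i][j], min(s[i][k], s[k][j]))
    return s

def sc_winners(margin, cands):
    s = strongest_path(margin, cands)
    return {a for a in cands
            if not any(margin[b][a] > 0 and margin[b][a] > s[a][b]
                       for b in cands)}

def rp_winner(margin, cands, tiebreak):
    """Ranked Pairs with a fixed linear order `tiebreak` on edges: lock
    in edges from largest to smallest margin, skipping any edge that
    would create a cycle among locked-in edges; the winner is the
    candidate with no locked-in edge pointing to it."""
    edges = [(x, y) for x in cands for y in cands
             if x != y and margin[x][y] > 0]
    edges.sort(key=lambda e: (-margin[e[0]][e[1]], tiebreak.index(e)))
    locked = set()
    def reaches(x, goal, seen):
        return any(y == goal or (y not in seen and reaches(y, goal, seen | {y}))
                   for (u, y) in locked if u == x)
    for x, y in edges:
        if not reaches(y, x, {y}):
            locked.add((x, y))
    return next(a for a in cands if not any(v == a for (_, v) in locked))

def random_unique_margin(cands, rng):
    """Random uniquely-weighted margins (distinct even values, random
    orientation), Debord-realizable by actual profiles."""
    pairs = [(x, y) for i, x in enumerate(cands) for y in cands[i + 1:]]
    vals = rng.sample(range(2, 2 * len(pairs) + 2, 2), len(pairs))
    m = {x: {y: 0 for y in cands} for x in cands}
    for (x, y), v in zip(pairs, vals):
        if rng.random() < 0.5:
            x, y = y, x
        m[x][y], m[y][x] = v, -v
    return m

CANDS = ["a", "b", "c", "d"]
T = [(x, y) for x in CANDS for y in CANDS if x != y]  # fixed tiebreak order
rng = random.Random(1)
for _ in range(300):
    m = random_unique_margin(CANDS, rng)
    assert rp_winner(m, CANDS, T) in sc_winners(m, CANDS)  # Lemma 3.20
```

Since the margins are uniquely weighted, the tiebreak order T never actually decides anything here, matching the quasi-resoluteness of Ranked Pairs on such profiles.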
The Stable Voting method is a refinement of Split Cycle by definition: to find the Stable Voting winners in P, order the pairs (a, b) of candidates such that a is undefeated in P from largest to smallest value of Margin_P(a, b), and declare as Stable Voting winners the candidate(s) a from the earliest pair(s) (a, b) such that a is a Stable Voting winner in P −b. Thus, Stable Voting is defined recursively, where in the case of a profile with a single candidate, that candidate is the Stable Voting winner. Remarkably, the Simple Stable Voting procedure that is defined just like Stable Voting but without the requirement that a is undefeated appears to always select from the undefeated candidates anyway in profiles that are uniquely weighted. 22
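The recursion just described can be sketched as follows (hypothetical names and margin-matrix representation; a simplified illustration rather than the stablevoting.org implementation):

```python
from itertools import product

# Hypothetical majority cycle a -> b -> c -> a (margins 5, 3, 1).
MARGIN = {
    "a": {"a": 0, "b": 5, "c": -1},
    "b": {"a": -5, "b": 0, "c": 3},
    "c": {"a": 1, "b": -3, "c": 0},
}
CANDS = list(MARGIN)

def strongest_path(margin, cands):
    s = {x: {y: margin[x][y] if x != y and margin[x][y] > 0 else 0
             for y in cands} for x in cands}
    for k, i, j in product(cands, repeat=3):
        if i != j:
            s[i][j] = max(s[i][j], min(s[i][k], s[k][j]))
    return s

def sc_winners(margin, cands):
    """Split Cycle winners, via the strongest-path characterization."""
    s = strongest_path(margin, cands)
    return [a for a in cands
            if not any(margin[b][a] > 0 and margin[b][a] > s[a][b]
                       for b in cands)]

def stable_voting(margin, cands):
    """Scan pairs (a, b) with a undefeated, from largest to smallest
    Margin(a, b); the winners are the a's from the earliest pair(s)
    such that a wins recursively with b removed."""
    if len(cands) == 1:
        return list(cands)
    pairs = sorted(((a, b) for a in sc_winners(margin, cands)
                    for b in cands if b != a),
                   key=lambda ab: -margin[ab[0]][ab[1]])
    winners, best = [], None
    for a, b in pairs:
        if winners and margin[a][b] < best:
            break
        if a not in winners and a in stable_voting(
                margin, [c for c in cands if c != b]):
            winners.append(a)
            best = margin[a][b]
    return winners
```

On this majority cycle the unique undefeated candidate a is also the Stable Voting winner.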
Conjecture 3.21. For any uniquely-weighted profile P, SSV (P) ⊆ SC(P) and SSV (P) = SV (P), where SSV and SV are the Simple Stable Voting and Stable Voting methods, respectively.
The cost of deterministic tiebreaking is the violation of variable-candidate and variable-voter axioms from Sections 1.1 and 1.2 that are satisfied by Split Cycle. Beat Path and Ranked Pairs violate even weaker axioms of partial immunity to spoilers and stealers, defined in Section 4, while Beat Path, Ranked Pairs, and Stable Voting all violate positive involvement. Of the known quasi-resolute refinements of Split Cycle, we prefer Stable Voting on the grounds that it satisfies an axiom of stability for winners with tiebreaking, also defined in Section 4, which implies partial immunity to spoilers and stealers.
Spoilers, Stealers, and Stability
In Section 1.1, we discussed the problem of spoiler effects in elections with more than two candidates. As noted, the independence of clones criterion (Tideman 1987) is often cited as an anti-spoiler axiom. In Section 5.3.2, we show that Split Cycle satisfies this axiom. However, independence of clones only rules out a special type of spoiler effect, namely, vote splitting by the introduction of a similar candidate. But sometimes a candidate b can spoil the election for a dissimilar candidate a, as in Example 1.2, where the Republican spoiled the election for the Democrat; and in real cases of vote splitting, such as Example 1.1, the "similar candidate" will almost never qualify as a clone in the formal sense (see Definition 5.22).
To capture perverse effects of the introduction of a candidate not covered by independence of clones, in Section 1.1 we suggested the concepts of spoilers and stealers, defined formally as follows. 21 Using this lemma, we can also prove a strengthened version of Lemma 3.20: for any profile P and a, b ∈ X(P), if b defeats a according to Split Cycle (Definition 3.3), then bLa for any stack L for P.
22 For non-uniquely weighted profiles, there are extremely rare examples in which SSV(P) ⊈ SC(P). However, if one defines Simple Stable Voting to use parallel-universe tiebreaking of tied margins in the style of Ranked Pairs (see ), then Conjecture 3.21 implies that SSV_PUT(P) ⊆ SC(P) and SSV_PUT(P) = SV_PUT(P) for all profiles P.
Definition 4.1. Let F be a voting method, P ∈ dom(F ), and a, b ∈ X(P). Then we say that:
1. b spoils the election for a in P if a ∈ F(P −b), Margin_P(a, b) > 0, a ∉ F(P), and b ∉ F(P);

2. b steals the election from a in P if a ∈ F(P −b), Margin_P(a, b) > 0, a ∉ F(P), and b ∈ F(P).
Recall the three axioms from Section 1.1, now defined formally as well.
Definition 4.2. Let F be a voting method.
1. F satisfies immunity to spoilers if for P ∈ dom(F ) and a, b ∈ X(P), b does not spoil the election for a.
2. F satisfies immunity to stealers if for P ∈ dom(F ) and a, b ∈ X(P), b does not steal the election from a.
3. F satisfies stability for winners if for P ∈ dom(F) and a, b ∈ X(P), if a ∈ F(P −b) and Margin_P(a, b) > 0, then a ∈ F(P).
The following is immediate from the definitions: a violation of stability for winners is a profile with a ∈ F(P −b), Margin_P(a, b) > 0, and a ∉ F(P), in which b is either a spoiler (if b ∉ F(P)) or a stealer (if b ∈ F(P)). Thus, stability for winners is equivalent to the conjunction of immunity to spoilers and immunity to stealers.

It is useful to have terminology for a candidate a who would win without another candidate b in the election, such that a majority of voters prefer a to b.
Definition 4.4. Given a voting method F, profile P, and a ∈ X(P), we say that a is Condorcetian for F in P if there is some b ∈ X(P) such that a ∈ F(P −b) and Margin_P(a, b) > 0. 23
That a is weakly Condorcetian for F in P is defined in the same way but with Margin_P(a, b) ≥ 0.
Recall that a is a Condorcet winner if a wins against every other candidate head-to-head. In a similar spirit, a candidate a who is Condorcetian wins against the field X(P −b) according to F and wins against b head-to-head. In fact, the notions of a Condorcet winner and a Condorcetian candidate are related as follows. Recall that F is Condorcet consistent if F(P) = {a} whenever a is the Condorcet winner in P ∈ dom(F).
Lemma 4.5. Let F be a voting method. Then (i) if F is Condorcet consistent, P ∈ dom(F ) with |X(P)| > 1, and c is a Condorcet winner in P, then c is the unique Condorcetian candidate for F in P; and (ii) if for any P ∈ dom(F ) with a unique Condorcetian candidate c, F (P) = {c}, then F is Condorcet consistent.
Proof. For part (i), let P and c be as in the statement. Then for any a, b ∈ X(P), if Margin_P(a, b) > 0, then b ≠ c, so c ∈ X(P −b) and hence c is the Condorcet winner in P −b, so F(P −b) = {c} by the assumption that F is Condorcet consistent. Thus, any Condorcetian candidate a satisfies a ∈ F(P −b) = {c} for some such b, so a = c; and c itself is Condorcetian, since for any b ≠ c we have Margin_P(c, b) > 0 and c ∈ F(P −b). It follows that c is the unique Condorcetian candidate.
For part (ii), suppose c is a Condorcet winner in P. If |X(P)| = 1, then F (P) = {c}. If |X(P)| > 1, then by part (i), c is the unique Condorcetian candidate in P and hence by the assumption on F , F (P) = {c}.
Thus, F is Condorcet consistent.
Note that in a given profile, there may be no Condorcetian candidates. For example, let F be a voting method that applies majority rule in two-candidate profiles. Then in an election with three candidates in a majority cycle, no candidate is Condorcetian for F ; for if a is majority preferred to b, then removing b from the election results in a two-candidate profile in which a loses. On the other hand, there may be more than one Condorcetian candidate, as in the following example.
23 In Holliday and Pacuit Forthcoming, such an a is called stable for F in P.
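The observation that no candidate is Condorcetian in a three-candidate majority cycle, with F given by majority rule on two-candidate profiles, can be checked directly; the following sketch uses a hypothetical margin matrix and helper names:

```python
# Hypothetical 3-candidate majority cycle a -> b -> c -> a.
MARGIN = {
    "a": {"a": 0, "b": 5, "c": -1},
    "b": {"a": -5, "b": 0, "c": 3},
    "c": {"a": 1, "b": -3, "c": 0},
}
CANDS = list(MARGIN)

def majority_2(margin, cands):
    """Majority rule on a two-candidate profile."""
    x, y = cands
    if margin[x][y] > 0:
        return [x]
    if margin[x][y] < 0:
        return [y]
    return [x, y]

def condorcetian(F, margin, cands):
    """Definition 4.4: a is Condorcetian for F if for some b, a wins
    without b in the election and a majority prefers a to b."""
    out = []
    for a in cands:
        for b in cands:
            if b != a:
                rest = [c for c in cands if c != b]
                if a in F(margin, rest) and margin[a][b] > 0:
                    out.append(a)
                    break
    return out
```

Each candidate's only majority victory is over the candidate whose removal makes them lose, so the Condorcetian set is empty, as the text claims.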
Example 4.6. Assume F is a voting method such that in any three-candidate profile with a majority cycle, if there is a candidate who has both the uniquely largest majority victory and the uniquely smallest majority loss, that candidate is among the winners. Then in a four-candidate profile with the following margin graph (figure omitted), both candidates a and c are Condorcetian for F in P (for a, remove c, and for c, remove the candidate indicated by the margin graph).

While what we called a pre-tiebreaking method in Section 3.3 can, and we think should, satisfy stability for winners, Example 4.6 suggests that a voting method that incorporates tiebreaking cannot. Indeed, we prove the following impossibility theorem in other work.
Theorem 4.7 (Holliday et al. Forthcoming). There is no voting method (whose domain contains all profiles with up to four candidates) satisfying anonymity, neutrality, stability for winners, and quasi-resoluteness.
We will prove a related impossibility theorem in Section 5.4.2.
While tiebreaking is therefore inconsistent with selecting all the Condorcetian candidates, it is compatible with selecting from among the Condorcetian candidates. These observations lead us to the following modified axioms applicable to tiebreaking procedures.
Definition 4.8. Let F be a voting method.
1. F satisfies partial immunity to spoilers if for all P ∈ dom(F ) and a, b ∈ X(P), if a is the unique Condorcetian candidate in P, then b does not spoil the election for a.
2. F satisfies partial immunity to stealers if for all P ∈ dom(F ) and a, b ∈ X(P), if a is the unique Condorcetian candidate in P, then b does not steal the election from a.
3. F satisfies partial stability for winners if for all P ∈ dom(F ) and a ∈ X(P), if a is the unique Condorcetian candidate in P, then a ∈ F (P).
4. F satisfies stability for winners with tiebreaking if for all P ∈ dom(F), if some candidate is Condorcetian for F in P, then all candidates in F(P) are Condorcetian for F in P.
The logical relations between these axioms are recorded in the following.
Fact 4.9. (i) Partial stability for winners is equivalent to the conjunction of partial immunity to spoilers and partial immunity to stealers. (ii) Stability for winners with tiebreaking implies partial stability for winners. (iii) Stability for winners with tiebreaking and stability for winners are incomparable in strength.
Parts (i) and (ii) are immediate from the definitions, while part (iii) follows from facts proved in Section 4.3: Split Cycle satisfies stability for winners but not stability for winners with tiebreaking, whereas Stable Voting satisfies stability for winners with tiebreaking but not stability for winners. But again, stability for winners and stability for winners with tiebreaking are intended as axioms on different kinds of functions:
we regard stability for winners as an appropriate axiom for pre-tiebreaking voting methods, whereas stability for winners with tiebreaking is an appropriate axiom for voting methods that are conceived as incorporating tiebreaking: if there are Condorcetian candidates, then the ultimate tiebreak winner must be one of them.
Spoilers
It is easy to check that GETCHA, GOCHA, Uncovered Set, and Minimax satisfy immunity to spoilers.
However, Beat Path and Ranked Pairs do not.
Proposition 4.10. Beat Path does not satisfy even partial immunity to spoilers. In fact, there are uniquely-weighted linear profiles P and distinct a, b, c ∈ X(P) such that with respect to Beat Path: b spoils the election for a in P; a is the unique Condorcetian candidate; the largest margin in P is Margin_P(a, b); BP(P) = {c}; c is the Condorcet loser in P −b; and Margin_P(a, c) > 0.
Proof. Let P be a linear profile whose margin graph is displayed on the right below, so the margin graph of P −b is displayed on the left below (margin graphs omitted). First, it is easy to see that BP(P −b) = {a}. Then since Margin_P(a, b) > 0, a is Condorcetian for Beat Path in P. One can also check that BP(P) = {c}. Therefore, b is a spoiler for a in P. Moreover, no other candidate is Condorcetian for Beat Path in P, since b ∉ BP(P −d), c ∉ BP(P −b), d ∉ BP(P −c), d ∉ BP(P −e), e ∉ BP(P −a), e ∉ BP(P −b), and e ∉ BP(P −c). Finally, observe that the largest margin in P is Margin_P(a, b), that in P −b, c is the Condorcet loser, and that Margin_P(a, c) > 0.

Proposition 4.12. Ranked Pairs does not satisfy even partial immunity to spoilers. In fact, there are uniquely-weighted linear profiles P and distinct a, b, c ∈ X(P) such that with respect to Ranked Pairs: b spoils the election for a in P; a is the unique Condorcetian candidate in P; the largest margin in P is Margin_P(a, b); RP(P) = {c}; and Margin_P(a, c) > 0.
Proof. Let P be a profile whose margin graph is displayed on the right below, so the margin graph of P −b is displayed on the left below (margin graphs omitted). One can check that a ∈ RP(P −b), that Margin_P(a, b) > 0, and that RP(P) = {c}, so a ∉ RP(P) and b ∉ RP(P). Therefore, b is a spoiler for a in P. Moreover, no other candidate is Condorcetian for Ranked Pairs in P, since b ∉ RP(P −e), c ∉ RP(P −b), c ∉ RP(P −d), d ∉ RP(P −a), d ∉ RP(P −b), e ∉ RP(P −c), and e ∉ RP(P −d). Finally, observe that the largest margin in P is Margin_P(a, b) and that Margin_P(a, c) > 0.
Stealers
As observed in § 1.1, when a would win without b in the election, and more voters prefer a to b than prefer b to a, yet a loses when b joins, it hardly improves the situation to find that b wins; in this case, although b does not spoil the election for a (since spoilers are by definition losers), we say that b steals the election from a. Recall the formal definition of immunity to stealers from Definition 4.2.
Proposition 4.14. Minimax and Beat Path do not satisfy even partial immunity to stealers. In fact, there are profiles P and distinct a, b ∈ X(P) such that with respect to Minimax: b steals the election from a; a is the unique Condorcetian candidate; and b is the Condorcet loser in P. The same holds for Beat Path but without the condition that b is a Condorcet loser.

Proof. First, for Minimax, let P be a profile whose margin graph is displayed in the middle below, so the margin graph of P −b is displayed on the left (margin graphs omitted). Observe that Minimax(P −b) = {a} and Margin_P(a, b) > 0. Yet Minimax(P) = {b}, so Minimax violates immunity to stealers. Moreover, a is the only candidate who is Condorcetian for Minimax in P, and the stealer b is the Condorcet loser in P.

For Beat Path, let P be a profile whose margin graph is displayed on the right above, so again the margin graph of P −b is displayed on the left. Observe that BP(P −b) = {a}, that Margin_P(a, b) > 0, and yet BP(P) = {b}, so Beat Path violates immunity to stealers. Moreover, a is the only candidate who is Condorcetian for Beat Path in P.
Immunity to stealers reveals an interesting axiomatic difference between Beat Path and Ranked Pairs.
Proposition 4.15. Ranked Pairs satisfies immunity to stealers on uniquely-weighted profiles, though not on all profiles.

Proof. Toward a contradiction, suppose there is a uniquely-weighted profile P and a, b ∈ X(P) with a ∈ RP(P −b), Margin_P(a, b) > 0, and b ∈ RP(P). Since a ∈ RP(P −b), by Lemma 3.19, there is a stack a z_1 . . . z_n for P −b. Now let M [text missing in this version]; it follows that b ∈ RP(P′) (Tideman 1987, p. 204) and hence RP(P′) = {b} since P′ is uniquely weighted. However, it is easy to see that a b z_1 . . . z_n is a stack for P′, so a ∈ RP(P′) by Lemma 3.19, a contradiction.

To see that Ranked Pairs does not satisfy immunity to stealers on all profiles, let P be a profile whose margin graph is displayed on the right below, so the margin graph of P −b is displayed on the left (margin graphs omitted). The only tie in margins is between (a, e) and (f, a). In P −b, if we break the tie between (a, e) and (f, a) in favor of (a, e), then a is the Ranked Pairs winner, whereas if we break it in favor of (f, a), then e is the winner. Hence RP(P −b) = {a, e}. In P, if we break the tie in favor of (a, e), then c is the Ranked Pairs winner, whereas if we break it in favor of (f, a), then b is the winner. Hence RP(P) = {b, c}. Then since Margin_P(a, b) > 0, Ranked Pairs violates immunity to stealers.
Stability
In virtue of violating (partial) immunity to spoilers or (partial) immunity to stealers, Beat Path, Ranked Pairs, and Minimax all violate (partial) stability for winners. By contrast, we will now prove that Split Cycle satisfies stability for winners. In fact, it satisfies the following slightly stronger property.
Definition 4.16. A voting method F satisfies strong stability for winners if for all P ∈ dom(F ), all candidates who are weakly Condorcetian for F in P belong to F (P).
Proposition 4.17. Split Cycle satisfies strong stability for winners.
Proof. Suppose a is weakly Condorcetian for Split Cycle in P, i.e., a ∈ SC(P −b) and Margin_P(a, b) ≥ 0 for some b ∈ X(P). If a ∈ SC(P −b), then for all c ∈ X(P −b), Margin_{P −b}(c, a) ≤ Cycle#_{P −b}(c, a). As Margin_{P −b}(c, a) = Margin_P(c, a) and Cycle#_{P −b}(c, a) ≤ Cycle#_P(c, a), we have that (i) for all c ∈ X(P −b), Margin_P(c, a) ≤ Cycle#_P(c, a). By the assumption that Margin_P(a, b) ≥ 0, we have Margin_P(b, a) ≤ 0 and hence (ii) Margin_P(b, a) ≤ Cycle#_P(b, a). By (i) and (ii), a ∈ SC(P).
Informally, the explanation of why Split Cycle satisfies strong stability for winners is the following, using two of our main ideas from Section 3: since defeat is direct, if M argin P (a, b) ≥ 0, then b does not defeat a; and since incoherence raises the threshold for defeat, and adding a candidate can increase incoherence 24 but cannot decrease incoherence in the initial set of candidates, if a candidate x did not defeat a before the addition of b, then it does not defeat a after.
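Proposition 4.17 can also be spot-checked numerically. The sketch below (hypothetical names; random skew-symmetric margin matrices in place of actual profiles) verifies on random samples that every weakly Condorcetian candidate is a Split Cycle winner:

```python
import random
from itertools import product

def strongest_path(margin, cands):
    s = {x: {y: margin[x][y] if x != y and margin[x][y] > 0 else 0
             for y in cands} for x in cands}
    for k, i, j in product(cands, repeat=3):
        if i != j:
            s[i][j] = max(s[i][j], min(s[i][k], s[k][j]))
    return s

def sc_winners(margin, cands):
    s = strongest_path(margin, cands)
    return {a for a in cands
            if not any(margin[b][a] > 0 and margin[b][a] > s[a][b]
                       for b in cands)}

def random_margin(cands, rng):
    m = {x: {y: 0 for y in cands} for x in cands}
    for i, x in enumerate(cands):
        for y in cands[i + 1:]:
            v = rng.choice([1, 3, 5]) * rng.choice([-1, 1])
            m[x][y], m[y][x] = v, -v
    return m

def violates_strong_stability(margin, cands):
    """True if some a with a in SC(P_-b) and Margin(a, b) >= 0 is
    nevertheless not a Split Cycle winner in the full profile."""
    winners = sc_winners(margin, cands)
    for b in cands:
        rest = [c for c in cands if c != b]
        sub = sc_winners(margin, rest)
        for a in rest:
            if a in sub and margin[a][b] >= 0 and a not in winners:
                return True
    return False

rng = random.Random(2)
CANDS = ["a", "b", "c", "d"]
violations = sum(violates_strong_stability(random_margin(CANDS, rng), CANDS)
                 for _ in range(300))
```

By Proposition 4.17, the violation count must be zero on every sample.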
As for other voting methods, GOCHA satisfies stability for winners but not strong stability for winners (see the proof of Proposition 5.16), while GETCHA satisfies both. 25 Uncovered Set satisfies stability for winners, as well as strong stability for winners under some definitions (see Appendix C.7).
Let us now turn to stability for winners with tiebreaking. In virtue of violating immunity to spoilers or immunity to stealers in profiles with a unique Condorcetian candidate, Beat Path, Ranked Pairs, and Minimax all violate stability for winners with tiebreaking (recall Fact 4.9). In fact, so does Split Cycle, as there are profiles with multiple undefeated candidates, including non-Condorcetian undefeated candidates, and Split Cycle does not perform any further tiebreaking among undefeated candidates.
Proposition 4.18. Split Cycle does not satisfy stability for winners with tiebreaking.
Proof. Let P be a profile whose margin graph is shown below (margin graph omitted). It is easy to check that SC(P) = {a, b}. Moreover, candidate a is Condorcetian for SC in P, as witnessed by the fact that a ∈ SC(P −b) and Margin_P(a, b) > 0. However, b is not Condorcetian, as b does not win in P −c or P −d. Thus, stability for winners with tiebreaking requires that b not win in P.

24 This is why Split Cycle may allow a candidate x who is not among the winners in P −b to become a winner in P.

25 To see that GETCHA satisfies strong stability for winners, using the definition of GETCHA in Definition 5.10, suppose a ∈ GETCHA(P −b) and Margin_P(a, b) ≥ 0. Further suppose for contradiction that a ∉ GETCHA(P), which with Margin_P(a, b) ≥ 0 implies b ∉ GETCHA(P), so GETCHA(P) ⊆ X(P −b). It follows that GETCHA(P) is →_{P −b}-dominant, so GETCHA(P −b) ⊆ GETCHA(P), which contradicts the facts that a ∈ GETCHA(P −b) and a ∉ GETCHA(P).

We think Split Cycle delivers the correct verdict that neither a nor b is defeated in the profile used for Proposition 4.18; for if a candidate x is undefeated without a candidate y in the election, and more voters prefer x to y than y to x, then x should still be undefeated (see Holliday and Pacuit 2021a for extensive discussion). Yet we also recognize the practical imperative to break ties, as well as the appeal in some cases of breaking ties deterministically as far as possible. Thus, in response to the profile used for Proposition 4.18, one may reasonably maintain that although neither a nor b is defeated, we can break the tie in favor of a on the grounds that a would have won without b in the election, a majority of voters prefer a to b, and no other candidate is Condorcetian in this way. This idea leads to the axiom of stability for winners with tiebreaking (Definition 4.8.4), which requires that only Condorcetian candidates be selected, if such candidates exist. For example, the Stable Voting method breaks ties among Condorcetian candidates by selecting the Condorcetian candidate(s) a with the largest margin over a candidate who shows a to be Condorcetian. Thus, Stable Voting satisfies stability for winners with tiebreaking, as well as the following strengthening of the axiom.
Definition 4.20. A voting method F satisfies strong stability for winners with tiebreaking if not only does F satisfy stability for winners with tiebreaking but also for all P ∈ dom(F ), if some candidate is weakly Condorcetian for F in P but no candidate is Condorcetian for F in P, then all candidates who win in P are weakly Condorcetian for F in P.
Proposition 4.21. Stable Voting satisfies strong stability for winners with tiebreaking.
For a proof of stability for winners with tiebreaking that easily adapts to the strong version, see Holliday and Pacuit Forthcoming.
Ultimately the question of whether a voting method F should satisfy stability for winners or stability for winners with tiebreaking depends on the distinction from Section 3.3 of thinking of F as a pre-tiebreaking method for selecting undefeated candidates or as incorporating tiebreaking among undefeated candidates.
Finally, to use the axioms discussed in this section to choose between voting methods, one can go beyond showing that a voting method F satisfies an axiom while another method F′ violates it. One can attempt to study the probability that F′ will violate the axiom. In fact, we think the better question is: what is the probability that F′ will violate the axiom, conditional on F and F′ picking different sets of winners in the election? After all, when choosing between F and F′ based on how well they pick winners (as opposed to, e.g., their computational cost), all that matters is the elections in which the voting methods disagree.
By analogy, when choosing between two insurance policies of the same cost, what matters is the different coverage the policies provide in the event of an accident; the fact that an accident is improbable is not a reason to dismiss as unimportant the differences in coverage.
Let us illustrate the methodology of estimating the probability that a voting method will violate an axiom conditional on its picking different winners than another voting method that satisfies the axiom, deferring a systematic treatment to future work (as in Holliday and Pacuit 2021b for the axiom of positive involvement).
First, we imagine Beat Path being proposed as a voting method for selecting the undefeated candidates in an election, a problem for which we think the axiom of stability for winners should be satisfied. Table 1 shows that in linear profiles with four candidates (the least number for which Beat Path can disagree with Split Cycle), a non-trivial percentage of the profiles on which the two methods disagree involve violations by Beat Path of the strong stability for winners axiom satisfied by Split Cycle.

Table 1: Of four-candidate linear profiles in which BP(P) ≠ SC(P), the percentage in which Beat Path violates strong stability for winners, i.e., there are a, b ∈ X(P) such that a ∈ BP(P −b), Margin_P(a, b) ≥ 0, and a ∉ BP(P). For each column labeled by n/n + 1, we sampled 1,000,000 profiles with n voters and 1,000,000 with n + 1 voters using the probability model named in the row, defined in Appendix D.
Second, we imagine Beat Path proposed as a voting method for breaking ties among the undefeated candidates as determined by Split Cycle, a problem for which we think the axiom of stability for winners with tiebreaking should hold. Table 2 shows that in linear profiles with four candidates, among those in which Beat Path disagrees with Stable Voting, a non-trivial percentage involve violations by Beat Path of the strong stability for winners with tiebreaking axiom satisfied by Stable Voting, using the same probability models.
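The conditional-frequency methodology can be sketched in a few lines. The toy estimate below samples random margin graphs rather than the profile models of Appendix D used for Tables 1 and 2, so the numbers it produces are not comparable to the tables; it only illustrates the computation (all names hypothetical):

```python
import random
from itertools import product

def strongest_path(margin, cands):
    s = {x: {y: margin[x][y] if x != y and margin[x][y] > 0 else 0
             for y in cands} for x in cands}
    for k, i, j in product(cands, repeat=3):
        if i != j:
            s[i][j] = max(s[i][j], min(s[i][k], s[k][j]))
    return s

def sc_winners(margin, cands):
    s = strongest_path(margin, cands)
    return {a for a in cands
            if not any(margin[b][a] > 0 and margin[b][a] > s[a][b]
                       for b in cands)}

def bp_winners(margin, cands):
    s = strongest_path(margin, cands)
    return {a for a in cands
            if all(s[a][b] >= s[b][a] for b in cands if b != a)}

def random_margin(cands, rng):
    m = {x: {y: 0 for y in cands} for x in cands}
    for i, x in enumerate(cands):
        for y in cands[i + 1:]:
            v = rng.choice([1, 3, 5]) * rng.choice([-1, 1])
            m[x][y], m[y][x] = v, -v
    return m

def bp_violates_strong_stability(margin, cands):
    """There are a, b with a in BP(P_-b), Margin(a, b) >= 0, a not in BP(P)."""
    winners = bp_winners(margin, cands)
    for b in cands:
        rest = [c for c in cands if c != b]
        sub = bp_winners(margin, rest)
        for a in rest:
            if a in sub and margin[a][b] >= 0 and a not in winners:
                return True
    return False

rng = random.Random(3)
CANDS = ["a", "b", "c", "d"]
disagree = violations = 0
for _ in range(2000):
    m = random_margin(CANDS, rng)
    if bp_winners(m, CANDS) == sc_winners(m, CANDS):
        continue  # condition on the two methods disagreeing
    disagree += 1
    violations += bp_violates_strong_stability(m, CANDS)
rate = violations / disagree if disagree else 0.0
```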
Expansion
Stability for winners concerns when a winner a in a profile P−b remains a winner in a profile P with one new candidate b added. The profile P may be viewed as combining the profile P−b without b and the two-candidate profile P_{ab} with X(P_{ab}) = {a, b} that agrees with P on the ranking of a vs. b assigned to each voter. As Margin_P(a, b) ≥ 0 is equivalent (for voting methods based on majority voting) to a being a winner in the two-candidate profile P_{ab}, (strong) stability for winners can be restated as follows: if a is a winner in both P−b and P_{ab}, then a is a winner in the full profile P. We can generalize this idea to apply not only to profiles of the form P−b and P_{ab} but to any two subprofiles of P such that every candidate in P appears in one of the subprofiles ("if a can win the battles separately, then a can win the war").

Table 2: Of four-candidate linear profiles in which BP(P) ≠ SV(P), the percentage in which Beat Path violates strong stability for winners with tiebreaking. For each column labeled by n/n + 1, we sampled 1,000,000 profiles with n voters and 1,000,000 with n + 1 voters using the probability model named in the row, defined in Appendix D.
Definition 4.22. A voting method F satisfies expansion consistency if for all P ∈ dom(F) and nonempty Y, Z ⊆ X(P) with Y ∪ Z = X(P), we have F(P|_Y) ∩ F(P|_Z) ⊆ F(P).
Since expansion consistency implies strong stability for winners for voting methods that agree with majority rule on two-candidate profiles, all the voting methods shown to violate stability for winners in the previous sections also violate expansion consistency. Intuitively, the reason Split Cycle satisfies expansion consistency is the following. First, given a ∈ F (P| Y ) ∩ F (P| Z ), the margins for each candidate x over a do not change from P| Y , P| Z to P. How then could a suddenly be defeated by a candidate x in P? By our first and third main ideas in Section 3, this would require the margin for x over a to meet the threshold for defeat determined by incoherence; but incoherence can only increase from P| Y , P| Z to P, not decrease, so if x's margin over a was not sufficient to defeat a in P| Y (resp. P| Z ), then it is not sufficient to defeat a in P. An example of a voting method satisfying stability for winners but not expansion consistency is the Banks voting method, defined as follows. 26 Say that a chain in M (P) is a subset of X(P) linearly ordered by the majority relation of P. Then a ∈ Banks(P) if a is the maximum element with respect to the majority relation of some maximal chain in M (P).
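The Banks set definition can be made concrete with a brute-force sketch over chains (feasible only for small candidate sets; the function name is ours):

```python
from itertools import combinations, permutations

def banks_set(cands, beats):
    """Banks winners of a majority graph. `beats` holds pairs (x, y) meaning
    x is majority preferred to y. Brute force over subsets; fine for small
    candidate sets only."""
    def is_chain(s):
        # s must be linearly ordered by the majority relation
        return any(all((o[i], o[j]) in beats
                       for i in range(len(o)) for j in range(i + 1, len(o)))
                   for o in permutations(s))
    chains = [set(s) for r in range(1, len(cands) + 1)
              for s in combinations(cands, r) if is_chain(s)]
    maximal = [c for c in chains if not any(c < d for d in chains)]
    # collect the maximum element of each maximal chain
    return {x for c in maximal for x in c
            if all((x, y) in beats for y in c - {x})}
```

On a three-candidate majority cycle every maximal chain is a single edge, so all three candidates are Banks winners; with a Condorcet winner present, only that candidate is.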
Proposition 4.24. Banks satisfies strong stability for winners but not expansion consistency.
Proof. For strong stability for winners, suppose a ∈ Banks(P−b) and Margin_P(a, b) ≥ 0. Hence a is the maximum element of some maximal chain R in M(P−b). Let R′ be a maximal chain in M(P) with R ⊆ R′. Since Margin_P(a, b) ≥ 0, b is not the maximum of R′, so a is the maximum of R′, so a ∈ Banks(P).
For the violation of expansion consistency, consider a profile P with the majority graph below: [figure omitted]

26 We thank Felix Brandt for providing this example.

To relate choice consistency conditions to social choice, Sen (1993) defines, for some fixed nonempty sets X and V, a functional collective choice rule (FCCR) for (X, V) to be a function f mapping each profile P with X(P) = X and V(P) = V to a choice function f(P, ·) on X(P). Let a variable-election FCCR (VFCCR)
be a function f mapping each profile P (i.e., allowing X(P) and V(P) to vary with P) to a choice function f(P, ·) on X(P). For voting theory, we take X(P) to be the set of candidates who appeared on the ballot in the election scenario modeled by P-since there is no practical point in considering voting procedures that have access to ranking information not on submitted ballots-but after the ballots are collected, some candidates may withdraw from consideration, be rejected by higher authorities, become incapacitated, etc., leaving us to choose winners from some "feasible set" S ⊆ X(P).
There are three ways to apply the notion of expansion consistency to a VFCCR f :
• f satisfies feasible expansion consistency if for all profiles P, f(P, ·) satisfies expansion consistency, i.e., for all S, T ⊆ X(P), f(P, S) ∩ f(P, T) ⊆ f(P, S ∪ T);
• f satisfies profile expansion consistency if for all profiles P and Y, Z ⊆ X(P) with Y ∪ Z = X(P), f(P|_Y, Y) ∩ f(P|_Z, Z) ⊆ f(P, X(P));
• f satisfies full expansion consistency if for all profiles P and Y, Z ⊆ X(P) with Y ∪ Z = X(P), S ⊆ Y, and T ⊆ Z, we have f(P|_Y, S) ∩ f(P|_Z, T) ⊆ f(P, S ∪ T).
Thus, profile expansion consistency and full expansion consistency constrain the relation between sets of winners for different election scenarios involving rankings of different (but possibly overlapping or extended) sets of candidates, modeled by different profiles; by contrast, feasible expansion consistency constrains the choices of winners from feasible sets based on a fixed profile of rankings. All three notions of expansion consistency are equivalent for VFCCRs satisfying the independence condition that for all profiles P and nonempty S ⊆ X(P), f (P, S) = f (P| S , S). However, they are not equivalent in general. For example, consider the global Borda count VFCCR (cf. Kelly 1988, p. 71): f (P, S) is the set of all x ∈ S such that for all y ∈ S, the Borda score of x calculated with respect to the full profile P is at least that of y. 27 Global Borda count satisfies feasible expansion consistency but not profile expansion consistency. By contrast, both local and global VFCCR versions of Split Cycle 28 satisfy full expansion consistency. 29
Remark 4.26. Expansion consistency may remind one of the reinforcement criterion (see Pivato 2013b), but that criterion concerns combining profiles for disjoint sets of voters voting on the same set of candidates, whereas expansion consistency concerns profiles for the same set of voters voting on different sets of candidates. Reinforcement states that for any profiles P and P′ with V(P) ∩ V(P′) = ∅ and X(P) = X(P′), if F(P) ∩ F(P′) ≠ ∅, then F(P + P′) = F(P) ∩ F(P′). No Condorcet consistent voting method satisfies reinforcement (see, e.g., Zwicker 2016, Proposition 2.5); we know of no non-trivial voting method that satisfies both expansion consistency and reinforcement; 30 and we do not find reinforcement normatively compelling for all voting contexts. 31 Why, when the voting method is given the full information in P + P′, should it be constrained by what it outputs when given only the limited information of P and only the limited information in P′? For example, for three candidates a, b, c, suppose P is the classic Condorcet paradox profile with 6 voters such that Margin_P(a, b) = 2, Margin_P(b, c) = 2, and Margin_P(c, a) = 2, so F(P) = {a, b, c} by fairness considerations (i.e., for any anonymous and neutral method F), while P′ is a profile with 3 voters such that Margin_{P′}(b, a) = 1, Margin_{P′}(a, c) = 3, and Margin_{P′}(b, c) = 1, so F(P′) = {b} by Condorcet consistency. When we look at all the information in P + P′, we see that a is majority preferred to every other candidate-a is the Condorcet winner, so F(P + P′) = {a}. Note that b is only majority preferred to a by a small margin in P′, whereas a is majority preferred to b by a larger margin in P. Due to c's poor performance in P′, there is no cycle in the full profile P + P′, so although a's margin over b failed to make a the winner in P due to fairness considerations, a's margin over b in the full cycle-free profile makes a the winner, as it should.
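A quick check of the margins in this example: since margins add across profiles with disjoint voters, a is the Condorcet winner of P + P′ provided Margin_{P′}(a, c) = 3, i.e., all three voters in P′ rank a above c (the explicit ballots below realizing these margins are our illustrative choice, an assumption about the intended profile):

```python
from itertools import combinations

def pairwise_margins(ballots):
    """Margins from linear ballots (tuples, best first)."""
    m = {}
    for b in ballots:
        for x, y in combinations(b, 2):  # x ranked above y
            m[(x, y)] = m.get((x, y), 0) + 1
            m[(y, x)] = m.get((y, x), 0) - 1
    return m

# P: the Condorcet paradox profile with 6 voters (all cycle margins equal to 2)
P = [("a", "b", "c")] * 2 + [("b", "c", "a")] * 2 + [("c", "a", "b")] * 2
# P': 3 voters with Margin(b, a) = 1, Margin(a, c) = 3, Margin(b, c) = 1
P_prime = [("b", "a", "c")] * 2 + [("a", "c", "b")]
total = pairwise_margins(P + P_prime)
condorcet = [x for x in "abc"
             if all(total[(x, y)] > 0 for y in "abc" if y != x)]
```

In the combined profile, a beats b by 2 − 1 = 1 and beats c by 3 − 2 = 1, so a is the Condorcet winner even though F(P) = {a, b, c} and F(P′) = {b}.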
Note this also shows it is a mistake to think that given a profile like P + P′, one can assume that the "Condorcet component" P can be deleted from the profile, as if it contains no information, while not changing the winning set (cf. the property of cancelling properly in Balinski and Laraki 2010, p. 77). The Condorcet component P contains the important information that Margin_P(a, b) = 2.

27 The local Borda count VFCCR (cf. Kelly 1988, p. 74) takes f(P, S) to be the set of all x ∈ S such that for all y ∈ S, the Borda score of x calculated with respect to the restricted profile P|_S is at least that of y.
28 Global Split Cycle takes f (P, S) to be the set of all x ∈ S such that for all y ∈ S, y does not defeat x according to the defeat relation calculated with respect to P, whereas local Split Cycle uses the defeat relation calculated with respect to P| S .
29 It is more difficult to describe VFCCRs satisfying both feasible and profile but not full expansion consistency. However, the essential point can be seen by considering a binary choice function C that for ∅ ≠ S ⊆ Y ⊆ X(P) returns a nonempty C(Y, S) ⊆ S; think of Y as determining the profile P|_Y and S as determining the feasible set S ⊆ Y. Let X(P) = {a, b, c} and let C be as follows: [definition omitted]. Yet for each nonempty Y ⊆ X(P), C(Y, ·) is a choice function satisfying expansion consistency, so C satisfies feasible expansion consistency, and the choice function C′ defined by C′(S) = C(S, S) satisfies expansion consistency, so C satisfies profile expansion consistency.
30 Expansion consistency implies a weakening of Condorcet consistency-it implies that if there is a Condorcet winner, that candidate must be among the winners. But with no other axioms, expansion consistency and reinforcement are consistent, as they are both satisfied by the trivial voting method that always picks all candidates as winners. 31 In the context where there is a "true" ranking of the candidates of which voters have noisy perceptions, reinforcement is satisfied by any voting method that can be rationalized as the maximum likelihood estimator for some noise model with i.i.d. votes (see Conitzer and Sandholm 2005;Pivato 2013a).
Other Criteria
In this section, we consider how Split Cycle fares with respect to a number of other criteria for voting methods.
We organize the criteria into five groups: symmetry criteria (5.1), dominance criteria (5.2), independence criteria (5.3), resoluteness criteria (5.4), and monotonicity criteria (5.5).
Symmetry Criteria
Anonymity and Neutrality
The two most basic symmetry criteria say that permuting voter names does not change the election result (anonymity) and permuting candidate names changes the result according to the permutation (neutrality).
Definition 5.1. Let F be a voting method.
1. F satisfies anonymity if for any P, P′ ∈ dom(F) with X(P) = X(P′), if there is a permutation π of V such that π[V(P)] = V(P′) and P(i) = P′(π(i)) for all i ∈ V(P), then F(P) = F(P′);
2. F satisfies neutrality if for any P, P′ ∈ dom(F) with V(P) = V(P′), if there is a permutation τ of X such that τ[X(P)] = X(P′) and for each i ∈ V(P) and x, y ∈ X(P), (x, y) ∈ P(i) if and only if (τ(x), τ(y)) ∈ P′(i), then τ[F(P)] = F(P′).
The following is obvious from the definition of Split Cycle.
Proposition 5.2. Split Cycle satisfies anonymity and neutrality.
Reversal Symmetry
Next we consider a criterion due to Saari (Saari 1994, § 3.1.3; Saari 1999, § 7.1) that can be seen as related to neutrality. Neutrality implies that if we swap the places of two candidates a and b on every voter's ballot, then if a won the election before the swap, b should win the election after the swap. Reversal symmetry extends this idea from pairwise swaps to full reversals of voters' ballots.
Definition 5.3. A voting method F satisfies reversal symmetry if for any P ∈ dom(F) with |X(P)| > 1, if F(P) = {x}, then x ∉ F(P^r), where P^r is the profile such that P^r_i = {(x, y) | (y, x) ∈ P_i}.
Proposition 5.4. Split Cycle satisfies reversal symmetry.
Proof. Suppose P is such that |X(P)| > 1 and SC(P) = {x}. It follows by Lemma 3.11 that there is a y ∈ X(P) such that x defeats y in P, i.e., such that Margin_P(x, y) > Cycle#_P(x, y). But then since Margin_P(x, y) = Margin_{P^r}(y, x) and Cycle#_P(x, y) = Cycle#_{P^r}(y, x), we have Margin_{P^r}(y, x) > Cycle#_{P^r}(y, x), so y defeats x in P^r and hence x ∉ SC(P^r).
Dominance Criteria
Pareto
Our first dominance criterion is the well-known Pareto principle (see, e.g., Zwicker 2016, Definition 2.6), stating that Pareto-dominated candidates cannot be elected.
Definition 5.5. A voting method F satisfies Pareto if for any P ∈ dom(F) and a, b ∈ X(P), if all voters in P rank a above b, then b ∉ F(P).
Proposition 5.6. Restricted to the class of acyclic profiles, Split Cycle satisfies Pareto.
Proof. Suppose all voters in P rank a above b, so Margin_P(a, b) = |V(P)|. Since it is impossible to have a cycle ρ = a → b → x_1 → · · · → x_n → a in which every edge has weight |V(P)| in an acyclic profile, one edge of any such cycle must have weight less than |V(P)|, so a defeats b by Lemma 3.12.
Condorcet Winner and Loser
The next notions of dominance are based on majority preference rather than unanimity: the Condorcet (winner ) criterion states that a candidate who is majority preferred to every other candidate must be the unique winner, while the Condorcet loser criterion states that a candidate who is majority dispreferred to every other candidate must not be among the winners.
Definition 5.7. For a profile P and x ∈ X(P), we say that x is a Condorcet winner (resp. Condorcet loser) in P if for every y ∈ X(P) \ {x}, we have Margin(x, y) > 0 (resp. Margin(y, x) > 0 and X(P) ≠ {x}).
Definition 5.8. A voting method F satisfies the Condorcet (winner) criterion (resp. Condorcet loser criterion) if for every P ∈ dom(F) and x ∈ X(P), if x is the Condorcet winner (resp. Condorcet loser), then F(P) = {x} (resp. x ∉ F(P)). If F satisfies the Condorcet criterion, we say that F is Condorcet consistent.
Proposition 5.9. Split Cycle satisfies the Condorcet criterion and the Condorcet loser criterion.
Proof. If x is the Condorcet winner (resp. loser), then for every y ∈ X(P) \ {x}, we have Margin_P(x, y) > 0 (resp. Margin_P(y, x) > 0). It follows that x is not involved in any cycles, so for every y ∈ X(P) \ {x}, we have Margin_P(x, y) > Cycle#_P(x, y) = 0 (resp. Margin_P(y, x) > Cycle#_P(y, x) = 0). Hence x defeats every other candidate, so SC(P) = {x} (resp. is defeated by every other candidate, so x ∉ SC(P)).
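Defeat computations like the one in this proof can be checked mechanically. A minimal Python sketch of Split Cycle winners from a margin function; it relies on a bottleneck (widest-path) reformulation of the defeat relation, which we state as an assumption of the sketch rather than quoting the text's lemmas:

```python
def split_cycle_winners(cands, margin):
    """Split Cycle winners from a margin function on ordered pairs.
    Assumed reformulation: a defeats b iff margin(a, b) > 0 and margin(a, b)
    exceeds the widest-path (bottleneck) strength from b back to a along
    positive-margin edges. Repeating nodes never raises a bottleneck, so
    widest walks and widest simple paths coincide, which is why a
    Floyd-Warshall-style recursion suffices."""
    NEG = float("-inf")
    # initialize path strengths with the positive-margin edges
    s = {(x, y): (margin[(x, y)] if margin[(x, y)] > 0 else NEG)
         for x in cands for y in cands if x != y}
    for k in cands:
        for i in cands:
            for j in cands:
                if len({i, j, k}) == 3:
                    s[(i, j)] = max(s[(i, j)], min(s[(i, k)], s[(k, j)]))
    defeated = {b for a in cands for b in cands if a != b
                and margin[(a, b)] > 0 and margin[(a, b)] > s[(b, a)]}
    return set(cands) - defeated

# a majority cycle a -> b -> c -> a with margins 1, 3, 5: only b is undefeated
cycle = {("a", "b"): 1, ("b", "a"): -1, ("b", "c"): 3,
         ("c", "b"): -3, ("c", "a"): 5, ("a", "c"): -5}
```

With a Condorcet winner present, no candidate has a return path against it, so it defeats everyone, matching Proposition 5.9.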
Smith and Schwartz Criteria
A strengthening of the Condorcet criterion is the Smith criterion (Smith 1973), according to which the set of winners must be a subset of the Smith set-the smallest set of candidates such that every candidate inside the set is majority preferred to every candidate outside the set. Following the terminology of Schwartz (1986), we also call the Smith set the GETCHA set ("GETCHA" stands for "generalized top-choice assumption").
Definition 5.10. Let P be a profile and S ⊆ X(P). Then S is →_P-dominant if S ≠ ∅ and for all x ∈ S and y ∈ X(P) \ S, we have x →_P y. Define GETCHA(P) = ⋂{S ⊆ X(P) | S is →_P-dominant}.
Definition 5.11. A voting method F satisfies the Smith criterion if for any P ∈ dom(F ), we have F (P) ⊆ GET CHA(P).
Proposition 5.12. Split Cycle satisfies the Smith criterion.
Proof. Suppose b ∈ SC(P) \ GETCHA(P). Since b ∉ GETCHA(P), there is an a ∈ GETCHA(P) such that a →_P b. Then since b ∈ SC(P), it follows by Lemma 3.12 that there is a simple cycle ρ of the form a → b → x_1 → · · · → x_n → a. Hence one of the edges in ρ goes from a candidate outside GETCHA(P) to a candidate inside GETCHA(P), which is a contradiction.
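The GETCHA set itself can be computed directly. A sketch of one standard algorithm; the seeding step relies on the known fact (an assumption here, not proved in the text) that a Copeland winner always lies in the Smith set:

```python
def getcha(cands, margin):
    """Compute GETCHA(P) (the Smith set): the smallest set whose members are
    all majority preferred to every non-member. Seed with a Copeland winner,
    then close the set under 'not strictly beaten by every member'."""
    copeland = {x: sum(1 for y in cands if y != x and margin[(x, y)] > 0)
                for x in cands}
    s = {max(cands, key=copeland.get)}  # a Copeland winner
    changed = True
    while changed:
        changed = False
        for y in set(cands) - s:
            # y can stay outside a dominant set only if strictly beaten by all of s
            if any(margin[(y, x)] >= 0 for x in s):
                s.add(y)
                changed = True
    return s
```

On a three-candidate majority cycle this returns all candidates; with a Condorcet winner, just that candidate.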
Next we consider a strengthening of the Smith criterion, based on the idea of the Schwartz set, or in Schwartz's (1986) terminology, the GOCHA set ("GOCHA" stands for "generalized optimal choice axiom").
Definition 5.13. Let P be a profile and S ⊆ X(P). Then S is →_P-undominated if for all x ∈ S and y ∈ X(P) \ S, we do not have y →_P x. Define GOCHA(P) = ⋃{S ⊆ X(P) | S is →_P-undominated and no S′ ⊊ S is →_P-undominated}.
Note that if there are no zero margins between distinct candidates in P, then GOCHA(P) = GET CHA(P).
Another useful characterization of the GOCHA set is given by the following lemma.
Lemma 5.14 (Schwartz 1986, Corollary 6.2.2). Let P be any profile, and let →*_P be the transitive closure of →_P, i.e., a →*_P b if and only if there are x_1, . . . , x_n ∈ X(P) with a = x_1 and b = x_n such that x_1 →_P · · · →_P x_n. Then GOCHA(P) = {x ∈ X(P) | there is no y ∈ X(P) such that y →*_P x and not x →*_P y}.
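Lemma 5.14 yields a direct computation of the GOCHA set; a minimal sketch using a Warshall-style transitive closure:

```python
def gocha(cands, margin):
    """Compute GOCHA(P) (the Schwartz set) via Lemma 5.14: x is in GOCHA iff
    there is no y with y ->* x but not x ->* y, where ->* is the transitive
    closure of the strict majority relation."""
    reach = {(x, y): margin[(x, y)] > 0
             for x in cands for y in cands if x != y}
    for k in cands:
        for i in cands:
            for j in cands:
                if len({i, j, k}) == 3:
                    reach[(i, j)] = reach[(i, j)] or (reach[(i, k)] and reach[(k, j)])
    return {x for x in cands
            if not any(reach[(y, x)] and not reach[(x, y)]
                       for y in cands if y != x)}
```

As with GETCHA, a three-candidate cycle yields all candidates, while a Condorcet winner is returned alone.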
Just as the Smith criterion states that the set of winners should always be a subset of the Smith set, the Schwartz criterion states that the set of winners should always be a subset of the Schwartz set.
Definition 5.15. A voting method F satisfies the Schwartz criterion if for any P ∈ dom(F), F(P) ⊆ GOCHA(P).
In contrast to Proposition 5.12, Split Cycle does not satisfy the Schwartz criterion. After the proof, we will explain why we do not find the Schwartz criterion normatively plausible.
Proposition 5.16. Split Cycle does not satisfy the Schwartz criterion, even when restricted to linear profiles.
Proof. By Debord's Theorem, there is a linear profile P with the following margin graph (simplifying our example for the third idea of Section 3.1):
[margin graph figure omitted: candidates a, d, e, f with all displayed margins equal to 2]

First, note that d ∈ SC(P) (indeed, SC(P) = {a, d, e}), because the only candidate with a positive margin over d is e, but Margin_P(e, d) ≤ Cycle#_P(e, d). Yet d ∉ GOCHA(P), because a →*_P d and not d →*_P a.
For the reasons explained in Section 3.1 for the idea that defeat is direct, we think that d should not be kicked out of the winning set by a in the profile P used in the proof of Proposition 5.16. Thus, we do not accept the Schwartz criterion. The profile used in the proof of Proposition 5.16 also shows the following.
Proposition 5.17. There is no voting method on the domain of linear profiles satisfying anonymity, neutrality, strong stability for winners, and the Schwartz criterion.
Proof. Where P is the linear profile used in the proof of Proposition 5.16, by anonymity and neutrality, F(P−a) = {d, e, f}. Then since Margin_P(a, d) = 0, it follows by strong stability for winners that d ∈ F(P), which contradicts the Schwartz criterion as in the proof of Proposition 5.16.
Independence Criteria
Independence of Smith-Dominated Alternatives
The Smith criterion of Section 5.2.3 can be strengthened to the criterion that deletion of candidates outside the Smith set should not change the set of winners.
Definition 5.18. A voting method F satisfies independence of Smith-dominated alternatives (ISDA) if for any P ∈ dom(F ) and x ∈ X(P) \ GET CHA(P), we have F (P) = F (P −x ).
Remark 5.19. ISDA implies the Smith criterion, since if x ∈ F(P) \ GETCHA(P), then x ∉ F(P−x) and hence F(P) ≠ F(P−x).
Remark 5.20. If F satisfies ISDA, then F satisfies the following inter-profile condition: for any profiles P and P′, where S = GETCHA(P) and S′ = GETCHA(P′), if P|_S = P′|_{S′}, then F(P) = F(P′). This inter-profile condition may be viewed as a weakening of the independence of irrelevant alternatives.
Proposition 5.21. Split Cycle satisfies ISDA.
Proof. Suppose x ∈ X(P) \ GETCHA(P). It follows that GETCHA(P) = GETCHA(P−x). 32 Toward showing that SC(P) = SC(P−x), let Q, Q′ ∈ {P, P−x} with Q ≠ Q′. Suppose y ∈ SC(Q). We will show y ∈ SC(Q′). For any z ∈ X(P−x), we have

Margin_P(z, y) = Margin_{P−x}(z, y). (1)

Hence if Margin_Q(z, y) ≤ 0, then Margin_{Q′}(z, y) ≤ 0, so z does not defeat y in Q′. Suppose instead that Margin_Q(z, y) > 0. Since y ∈ SC(Q), z does not defeat y in Q, so we have

Margin_Q(z, y) ≤ Cycle#_Q(z, y). (2)

Now we claim that

Cycle#_P(z, y) = Cycle#_{P−x}(z, y). (3)

Since y ∈ SC(Q), y ∈ GETCHA(Q) by Proposition 5.12. Then from Margin_Q(z, y) > 0, it follows that z ∈ GETCHA(Q). Since x ∈ X(P) \ GETCHA(P) and z ∈ GETCHA(Q) = GETCHA(P), there is no simple cycle in M(P) of the form z → y → w_1 → · · · → w_n → z with x ∈ {w_1, . . . , w_n}, since there is no path from a candidate outside GETCHA(P), like x, to a candidate inside GETCHA(P), like z. This establishes (3). Then together (1), (2), and (3) entail Margin_{Q′}(z, y) ≤ Cycle#_{Q′}(z, y). Hence z does not defeat y in Q′. Finally, if Q′ = P, then x does not defeat y in Q′, since y ∈ GETCHA(Q′) while x ∉ GETCHA(Q′). Thus, no candidate defeats y in Q′, so y ∈ SC(Q′).

32 Clearly GETCHA(P) is →_{P−x}-dominant, so GETCHA(P−x) ⊆ GETCHA(P) by Definition 5.10. To see that GETCHA(P−x) is →_P-dominant, consider an a ∈ GETCHA(P−x) and b ∈ X(P) \ GETCHA(P−x). If b ≠ x, then b ∈ X(P−x) \ GETCHA(P−x) and hence a →_{P−x} b because GETCHA(P−x) is →_{P−x}-dominant, which implies a →_P b. If b = x, then since GETCHA(P−x) ⊆ GETCHA(P) and x ∈ X(P) \ GETCHA(P), again we have a →_P b. Thus, GETCHA(P−x) is →_P-dominant, so GETCHA(P) ⊆ GETCHA(P−x) by Definition 5.10.
Independence of Clones
In Section 1.1, we informally discussed the anti-vote-splitting axiom of independence of clones (Tideman 1987). Recall that a set C of two or more candidates is a set of clones if no candidate outside of C appears in between two candidates from C on any voter's ballot.
Definition 5.22. Given a profile P, a set C ⊆ X(P) is a set of clones in P if 2 ≤ |C| < |X(P)| and for all c, c′ ∈ C, x ∈ X(P) \ C, and i ∈ V(P), if c P_i x then c′ P_i x, and if x P_i c then x P_i c′.
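For linear ballots, Definition 5.22 amounts to requiring that the members of C occupy a contiguous block of every ballot; a small sketch of that check (the helper name is ours):

```python
def is_clone_set(ballots, cands, C):
    """Check Definition 5.22 for linear ballots (tuples listing candidates
    from best to worst): C is a set of clones iff 2 <= |C| < |cands| and no
    candidate outside C appears between two members of C on any ballot,
    i.e., C occupies a contiguous block of every ballot."""
    C = set(C)
    if not 2 <= len(C) < len(cands):
        return False
    for b in ballots:
        positions = [i for i, x in enumerate(b) if x in C]
        # contiguity: the span of C's positions must equal |C|
        if max(positions) - min(positions) + 1 != len(positions):
            return False
    return True
```

For example, {a, b} is a clone set in a profile where every ballot lists a and b adjacently, but not if some ballot interposes c between them.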
The independence of clones criterion states that (i) removing a clone from a profile does not change which non-clones belong to the winning set, and (ii) a clone wins in a profile if and only if after removing that clone from the profile, one of the other clones wins in the resulting profile.
Definition 5.23. A voting method F is such that non-clone choice is independent of clones if for every P ∈ dom(F ), set C of clones in P, c ∈ C, and a ∈ X(P) \ C, a ∈ F (P) if and only if a ∈ F (P −c ).
F is such that clone choice is independent of clones if for every P ∈ dom(F), set C of clones in P, and c ∈ C, we have F(P) ∩ C ≠ ∅ if and only if F(P−c) ∩ (C \ {c}) ≠ ∅. Finally, F satisfies independence of clones if F is such that non-clone choice is independent of clones and clone choice is independent of clones.
We prove the following in Appendix A.
Theorem 5.24. Split Cycle satisfies independence of clones.
Remark 5.25. Tideman (1987) shows that the version of Ranked Pairs defined in Section 3.3 and Appendix C.1 satisfies the condition of independence of clones for all profiles P such that for all a, b, x, y ∈ X(P), Margin_P(a, b) = 0 only if a = b, and Margin_P(a, b) = Margin_P(x, y) only if (i) a = x or a and x belong to a set of clones, and (ii) b = y or b and y belong to a set of clones. Zavist and Tideman (1989) show that the same version of Ranked Pairs does not satisfy independence of clones for all profiles. They propose a modified version of Ranked Pairs that satisfies independence of clones at the expense of violating anonymity. However, as suggested by Tideman (p.c.), one can obtain an anonymous and fully clone-independent version of Ranked Pairs for linear profiles by declaring a candidate x a winner in a linear profile P if there exists a voter i ∈ V(P) such that the Zavist and Tideman version of Ranked Pairs declares x a winner in P when i is the designated voter used to generate their tiebreaking ranking of candidates (TBRC). 33
Resoluteness Criteria
In this section, we discuss criteria concerning the ability of a voting method to narrow down the set of winners. Of course, any anonymous and neutral voting method will select all candidates as winners in a profile in which all candidates are tied. But such profiles are highly unlikely. To rule out such cases, we can consider uniquely-weighted profiles as in Definition 2.4. Still, some highly unlikely uniquely-weighted profiles will produce large winning sets for Split Cycle, as shown by part 2 of the following.
Proposition 5.26.
1. For any uniquely-weighted profile P such that |X(P)| ≥ 3, we have |SC(P)| ≤ |X(P)| − 2.
2. For any n ≥ 4, there is a uniquely-weighted profile P with |X(P)| = n and |SC(P)| = |X(P)| − 2.
Proof. For part 1, pick a, b ∈ X(P) such that Margin_P(a, b) is the largest margin of any edge in M(P), which implies that Margin_P(a, b) > 0. [...]

Fortunately, the margin graphs used in the proof of Proposition 5.26.2 are realized by an extremely small proportion of profiles, as we will see with some data in Section 5.4.2.2 (Table 3). As worst-case winning set sizes are not the best measure of the general ability of a voting method to narrow down the set of winners, we will consider alternative resoluteness criteria in the next two subsections.
Rejectability
The next criterion we propose concerns winnowing a set of winners down to a single winner. The rejectability criterion states that if in a profile P, candidate x is among the winners, then we should be able to make x the unique winner in a profile P + obtained from P by adding voters who sufficiently strengthen the rejection of other candidates, i.e., sufficiently increase what were already non-negative margins against other candidates, so as to defeat the others (recall our idea in Section 3 that incoherence does not raise the threshold for defeat infinitely). Thus, if candidate a is majority preferred to b in P, then this still holds in P + with a margin that is at least as large and possibly larger than in P. No majority preferences are reversed from P to P + , for if we were to allow that, then we could simply make x the Condorcet winner in P + , trivializing the criterion.
Definition 5.27. A voting method F satisfies rejectability if for any P ∈ dom(F) such that |F(P)| > 1 and x ∈ F(P), there is a profile P+ ∈ dom(F) with X(P) = X(P+) and V(P) ⊆ V(P+) such that for all a, b ∈ X(P), if Margin_P(a, b) > 0, then Margin_{P+}(a, b) ≥ Margin_P(a, b), and F(P+) = {x}.
Thus, if a method fails rejectability, then for some P and x ∈ F (P), no matter how extremely we turn majority preferences against other candidates into enormous landslides, we cannot make x the unique winner.
Rejectability is a strong criterion insofar as it rules out all irresolute C1 voting methods (as does the resolvability criterion of Section 5.4.2). Recall that a voting method F is C1 (Fishburn 1977) if for any profiles P and P′, if their majority graphs (Definition 2.4) are the same-M(P) = M(P′)-then their winners are also the same-F(P) = F(P′). Copeland, GETCHA/GOCHA, and Uncovered Set are all C1.
Proposition 5.28. No anonymous and neutral C1 voting method (whose domain contains all linear profiles with three candidates) satisfies rejectability.
Proof. Given a profile P with X(P) = {a, b, c} and whose margin graph contains the cycle a → b → c → a, no matter the margins, an anonymous and neutral C1 method F must have F (P) = {a, b, c}; hence we can never increase any margins in such a way that one candidate becomes the unique winner.
An example of a non-C1 method violating rejectability is the Weighted Covering method (Dutta and Laslier 1999, Pérez-Fernández and De Baets 2018), according to which x ∈ WC(P) if there is no y ∈ X(P) such that Margin_P(y, x) > 0 and for all z ∈ X(P), Margin_P(y, z) ≥ Margin_P(x, z). Weighted Covering also selects all candidates in the profile P in the proof of Proposition 5.28.
In our proof that Split Cycle satisfies rejectability, we use the following lemma.
Lemma 5.29. Split Cycle satisfies the overwhelming majority 34 criterion: for all profiles P and P′ with X(P) = X(P′) and V(P) ∩ V(P′) = ∅, there is an n ∈ N such that for all m ∈ N with m ≥ n, we have SC(P + mP′) ⊆ SC(P′), where mP′ = γ_1(P′) + · · · + γ_m(P′) with γ_1(P′), . . . , γ_m(P′) being copies of P′ with pairwise disjoint sets of voters (recall Definition 2.7).
Proof. Let n = 2|V(P)| + 1. To show that SC(P + mP′) ⊆ SC(P′), it suffices to show that for any [...] It follows that Margin_{P+mP′}(a, b) > Cycle#_{P+mP′}(a, b), so a defeats b in P + mP′.
Proposition 5.30. Split Cycle satisfies rejectability.
Proof. We claim that to establish rejectability, it suffices to show that for any profile P such that |SC(P)| > 1 and x ∈ SC(P), there is a margin graph M′ with the same candidates such that for all a, b ∈ X(P), if Margin_P(a, b) > 0, [...] We now claim that every y ∈ X(P) \ {x} is defeated in M′.
Case 1: Margin_P(x, y) ≥ 0. Then Margin_{M′}(x, y) = n + 3 by rule 1. Moreover, for any simple cycle ρ of the form x → y → z_1 → · · · → z_k → x in M′, we have Margin_P(z_k, x) > 0 by the construction of M′ from M(P) and hence Margin_{M′}(z_k, x) = n + 1 by rule 2, so Split#(ρ) = n + 1. Hence Cycle#_{M′}(x, y) = n + 1. Thus, Margin_{M′}(x, y) > Cycle#_{M′}(x, y), so x defeats y in M′.
Case 2: Margin_P(y, x) > 0. Then since x ∈ SC(P), it follows by Lemma 3.12 that there is a simple cycle of the form y → x → z_1 → · · · → z_k → y in P where x → z_1 → · · · → z_k → y is a shortest simple path from x to y. Hence Margin_{M′}(z_k, y) = n + 3 by rule 1. We claim that z_k defeats y in M′. If there is no simple cycle of the form z_k → w_1 → · · · → w_ℓ with w_1 = y and w_ℓ = z_k in M′, then z_k defeats y in M′. If there is such a simple cycle ρ, then we claim that one of the edges w_i → w_{i+1} in ρ has weight n + 1.
If there is no simple path from x to any of w_2, . . . , w_ℓ, this follows from rule 2 above. So suppose there is a simple path from x to one of w_2, . . . , w_ℓ. Then there is a w_i such that (i) the shortest path p from x to w_i is no longer than the shortest path from x to any w_j. 35 This setup is shown in Figure 3. Now we claim that the edge w_{i−1} → w_i in ρ has weight n + 1; for it to have weight n + 3, the edge w_{i−1} → w_i must occur on a shortest path from x to w_i, which is impossible. For suppose p′ is a path from x to w_i including the edge w_{i−1} → w_i. By (i), the initial segment of p′ from x to w_{i−1} has length at least that of p, by our choice of p; so the length of p′ is at least the length of p plus one; hence p′ is not a shortest path from x to w_i. Thus,

35 A simple path in a graph is a sequence x_1, . . . , x_n of distinct nodes with x_i → x_{i+1} for each i ∈ {1, . . . , n − 1}. The length of a path is the number of nodes in the path minus one.
[Figure 3 omitted: it depicts the cycle ρ through y, w_2, . . . , w_{i−1}, w_i, . . . , w_ℓ = z_k and the path from x, with edge weights n + 3 and n + 1 as described in the text.]
we have proved that one of the edges w_i → w_{i+1} has weight n + 1. Thus, Split#(ρ) = n + 1. It follows that Cycle#_{M′}(z_k, y) = n + 1, which with Margin_{M′}(z_k, y) = n + 3 implies that z_k defeats y in M′.
Corollary 5.31. Beat Path and Ranked Pairs satisfy rejectability.
Proof. Let F ∈ {BP, RP} and P be a profile such that |F(P)| > 1 and x ∈ F(P). Then by Lemmas 3.18 and 3.20, |SC(P)| > 1 and x ∈ SC(P). Hence by Proposition 5.30, there is a P′ as in the definition of rejectability such that SC(P′) = {x}, which implies F(P′) = {x} given Lemmas 3.18 and 3.20 and F(P′) ≠ ∅.
Example 5.32. If we pick any candidate x in the majority graph shown on the left below, the proof of Proposition 5.30 gives us an algorithm to weight the edges of the majority graph such that in the resulting margin graph x is the unique Split Cycle winner. For example, we can make a the unique winner with the weighting on the middle graph and d the unique winner with the weighting on the right graph. In fact, from the proof of Proposition 5.30 we can extract a proof of the following proposition about when it is possible, starting from an arbitrary graph, to turn the graph into a margin graph in which a given candidate is a (unique) winner for Split Cycle.
Proposition 5.33. For any asymmetric directed graph G = (G, →) and a ∈ G, the following are equivalent:
1. there is a margin graph M based on G such that a ∈ SC(M) (recall Remark 3.6);
2. there is a margin graph M based on G such that SC(M) = {a};
3. for all x ∈ G \ {a}, if x → a, then there is a simple cycle of the form x → a → y 1 → · · · → y n → x in G.
Resolvability
Like the rejectability criterion of Section 5.4.1, the criteria considered in this section concern winnowing a set of winners down to a unique winner.
5.4.2.1
Single-voter resolvability The first criterion, single-voter resolvability, says that any tied winner can be made the unique winner by adding just one new voter. We see no justification for requiring that one voter is always sufficient, and as far as we know, no arguments for the normative necessity of this criterion are given in the literature. Tideman (1987) uses single-voter resolvability to rule out the GOCHA method, but this can be accomplished by rejectability instead. Indeed, we suspect that some intuitions about winnowing sets of winners to a unique winner are better captured by rejectability than by single-voter resolvability.
Definition 5.34. Given a voting method F and D ⊆ dom(F), we say that F satisfies single-voter resolvability with respect to D if for any P ∈ D, if |F(P)| > 1, then for any x ∈ F(P), there is a profile P′ with V(P) ∩ V(P′) = ∅ and |V(P′)| = 1 such that F(P + P′) = {x}.

Example 5.35. Split Cycle does not satisfy single-voter resolvability, as the margin graph of the following profile P shows [graph not reproduced]. Here SC(P) = {a, b, d}, but there is no one-voter profile P′ with SC(P + P′) = {a} or SC(P + P′) = {b}, since however each margin changes (by at most one from P to P + P′), the margins of a over d and of b over d will still be the weakest in a cycle in M(P + P′).
Below we will show a deep tension between single-voter resolvability and stability for winners.
Resolvability and rejectability can be related using the following additional criterion from Smith 1973.
Definition 5.36. A voting method F satisfies homogeneity if for any P ∈ dom(F ), if P * is a copy of P with a disjoint set of voters (recall Definition 2.7), then F (P) = F (P + P * ).
Lemma 5.37. If a voting method F satisfies homogeneity and single-voter resolvability with respect to dom(F ), then it satisfies rejectability.
Proof. Let P ∈ dom(F ) be such that |F (P)| > 1 and x ∈ F (P). Let P * be a copy of P with a disjoint set of voters. Then by homogeneity, F (P) = F (P + P * ). It follows by resolvability that there is a single voter profile Q such that F (P + P * + Q) = {x}. Since for any a, b ∈ X(P) with M argin P (a, b) > 0, we have M argin P+P * +Q (a, b) ≥ M argin P (a, b), the profile P + P * + Q is the desired profile P + for rejectability.
Asymptotic resolvability
Another use of the term 'resolvability' (see Schulze 2011, § 4.2.1) concerns the proportion of profiles with multiple winners as the number of voters goes to infinity.
Definition 5.38. For k ∈ N, a voting method F satisfies asymptotic resolvability for k candidates if the proportion of profiles P ∈ dom(F ) with |X(P)| = k and |V (P)| = n for which |F (P)| > 1 approaches 0 as n approaches infinity.
For comparison, recall the quasi-resoluteness condition from Section 3.3, according to which F picks a unique winner in any uniquely-weighted profile. Since the proportion of profiles that are uniquely weighted goes to 1 as the number of voters goes to infinity, quasi-resoluteness implies asymptotic resolvability. However, the converse implication does not hold. For example, the Borda method (for a definition, see Pacuit 2019, § 2.1) is asymptotically resolvable but not quasi-resolute (e.g., consider a three-candidate election in which M argin P (a, b) = 2, M argin P (b, c) = 4, and M argin P (c, a) = 6, in which case Borda picks b and c).
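The Borda tie in this example can be checked directly from the margins. A small illustrative sketch (our own code, assuming the margin-based formulation of Borda in which each candidate's score is the sum of its margins over the other candidates):

```python
# Borda scores from majority margins: score(x) = sum of Margin(x, y) over y != x.
# The three-candidate example from the text: Margin(a,b) = 2, Margin(b,c) = 4, Margin(c,a) = 6.
margin = {
    ("a", "b"): 2, ("b", "a"): -2,
    ("b", "c"): 4, ("c", "b"): -4,
    ("c", "a"): 6, ("a", "c"): -6,
}
candidates = ["a", "b", "c"]

def borda_scores(candidates, margin):
    return {x: sum(margin[(x, y)] for y in candidates if y != x) for x in candidates}

scores = borda_scores(candidates, margin)
winners = sorted(x for x in candidates if scores[x] == max(scores.values()))
print(scores)   # {'a': -4, 'b': 2, 'c': 2}
print(winners)  # ['b', 'c']
```

Although the profile is uniquely weighted, Borda still ties b with c, confirming the failure of quasi-resoluteness claimed above.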
In Section 4, we discussed the tradeoff between a voting method being quasi-resolute and satisfying stability for winners. The next result illustrates this tradeoff in the case of resolvability. We impose an assumption that is satisfied by all voting methods based on majority margins that we know of, not only Condorcet methods but also, e.g., Borda (see Zwicker 2016, p. 28 for a formulation of Borda as a margin-based method). Say that a voting method F satisfies the triangle property if for any uniquely-weighted linear profile P with a majority cycle, if x has the largest margin of victory and smallest margin of loss, then
x ∈ F (P) (a property also used in Example 4.6). The proof of Theorem 5.39 makes essential use of a theorem of Harrison-Trainor (2022) that answers one of our conjectures.
Theorem 5.39. Suppose F is a voting method on the domain of linear profiles satisfying stability for winners and the triangle property. Then F does not satisfy single-voter resolvability with respect to its domain, and F does not satisfy asymptotic resolvability for any k > 3.
Proof. In this proof, all profiles are assumed to be linear. We will use the fact that stability for winners implies the following: for any profile P, defining P G = P| GET CHA(P) (recall Section 5.2.3), we have F (P G ) ⊆ F (P). To see this, let X(P) \ GET CHA(P) = {b 1 , . . . , b n }. Suppose a ∈ F (P G ), so a ∈ GET CHA(P). Then a → b 1 , so stability for winners implies a ∈ F (P| GET CHA(P)∪{b1} ). Then since a → b 2 , stability for winners implies a ∈ F (P| GET CHA(P)∪{b1,b2} ), and so on, until we obtain a ∈ F (P).
We will also use the notion of a qualitative margin graph, which is a pair M = (M, ≺) where M is an asymmetric directed graph and ≺ is a strict weak order on the set of edges of M . We say that M is uniquely weighted if ≺ is a strict linear order. Given a profile P, let the qualitative margin graph M(P) of P be the pair (M (P), ≺ P ) where M (P) is the majority graph of P, and ≺ P is the relation on the set of edges
of M (P) defined by (a, b) ≺ P (c, d) if M argin P (a, b) < M argin P (c, d).
It follows from Debord's Theorem that every qualitative margin graph is realized by some profile. Harrison-Trainor (2022) proves that for any k ≥ 1 and uniquely-weighted qualitative margin graph M with k candidates, the proportion of profiles with k candidates and n voters realizing M does not go to 0 as n goes to infinity. Thus, asymptotic resolvability for k candidates implies the following condition (⋆): there is no uniquely-weighted qualitative margin graph M with k candidates such that for every profile P realizing M, |F (P)| > 1. This also follows from single-voter resolvability: for if there exists a uniquely-weighted M such that every P realizing M has |F (P)| > 1, then we can pick a profile P realizing M with sufficiently many voters (note that if P realizes M, so does P + P* where P* is a copy of P with a disjoint set of voters) such that for any single-voter profile P′, P + P′ still realizes M (since the differences between distinct margins are too large in P for one voter to change the qualitative margin graph), so that |F (P + P′)| > 1, in violation of single-voter resolvability.
Now consider any profile P with |X(P)| > 3 realizing a qualitative margin graph M that when restricted to GET CHA(P) has the following form, where α ≺ γ ≺ β and γ ≺ ϕ ≺ ψ:
[Figure: the relevant fragment of the qualitative margin graph on candidates x1, x2, x3, x4, with edge weights α, β, γ, ϕ, χ, ψ.]
Since α ≺ γ ≺ β, by the triangle property we have x 4 ∈ F ((P G ) −x3 ). Then given x 4 → x 3 , from stability for winners we have x 4 ∈ F (P G ) and hence x 4 ∈ F (P) by the first paragraph of the proof. Since γ ≺ ϕ ≺ ψ, by the triangle property we have x 1 ∈ F ((P G ) −x4 ). Then given x 1 → x 4 , from stability for winners we have
x 1 ∈ F (P G ) and hence x 1 ∈ F (P) by the first paragraph of the proof. Thus, |F (P)| > 1. Since this holds for every P realizing M, condition (⋆) above does not hold, so neither version of resolvability holds either.
It is easy to see that Split Cycle satisfies asymptotic resolvability for k = 2 and k = 3 (for k = 3, this follows from Proposition 5.26.1). For k > 3, since Split Cycle satisfies the triangle property and stability for winners, Theorem 5.39 yields the following.
Corollary 5.40. For k > 3, Split Cycle does not satisfy asymptotic resolvability for k candidates.

Table 3 shows estimates for the average sizes of winning sets in the limit as the number of voters goes to infinity for several voting methods that are not asymptotically resolvable. Estimates were obtained using the Monte Carlo simulation technique described in Harrison-Trainor 2022, § 9, sampling 1,000,000 profiles for each number of candidates.

Table 3: Estimated average sizes of winning sets for profiles with a given number of candidates (top row) in the limit as the number of voters goes to infinity. [Table body not reproduced.]
While it is certainly of theoretical interest to know whether the proportion of profiles with multiple winners goes to 0 as the number of voters goes to infinity, for real world applications, what matters is the proportion of profiles with multiple winners for realistic numbers of voters. In Appendix D, we provide a quantitative analysis. For instance, our results show that when there are 7 candidates and up to a few thousand voters, Split Cycle produces multiple winners on only about 1% more of such profiles than Beat
Path, which satisfies resolvability in both forms above. Our results also show that this difference in frequency of multiple winners decreases as the number of candidates decreases. In addition, our analysis shows that Split Cycle is substantially more resolute than GETCHA.
Monotonicity Criteria
Non-Negative Responsiveness
The term 'monotonicity' has many meanings in voting theory. One of the standard meanings is given by the criterion of non-negative responsiveness (Tideman 1987): lifting the position of a winner x on voters' ballots cannot result in x becoming a loser.
Definition 5.41. For any profiles P and P′ with V(P) = V(P′) and x ∈ X(P) = X(P′), we say that P′ is obtained from P by a simple lift of x if the following conditions hold:

1. for all a, b ∈ X(P) \ {x} and i ∈ V(P), aP_i b if and only if aP′_i b;

2. for all a ∈ X(P) and i ∈ V(P), if xP_i a then xP′_i a;

3. for all a ∈ X(P) and i ∈ V(P), if aP′_i x then aP_i x.
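The three conditions of Definition 5.41 can be checked mechanically. A sketch (our own illustration, not code from the text), representing each voter's linear ballot as a list ordered from most to least preferred:

```python
def is_simple_lift(P, P2, x):
    """Check whether profile P2 is obtained from P by a simple lift of x.
    P and P2 map each voter to a linear ballot (a list, best candidate first)."""
    def above(ballot, a, b):  # True if a is ranked above b on the ballot
        return ballot.index(a) < ballot.index(b)
    for i in P:
        b1, b2 = P[i], P2[i]
        others = [a for a in b1 if a != x]
        # 1. relative order of candidates other than x is unchanged
        if any(above(b1, a, b) != above(b2, a, b)
               for a in others for b in others if a != b):
            return False
        # 2. x stays above everyone it was already above
        if any(above(b1, x, a) and not above(b2, x, a) for a in others):
            return False
        # 3. nobody rises above x who was not above x before
        if any(above(b2, a, x) and not above(b1, a, x) for a in others):
            return False
    return True

P  = {1: ["a", "x", "b"], 2: ["b", "a", "x"]}
P2 = {1: ["x", "a", "b"], 2: ["b", "x", "a"]}  # x lifted on both ballots
print(is_simple_lift(P, P2, "x"))  # True
```

Lowering x on any ballot, or permuting the other candidates, makes the check fail.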
Definition 5.42. A voting method F satisfies non-negative responsiveness if for every P ∈ dom(F) and x ∈ X(P), if x ∈ F(P) and P′ ∈ dom(F) is obtained from P by a simple lift of x, then x ∈ F(P′).
Proposition 5.43. Split Cycle satisfies non-negative responsiveness.
Proof. Suppose x ∈ SC(P) and P′ is obtained from P by a simple lift of x. Since x ∈ SC(P), for all y ∈ X(P), y does not defeat x in P, so Margin_P(y, x) ≤ Cycle#_P(y, x). We claim that y does not defeat x in P′ either. Since P′ is obtained from P by a simple lift of x, we have Margin_{P′}(y, x) ≤ Margin_P(y, x). If Margin_{P′}(y, x) ≤ 0, then y does not defeat x in P′, so suppose Margin_{P′}(y, x) > 0. We claim that

Cycle#_{P′}(y, x) ≥ Cycle#_P(y, x) − (Margin_P(y, x) − Margin_{P′}(y, x)). (4)

If Cycle#_P(y, x) = 0, then the right-hand side of (4) is at most 0 and (4) is immediate. Otherwise, take a simple cycle ρ = y →α x →β z1 →γ1 . . . →γn−1 zn →γn y in M(P) with Split#(ρ) = Cycle#_P(y, x). Since the simple lift of x changes only margins along edges involving x, it follows that ρ′ = y →α′ x →β′ z1 →γ1 . . . →γn−1 zn →γn y is a simple cycle in M(P′), where α′ ≤ α, β ≤ β′, and α − α′ = Margin_P(y, x) − Margin_{P′}(y, x). Hence Split#(ρ′) ≥ Split#(ρ) − (Margin_P(y, x) − Margin_{P′}(y, x)). This proves (4), which together with Margin_{P′}(y, x) ≤ Margin_P(y, x) ≤ Cycle#_P(y, x) implies Margin_{P′}(y, x) ≤ Cycle#_{P′}(y, x). Hence y does not defeat x in P′. Since y was arbitrary, we conclude that x ∈ SC(P′).
Positive and Negative Involvement
Like rejectability and resolvability, the next two criteria we consider-positive and negative involvementalso concern adding voters to an election. In this case, the concern is about perverse changes to the set of winners in light of who the new voters rank as their favorite (resp. least favorite) candidate. Recall our discussion in Section 1.2 of violations of positive or negative involvement as "strong no show paradoxes." The criterion of positive (resp. negative) involvement ensures that if x is among the winners (resp. losers) and
we add a voter who ranks x as their favorite (resp. least favorite), then x will still be a winner (resp. loser).
Definition 5.44. F satisfies positive involvement if for any profiles P ∈ dom(F) and P′ with X(P) = X(P′), V(P) ∩ V(P′) = ∅, and |V(P′)| = 1, if x ∈ F(P), P + P′ ∈ dom(F), and for i ∈ V(P′), xP′_i y for all y ∈ X(P′) \ {x}, then x ∈ F(P + P′).

F satisfies negative involvement if for any profiles P ∈ dom(F) and P′ with X(P) = X(P′), V(P) ∩ V(P′) = ∅, and |V(P′)| = 1, if x ∉ F(P), P + P′ ∈ dom(F), and for i ∈ V(P′), yP′_i x for all y ∈ X(P′) \ {x}, then x ∉ F(P + P′).
Lemma 5.45. If F satisfies positive involvement (resp. negative involvement), then it satisfies the analogous coalitional properties that drop the restriction that |V (P )| = 1.
Proof. To prove the properties for a coalition of more than one voter, add each voter in the coalition one at a time, applying positive (resp. negative involvement) at each step. This can be iterated because the property of x belonging to (resp. not belonging to) the winning set is preserved at each step.
Remark 5.46. It is important to distinguish positive and negative involvement from the participation criterion (recall Section 1.2), which we discuss further in Appendix B. It is also important that positive (resp. negative) involvement applies only when adding a voter for whom x is their unique favorite (resp. least favorite) candidate. One may consider a related criterion concerning voters for whom x is merely among their favorite (resp. least favorite) candidates.

Proposition 5.47. Split Cycle satisfies positive involvement and negative involvement.

Proof. First, consider positive involvement. We prove the contrapositive. Suppose x ∉ SC(P + P′). Hence there is a z ∈ X(P) that defeats x in P + P′, i.e., such that
Margin_{P+P′}(z, x) > Cycle#_{P+P′}(z, x). (5)

Since |V(P′)| = 1, we have

Cycle#_{P+P′}(z, x) ≥ Cycle#_P(z, x) − 1, (6)

and since xP′_i z, we have

Margin_{P+P′}(z, x) = Margin_P(z, x) − 1. (7)

It follows from (5)-(7) that Margin_P(z, x) > Cycle#_P(z, x), so x ∉ SC(P).
Next, consider negative involvement. Suppose x ∉ SC(P). Hence there is a z ∈ X(P) that defeats x in P, i.e., such that

Margin_P(z, x) > Cycle#_P(z, x). (8)

Since |V(P′)| = 1, we have

Cycle#_{P+P′}(z, x) ≤ Cycle#_P(z, x) + 1, (9)

and since zP′_i x, we have

Margin_{P+P′}(z, x) = Margin_P(z, x) + 1. (10)

It follows from (8)-(10) that Margin_{P+P′}(z, x) > Cycle#_{P+P′}(z, x), so x ∉ SC(P + P′).
Thus, with Split Cycle the strong no show paradox discussed in Section 1.2 is impossible.
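The defeat relation used in the proofs above can be computed directly from a margin matrix. Here is a brute-force sketch (our own illustration, feasible only for small candidate sets): Cycle#(a, b) is computed as the largest splitting number over simple cycles that begin with the edge a → b, and b is defeated when some margin against it exceeds the corresponding cycle number.

```python
from itertools import permutations

def split_cycle(candidates, margin):
    """Split Cycle winners from a margin matrix with margin[(a,b)] = -margin[(b,a)]."""
    def cycle_number(a, b):
        # max over simple cycles a -> b -> ... -> a of the minimum margin on the cycle
        best = 0
        rest = [c for c in candidates if c not in (a, b)]
        for r in range(len(rest) + 1):
            for mid in permutations(rest, r):
                path = [a, b, *mid, a]
                edges = list(zip(path, path[1:]))
                if all(margin[e] > 0 for e in edges):
                    best = max(best, min(margin[e] for e in edges))
        return best

    defeated = {b for a in candidates for b in candidates
                if a != b and margin[(a, b)] > cycle_number(a, b)}
    return sorted(c for c in candidates if c not in defeated)

# A majority cycle a -> b -> c -> a with margins 3, 5, 1:
m = {("a", "b"): 3, ("b", "a"): -3,
     ("b", "c"): 5, ("c", "b"): -5,
     ("c", "a"): 1, ("a", "c"): -1}
print(split_cycle(["a", "b", "c"], m))  # ['a']
```

Here only the edge c → a has margin equal to the splitting number of the cycle, so a is undefeated.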
Conclusion
In this paper, we have proposed the Split Cycle voting method, which can be distinguished from all methods we know of in any of the following three ways:
• Only Split Cycle satisfies independence of clones, positive involvement, and at least one of Condorcet consistency, monotonicity, and immunity to spoilers.
• Only Split Cycle satisfies independence of clones and negative involvement.
• Only Split Cycle satisfies independence of clones, immunity to spoilers, and rejectability.
Moreover, Split Cycle can be motivated by the three key ideas of Section 3:
1. Group incoherence raises the threshold for defeat, but not infinitely.
2. Incoherence can be localized.
3. Defeat is direct.
We think the third idea is especially important for justifying election outcomes to supporters of a candidate who was not among the winners of the election. To try to explain to supporters of a candidate x that the reason x is not among the winners is that another candidate y "defeated" x even though a majority of voters prefer x to y (as is possible with the Beat Path voting method, for example) seems a recipe for complaints of illegitimacy and resulting social instability.
A Independence of Clones
In this appendix, we prove that Split Cycle satisfies independence of clones. In the following, fix a profile P with a set C of clones and c ∈ C. Then obviously we have the following.

Lemma A.1. For all a, b ∈ X(P) \ {c}, Margin_P(a, b) = Margin_{P−c}(a, b).

Next we show that certain cycle numbers do not change from P to P−c. For this we use the following key lemma.
Lemma A.2. For any c1, c2 ∈ C with c1 ≠ c2 and simple cycle ρ in M(P) that contains c1 and some nonclone, the sequence ρ′ obtained from ρ by replacing all clones in ρ by c2 and then replacing any subsequence c2, . . . , c2 by c2 is a simple cycle in M(P) such that Split#(ρ′) ≥ Split#(ρ).

Proof. For any a ∈ X(P) \ C and d ∈ C, if a → d (resp. d → a) occurs in ρ with margin α in M(P), then by the definition of a set of clones (Definition 5.22), we have a → c2 (resp. c2 → a) with margin α in M(P). It follows that ρ′ is a simple cycle in M(P) and that the margins between successive candidates in ρ′ already occurred as margins between successive candidates in ρ, which implies Split#(ρ′) ≥ Split#(ρ).
Lemma A.3. For any a ∈ X(P) \ {c} and b ∈ X(P) \ C, we have Cycle#_P(a, b) = Cycle#_{P−c}(a, b).

Proof. First, observe that any simple cycle in M(P−c) is also a simple cycle with the same margins in M(P). Hence Cycle#_P(a, b) ≥ Cycle#_{P−c}(a, b). Second, to show Cycle#_P(a, b) ≤ Cycle#_{P−c}(a, b), one can use Lemma A.2 to replace any simple cycle in M(P) witnessing Cycle#_P(a, b) by one that avoids c without decreasing its splitting number; the resulting cycle is a simple cycle in M(P−c).

Proof. Suppose C ∩ SC(P) = ∅. Hence every clone in C is defeated in P. Since the defeat graph for P contains no cycles (Lemma 3.9), it follows that there is some a ∈ X(P) \ C that defeats some d ∈ C in P.

Theorem A.7. Split Cycle satisfies independence of clones.

Proof. By Propositions A.5 and A.6.
B Participation
In Sections 1.2 and 5.5.2 on positive and negative involvement, we mentioned the related participation criterion. Participation is usually stated for resolute voting methods: if x is the winner in a profile, and we add to the profile a new voter who strictly prefers x to y, then y is not the winner in the resulting profile. (Note that there is no requirement that x be at the top of the new voter's ballot or that y be at the bottom, a point to which we return below.) When applied to irresolute voting methods, we call this "resolute" participation. 36
36 Several authors have investigated what could be called "irresolute" participation-like criteria (recall Footnote 11), where one changes the initial assumption from F(P) = {x} to x ∈ F(P). For example, Pérez (2001) considers the following axiom, called VC-participation: if x ∈ F(P) and P′ is a one-voter profile with a new voter i having xP′_i y, then y ∈ F(P + P′) implies x ∈ F(P + P′). He then observes that no Condorcet consistent voting method satisfies VC-participation. However, it is not clear that this criterion is a plausible normative requirement on a voting method. Suppose, for example, that i's ranking is zxyw, and i's joining the election results in a change from F(P) = {x, w} to F(P + P′) = {z, y}. It is not clear that we should impose a criterion that prohibits such a change, which seems to be a strict improvement from i's point of view.

Definition B.1. A voting method F satisfies resolute participation if for any P ∈ dom(F) and P′ with X(P) = X(P′), V(P) ∩ V(P′) = ∅, |V(P′)| = 1, and P + P′ ∈ dom(F), and any x, y ∈ X(P), if F(P) = {x} and xP′_i y for i ∈ V(P′), then F(P + P′) ≠ {y}.

It turns out that for linear profiles, Split Cycle satisfies resolute participation, but for a reason unrelated to the main idea of participation, namely that Split Cycle satisfies the following stronger property.

Definition B.2. A voting method F satisfies winner continuity if for any P ∈ dom(F) and P′ with X(P) = X(P′), V(P) ∩ V(P′) = ∅, and |V(P′)| = 1, if F(P) = {x} and P + P′ ∈ dom(F), then x ∈ F(P + P′).

Note, for example, that Plurality satisfies winner continuity, while the Borda voting method does not.

Proposition B.3. Restricted to linear profiles, Split Cycle satisfies winner continuity.

Proof. Suppose SC(P) = {x}. Further suppose x ∉ SC(P + P′), so there is some z ∈ X(P) such that

Margin_{P+P′}(z, x) > Cycle#_{P+P′}(z, x). (11)

Since z ∉ SC(P), by Lemma 3.11 there are distinct y1, . . . , yn with y1 = x and yn = z such that y1 D y2 D . . . D yn−1 D yn in the defeat graph of P. Since |V(P′)| = 1, it follows from (11) that Margin_P(z, x) = 0 or Margin_P(z, x) > 0.

Case 1: Margin_P(z, x) = 0. Then since P is a linear profile, for each i ∈ {1, . . . , n − 1}, Margin_P(y_i, y_{i+1}) is even, and since y_i D y_{i+1}, it is greater than 0, so Margin_P(y_i, y_{i+1}) ≥ 2. Since |V(P′)| = 1, together Margin_P(z, x) = 0 and (11) imply Margin_{P+P′}(z, x) = 1, while Margin_{P+P′}(y_i, y_{i+1}) ≥ 1 for each i ∈ {1, . . . , n − 1}. Hence y1 → y2 → . . . → yn → y1 is a simple cycle in M(P + P′) whose splitting number is Margin_{P+P′}(z, x), so Cycle#_{P+P′}(z, x) ≥ Margin_{P+P′}(z, x), contradicting (11).

Case 2: Margin_P(z, x) > 0. Then ρ = y1 →α1 y2 →α2 . . . →αn−1 yn →β y1 is a simple cycle in M(P), where α_i = Margin_P(y_i, y_{i+1}) and β = Margin_P(z, x). Since y_i defeats y_{i+1} in P, each α_i is greater than the splitting number of ρ (as ρ, suitably rotated, witnesses Cycle#_P(y_i, y_{i+1}) ≥ Split#(ρ)); hence β, i.e., Margin_P(z, x), is the splitting number of ρ. Thus, for each i ∈ {1, . . . , n − 1}, we have Margin_P(y_i, y_{i+1}) ≥ Margin_P(z, x) + 2, since the parity of all margins must be the same, given that P is a linear profile. Since |V(P′)| = 1, it follows that there is a simple cycle

ρ′ = y1 →α′1 y2 →α′2 . . . →α′n−1 yn →β′ y1

in the margin graph of P + P′ in which β′, i.e., Margin_{P+P′}(z, x), is not greater than any α′_i. Then Cycle#_{P+P′}(z, x) ≥ β′ = Margin_{P+P′}(z, x). But this contradicts (11).
Corollary B.4. Restricted to linear profiles, Split Cycle satisfies resolute participation.
Proof. Immediate from Proposition B.3.
While positive and negative involvement entail the analogous coalitional properties (recall Lemma 5.45), resolute participation does not entail the analogous coalitional property.
Definition B.5. A voting method F satisfies resolute coalitional participation if for any P ∈ dom(F) and P′ with X(P) = X(P′), V(P) ∩ V(P′) = ∅, and P + P′ ∈ dom(F), and any x, y ∈ X(P), if F(P) = {x} and xP′_i y for all i ∈ V(P′), then F(P + P′) ≠ {y}.
Proposition B.6.
1. Split Cycle does not satisfy resolute participation on strict weak order profiles.
2. Split Cycle does not satisfy resolute coalitional participation even on linear profiles.
Proof. For part 1, by Debord's Theorem, there is a strict weak order profile P whose margin graph is shown on the left below [graphs not reproduced]. On the right, we show the margin graph of the profile P + P′ where P′ is a one-voter profile whose voter has cP′_i bP′_i dP′_i a. Although bP′_i d, we go from SC(P) = {b} to SC(P + P′) = {d}.

For part 2, by Debord's Theorem, there is a linear profile P whose margin graph is shown on the left below [graphs not reproduced]. On the right, we show the margin graph of the profile P + P′ where P′ is a two-voter profile whose two voters both have cP′_i bP′_i dP′_i a. Although both voters have bP′_i d, we go from SC(P) = {b} to SC(P + P′) = {d}.
In our view, the examples in the proof of Proposition B.6 show that participation is too strong to require.
Its violation can be rationalized as follows. In P, d is defeated by a; yet with the new voter(s) having dP i a, d is no longer defeated by a (or anyone else) in P + P . In P, b is not defeated by c (or anyone else); yet with the new voter(s) having cP i b, b becomes defeated by c in P + P . In short, the new voters help d against its main threat, a, and hurt b against its main threat, c, resulting in the change of the winning set from {b} to {d}. It does not matter, in this case, that the new voters help b against d, because b and d do not threaten to defeat each other in the presence of the cycles.
In the example used in the proof of Proposition B.6.2, there is a powerful symmetry argument: if b is the unique winner for the margin graph on the left above, then d must be the unique winner for the margin graph on the right above, assuming a neutrality property for margin graphs-that the names assigned to nodes do not matter-satisfied by Split Cycle (and the methods in Appendices C.1-C.7).
Definition B.7. A voting method F satisfies margin graph neutrality if for any profiles P and P , if there is a weighted directed graph isomorphism h : M(P) → M(P ), then F (P ) = h[F (P)].
For example, the map c → a, a → c, b → d, d → b is a weighted directed graph isomorphism from the margin graph on the left in the proof of Proposition B.6.2 to the margin graph on the right (imagine rotating the margin graph on the left by 180°; it then matches the margin graph on the right except for the names of the nodes). Despite the fact that the right margin graph is obtained from the left margin graph by adding two voters who rank b over d, candidate b on the left and candidate d on the right are in isomorphic situations.
Thus, if b is the winner on the left, d must be the winner on the right by margin graph neutrality.
Finally, note that the phenomenon with b and d above can happen only when b and d are in a cycle.
Indeed, we have the following version of participation when the two relevant candidates are cycle free.
Proposition B.8. For any profiles P and P′ with X(P) = X(P′) and V(P) ∩ V(P′) = ∅ and any x, y ∈ X(P), if x ∈ SC(P) and xP′_i y for all i ∈ V(P′), and there is no cycle in M(P) or M(P + P′) containing x and y, then y ∉ SC(P + P′).

Proof. Since x ∈ SC(P), y does not defeat x in P. Since there is no cycle in M(P) containing x and y, it follows that Margin_P(x, y) ≥ 0. Hence Margin_{P+P′}(x, y) > 0, and by hypothesis, there is no cycle containing x and y in M(P + P′). Hence x defeats y in P + P′, so y ∉ SC(P + P′).
C Other Methods
In this appendix, we give definitions of the other voting methods in Figure 2, as well as citations or proofs for the claims about their properties in the figure. Typically it is assumed that the domain of these voting methods is the domain of linear profiles, though many properties continue to hold for the domain of strict weak order profiles. To avoid repetition, we note the following: anonymity and neutrality for each method are immediate from the definitions; Pareto is obvious for all methods except GETCHA/GOCHA, for which we give an example violation; expansion consistency implies strong stability for winners, and stability for winners implies immunity to spoilers and immunity to stealers. Finally, additional examples of violations of positive involvement can be found in Holliday and Pacuit 2021b, Appendix A.
C.1 Ranked Pairs (Tideman 1987)
Let P be a profile and T a linear order on the set X(P) × X(P) of pairs of candidates (the tiebreaking ordering). We say that a pair (x, y) of candidates has a higher priority than a pair (x′, y′) of candidates using the tiebreaking ordering T when either Margin_P(x, y) > Margin_P(x′, y′), or Margin_P(x, y) = Margin_P(x′, y′) and (x, y) T (x′, y′). Given a profile P and a tiebreaking ordering T of X(P) × X(P), we construct a Ranked Pairs ranking ≻_{P,T} of X(P) according to the following procedure:

1. Initialize ≻_{P,T} to ∅.

2. If all pairs (x, y) with x ≠ y and Margin_P(x, y) ≥ 0 have been considered, then return ≻_{P,T}. Otherwise let (a, b) be the pair with the highest priority among those with a ≠ b and Margin_P(a, b) ≥ 0 that have not been considered so far.

3. If ≻_{P,T} ∪ {(a, b)} is acyclic, then add (a, b) to ≻_{P,T}; otherwise, add (b, a) to ≻_{P,T}. Go to step 2.

When the procedure terminates, ≻_{P,T} is a linear order. 37 A linear order L on X(P) is a Ranked Pairs ranking for P if L = ≻_{P,T} for some tiebreaking ordering T of X(P) × X(P). Then the set RP(P) of Ranked Pairs winners is the set of all x ∈ X(P) such that x is the maximum of some Ranked Pairs ranking for P.
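The procedure above can be sketched as follows (an illustrative implementation, not Tideman's own code; following footnote 37, it locks only pairs with positive margin and reads winners off as maximal elements, with the tiebreaking ordering T given as a list of pairs):

```python
def ranked_pairs_winners(candidates, margin, tiebreak):
    """Ranked Pairs winners for one tiebreaking ordering `tiebreak`
    (a list of ordered pairs; earlier in the list = higher priority)."""
    def reaches(locked, a, b):  # is there a path a -> ... -> b through locked pairs?
        stack, seen = [a], set()
        while stack:
            c = stack.pop()
            if c == b:
                return True
            if c not in seen:
                seen.add(c)
                stack.extend(d for (e, d) in locked if e == c)
        return False

    pairs = [(a, b) for a in candidates for b in candidates
             if a != b and margin[(a, b)] > 0]
    # highest priority first: larger margin, then earlier in the tiebreaking order
    pairs.sort(key=lambda p: (-margin[p], tiebreak.index(p)))
    locked = set()
    for (a, b) in pairs:
        if not reaches(locked, b, a):  # locking (a, b) keeps the relation acyclic
            locked.add((a, b))
    # winners: maximal elements, i.e., candidates with no locked pair against them
    return sorted(x for x in candidates if not any((y, x) in locked for y in candidates))

# Example: a majority cycle a -> b -> c -> a with margins 3, 5, 1.
m = {("a", "b"): 3, ("b", "a"): -3,
     ("b", "c"): 5, ("c", "b"): -5,
     ("c", "a"): 1, ("a", "c"): -1}
T = [(x, y) for x in "abc" for y in "abc" if x != y]  # an arbitrary tiebreaking order
print(ranked_pairs_winners(["a", "b", "c"], m, T))  # ['a']
```

The pairs (b, c) and (a, b) are locked in order of margin, and (c, a) is skipped because it would close a cycle, leaving a as the maximal element.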
C.2 Beat Path (Schulze 2011)
Let M be a margin graph. A (simple) path from x to y in M is a sequence x1, . . . , xk of distinct nodes in M where x1 = x, xk = y, and for each i ∈ {1, . . . , k − 1}, xi →αi xi+1. The strength of a path x1, . . . , xk in M is

S_M(x1, . . . , xk) = min{αi | xi →αi xi+1, 1 ≤ i ≤ k − 1}.
37 This is a standard algorithm for Ranked Pairs, but if our goal is only to select winners rather than to produce a linear order of the whole set of candidates, then to save some steps we can run the procedure above for only those pairs (x, y) with Margin_P(x, y) > 0, in line with our description of Ranked Pairs in Section 3.3. Then a winner according to ≻_{P,T} is a maximal element of ≻_{P,T}, i.e., an x for which there is no y ≻_{P,T} x, and x is a Ranked Pairs winner in P if x is a winner according to ≻_{P,T} for some tiebreaking ordering T.
Given a profile P, let Path_P(x, y) be the set of all paths from x to y in M(P). The strength of x over y in P is

Strength_P(x, y) = max{S_{M(P)}(p) | p ∈ Path_P(x, y)} if Path_P(x, y) ≠ ∅, and Strength_P(x, y) = 0 otherwise.
Then the set BP (P) of Beat Path winners is the set of all x ∈ X(P) such that there is no y ∈ X(P) such that Strength P (y, x) > Strength P (x, y).
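Path strengths can be computed with a Floyd-Warshall-style widest-path relaxation rather than by enumerating paths. A minimal sketch (our own illustration, not Schulze's reference implementation):

```python
def beat_path(candidates, margin):
    """Beat Path (Schulze) winners from a margin matrix."""
    # strength[(x, y)]: strength of the strongest path from x to y;
    # a single positive edge is a path, and 0 means no path found yet.
    strength = {(x, y): margin[(x, y)] if margin[(x, y)] > 0 else 0
                for x in candidates for y in candidates if x != y}
    for k in candidates:                      # widest-path relaxation
        for x in candidates:
            for y in candidates:
                if len({x, y, k}) == 3:
                    strength[(x, y)] = max(strength[(x, y)],
                                           min(strength[(x, k)], strength[(k, y)]))
    return sorted(x for x in candidates
                  if all(strength[(y, x)] <= strength[(x, y)]
                         for y in candidates if y != x))

# Majority cycle a -> b -> c -> a with margins 3, 5, 1:
m = {("a", "b"): 3, ("b", "a"): -3,
     ("b", "c"): 5, ("c", "b"): -5,
     ("c", "a"): 1, ("a", "c"): -1}
print(beat_path(["a", "b", "c"], m))  # ['a']
```

Here Strength(a, c) = 3 via the path a → b → c, while Strength(c, a) = 1, so a survives; on this profile Beat Path and Split Cycle agree.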
Stability criteria See Propositions 4.10 and 4.14.
Other criteria For proofs that Beat Path satisfies reversal symmetry, the Condorcet winner and loser criteria, Smith, ISDA, independence of clones, single-voter and asymptotic resolvability, and non-negative responsiveness, see Schulze 2011.

C.3 Minimax (Simpson 1969, Kramer 1977)

The set of winners for Minimax, also known as the Simpson-Kramer method, are the candidates whose largest majority loss is the smallest, i.e., for a profile P,

Minimax(P) = argmin_{x ∈ X(P)} max{Margin_P(y, x) | y ∈ X(P)}.
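A direct transcription of this definition (an illustrative sketch; we take the maximum over opponents y ≠ x):

```python
def minimax(candidates, margin):
    """Minimax (Simpson-Kramer) winners: minimize the largest margin of loss."""
    worst = {x: max(margin[(y, x)] for y in candidates if y != x) for x in candidates}
    best = min(worst.values())
    return sorted(x for x in candidates if worst[x] == best)

# In a majority cycle a -> b -> c -> a with margins 3, 5, 1, the largest
# losses are a: 1, b: 3, c: 5, so Minimax elects a.
m = {("a", "b"): 3, ("b", "a"): -3,
     ("b", "c"): 5, ("c", "b"): -5,
     ("c", "a"): 1, ("a", "c"): -1}
print(minimax(["a", "b", "c"], m))  # ['a']
```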
Stability criteria For the satisfaction of immunity to spoilers, if a ∈ Minimax(P−b), Margin_P(a, b) > 0, and b ∉ Minimax(P), then a must still be among the candidates in P whose largest majority loss is smallest, so a ∈ Minimax(P). For the violation of partial immunity to stealers, see Proposition 4.14.
Other criteria See Felsenthal 2012 for violations of reversal symmetry (under 'preference inversion') and the Condorcet loser criterion (also shown in the proof of Proposition 4.14), as well as proofs of the Condorcet winner and non-negative responsiveness criteria. Violation of independence of clones is discussed in Tideman 1987. For violations of Smith and hence ISDA, see Darlington 2016, p. 10. For the satisfaction of single-voter resolvability, see Tideman 1987, and for asymptotic resolvability, the argument is the same as given for Ranked Pairs in Appendix C.1.
Fact C.1. Minimax satisfies rejectability.
Proof. Given x ∈ M inimax(P), modify the margin graph M(P) to M such that for all y ∈ X(P) \ {x}, (i) if there is no edge from x to y in M(P), add an edge from x to y in M , and (ii) increase the weights of all incoming edges to y to be larger than the largest majority loss of x in P, such that all weights in M have the same parity as the weights in M(P). Then x is clearly the unique Minimax winner in M , and M is the margin graph of a profile P (which is linear if P is) by Debord's Theorem. Finally, since Minimax clearly satisfies the overwhelming majority criterion (recall Lemma 5.29), P may be used to obtain the P + required for rejectability as in the proof of Proposition 5.30.
For the satisfaction of positive and negative involvement, see Pérez 2001, p. 613.

C.4 Copeland (Copeland 1951)

The Copeland score of a candidate x is the number of candidates to whom x is majority preferred minus the number majority preferred to x. The Copeland winners are the candidates with maximal Copeland score:

Copeland(P) = argmax_{x ∈ X(P)} (|{y ∈ X(P) | Margin_P(x, y) > 0}| − |{y ∈ X(P) | Margin_P(y, x) > 0}|).
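A direct transcription (our own illustration). Run on the majority graph of Example C.4 below (a → b, b → c, b → d, all other margins zero), it returns {a, b}:

```python
def copeland(candidates, margin):
    """Copeland winners: maximize wins minus losses in the majority graph."""
    score = {x: sum(1 for y in candidates if y != x and margin[(x, y)] > 0)
                - sum(1 for y in candidates if y != x and margin[(y, x)] > 0)
             for x in candidates}
    top = max(score.values())
    return sorted(x for x in candidates if score[x] == top)

# Majority graph with a -> b, b -> c, b -> d and no other edges:
m = {(x, y): 0 for x in "abcd" for y in "abcd" if x != y}
m.update({("a", "b"): 1, ("b", "a"): -1,
          ("b", "c"): 1, ("c", "b"): -1,
          ("b", "d"): 1, ("d", "b"): -1})
print(copeland(list("abcd"), m))  # ['a', 'b']
```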
Stability criteria Fact C.2. Copeland satisfies immunity to spoilers.
Proof. If a ∈ Copeland(P−b), so a's Copeland score is maximal in P−b, and Margin_P(a, b) > 0, then a's Copeland score in P is maximal among the original candidates in X(P−b); if in addition b ∉ Copeland(P), then a's Copeland score in P is maximal among all candidates in X(P), so a ∈ Copeland(P).
However, if we do not assume b ∉ Copeland(P), then it is easy to construct a profile P in which b has a higher Copeland score in P than a does (this requires |X(P)| ≥ 6 if M(P) is a tournament and |X(P)| ≥ 5 otherwise), so that a ∉ Copeland(P). Thus, Copeland violates immunity to stealers.
Fact C.3. Copeland satisfies partial immunity to stealers.
Proof. Suppose a is the unique Condorcetian candidate in P, but b steals the election from a in P, so (i) a ∈ Copeland(P−b), (ii) a →_P b, (iii) a ∉ Copeland(P), and (iv) b ∈ Copeland(P). Since a's Copeland score is maximal in P−b by (i) and increases by 1 from P−b to P by (ii), together (iii) and (iv) imply that b has the maximum Copeland score in P, so Copeland(P) = {b}, and there is some c ∈ X(P) with b →_P c. Now since b is not Condorcetian, for any c ∈ X(P) such that b →_P c, we have b ∉ Copeland(P−c); then since b's Copeland score decreases by only 1 from P to P−c, it follows from b ∉ Copeland(P−c) that there is a d ∈ Copeland(P−c) whose Copeland score in P is one less than that of b in P. We claim that d →_P b. For suppose not. Then since d's Copeland score in P, which is one less than that of b, is at least that of a by (iii)-(iv), and d's Copeland score does not decrease from P to P−b given that d does not beat b, whereas a's Copeland score does decrease from P to P−b by (ii), it follows that d's Copeland score is greater than that of a in P−b, contradicting (i). Thus, d →_P b. But then since d's Copeland score is at least that of a in P, it follows by (i)-(ii) that d ∈ Copeland(P−b), which with d →_P b implies that d is Condorcetian, contradicting the assumption that a is the unique Condorcetian candidate in P.
However, Copeland does not satisfy stability for winners with tiebreaking, as shown by the following.
Example C.4. Consider a profile P with X(P) = {a, b, c, d} whose majority graph M (P) has a → b, b → c, and b → d, but no other edges; then a is the unique Condorcetian candidate, but Copeland(P) = {a, b}.
Other criteria It is easy to see that Copeland satisfies reversal symmetry, the Condorcet winner and loser criteria, Smith, ISDA, and non-negative responsiveness. Copeland also fails independence of clones.

C.5 GETCHA (Smith 1973)

For the definition of GETCHA, see Definition 5.10 in Section 5.2.3.
Stability criteria It is easy to see that GETCHA satisfies expansion consistency.
Other criteria To see that GETCHA fails Pareto, consider the following.
Example C.5. In the following profile, all voters prefer a to x, but x is among the GETCHA winners: [profile not reproduced]

It is easy to see that GETCHA satisfies the Condorcet winner and loser criteria, as well as non-negative responsiveness. For reversal symmetry, note that if P is a profile with GETCHA(P) = {x}, then x is a Condorcet winner. Thus, x is a Condorcet loser in P^r, so x ∉ GETCHA(P^r). It is also not difficult to see that GETCHA satisfies independence of clones, using an alternative characterization of GETCHA. Given a profile P, let a ;_P b mean that Margin_P(a, b) ≥ 0. Let ;*_P be the transitive closure of ;_P.
Lemma C.6 (Schwartz 1986, Corollary 6.2.2). For any profile P,

GETCHA(P) = {x ∈ X(P) | for all y ∈ X(P): x ;^*_P y}.
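Lemma C.6 suggests a direct way to compute GETCHA. The following sketch (a hypothetical helper, not the paper's code) builds the transitive closure of ;_P with a Warshall-style pass; it is checked against the two-voter profile of Example C.7 below:

```python
from itertools import product

# Sketch of GETCHA via Lemma C.6 (ours, not the paper's code): x wins iff
# x reaches every candidate under the transitive closure of a ;_P b,
# i.e., Margin_P(a, b) >= 0.
def getcha(candidates, margin):
    cands = sorted(candidates)
    reach = {(x, y): margin(x, y) >= 0 for x, y in product(cands, repeat=2) if x != y}
    for k, i, j in product(cands, repeat=3):  # Warshall-style transitive closure
        if len({i, j, k}) == 3 and reach[(i, k)] and reach[(k, j)]:
            reach[(i, j)] = True
    return {x for x in cands if all(reach[(x, y)] for y in cands if y != x)}

# Two-voter profile of Example C.7: voter i has a > b > c, voter j has c > a > b.
margins = {("a", "b"): 2, ("b", "a"): -2, ("a", "c"): 0, ("c", "a"): 0,
           ("b", "c"): 0, ("c", "b"): 0}
print(sorted(getcha({"a", "b", "c"}, lambda x, y: margins[(x, y)])))  # ['a', 'b', 'c']
```

With the ties counted as ; edges, every candidate reaches every other, so all three candidates are GETCHA winners, as Example C.7 states.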
By definition, GETCHA satisfies the Smith criterion, and we proved ISDA in Footnote 32. For the failure of rejectability and single-voter resolvability, see Proposition 5.28. The failure of asymptotic resolvability for k ≥ 3 follows from the fact that GETCHA selects a unique winner only if there is a Condorcet winner, and for 3 or more candidates, the proportion of profiles with a Condorcet winner does not go to 1 as the number of voters goes to infinity (DeMeyer and Plott 1970). To see that GETCHA fails positive and negative involvement, consider the following examples.
Example C.7. In a three-candidate, two-voter profile P with aP_i bP_i c and cP_j aP_j b, GETCHA(P) = {a, b, c}, yet adding one voter k such that bP_k aP_k c results in a profile P′ in which a is the Condorcet winner, so GETCHA(P′) = {a}.
Example C.8. Consider any three-candidate profile P with a Condorcet winner a, so GETCHA(P) = {a}, in which Margin_P(a, c) = 1; then adding a voter with the ranking cP_i aP_i b results in a profile P′ in which Margin_{P′}(a, b) > 0 but Margin_{P′}(a, c) = 0, which implies GETCHA(P′) = {a, b, c}.
C.6 GOCHA (Schwartz 1986)

For the definition of GOCHA, see Definition 5.13 in Section 5.2.3.
Stability criteria
It is easy to see that GOCHA satisfies stability for winners using Lemma 5.14. The proof of Proposition 5.16 shows that GOCHA does not satisfy strong stability for winners.
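For concreteness, GOCHA can be computed directly from its definition: a candidate wins unless some candidate reaches it under the transitive closure of the strict majority relation without being reached back. A brute-force sketch (ours, not the paper's code), assuming a margin function on ordered pairs:

```python
from itertools import product

# Brute-force sketch of GOCHA (ours, not the paper's code): x wins unless some
# y reaches x under the transitive closure of the strict majority relation ->
# while x does not reach y back.
def gocha(candidates, margin):
    cands = sorted(candidates)
    reach = {(x, y): margin(x, y) > 0 for x, y in product(cands, repeat=2) if x != y}
    for k, i, j in product(cands, repeat=3):  # Warshall-style transitive closure
        if len({i, j, k}) == 3 and reach[(i, k)] and reach[(k, j)]:
            reach[(i, j)] = True
    return {x for x in cands
            if not any(reach[(y, x)] and not reach[(x, y)] for y in cands if y != x)}

# A three-candidate majority cycle: every candidate is in GOCHA.
cycle = {("a", "b"): 2, ("b", "c"): 2, ("c", "a"): 2,
         ("b", "a"): -2, ("c", "b"): -2, ("a", "c"): -2}
print(sorted(gocha({"a", "b", "c"}, lambda x, y: cycle[(x, y)])))  # ['a', 'b', 'c']
```

With a Condorcet winner the same function returns a singleton, since every other candidate is reached from the winner but not conversely.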
Other criteria For the violation of Pareto, the same example given for GETCHA in Section C.5 works for GOCHA. It is easy to see that GOCHA satisfies reversal symmetry, the Condorcet winner and loser criteria, and non-negative responsiveness (see Felsenthal 2012). It is well known that GOCHA satisfies the Smith criterion, and ISDA can be proved using Lemma 5.14. 38 For the satisfaction of independence of clones, see Tideman 1987. GOCHA fails rejectability, single-voter resolvability, and asymptotic resolvability for k ≥ 3 by the same reasoning as for GETCHA, using the fact that in the limit as the number of voters goes to infinity, GOCHA is equivalent to GETCHA. For the failure of positive involvement and negative involvement, see Pérez 2001, § 4.1.

C.7 Uncovered Set (Fishburn 1977; Miller 1980)

The Uncovered Set in voting is usually attributed to Fishburn (1977) and Miller (1980), though the covering relation appears in earlier game-theoretic work of Gillies (1959). Fishburn defined his version of the Uncovered Set for arbitrary margin graphs, whereas Miller defined his only for tournaments, i.e., directed graphs in which the edge relation → is not only asymmetric but also weakly complete: for all distinct nodes x, y, either x → y or y → x. Fishburn and Miller's definitions are equivalent for tournaments but not for margin graphs that are not weakly complete, which may arise from profiles with an even number of voters or non-linear ballots. Several non-equivalent definitions of the Uncovered Set for arbitrary margin graphs appear in the literature (see Bordes 1983; Peris and Subiza 1999; Penn 2006; Duggan 2013), and some of these versions differ in their axiomatic properties. As examples, we will consider the versions due to Fishburn and Gillies.
Given a margin graph M and nodes x, y in M, say that y left-covers x in M if for all nodes z in M, if z → y, then z → x. 39 Then the Fishburn and Gillies versions of the Uncovered Set are defined by: UC_Fish(P) = {x ∈ X(P) | there is no y ∈ X(P): y left-covers x but x does not left-cover y in M(P)}; UC_Gill(P) = {x ∈ X(P) | there is no y ∈ X(P): y → x and y left-covers x in M(P)}.
Note that UC_Fish(P) ⊆ UC_Gill(P). A useful alternative characterization of UC_Gill is given by the following "two-step" principle (see, e.g., Duggan 2013, Proposition 12(ii)): x ∈ UC_Gill(P) if and only if for all y ∈ X(P) \ {x}, Margin_P(x, y) ≥ 0 or there is a z ∈ X(P) such that Margin_P(x, z) ≥ 0 and Margin_P(z, y) > 0.

38 Suppose a ∉ GOCHA(P −x ), so there is a b ∈ X(P −x ) such that b →*_{P −x} a but not a →*_{P −x} b. Then b →*_P a. If it is not the case that a →*_P b, then we are done: a ∉ GOCHA(P). If a →*_P b, then given that it is not the case that a →*_{P −x} b, it follows that x is on the path witnessing a →*_P b, which together with b →*_P a implies that there is a path from x to a in M(P). Then since x ∉ GETCHA(P) and there can be no path from a candidate outside GETCHA(P) to one inside GETCHA(P), it follows that a ∉ GETCHA(P). Hence a ∉ GOCHA(P), since GOCHA satisfies the Smith criterion. Conversely, suppose a ∉ GOCHA(P), so there is a b ∈ X(P) such that b →*_P a but not a →*_P b. If x is not on the path witnessing b →*_P a, then b →*_{P −x} a but not a →*_{P −x} b, so we are done: a ∉ GOCHA(P −x ). If x is on the path witnessing b →*_P a, then since x ∉ GETCHA(P) and there can be no path from a candidate outside GETCHA(P) to one inside GETCHA(P), it follows that a ∉ GETCHA(P). Hence by ISDA for GETCHA, a ∉ GETCHA(P −x ) and hence a ∉ GOCHA(P −x ), since GOCHA satisfies the Smith criterion.

39 Miller's (1980) definition uses the right-sided version: y right-covers x in M if for all z, if x → z, then y → z. If → is weakly complete, then left-covering and right-covering are equivalent.
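The two-step principle translates directly into code. The sketch below (ours, not the paper's code) tests it on hypothetical margins for the majority graph of Example C.4, taking each edge to have margin 2 and all other pairs to be tied:

```python
# Sketch of UC_Gill via the two-step principle (ours, not the paper's code):
# x wins iff for every other y, Margin(x, y) >= 0, or there is a z with
# Margin(x, z) >= 0 and Margin(z, y) > 0.
def uc_gill(candidates, margin):
    def two_step(x, y):
        if margin(x, y) >= 0:
            return True
        return any(margin(x, z) >= 0 and margin(z, y) > 0
                   for z in candidates if z not in (x, y))
    return {x for x in candidates
            if all(two_step(x, y) for y in candidates if y != x)}

# Hypothetical margins for the majority graph of Example C.4: each edge has
# margin 2, all other pairs are tied (margin 0).
edges = {("a", "b"): 2, ("b", "a"): -2, ("b", "c"): 2, ("c", "b"): -2,
         ("b", "d"): 2, ("d", "b"): -2}
mar = lambda x, y: edges.get((x, y), 0)
print(sorted(uc_gill({"a", "b", "c", "d"}, mar)))  # ['a', 'c', 'd']
```

On these margins b is excluded: b is majority-defeated by a, and no candidate z with Margin(b, z) ≥ 0 has a positive margin over a.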
Stability criteria
The method UC_Fish satisfies stability for winners and hence immunity to spoilers, since if Margin_P(a, b) > 0, then b does not left-cover a. However, it violates strong stability for winners.
Example C.10. Consider the profile P below, where UC_Fish(P −b ) = {a, c, d}, Margin_P(a, b) ≥ 0, but UC_Fish(P) = {b}.

Hence UC_Fish also violates expansion consistency. By contrast, UC_Gill satisfies expansion consistency, as one can easily see using the two-step characterization above.

Fact C.11. UC_Fish and UC_Gill satisfy ISDA.
Other criteria
Proof. Suppose x ∈ X(P) \ GETCHA(P) and UC ∈ {UC_Fish, UC_Gill}. To see that UC(P −x ) ⊆ UC(P), suppose a ∈ UC(P −x ). By the Smith criterion for UC and ISDA for GETCHA, UC(P −x ) ⊆ GETCHA(P −x ) = GETCHA(P), so together a ∈ UC(P −x ) and x ∈ X(P) \ GETCHA(P) imply Margin_P(a, x) > 0, so a ∈ UC(P) by stability for winners. Now we claim that UC(P) ⊆ UC(P −x ). By definition, a ∈ UC_Gill(P) (resp. a ∈ UC_Fish(P)) if and only if for all b ∈ X(P) \ {a}, we have a ; b (resp. a left-covers b) in P or there is a c ∈ X(P) such that a ; c → b. Now suppose a ∈ UC(P). To show a ∈ UC_Gill(P −x ) (resp. a ∈ UC_Fish(P −x )), we must show that for any b ∈ X(P −x ) \ {a}, we have a ; b (resp. a left-covers b) in P −x or there is a c ∈ X(P −x ) such that a ; c → b. Since a ∈ UC_Gill(P) (resp. a ∈ UC_Fish(P)), we have a ; b (resp. a left-covers b) in P or there is a c ∈ X(P) such that a ; c → b. If a ; b (resp. a left-covers b) in P, then this holds in P −x , so we are done. So suppose it is not the case that a ; b (resp. a left-covers b) in P, but instead there is a c ∈ X(P) such that a ; c → b. For UC_Gill, since it is not the case that a ; b, we have b → a. Then since a ∈ UC_Gill(P) ⊆ GETCHA(P), it follows from c → b → a that c ∈ GETCHA(P), so c ≠ x. For UC_Fish, since a ∈ UC_Fish(P) ⊆ GETCHA(P), a left-covers every candidate in P outside GETCHA(P). Hence from our assumption that a does not left-cover b in P, it follows that b ∈ GETCHA(P). Then since c → b, we have c ∈ GETCHA(P), so c ≠ x. Thus, in either case, c ∈ X(P −x ) and a ; c → b, so we are done.

C.8 Instant Runoff

eliminated at any stage of the iteration procedure starting from P + P′, so x ∈ IRV(P + P′). For the failure of negative involvement, see Fishburn and Brams 1983 (under the "no show paradox").
C.9 Plurality
The Plurality score of a candidate is the number of voters who rank that candidate uniquely in first place.
The Plurality voting method selects as winners all candidates whose Plurality scores are maximal. The problems with Plurality are well known (see Laslier 2012).
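For completeness, a minimal sketch of Plurality on linear ballots (ours, not the paper's code):

```python
from collections import Counter

# Minimal sketch of Plurality on linear ballots (ours, not the paper's code).
def plurality_winners(ballots):
    """ballots: list of rankings (tuples, most preferred first)."""
    firsts = Counter(ballot[0] for ballot in ballots)
    cands = {c for ballot in ballots for c in ballot}
    best = max(firsts.get(c, 0) for c in cands)
    return {c for c in cands if firsts.get(c, 0) == best}

print(plurality_winners([("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c")]))  # {'a'}
```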
Stability criteria Example 1.1 shows that Plurality violates immunity to spoilers. If we consider removing Bush from the election instead of Nader, then it also shows that Plurality violates immunity to stealers.
Other criteria For the failure of reversal symmetry, see Felsenthal 2012. It is well known that Plurality violates the Condorcet winner and Condorcet loser criteria (again see Felsenthal 2012). It is clear that Plurality satisfies non-negative responsiveness. The failure of the Smith and ISDA criteria follows from the failure of the Condorcet winner criterion. Example 1.1 shows that Plurality does not satisfy independence of clones. The satisfaction of single-voter and asymptotic resolvability and positive and negative involvement is obvious. That Plurality satisfies rejectability follows from Lemma 5.37, given that it satisfies single-voter resolvability and, clearly, homogeneity.
D Frequency of Irresoluteness
The graphs in this appendix show the frequency with which Split Cycle and several other voting methods select more than one winner, as well as the sizes of the winning sets conditional on there being more than one winner, before tiebreaking. Thus, they show how often a tiebreaking procedure must be applied.
We evaluated the voting methods on profiles with different numbers of candidates and voters, using different probability models to generate profiles. Probability models for generating linear profiles are more common, so in this section all profiles are linear. Below we explain the probability models, the different types of graphs, and some conclusions drawn from the data.
Probability models We considered several probability models for generating profiles with n candidates and m voters. According to the impartial culture (IC) model, each such profile is equally likely. Equivalently, each voter chooses a linear order of the n candidates at random, and the voters' choices are independent.
In the Pólya-Eggenberger urn model (Berg 1985), to generate a profile given a parameter α ∈ [0, ∞), each voter in turn randomly draws a linear order from an urn. Initially the urn is the set of all linear orders of the n candidates. If a voter randomly chooses L from the urn, we return L to the urn plus αn! copies of L. IC is the special case where α = 0. The Impartial Anonymous Culture (IAC) is the special case where α = 1/n!. Following Boehmer et al. 2021, for each generated profile, we chose α according to a Gamma distribution with shape parameter k = 0.8 and scale parameter θ = 1 for the model we call "urn."
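The urn process can be sketched as follows (ours, not the paper's code; for simplicity the number of added copies is rounded down to an integer):

```python
import random
from itertools import permutations

# Sketch of the Pólya-Eggenberger urn model (ours, not the paper's code).
# After each draw, the drawn order is returned to the urn together with
# alpha * n! extra copies (rounded down to an integer for simplicity).
# alpha = 0 gives IC; alpha = 1/n! gives IAC.
def urn_profile(candidates, num_voters, alpha, rng=random):
    base = list(permutations(candidates))
    extra = int(alpha * len(base))
    urn = list(base)
    profile = []
    for _ in range(num_voters):
        ballot = rng.choice(urn)
        profile.append(ballot)
        urn.extend([ballot] * extra)
    return profile

random.seed(0)
profile = urn_profile(["a", "b", "c"], 5, alpha=1 / 6)
print(len(profile))  # 5
```

With alpha = 0 the urn never changes, so the voters' ballots are independent uniform draws, recovering the IC model.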
In the Mallows model (see Mallows 1957; Marden 1995), to generate a profile, the main idea is to fix a reference linear ordering of the candidates and to assign to each voter a ranking that is "close" to this reference ranking. Closeness to the reference ranking is defined using the Kendall tau distance between rankings, depending on a dispersion parameter φ. Setting φ = 0 means that every voter is assigned the reference ranking, and setting φ = 1 is equivalent to the IC model. Formally, to generate a profile given a reference ranking L_0 of the set X of candidates and φ ∈ (0, 1], the probability that a voter's ballot is the linear order L of X is Pr_{L_0, φ}(L) = φ^{τ(L, L_0)} / C, where τ(L, L_0) = |X|(|X| − 1)/2 − |L ∩ L_0| is the Kendall tau distance of L to L_0, and C is a normalization constant. For each profile, we chose φ by first randomly selecting what Boehmer et al. (2021) call a rel-φ value, which together with the number of candidates determines φ. See Boehmer et al. 2021 for details on this parameterization of the Mallows model in terms of rel-φ values.
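Since the number of candidates is small here, one can sample from the Mallows distribution by brute-force enumeration of all rankings. A sketch (ours, not the paper's code), where kendall_tau counts discordant pairs, matching τ(L, L_0) above:

```python
import random
from itertools import permutations, combinations

# Sketch of Mallows sampling by enumeration (ours, not the paper's code);
# feasible only for small numbers of candidates.
def kendall_tau(L, L0):
    """Number of pairs of candidates ordered differently in L and L0."""
    pos = {c: i for i, c in enumerate(L)}
    return sum(1 for a, b in combinations(L0, 2) if pos[a] > pos[b])

def mallows_ballot(reference, phi, rng=random):
    """Sample one ballot with Pr(L) proportional to phi ** kendall_tau(L, reference)."""
    orders = list(permutations(reference))
    weights = [phi ** kendall_tau(L, reference) for L in orders]
    return rng.choices(orders, weights=weights)[0]

print(kendall_tau(("c", "b", "a"), ("a", "b", "c")))  # 3
```

With φ = 1 all rankings get equal weight (IC), and as φ shrinks the mass concentrates on the reference ranking.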
In addition to generating profiles using a single reference ranking L 0 , we considered generating profiles using two reference rankings, which are the reverse of each other. E.g., L 0 ranks candidates from more liberal to more conservative, while L −1 0 ranks candidates in the opposite order. The set of voters is divided into two groups, each associated with one of the reference rankings. Each voter is equally likely to be assigned to either of the two groups. Formally, the probability that a voter's ballot is L is 1 2 P r L0,φ (L) + 1 2 P r L −1 0 ,φ (L).
Types of graphs We include three types of graphs with data from simulated elections. For all three types, for each data point in a graph, we sampled 25,000 profiles with an even number n of voters and 25,000
profiles with the next odd number n + 1 of voters, displayed in the graph below the even number, in order to have a mix of even and odd-sized electorates.
The first type of graph concerns the frequency of irresoluteness. The graphs on the left of Figures 4, 5, 6, and 7 show the frequency of multiple winners for several voting methods as the number of candidates ranges from 5 to 30 and the number of voters ranges from 4 to 5,001. On the right of the figures, we use the boxen plot or letter-valued plot (Hofmann et al. 2017) representation of the quantiles of the sizes of the winning sets for profiles with multiple winners. The black dots outside of the boxes are the "outliers."
The second type of graph concerns the frequency of different winners. This can be understood in two ways. First, how often do two methods produce different sets of tied winners? Second, if we assume that given a set of tied winners, the ultimate winner will be chosen randomly according to a uniform probability distribution, how often do two methods not only produce different sets of tied winners but also produce different ultimate winners after random tiebreaking? Figure 8 shows the frequency of different winners for Split Cycle vs. Beat Path in these two senses according to several probability models. This is related to irresoluteness because in either of the two senses of having different winners, Split Cycle can differ from Beat Path only when Split Cycle outputs multiple winners before tiebreaking.
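Under uniform random tiebreaking, the per-profile probability that two methods' winner sets A and B yield different ultimate winners is 1 − |A ∩ B| / (|A| · |B|). A one-line sketch (a hypothetical helper, not the paper's code):

```python
from fractions import Fraction

# Per-profile probability of different ultimate winners under uniform random
# tiebreaking (a hypothetical helper, not the paper's code).
def prob_different_winner(A, B):
    return 1 - Fraction(len(set(A) & set(B)), len(A) * len(B))

# E.g., if Split Cycle outputs {a, c} and Beat Path outputs {c}:
print(prob_different_winner({"a", "c"}, {"c"}))  # 1/2
```

Note this is the probability for a single profile; the figures report frequencies over sampled profiles.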
Choice of methods We selected the voting methods with which to compare Split Cycle as follows. In light of Section 3.3, a key comparison is to the most well-known refinements of Split Cycle, namely Beat Path and Ranked Pairs. However, we did not include Ranked Pairs due to the computational difficulty of determining the Ranked Pairs winners for elections with small numbers of voters (see Brill and Fischer 2012); but for those combinations of voters and candidates for which we were able to compute Ranked Pairs for all sampled profiles, Ranked Pairs was similar to Beat Path in irresoluteness. We chose to include Copeland because like Split Cycle, GETCHA, and GOCHA, Copeland does not satisfy the resolvability criteria of Definitions 5.34
and 5.38, yet Copeland is one of the most discriminating of all C1 voting methods (recall Section 5.4.1 for the definition of C1, and see Brandt and Seedig 2014 on the discriminating power of different C1 methods).
As for our choice to include Uncovered Set, 41 it follows from a result of Moulin (1986, Theorem 1) that for any C1 voting method F satisfying neutrality and expansion consistency and linear profile P with an odd number of voters, U C(P) ⊆ F (P) (for an analogous result for an even number of voters, using a definition of the Uncovered Set that satisfies expansion consistency, see Peris and Subiza 1999, Theorem 1). Thus, by comparing Split Cycle to the Uncovered Set, we are comparing Split Cycle to the most discriminating of all C1 methods satisfying expansion consistency, and by comparing Split Cycle to Copeland, we are comparing Split Cycle to one of the most discriminating of all C1 methods.
Discussion We highlight the following takeaway points about the results of our simulations:
• The IC model can be viewed as a "worst case scenario" for irresoluteness (cf. Tsetlin et al. 2003), so we expect that in practice the frequency of multiple winners will be substantially lower. We also ran our simulations for the IAC model but the graphs were almost indistinguishable from those of IC, so we omit them here. By contrast, we also tried one, two, and three-dimensional spatial models and a single-peaked model from Boehmer et al. 2021, and in these models, all of the methods were essentially resolute except with small numbers of voters, so we omit these graphs as well.
• The boxen plots show that when there are multiple winners, generally there are very few for Split
Cycle, Beat Path, and Copeland, whereas there are significantly more for Uncovered Set and many for GETCHA. This pattern holds across the different probability models for profiles.
• Unlike for the other voting methods, for Copeland the proportion of profiles with multiple winners actually increases slightly as we increase the number of voters from 4 to 5,001, except under the Mallows model with one reference ranking, in which all methods tend toward resoluteness.
• Unlike for the other voting methods, for Copeland the proportion of profiles with multiple winners is largely insensitive to the number of candidates-in fact, it decreases slightly from 10 to 30 candidates under the IC model.
• Graphs for the urn model appear roughly as compressed versions of the graphs for the IC model, but there are some qualitative differences. For example, Split Cycle compares more favorably with Copeland in terms of irresoluteness according to the urn model than according to IC: for 20 candidates and 5,000/5,001 voters, Copeland is more resolute than Split Cycle according to the IC model but has about the same frequency of irresoluteness as Split Cycle according to the urn model.
• For the Mallows model, the difference between one and two reference rankings is striking. With only one reference ranking, representing a society in which voters gravitate to different degrees toward a single ranking of the candidates, all of the methods are nearly resolute by 5,000/5,001 voters. By contrast, with two reversed reference rankings, representing a society in which voters are divided into two groups gravitating toward reversed rankings, the results are not far from those of the IC model.
14 Cf. Tideman (1987, p. 206): "The GOCHA rule, in a sense, is only half a voting rule. It does not address the issue of what should be done to resolve cycles."
Example 3.8. For a more complicated example, consider the following margin graph, repeated three times to highlight the three different simple cycles: The splitting number of the cycle b → d → c → b is 4; the splitting number of the cycle b → a → c → b is 6; and the splitting number of the cycle b → a → d → c → b is 4. In each cycle, the edge with the smallest margin in that cycle is not a defeat. After discarding these edges (i.e., the b → d edge in the red cycle, the a → c edge in the blue cycle, and the a → d edge in the green cycle), the remaining edges are defeats: Since d is the only undefeated candidate, d is the unique Split Cycle winner.
Fact 4.3. Stability for winners is equivalent to the conjunction of immunity to spoilers and stealers.
Both a and c are undefeated according to Split Cycle; Ranked Pairs picks only a; and Beat Path picks only c.
Remark 4.11. For P −b as in the proof of Proposition 4.10, Split Cycle also picks {a}, so Ranked Pairs and Stable Voting do as well, while Minimax picks the Condorcet loser {c}. In P, Split Cycle picks {a, c}, Ranked Pairs and Stable Voting pick {a}, and Minimax picks {c}.
First, we claim that RP(P −b ) = {a}. We lock in edges in the following order: (a, e), (e, c), (c, d); then ignore (d, a), since locking it in would create a cycle (aecda); then (a, c), (e, d). The only candidate with no incoming edge locked in is a, so indeed RP(P −b ) = {a}. Then since Margin_P(a, b) > 0, a is Condorcetian for Ranked Pairs in P. Next, we claim that RP(P) = {c}. We lock in edges in the following order: (a, b), (a, e), (b, e), (c, b); then ignore (e, c), since locking it in would create a cycle (ecbe); then (d, c), (d, a); then ignore (a, c), since locking it in would create a cycle (acda); also ignore (e, d), since locking it in would create a cycle (daed); finally, (d, b). The only candidate with no incoming edge locked in is c, so indeed RP(P) = {c}.
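The locking procedure described above can be sketched as follows (ours, not the paper's code); replaying the edge order given for P −b recovers RP(P −b ) = {a}:

```python
# Sketch of the Ranked Pairs locking procedure (ours, not the paper's code):
# process edges in order, lock an edge unless it would create a cycle, and
# return the candidates with no locked incoming edge.
def ranked_pairs_lock(candidates, edges_in_order):
    locked = set()

    def reachable(src, dst):  # is there a locked path src -> ... -> dst?
        stack, seen = [src], set()
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            if u not in seen:
                seen.add(u)
                stack.extend(v for (w, v) in locked if w == u)
        return False

    for (u, v) in edges_in_order:
        if not reachable(v, u):  # locking u -> v is safe iff v cannot reach u
            locked.add((u, v))
    return {x for x in candidates if not any(v == x for (_, v) in locked)}

# The edge order described above for P_{-b}:
order = [("a", "e"), ("e", "c"), ("c", "d"), ("d", "a"), ("a", "c"), ("e", "d")]
print(ranked_pairs_lock({"a", "c", "d", "e"}, order))  # {'a'}
```

The edge (d, a) is skipped because a → e → c → d is already locked, exactly as in the proof.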
Remark 4.13. For P −b in the proof of Proposition 4.12, Split Cycle, Beat Path, Stable Voting, and Minimax all pick {a}. In P, Split Cycle picks {a, c}, while Beat Path, Stable Voting, and Minimax all pick {a}.
Let M′ be a margin graph obtained from M(P) by changing all of b's negative margins (if any) against candidates in X(P) \ {a} to be positive and such that all margins between distinct candidates in M′ are distinct, while keeping all other margins from M(P) the same. Let P′ be a profile with M(P′) = M′, which exists by Theorem 2.5. Since b ∈ RP(P)
Remark 4.19. In the profile used in the proof of Proposition 4.18, Ranked Pairs and Stable Voting select only a, while Beat Path and Minimax select only b. Here Beat Path violates partial immunity to stealers, as a is the only Condorcetian candidate for Beat Path and yet b steals the election from a.
Proposition 4.23. Split Cycle satisfies expansion consistency.

Proof. The proof is similar to that of Proposition 4.17. If a ∈ SC(P|_Y) ∩ SC(P|_Z), then for all c ∈ Y, Margin_{P|Y}(c, a) ≤ Cycle#_{P|Y}(c, a), and for all c ∈ Z, Margin_{P|Z}(c, a) ≤ Cycle#_{P|Z}(c, a). Clearly for all c ∈ Y, Margin_P(c, a) = Margin_{P|Y}(c, a), and for all c ∈ Z, Margin_P(c, a) = Margin_{P|Z}(c, a). Moreover, for all c ∈ Y, Cycle#_{P|Y}(c, a) ≤ Cycle#_P(c, a), and for all c ∈ Z, Cycle#_{P|Z}(c, a) ≤ Cycle#_P(c, a). Thus, for all c ∈ Y ∪ Z = X(P), Margin_P(c, a) ≤ Cycle#_P(c, a). Hence a ∈ SC(P).
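For small margin graphs, Split Cycle can be computed by brute force over simple cycles (the paper's repository at https://github.com/epacuit/splitcycle has real code; the sketch and margin values below are ours):

```python
from itertools import permutations

# Brute-force sketch of Split Cycle (ours; margin values are hypothetical).
# a defeats b iff Margin(a, b) > 0 and Margin(a, b) exceeds the splitting
# number (smallest margin) of every simple cycle containing the edge a -> b.
def split_cycle(candidates, margin):
    cands = sorted(candidates)

    def cycle_number(a, b):
        best = 0  # largest splitting number over simple cycles extending a -> b
        others = [c for c in cands if c not in (a, b)]
        for r in range(len(others) + 1):
            for mid in permutations(others, r):
                path = [a, b, *mid, a]
                ms = [margin(path[i], path[i + 1]) for i in range(len(path) - 1)]
                if all(m > 0 for m in ms):
                    best = max(best, min(ms))
        return best

    defeated = {b for a in cands for b in cands if a != b
                and margin(a, b) > 0 and margin(a, b) > cycle_number(a, b)}
    return set(cands) - defeated

# A 3-cycle with margins 3, 5, 1: only the weakest edge is not a defeat.
m = {("a", "b"): 3, ("b", "a"): -3, ("b", "c"): 5, ("c", "b"): -5,
     ("c", "a"): 1, ("a", "c"): -1}
print(split_cycle({"a", "b", "c"}, lambda x, y: m[(x, y)]))  # {'a'}
```

In the 3-cycle, the cycle's splitting number is 1, so the edges with margins 3 and 5 are defeats while the margin-1 edge is not, leaving a as the unique undefeated candidate.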
The maximal chains of M(P), ordered by the majority relation, are as follows: (c, a, g, b), (c, g, d, b), (c, e, a, g), (c, d, b, e), (c, d, e, a), (f, c, g), (e, a, f, g), (d, b, e, f), (d, a, b, f), (d, e, a, f). Then one can check that a ∈ Banks(P|_{a,b,c,e,f}) ∩ Banks(P|_{a,d,f,g}), but a ∉ Banks(P) = {c, d, e, f}.
Remark 4.25. Expansion consistency as in Definition 4.22 is the analogue for voting methods of Sen's (1971, p. 314) expansion consistency (also known as γ) condition on choice functions. A choice function on a nonempty set X is a function C : ℘(X) \ {∅} → ℘(X) \ {∅} such that for all nonempty S ⊆ X, ∅ ≠ C(S) ⊆ S. Then C satisfies expansion consistency if for all nonempty S, T ⊆ X, C(S) ∩ C(T) ⊆ C(S ∪ T).
C({a, b, c}, {a, b, c}) = {a, b}, C({a, b, c}, {a, b}) = {b}, C({a, b, c}, {a, c}) = {a}, C({a, b, c}, {b, c}) = {b}; C({a, b}, {a, b}) = {a}; C({a, c}, {a, c}) = {a}; C({b, c}, {b, c}) = {b}. This violates full expansion consistency because a ∈ C({a, b}, {a, b}) ∩ C({a, c}, {a}) but a ∉ C({a, b, c}, {a, b})
C ∩ F(P) = ∅ if and only if (C \ {c}) ∩ F(P −c ) = ∅.
Then clearly a defeats b in P. Now pick c, d ∈ X(P) with d ≠ b such that Margin_P(c, d) is the largest margin in M(P) of any edge not going to b. Suppose for contradiction that c does not defeat d in P. Then there is a simple cycle ρ containing c and d such that Margin_P(c, d) is strictly less than the other margins along the cycle, at least one of which is a margin of an edge not going to b. But this contradicts the fact that Margin_P(c, d) is the largest margin in M(P) of any edge not going to b. Thus, c defeats d. Hence SC(P) contains neither b nor d, so |SC(P)| ≤ |X(P)| − 2.

For part 2, consider the sequence of margin graphs of the following form (for a definition and code to generate the sequence, see https://github.com/epacuit/splitcycle). By Theorem 2.5, there are (linear) profiles realizing each margin graph in the sequence. In each margin graph, the arrow from the bottom right candidate to the bottom left candidate is a defeat; all arrows pointing to the bottom right candidate are defeats; but no other arrows are defeats, since each has the weakest margin in a cycle. Thus, all candidates are undefeated except the bottom two candidates.
For all a, b ∈ X(P′), if a defeats b in P′, then a defeats b in P + mP′. Assume a defeats b in P′, so Margin_{P′}(a, b) > Cycle#_{P′}(a, b), so Margin_{P′}(a, b) − Cycle#_{P′}(a, b) ≥ 1. Then for all m ≥ n, since Margin_{mP′}(a, b) = m × Margin_{P′}(a, b) and Cycle#_{mP′}(a, b) = m × Cycle#_{P′}(a, b), we have Margin_{mP′}(a, b) − Cycle#_{mP′}(a, b) ≥ m ≥ n = 2|V(P)| + 1. Also note that Margin_{P+mP′}(a, b) ≥ Margin_{mP′}(a, b) − |V(P)| and Cycle#_{P+mP′}(a, b) ≤ Cycle#_{mP′}(a, b) + |V(P)|.

34 This is the terminology from Myerson 1995. Cf. Smith's (1973) "Archimedean property" and Young's (1975) "continuity."
then Margin_{P′}(a, b) ≥ Margin_P(a, b), and SC(P′) = {x}. For then by Lemma 5.29, there is an m ∈ N such that SC(P + mP′) = {x}, and for all a, b ∈ X(P), if Margin_P(a, b) > 0, then Margin_{P+mP′}(a, b) ≥ Margin_P(a, b). As V(P) ⊆ V(P + mP′), we may take P⁺ = P + mP′ for rejectability.

Suppose |SC(P)| > 1 and x ∈ SC(P). We show how to modify the margin graph M(P) to a margin graph M′ on X(P) such that (i) all edges between nodes are preserved from M(P) to M′, (ii) no weights on edges decrease from M(P) to M′, and (iii) SC(M′) = {x} (recall Remark 3.6). Then Debord's Theorem yields a profile P′ whose margin graph is M′. By (i)-(ii), we have that for all a, b ∈ X(P), if Margin_P(a, b) > 0, then Margin_{P′}(a, b) ≥ Margin_P(a, b). By (iii), SC(P′) = {x}. Let the set of edges in M′ be the set of all edges in M(P) plus an edge from x to any y such that Margin_P(x, y) = 0. Let n be the largest margin in M(P). Each edge (a, b) in M′ has weight either n + 1 or n + 3 according to the following rules (we use Margin_{M′} and Cycle#_{M′} with their obvious meanings):

1. if the edge (a, b) occurs on a shortest simple path 35 from x to b in M′, set Margin_{M′}(a, b) = n + 3;
2. otherwise, set Margin_{M′}(a, b) = n + 1.
Figure 3: Diagram for the proof of Proposition 5.30.
Proposition 5.35. Split Cycle does not satisfy single-voter resolvability even with respect to linear profiles.

Proof. Recall the margin graph of the linear profile P from the proof of Proposition 4.14 showing that Minimax and Beat Path do not satisfy immunity to stealers:
in M(P), by Definition 5.41 we have
(resp. least favorite) candidates (see Duddy 2014). But we see no problem with the addition of voters who rank x and y as tied changing the winner of an election with majority cycles from x to y, given how the new voters change x's and y's pairwise performance against other candidates. None of Beat Path, Ranked Pairs, Copeland, GETCHA/GOCHA, or Uncovered Set satisfies positive or negative involvement. The failure of positive and negative involvement has been called "a common flaw in Condorcet voting correspondences" (Pérez 2001). However, Split Cycle does not have this flaw.

Proposition 5.47. Split Cycle satisfies positive and negative involvement.
1. For any a, b ∈ X(P) \ {c}, Margin_P(a, b) = Margin_{P−c}(a, b).
2. For any b ∈ X(P) \ C and e ∈ C \ {c}, Margin_P(c, b) = Margin_{P−c}(e, b).
it suffices to show that for every simple cycle ρ in M(P) extending a → b, there is a simple cycle ρ′ in M(P −c ) extending a → b such that Split#(ρ′) ≥ Split#(ρ). If ρ does not contain c, then take ρ′ = ρ. Suppose ρ does contain c. Case 1: a ∈ C. Then apply Lemma A.2 with c_1 := c and c_2 := a to obtain a simple cycle ρ′ in M(P) extending a → b, but not containing c, such that Split#(ρ′) ≥ Split#(ρ); since ρ′ does not contain c, it is also a simple cycle in M(P −c ) with the desired properties. Case 2: a ∉ C. Then apply Lemma A.2 with c_1 := c, c_2 ∈ C \ {c} and reason as in Case 1.

Lemma A.4. Let d ∈ C and e ∈ C \ {c}. 1. For any b ∈ X(P) \ C, Cycle#_P(d, b) = Cycle#_{P−c}(e, b); 2. For any a ∈ X(P) \ C, Cycle#_P(a, d) = Cycle#_{P−c}(a, e).

Proof. For part 1, for any simple cycle ρ in M(P) extending d → b, by Lemma A.2 with c_1 := d and c_2 := e, there is a simple cycle ρ′ in M(P) extending e → b, but not containing c (since e ∈ C \ {c}), such that Split#(ρ′) ≥ Split#(ρ). Since ρ′ does not contain c, it is also a simple cycle in M(P −c ) extending e → b with the same margins. Hence Cycle#_P(d, b) ≤ Cycle#_{P−c}(e, b). Next, suppose ρ is a simple cycle in M(P −c ) extending e → b. Then ρ is also a simple cycle in M(P) extending e → b with the same margins. Thus, by Lemma A.2 with c_1 := e and c_2 := d, there is a simple cycle ρ′ in M(P) extending d → b such that Split#(ρ′) ≥ Split#(ρ). Hence Cycle#_P(d, b) ≥ Cycle#_{P−c}(e, b). The proof of part 2 is analogous.

Proposition A.5. For any b ∈ X(P) \ C, we have b ∈ SC(P) if and only if b ∈ SC(P −c ). Hence Split Cycle is such that non-clone choice is independent of clones.

Proof. Suppose b ∉ SC(P −c ), so there is an a ∈ X(P) \ {c} such that a defeats b in P −c . Then by Lemmas A.1.1 and A.3, a defeats b in P, so b ∉ SC(P). Now suppose b ∉ SC(P), so there is an a ∈ X(P) such that a defeats b in P. Case 1: a ≠ c. Then by Lemmas A.1.1 and A.3 again, a defeats b in P −c . Case 2: a = c. Then by Lemmas A.1.2 and A.4.1 with d := c, each e ∈ C \ {c} defeats b in P −c .

Proposition A.6. C ∩ SC(P) = ∅ if and only if (C \ {c}) ∩ SC(P −c ) = ∅. Hence Split Cycle is such that clone choice is independent of clones.
It then follows by the definition of a set of clones (Definition 5.22), Lemma A.1.1, and Lemma A.4.2 that a defeats every e ∈ C \ {c} in P −c . Hence C \ {c} ∩ SC(P −c ) = ∅. Similarly, if C \ {c} ∩ SC(P −c ) = ∅, then there is some a ∈ X(P −c ) \ C that defeats some e ∈ C \ {c} in P −c . It then follows by Definition 5.22, Lemma A.1.1, and Lemma A.4.2 that a defeats every d ∈ C in P. Hence C ∩ SC(P) = ∅.
imply Margin_{P+P′}(z, x) = 1. In addition, since |V(P′)| = 1, from Margin_P(y_i, y_{i+1}) ≥ 2 we have Margin_{P+P′}(y_i, y_{i+1}) ≥ 1. Thus, we have a simple cycle in the margin graph of P + P′ in which δ, i.e., Margin_{P+P′}(z, x), is not greater than any γ_i. But this contradicts (11). Case 2: Margin_P(z, x) > 0. Together with y_1 D y_2 D . . . D y_{n−1} D y_n, this means there is a simple cycle in the margin graph of P. Moreover, from y_1 D y_2 D . . . D y_{n−1} D y_n, it follows that for each i ∈ {1, . . . , n − 1},
For part 2, by Debord's Theorem, there is a linear profile P whose margin graph is shown on the
See Lamboray 2008 and Brill and Fischer 2012 for discussion of the axiomatic and computational properties of Ranked Pairs.

Stability criteria See Propositions 4.12 and 4.15.

Other criteria Proofs that Ranked Pairs satisfies the Condorcet winner and loser criteria, single-voter resolvability, and non-negative responsiveness can be found in Tideman 1987. For the satisfaction of reversal symmetry, Smith, and ISDA, see Schulze 2011, Table 2. See Remark 5.25 on the status of independence of clones. For the satisfaction of rejectability, see Corollary 5.31. Asymptotic resolvability follows from the fact that the proportion of profiles that are uniquely weighted goes to 1 as the number of voters goes to infinity, and Ranked Pairs selects a unique winner in any uniquely-weighted profile. For the failure of positive involvement and negative involvement, see Pérez 2001, p. 612.
responsiveness, see Schulze 2022. For the satisfaction of rejectability, see Corollary 5.31. For an example of a simultaneous failure of positive and negative involvement, where adding two voters with the ranking aef cbd changes the unique Beat Path winner from a to d, see Example 7 of Schulze 2022.
Tideman 1987. For the failure of rejectability and single-voter resolvability, see Proposition 5.28. For the failure of asymptotic resolvability for k ≥ 3, consider a majority graph with three candidates in a top cycle followed by a linear order of the remaining candidates, so the top three candidates are Copeland winners; the proportion of profiles realizing such a majority graph does not go to 0 as the number of voters goes to infinity (Harrison-Trainor 2022). For the failure of positive and negative involvement, see Pérez 2001, § 4.1.
Pérez 2001, § 4.1, where GOCHA is called "Top Cycle", or Felsenthal and Nurmi 2016, where GOCHA is called "Schwartz". The following is a simple example of the failure of positive involvement.

Example C.9. Consider a profile P with GOCHA(P) = {x, y, z}, and add a new voter with xP_i yP_i z to obtain a profile P′ with y and z tied, so GOCHA(P′) = {y}.
For a proof that the Uncovered Set satisfies Pareto, see Duggan 2013, Proposition 52. For reversal symmetry, the argument is the same as we gave for GETCHA in Appendix C.5. For the satisfaction of the Condorcet criterion under various definitions of the Uncovered Set, see Duggan 2013, Propositions 4, 5, and 13. The Condorcet loser and non-negative responsiveness criteria are also straightforward to check. For the satisfaction of the Smith criterion under all standard definitions of the Uncovered Set, see Duggan 2013, Propositions 4, 5, and 14.
Figure 4: The profiles were generated using the IC model. Results for the IAC model are almost the same. On the left, the purple line for the Uncovered Set is on top of the red line for GETCHA.
Figure 5: The profiles were generated using the urn model with α chosen according to a Gamma distribution with shape parameter k = 0.8 and scale parameter θ = 1 as in Boehmer et al. 2021. On the left, the purple line for the Uncovered Set is on top of the red line for GETCHA.
Figure 6: The profiles were generated using the Mallows model with dispersion parameter φ chosen as described in the main text. On the left, the purple line for the Uncovered Set is on top of the red line for GETCHA.
Figure 7: The profiles were generated using the Mallows model with two reference rankings, which are the reverse of each other. On the left, the purple line for the Uncovered Set is on top of the red line for GETCHA.
Figure 8: The graphs in the left column show the percentage of profiles in which Split Cycle and Beat Path output different sets, sampling profiles according to five probability models. The graphs in the right column show the percentage of profiles such that (i) Split Cycle and Beat Path output different sets of winners and (ii) randomly selecting a Split Cycle winner and randomly selecting a Beat Path winner resulted in different ultimate winners.
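The captions above refer to sampling profiles from the Mallows model with a dispersion parameter φ. A minimal sketch of the standard repeated-insertion sampler for this model (not the authors' code; φ = 1 gives the uniform IC distribution, and φ → 0 concentrates on the reference ranking):

```python
import random

def sample_mallows(reference, phi, rng=random):
    """Sample one ranking from Mallows(reference, phi) by repeated insertion."""
    ranking = []
    for i, candidate in enumerate(reference):
        # Inserting at position j of the current list of length i creates
        # (i - j) new inversions relative to the reference, with weight phi**(i - j).
        weights = [phi ** (i - j) for j in range(i + 1)]
        j = rng.choices(range(i + 1), weights=weights)[0]
        ranking.insert(j, candidate)
    return tuple(ranking)
```

A profile is then a list of rankings drawn independently by this sampler.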
37 29 34
 d  d  p
 p  p  d

For two candidates, Instant Runoff is simply Majority Voting, so the Instant Runoff winner is d. But now suppose an additional Republican candidate r joins the race:

37 29 34
 r  d  p
 d  p  d
 p  r  r
Instant Runoff works by first removing the candidate who received the fewest first-place votes (in this case, candidate d) from all ballots, resulting in the following:
37 29 34
 r  p  p
 p  r  r
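The elimination procedure just described can be sketched as follows. This is a hypothetical implementation, not the authors' code, using the simultaneous-removal convention for ties; on the profile above it elects p even though d wins the two-candidate election:

```python
from collections import Counter

def instant_runoff(profile):
    """profile: list of (count, ranking) pairs; ranking is a tuple from most to least preferred."""
    candidates = set(profile[0][1])
    total = sum(count for count, _ in profile)
    while True:
        # Tally first-place votes among the remaining candidates.
        tallies = Counter({c: 0 for c in candidates})
        for count, ranking in profile:
            top = next(c for c in ranking if c in candidates)
            tallies[top] += count
        # A candidate with a strict majority of first-place votes wins.
        for c, t in tallies.items():
            if 2 * t > total:
                return {c}
        fewest = min(tallies.values())
        losers = {c for c, t in tallies.items() if t == fewest}
        if losers == candidates:  # all remaining candidates tied: all win
            return candidates
        candidates -= losers
```

Applied to the three-candidate profile above, d (with 29 first-place votes) is eliminated first, and p then has a majority.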
Perhaps surprisingly, well-known voting methods such as Instant Runoff, Ranked Pairs, and Beat Path fail to satisfy the negative involvement criterion. In an example of Fishburn and Brams, two voters are unable to make it to an Instant Runoff election, due to their car breaking down. They later realize that had they voted in the election, their least favorite candidate would have won. For a simplified version of the Fishburn and Brams example, consider the following example for Instant Runoff (Pacuit 2019, § 3.3):
at https://github.com/epacuit/splitcycle. All of the examples in the paper have been verified in a Jupyter notebook available in the linked repository. Most of the proofs of properties of Split Cycle have been formalized in the Lean Theorem Prover at https://github.com/chasenorman/Formalized-Voting, as described in Holliday et al. 2021.

[Table: columns for Split Cycle, Ranked Pairs, Beat Path, Minimax, Copeland, GETCHA/GOCHA, Uncovered Set, Instant Runoff, and Plurality; first criterion row: Immunity to Spoilers (4.1)]
Split Cycle, when Beat Path does disagree with Split Cycle, Beat Path almost always violates the strong stability for winners axiom that Split Cycle satisfies, according to several standard probability models:

          10/11  20/21  50/51  100/101  500/501  1,000/1,001  5,000/5,001
  IC      100%   99%    96%    94%      92%      90%          88%
  IAC     100%   97%    93%    92%      90%      90%          90%
  Urn     100%   97%    93%    91%      90%      90%          89%
  Mallows 100%   98%    96%    93%      93%      92%          91%
It is also possible that while no one candidate wins its matches against all others, there is at least one candidate who wins or ties its matches against all others, where a match between a and b is tied if the same number of voters rank a above b as rank b above a. All such candidates will count as undefeated according to the definition of Split Cycle below. There is a more computationally efficient way to calculate the Split Cycle winners (see Footnote 20), but this simple two-step procedure is appropriate for explaining the method to voters.
Also see Kasper et al. 2019, where positive and negative involvement are called the "Top Property" and "Bottom Property", respectively. A closely related criterion for unique winners is given by Richelson (1978) under the name 'voter adaptability'. Unlike Instant Runoff, this is a Condorcet consistent voting method. We have slightly modified Moulin's example to avoid the use of a tiebreaking rule, at the expense of adding two new voters rather than one in the second election scenario.
We officially define a profile as a pair (P, X(P)) due to a technicality: unlike the set of voters, the set of candidates cannot necessarily be recovered from the function P.
We assume that e does not defeat d because of the perfect cycle involving d, f, and e. Here we take IIA to state that if two profiles are alike with respect to how everyone votes on x vs. y, then x defeats y in the one profile if and only if x defeats y in the other. In fact, in Holliday and Pacuit 2021a, we characterize the Split Cycle defeat relation using an axiom of Coherent IIA that is stronger than weak IIA.
Baigent's theorem also assumes that profiles assign strict weak orders to voters, not just linear orders, but his result also holds for the domain of all linear profiles (also see Campbell and Kelly 2000).
Since Lemma 3.17 allows us to define the Split Cycle defeat relation in terms of the strength of strongest paths, we can efficiently calculate Split Cycle using a modification of the Floyd-Warshall algorithm used by Schulze (2011) to calculate Beat Path. See the Python implementation at https://github.com/epacuit/splitcycle.
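A minimal sketch of this strongest-path approach, assuming the characterization that a defeats b iff margin(a, b) > 0 and margin(a, b) exceeds the strength of the strongest path of positive-margin edges from b to a; for the official implementation see the linked repository:

```python
def split_cycle_winners(candidates, margin):
    """margin: dict mapping (a, b) to the margin of a over b, with margin[(a,b)] == -margin[(b,a)]."""
    # strength[(a, b)]: maximum over paths from a to b (through positive-margin
    # edges) of the minimum margin along the path; 0 if there is no such path.
    strength = {(a, b): max(margin[(a, b)], 0) if a != b else 0
                for a in candidates for b in candidates}
    # Floyd-Warshall-style widest-path recurrence.
    for k in candidates:
        for a in candidates:
            for b in candidates:
                if a != b and a != k and b != k:
                    strength[(a, b)] = max(strength[(a, b)],
                                           min(strength[(a, k)], strength[(k, b)]))
    defeated = {b for a in candidates for b in candidates if a != b
                and margin[(a, b)] > 0 and margin[(a, b)] > strength[(b, a)]}
    return set(candidates) - defeated
```

For a cycle a over b by 2, b over c by 4, c over a by 6, the weakest edge a-b is discarded, leaving b undefeated.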
This formulation relies on the assumption that voters submit linear orders of the candidates. Zavist and Tideman allow ties in voter ballots and use a randomizing device to generate the linear TBRC from the designated voter's ballot, in case it contains ties. Thus, Zavist and Tideman define a probabilistic voting method.
We used the Gillies version of the Uncovered Set for profiles with an even number of voters. For an odd number of voters with linear ballots, the different versions of Uncovered Set are equivalent.
Example C.12. In the profile P from Example C.10, observe that {a, c, d} is a set of clones in P, UC_Fish(P) = {b}, but UC_Fish(P −c ) = {a, b}, so UC_Fish does not satisfy the condition that clone choice is independent of clones (recall Definition 5.23).
Uncovered Set violates rejectability, single-voter resolvability, and asymptotic resolvability for k ≥ 3 by the same reasoning as for GETCHA (i.e., Uncovered Set selects a unique winner only if there is a Condorcet winner). For the failure of positive and negative involvement under all standard definitions of the Uncovered Set, see Pérez 2001, § 4.1.
C.8 Instant Runoff
Instant Runoff is usually defined only for profiles P in which each ballot P_i is a linear order of some subset of X(P). Instant Runoff iteratively removes the candidate with the least number of first-place votes, until there is a candidate with a majority of the first-place votes. The question then arises of what to do if there are two or more candidates with the least number of first-place votes. One version of Instant Runoff (Taylor and Pacelli 2008, p. 7) removes all such candidates; and if, at some stage of the removal process, all remaining candidates have the same number of first-place votes (so all candidates would be removed), then all remaining candidates are selected as winners. Alternatively, the "parallel universe" version of Instant Runoff (cf. Freeman et al. 2015, § 3) says that a wins in P if there is a candidate b with the least number of first-place votes in P such that a wins according to the parallel universe version of Instant Runoff in P −b .
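The "parallel universe" recursion described above can be sketched as follows. This is a hypothetical implementation, not the authors' code; it iterates down to a single candidate rather than stopping at a majority, which yields the same winners:

```python
from collections import Counter

def put_irv(profile, candidates):
    """Parallel-universe Instant Runoff.

    profile: list of (count, ranking) pairs; candidates: set of active candidates."""
    if len(candidates) == 1:
        return set(candidates)
    tallies = Counter({c: 0 for c in candidates})
    for count, ranking in profile:
        tallies[next(c for c in ranking if c in candidates)] += count
    fewest = min(tallies.values())
    losers = {c for c, t in tallies.items() if t == fewest}
    if losers == set(candidates):  # all remaining candidates tied: all win
        return set(candidates)
    winners = set()
    for b in losers:  # branch over each possible single elimination
        winners |= put_irv(profile, candidates - {b})
    return winners
```

On the clone profile discussed below (4 voters abc, 3 bca, 3 cba), the two branches eliminate b and c respectively, so both b and c are parallel-universe winners, unlike under simultaneous removal.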
See Freeman et al. 2014 and Wang et al. 2019 for axiomatic and computational properties of Instant Runoff. Stability criteria: Example 1.2 shows that Instant Runoff violates (even partial) immunity to spoilers. If we consider removing the Progressive from the election instead of the Republican, then it also shows that Instant Runoff violates (even partial) immunity to stealers.
Other criteria: For the failure of reversal symmetry, see Felsenthal 2012 (under "preference inversion" for "Alternative vote"). It is well known that Instant Runoff violates the Condorcet winner and non-negative responsiveness criteria but satisfies the Condorcet loser criterion (Felsenthal 2012). The parallel universe version of Instant Runoff satisfies independence of clones (Tideman 1987), while the simultaneous removal version does not: if there are 4 voters with the ranking abc, 3 with bca, and 3 with cba, then the clones b and c are both eliminated in the first round, so a wins, whereas in the election with just a and b, b wins. 40 The failure of Smith and ISDA follows from the failure of the Condorcet winner criterion. For the satisfaction of single-voter resolvability, see Tideman 1987 (where Instant Runoff is called "Alternative vote"). That Instant Runoff satisfies rejectability follows from Lemma 5.37 given that it satisfies single-voter resolvability and clearly homogeneity. Asymptotic resolvability follows from the fact that for any number of candidates, the proportion of profiles in which there is a tie in the number of first place votes for two candidates goes to 0. For positive involvement, suppose x ∈ IRV(P), so x is not eliminated at any stage of the iteration procedure starting from P. Then where P' is a one-voter profile whose voter ranks x in first place, clearly x is not

40 Thanks to Dominik Peters for this example.
Kenneth J. Arrow. Social Choice and Individual Values. John Wiley & Sons, Inc., New York, 2nd edition, 1963.
Nick Baigent. Twitching weak dictators. Journal of Economics, 47(4):407-411, 1987. doi:10.1007/bf01229471.
Michel Balinski and Rida Laraki. Majority Judgement: Measuring, Ranking and Electing. MIT Press, Boston, 2010. doi:10.7551/mitpress/9780262015134.001.0001.
Sven Berg. Paradox of voting under an urn model: The effect of homogeneity. Public Choice, 47:377-387, 1985. doi:10.1007/BF00127533.
Niclas Boehmer, Robert Bredereck, Piotr Faliszewski, Rolf Niedermeier, and Stanisław Szufa. Putting a compass on the map of elections. In Zhi-Hua Zhou, editor, Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21), pages 59-65. International Joint Conferences on Artificial Intelligence, 2021.
Georges Bordes. On the possibility of reasonable consistent majoritarian choice: Some positive results. Journal of Economic Theory, 31(1):122-132, 1983. doi:10.1016/0022-0531(83)90024-8.
Steven J. Brams and Peter C. Fishburn. Voting procedures. In Kenneth J. Arrow, Amartya K. Sen, and Kotaro Suzumura, editors, Handbook of Social Choice and Welfare, volume 1, pages 173-236. North-Holland, Amsterdam, 2002. doi:10.1016/s1574-0110(02)80008-x.
Felix Brandt. Rolling the dice: Recent results in probabilistic social choice. In Ulle Endriss, editor, Trends in Computational Social Choice, pages 3-26. AI Access, 2017.
Felix Brandt and Hans Georg Seedig. On the discriminative power of tournament solutions. In M. Lübbecke, A. Koster, P. Letmathe, R. Madlener, B. Peis, and G. Walther, editors, Operations Research Proceedings 2014, pages 53-58, Cham, 2014. Springer. doi:10.1007/978-3-319-28697-6_8.
Felix Brandt, Markus Brill, Hans Georg Seedig, and Warut Suksompong. On the structure of stable tournament solutions. Economic Theory, 65(2):483-507, 2018. doi:10.1007/s00199-016-1024-x.
Markus Brill and Felix Fischer. The price of neutrality for the ranked pairs method. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI-12), pages 1299-1305. AAAI Press, 2012.
Donald E. Campbell and Jerry S. Kelly. Weak independence and veto power. Economics Letters, 66(2):183-189, 2000. doi:10.1016/s0165-1765(99)00209-8.
Vincent Conitzer and Tuomas Sandholm. Common voting rules as maximum likelihood estimators. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence (UAI-05), pages 145-152, Arlington, Virginia, 2005. AUAI Press.
A. H. Copeland. A 'reasonable' social welfare function. Notes from a seminar on applications of mathematics to the social sciences, University of Michigan, 1951.
Richard B. Darlington. Minimax is the best electoral system after all. arXiv:1606.04371, 2016.
Bernard Debord. Caractérisation des matrices des préférences nettes et méthodes d'agrégation associées. Mathématiques et sciences humaines, 97:5-17, 1987.
Frank DeMeyer and Charles R. Plott. The probability of a cyclical majority. Econometrica, 38(2):345-354, 1970.
Yifeng Ding, Wesley H. Holliday, and Eric Pacuit. An axiomatic characterization of Split Cycle. arXiv:2210.12503, 2022.
Conal Duddy. Condorcet's principle and the strong no-show paradoxes. Theory and Decision, 77:275-285, 2014. doi:10.1007/s11238-013-9401-4.
John Duggan. Uncovered sets. Social Choice and Welfare, 41(3):489-535, 2013. doi:10.1007/s00355-012-0696-9.
Bhaskar Dutta and Jean-Francois Laslier. Comparison functions and choice correspondences. Social Choice and Welfare, 16:513-532, 1999. doi:10.1007/s003550050158.
Bhaskar Dutta, Matthew O. Jackson, and Michel Le Breton. Strategic candidacy and voting procedures. Econometrica, 69(4):1013-1037, 2001. doi:10.1111/1468-0262.00228.
Lars Ehlers and John A. Weymark. Candidate stability and nonbinary social choice. Economic Theory, 22(2):233-243, 2003. doi:10.1007/s00199-002-0279-6.
Steve Eppley. Beatpath criterion, Tideman, and BCM, 2000. URL http://lists.electorama.com/pipermail/election-methods-electorama.com/2000-February/068928.html.
Hülya Eraslan and Andrew McLellan. Candidate stability and nonbinary social choice. Journal of Economic Theory, 117(1):29-54, 2004. doi:10.1016/j.jet.2003.09.005.
Mehdi Feizi, Rasoul Ramezanian, and Saeed Malek Sadati. Borda paradox in the 2017 Iranian presidential election: empirical evidence from opinion polls. Economics of Governance, 21:101-113, 2020. doi:10.1007/s10101-019-00233-3.
Dan S. Felsenthal. Review of paradoxes afflicting procedures for electing a single candidate. In Electoral Systems: Paradoxes, Assumptions and Procedures, pages 19-92, Berlin, 2012. Springer. doi:10.1007/978-3-642-20441-8_3.
Dan S. Felsenthal and Hannu Nurmi. Two types of participation failure under nine voting methods in variable electorates. Public Choice, 168:115-135, 2016. doi:10.1007/s11127-016-0352-5.
Dan S. Felsenthal and Nicolaus Tideman. Varieties of failure of monotonicity and participation under five voting methods. Theory and Decision, 75:59-77, 2013. doi:10.1007/s11238-012-9306-7.
Peter C. Fishburn. Condorcet social choice functions. SIAM Journal on Applied Mathematics, 33(3):469-489, 1977. doi:10.1137/0133030.
Peter C. Fishburn and Steven J. Brams. Paradoxes of preferential voting. Mathematics Magazine, 56(4):207-214, 1983. doi:10.2307/2689808.
Rupert Freeman, Markus Brill, and Vincent Conitzer. On the axiomatic characterization of runoff voting rules. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI-14), pages 675-681. AAAI Press, 2014.
Rupert Freeman, Markus Brill, and Vincent Conitzer. General tiebreaking schemes for computational social choice. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2015), pages 1401-1409, Richland, SC, 2015. International Foundation for Autonomous Agents and Multiagent Systems. ISBN 9781450334136.
Allan Gibbard. Manipulation of voting schemes: A general result. Econometrica, 41(4):587-601, 1973. doi:10.2307/1914083.
Donald B. Gillies. Solutions to general non-zero-sum games. In A. W. Tucker and R. D. Luce, editors, Contributions to the Theory of Games. Princeton University Press, 1959.
Matthew Harrison-Trainor. An analysis of random elections with large numbers of voters. Mathematical Social Sciences, 116:68-84, 2022. doi:10.1016/j.mathsocsci.2022.01.002.
Jobst Heitzig. Social choice under incomplete, cyclic preferences: Majority/minority-based rules, and composition-consistency. arXiv:0201285, 2002.
Jobst Heitzig. Examples with 4 options for immune methods, 2004a. URL http://lists.electorama.com/pipermail/election-methods-electorama.com//2004-May/078166.html.
Jobst Heitzig. River method - updated summary, 2004b. URL http://lists.electorama.com/pipermail/election-methods-electorama.com/2004-October/014018.html.
Michael C. Herron and Jeffrey B. Lewis. Did Ralph Nader spoil a Gore presidency? A ballot-level study of Green and Reform Party voters in the 2000 presidential election. Quarterly Journal of Political Science, 2(3):205-226, 2007.
Heike Hofmann, Hadley Wickham, and Karen Kafadar. Letter-value plots: Boxplots for large data. Journal of Computational and Graphical Statistics, 26(3):469-477, 2017. doi:10.1080/10618600.2017.1305277.
Wesley H. Holliday and Eric Pacuit. Axioms for defeat in democratic elections. Journal of Theoretical Politics, 33(4):475-524, 2021a. doi:10.1177/09516298211043236. arXiv:2008.08451.
Wesley H. Holliday and Eric Pacuit. Measuring violations of positive involvement in voting. In J. Y. Halpern and A. Perea, editors, Theoretical Aspects of Rationality and Knowledge 2021 (TARK 2021), volume 335 of Electronic Proceedings in Theoretical Computer Science, pages 189-209, 2021b. doi:10.4204/EPTCS.335.17.
Wesley H. Holliday and Eric Pacuit. Stable Voting. Constitutional Political Economy, Forthcoming. doi:10.1007/s10602-022-09383-9. arXiv:2108.00542.
Wesley H. Holliday, Chase Norman, and Eric Pacuit. Voting theory in the Lean Theorem Prover. In S. Ghosh and T. Icard, editors, Logic, Rationality, and Interaction. LORI 2021, volume 13039 of Lecture Notes in Computer Science, pages 111-127, Cham, 2021. Springer. doi:10.1007/978-3-030-88708-7_9. arXiv:2110.08453 [cs.LO].
Wesley H. Holliday, Chase Norman, Eric Pacuit, and Saam Zahedian. Impossibility theorems involving weakenings of expansion consistency and resoluteness in voting. In Michael A. Jones, David McCune, and Jennifer Wilson, editors, Mathematical Analyses of Decisions, Voting, and Games, Contemporary Mathematics. American Mathematical Society, Providence, RI, Forthcoming. https://arxiv.org/abs/2208.06907.
José L. Jimeno, Joaquín Pérez, and Estefanía García. An extension of the Moulin No Show Paradox for voting correspondences. Social Choice and Welfare, 33(3):343-359, 2009. doi:10.1007/s00355-008-0360-6.
Anna Purna Kambhampaty. New York City voters just adopted Ranked-Choice Voting in elections. Here's how it works. Time Magazine, 2019. November 6, 2019 issue.
Marek M. Kaminski. Empirical examples of voting paradoxes. In Jac C. Heckelman and Nicholas R. Miller, editors, Handbook of Social Choice and Voting, pages 367-387. Edward Elgar Publishing, Northhampton, MA, 2015. doi:10.4337/9781783470730.00029.
Marek M. Kaminski. Spoiler effects in proportional representation systems: evidence from eight Polish parliamentary elections, 1991-2015. Public Choice, 176:441-460, 2018. doi:10.1007/s11127-018-0565-x.
Laura Kasper, Hans Peters, and Dries Vermeulen. Condorcet Consistency and the strong no show paradoxes. Mathematical Social Sciences, 99:36-42, 2019. doi:10.1016/j.mathsocsci.2019.03.002.
Jerry S. Kelly. Social Choice Theory: An Introduction. Springer, Berlin, 1988. doi:10.1007/978-3-662-09925-4.
Gerald H. Kramer. A dynamical model of political equilibrium. Journal of Economic Theory, 16(2):310-334, 1977. doi:10.1016/0022-0531(77)90011-4.
Peter Kurrild-Klitgaard. Trump, Condorcet and Borda: Voting paradoxes in the 2016 Republican presidential primaries. European Journal of Political Economy, 55:29-35, 2018. doi:10.1016/j.ejpoleco.2017.10.003.
Claude Lamboray. A prudent characterization of the Ranked Pairs Rule. Social Choice and Welfare, 32:129-155, 2008. doi:10.1007/s00355-008-0319-7.
Jean-François Laslier. Tournament Solutions and Majority Voting. Springer, Berlin, 1997.
Jean-François Laslier. And the loser is...plurality voting. In Dan S. Felsenthal and Moshé Machover, editors, Electoral Systems: Paradoxes, Assumptions and Procedures, pages 327-351, Berlin, 2012. Springer. doi:10.1007/978-3-642-20441-8_13.
Christopher S. P. Magee. Third-party candidates and the 2000 presidential election. Social Science Quarterly, 84(3):29-35, 2003. doi:10.1111/1540-6237.8403006.
C. L. Mallows. Non-null ranking models. I. Biometrika, 44(2):114-130, 1957. doi:10.2307/2333244.
John Marden. Analyzing and Modeling Rank Data. CRC Press, New York, 1995. doi:10.1201/b16552.
Eric Maskin and Amartya Sen. How majority rule might have stopped Donald Trump. The New York Times, 2016. April 28, 2016 issue.
Eric Maskin and Amartya Sen. The New York Review of Books, 2017a. January 19, 2017 issue.
Eric Maskin and Amartya Sen. A better way to choose presidents. The New York Review of Books, 2017b. June 8, 2017 issue.
Nicholas R. Miller. A new solution set for tournaments and majority voting: Further graph-theoretical approaches to the theory of voting. American Journal of Political Science, 24(1):68-96, 1980. doi:10.2307/2110925.
Nicholas R. Miller. Reflections on Arrow's theorem and voting rules. Public Choice, 179:113-124, 2019. doi:10.1007/s11127-018-0524-6.
Hervé Moulin. Choosing from a tournament. Social Choice and Welfare, 3(4):271-291, 1986. doi:10.1007/BF00292732.
Hervé Moulin. Condorcet's principle implies the no show paradox. Journal of Economic Theory, 45(1):53-64, 1988. doi:10.1016/0022-0531(88)90253-0.
Axiomatic derivation of scoring rules without the ordering assumption. Roger B Myerson, 10.1007/BF00182193Social Choice and Welfare. 121Roger B. Myerson. Axiomatic derivation of scoring rules without the ordering assumption. Social Choice and Welfare, 12(1):59-74, 1995. doi:10.1007/BF00182193.
Voting methods. Eric Pacuit, Edward N. ZaltaThe Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford UniversityFall 2019 editionEric Pacuit. Voting methods. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Meta- physics Research Lab, Stanford University, Fall 2019 edition, 2019.
Alternate definitions of the uncovered set and their implications. Elizabeth Maggie , Penn , 10.1007/s00355-006-0114-2Social Choice and Welfare. 271Elizabeth Maggie Penn. Alternate definitions of the uncovered set and their implications. Social Choice and Welfare, 27(1):83-87, 2006. doi:10.1007/s00355-006-0114-2.
The strong no show paradoxes are a common flaw in Condorcet voting correspondences. Joaquín Pérez, 10.1007/s003550000079Social Choice and Welfare. 183Joaquín Pérez. The strong no show paradoxes are a common flaw in Condorcet voting correspondences. Social Choice and Welfare, 18(3):601-616, 2001. doi:10.1007/s003550000079.
The supercovering relation, the pairwise winner, and more missing links between Borda and Condorcet. Raúl Pérez, - Fernández, Bernard De Baets, 10.1007/s00355-017-1086-0Social Choice and Welfare. 50Raúl Pérez-Fernández and Bernard De Baets. The supercovering relation, the pairwise winner, and more missing links between Borda and Condorcet. Social Choice and Welfare, 50:329-352, 2018. doi:10.1007/s00355-017-1086-0.
Condorcet choice correspondences for weak tournaments. E Josep, Begoña Peris, Subiza, 10.1007/s003550050141Social Choice and Welfare. 162Josep E. Peris and Begoña Subiza. Condorcet choice correspondences for weak tournaments. Social Choice and Welfare, 16(2):217-231, 1999. doi:10.1007/s003550050141.
Voting rules as statistical estimators. Marcus Pivato, 10.1007/s00355-011-0619-1Social Choice and Welfare. 402Marcus Pivato. Voting rules as statistical estimators. Social Choice and Welfare, 40(2):581-630, 2013a. doi:10.1007/s00355-011-0619-1.
Variable-population voting rules. Marcus Pivato, 10.1016/j.jmateco.2013.02.001Journal of Mathematical Economics. 493Marcus Pivato. Variable-population voting rules. Journal of Mathematical Economics, 49(3):210-221, 2013b. doi:10.1016/j.jmateco.2013.02.001.
F Richard, Michael C Potthoff, Munger, 10.1177/1532673X211009499Condorcet loser in 2016: Apparently Trump; Condorcet winner: Not Clinton? American Politics Research. 49Richard F. Potthoff and Michael C. Munger. Condorcet loser in 2016: Apparently Trump; Condorcet winner: Not Clinton? American Politics Research, 49(6):618-636, 2021. doi:10.1177/1532673X211009499.
A comparative analysis of social choice functions. Jeffrey T Richelson, III. Behavioral Science. 23Jeffrey T. Richelson. A comparative analysis of social choice functions, III. Behavioral Science, 23:169-176, 1978.
Candidate stability and voting correspondences. Carmelo Rodríguez-Álvarez, 10.1007/s00355-006-0126-ySocial Choice and Welfare. 273Carmelo Rodríguez-Álvarez. Candidate stability and voting correspondences. Social Choice and Welfare, 27 (3):545-570, 2006. doi:10.1007/s00355-006-0126-y.
Geometry of Voting. Donald G Saari, 10.1007/978-3-642-48644-9SpringerBerlinDonald G. Saari. Geometry of Voting. Springer, Berlin, 1994. doi:10.1007/978-3-642-48644-9.
Basic Geometry of Voting. Donald G Saari, 10.1007/978-3-642-57748-2SpringerBerlinDonald G. Saari. Basic Geometry of Voting. Springer, Berlin, 1995. doi:10.1007/978-3-642-57748-2.
Explaining all three-alternative voting outcomes. Donald G Saari, 10.1006/jeth.1999.2541Journal of Economic Theory. 872Donald G. Saari. Explaining all three-alternative voting outcomes. Journal of Economic Theory, 87(2): 313-355, 1999. doi:10.1006/jeth.1999.2541.
Monotonicity properties and their adaptation to irresolute social choice rules. M , Remzi Sanver, William S Zwicker, 10.1007/s00355-012-0654-6Social Choice and Welfare. 392/3M. Remzi Sanver and William S. Zwicker. Monotonicity properties and their adaptation to irresolute social choice rules. Social Choice and Welfare, 39(2/3):371-398, 2012. doi:10.1007/s00355-012-0654-6.
The Existence of a Strategy Proof Voting Procedure. Mark Satterthwaite, University of WisconsinPhD thesisMark Satterthwaite. The Existence of a Strategy Proof Voting Procedure. PhD thesis, University of Wisconsin, 1973.
A new monotonic, clone-independent, reversal symmetric, and condorcet-consistent singlewinner election method. Markus Schulze, 10.1007/s00355-010-0475-4Social Choice and Welfare. 36Markus Schulze. A new monotonic, clone-independent, reversal symmetric, and condorcet-consistent single- winner election method. Social Choice and Welfare, 36:267-303, 2011. doi:10.1007/s00355-010-0475-4.
The Schulze method of voting. Markus Schulze, arXiv:1804.02973v11Markus Schulze. The Schulze method of voting. arXiv:1804.02973v11, 2022.
The Logic of Collective Choice. Thomas Schwartz, 10.7312/schw93758Columbia University PressNew YorkThomas Schwartz. The Logic of Collective Choice. Columbia University Press, New York, 1986. doi:10.7312/schw93758.
Choice functions and revealed preference. Amartya Sen, 10.2307/2296384The Review of Economic Studies. 383Amartya Sen. Choice functions and revealed preference. The Review of Economic Studies, 38(3):307-317, 1971. doi:10.2307/2296384.
Internal consistency of choice. Amartya Sen, 10.2307/2951715Econometrica. 613Amartya Sen. Internal consistency of choice. Econometrica, 61(3):495-521, 1993. doi:10.2307/2951715.
Collective Choice and Social Welfare: An Expanded Edition. Amartya Sen, Harvard University PressCambridge, MassAmartya Sen. Collective Choice and Social Welfare: An Expanded Edition. Harvard University Press, Cambridge, Mass., 2017.
On defining areas of voter choice: Professor Tullock on stable voting. Paul B Simpson, 10.2307/1880533The Quarterly Journal of Economics. 833Paul B. Simpson. On defining areas of voter choice: Professor Tullock on stable voting. The Quarterly Journal of Economics, 83(3):478-490, 1969. doi:10.2307/1880533.
Aggregation of preferences with variable electorate. John H Smith, 10.2307/1914033Econometrica. 416John H. Smith. Aggregation of preferences with variable electorate. Econometrica, 41(6):1027-1041, 1973. doi:10.2307/1914033.
Social Choice and the Mathematics of Manipulation. Alan D Taylor, 10.1017/cbo9780511614316Cambridge University PressCambridgeAlan D. Taylor. Social Choice and the Mathematics of Manipulation. Cambridge University Press, Cam- bridge, 2005. doi:10.1017/cbo9780511614316.
Alan D Taylor, Allison M Pacelli, Mathematics and Politics: Strategy, Voting, Power, and Proof. Alan D. Taylor and Allison M. Pacelli. Mathematics and Politics: Strategy, Voting, Power, and Proof.
. 10.1007/978-0-387-77645-3SpringerNew York2nd editionSpringer, New York, 2nd edition, 2008. doi:10.1007/978-0-387-77645-3.
Independence of clones as a criterion for voting rules. T Nicolaus Tideman, 10.1007/bf00433944Social Choice and Welfare. 4T. Nicolaus Tideman. Independence of clones as a criterion for voting rules. Social Choice and Welfare, 4: 185-206, 1987. doi:10.1007/bf00433944.
The impartial culture maximizes the probability of majority cycles. Ilia Tsetlin, Michel Regenwetter, Bernard Grofman, 10.1007/s00355-003-0269-zSocial Choice and Welfare. 21Ilia Tsetlin, Michel Regenwetter, and Bernard Grofman. The impartial culture maximizes the probability of majority cycles. Social Choice and Welfare, 21:387-398, 2003. doi:10.1007/s00355-003-0269-z.
Practical algorithms for multi-stage voting rules with parallel universes tiebreaking. Jun Wang, Sujoy Sikdar, Tyler Shepherd Zhibing, Chunheng Zhao, Lirong Jiang, Xia, Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19). the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)AAAI PressJun Wang, Sujoy Sikdar, Tyler Shepherd Zhibing Zhao, Chunheng Jiang, and Lirong Xia. Practical algo- rithms for multi-stage voting rules with parallel universes tiebreaking. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19). AAAI Press, 2019.
Wikipedia contributors. Spoiler effect -Wikipedia, the free encyclopedia. Online; accessed 19Wikipedia contributors. Spoiler effect -Wikipedia, the free encyclopedia, 2020a. URL https://en. wikipedia.org/wiki/Spoiler_effect. [Online; accessed 19-February-2020].
Wikipedia contributors. Condorcet method -Wikipedia, the free encyclopedia. Online; accessed 19Wikipedia contributors. Condorcet method -Wikipedia, the free encyclopedia, 2020b. URL https://en. wikipedia.org/wiki/Condorcet_method#Use_of_Condorcet_voting. [Online; accessed 19-February- 2020].
Trump is not a (Condorcet) loser! Primary voters' preferences and the 2016 republican presidential nomination. Jonathan Woon, Sean Craig, Amanda Leifson, Matthew Tarpey, 10.1017/S1049096520000359Political Science and Politics. 533Jonathan Woon, Sean Craig, Amanda Leifson, and Matthew Tarpey. Trump is not a (Condorcet) loser! Primary voters' preferences and the 2016 republican presidential nomination. Political Science and Politics, 53(3):407-412, 2020. doi:10.1017/S1049096520000359.
Social choice scoring functions. H P Young, 10.1137/0128067SIAM Journal on Applied Mathematics. 284H. P. Young. Social choice scoring functions. SIAM Journal on Applied Mathematics, 28(4):313-355, 1975. doi:10.1137/0128067.
Complete independence of clones in the ranked pairs rule. T M Zavist, T Nicolaus Tideman, 10.1007/bf00303170Social Choice and Welfare. 6T. M. Zavist and T. Nicolaus Tideman. Complete independence of clones in the ranked pairs rule. Social Choice and Welfare, 6:167-173, 1989. doi:10.1007/bf00303170.
Introduction to the theory of voting. William S Zwicker, Handbook of Computational Social Choice. Felix Brandt, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D. ProcacciaWilliam S. Zwicker. Introduction to the theory of voting. In Felix Brandt, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D. Procaccia, editors, Handbook of Computational Social Choice, pages 23-56.
. 10.1017/cbo9781107446984.003Cambridge University PressNew YorkCambridge University Press, New York, 2016. doi:10.1017/cbo9781107446984.003.
| zyda_arxiv-1522000 |
The Radiative Double Copy for Einstein-Yang-Mills Theory
16 Feb 2018
David Chester
Mani Bhaumik Institute of Theoretical Physics
Department of Physics and Astronomy
UCLA
90095-1547Los AngelesCAUSA
Recently, a double-copy formalism was used to calculate gravitational radiation from classical Yang-Mills radiation solutions. This work shows that Yang-Mills theory coupled to a biadjoint scalar field admits a radiative double copy that agrees with solutions in Einstein-Yang-Mills theory at the lowest finite order. Within this context, the trace-reversed metric $\bar{h}^{\mu\nu}$ is a natural double copy of the gauge boson $A^{\mu a}$. This work provides additional evidence that solutions in gauge and gravity theories are related, even though their respective Lagrangians and nonlinear equations of motion appear to be different.
I. INTRODUCTION
The Lagrangians and equations of motion for gauge and gravity theories appear to be rather different. Nevertheless, there are intriguing double-copy connections between their solutions. This includes the Kawai-Lewellen-Tye (KLT) tree-level relations between gauge and gravity amplitudes in string theory [1] and the Bern-Carrasco-Johansson (BCJ) doublecopy relations between diagrams in quantum field theory [2]. The BCJ double-copy relations are based on color-kinematics duality, which gives particularly simple constructions of gravity amplitudes starting from gauge-theory amplitudes.
At tree level the BCJ amplitude relations are proven [3][4][5][6][7]. Numerous calculations at higher loops provide evidence for the loop-level double-copy conjecture [8][9][10][11] and progress has been made to understand analogous monodromy relations, extending KLT relations to loop level [12][13][14][15][16][17]. Einstein-Yang-Mills scattering amplitudes [18][19][20][21] can also be found via the double copy [22][23][24] using the CHY formalism [25]. Biadjoint scalar fields can be used to find solutions in Yang-Mills [26], and solutions in a Yang-Mills-biadjoint-scalar theory have been shown to give scattering amplitudes in Einstein-Yang-Mills [27][28][29].
With the recent experimental detection of gravitational waves by LIGO [30], precision calculational tools for gravitational wave emission are essential. Exploiting color-kinematics duality to relate radiation solutions between Yang-Mills theory and general relativity is attractive because general relativity is difficult to solve and the double copy has been shown to work for a wide variety of gravity theories [31][32][33]. The connection between radiation solutions of gauge theory and gravity has been described recently [34][35][36][37][38][39][40]. The first example of using the radiative double copy to find nonlinear terms in general relativity utilized perturbative Yang-Mills solutions [41]. Similarly, a biadjoint scalar field can be used to find Yang-Mills radiation [42].
This work builds off the radiative double copy for general relativity found by Goldberger et al. [41] to find gravitational radiation in Einstein-Yang-Mills theory. By comparing the differential equations of the sources and fields in gauge theory and gravity, radiative diagrams are used to represent specific algebraic terms. Solutions in gravity can be found from Yang-Mills theory, and the diagrams with three-point vertices can be computed by stitching lower-order solutions together. At leading order, the trace-reversed metric [43], $\bar{h}^{\mu\nu}$, is a natural double copy of the Yang-Mills potential $A^{\mu a}$ [44]. Motivation for a perturbative double copy can be seen at the Lagrangian level, as the linearized gravity Lagrangian is quite similar to the QED Lagrangian, a linearized version of the Yang-Mills Lagrangian. Similarly, these two theories both have an analogous linearized wave equation. Remarkably, radiation solutions of nonlinear gauge and gravity theories are related, at least when iterated perturbatively. A double copy of Yang-Mills-adjoint-scalar theory is also briefly mentioned, which can recover radiation solutions in Einstein-Maxwell theory.
While this paper focuses on classical solutions that could be calculated with more traditional methods [45][46][47][48][49][50][51][52][53][54], the hope is that the radiative double copy could help with difficult calculations that may be more cumbersome to do in general relativity alone. As more experimental data for gravitational radiation is collected, new methods for calculating complicated radiation processes are encouraged. Section (II) calculates radiation in Yang-Mills-biadjoint-scalar theory. Section (III) calculates radiation in Einstein-Yang-Mills theory and the double copy is confirmed by direct calculation. Section (IV) states our concluding remarks. Appendix (A) calculates details of the gravitational contribution to the energy-momentum pseudotensor and Appendix (B) gives radiative Feynman rules for simple diagrams with three-point vertices.
II. RADIATION IN YANG-MILLS-BIADJOINT-SCALAR THEORY
A. Equations of Motion and Initial Conditions
In this section, the non-Abelian radiation field for Yang-Mills-biadjoint-scalar field theory is computed to first order in the weak-field approximation. To start, the Lagrangian associated with the Yang-Mills-biadjoint-scalar theory is
$$\mathcal{L} = -\frac{1}{4}F^{a}_{\mu\nu}F^{\mu\nu a} + \frac{1}{2}D_{\mu}\Phi^{\tilde{a}a}D^{\mu}\Phi^{\tilde{a}a} - \frac{y}{3}f^{abc}f^{\tilde{a}\tilde{b}\tilde{c}}\Phi^{\tilde{a}a}\Phi^{\tilde{b}b}\Phi^{\tilde{c}c}, \qquad (2.1)$$
where $f^{abc}$ and $f^{\tilde{a}\tilde{b}\tilde{c}}$ refer to structure constants of different groups, the biadjoint scalar $\Phi^{\tilde{a}a}$ has an index associated with each gauge group, and $y = -ig\tilde{g}/2$ relates the conventions of Ref. [27] with the conventions of Refs. [26,42]. In principle, there could be an $O(\Phi^{4})$ term in the Lagrangian, but the coupling constant would have different dimensions than $y$ and is not needed for the double copy. The non-Abelian field strength is given by

$$F^{a}_{\mu\nu}(x) = \partial_{\mu}A^{a}_{\nu}(x) - \partial_{\nu}A^{a}_{\mu}(x) - gf^{abc}A^{b}_{\mu}(x)A^{c}_{\nu}(x), \qquad (2.2)$$

and the mostly minus metric will be used, such that $\eta_{\mu\nu} = \mathrm{diag}(1,-1,-1,-1)$. The covariant derivative is given by

$$D_{\mu}\Phi^{\tilde{a}a}(x) = \partial_{\mu}\Phi^{\tilde{a}a}(x) - gf^{abc}A^{b}_{\mu}(x)\Phi^{\tilde{a}c}(x). \qquad (2.3)$$
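The definitions (2.2)-(2.3) can be spot checked symbolically. The sketch below (our own illustration, not part of the paper) uses SU(2), where $f^{abc} = \epsilon^{abc}$, and verifies the standard identity $[D_\mu, D_\nu]\Phi^{a} = -g f^{abc} F^{b}_{\mu\nu}\Phi^{c}$ for the adjoint covariant derivative, with arbitrary field profiles in two coordinates:

```python
import sympy as sp

# Hypothetical SU(2) setup: structure constants f^{abc} = Levi-Civita symbol.
# A and Phi are arbitrary functions of two coordinates, enough to probe the identity.
t, x = sp.symbols('t x')
X = (t, x)
g = sp.symbols('g')
A = [[sp.Function(f'A{a}{mu}')(t, x) for mu in range(2)] for a in range(3)]
Phi = [sp.Function(f'Phi{a}')(t, x) for a in range(3)]
f = sp.LeviCivita

def D(mu, field):
    # adjoint covariant derivative (2.3): (D_mu Phi)^a = d_mu Phi^a - g f^{abc} A^b_mu Phi^c
    return [sp.diff(field[a], X[mu])
            - g*sum(f(a, b, c)*A[b][mu]*field[c] for b in range(3) for c in range(3))
            for a in range(3)]

def F(mu, nu, a):
    # field strength (2.2)
    return (sp.diff(A[a][nu], X[mu]) - sp.diff(A[a][mu], X[nu])
            - g*sum(f(a, b, c)*A[b][mu]*A[c][nu] for b in range(3) for c in range(3)))

# [D_mu, D_nu]Phi^a = -g f^{abc} F^b_{mu nu} Phi^c
lhs = [D(0, D(1, Phi))[a] - D(1, D(0, Phi))[a] for a in range(3)]
rhs = [-g*sum(f(a, b, c)*F(0, 1, b)*Phi[c] for b in range(3) for c in range(3))
       for a in range(3)]
assert all(sp.expand(lhs[a] - rhs[a]) == 0 for a in range(3))
```

The cross-derivative terms cancel on antisymmetrization, and the $O(g^2)$ terms match via the Jacobi identity, which `sympy` confirms by direct expansion.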
The equation of motion for the Yang-Mills field is

$$D_{\mu}F^{\mu\nu a}(x) - gf^{abc}\Phi^{\tilde{a}b}(x)D^{\nu}\Phi^{\tilde{a}c}(x) = gJ^{\nu a}(x), \qquad (2.4)$$
where $J^{\mu a}(x)$ is a non-Abelian vector current acting as a source for the Yang-Mills field and is covariantly conserved, such that $D_{\mu}J^{\mu a} = 0$. The equation of motion for the biadjoint scalar field is
$$\partial^{\mu}D_{\mu}\Phi^{\tilde{a}a}(x) - gf^{abc}A^{b}_{\mu}(x)D^{\mu}\Phi^{\tilde{a}c}(x) - yf^{abc}f^{\tilde{a}\tilde{b}\tilde{c}}\Phi^{\tilde{b}b}(x)\Phi^{\tilde{c}c}(x) = yJ^{\tilde{a}a}(x). \qquad (2.5)$$
For $N$ colliding charged particles, the worldline of particle $\alpha$ is $x^{\mu}_{\alpha}(\tau) = b^{\mu}_{\alpha} + v^{\mu}_{\alpha}\tau$ for $\tau \to -\infty$. These initial conditions specify an impact parameter $b^{\mu}_{\alpha\beta} = b^{\mu}_{\alpha} - b^{\mu}_{\beta}$ and a constant initial velocity $v^{\mu}_{\alpha}$ which satisfies $v^{2}_{\alpha} = 1$. For arbitrary times near and after the collision,
$$x^{\mu}_{\alpha}(\tau) = b^{\mu}_{\alpha} + v^{\mu}_{\alpha}\tau + z^{\mu}_{\alpha}(\tau), \qquad (2.6)$$
where $z^{\mu}_{\alpha}(\tau)$ is the deflection due to the Yang-Mills and biadjoint scalar fields. The vector source for $N$ colliding charged particles is

$$J^{\mu a}(x) = \sum_{\alpha=1}^{N}\int d\tau\, c^{a}_{\alpha}(\tau)v^{\mu}_{\alpha}(\tau)\delta^{d}(x - x_{\alpha}(\tau)), \qquad (2.7)$$

where $\alpha$ is a particle number label, $v^{\mu}_{\alpha}(\tau) = \frac{dx^{\mu}_{\alpha}(\tau)}{d\tau}$ is the velocity, and $c^{a}_{\alpha}(\tau)$ is the associated adjoint color charge [55]. The biadjoint source $J^{\tilde{a}a}(x)$ for $N$ particles is
$$J^{\tilde{a}a}(x) = \sum_{\alpha=1}^{N}\int d\tau\, c^{\tilde{a}}_{\alpha}(\tau)c^{a}_{\alpha}(\tau)\delta^{d}(x - x_{\alpha}(\tau)), \qquad (2.8)$$
where it is assumed that the color charges $c^{\tilde{a}}_{\alpha}(\tau)$ and $c^{a}_{\alpha}(\tau)$ are in two different gauge groups. The Lorenz gauge is taken by setting $\partial_{\mu}A^{\mu a} = 0$. In order to simplify these equations, the explicit dependence on the covariant derivatives is removed and gauge-dependent sources $\hat{J}^{\mu a}$ and $\hat{J}^{\tilde{a}a}$ are defined such that

$$\Box A^{\mu a}(x) = g\hat{J}^{\mu a}(x), \qquad \Box\Phi^{\tilde{a}a} = y\hat{J}^{\tilde{a}a}, \qquad (2.9)$$
where $\Box \equiv \partial^{\nu}\partial_{\nu}$. With these definitions, the pseudovector source is

$$\hat{J}^{\mu a} = J^{\mu a} + f^{abc}\left[A^{b}_{\nu}\left(\partial^{\nu}A^{\mu c} + F^{\nu\mu c}\right) + \Phi^{\tilde{a}b}D^{\mu}\Phi^{\tilde{a}c}\right], \qquad (2.10)$$
where the pseudovector is locally conserved, $\partial_{\mu}\hat{J}^{\mu a} = 0$. The pseudoscalar source is given by

$$\hat{J}^{\tilde{a}a} = J^{\tilde{a}a} + \frac{g}{y}f^{abc}\left[\partial_{\mu}\left(A^{\mu b}\Phi^{\tilde{a}c}\right) + A^{\mu b}D_{\mu}\Phi^{\tilde{a}c}\right] + \frac{y}{g}f^{abc}f^{\tilde{a}\tilde{b}\tilde{c}}\Phi^{\tilde{b}b}\Phi^{\tilde{c}c}. \qquad (2.11)$$
Similar to the worldline $x^{\mu}_{\alpha}(\tau)$, the color charges are dynamical and are given initial conditions $c^{a}_{\alpha}(\tau) = c^{a}_{\alpha}$ and $c^{\tilde{a}}_{\alpha}(\tau) = c^{\tilde{a}}_{\alpha}$ for $\tau \to -\infty$. For times near and after the collision,

$$c^{a}_{\alpha}(\tau) = c^{a}_{\alpha} + \bar{c}^{a}_{\alpha}(\tau), \qquad c^{\tilde{a}}_{\alpha}(\tau) = c^{\tilde{a}}_{\alpha} + \bar{c}^{\tilde{a}}_{\alpha}(\tau), \qquad (2.12)$$

where $\bar{c}^{a}_{\alpha}(\tau)$ and $\bar{c}^{\tilde{a}}_{\alpha}(\tau)$ are the corrections due to the Yang-Mills and biadjoint scalar fields. The time evolution of the momentum is

$$\frac{dp^{\mu}_{\alpha}(\tau)}{d\tau} = gc^{a}_{\alpha}(\tau)F^{\mu\nu a}(x_{\alpha}(\tau))v_{\alpha\nu}(\tau) - y\,\partial^{\mu}\Phi^{\tilde{a}a}(x_{\alpha}(\tau))c^{a}_{\alpha}(\tau)c^{\tilde{a}}_{\alpha}(\tau), \qquad (2.13)$$

and the time evolution of the charges is

$$\begin{aligned}
\frac{dc^{a}_{\alpha}(\tau)}{d\tau} &= gf^{abc}v^{\mu}_{\alpha}(\tau)A^{b}_{\mu}(x_{\alpha}(\tau))c^{c}_{\alpha}(\tau) - yf^{abc}\Phi^{\tilde{b}b}(x_{\alpha}(\tau))c^{\tilde{b}}_{\alpha}(\tau)c^{c}_{\alpha}(\tau),\\
\frac{dc^{\tilde{a}}_{\alpha}(\tau)}{d\tau} &= -yf^{\tilde{a}\tilde{b}\tilde{c}}\Phi^{\tilde{b}b}(x_{\alpha}(\tau))c^{b}_{\alpha}(\tau)c^{\tilde{c}}_{\alpha}(\tau). \qquad (2.14)
\end{aligned}$$
These summarize all of the equations needed to iteratively solve for radiation in Yang-Mills-biadjoint-scalar theory within a weak-field approximation.
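One structural feature of the charge evolution (2.14) is that both terms are adjoint rotations, so the quadratic invariants $c^{a}_{\alpha}c^{a}_{\alpha}$ and $c^{\tilde{a}}_{\alpha}c^{\tilde{a}}_{\alpha}$ are conserved. The numerical sketch below (ours, with an invented SU(2) "angular velocity" profile standing in for the field-dependent factors) integrates such a color precession with RK4 and checks the invariant:

```python
import numpy as np

# structure constants of SU(2): f^{abc} = epsilon^{abc}
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

def rhs(tau, c):
    # omega^b(tau) stands in for the field-dependent factors in (2.14),
    # e.g. g v·A^b and -y Phi c-tilde; this profile is invented for illustration
    omega = np.array([np.sin(tau), np.cos(2.0*tau), 0.3])
    return np.einsum('abc,b,c->a', eps, omega, c)

c = np.array([1.0, 0.5, -0.2])
c2_initial = c @ c
tau, h = 0.0, 1e-3
for _ in range(5000):                       # classic RK4 integration to tau = 5
    k1 = rhs(tau, c)
    k2 = rhs(tau + h/2, c + h/2*k1)
    k3 = rhs(tau + h/2, c + h/2*k2)
    k4 = rhs(tau + h, c + h*k3)
    c = c + h/6*(k1 + 2*k2 + 2*k3 + k4)
    tau += h
assert abs(c @ c - c2_initial) < 1e-8       # c^a c^a conserved by f-antisymmetry
```

Conservation follows because $\frac{d}{d\tau}(c^a c^a) = 2 c^a f^{abc}\omega^b c^c = 0$ by antisymmetry of $f^{abc}$ in $a$ and $c$.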
B. Solutions of the Radiation Fields
For weak fields, the lowest-order sources can be found from the initial conditions. The pseudocurrents in momentum space are

$$\hat{J}^{\mu a}(k)\big|_{O(g^{0})} = \sum_{\alpha=1}^{N}e^{ik\cdot b_{\alpha}}(2\pi)\delta(k\cdot v_{\alpha})v^{\mu}_{\alpha}c^{a}_{\alpha}, \qquad \hat{J}^{\tilde{a}a}(k)\big|_{O(y^{0})} = \sum_{\alpha=1}^{N}e^{ik\cdot b_{\alpha}}(2\pi)\delta(k\cdot v_{\alpha})c^{\tilde{a}}_{\alpha}c^{a}_{\alpha}, \qquad (2.15)$$
which can be utilized to find the Yang-Mills and biadjoint scalar fields to lowest order from Eq. (2.9), giving
$$A^{\mu a}(x)\big|_{O(g^{1})} = -g\sum_{\alpha=1}^{N}\int_{l}(2\pi)\delta(l\cdot v_{\alpha})\frac{e^{-il\cdot(x-b_{\alpha})}}{l^{2}}v^{\mu}_{\alpha}c^{a}_{\alpha}, \qquad \Phi^{\tilde{a}a}(x)\big|_{O(y^{1})} = -y\sum_{\alpha=1}^{N}\int_{l}(2\pi)\delta(l\cdot v_{\alpha})\frac{e^{-il\cdot(x-b_{\alpha})}}{l^{2}}c^{\tilde{a}}_{\alpha}c^{a}_{\alpha}. \qquad (2.16)$$
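As a sanity check on (2.16), for a single static charge with $v^\mu = (1, 0, 0, 0)$ in $d = 4$, the $l$-integral collapses to a Coulomb potential $A^{0a} = g\,c^{a}/(4\pi|\vec{x}|)$, assuming the measure $\int_{l} = \int d^{d}l/(2\pi)^{d}$ (our assumption; the paper does not spell it out here). The reduction can be verified symbolically:

```python
import sympy as sp

# radial reduction of \int d^3 l e^{-i l.x}/|l|^2: the angular integral gives
# 4*pi*sin(|l| r)/(|l| r); substituting u = |l| r leaves the Dirichlet integral
u, r = sp.symbols('u r', positive=True)
dirichlet = sp.integrate(sp.sin(u)/u, (u, 0, sp.oo))          # Dirichlet integral = pi/2
coulomb = (1/(2*sp.pi)**3) * (4*sp.pi/r) * dirichlet          # include 1/(2*pi)^3 measure
assert sp.simplify(coulomb - 1/(4*sp.pi*r)) == 0              # = 1/(4*pi*r)
```

Multiplying by the prefactor $g\,c^{a}$ (the overall sign works out because $l^{2} = -|\vec{l}|^{2}$ on the $\delta(l^{0})$ support) reproduces the expected Coulomb field.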
The lowest-order fields can be used to find the deflections of the sources, given by
$$\begin{aligned}
m_{\alpha}\frac{d^{2}z^{\mu}_{\alpha}(\tau)}{d\tau^{2}}\bigg|_{O(g^{2})} &= gc^{a}_{\alpha}\left[\partial^{\mu}A^{\nu a}(x_{\alpha}(\tau))\big|_{O(g^{1})} - \partial^{\nu}A^{\mu a}(x_{\alpha}(\tau))\big|_{O(g^{1})}\right]v_{\alpha\nu},\\
m_{\alpha}\frac{d^{2}z^{\mu}_{\alpha}(\tau)}{d\tau^{2}}\bigg|_{O(y^{2})} &= -y\,\partial^{\mu}\Phi^{\tilde{a}a}(x_{\alpha}(\tau))\big|_{O(y^{1})}c^{\tilde{a}}_{\alpha}c^{a}_{\alpha}. \qquad (2.17)
\end{aligned}$$
Plugging in the derivatives of the lowest-order fields gives
$$\begin{aligned}
m_{\alpha}\frac{d^{2}z^{\mu}_{\alpha}(\tau)}{d\tau^{2}}\bigg|_{O(g^{2})} &= ig^{2}\sum_{\beta\neq\alpha}(c^{a}_{\alpha}c^{a}_{\beta})\int_{l}(2\pi)\delta(l\cdot v_{\beta})\frac{e^{-il\cdot(b_{\alpha\beta}+v_{\alpha}\tau)}}{l^{2}}\left[(v_{\alpha}\cdot v_{\beta})l^{\mu} - (v_{\alpha}\cdot l)v^{\mu}_{\beta}\right],\\
m_{\alpha}\frac{d^{2}z^{\mu}_{\alpha}(\tau)}{d\tau^{2}}\bigg|_{O(y^{2})} &= -iy^{2}\sum_{\beta\neq\alpha}(c^{a}_{\alpha}c^{a}_{\beta})c^{\tilde{a}}_{\alpha}c^{\tilde{a}}_{\beta}\int_{l}(2\pi)\delta(l\cdot v_{\beta})\frac{e^{-il\cdot(b_{\alpha\beta}+v_{\alpha}\tau)}}{l^{2}}l^{\mu}. \qquad (2.18)
\end{aligned}$$
Note that writing the color charge contraction as $c_{\alpha}\cdot c_{\beta}$ would be ambiguous with our notation, as $c^{a}_{\alpha}c^{a}_{\beta}$ and $c^{\tilde{a}}_{\alpha}c^{\tilde{a}}_{\beta}$ are distinct. The first correction of the color charges to second order in $g$ is given by
$$\begin{aligned}
\frac{dc^{a}_{\alpha}(\tau)}{d\tau}\bigg|_{O(g^{2})} &= gf^{abc}v^{\mu}_{\alpha}A^{b}_{\mu}(x_{\alpha}(\tau))\big|_{O(g^{1})}c^{c}_{\alpha},\\
\frac{dc^{a}_{\alpha}(\tau)}{d\tau}\bigg|_{O(y^{2})} &= -yf^{abc}\Phi^{\tilde{b}b}(x_{\alpha}(\tau))\big|_{O(y^{1})}c^{\tilde{b}}_{\alpha}c^{c}_{\alpha},\\
\frac{dc^{\tilde{a}}_{\alpha}(\tau)}{d\tau}\bigg|_{O(y^{2})} &= -yf^{\tilde{a}\tilde{b}\tilde{c}}\Phi^{\tilde{b}b}(x_{\alpha}(\tau))\big|_{O(y^{1})}c^{b}_{\alpha}c^{\tilde{c}}_{\alpha}. \qquad (2.19)
\end{aligned}$$
Once again, plugging in the lowest-order fields gives
$$\begin{aligned}
\frac{dc^{a}_{\alpha}(\tau)}{d\tau}\bigg|_{O(g^{2})} &= -g^{2}\sum_{\beta\neq\alpha}f^{abc}c^{b}_{\beta}c^{c}_{\alpha}(v_{\alpha}\cdot v_{\beta})\int_{l}(2\pi)\delta(l\cdot v_{\beta})\frac{e^{-il\cdot(b_{\alpha\beta}+v_{\alpha}\tau)}}{l^{2}},\\
\frac{dc^{a}_{\alpha}(\tau)}{d\tau}\bigg|_{O(y^{2})} &= y^{2}\sum_{\beta\neq\alpha}f^{abc}c^{b}_{\beta}c^{c}_{\alpha}c^{\tilde{b}}_{\alpha}c^{\tilde{b}}_{\beta}\int_{l}(2\pi)\delta(l\cdot v_{\beta})\frac{e^{-il\cdot(b_{\alpha\beta}+v_{\alpha}\tau)}}{l^{2}},\\
\frac{dc^{\tilde{a}}_{\alpha}(\tau)}{d\tau}\bigg|_{O(y^{2})} &= y^{2}\sum_{\beta\neq\alpha}f^{\tilde{a}\tilde{b}\tilde{c}}c^{\tilde{b}}_{\beta}c^{\tilde{c}}_{\alpha}c^{b}_{\alpha}c^{b}_{\beta}\int_{l}(2\pi)\delta(l\cdot v_{\beta})\frac{e^{-il\cdot(b_{\alpha\beta}+v_{\alpha}\tau)}}{l^{2}}. \qquad (2.20)
\end{aligned}$$
These deflections can be utilized to find the sources to next order, which give the lowest-order radiation fields.
Taking the Fourier transform of Eq. (2.7) and integrating over the delta function gives
$$J^{\mu a}(k) = \sum_{\alpha=1}^{N}\int d\tau\, e^{ik\cdot x_{\alpha}(\tau)}\left[v^{\mu}_{\alpha} + \frac{dz^{\mu}_{\alpha}(\tau)}{d\tau}\right]\left(c^{a}_{\alpha} + \bar{c}^{a}_{\alpha}(\tau)\right). \qquad (2.21)$$
Expanding these results perturbatively in g and y gives
$$J^{\mu a}(k) = \sum_{\alpha=1}^{N}\int d\tau\, e^{ik\cdot(b_{\alpha}+v_{\alpha}\tau)}\left[(1 + ik\cdot z_{\alpha})v^{\mu}_{\alpha}c^{a}_{\alpha} + v^{\mu}_{\alpha}\bar{c}^{a}_{\alpha} + \frac{dz^{\mu}_{\alpha}}{d\tau}c^{a}_{\alpha}\right] + O(g^{2}y^{2}), \qquad (2.22)$$
where explicit τ dependence has been suppressed and only terms to second order in g or y are kept. Integrating Eqs. (2.18) and (2.20) allows for the second-order current to be found, which has Yang-Mills and biadjoint scalar contributions given by
$$\begin{aligned}
J^{\mu a}(k)\big|_{O(g^{2})} ={}& g^{2}\sum_{\alpha=1}^{N}\sum_{\beta\neq\alpha}\int_{l_{\alpha},l_{\beta}}\mu_{\alpha,\beta}(k)\frac{l_{\alpha}^{2}}{k\cdot v_{\alpha}}\bigg\{if^{abc}c^{b}_{\alpha}c^{c}_{\beta}(v_{\alpha}\cdot v_{\beta})v^{\mu}_{\alpha}\\
&+ \frac{c^{b}_{\alpha}c^{b}_{\beta}}{m_{\alpha}}c^{a}_{\alpha}\left[-v_{\alpha}\cdot v_{\beta}\left(l^{\mu}_{\beta} - \frac{k\cdot l_{\beta}}{k\cdot v_{\alpha}}v^{\mu}_{\alpha}\right) + k\cdot v_{\alpha}v^{\mu}_{\beta} - k\cdot v_{\beta}v^{\mu}_{\alpha}\right]\bigg\}, \qquad (2.23)\\
J^{\mu a}(k)\big|_{O(y^{2})} ={}& y^{2}\sum_{\alpha=1}^{N}\sum_{\beta\neq\alpha}c^{\tilde{a}}_{\alpha}c^{\tilde{a}}_{\beta}\int_{l_{\alpha},l_{\beta}}\mu_{\alpha,\beta}(k)\frac{l_{\alpha}^{2}}{k\cdot v_{\alpha}}\left\{\frac{c^{b}_{\alpha}c^{b}_{\beta}}{m_{\alpha}}c^{a}_{\alpha}\left[l^{\mu}_{\beta} - \frac{k\cdot l_{\beta}}{k\cdot v_{\alpha}}v^{\mu}_{\alpha}\right] - if^{abc}c^{b}_{\alpha}c^{c}_{\beta}v^{\mu}_{\alpha}\right\},
\end{aligned}$$
where an extra integral over $l_{\alpha}$ was added with a momentum-conserving delta function such that $k = l_{\alpha} + l_{\beta}$, and
$$\mu_{\alpha,\beta}(k) = (2\pi)\delta(v_{\alpha}\cdot l_{\alpha})\frac{e^{il_{\alpha}\cdot b_{\alpha}}}{l_{\alpha}^{2}}(2\pi)\delta(v_{\beta}\cdot l_{\beta})\frac{e^{il_{\beta}\cdot b_{\beta}}}{l_{\beta}^{2}}(2\pi)^{d}\delta^{d}(k - l_{\alpha} - l_{\beta}). \qquad (2.24)$$
The nonlinear field contributions to the pseudovector are represented by $j^{\mu a}$, which gives the following second-order contributions:
$$\begin{aligned}
j^{\mu a}(k)\big|_{O(g^{2})} &= g^{2}\sum_{\alpha=1}^{N}\sum_{\beta\neq\alpha}if^{abc}c^{b}_{\alpha}c^{c}_{\beta}\int_{l_{\alpha},l_{\beta}}\mu_{\alpha,\beta}(k)\left[2k\cdot v_{\beta}\,v^{\mu}_{\alpha} - v_{\alpha}\cdot v_{\beta}\,l^{\mu}_{\alpha}\right],\\
j^{\mu a}(k)\big|_{O(y^{2})} &= y^{2}\sum_{\alpha=1}^{N}\sum_{\beta\neq\alpha}if^{abc}c^{b}_{\alpha}c^{c}_{\beta}c^{\tilde{a}}_{\alpha}c^{\tilde{a}}_{\beta}\int_{l_{\alpha},l_{\beta}}\mu_{\alpha,\beta}(k)\,l^{\mu}_{\alpha}. \qquad (2.25)
\end{aligned}$$
While $J^{\mu a}$ and $j^{\mu a}$ were computed algebraically, they can also be represented diagrammatically. Fig. (1) depicts the diagrams associated with $\hat{J}^{\mu a}$ to second order in $g$ and $y$. The six diagrams are defined as

$$\begin{aligned}
(1a)^{\mu a}(k) &\equiv \sum_{\alpha=1}^{N}\int d\tau\, e^{ik\cdot(b_{\alpha}+v_{\alpha}\tau)}\left[v^{\mu}_{\alpha}\bar{c}^{a}_{\alpha}(\tau)\big|_{O(g^{2})} + c^{a}_{\alpha}\frac{dz^{\mu}_{\alpha}(\tau)}{d\tau}\bigg|_{O(g^{2})}\right],\\
(1b)^{\mu a}(k) &\equiv \sum_{\alpha=1}^{N}\int d\tau\, e^{ik\cdot(b_{\alpha}+v_{\alpha}\tau)}\,ik\cdot z_{\alpha}(\tau)\big|_{O(g^{2})}\,v^{\mu}_{\alpha}c^{a}_{\alpha},\\
(1c)^{\mu a}(k) &\equiv j^{\mu a}(k)\big|_{O(g^{2})},\\
(1d)^{\mu a}(k) &\equiv \sum_{\alpha=1}^{N}\int d\tau\, e^{ik\cdot(b_{\alpha}+v_{\alpha}\tau)}\left[v^{\mu}_{\alpha}\bar{c}^{a}_{\alpha}(\tau)\big|_{O(y^{2})} + c^{a}_{\alpha}\frac{dz^{\mu}_{\alpha}(\tau)}{d\tau}\bigg|_{O(y^{2})}\right],\\
(1e)^{\mu a}(k) &\equiv \sum_{\alpha=1}^{N}\int d\tau\, e^{ik\cdot(b_{\alpha}+v_{\alpha}\tau)}\,ik\cdot z_{\alpha}(\tau)\big|_{O(y^{2})}\,v^{\mu}_{\alpha}c^{a}_{\alpha},\\
(1f)^{\mu a}(k) &\equiv j^{\mu a}(k)\big|_{O(y^{2})}, \qquad (2.26)
\end{aligned}$$

where diagrams (1a), (1b), (1d), and (1e) give $J^{\mu a}(k)$ and diagrams (1c) and (1f) give $j^{\mu a}(k)$, both to second order in $g$ and $y$. The source $J^{\mu a}(k)$ was split into two types of diagrams, as (1a) represents radiation that was emitted after the particle was deflected, while (1b) represents radiation that was emitted before the particle was deflected. As such, it is anticipated that (1b) and (1e) should be proportional to the undeflected quantities $v^{\mu}_{\alpha}c^{a}_{\alpha}$, while (1a) and (1d) are in terms of corrections such as $\frac{dz^{\mu}_{\alpha}}{d\tau}c^{a}_{\alpha}$ and $v^{\mu}_{\alpha}\bar{c}^{a}_{\alpha}$. Diagrams (1c) and (1f) are computed in Appendix (B) from the three-point vertex with three vectors and the three-point vertex with two scalars and one vector, respectively. The six diagrams sum to give $\hat{J}^{\mu a}$ and satisfy the Ward identity $k_{\mu}\hat{J}^{\mu a}(k) = 0$.

[Fig. 1: the six radiative diagrams (1a)-(1f) for particle pairs $\alpha$, $\beta$ contributing to $\hat{J}^{\mu a}$.]

Summing up the three diagrams (1a)-(1c) is algebraically equivalent to $\hat{J}^{\mu a}(k)\big|_{O(g^{2})}$, giving

$$\begin{aligned}
\hat{J}^{\mu a}(k)\big|_{O(g^{2})} = g^{2}\sum_{\alpha=1}^{N}\sum_{\beta\neq\alpha}\int_{l_{\alpha},l_{\beta}}\mu_{\alpha,\beta}(k)\bigg\{ & if^{abc}c^{b}_{\alpha}c^{c}_{\beta}\left[2(k\cdot v_{\beta})v^{\mu}_{\alpha} + (v_{\alpha}\cdot v_{\beta})\left(\frac{l_{\alpha}^{2}}{k\cdot v_{\alpha}}v^{\mu}_{\alpha} - l^{\mu}_{\alpha}\right)\right]\\
&+ \frac{c^{b}_{\alpha}c^{b}_{\beta}}{m_{\alpha}}\frac{l_{\alpha}^{2}\,c^{a}_{\alpha}}{k\cdot v_{\alpha}}\left[v_{\alpha}\cdot v_{\beta}\left(\frac{k\cdot l_{\beta}}{k\cdot v_{\alpha}}v^{\mu}_{\alpha} - l^{\mu}_{\beta}\right) + k\cdot v_{\alpha}v^{\mu}_{\beta} - k\cdot v_{\beta}v^{\mu}_{\alpha}\right]\bigg\}, \qquad (2.27)
\end{aligned}$$
which is the pure Yang-Mills contribution found by Ref. [41]. Summing up the three diagrams (1d)-(1f) is equivalent to $\hat{J}^{\mu a}(k)\big|_{O(y^{2})}$, giving

$$\hat{J}^{\mu a}(k)\big|_{O(y^{2})} = -y^{2}\sum_{\alpha=1}^{N}\sum_{\beta\neq\alpha}c^{\tilde{a}}_{\alpha}c^{\tilde{a}}_{\beta}\int_{l_{\alpha},l_{\beta}}\mu_{\alpha,\beta}(k)\left\{\frac{c^{b}_{\alpha}c^{b}_{\beta}}{m_{\alpha}}\frac{l_{\alpha}^{2}\,c^{a}_{\alpha}}{k\cdot v_{\alpha}}\left[\frac{k\cdot l_{\beta}}{k\cdot v_{\alpha}}v^{\mu}_{\alpha} - l^{\mu}_{\beta}\right] + if^{abc}c^{b}_{\alpha}c^{c}_{\beta}\left[\frac{l_{\alpha}^{2}}{k\cdot v_{\alpha}}v^{\mu}_{\alpha} - l^{\mu}_{\alpha}\right]\right\}. \qquad (2.28)$$
The radiative field must be gauge invariant, and the above expression satisfies the Ward identity $k_{\mu}\hat{J}^{\mu a}(k)\big|_{O(gy^{2})} = 0$, as the identity must be satisfied order by order. Adding the above contributions to Eq. (2.27) gives the total source $\hat{J}^{\mu a}$. To find the radiation field $A^{\mu a}_{\text{rad}}$ from the source $\hat{J}^{\mu a}$ [41,56],
$$A^{\mu a}_{\text{rad}}(x) = \frac{g}{4\pi r}\int\frac{d\omega}{2\pi}e^{-i\omega t}\hat{J}^{\mu a}(k), \qquad (2.29)$$

where $k^{\mu} = \omega(1, \vec{x}/r)$.
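The Ward identity for (2.27) can be spot checked at the level of the scalar invariants. Contracting $k_{\mu}$ into the $f^{abc}$ bracket gives an expression symmetric under $\alpha \leftrightarrow \beta$, which cancels against the antisymmetric color factor in the double sum, while the $c^{b}_{\alpha}c^{b}_{\beta}/m_{\alpha}$ bracket is transverse on its own. A small `sympy` sketch (the symbol names are ours) using the support constraints $v_{\alpha}\cdot l_{\alpha} = v_{\beta}\cdot l_{\beta} = 0$ and $k = l_{\alpha} + l_{\beta}$:

```python
import sympy as sp

# scalar invariants; names are ours: kva = k.v_alpha, vab = v_alpha.v_beta,
# la2 = l_alpha^2, lalb = l_alpha.l_beta, etc.
kva, kvb, vab, la2, lb2, lalb = sp.symbols('kva kvb vab la2 lb2 lalb')
kla = la2 + lalb   # k.l_alpha with k = l_alpha + l_beta
klb = lalb + lb2   # k.l_beta

# k_mu contraction of the f^{abc} bracket in (2.27), for particle alpha and for alpha<->beta
fa = 2*kvb*kva + vab*(la2 - kla)
fb = 2*kva*kvb + vab*(lb2 - klb)
assert sp.simplify(fa - fb) == 0   # symmetric: cancels against f^{abc} c^b_alpha c^c_beta

# k_mu contraction of the (c_alpha.c_beta/m_alpha) bracket vanishes identically
mterm = (la2/kva)*(vab*((klb/kva)*kva - klb) + kva*kvb - kvb*kva)
assert sp.simplify(mterm) == 0
```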
III. GRAVITATIONAL RADIATION IN EINSTEIN-YANG-MILLS THEORY
A. Equations of Motion and Initial Conditions
The action for the Einstein-Yang-Mills-dilaton theory in consideration is
$$S = \int d^{d}x\sqrt{-g}\left[-\frac{2}{\kappa^{2}}R - \frac{1}{4}g^{\mu\rho}g^{\nu\sigma}F^{\tilde{a}}_{\mu\nu}F^{\tilde{a}}_{\rho\sigma} + \frac{2}{\kappa^{2}}(d-2)g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi\right] - m\int d\tau\, e^{\phi}, \qquad (3.1)$$
where $\phi$ is the dilaton field and $d\tau = \sqrt{g_{\mu\nu}dx^{\mu}dx^{\nu}}$. By varying the action above, the energy-momentum pseudotensor contributions from the Yang-Mills field and the dilaton are given by
$$8\pi G\,T_{\mu\nu} = R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R - (d-2)\left[\partial_{\mu}\phi\partial_{\nu}\phi - \frac{1}{2}g_{\mu\nu}g^{\rho\sigma}\partial_{\rho}\phi\partial_{\sigma}\phi\right] + 8\pi G\left[g^{\rho\sigma}F^{\tilde{a}}_{\mu\rho}F^{\tilde{a}}_{\nu\sigma} - \frac{1}{4}g_{\mu\nu}g^{\rho\sigma}g^{\lambda\tau}F^{\tilde{a}}_{\rho\lambda}F^{\tilde{a}}_{\sigma\tau}\right]. \qquad (3.2)$$
According to Dirac, $\sqrt{|g|}\,T^{\mu\nu}$ is the density and flux of energy and momentum for matter [57], such that in the presence of gravity, $N$ particles contribute
$$\sqrt{|g|}\,T^{\mu\nu}(x) = \sum_{\alpha=1}^{N}m_{\alpha}\int d\tau\, v^{\mu}_{\alpha}(\tau)v^{\nu}_{\alpha}(\tau)\delta^{d}(x - x_{\alpha}(\tau)). \qquad (3.3)$$
A weak-field approximation is taken by introducing $h_{\mu\nu}$ as
$$g_{\mu\nu} = \eta_{\mu\nu} + \kappa h_{\mu\nu}, \qquad g^{\mu\nu} = \eta^{\mu\nu} - \kappa h^{\mu\nu} + \kappa^{2}h^{\mu\rho}h^{\nu}{}_{\rho} + \dots, \qquad |g| \equiv -\det(g_{\mu\nu}) = 1 + \kappa h - \frac{\kappa^{2}}{2}\left(h^{\mu\nu}h_{\mu\nu} - h^{2}\right) + \dots, \qquad (3.4)$$
where $h \equiv h^{\rho}{}_{\rho}$ and the radiation can be calculated perturbatively in powers of $\kappa$. Textbook presentations of gravitational waves often focus on linearized gravity [43], which introduces the trace-reversed metric

$$\bar{h}^{\mu\nu} \equiv h^{\mu\nu} - \frac{1}{2}\eta^{\mu\nu}h, \qquad (3.5)$$

and find that $\Box\bar{h}^{\mu\nu} = -\frac{\kappa}{2}T^{\mu\nu}$. If an effective energy-momentum pseudotensor $\hat{T}^{\mu\nu}$ was found to contain contributions from matter, nonlinear gravitational field contributions, and the other fields, then the following equation of motion can be solved iteratively within the context of the weak-field approximation:
$$\Box\bar{h}^{\mu\nu} = -\frac{\kappa}{2}\hat{T}^{\mu\nu}. \qquad (3.6)$$
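Two small algebraic facts used above can be machine-checked (a sketch of ours, with a generic symmetric perturbation): the $O(\kappa^{2})$ inverse metric in (3.4), and the statement that the trace reversal (3.5) is an involution, so $\bar{h}^{\mu\nu}$ carries the same information as $h^{\mu\nu}$:

```python
import sympy as sp

kappa = sp.symbols('kappa')
d = 4
eta = sp.diag(1, -1, -1, -1)
# generic symmetric perturbation h_{mu nu}
h = sp.Matrix(d, d, lambda i, j: sp.Symbol(f'h{min(i, j)}{max(i, j)}'))

# (i) inverse metric of (3.4): g^{mu nu} = eta - kappa h + kappa^2 h.h + O(kappa^3)
g = eta + kappa*h
ginv = eta - kappa*eta*h*eta + kappa**2*eta*h*eta*h*eta
residual = sp.expand(g*ginv) - sp.eye(d)
assert all(sp.expand(entry).coeff(kappa, n) == 0
           for entry in residual for n in range(3))

# (ii) trace reversal (3.5) applied twice returns h
def trace_reverse(m):
    tr = sum(eta[i, i]*m[i, i] for i in range(d))   # h = eta^{rho sigma} h_{rho sigma}
    return m - sp.Rational(1, 2)*eta*tr

hbar = trace_reverse(h)
assert sp.expand(trace_reverse(hbar) - h) == sp.zeros(d)
```

The involution property follows because the trace of $\bar{h}$ is $-h$ in $d = 4$, so reversing again restores the subtracted trace.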
Due to the harmonic gauge condition, the pseudotensor satisfies $\partial_{\mu}\hat{T}^{\mu\nu} = 0$. The field contributions to the pseudotensor $t^{\mu\nu} \equiv \hat{T}^{\mu\nu} - T^{\mu\nu}$ will be found by expanding Eq. (3.2). The pseudotensor slightly differs from the common pseudotensor used by Landau and Lifshitz [43,47,58] and is closer to ones used previously by Einstein and Dirac, giving

$$\hat{T}^{\mu\nu} = T^{\mu\nu} + t^{\mu\nu} \equiv \sqrt{|g|}\,T^{\mu\nu} + \tilde{t}^{\mu\nu}, \qquad (3.7)$$
where $\tilde{t}^{\mu\nu}$ is conveniently defined to absorb $(1 - \sqrt{|g|})T^{\mu\nu}$. In this section, the algebraic method of perturbing Einstein's field equations and iteratively solving for the radiation field is presented, leaving some technical details of the calculation of $\tilde{t}^{\mu\nu}$ to Appendix (A). Since the three-point graviton vertex is derived from the Lagrangian of the full theory, diagrams can encode how to find higher-order field contributions from linearized field solutions. In Appendix (B), radiative Feynman rules are provided for the diagrams contributing to $t^{\mu\nu}$.
The Christoffel symbol $\Gamma^{\rho}_{\mu\nu}$ and the Ricci tensor $R_{\mu\nu}$ are given by

$$\Gamma^{\rho}_{\mu\nu} = \frac{1}{2}g^{\rho\sigma}\left(g_{\sigma\nu,\mu} + g_{\sigma\mu,\nu} - g_{\mu\nu,\sigma}\right), \qquad R_{\mu\nu} = \Gamma^{\rho}_{\mu\nu,\rho} - \Gamma^{\rho}_{\mu\rho,\nu} + \Gamma^{\rho}_{\sigma\rho}\Gamma^{\sigma}_{\mu\nu} - \Gamma^{\rho}_{\sigma\nu}\Gamma^{\sigma}_{\mu\rho}. \qquad (3.8)$$
After expanding the metric perturbatively in $\kappa$ and applying the gauge condition $\partial_{\mu}h^{\mu\nu} = \frac{1}{2}\eta^{\mu\nu}h_{,\mu}$,

$$\Gamma^{\rho}_{\mu\nu} = \frac{\kappa}{2}\left[h^{\rho}{}_{\nu,\mu} + h^{\rho}{}_{\mu,\nu} - h_{\mu\nu}{}^{,\rho} - \kappa h^{\rho\sigma}\left(h_{\sigma\nu,\mu} + h_{\sigma\mu,\nu} - h_{\mu\nu,\sigma}\right)\right] + O(\kappa^{3}), \qquad (3.9)$$

$$\begin{aligned}
R_{\mu\nu} ={}& -\frac{\kappa}{2}\Box h_{\mu\nu} + \frac{\kappa^{2}}{2}\Big[h^{\rho\sigma}\left(h_{\mu\nu,\rho\sigma} + h_{\rho\sigma,\mu\nu} - h_{\sigma\nu,\mu\rho} - h_{\mu\rho,\sigma\nu}\right)\\
&+ h_{\mu\rho,\sigma}h_{\nu}{}^{\rho,\sigma} - h_{\mu\rho,\sigma}h_{\nu}{}^{\sigma,\rho} + \frac{1}{2}h_{\rho\sigma,\mu}h^{\rho\sigma}{}_{,\nu}\Big] + O(\kappa^{3}). \qquad (3.10)
\end{aligned}$$
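The leading $O(\kappa)$ term of the Christoffel expansion can be machine-checked against the exact definition (3.8). The sketch below (our own toy setup: a $d = 2$ metric with arbitrary function components, which does not impose the gauge condition and therefore only probes the $O(\kappa)$ piece, where the gauge condition plays no role):

```python
import sympy as sp

t, x, kappa = sp.symbols('t x kappa')
X = (t, x)
eta = sp.diag(1, -1)
h = sp.Matrix([[sp.Function('h00')(t, x), sp.Function('h01')(t, x)],
               [sp.Function('h01')(t, x), sp.Function('h11')(t, x)]])
g = eta + kappa*h
ginv = g.inv()

def Gamma(r, m, n):
    # exact Christoffel symbol from (3.8)
    return sp.Rational(1, 2)*sum(
        ginv[r, s]*(sp.diff(g[s, n], X[m]) + sp.diff(g[s, m], X[n]) - sp.diff(g[m, n], X[s]))
        for s in range(2))

hup = eta*h   # h^rho_sigma = eta^{rho lambda} h_{lambda sigma}

def Gamma_lin(r, m, n):
    # linearized prediction, the O(kappa) part of (3.9)
    hmn_up = sum(eta[r, s]*sp.diff(h[m, n], X[s]) for s in range(2))
    return kappa/2*(sp.diff(hup[r, n], X[m]) + sp.diff(hup[r, m], X[n]) - hmn_up)

for r in range(2):
    for m in range(2):
        for n in range(2):
            diff = sp.series(Gamma(r, m, n) - Gamma_lin(r, m, n), kappa, 0, 2).removeO()
            assert sp.simplify(diff) == 0
```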
This gives the Ricci scalar $R$,

$$R = \left(\eta^{\mu\nu} - \kappa h^{\mu\nu}\right)R_{\mu\nu} = -\frac{\kappa}{2}\Box h + \kappa^{2}\left[h^{\rho\sigma}\Box h_{\rho\sigma} + \frac{3}{4}h_{\rho\sigma,\mu}h^{\rho\sigma,\mu} - \frac{1}{2}h_{\mu\rho,\sigma}h^{\mu\sigma,\rho}\right] + O(\kappa^{3}). \qquad (3.11)$$

To lowest order, $R^{\mu\nu} - \frac{1}{2}g^{\mu\nu}R \approx -\frac{\kappa}{2}\Box\bar{h}^{\mu\nu}$, and all higher-order terms in Eq. (3.2) are subtracted to the other side of the equation to be absorbed into the definition of $t^{\mu\nu}$. Splitting these terms between $t^{\mu\nu}|_{\Delta h}$, $t^{\mu\nu}|_{\Delta\phi}$, and $t^{\mu\nu}|_{\Delta A}$ gives
$$\begin{aligned}
t^{\mu\nu}\big|_{\Delta h} ={}& 2h_{\rho\sigma}\left(h^{\mu\rho,\nu\sigma} + h^{\nu\sigma,\mu\rho} - h^{\mu\nu,\rho\sigma} - h^{\rho\sigma,\mu\nu}\right) + h^{\mu\nu}\Box h - 2h^{\mu\rho}\Box h^{\nu}{}_{\rho} - 2h^{\nu\rho}\Box h^{\mu}{}_{\rho}\\
&- 2h^{\mu\rho,\sigma}\left(h^{\nu}{}_{\rho,\sigma} - h^{\nu}{}_{\sigma,\rho}\right) - h_{\rho\sigma}{}^{,\mu}h^{\rho\sigma,\nu} + \eta^{\mu\nu}\left[2h^{\rho\sigma}\Box h_{\rho\sigma} + h_{\rho\sigma,\lambda}\left(\frac{3}{2}h^{\rho\sigma,\lambda} - h^{\rho\lambda,\sigma}\right)\right],\\
t^{\mu\nu}\big|_{\Delta\phi} ={}& (d-2)\left(\frac{2}{\kappa}\right)^{2}\left[\partial^{\mu}\phi\partial^{\nu}\phi - \frac{1}{2}\eta^{\mu\nu}\partial_{\rho}\phi\partial^{\rho}\phi\right],\\
t^{\mu\nu}\big|_{\Delta A} ={}& -F^{\mu\rho\tilde{a}}F^{\nu\tilde{a}}{}_{\rho} + \frac{1}{4}\eta^{\mu\nu}F^{\rho\sigma\tilde{a}}F^{\tilde{a}}_{\rho\sigma}, \qquad (3.12)
\end{aligned}$$
where it is important to raise the indices on $R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R$ with $g^{\mu\nu}$ to get all of the necessary terms.
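A quick numerical spot check (ours) of the gauge-field piece $t^{\mu\nu}|_{\Delta A}$: in $d = 4$ it is traceless, $\eta_{\mu\nu}t^{\mu\nu}|_{\Delta A} = -F^{\mu\rho}F_{\mu\rho} + F^{\rho\sigma}F_{\rho\sigma} = 0$, as expected for a Maxwell-type stress tensor. Here a single color component with a random antisymmetric $F_{\mu\nu}$ suffices:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])
Fdn = rng.normal(size=(4, 4))
Fdn = Fdn - Fdn.T                  # antisymmetric F_{mu nu}
Fup = eta @ Fdn @ eta              # F^{mu nu}
Fmix = eta @ Fdn                   # F^{mu}_{nu} (first index raised)
# T^{mu nu} = -F^{mu rho} F^{nu}_{rho} + (1/4) eta^{mu nu} F^{rho sigma} F_{rho sigma}
T = -Fup @ Fmix.T + 0.25*eta*np.einsum('ab,ab->', Fup, Fdn)
trace = np.einsum('mn,mn->', eta, T)   # eta_{mu nu} T^{mu nu}
assert abs(trace) < 1e-12
```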
Similar to the previous section, the position of the particle is given by $x^{\mu}_{\alpha}(\tau)$, which has deflections $z^{\mu}_{\alpha}(\tau)$ that must be calculated from the field solutions. The matter is also assumed to have a color charge $c^{\tilde{a}}_{\alpha}(\tau)$, but their corrections do not source the lowest-order gravitational radiation field. The Christoffel symbol can be used to find the force on each particle, giving
$$m_{\alpha}\frac{d^{2}z^{\mu}_{\alpha}(\tau)}{d\tau^{2}}\bigg|_{\Delta h} = -\Gamma^{\mu}_{\nu\rho}m_{\alpha}v^{\nu}_{\alpha}v^{\rho}_{\alpha}. \qquad (3.13)$$
The equation of motion utilized for the dilaton is
m α d 2 z µ α (τ ) dτ 2 ∆φ = m α v αν ∂ µ φv ν α .
(3.14)
While this equation differs slightly from Ref. [41], both of our total pseudotensors agree and are the physical object that satisfies the gauge-invariant Ward identity. The force due to the gauge field is
$$m_\alpha \frac{d^2 z^\mu_\alpha(\tau)}{d\tau^2}\Big|_{\Delta A} = \tilde g\, c^{\tilde a}_\alpha F^{\mu\nu\tilde a}\, v_{\alpha\nu}(\tau). \quad (3.15)$$

B. Solutions of the Radiation Fields

Fig. (2) shows nine diagrams which contribute to gravitational radiation for Einstein-Yang-Mills theory. Algebraically, the first three diagrams, for the pure gravity contributions,
are
$$(2a)^{\mu\nu} + (2b)^{\mu\nu} = \sqrt{|g|}\,T^{\mu\nu}\big|_{\Delta h,\,O(\kappa^2)}, \qquad (2c)^{\mu\nu} = \hat t^{\mu\nu}\big|_{\Delta h,\,O(\kappa^2)} \equiv t^{\mu\nu}\big|_{\Delta h,\,O(\kappa^2)} + \left(1 - \sqrt{|g|}\right)T^{\mu\nu}\big|_{\Delta h,\,O(\kappa^2)}, \quad (3.16)$$
while the diagrams with internal dilatons algebraically represent
$$(2d)^{\mu\nu} + (2e)^{\mu\nu} = \sqrt{|g|}\,T^{\mu\nu}\big|_{\Delta\phi,\,O(\kappa^2)}, \qquad (2f)^{\mu\nu} = \hat t^{\mu\nu}\big|_{\Delta\phi,\,O(\kappa^2)} \equiv t^{\mu\nu}\big|_{\Delta\phi,\,O(\kappa^2)}. \quad (3.17)$$
Focusing on the energy-momentum tensor $\sqrt{|g|}\,T^{\mu\nu}$,
$$\sqrt{|g|}\,T^{\mu\nu} = \sum_{\alpha=1}^{N} m_\alpha \int d\tau\, e^{ik\cdot x_\alpha(\tau)}\left(v^\mu_\alpha + \frac{dz^\mu_\alpha(\tau)}{d\tau}\right)\left(v^\nu_\alpha + \frac{dz^\nu_\alpha(\tau)}{d\tau}\right). \quad (3.19)$$
Solving for the appropriate field equations gives $h^{\mu\nu}$, $\phi$, and $A^{\mu\tilde a}$ to lowest order,
$$h^{\mu\nu}(x)\big|_{O(\kappa^1)} = \frac{\kappa}{2}\sum_{\alpha=1}^{N} m_\alpha \int_l (2\pi)\delta(l\cdot v_\alpha)\, \frac{e^{-il\cdot(x-b_\alpha)}}{l^2}\left(v^\mu_\alpha v^\nu_\alpha - \frac{\eta^{\mu\nu}}{d-2}\right),$$
$$\phi(x)\big|_{O(\kappa^2)} = \frac{1}{(d-2)}\left(\frac{\kappa}{2}\right)^2 \sum_{\alpha=1}^{N} m_\alpha \int_l (2\pi)\delta(l\cdot v_\alpha)\, \frac{e^{-il\cdot(x-b_\alpha)}}{l^2},$$
$$A^{\mu\tilde a}(x)\big|_{O(\tilde g^1)} = -\tilde g\sum_{\alpha=1}^{N} \int_l (2\pi)\delta(l\cdot v_\alpha)\, \frac{e^{-il\cdot(x-b_\alpha)}}{l^2}\, c^{\tilde a}_\alpha v^\mu_\alpha. \quad (3.20)$$
Plugging the lowest-order field solutions into Eqs. (3.13), (3.14), and (3.15) gives
$$\frac{d^2 z^\mu_\alpha(\tau)}{d\tau^2}\Big|_{\Delta h} = i\left(\frac{\kappa}{2}\right)^2 \sum_{\beta\neq\alpha} m_\beta \int_{l_\beta}(2\pi)\delta(l_\beta\cdot v_\beta)\, \frac{e^{-il_\beta\cdot(x-b_\beta)}}{l^2_\beta} \times \left[2(v_\alpha\cdot v_\beta)\, k\cdot v_\alpha\, v^\mu_\beta - \frac{2\,k\cdot v_\alpha}{d-2}\, v^\mu_\alpha - \left((v_\alpha\cdot v_\beta)^2 - \frac{1}{d-2}\right) l^\mu_\beta\right],$$
$$\frac{d^2 z^\mu_\alpha(\tau)}{d\tau^2}\Big|_{\Delta\phi} = -\frac{i}{d-2}\left(\frac{\kappa}{2}\right)^2 \sum_{\beta\neq\alpha} m_\beta \int_{l_\beta}(2\pi)\delta(l_\beta\cdot v_\beta)\, \frac{e^{-il_\beta\cdot(x-b_\beta)}}{l^2_\beta}\, l^\mu_\beta, \quad (3.21)$$
$$\frac{d^2 z^\mu_\alpha}{d\tau^2}\Big|_{\Delta A} = i\tilde g^2 \sum_{\beta\neq\alpha} \frac{c^{\tilde a}_\alpha c^{\tilde a}_\beta}{m_\alpha} \int_l (2\pi)\delta(l\cdot v_\beta)\, \frac{e^{-il\cdot(b_{\alpha\beta}+v_\alpha\tau)}}{l^2}\left[(v_\alpha\cdot v_\beta)\, l^\mu - (l\cdot v_\alpha)\, v^\mu_\beta\right].$$
The corrections to the position are useful for finding $\sqrt{|g|}\,T^{\mu\nu}(k)$,
$$\sqrt{|g|}\,T^{\mu\nu}(k) = \sum_{\alpha=1}^{N} m_\alpha \int d\tau\, e^{ik\cdot(b_\alpha + v_\alpha\tau + z_\alpha(\tau))}\left(v^\mu_\alpha + \frac{dz^\mu_\alpha(\tau)}{d\tau}\right)\left(v^\nu_\alpha + \frac{dz^\nu_\alpha(\tau)}{d\tau}\right),$$
$$\sqrt{|g|}\,T^{\mu\nu}(k)\big|_{O(\kappa^2)} = \sum_{\alpha=1}^{N} m_\alpha \int d\tau\, e^{ik\cdot(b_\alpha + v_\alpha\tau)}\left[i\, k\cdot z_\alpha\, v^\mu_\alpha v^\nu_\alpha + \frac{dz^\mu_\alpha}{d\tau}\, v^\nu_\alpha + v^\mu_\alpha\, \frac{dz^\nu_\alpha}{d\tau}\right]. \quad (3.22)$$
The lowest-order term proportional to $v^\mu_\alpha v^\nu_\alpha$ may be dropped, as it was used to find the solution to $h^{\mu\nu}$ in Eq. (3.20). Focusing on the corrections due to gravity,
|g|T µν (k)| ∆h = κ 2 2 α =β m α m β lα,l β µ α,β (k)l 2 α (3.23) × v µ α v ν α 2v α · v β k · v β k · v α + 2 d − 2 − l β · k (k · v α ) 2 (v α · v β ) 2 − 1 d − 2 − 2v α · v β (v µ α v ν β + v ν α v µ β ) + 1 k · v α (v α · v β ) 2 − 1 d − 2 (v µ α l ν β + v ν α l µ β ) .
Additionally, the dilaton contributes
$$\sqrt{|g|}\,T^{\mu\nu}(k)\big|_{\Delta\phi} = \frac{1}{d-2}\left(\frac{\kappa}{2}\right)^2\sum_{\alpha\neq\beta} m_\alpha m_\beta \int_{l_\alpha,l_\beta}\mu_{\alpha,\beta}(k)\, l^2_\alpha\left[-v^\mu_\alpha v^\nu_\alpha\,\frac{l_\beta\cdot k}{(k\cdot v_\alpha)^2} + \frac{1}{k\cdot v_\alpha}\left(v^\mu_\alpha l^\nu_\beta + v^\nu_\alpha l^\mu_\beta\right)\right]. \quad (3.24)$$
Finally, the gauge boson contributes
|g|T µν (k)| ∆A =g 2 N α=1 β =α cã α cã β lα,l β µ α,β (k) l 2 α k · v α v µ α v ν α v α · v β k · l β k · v α − k · v β + (v µ α v ν β + v ν α v µ β )(k · v α ) − (v µ α l ν β + v ν α l µ β )(v α · v β ) .(3.25)
Appendix (A) works out the algebraic details of computing $t^{\mu\nu}|_{\Delta h}$ and $\hat t^{\mu\nu}|_{\Delta h}$. In summary, gravity contributes
$$\hat t^{\mu\nu}(k)\big|_{\Delta h} = \left(\frac{\kappa}{2}\right)^2\sum_{\alpha=1}^{N}\sum_{\beta\neq\alpha} m_\alpha m_\beta \int_{l_\alpha,l_\beta}\mu_{\alpha,\beta}(k)\Bigg[2\, v^\mu_\alpha v^\nu_\alpha\left((k\cdot v_\beta)^2 - \frac{l^2_\alpha}{d-2}\right) \quad (3.26)$$
$$+ \left(v^\mu_\alpha v^\nu_\beta + v^\nu_\alpha v^\mu_\beta\right)\left(l^2_\alpha\, v_\alpha\cdot v_\beta - k\cdot v_\alpha\, k\cdot v_\beta\right) - 2\left(v^\mu_\alpha l^\nu_\alpha + v^\nu_\alpha l^\mu_\alpha\right)\left(v_\alpha\cdot v_\beta\, k\cdot v_\beta\right) + l^\mu_\alpha l^\nu_\alpha\left((v_\alpha\cdot v_\beta)^2 - \frac{1}{d-2}\right)$$
$$+ \eta^{\mu\nu}\left(k\cdot v_\alpha\, k\cdot v_\beta\, v_\alpha\cdot v_\beta - \frac{l^2_\alpha}{2}\left((v_\alpha\cdot v_\beta)^2 - \frac{1}{d-2}\right)\right)\Bigg].$$
Similarly, from Eq. (3.12), the dilaton contributes
$$\hat t^{\mu\nu}(k)\big|_{\Delta\phi} = \frac{1}{(d-2)}\left(\frac{\kappa}{2}\right)^2 \sum_{\alpha=1}^{N}\sum_{\beta\neq\alpha} m_\alpha m_\beta \int_{l_\alpha,l_\beta}\mu_{\alpha,\beta}(k)\left[-l^\mu_\alpha l^\nu_\beta + \eta^{\mu\nu}\,\frac{l_\alpha\cdot l_\beta}{2}\right]. \quad (3.27)$$
When calculated algebraically from Eq. (3.12), the gauge boson contributes
t µν (k)| ∆A =g 2 N α=1 β =α cã α cã β lα,l β µ α,β (k) 1 2 v µ α v ν β + v ν α v µ α l α · l β (3.28) + (v µ α l ν α + v ν α l µ α ) k · v β − l µ α l ν α v α · v β − 1 2 η µν (k · v α k · v β + v α · v β l α · l β ) .
In Appendix (B), the three-point boson vertices of Einstein-Yang-Mills theory are used to find the same results for t µν via radiative Feynman rules.
Summing all contributions to order κ 2 gives the contributions from pure dilaton gravity,
T µν (k)| O(κ 2 ) = κ 2 2 N α=1 β =α m α m β lα,l β µ α,β (k) v µ α v ν α 2(k · v β ) 2 + 2k · v β l 2 α v α · v β k · v α − l 2 α (v α · v β ) 2 k · l β (k · v α ) 2 − (v µ α v ν β + v ν α v µ β ) l 2 α v α · v β + k · v α k · v β − (v µ α l ν α + v ν α l µ α )(v α · v β ) l 2 α v α · v β k · v α + 2k · v β + l µ α l ν α (v α · v β ) 2 + η µν (v α · v β ) l 2 α v α · v β 2 + k · v α k · v β . (3.29)
This agrees with the results found in Ref. [41]. $\sqrt{|g|}\,T^{\mu\nu}|_{\Delta A}$ and $\hat t^{\mu\nu}|_{\Delta A}$ give the additional contributions in Einstein-Yang-Mills theory,
T µν (k)| O(g 2 ) =g 2 N α=1 β =α lα,l β µ α,β (k) v µ α v ν α v α · v β k · l β (k · v α ) 2 − k · v β k · v α l 2 α + 1 2 (v µ α v ν β + v ν α v µ β )l 2 α (3.30) + (v µ α l ν α + v ν α l µ α ) l 2 α v α · v β k · v α + k · v β − l µ α l ν α v α · v β − 1 2 η µν k · v α k · v β + l 2 α v α · v β .
Adding this result to Eq. (3.29) gives the total source for gravitational radiation for Einstein-Yang-Mills theory. Next, we show that this result agrees precisely with what is found with the radiative double-copy method.
C. The Radiative Double Copy
In order to use the double copy to find gravitational radiation in Einstein-Yang-Mills theory, the same replacement rules used for general relativity [41] may be used with the radiation found in Yang-Mills-biadjoint-scalar theory. The replacement rules are
$$g \to \frac{\kappa}{2}, \qquad y \to \tilde g, \qquad c^a_\alpha \to p^\nu_\alpha,$$
$$i f^{a_1 a_2 a_3} \to -\frac{1}{2}\left(\eta^{\nu_1\nu_3}(q_1 - q_3)^{\nu_2} + \eta^{\nu_1\nu_2}(q_2 - q_1)^{\nu_3} + \eta^{\nu_2\nu_3}(q_3 - q_2)^{\nu_1}\right),$$
$$\hat J^{\mu a}(k) \to \hat T^{\mu\nu}(k), \quad (3.31)$$
where the momenta satisfy $q_1 + q_2 + q_3 = 0$. Similar to the Ward identity $k_\mu \hat J^{\mu a} = 0$, we can shift $\hat T^{\mu\nu}$ by terms proportional to either $k^\mu$ or $k^\nu$, such that $k_\mu \hat T^{\mu\nu} = k_\nu \hat T^{\mu\nu} = 0$, which shifts the gauge-dependent pseudotensor into the harmonic gauge. Since Ref. [41] showed that the radiative double copy could recover $\hat T^{\mu\nu}|_{O(\kappa^2)}$ and Ref. [40] showed how to use Yang-Mills ghosts to remove the dilaton, we focus on the additional terms introduced in Einstein-Yang-Mills theory. Applying the double copy replacement rules in Eq. (3.31) to Eq. (2.28) gives
T µν (k)|g2 =g 2 N α=1 β =α m α m β cã α cã β lα,l β µ α,β (k) v α · v β l 2 α v ν α k · v α k · l β k · v α v µ α − l µ β (3.32) − 1 2 2k · v β v ν α − 2k · v α v ν β + v α · v β (l β − l α ) ν l 2 α k · v α v µ α − l µ α .
Shifting $l^\mu_\beta \to (l_\beta - l_\alpha)^\mu/2$ gives the gauge-invariant $\hat T^{\mu\nu}$,
T µν (k)| O(g 2 ) =g 2 N α=1 β =α m α m β cã α cã β lα,l β µ α,β (k) (3.33) × v α · v β l 2 α v ν α k · v α k · l β k · v α v µ α − 1 2 (l β − l α ) µ − 1 2 2k · v β v ν α − 2k · v α v ν β + v α · v β (l β − l α ) ν l 2 α k · v α v µ α − l µ α .
Symmetrizing this result gives the appropriate final expression for $\hat T^{\mu\nu}$,
T µν | O(g 2 ) = −g 2 κ 2 2 N α=1 β =α m α m β cã α cã β lα,l β µ α,β (k) (3.34) × v µ α v ν α k · v β k · v α − v α · v β (k · v α ) 2 k · l β l 2 α − 1 2 (v µ α v ν β + v ν α v µ β )l 2 α − (v µ α l ν α + v ν α l µ α ) v α · v β k · v α l 2 α + k · v β + l µ α l ν α (v α · v β ) + 1 2 η µν l 2 α (v α · v β ) ,
where the gauge condition allows for $v^{(\mu}_\alpha k^{\nu)} = \frac{1}{2}\eta^{\mu\nu}\, k\cdot v_\alpha$. This result agrees precisely with what was found in Eq. (3.30), demonstrating that the radiative double copy holds for Einstein-Yang-Mills theory to leading order.
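The gauge-invariant shifts used above (adding terms proportional to $k^\mu$ or $k^\nu$ to $\hat T^{\mu\nu}$) drop out once the pseudotensor is contracted with an on-shell polarization. The numeric sketch below is illustrative only; the null momentum, the helicity-style polarization, and the random stand-in tensor are assumptions, not quantities from the paper:

```python
import numpy as np

# Illustration: contracting with a polarization satisfying eps_mu k^mu = 0
# makes shifts of T^{mu nu} by terms proportional to k invisible.
rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

k = np.array([1.0, 0.0, 0.0, 1.0])                   # null momentum along z
eps = np.array([0.0, 1.0, 1.0j, 0.0]) / np.sqrt(2)   # assumed helicity polarization
assert abs(eps @ eta @ k) < 1e-12                    # eps_mu k^mu = 0

T = rng.normal(size=(4, 4))                          # stand-in pseudotensor
a = rng.normal(size=4)                               # arbitrary shift vector
T_shifted = T + np.outer(k, a) + np.outer(a, k)

def amplitude(M):
    # schematic eps_mu eps_nu M^{mu nu} contraction
    eps_low = eta @ eps
    return eps_low @ M @ eps_low

print(np.isclose(amplitude(T), amplitude(T_shifted)))
```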
D. Einstein-Maxwell Theory
Since it is more physically relevant to scatter massive point particles with electric charge rather than particles with weak-isospin or color, an Abelian U(1) gauge symmetry is also worth studying. The action for fields in Einstein-Maxwell theory is
$$S = \int d^d x\, \sqrt{|g|}\left[-\frac{2}{\kappa^2}\, R - \frac{1}{4}\, g^{\mu\rho} g^{\nu\sigma} F_{\mu\nu} F_{\rho\sigma}\right]. \quad (3.35)$$
When comparing with Einstein-Yang-Mills theory, the Maxwell field A µ can be recovered from a single component of the Yang-Mills field A µã . In order to find results in Einstein-Maxwell theory from Einstein-Yang-Mills theory, care must be taken with the coupling constants. For example, the Maxwell current density for point particles is given by
$$J^\mu(x) = e\sum_{\alpha=1}^{N} q_\alpha \int d\tau\, v^\mu_\alpha(\tau)\,\delta^d\left(x - x_\alpha(\tau)\right), \quad (3.36)$$
where q α = −1 for electrons, such that eq α represents the electric charge of particle α.
In order to recover Einstein-Maxwell theory from Einstein-Yang-Mills, one must substitute $\tilde g \to e$ and $c^{\tilde a}_\alpha \to q_\alpha$, given our conventions for $\tilde g$ and the normalization of the Lagrangian given in Eq. (3.1). Applying these substitutions to Eq. (3.31) would give gravitational radiation in Einstein-Maxwell theory. At higher orders, $f^{\tilde a\tilde b\tilde c}$ would be sent to zero as well.
In terms of the radiative double copy, an adjoint scalar field $\Phi^a$ could also be seen as a single component of the biadjoint scalar field $\Phi^{\tilde a a}$. Results for Yang-Mills-adjoint-scalar theory can easily be found from Eq. (2.28) by properly sending $c^{\tilde a}_\alpha \to q_\alpha$ and reinterpreting $y$ as the coupling constant of the adjoint scalar theory. It is straightforward to see that the double copy of Yang-Mills-adjoint-scalar theory gives solutions in Einstein-Maxwell theory with the replacement rules shown in Eq. (3.31) and $y \to e$.
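The coupling and charge substitutions discussed in this section are pure bookkeeping, which can be sketched symbolically. The symbols below are assumptions standing in for the index-suppressed objects ($c$ for $c^a_\alpha$, $\tilde c$ for $c^{\tilde a}_\alpha$, $p$ for $p^\nu_\alpha$); this is not the complete tensor content of Eq. (3.31):

```python
import sympy as sp

# schematic symbols standing in for couplings and (index-suppressed) charges
g, y, kappa, gtilde, e, c, ctil, p, q = sp.symbols('g y kappa gtilde e c ctil p q')

# one schematic term of the Yang-Mills-biadjoint-scalar current:
# gauge coupling g, scalar coupling y, adjoint charge c, biadjoint charge ctil
J_term = g * y * c * ctil

# double copy into Einstein-Yang-Mills (Eq. 3.31): g -> kappa/2, y -> gtilde, c -> p
T_term = J_term.subs({g: kappa / 2, y: gtilde, c: p})

# Einstein-Maxwell from Einstein-Yang-Mills: gtilde -> e and remaining charge ctil -> q
T_em = T_term.subs({gtilde: e, ctil: q})
print(T_em)
```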
IV. CONCLUSIONS
In previous work, the double copy has been applied to gravitational radiation in general relativity with a dilaton, which suggested that schematic radiative diagrams may be useful for depicting sources of radiation [41]. Similarly, it was shown that the same replacement rules can be used to find Yang-Mills radiation from biadjoint-scalar radiation [42] and that ghosts can be used to remove the dilaton [40].
In this work, the gravitational radiation produced by colliding color charges was found within the context of Einstein-Yang-Mills theory. Our primary result demonstrates that the double copy can be used to find radiation in Einstein-Yang-Mills theory from Yang-Mills-biadjoint-scalar theory. These calculations provided insight into how a radiative diagrammatic scheme closer to the Feynman diagrams used for scattering amplitudes may be possible. Furthermore, radiation in Einstein-Maxwell theory can be found via similar methods. This work suggests that it may be possible to develop systematic rules for constructing radiative diagrams that can be used to calculate radiation to higher orders, at least for initial conditions associated with N-particle scattering. It appears that rules for worldline propagators would be needed, in addition to the typical rules used for scattering amplitudes.
In future work, it would be interesting to investigate if the radiative double copy holds for higher orders, as the precise replacement rules are not yet known. Additional efforts to perform the integrals are also needed. The gravitational interactions between the quantized spin of Dirac particles would be an interesting theoretical challenge, while considering the scattering of macroscopic mass distributions with classical angular momentum would be more applicable for experiments such as LIGO. Studying the formation of bound states due to higher order effects would also be important.
The author would like to thank Zvi Bern for constant guidance, as well as Walter Goldberger, Donal O'Connell, Alexander Ridgway, Jedidiah Thompson, and Julio Parra-Martinez for various discussions.
Appendix A: Derivation of Gravitational Radiation from Pseudotensor
In this section, the steps for deriving the gravitational radiation coming from nonlinear gravitational interactions are provided. In Section (III), Einstein's field equations to first order for weak gravitational fields were found to be
$$\Box \bar h^{\mu\nu} = -\frac{\kappa}{2}\,\hat T^{\mu\nu}, \quad (A1)$$
where the energy-momentum pseudotensor $\hat T^{\mu\nu} = T^{\mu\nu} + t^{\mu\nu} = \sqrt{|g|}\,T^{\mu\nu} + \hat t^{\mu\nu}$ contains the nonlinear corrections to the linearized field equations, such that the purely gravitational component of the pseudotensor $t^{\mu\nu}$ is given by Eq. (3.12),
$$t^{\mu\nu} = 2h_{\rho\sigma}\left(h^{\mu\rho,\nu\sigma} + h^{\nu\sigma,\mu\rho} - h^{\mu\nu,\rho\sigma} - h^{\rho\sigma,\mu\nu}\right) + h^{\mu\nu}\Box h - 2h^{\mu\rho}\Box h^{\nu}{}_{\rho} - 2h^{\nu\rho}\Box h^{\mu}{}_{\rho} \quad (A2)$$
$$\qquad - 2h^{\mu\rho,\sigma}\left(h^{\nu}{}_{\rho,\sigma} - h^{\nu}{}_{\sigma,\rho}\right) - h^{\rho\sigma,\mu}\, h_{\rho\sigma}{}^{,\nu} + \eta^{\mu\nu}\left[2h^{\rho\sigma}\Box h_{\rho\sigma} + h_{\rho\sigma,\lambda}\left(\frac{3}{2}\, h^{\rho\sigma,\lambda} - h^{\rho\lambda,\sigma}\right)\right].$$
In order to solve for this, the lowest-order solution of the gravitational field is used,
$$h^{\mu\nu}(x) = \frac{\kappa}{2}\sum_{\alpha=1}^{N} m_\alpha \int_{l_\alpha}(2\pi)\delta(l_\alpha\cdot v_\alpha)\, \frac{e^{-il_\alpha\cdot(x-b_\alpha)}}{l^2_\alpha}\left(v^\mu_\alpha v^\nu_\alpha - \frac{\eta^{\mu\nu}}{d-2}\right), \quad (A3)$$
which gives rise to a source for the nonlinear gravitational interaction via $t^{\mu\nu}$. Each term in $t^{\mu\nu}$ is second order in $h^{\mu\nu}$, so one factor is associated with particle $\alpha$ and the other with particle $\beta$, giving a double sum. The summation and integrals for all terms will have the following form:
$$t^{\mu\nu} = \left(\frac{\kappa}{2}\right)^2 \sum_{\alpha=1}^{N}\sum_{\beta\neq\alpha} m_\alpha m_\beta \int_{l_\alpha,l_\beta} \mu_{\alpha,\beta}(k)\, I^{\mu\nu}, \quad (A4)$$
where $I^{\mu\nu}$ is the integrand containing many terms. For the integrand, focusing on the $\left(v^\mu_\alpha v^\nu_\alpha - \eta^{\mu\nu}/(d-2)\right)$ portion of the solution to $h^{\mu\nu}$ and manually plugging these pieces into Eq. (A2) gives
$$I^{\mu\nu} = v^\mu_\alpha v^\nu_\alpha\left(2(k\cdot v_\beta)^2 - \frac{4l^2_\alpha}{d-2}\right) + \left(v^\mu_\alpha v^\nu_\beta + v^\nu_\alpha v^\mu_\beta\right)\left(l^2_\alpha\, v_\alpha\cdot v_\beta - k\cdot v_\alpha\, k\cdot v_\beta\right) - 2(v_\alpha\cdot v_\beta)\, k\cdot v_\beta\left(v^\mu_\alpha l^\nu_\alpha + v^\nu_\alpha l^\mu_\alpha\right) + l^\mu_\alpha l^\nu_\alpha\left((v_\alpha\cdot v_\beta)^2 - \frac{1}{d-2}\right)$$
$$+ \eta^{\mu\nu}\left(\frac{2(k\cdot v_\alpha)^2}{d-2} + \frac{k\cdot l_\alpha}{2}(v_\alpha\cdot v_\beta)^2 - \frac{k\cdot l_\alpha}{2(d-2)}\right) + \eta^{\mu\nu}\left(-\frac{2(k\cdot v_\beta)^2}{d-2} + \frac{2l^2_\alpha}{(d-2)^2} - \frac{2l^2_\beta}{(d-2)^2} + \frac{4l^2_\alpha}{(d-2)^2} + \frac{2\,l_\alpha\cdot l_\beta}{(d-2)^2}\right) \quad (A7)$$
$$+ \eta^{\mu\nu}\left(-\left(2l^2_\alpha + \frac{3}{2}\,l_\alpha\cdot l_\beta\right)\left((v_\alpha\cdot v_\beta)^2 - \frac{2}{d-2} + \frac{d}{(d-2)^2}\right) + v_\alpha\cdot v_\beta\, k\cdot v_\alpha\, k\cdot v_\beta + \frac{l_\alpha\cdot l_\beta}{(d-2)^2}\right).$$
By considering that α and β are symmetric, all particle labels may be switched for any term, which allows further simplification to give the final result
$$I^{\mu\nu} = v^\mu_\alpha v^\nu_\alpha\left(2(k\cdot v_\beta)^2 - \frac{4l^2_\alpha}{d-2}\right) + \left(v^\mu_\alpha v^\nu_\beta + v^\nu_\alpha v^\mu_\beta\right)\left(l^2_\alpha\, v_\alpha\cdot v_\beta - k\cdot v_\alpha\, k\cdot v_\beta\right) - 2(v_\alpha\cdot v_\beta)\, k\cdot v_\beta\left(v^\mu_\alpha l^\nu_\alpha + v^\nu_\alpha l^\mu_\alpha\right)$$
$$+ l^\mu_\alpha l^\nu_\alpha\left((v_\alpha\cdot v_\beta)^2 - \frac{1}{d-2}\right) + \eta^{\mu\nu}\left(v_\alpha\cdot v_\beta\, k\cdot v_\alpha\, k\cdot v_\beta - \frac{l^2_\alpha}{2}\left((v_\alpha\cdot v_\beta)^2 - \frac{1}{d-2}\right)\right). \quad (A8)$$
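The ordered double sum $\sum_{\alpha=1}^{N}\sum_{\beta\neq\alpha}$ appearing in Eq. (A4) and throughout runs over ordered pairs of distinct particle labels; a minimal sketch for a toy $N = 3$:

```python
from itertools import permutations

# ordered pairs (alpha, beta) with beta != alpha, as in the double sums above
particles = [1, 2, 3]  # toy example with N = 3
pairs = list(permutations(particles, 2))
print(pairs)  # N*(N-1) = 6 ordered pairs
```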
To more easily compare with the diagrammatic method, $\hat t^{\mu\nu}$ is found by adding the lowest-order term of $\left(1 - \sqrt{|g|}\right)T^{\mu\nu}$, where
$$T^{\mu\nu}(x) \approx \sum_{\alpha=1}^{N} m_\alpha \int_{l_\alpha}(2\pi)\delta(v_\alpha\cdot l_\alpha)\, e^{-il_\alpha\cdot(x-b_\alpha)}\, v^\mu_\alpha v^\nu_\alpha,$$
$$h(x) \approx \frac{-\kappa}{d-2}\sum_{\beta\neq\alpha} m_\beta \int_{l_\beta}(2\pi)\delta(l_\beta\cdot v_\beta)\, \frac{e^{-il_\beta\cdot(x-b_\beta)}}{l^2_\beta},$$
$$\left(1 - \sqrt{|g|}\right)T^{\mu\nu} \approx \frac{1}{d-2}\left(\frac{\kappa}{2}\right)^2 m_\alpha m_\beta \int_{l_\alpha,l_\beta}\mu_{\alpha,\beta}(k)\, 2l^2_\alpha\, v^\mu_\alpha v^\nu_\alpha. \quad (A9)$$
Adding this to $t^{\mu\nu}$ gives $\hat t^{\mu\nu} \equiv \left(\frac{\kappa}{2}\right)^2 \sum_{\alpha=1}^{N}\sum_{\beta\neq\alpha} m_\alpha m_\beta \int_{l_\alpha,l_\beta}\mu_{\alpha,\beta}(k)\,\hat I^{\mu\nu}$, such that
$$\hat I^{\mu\nu} = v^\mu_\alpha v^\nu_\alpha\left(2(k\cdot v_\beta)^2 - \frac{2l^2_\alpha}{d-2}\right) + \left(v^\mu_\alpha v^\nu_\beta + v^\nu_\alpha v^\mu_\beta\right)\left(l^2_\alpha\, v_\alpha\cdot v_\beta - k\cdot v_\alpha\, k\cdot v_\beta\right) - 2(v_\alpha\cdot v_\beta)\, k\cdot v_\beta\left(v^\mu_\alpha l^\nu_\alpha + v^\nu_\alpha l^\mu_\alpha\right)$$
$$+ l^\mu_\alpha l^\nu_\alpha\left((v_\alpha\cdot v_\beta)^2 - \frac{1}{d-2}\right) + \eta^{\mu\nu}\left(v_\alpha\cdot v_\beta\, k\cdot v_\alpha\, k\cdot v_\beta - \frac{l^2_\alpha}{2}\left((v_\alpha\cdot v_\beta)^2 - \frac{1}{d-2}\right)\right). \quad (A10)$$
As shown in the next appendix, this result agrees precisely with a diagram involving the three-point graviton vertex.
Appendix B: Some Radiative Feynman Rules
Yang-Mills and Biadjoint-Scalar Theory
A Feynman diagram approach can be used to find the results for diagrams (1c) and
(1f), shown in Fig. (1). Expanding the kinetic term of the Lagrangian, the O(A 3 ) term corresponding to the three-point vector boson interaction is
− 1 4 F a µν F µνa = −∂ µ A a ν gf abc A µb A νc + . . . .(B1)
This term in the Lagrangian gives the textbook non-Abelian three-point vector boson vertex,
given by
$$\Gamma^{\mu a,\nu b,\rho c}(k, p, q) = f^{abc}\left((k^\nu - q^\nu)\eta^{\mu\rho} + (p^\rho - k^\rho)\eta^{\nu\mu} + (q^\mu - p^\mu)\eta^{\rho\nu}\right), \quad (B2)$$
where A a µ is associated with the momentum k, A b ν is associated with p, and A c ρ is associated with q.
The three-point vertex for two biadjoint scalars and one adjoint vector field can be used to efficiently calculate a piece of the radiation, which comes from the kinetic term of the biadjoint scalar. Focusing on the terms in the Lagrangian at $O(\Phi^2 A)$,
1 2 (D µ Φã) a (D µ Φb) a δãb = gf abc δãc(∂ µ Φã a )A µb Φc c + . . . .(B3)
Taking the appropriate functional derivatives and properly symmetrizing gives the three-point vertex for two scalars and one vector,
$$\Gamma^{\tilde a a,\nu b,\tilde c c}(k, p, q) = f^{abc}\,\delta^{\tilde a\tilde c}\left(k^\nu - q^\nu\right). \quad (B4)$$
The three-point vertices above can be used to find diagrams (1c) and (1f), giving
$$(1c)^{\mu a}(k) = \frac{1}{2}\int_{l_\alpha,l_\beta} A^b_\nu(l_\alpha)\big|_{O(g^1)}\; i\Gamma^{\mu a,\nu b,\rho c}(-k, l_\alpha, l_\beta)\; A^c_\rho(l_\beta)\big|_{O(g^1)}\,(2\pi)^d\delta^d(k - l_\alpha - l_\beta),$$
$$(1f)^{\mu a}(k) = \frac{1}{2}\int_{l_\alpha,l_\beta} \Phi^{\tilde b b}(l_\alpha)\big|_{O(y^1)}\; i\Gamma^{\tilde b b,\mu a,\tilde c c}(l_\alpha, -k, l_\beta)\; \Phi^{\tilde c c}(l_\beta)\big|_{O(y^1)}\,(2\pi)^d\delta^d(k - l_\alpha - l_\beta), \quad (B5)$$
where a symmetry factor of 1/2 has been added.
The solutions needed for these diagrams were found in Eq. (2.16), giving
$$A^{\mu a}(l_\alpha)\big|_{O(g^1)} = -g\sum_{\alpha=1}^{N}(2\pi)\delta(l_\alpha\cdot v_\alpha)\,\frac{e^{il_\alpha\cdot b_\alpha}}{l^2_\alpha}\, v^\mu_\alpha c^a_\alpha, \qquad \Phi^{a\tilde a}(l_\alpha)\big|_{O(y^1)} = -y\sum_{\alpha=1}^{N}(2\pi)\delta(l_\alpha\cdot v_\alpha)\,\frac{e^{il_\alpha\cdot b_\alpha}}{l^2_\alpha}\, c^a_\alpha c^{\tilde a}_\alpha. \quad (B6)$$
Plugging in these solutions gives
$$(1c)^{\mu a}(k) = \frac{g^2}{2}\sum_{\alpha\neq\beta} i f^{abc}\, c^b_\alpha c^c_\beta \int_{l_\alpha,l_\beta}\mu_{\alpha,\beta}(k)\left[-2\,k\cdot v_\alpha\, v^\mu_\beta + 2\,k\cdot v_\beta\, v^\mu_\alpha + v_\alpha\cdot v_\beta\,(l_\beta - l_\alpha)^\mu\right],$$
$$(1f)^{\mu a}(k) = \frac{y^2}{2}\sum_{\beta\neq\alpha} i f^{abc}\, c^b_\alpha c^c_\beta\, c^{\tilde a}_\alpha c^{\tilde a}_\beta \int_{l_\alpha,l_\beta}\mu_{\alpha,\beta}(k)\,(l_\alpha - l_\beta)^\mu. \quad (B7)$$
Due to the antisymmetry of f abc c b α c c β , switching α ↔ β for a term multiplied by this factor introduces a minus sign, allowing further simplification,
$$(1c)^{\mu a}(k) = g^2\sum_{\alpha\neq\beta} i f^{abc}\, c^b_\alpha c^c_\beta \int_{l_\alpha,l_\beta}\mu_{\alpha,\beta}(k)\left[2\,k\cdot v_\beta\, v^\mu_\alpha - (v_\alpha\cdot v_\beta)\, l^\mu_\alpha\right], \qquad (1f)^{\mu a}(k) = y^2\sum_{\beta\neq\alpha} i f^{abc}\, c^b_\alpha c^c_\beta\, c^{\tilde a}_\alpha c^{\tilde a}_\beta \int_{l_\alpha,l_\beta}\mu_{\alpha,\beta}(k)\, l^\mu_\alpha. \quad (B8)$$
Note how this result agrees with the algebraic method found in Eq. (2.25).
General Relativity and Einstein-Yang-Mills Theory
Next, the three-point graviton vertex will be used to stitch together lower-order gravitational field solutions to generate a piece of the gravitational radiation field. The three-point graviton vertex from DeWitt [59], as utilized by Sannan [60], is
V µα,νβ,σγ (k 1 , k 2 , k 3 ) = sym − 1 2 P 3 (k 1 · k 2 η µα η νβ η σγ ) − 1 2 P 6 (k 1ν k 1β η µα η σγ ) (B9) + 1 2 P 3 (k 1 · k 2 η µν η αβ η σγ ) + P 6 (k 1 · k 2 η µα η νσ η βγ ) + 2P 3 (k 1ν k 1γ η µα η βσ )
− P 3 (k 1β k 2µ η αν η σγ ) + P 3 (k 1σ k 2γ η µν η αβ ) + P 6 (k 1σ k 1γ η µν η αβ )
+ 2P 6 (k 1ν k 2γ η βµ η ασ ) + 2P 3 (k 1ν k 2µ η βσ η γα ) − 2P 3 (k 1 · k 2 η αν η βσ η γµ ) ,
where $P_3$ and $P_6$ refer to permutations of $k_1$, $k_2$, and $k_3$ resulting in 3 or 6 terms, respectively, and sym applies a symmetrization across $\mu\alpha$, $\nu\beta$, and $\sigma\gamma$. For example,
$$P_3\left(k_1\cdot k_2\,\eta^{\mu\nu}\eta^{\alpha\beta}\eta^{\sigma\gamma}\right) = k_1\cdot k_2\,\eta^{\mu\nu}\eta^{\alpha\beta}\eta^{\sigma\gamma} + k_2\cdot k_3\,\eta^{\nu\sigma}\eta^{\beta\gamma}\eta^{\mu\alpha} + k_3\cdot k_1\,\eta^{\mu\sigma}\eta^{\alpha\gamma}\eta^{\nu\beta},$$
$$\text{sym}\left[\eta^{\mu\nu}\eta^{\alpha\beta}\right] = \frac{1}{4}\left(\eta^{\mu\nu}\eta^{\alpha\beta} + \eta^{\mu\beta}\eta^{\nu\alpha} + \eta^{\nu\alpha}\eta^{\mu\beta} + \eta^{\alpha\beta}\eta^{\mu\nu}\right).$$
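The $P_3$/$P_6$ bookkeeping can be sketched directly: $P_3$ sums a term pattern over the three cyclic relabelings of $(1,2,3)$, and $P_6$ over all six permutations. The string pattern below is a placeholder for the actual tensor structures:

```python
from itertools import permutations

# P3 sums a pattern over cyclic relabelings of (1, 2, 3); P6 over all six.
def P3(pattern):
    return [pattern(i, j, l) for i, j, l in [(1, 2, 3), (2, 3, 1), (3, 1, 2)]]

def P6(pattern):
    return [pattern(i, j, l) for i, j, l in permutations((1, 2, 3))]

# placeholder standing in for k_i . k_j (metric factors suppressed)
dot = lambda i, j, l: f"k{i}.k{j}"
print(P3(dot))  # ['k1.k2', 'k2.k3', 'k3.k1']
```

This reproduces the three cyclic images $k_1\cdot k_2$, $k_2\cdot k_3$, $k_3\cdot k_1$ of the example above.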
Expanding P 3 and P 6 gives
V µα,νβ,σγ (k 1 , k 2 , k 3 ) = sym − 1 2 (k 1 · k 2 + k 2 · k 3 + k 3 · k 1 ) η µα η νβ η σγ − 1 2 k ν 1 k β 1 η µα η σγ + k σ 1 k γ 1 η µα η νβ k µ 2 k α 2 η νβ η σγ + k σ 2 k γ 2 η µα η νβ + k µ 3 k α 3 η νβ η γσ + k ν 3 k β 3 η µα η γσ + 1 2 k 1 · k 2 η µν η αβ η σγ + k 2 · k 3 η νσ η βγ η µα + k 3 · k 1 η µσ η αγ η νβ + k 1 · k 2 η µα η νσ η βγ + k 1 · k 2 η νβ η µσ η αγ + k 2 · k 3 η νβ η µσ η αγ + k 2 · k 3 η σγ η µν η αβ + k 3 · k 1 η σγ η µν η αβ + k 3 · k 1 η µα η νσ η βγ + 2 k ν 1 k γ 1 η µα η βσ + k σ 2 k µ 2 η νβ η γµ + k µ 3 k β 3 η σγ η αν − k β 1 k µ 2 η αν η σγ + k γ 2 k ν 3 η βσ η µα + k β 3 k µ 1 η γµ η νβ
+ k σ 1 k γ 2 η µν η αβ + k µ 2 k α 3 η νσ η βγ + k ν 3 k β 1 η σµ η γα + k σ 1 k γ 1 η µν η αβ + k ν 1 k β 1 η µσ η αγ + k µ 2 k α 2 η νσ η βγ + k σ 2 k γ 2 η νµ η γα + k ν 3 k β 3 η σµ η γα + k µ 3 k α 3 η σν η αβ + 2 k ν 1 k γ 2 η βµ η ασ + k µ 1 k γ 2 η αν η βσ + k σ 2 k α 3 η γν η βµ + k ν 2 k α 3 η βσ η γµ + k µ 3 k β 1 η ασ η γν + k σ 3 k β 1 η γµ η ασ + 2 k ν 1 k µ 2 η βσ η γα + k σ 2 k ν 3 η γµ η αβ + k µ 3 k σ 1 η αν η βγ (B11) − 2 k 1 · k 2 η αν η βσ η γµ + k 2 · k 3 η βσ η γµ η αν + k 3 · k 1 η γµ η αν η γσ .
To find the radiative field contribution from this three-point vertex, two instances of the lowest-order field solution will be stitched together with this vertex to find a higher-order contribution. The lowest-order field in momentum space is given by
$$h^{\rho\sigma}(l_\alpha) = \frac{\kappa}{2}\sum_{\alpha}^{N} m_\alpha\,\frac{e^{il_\alpha\cdot b_\alpha}}{l^2_\alpha}\,(2\pi)\delta(l_\alpha\cdot v_\alpha)\left(v^\rho_\alpha v^\sigma_\alpha - \frac{\eta^{\rho\sigma}}{d-2}\right). \quad (B12)$$
The three-point vertex allows for a purely gravitational source to be found, which corresponds to a component of the pseudotensor $t^{\mu\nu}$. This component of the source that generates radiation is given by
$$t^{\sigma\lambda}(k) = \frac{1}{2}\int_{l_\alpha,l_\beta} V^{\mu\rho,\nu\tau,\sigma\lambda}(-l_\alpha, -l_\beta, k)\, h_{\mu\rho}(l_\alpha)\, h_{\nu\tau}(l_\beta)\,\delta^d(k - l_\alpha - l_\beta). \quad (B13)$$
Since the lowest-order solutions for $l_\alpha$ and $l_\beta$ are symmetric, the symmetrization is only needed for the indices $\sigma$ and $\lambda$ in $V^{\mu\rho,\nu\tau,\sigma\lambda}$. Focusing on the integrand and breaking the two lowest-order solutions down into four terms gives
h µρ (l α )h ντ (l β ) ∝ v µ α v ρ α v ν β v τ β − 1 d − 2 v µ α v ρ α η ντ + v ν β v τ β η µρ + 1 (d − 2) 2 (η µρ η ντ ) .
(B14)
Using Mathematica to perform the index contractions gives
$$(2c)^{\sigma\lambda}(k) = \left(\frac{\kappa}{2}\right)^2\sum_{\alpha=1}^{N}\sum_{\beta\neq\alpha} m_\alpha m_\beta \int_{l_\alpha,l_\beta}\mu_{\alpha,\beta}(k)\Bigg[2\, v^\sigma_\alpha v^\lambda_\alpha\left((k\cdot v_\beta)^2 - \frac{l^2_\alpha}{d-2}\right) \quad (B15)$$
$$+ \left(v^\sigma_\alpha v^\lambda_\beta + v^\lambda_\alpha v^\sigma_\beta\right)\left(l^2_\alpha\, v_\alpha\cdot v_\beta - k\cdot v_\alpha\, k\cdot v_\beta\right) - 2\left(v^\sigma_\alpha l^\lambda_\alpha + v^\lambda_\alpha l^\sigma_\alpha\right)\left(v_\alpha\cdot v_\beta\, k\cdot v_\beta\right) + l^\sigma_\alpha l^\lambda_\alpha\left((v_\alpha\cdot v_\beta)^2 - \frac{1}{d-2}\right)$$
$$+ \eta^{\sigma\lambda}\left(k\cdot v_\alpha\, k\cdot v_\beta\, v_\alpha\cdot v_\beta - \frac{l^2_\alpha}{2}\left((v_\alpha\cdot v_\beta)^2 - \frac{1}{d-2}\right)\right)\Bigg],$$
where this result gives the integrand of the diagram (2c), as shown in Eq. (3.26).
For calculating the additional gravitational radiation diagrams due to Yang-Mills contributions, the Feynman rules for scattering outlined by Rodigast's thesis give the necessary three-point vertex [61,62]. The Feynman rule for the three-point vertex with two gluons and one graviton can be found from the interaction term in the Lagrangian,
$$L = \sqrt{-g}\, g^{\mu\rho} g^{\nu\sigma}\,\partial_\mu A^a_\nu\,\partial_{[\rho} A^a_{\sigma]} + \dots \approx \kappa\left(\eta^{\mu\tau}\eta^{\rho\lambda}\eta^{\nu\sigma} + \eta^{\mu\rho}\eta^{\nu\tau}\eta^{\sigma\lambda} - \frac{1}{2}\eta^{\tau\lambda}\eta^{\mu\rho}\eta^{\nu\sigma}\right) h_{\tau\lambda}\,\partial_\mu A^a_\nu\,\partial_{[\rho} A^a_{\sigma]}. \quad (B16)$$
Taking the functional derivatives and properly symmetrizing over all indices and momenta gives
$$\Gamma^{\tau\lambda,\mu\tilde a,\nu\tilde b}(k, p, q) = -2i\,\delta^{\tilde a\tilde b}\left[p^{(\tau} q^{\lambda)}\eta^{\mu\nu} + \frac{1}{2}\, p\cdot q\left(\eta^{\tau\mu}\eta^{\lambda\nu} + \eta^{\tau\nu}\eta^{\lambda\mu} - \eta^{\tau\lambda}\eta^{\mu\nu}\right) + \frac{1}{2}\,\eta^{\tau\lambda}\, p^\nu q^\mu - q^\mu\,\eta^{\nu(\lambda} p^{\tau)} - p^\nu\,\eta^{\mu(\tau} q^{\lambda)}\right], \quad (B17)$$
where a factor of $2/\kappa$ was added to have the same conventions as DeWitt's three-point vertex. This allows us to use the same formula for calculating the contribution to the radiation source. By reusing the lowest-order result for $A^{\mu a}(l)|_{O(g^1)}$ and switching $a \to \tilde a$, the solution to diagram (2i) is
$$(2i)^{\mu\nu}(k) = \frac{1}{2}\int_{l_\alpha,l_\beta} i\Gamma^{\mu\nu,\rho\tilde a,\sigma\tilde b}(-k, l_\alpha, l_\beta)\, A^{\tilde a}_\rho(l_\alpha)\big|_{O(\tilde g^1)}\, A^{\tilde b}_\sigma(l_\beta)\big|_{O(\tilde g^1)}\,\delta^d(k - l_\alpha - l_\beta). \quad (B18)$$
Plugging in the lowest-order solution gives
$$(2i)^{\mu\nu}(k) = \tilde g^2\sum_{\alpha=1}^{N}\sum_{\beta\neq\alpha}\int_{l_\alpha,l_\beta}\mu_{\alpha,\beta}(k)\, c^{\tilde a}_\alpha c^{\tilde a}_\beta\left[\frac{1}{2}\left(v^\mu_\alpha v^\nu_\beta + v^\nu_\alpha v^\mu_\beta - \eta^{\mu\nu}\, v_\alpha\cdot v_\beta\right) l_\alpha\cdot l_\beta + v_\alpha\cdot v_\beta\, l^{(\mu}_\alpha l^{\nu)}_\beta + \frac{1}{2}\eta^{\mu\nu}\, k\cdot v_\alpha\, k\cdot v_\beta - k\cdot v_\beta\, v^{(\mu}_\alpha l^{\nu)}_\beta - k\cdot v_\alpha\, v^{(\mu}_\beta l^{\nu)}_\alpha\right], \quad (B19)$$
FIG. 1: These diagrams represent all of the contributions to $\hat J^{\mu a}$ in Yang-Mills-biadjoint-scalar theory. Straight lines represent matter fields, curly lines represent Yang-Mills fields, and doubly-dashed lines represent biadjoint scalar fields.
and the diagrams with internal gauge bosons represent
$$(2g)^{\mu\nu} + (2h)^{\mu\nu} = \sqrt{|g|}\,T^{\mu\nu}\big|_{\Delta A,\,O(\tilde g^2)}, \qquad (2i)^{\mu\nu} = \hat t^{\mu\nu}\big|_{\Delta A,\,O(\tilde g^2)} \equiv t^{\mu\nu}\big|_{\Delta A,\,O(\tilde g^2)}. \quad (3.18)$$
Since $1 - \sqrt{|g|}$ is purely gravitational, $\hat t^{\mu\nu}|_{\Delta\phi} \equiv t^{\mu\nu}|_{\Delta\phi}$ and $\hat t^{\mu\nu}|_{\Delta A} \equiv t^{\mu\nu}|_{\Delta A}$. Similar to Eq. (2.26), the diagrams (2a), (2b), (2d), (2e), (2g), and (2h) sum to give $\sqrt{|g|}\,T^{\mu\nu}$; (2c), (2f), and (2i) give $\hat t^{\mu\nu}$; and all nine sum to give the locally conserved pseudotensor $\hat T^{\mu\nu}$, where all expressions are only kept to second order in $\kappa$ or $\tilde g$. While the diagrammatic representation may be useful for organizing higher-order computations, it is simple enough to calculate $\sqrt{|g|}\,T^{\mu\nu}$ as a single algebraic expression for the purpose of confirming the validity of the radiative double copy to leading order.
FIG. 2: These diagrams represent all of the contributions to $\hat T^{\mu\nu}$ in Einstein-Yang-Mills theory. The wavy lines represent gravitational fields, the dashed lines represent dilaton fields, and the curly lines represent Yang-Mills fields.
Distributing these factors and reorganizing all of the terms with the same tensor index structure gives Eq. (3.32). Next, the relation $k^2 = l^2_\alpha + 2\, l_\alpha\cdot l_\beta + l^2_\beta = 0$ is used to simplify further. The identity $a^\mu l^\nu_\beta = a^\mu k^\nu - a^\mu l^\nu_\alpha$ and the gauge condition of the gravitational field allow for the gauge-invariant shift $a^\mu l^\nu_\beta \to \frac{1}{2}\, a\cdot k\,\eta^{\mu\nu} - a^\mu l^\nu_\alpha$, since dotting this expression with the polarization tensor would give the same radiation amplitude.
H. Kawai, D. C. Lewellen, and S. H. H. Tye, "A Relation Between Tree Amplitudes of Closed and Open Strings," Nucl. Phys. B269 (1986) 1-23.
Z. Bern, J. J. M. Carrasco, and H. Johansson, "New Relations for Gauge-Theory Amplitudes," Phys. Rev. D78 (2008) 085011, arXiv:0805.3993 [hep-ph].
Z. Bern, T. Dennen, Y.-t. Huang, and M. Kiermaier, "Gravity as the Square of Gauge Theory," Phys. Rev. D82 (2010) 065003, arXiv:1004.0693 [hep-th].
Z. Bern, J. J. M. Carrasco, and H. Johansson, "Perturbative Quantum Gravity as a Double Copy of Gauge Theory," Phys. Rev. Lett. 105 (2010) 061602, arXiv:1004.0476 [hep-th].
N. Arkani-Hamed, F. Cachazo, C. Cheung, and J. Kaplan, "A Duality For The S Matrix," JHEP 03 (2010) 020, arXiv:0907.5418 [hep-th].
A Proof of the Explicit Minimal-basis Expansion of Tree Amplitudes in Gauge Field Theory. Y.-X Chen, Y.-J Du, B Feng, 10.1007/JHEP02(2011)112arXiv:1101.0009JHEP. 02112hep-thY.-X. Chen, Y.-J. Du, and B. Feng, "A Proof of the Explicit Minimal-basis Expansion of Tree Amplitudes in Gauge Field Theory," JHEP 02 (2011) 112, arXiv:1101.0009 [hep-th].
Proof of the fundamental BCJ relations for QCD amplitudes. L Cruz, A Kniss, S Weinzierl, 10.1007/JHEP09(2015)197arXiv:1508.01432JHEP. 09197hep-thL. de la Cruz, A. Kniss, and S. Weinzierl, "Proof of the fundamental BCJ relations for QCD amplitudes," JHEP 09 (2015) 197, arXiv:1508.01432 [hep-th].
Generic multiloop methods and application to N=4 super-Yang-Mills. J J M Carrasco, H Johansson, 10.1088/1751-8113/44/45/454004arXiv:1103.3298J. Phys. 44454004hep-thJ. J. M. Carrasco and H. Johansson, "Generic multiloop methods and application to N=4 super-Yang-Mills," J. Phys. A44 (2011) 454004, arXiv:1103.3298 [hep-th].
Five-Point Amplitudes in N=4 Super-Yang-Mills Theory and N=8 Supergravity. J J Carrasco, H Johansson, 10.1103/PhysRevD.85.025006arXiv:1106.4711Phys. Rev. 8525006hep-thJ. J. Carrasco and H. Johansson, "Five-Point Amplitudes in N=4 Super-Yang-Mills Theory and N=8 Supergravity," Phys. Rev. D85 (2012) 025006, arXiv:1106.4711 [hep-th].
Simplifying Multiloop Integrands and Ultraviolet Divergences of Gauge Theory and Gravity Amplitudes. Z Bern, J J M Carrasco, L J Dixon, H Johansson, R Roiban, 10.1103/PhysRevD.85.105014arXiv:1201.5366Phys. Rev. 85105014hep-thZ. Bern, J. J. M. Carrasco, L. J. Dixon, H. Johansson, and R. Roiban, "Simplifying Multiloop Integrands and Ultraviolet Divergences of Gauge Theory and Gravity Amplitudes," Phys. Rev. D85 (2012) 105014, arXiv:1201.5366 [hep-th].
Z. Bern, S. Davies, T. Dennen, Y.-t. Huang, and J. Nohle, "Color-Kinematics Duality for Pure Yang-Mills and Gravity at One and Two Loops," Phys. Rev. D92 no. 4, (2015) 045041, arXiv:1303.6605 [hep-th].
Minimal Basis for Gauge Theory Amplitudes. N E J Bjerrum-Bohr, P H Damgaard, P Vanhove, 10.1103/PhysRevLett.103.161602arXiv:0907.1425Phys. Rev. Lett. 103161602hep-thN. E. J. Bjerrum-Bohr, P. H. Damgaard, and P. Vanhove, "Minimal Basis for Gauge Theory Amplitudes," Phys. Rev. Lett. 103 (2009) 161602, arXiv:0907.1425 [hep-th].
Open & Closed vs. Pure Open String Disk Amplitudes. S Stieberger, arXiv:0907.2211hep-thS. Stieberger, "Open & Closed vs. Pure Open String Disk Amplitudes," arXiv:0907.2211 [hep-th].
Monodromy and Jacobi-like Relations for Color-Ordered Amplitudes. N E J Bjerrum-Bohr, P H Damgaard, T Sondergaard, P Vanhove, 10.1007/JHEP06(2010)003arXiv:1003.2403JHEP. 063hep-thN. E. J. Bjerrum-Bohr, P. H. Damgaard, T. Sondergaard, and P. Vanhove, "Monodromy and Jacobi-like Relations for Color-Ordered Amplitudes," JHEP 06 (2010) 003, arXiv:1003.2403 [hep-th].
Dual Identities inside the Gluon and the Graviton Scattering Amplitudes. S H Henry Tye, Y Zhang, 10.1007/JHEP06(2010)071,10.1007/JHEP04(2011)114arXiv:1003.1732JHEP. 0671hep-th. Erratum: JHEP04,114(2011)S. H. Henry Tye and Y. Zhang, "Dual Identities inside the Gluon and the Graviton Scattering Amplitudes," JHEP 06 (2010) 071, arXiv:1003.1732 [hep-th]. [Erratum: JHEP04,114(2011)].
New Relations for Gauge-Theory and Gravity Amplitudes at Loop Level. S He, O Schlotterer, 10.1103/PhysRevLett.118.161601arXiv:1612.00417Phys. Rev. Lett. 11816161601hep-thS. He and O. Schlotterer, "New Relations for Gauge-Theory and Gravity Amplitudes at Loop Level," Phys. Rev. Lett. 118 no. 16, (2017) 161601, arXiv:1612.00417 [hep-th].
S He, O Schlotterer, Y Zhang, arXiv:1706.00640New BCJ representations for one-loop amplitudes in gauge theories and gravity. hep-thS. He, O. Schlotterer, and Y. Zhang, "New BCJ representations for one-loop amplitudes in gauge theories and gravity," arXiv:1706.00640 [hep-th].
On tree amplitudes of supersymmetric Einstein-Yang-Mills theory. T Adamo, E Casali, K A Roehrig, D Skinner, 10.1007/JHEP12(2015)177arXiv:1507.02207JHEP. 12177hep-thT. Adamo, E. Casali, K. A. Roehrig, and D. Skinner, "On tree amplitudes of supersymmetric Einstein-Yang-Mills theory," JHEP 12 (2015) 177, arXiv:1507.02207 [hep-th].
Relations for EinsteinYangMills amplitudes from the CHY representation. L Cruz, A Kniss, S Weinzierl, 10.1016/j.physletb.2017.01.036arXiv:1607.06036Phys. Lett. 767hep-thL. de la Cruz, A. Kniss, and S. Weinzierl, "Relations for EinsteinYangMills amplitudes from the CHY representation," Phys. Lett. B767 (2017) 86-90, arXiv:1607.06036 [hep-th].
Absence of Right-Handed Neutrino in Weak Interactions: Explanation via Nonlinear Electroweak Model
21 Jun 2010 (Dated: September 28, 2010)
Bill Dalton
Department of Physics, Astronomy and Engineering Science
St Cloud State University
The nonlinear SU(2) electroweak model is used to explain the absence of the right-handed neutrino in weak interactions. Two covariant eigenvalue constraints which affect the transformation lead to two classes of right-handed leptons, and make possible invariant mass terms without the Higgs doublet. A covariant picture of neutrinos with mass is presented. A new invariant form for the boson potentials is described in which the boson mass terms arise via the adjoint field. This model also indicates a different region of matter involving coupled leptons that are "blind" to the massless electromagnetic field but "see" four massive potentials that are themselves blind to the electromagnetic field. We argue that these more difficult to detect "dark" fields provide a possible contribution to the missing mass.
PACS numbers: 11.10.Lm, 11.30.-j, 13.15.+g, 13.66.-a

Based on work published by the author several years ago [1], a detailed study of one particular type of nonlinear realization of Lie groups was recently presented in [2]. In these realizations the presence of one field induces a nonlinear transformation on a second field. Earlier references to similar nonlinear realizations, including coset realizations, can be found in [1], [3], [4] and [5]. One new feature of the realizations in [2] is the introduction of a covariant eigenvalue constraint on the transformations. A second feature is that the linear and nonlinear components lead to separately conserved currents for each group parameter. The three nonlinear currents reduce to a single conserved current. The latter is the electromagnetic current at one point on the adjoint sphere. These features are characteristic of the particular type of extended transformations studied in [2].
In [2] two new invariant forms involving the boson potentials were discussed for groups such as SU(n) with structure constants antisymmetric in all three indices. For the nonlinear SU(2) application, it was shown that the Lagrangian for the standard gauge SU(2) × U(1) electroweak model of [6] and [7] is invariant under these transformations. The covariant eigenvalue conditions on the right-handed lepton component lead to two classes of right-handed leptons. These have two consequences. First, they reduce the transformation on the right-handed lepton field to a diagonal form, requiring the covariant potential for the right-handed leptons to become diagonal in one case and zero in the other. From this it follows that the coupling constants $g_V$ and $g_A$ are the same as found in the standard gauge model. The second consequence is that the eigenvector equations require that one right-handed neutrino vanish at one place on the adjoint space $h$ unit sphere. At this point (called the north pole in [2]) the mass ratio $M_Z/M_W$ for the intermediate bosons takes on the usual value and the mass of the $A_\mu$ field is zero. This is the only place on the $h$ sphere where this experimentally observed combination happens. At other places on the sphere the ratio $M_Z/M_W$ changes, the $A_\mu$ field becomes massive, and this right-handed neutrino field is not zero. The second right-handed neutrino is not required to vanish at this pole point, but the constraint explains why it does not participate in weak interactions. Leptons and potentials at points other than the north pole do not "see" the massless electromagnetic field. Instead, they see a heavy $A_\mu$ vector field.
One strong experimental observation is the absence of the right-handed neutrino in weak interactions. A second is neutrino oscillations, which can be explained if the neutrinos have mass. The standard gauge SU(2) × U(1) electroweak model accommodates, but does not explain, the absence of the right-handed neutrino in weak interactions. In addition, the standard model arrives at the appropriate $g_V$ and $g_A$ coupling constants by using different hypercharge constants for the right- and left-handed lepton components. A model that can explain both the experimentally observed absence of the right-handed neutrino in weak interactions and the experimentally supported $g_V$ and $g_A$ coupling constants must be taken seriously. This is especially true if the same theory explains how a second right-handed neutrino can exist that does not participate in the weak interactions. This makes possible a consistent covariant picture of leptons needed to describe weak interactions and neutrino oscillations. In physics, there is an important difference between accommodating and explaining experimental observation. The purpose of this paper is to describe and highlight the explanations provided by the nonlinear model of [2] for the electroweak interactions. We give detailed expressions for both the conserved linear and nonlinear current components at the north pole point.
Following the nonlinear realizations of SU(2) in [2], transformations on the stacked spinor state $L = (\nu_L, e_L)^T$ have generator action given by
$$[T_a, L] = \frac{i}{2}\sigma_a L + i\xi_a \frac{1}{2}(-I + H)L, \qquad H = \begin{pmatrix} h_3 U & \sqrt{2}\,h_- U \\ \sqrt{2}\,h_+ U & -h_3 U \end{pmatrix}, \qquad h_\pm = \frac{1}{\sqrt{2}}(h_1 \pm i h_2) \tag{1}$$
Here, $U$ is the unit matrix in four dimensions, $YL = -L$, and the three components of $h$ transform via the adjoint representation with $[T_a, h_k] = \epsilon_{akl} h_l$ and $h_k h_k = 1$. Transformation conditions for the $\xi_a$ field components are given in [2], but because these components will not enter the Lagrangian, they will not be discussed here. The three components of the $h$ field will enter the Lagrangian.
In contrast to the gauge model, the group parameters for nonlinear realizations are constant with the 'local' nature of the transformation arising via the ξ and h fields. The covariant derivative acting on L has the same general form as in the common electroweak gauge model.
$$D_\mu L = \partial_\mu L - \frac{i}{2} P L, \qquad P = \begin{pmatrix} (gW^3_\mu - g'\beta_\mu)U & g(W^1_\mu - iW^2_\mu)U \\ g(W^1_\mu + iW^2_\mu)U & (-gW^3_\mu - g'\beta_\mu)U \end{pmatrix} = \begin{pmatrix} N Z_\mu U & g\sqrt{2}\,W^-_\mu U \\ g\sqrt{2}\,W^+_\mu U & \left[-N\cos 2\theta_w\, Z_\mu - 2qA_\mu\right]U \end{pmatrix} \tag{2}$$
Following [2], we use the standard potential relations
$$\begin{pmatrix} W^3_\mu \\ \beta_\mu \end{pmatrix} = \begin{pmatrix} \cos\theta_w & \sin\theta_w \\ -\sin\theta_w & \cos\theta_w \end{pmatrix} \begin{pmatrix} Z_\mu \\ A_\mu \end{pmatrix}, \tag{3}$$
with the parameter notation $\cos\theta_w = g/N$, $\sin\theta_w = g'/N$, $N = \sqrt{(g')^2 + g^2}$, and the charge $q = g'g/N$.
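As a quick consistency sketch (not from the paper), the parameter relations above can be checked symbolically: the rotation (3) is orthogonal because $\cos^2\theta_w + \sin^2\theta_w = 1$, and two identities used later in the text follow directly, namely $g'\cos\theta_w = q$ and $g\cos\theta_w + g'\sin\theta_w = N$.

```python
# Sketch: sympy check of cos(theta_w) = g/N, sin(theta_w) = g'/N,
# N = sqrt(g'^2 + g^2), q = g'g/N, as defined below eq. (3).
import sympy as sp

g, gp = sp.symbols("g g_prime", positive=True)
N = sp.sqrt(g**2 + gp**2)
cos_w, sin_w = g / N, gp / N
q = gp * g / N

# Orthogonality of the rotation (3): cos^2 + sin^2 = 1
unit = sp.simplify(cos_w**2 + sin_w**2)

# Identities used later in the text (e.g. below eq. (12)):
id1 = sp.simplify(gp * cos_w - q)          # g' cos(theta_w) = q
id2 = sp.simplify(g * cos_w + gp * sin_w - N)  # g cos + g' sin = N
print(unit, id1, id2)  # -> 1 0 0
```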
The transformations on the potentials are given by
$$[T_a, B_\mu] = \frac{1}{g'}\partial_\mu \xi_a, \qquad [T_a, W^l_\mu] = \epsilon_{alk}W^k_\mu - \xi_a h_i \epsilon_{ikl} W^k_\mu + \frac{1}{g}\partial_\mu(\xi_a h_l). \tag{4}$$
Notice that the action on the $W^l_\mu$ potentials has a linear and a nonlinear component, where the latter involves the $\xi_a$ and $h_k$ fields. With $\Gamma^\mu = \gamma^\mu I$, the Lagrangian term for the left-handed lepton has the standard form
$$K_L = \frac{1}{2}\left[i\bar{L}\Gamma^\mu D_\mu L + (i\bar{L}\Gamma^\mu D_\mu L)^*\right] \tag{5}$$

With $V_k = V h_k$, where $V$ is a group invariant but not necessarily a space-time constant, $C^l_\mu = V_i C_{ilk}\partial_\mu V_k$, and $C_{ijk} = \epsilon_{ijk}$ for SU(2), we have from [2] the following invariants:

$$K_a = \frac{g^2}{2}\left[V^2 W^l_\mu W^\mu_l - W^l_\mu V_l\, W^\mu_k V_k\right] - g W^l_\mu C^\mu_l \tag{6}$$

$$F_\mu = g W^l_\mu V_l - \beta_\mu V g', \qquad K_b = \frac{1}{2}F_\mu F^\mu \tag{7}$$
Notice that the individual vectors $F_\mu$ are invariant under SU(2). The quadratic forms $K_a$ and $K_b$ are invariant under both SU(2) and the Lorentz group.
In the limit $V_1 \to 0$, $V_2 \to 0$, the expression involving the $A_\mu A^\mu$ factor vanishes and the invariant $K_a + K_b$ reduces to

$$K_a + K_b \to \frac{V^2}{2}\left(2g^2 W^-_\mu W^{\mu +} + N^2 Z_\mu Z^\mu\right). \tag{8}$$
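The reduction to (8) can be sketched symbolically. This is an illustrative check under stated assumptions, not a full derivation: we take $V = (0, 0, V)$ (north pole), set $C^l_\mu = 0$ there, and treat the Lorentz contractions schematically with scalar stand-ins for the field components. The key point is that the $A_\mu$ coefficient in $F_\mu$ cancels exactly, leaving $F_\mu = N V Z_\mu$.

```python
# Sketch: verify that K_a + K_b from eqs. (6)-(7) reduces to eq. (8)
# when V = (0, 0, V), using the rotation (3).
import sympy as sp

g, gp, V = sp.symbols("g g_prime V", positive=True)
Z, A, W1, W2 = sp.symbols("Z A W1 W2")  # stand-ins for Z_mu, A_mu, W^1_mu, W^2_mu
N = sp.sqrt(g**2 + gp**2)
cw, sw = g / N, gp / N

# Rotation (3): W^3_mu and beta_mu in terms of Z_mu and A_mu
W3 = cw * Z + sw * A
beta = -sw * Z + cw * A

# F_mu = g W^l_mu V_l - beta_mu V g' with V_l = (0, 0, V)
F = sp.expand(sp.simplify(g * W3 * V - beta * V * gp))
coeff_A = sp.simplify(F.coeff(A))  # the A_mu piece cancels: F_mu = N V Z_mu

# K_a (with C^l_mu = 0 at the pole) plus K_b, versus the target (8),
# where W^- W^+ = (W1^2 + W2^2)/2
Ka = (g**2 / 2) * (V**2 * (W1**2 + W2**2 + W3**2) - (W3 * V)**2)
Kb = sp.Rational(1, 2) * F**2
target = (V**2 / 2) * (2 * g**2 * (W1**2 + W2**2) / 2 + N**2 * Z**2)
diff = sp.simplify(Ka + Kb - target)
print(coeff_A, diff)  # -> 0 0
```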
The space-time dependence of $V = V_3$ at this north pole is not specified, but in the constant limit $V \to \nu_0/2$, the invariant (8) at the north pole has the same form as the expression involving the $W^\pm_\mu$ boson and $Z_\mu$ mass terms obtained in the standard model [2]. In this alternate model the source of the mass for the bosons is shifted from the doublet to the adjoint field. A potential $U(V)$ such as the hat potential may be added to "explain" the constant limit for $V_3$ in lowest order. The physics question is: "Are the masses of the intermediate bosons constant throughout the universe?" There is insufficient data to answer this question. However, with the possible association made in [2] of the non-north-pole region with dark matter, the space-time dependence of $V$ would be directly linked to the distribution of dark matter.
We emphasize that at this north pole point the $A_\mu$ field becomes massless, but only at this point. At other points on the sphere the $A_\mu$ field becomes massive. Fields in this non-north-pole domain are disjoint from fields at the north pole. They "see" a massive $A_\mu$ field. Starting from the north pole, a rotation to the non-pole domain would mean the presence of $V^\pm$ fields. The very presence of these gives mass to the $A_\mu$ field. The non-pole domain represents a vast source of massive leptons and bosons. It was pointed out in [2] that the fields in this domain could provide a significant contribution to the missing mass. With (4), the standard Yang-Mills [8] field expressions are covariant. These are used in [2] for both the standard and alternate Lagrangians to incorporate the kinetic terms for the boson fields. The invariant (6) contains a first-order kinetic term for the $V_k$ components. An invariant quadratic kinetic term for the $V_k$ components is
$$K_V = \frac{1}{2}(\partial_\mu V_k)\,\partial^\mu V_k. \tag{9}$$
To complete the construction of the invariant lepton Lagrangian, we consider an alternate formulation for which $YR = -R$, where $R = (\nu_R, e_R)^T$ is the right-handed lepton field. Initially the transformations on $R$ are expressed exactly like those for $L$.
$$[T_a, R] = \frac{i}{2}\sigma_a R + i\xi_a \frac{1}{2}\left(-I + H\right)R \tag{10}$$
Following [2], we impose the matrix eigenvalue constraint $HR = h\cdot\sigma\, R = \lambda R$. The eigenvalues are $\lambda_\pm = \pm h$, where $h = \sqrt{h_k h_k} = 1$. This matrix eigenvalue equation is covariant under the group for either of the eigenvalues. For the eigenvalue $\lambda = -1$ the transformation generators reduce to the form
$$[T_a, R_-] = \frac{i}{2}\sigma_a R_- - i\xi_a R_-, \tag{11}$$
with a diagonal local nonlinear part. Notice that we obtain one factor of $(-1)$ from the $YR_- = -R_-$ condition and a second $(-1)$ from the eigenvalue $\lambda = -1$, giving a net factor of $(-2)$. In the gauge picture (with the notation of [2]), this factor is obtained by imposing the condition $Y e_R = -2 e_R$. The covariant derivative becomes
$$D^-_\mu R_- = \partial_\mu R_- + i B_\mu g' R_-, \qquad g' B_\mu = -N\sin^2\theta_w\, Z_\mu + qA_\mu \tag{12}$$
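The relation for $g'B_\mu$ in (12) follows from the rotation (3). As a sketch, and assuming the $B_\mu$ here is the $\beta_\mu$ of eq. (3), a one-line sympy check confirms it:

```python
# Sketch: check g' B_mu = -N sin^2(theta_w) Z_mu + q A_mu from eq. (12),
# taking B_mu = beta_mu = -sin(theta_w) Z_mu + cos(theta_w) A_mu from (3).
import sympy as sp

g, gp = sp.symbols("g g_prime", positive=True)
Z, A = sp.symbols("Z A")
N = sp.sqrt(g**2 + gp**2)
cw, sw, q = g / N, gp / N, g * gp / N

beta = -sw * Z + cw * A                      # eq. (3)
lhs = sp.expand(gp * beta)                   # g' B_mu
rhs = sp.expand(-N * sw**2 * Z + q * A)      # right-hand side of (12)
diff = sp.simplify(lhs - rhs)
print(diff)  # -> 0
```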
Except that R − includes the right-handed neutrino component, this expression has the same form as the covariant derivative for the "singlet" component of the standard electroweak model. To confront the presence of the right-handed neutrino component, we look at the eigenvector equations for λ = −1.
$$(1 + h_3)\,\nu^-_R + \sqrt{2}\,h_-\, e^-_R = 0, \qquad \sqrt{2}\,h_+\,\nu^-_R + (1 - h_3)\,e^-_R = 0 \tag{13}$$
These two equations require that the right-handed neutrino $\nu^-_R$ vanish at the north pole $h_3 = 1$. This is the point where the $A_\mu$ field becomes massless and the intermediate boson mass ratio $M_Z/M_W$ takes on the observed value. This is one reasonable explanation of the observed absence of the right-handed neutrino in weak interactions. This is important, especially when combined with the fact that it also reduces the covariant derivative term for the right-handed lepton to the diagonal form needed to give the $g_V$ and $g_A$ relations that are consistent with observation. In this picture, most of our experiments involving charged particles take place at this north pole, since this is the only place on the sphere where we have the massless electromagnetic field. The constraint (13) does not mean that $\nu^-_R$ vanishes at other points on the $h$ sphere.
Motivated by observations of neutrino oscillations we consider the second eigenvalue constraint (λ = +1). For this case we have
$$[T_a, R_+] = \frac{i}{2}\sigma_a R_+, \qquad D_\mu R_+ = \partial_\mu R_+ \tag{14}$$
For λ = +1 the eigenvector constraint is
$$(-1 + h_3)\,\nu^+_R + \sqrt{2}\,h_-\, e^+_R = 0, \qquad \sqrt{2}\,h_+\,\nu^+_R - (1 + h_3)\,e^+_R = 0. \tag{15}$$
At the north pole, these equations require that $e^+_R \to 0$, but give no restriction on $\nu^+_R$. Notice that in the $\lambda = +1$ case the right-handed components have no covariant potential term. This means that $R_+$ plays no role in weak decay at the north pole. This is consistent with observation.
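The pattern in (13) and (15) can be illustrated numerically: at the north pole $h = (0,0,1)$, the $\lambda = -1$ eigenvector of $h\cdot\sigma$ has a vanishing first (neutrino) component, while the $\lambda = +1$ eigenvector has a vanishing second (electron) component. This is a small sketch, identifying the doublet components $(\nu_R, e_R)$ with the eigenvector entries:

```python
# Sketch: eigenvectors of H = h.sigma at the north pole h = (0, 0, 1).
# lambda = -1 forces nu^-_R -> 0 (eq. 13); lambda = +1 forces e^+_R -> 0 (eq. 15).
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
h = np.array([0.0, 0.0, 1.0])                 # north pole, h.h = 1
H = sum(hk * sk for hk, sk in zip(h, sigma))  # h . sigma

vals, vecs = np.linalg.eigh(H)                # ascending: -1 then +1
v_minus, v_plus = vecs[:, 0], vecs[:, 1]
ok = np.allclose(vals, [-1.0, 1.0])

nu_minus = abs(v_minus[0])  # neutrino entry of lambda = -1 eigenvector
e_plus = abs(v_plus[1])     # electron entry of lambda = +1 eigenvector
print(ok, nu_minus < 1e-12, e_plus < 1e-12)
```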
For each eigenvalue case we have the following invariant form
$$K_R = \frac{1}{2}\left[i\bar{R}\Gamma^\mu D_\mu R + (i\bar{R}\Gamma^\mu D_\mu R)^*\right], \tag{16}$$
where in each case the appropriate diagonal covariant derivative discussed above is used. We also have for each case the invariant lepton mass forms.
$$K_m = -m\left[\bar{L}R + \bar{R}L\right] = -m\left[\bar{\nu}_R \nu_L + \bar{\nu}_L \nu_R\right] - m\left[\bar{e}_R e_L + \bar{e}_L e_R\right]. \tag{17}$$
Here $m$ is an invariant used to represent the mass of the lepton field. The reader should recall that in the standard model the lepton mass term involved a product of a constant times a Higgs doublet component. Here, we could express the mass as a product like $m = GV$, where $G$ is a constant. This product form is not needed for invariance of the Lagrangian, but may be introduced for other reasons. At the north pole $\nu^-_R \to 0$ and $e^+_R \to 0$. At this point the combined right-handed kinetic Lagrangian term reduces to
$$K_{R_-} + K_{R_+} \to \frac{i}{2}\left[\bar{e}^-_R \gamma^\mu \partial_\mu e^-_R - (\partial_\mu \bar{e}^-_R)\gamma^\mu e^-_R\right] - B_\mu g' \bar{e}^-_R \gamma^\mu e^-_R + \frac{i}{2}\left[\bar{\nu}^+_R \gamma^\mu \partial_\mu \nu^+_R - (\partial_\mu \bar{\nu}^+_R)\gamma^\mu \nu^+_R\right] \tag{18}$$
The corresponding combined mass term becomes
$$K_{m_-} + K_{m_+} \to -m_-\left[\bar{e}^-_R e_L + \bar{e}_L e^-_R\right] - m_+\left[\bar{\nu}^+_R \nu_L + \bar{\nu}_L \nu^+_R\right] \tag{19}$$
The field equations become
$$-i\gamma^\mu \partial_\mu e^-_R + g' B_\mu \gamma^\mu e^-_R + m_- e_L = 0 \tag{20}$$
$$-i\gamma^\mu \partial_\mu e_L - \frac{1}{2}\left[P^{21}_\mu \gamma^\mu \nu_L + P^{22}_\mu \gamma^\mu e_L\right] + m_- e^-_R = 0 \tag{21}$$
$$-i\gamma^\mu \partial_\mu \nu^+_R + m_+ \nu_L = 0 \tag{22}$$
$$-i\gamma^\mu \partial_\mu \nu_L - \frac{1}{2}\left[P^{11}_\mu \gamma^\mu \nu_L + P^{12}_\mu \gamma^\mu e_L\right] + m_+ \nu^+_R = 0 \tag{23}$$
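In the free limit the pairs (20)-(21) and (22)-(23) combine into Dirac systems with masses $m_-$ and $m_+$: applying $-i\gamma^\mu\partial_\mu$ twice gives the mass shell $p^2 = m^2$, since $(\gamma^\mu p_\mu)^2 = p^2 I$. A small sympy sketch (using the Dirac representation of the gamma matrices, an assumption of this illustration) verifies that identity:

```python
# Sketch: check (gamma^mu p_mu)^2 = (p0^2 - p1^2 - p2^2 - p3^2) * I,
# the algebraic step behind the free-limit mass shell of eqs. (20)-(23).
import sympy as sp

p0, p1, p2, p3 = sp.symbols("p0 p1 p2 p3")
I2, Z2 = sp.eye(2), sp.zeros(2)
sx = sp.Matrix([[0, 1], [1, 0]])
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]])
sz = sp.Matrix([[1, 0], [0, -1]])

def block(a, b, c, d):
    return sp.Matrix(sp.BlockMatrix([[a, b], [c, d]]))

# Dirac representation of the gamma matrices
g0 = block(I2, Z2, Z2, -I2)
gs = [block(Z2, s, -s, Z2) for s in (sx, sy, sz)]

slash_p = p0 * g0 - (p1 * gs[0] + p2 * gs[1] + p3 * gs[2])
sq = sp.simplify(slash_p * slash_p)
p_squared = p0**2 - p1**2 - p2**2 - p3**2
diff = sp.simplify(sq - p_squared * sp.eye(4))
print(diff == sp.zeros(4))  # -> True
```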
In the limit that the potentials vanish, $m_+$ becomes the mass of the free neutrino and $m_-$ becomes the mass of the free electron. The three linear and nonlinear components of these transformations have separately conserved currents. In [2] the three nonlinear currents reduced to a common conserved current $j^\mu$. At the north pole, for the standard Lagrangian with $Y e_R = -2 e_R$ and the transformations given in [2], this current is
$$\begin{aligned} j^\mu = {} & \bar{e}_L \gamma^\mu e_L + \bar{e}_R \gamma^\mu e_R - i\left[W^{\mu\rho}_- W^+_\rho - W^{\mu\rho}_+ W^-_\rho\right] + i\left[(\partial^\mu \Phi^*_1)\Phi_1 - (\partial^\mu \Phi_1)\Phi^*_1\right] \\ & + \left[N\cos 2\theta_w\, Z^\mu + 2qA^\mu\right]\Phi^*_1 \Phi_1 - \frac{g}{\sqrt{2}}\left[W^\mu_+ \Phi^*_2 \Phi_1 + W^\mu_- \Phi_2 \Phi^*_1\right]. \end{aligned} \tag{24}$$
This current at this pole is proportional to the electromagnetic current density. The usual gauge theory practice is to set $\Phi_1 = 0$ and $\Phi_2 = \nu_0/\sqrt{2}$, a constant, in lowest order to generate the intermediate boson masses with the Higgs doublet. Recall that here we are using SU(2) realized nonlinearly versus a gauge SU(2) × U(1). In this picture the nonlinear components generate the electromagnetic current conservation at the north pole.
For the alternate Lagrangian we drop the terms involving the Higgs doublet $\Phi$. The linear transformations of the $V$ space components give no contribution to the nonlinear current. With $YR_- = -R_-$ we have
$$j^\mu = \bar{e}_L \gamma^\mu e_L + \bar{e}^-_R \gamma^\mu e^-_R - i\left[W^{\mu\rho}_- W^+_\rho - W^{\mu\rho}_+ W^-_\rho\right] \tag{25}$$
For the alternate Lagrangian the linear currents at the north pole become
$$\begin{aligned} J^\mu_a = {} & -\frac{1}{2}\Big[(\bar{e}_L \gamma^\mu e_L + \bar{e}^-_R \gamma^\mu e^-_R)\sigma^{22}_a + (\bar{\nu}_L \gamma^\mu \nu_L + \bar{\nu}^+_R \gamma^\mu \nu^+_R)\sigma^{11}_a + \bar{\nu}_L \gamma^\mu e_L \sigma^{12}_a + \bar{e}_L \gamma^\mu \nu_L \sigma^{21}_a\Big] \\ & - W^{\mu\rho}_-(\epsilon_{a1k} + i\epsilon_{a2k})W^k_\rho + W^{\mu\rho}_+(\epsilon_{a1k} - i\epsilon_{a2k})W^k_\rho \\ & + \cos\theta_w\, \epsilon_{a3k} W^k_\rho\left[-Z^{\mu\rho} + iN\cos^2\theta_w\,(W^\mu_+ W^\rho_- - W^\rho_+ W^\mu_-)\right] \\ & + \sin\theta_w\, \epsilon_{a3k} W^k_\rho\left[-F^{\mu\rho} + iq\,(W^\mu_+ W^\rho_- - W^\rho_+ W^\mu_-)\right] \end{aligned} \tag{26}$$
Recall that at this north pole point $e^+_R \to 0$ and $\nu^-_R \to 0$, so these fields do not appear in the conserved currents at the north pole. Even when the potential contributions to the conserved linear currents vanish, the right-handed neutrino $\nu^+_R$ still contributes to the conserved currents. The conserved transition currents $J^\mu_1$ and $J^\mu_2$ involve cross terms between the left-handed electron and neutrino fields. There is a fundamental difference in the conserved currents for the standard and alternate Lagrangians given in [2]. In the standard Lagrangian, $e_R$ transforms as a singlet and there is no $\nu_R$ component. The conserved linear current is the same as (26) if we drop the right-handed components and add the following Higgs doublet contribution.
$$J^\mu(\Phi) = \frac{i}{2}\left[(D^\mu\Phi)^\dagger\Phi - \left((D^\mu\Phi)^\dagger\Phi\right)^\dagger\right] \tag{27}$$
The standard covariant derivative for Φ is given in [2]. The currents in the above expressions are for the north pole point only. At non-pole points on the sphere none of the lepton components vanish. Before the field equations and conserved currents can be obtained, the constraint equations must be incorporated into the Lagrangian.

Since this nonlinear theory is consistent with observations at the north pole point, it is reasonable to think that the other regions on the adjoint sphere likewise exist in nature. At all regions on the sphere other than the two poles, both lepton components in each eigenvalue case have mass. All four boson potential fields are very massive. The mass of the $A^\mu$ field decreases to zero as one approaches the north pole, but becomes very large near the south pole. This large region between the poles may be difficult to access in the laboratory because in this region interactions are with four massive potentials. Because of the large masses, interactions would perhaps be fast. The mass of the leptons, V space components, and four heavy boson fields in the region between the poles could provide a significant contribution to the missing mass.

It is perhaps incorrect to call the leptons, V space components, and vector bosons at points other than the north pole "dark" matter. They simply do not interact with the massless electromagnetic field that exists only at the north pole in this picture. Observation will depend on appropriate detectors for this region. For instance, the e fields in this region would not be seen as curved tracks in a magnetic field, nor would they be affected by electromagnetic accelerators, no matter how high the energy. At present, detection is indirect, via gravity.
Acknowledgments

The author would like to thank Professor Kevin Haglin for reading this manuscript and for many useful discussions on the variety of topics raised in this study.
B. J. Dalton, International Journal of Theoretical Physics 21, 765 (1982).
Bill Dalton, Submitted for publication. The preprint can be found at arXiv:1005.2384v1 [hep-th].
B. J. Dalton, J. Math. Phys. 20, 1520 (1979).
B. J. Dalton, International Journal of Theoretical Physics 23, 765 (1984).
B. J. Dalton, J. Math. Phys. 19, 1335 (1978).
S. Weinberg, Phys. Rev. Lett. 19, 1264 (1967).
A. Salam, Elementary Particle Theory, ed. N. Svartholm, Stockholm: Almquist and Forlag (1968).
C. N. Yang and R. L. Mills, Phys. Rev. 96, 191 (1954).
Exploring Complex Dynamical Systems via Nonconvex Optimization
December 2022
Hunter Elliott
Exploring Complex Dynamical Systems via Nonconvex Optimization
December 2022
Cataloging the complex behaviors of dynamical systems can be challenging, even when they are well-described by a simple mechanistic model. If such a system is of limited analytical tractability, brute force simulation is often the only resort. We present an alternative, optimization-driven approach using tools from machine learning. We apply this approach to a novel, fully-optimizable, reaction-diffusion model which incorporates complex chemical reaction networks (termed "Dense Reaction-Diffusion Network" or "Dense RDN"). This allows us to systematically identify new states and behaviors, including pattern formation, dissipation-maximizing nonequilibrium states, and replication-like dynamical structures.
Introduction
Chemical reaction systems driven far from equilibrium can demonstrate striking complexity in both the temporal and spatial variations of the concentrations of their constituent chemical species (Figure 1, Appendix B.1, [1]). This complexity can be captured in simple reaction-diffusion (RD) models, and has been appreciated to be relevant for understanding a variety of nonequilibrium phenomena ranging from biological pattern formation [2][3] to the emergence of entropy-producing "dissipative structures" more broadly [4], and has even been speculated to be important for the earliest stages of abiogenesis [5][6][7]. The underlying microscopic physicochemical principles are well understood, and with minimal assumptions simple differential models incorporating these principles agree well with experiment [8][9][10][11].
Despite this mechanistic understanding, a reductionist approach to examining these systems often bears little fruit. Writing down the partial differential equations describing the system does not easily yield a full picture of the allowed behaviors via analytical means, requiring instead a focus on reduced models or specific behaviors [12] [13] [14] [15]. Some analytical approaches may define parameter ranges corresponding to stable or complex dynamics, but cannot in the general case enumerate which complex behaviors and states are attainable [16]. This is further complicated by the fact that many of these systems display chaotic behavior, implying that over time they may visit an infinity of states [17][18] [19]. Forward numerical simulation can be carried out with high accuracy, yet for systems of even moderate complexity a brute-force exploration of state and parameter space is at best tedious and at worst prohibitively computationally intensive [3].
One alternative "inverse desig" approach, explored here, is to first choose a specific behavior of interest and then ask whether points in the configuration space of a model can be found which correspond to this behavior. If the behavior of interest can be formulated as an easily evaluable mathematical function of the system state and or dynamics, this allows the exploration of the system's behavior to be framed as an optimization problem. While this optimization problem is in most cases nonconvex, some tools from modern machine learning (ML) may still be applied. In this way rather than attempting to fully catalog the possible behaviors a chemical reaction system can display, we instead test a hypothesis regarding a particular state or dynamic of interest, side-stepping both the analytical intractability and the prohibitive complexity of brute force search.
The inverse design of RD systems has previously been framed as an optimization problem, albeit without physically realistic chemical models and with applications often aimed at image processing or texture synthesis [20][21] [22] [23]. Some have designed realistic RD systems, but using heuristics [24] [25]. Other inverse design approaches working with physically realistic RD models avoid optimization and instead aim to produce RD system modules that compartmentalize complexity and allow programmatic design of modular or hierarchical structures [26] or cellular automata-like boolean dynamics [27]. Some methods allow inverse design of CRN dynamics but without a diffusion component or spatial organization [28]. Machine learning approaches have been applied to related realistic models, but in unrelated ways; either as methods to approximate the forward model [29][30] [31], or to learn models from real world data [32][33] [34]. Here we do not approximate the physical model nor do we use any real world data. Instead we use only the optimization approaches from ML to explore not just steady states but dynamics and thermodynamics of an un-approximated, known, yet flexible, physically realistic forward model.
The approach we present has its limitations, not only in the systems and behaviors which can be explored, but also in the conclusions that can be drawn from these explorations: the failure of the optimization to converge is an absence of proof that a state or dynamic is possible, not a proof of absence. Still, we aim to demonstrate here that it can complement existing approaches and may find use in investigations not only of driven nonequilibrium chemical reaction systems but of complex dynamical systems more broadly.
An Optimization Approach
At a high level, this approach requires that we specify a model which simulates the dynamics of the system of interest, as well as a loss function representing a hypothesis about a dynamical behavior the system may be capable of. The loss is simply a scalar function of the model's state, dynamics, and parameters which takes on small values if and only if the behavior or state of interest is exhibited. If the model and loss are both differentiable, we can use model construction and optimization approaches from machine learning to minimize the loss and attempt to realize the behavior of interest. Here we focus on a novel 'dense reaction-diffusion network' model or "Dense RDN", where an arbitrary number of chemical species interchange via a reaction network while also diffusing in two spatial dimensions. The chemical reactions, their rate constants, the diffusion coefficients, and the initial conditions are all determined by the optimization. The loss function is of the investigator's choice, and could represent simple behaviors like stability or bi-stability, or more complex behaviors, of which we present several examples in Section 3.
Preliminaries
More formally, we assume that it is possible to specify a forward model Ψ(X, θ Ψ ) which propagates a state X through time, parameterized by θ Ψ . The model should give differentials ∂X ∂t = Ψ(X, θ Ψ ) which can be used in e.g. forward Euler integration:
$$X_{t+1} = X_t + \Psi(X_t, \theta_\Psi)\,\Delta t = X_t + \Delta X_t \tag{1}$$
With a time step size ∆t giving a change in state ∆X t (see Appendix A.1 for details on how we ensure this step size is appropriately small). The initial conditions X 0 are generated from a random vector z by a model X 0 = G(z, θ G ) (in this work a neural network) parameterized by θ G . Repeated application of (1) to these initial conditions gives a time evolution X = [X 0 , X 1 , ..., X T ]. We define θ = θ Ψ ∪ θ G for convenience and require that both Ψ and G be differentiable almost everywhere with respect to both X and θ. Finally, we assume a scalar loss function L(θ) can be specified such that as the loss approaches a minimum value the behavior of interest is exhibited in X . As a simple example, if we sought to identify steady states we could define L(θ) = ∆X t 2 . Thus, in general we seek parameters θ * :
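As a minimal sketch (using a toy one-dimensional transition model rather than the paper's Dense RDN), the Euler rollout of Eq. (1) looks like:

```python
import numpy as np

def euler_rollout(psi, x0, dt, n_steps):
    """Repeatedly apply Eq. (1): X_{t+1} = X_t + Psi(X_t) * dt."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] + psi(xs[-1]) * dt)
    return np.stack(xs)  # the time evolution [X_0, X_1, ..., X_T]

# Toy transition model dX/dt = -X, whose exact solution is X_0 * exp(-t)
traj = euler_rollout(lambda x: -x, np.array([1.0]), dt=0.01, n_steps=100)
```

For a sufficiently small ∆t the final state approaches the exact solution exp(−1) ≈ 0.368, which is the kind of step-size check discussed in Appendix A.1.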
$$\theta^* = \arg\min_\theta L(\theta) \tag{2}$$
Because L is fully differentiable, gradient-based optimization techniques such as stochastic gradient descent (SGD) can be applied. If the optimization converges to a sufficiently low value of the loss such that the resulting dynamics meet the investigator's criteria, then we will have determined initial conditions and transition model parameters which result in a time evolution X that exhibits the behavior of interest, thus proving it is within the possible dynamics of the model. In contrast, if the optimization fails to yield such parameters, we cannot conclude that the desired behavior is impossible, only that this procedure failed to parameterize it.
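A hypothetical, minimal instance of (2): gradient descent on a toy steady-state loss. A finite-difference gradient stands in for the automatic differentiation used in practice, and the transition model Ψ(X, θ) = θ − X is purely illustrative.

```python
import numpy as np

def grad_fd(loss, theta, eps=1e-5):
    """Central finite-difference gradient; real ML pipelines use autodiff instead."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (loss(theta + d) - loss(theta - d)) / (2 * eps)
    return g

# Hypothetical steady-state loss: L(theta) = ||Psi(x, theta)||^2 with Psi = theta - x,
# so the loss vanishes exactly when theta makes x a fixed point of the dynamics.
x = np.array([0.3, -1.2])
loss = lambda th: float(np.sum((th - x) ** 2))

theta = np.zeros(2)
for _ in range(200):
    theta = theta - 0.1 * grad_fd(loss, theta)  # plain gradient descent on Eq. (2)
```

Because the toy loss is convex this converges reliably; the paper's point is that the same local-descent machinery often works even when convexity is absent.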
An expectation of success? At first glance this may seem a doomed endeavor. Our expressed interest is in exploring models Ψ(X, θ) with intractably complex, nonlinear dynamics. This makes the relationship between θ and X highly nonlinear and provides no guarantee of convexity in our loss L. Nonconvex optimization is difficult (NP-hard), yet we propose to use local optimization methods such as SGD to find θ * . Nonetheless we are encouraged by two observations. First, we do not require that we find a global minimum of the loss, only a 'sufficiently low' local minimum, such that the resulting structure and/or dynamics meet the investigator's criteria for the behavior of interest. Second, the entire field of deep learning (DL, a sub-field of machine learning), a field which has seen remarkable success, relies on such optimizations succeeding despite their apparent in-feasibility. The reasons for the empirically observed reliable convergence of such nonconvex optimizations is still an active area of research, and not addressed here. However, one proposed explanation which we conjecture is of relevance here is this: When the parameter space being optimized is of sufficiently high dimensionality, the existence of true local minima with high loss values becomes increasingly unlikely [35]. We do not systematically study the conditions required for convergence. We do however demonstrate that by embedding high-dimensional physicochemical models within yet higher dimensional neural networks and employing optimization approaches used in DL, we are able to achieve convergence with a variety of loss functions. This at least demonstrates empirically that the optimization approach popularized in data driven machine learning can be applied to a data-free, purely forward simulation-driven exploration of complex dynamical models.
A Dense Reaction-Diffusion Network Model
We chose reaction-diffusion (RD) models as a simple yet accurate model which can display spatiotemporally complex behaviors and which has at least conceptual relevance for physics, chemistry, and biology. Even simple RD systems have been observed experimentally to produce complex and even chaotic behavior which is well described by these models [8][9][10] [11]. When driven by the constant influx of reactants, these models represent nonequilibrium systems of interest in thermodynamics and possibly even abiogenesis [5] [36]. While much prior work has analyzed hand-designed chemical reaction sets, here we begin with a large, dense network of chemical reactions and allow the appropriate reaction set to be determined during the optimization.
Chemistry: Chemical Reaction Network
In this work the forward model Ψ is a reaction-diffusion model. The chemical reactions in this model comprise what we refer to as a "dense" chemical reaction network (CRN), because it contains every possible reaction which matches these three reaction prototypes:
$$A + B \rightleftharpoons 2C \qquad\quad A \rightleftharpoons B \qquad\quad A + 2B \rightleftharpoons 3B \tag{3}$$
By "reaction prototype" we mean that, while the CRN may contain any number of chemical species N s , the letters in (3) simply indicate that A,B,C must be 3 different species, with the specified stoichiometries (See Appendix B). The forward and reverse rates of each reaction are free parameters κ = {k f 1 , k r1 , k f 2 , k r2 , ...} ⊂ θ Ψ which, with standard mass action kinetics, yield a net change in state due to chemical reactions ∆X t Rxn . With the reaction types in (3), the resulting dynamics are highly nonlinear, and we can be assured that complex behavior is at least possible (See Appendix B.3), despite much of the kinetic parameter space producing uninteresting behavior (e.g. monotonic relaxation to equilibrium). This reaction network structure also encompasses much of the uni-, bi-, and tri-molecular reactions possible amongst 3 or more chemical species.
While the simultaneous existence of all of these reactions is perhaps not probable in a real-world CRN, note that the optimization can set reaction rates to ∼ 0, and so we are effectively optimizing not only for the rates of reactions but also which reactions to include.
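One plausible reading of the prototypes in (3) — treating A + B ⇌ 2C as symmetric in A and B, and A + 2B ⇌ 3B as ordered since the two roles differ — can be enumerated directly (the species names S0, S1, ... are illustrative):

```python
from itertools import permutations

def dense_crn(n_species):
    """Enumerate every reversible reaction matching the three prototypes of Eq. (3)."""
    species = range(n_species)
    rxns = []
    # A + B <-> 2C : three distinct species; A,B unordered, so require a < b
    for a, b, c in permutations(species, 3):
        if a < b:
            rxns.append((f"S{a} + S{b}", f"2 S{c}"))
    # A <-> B : reversible, so each unordered pair appears once
    for a, b in permutations(species, 2):
        if a < b:
            rxns.append((f"S{a}", f"S{b}"))
    # A + 2B <-> 3B : autocatalytic; A and B play different roles, so keep order
    for a, b in permutations(species, 2):
        rxns.append((f"S{a} + 2 S{b}", f"3 S{b}"))
    return rxns
```

For N_s = 3 this yields 12 reversible reactions, each carrying an optimizable forward and reverse rate constant.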
Nonequilibrium: Flow Reactor Drive
We model the system as being within a 'flow reactor', so it is maintained away from equilibrium by a constant influx of reactants. This influx produces a corresponding outflow, giving a net change in concentration for chemical species X i due to this drive of [37][1]:
$$\Delta X^{i,\,Drv}_t = f\left(x^i - X^i_t\right)\Delta t \tag{4}$$
Where again both the per-species feed concentrations x i and the shared flow rate f are determined during the optimization.
Space: Diffusion and Initial Conditions
To allow for spatial organization, the CRN exists within a discretized two dimensional domain such that the concentration of chemical species i at position (u, v) is given by X i (u, v). The initial conditions X 0 are generated from random vector z by G which is instantiated as a neural network, similar to convolutional generator models such as DCGAN [38] (see Appendix A.4 for details).
Each chemical species also undergoes diffusion:
$$\Delta X^{i,\,Dif}_t = D^i\,\nabla^2 X^i_t\,\Delta t \tag{5}$$
With an optimizable diffusion coefficient D i ⊂ θ Ψ . The final combined change in state then is the sum of the contributions from reactions, drive, and diffusion:
$$\Delta X_t = \Delta X^{Rxn}_t + \Delta X^{Drv}_t + \Delta X^{Dif}_t \tag{6}$$
Together these terms and their parameters define a space of nonequilibrium physicochemical models which is capable of both mundane and spatiotemporally complex behavior, and is flexible enough to allow the optimization procedure to determine the actual states and dynamics it adopts.
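The three contributions in Eq. (6) can be sketched for a toy two-species system whose only reaction is A ⇌ B. This is a simplification: a plain 5-point periodic Laplacian stands in for the isotropic stencil described in Appendix A.3, and the single reaction stands in for the full dense CRN.

```python
import numpy as np

def laplacian5(x):
    """Periodic 5-point discrete Laplacian (grid spacing 1)."""
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
            np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)

def rd_step(X, kf, kr, f, feed, D, dt):
    """One update of Eq. (6): reaction + drive + diffusion, toy A <-> B chemistry."""
    A, B = X
    net = kf * A - kr * B                       # mass-action net flux A -> B
    dRxn = np.stack([-net, net])
    dDrv = f * (feed[:, None, None] - X)        # flow-reactor drive, Eq. (4)
    dDif = D[:, None, None] * np.stack([laplacian5(A), laplacian5(B)])  # Eq. (5)
    return X + (dRxn + dDrv + dDif) * dt        # Eq. (1) with Eq. (6)
```

With the drive switched off (f = 0), both A ⇌ B and periodic diffusion conserve total mass, which makes a useful sanity check on the integrator.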
Results
All that remains to fully specify an optimization is the loss function L encoding a behavior of interest. In effect, this loss function encodes an hypothesis about the possible states and/or dynamics of the model, and we investigate several such hypotheses here.
Pattern Formation
Pattern formation -emergent structured spatial variation in the concentrations of chemical species -is a hallmark behavior of reaction-diffusion systems [2] [39]. Since postulated by Turing in 1952 [40] it has been repeatedly recapitulated experimentally and is understood to be important for biological pattern formation [2] [3] and even hypothesized to be relevant for the earliest stages of abiogenesis [5][6] [7].
We seek to identify kinetic parameter (θ Ψ ) regimes which correspond to the ability of the system to support stable patterns of arbitrary structure. As these will be driven nonequilibrium patterns, we choose a fitting arbitrary structure as our 'target': an image of Ilya Prigogine (Figure 2a, left), as a tribute to his seminal work on dissipative structures [4][41]. We therefore define a loss term which encourages the concentration distribution of chemical species i to match this target $\bar X$:
$$L(\theta) = \mathbb{E}\left\|\Psi(X_{t-1}, \theta_\Psi)^i - \bar X\right\|^2 = \mathbb{E}\left\|X^i_t - \bar X\right\|^2 \tag{7}$$
Where the expectation is taken both over time and X 0 ∼ G(z, θ G ), in practice implemented via random sampling in time and from z. Furthermore we aim to identify stable patterns, so we introduce an additional loss term which encourages temporal stability:
$$L(\theta) = \mathbb{E}\left[\left\|X^i_t - \bar X\right\|^2 + \lambda_\Delta\left\|\Delta X_t\right\|_1\right] \tag{8}$$
This ensures that we are not just optimizing initial conditions which match the target, but also a dynamical model which preserves it over time. Successful convergence to patterns which mimic this target requires many of the optimization heuristics that have become commonplace in deep learning, as well as some task- and model-specific adjustments, detailed in Appendix D. Nonetheless it is possible, as shown in Figure 2a, where a pattern found in a 4-species CRN shows reasonable fidelity to the target (average Pearson's r of .88 over the optimized timescale of 32 s), as well as stability over timescales dramatically longer than those employed during the optimization (Figure 2c). In other experiments the target pattern is not as temporally stable, and the optimization has clearly converged to a kinetic regime corresponding to dynamic pattern formation (Figure 2b). We confirm that the optimization of both the initial conditions X 0 as well as the reaction network parameters θ Ψ are necessary, by randomizing these and comparing the resulting correlation with the target over time (Figure 2c; see also Appendix D.4).

Figure 2: A 4-chemical-species dense reaction-diffusion network optimized to match a target (a, leftmost panel) supports semi-stable dissipative structures (a, right panels). A second 5-chemical-species example (b) is less stable, showing dynamic pattern formation. c) Measuring correlation over time for the example in a) confirms that the optimization of both the initial conditions and reaction network are necessary for fidelity and stability. Solid lines show means, bands show +/− one standard deviation over n = 32 randomizations.
This therefore confirms that we can de novo identify a combination of initial conditions and reaction network kinetics and topologies (See Appendix B.4) which support the formation of arbitrary patterns purely through optimization.
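A minimal sketch of the pattern loss in Eq. (8), assuming the trajectory is stored as an array of shape (T, N_s, H, W):

```python
import numpy as np

def pattern_loss(X_traj, i, target, lam=100.0):
    """Eq. (8): match species i to the target image, penalize |Delta X| for stability."""
    Xi = X_traj[:, i]                        # (T, H, W) concentration of species i
    match = np.mean((Xi - target) ** 2)      # expectation over time of the mismatch
    dX = np.diff(X_traj, axis=0)             # Delta X_t between consecutive states
    return match + lam * np.mean(np.abs(dX)) # stability term weighted by lambda
```

A trajectory that sits exactly on the target and never moves attains the minimum of zero; any drift or mismatch raises the loss.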
Dissipation Maximization
As these are driven nonequilibrium systems it is natural to ask how the behavior of these models varies as the rate of entropy production -the dissipation rate -increases. In a system which is not relaxing towards equilibrium the dissipation rate is the rate at which input drive energy is converted to entropy. The thermodynamics of this Dense RDN system are well defined([37] [42], see Appendix C). The entropy production rates are differentiable functions of the time evolution X , and so can be used to formulate a loss function, broken down in terms of the diffusion and reaction dissipation rates σ T ot = σ Rxn + σ Dif :
$$L(\theta) = \mathbb{E}\left[e^{-\sigma_{Rxn}(X_t,\theta) - \sigma_{Dif}(X_t,\theta)}\right] \tag{9}$$
Note that we exponentiate the negative dissipation rates so that this is a minimization problem bounded below by zero; the raw dissipation rates are not bounded and can vary by several orders of magnitude, which can induce numerical instability in the optimization. Finally, we seek states which are stably nonequilibrium, rather than those which are simply undergoing dissipative relaxation to equilibrium, so we introduce a term which encourages solutions for which the dissipation rate is unchanging:
$$L(\theta) = \mathbb{E}\left[e^{-\sigma_{Rxn}(X_t,\theta) - \sigma_{Dif}(X_t,\theta)}\right] + \lambda_\sigma\,\mathrm{Var}\left[e^{-\sigma_{Tot}}\right] \tag{10}$$
Where the variance is only over time and where the drive flow rate f is constrained so that the maximum dissipation rate is finite. Minimization of (10) is dominated by the reaction dissipation rate, which is invariant under permutation of spatial positions and so, unsurprisingly, does not induce any spatial structure. Still, we can examine the resulting reaction networks as examples of high entropy-production-rate CRNs (see Appendix B.4 for a visualization of the CRN; note that in this example the drive flow rate f is fixed at 0.03 s⁻¹).
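Given per-timestep entropy production rates, Eq. (10) reduces to a few lines. This is a sketch only: in the actual model the σ terms are computed from the trajectory as in Appendix C, not supplied directly.

```python
import numpy as np

def dissipation_loss(sigma_rxn, sigma_dif, lam=1.0):
    """Eq. (10): exponentiated negative dissipation plus a variance stability term.

    sigma_rxn, sigma_dif: per-timestep entropy production rates, shape (T,).
    """
    drive = np.mean(np.exp(-sigma_rxn - sigma_dif))             # smaller when dissipation is high
    stability = lam * np.var(np.exp(-(sigma_rxn + sigma_dif)))  # penalizes drift over time
    return drive + stability
```

Exponentiation bounds the loss below by zero and tames the orders-of-magnitude spread in raw dissipation rates, as noted in the text.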
If we instead include only the diffusion dissipation rate term, which requires spatial nonuniformity in concentration to be non-zero, we find interesting structures which stably dissipate drive energy at a rate well above those seen without optimization and over timescales much longer than the 64 seconds used during optimization ( Figure 3). It is worth noting that the form of these structures was not in any way pre-specified but rather emerges purely from the interaction between the dissipation-maximization loss function and the physicochemical properties of the model.
Dissipative Distributions
In practice all of the preceding optimizations suffer from 'mode collapse' in the generative model G, such that it converges to producing only a single initial condition (meaning that the expectations in (8) and (10) are effectively over time only). It may be desirable to instead produce a distribution, not only of X 0 but of its time evolution as well. We can enforce this by introducing a decoder model ẑ_t = E(X_t) which is optimized to reconstruct the random binary vector z used to generate X 0 , via an associated loss term:
$$L_z(\theta, \hat z) = \mathbb{E}\left[H(z, \hat z_t)\right] \tag{11}$$
Where H is the cross entropy and E is implemented as a convolutional neural network. With a deterministic forward model, (11) is trivially satisfiable, so we also introduce a spatiotemporally variable, uniformly distributed noise term f̃(u, v) which corresponds to stochastic variation in the flow rate:
$$\Delta X^{i,\,Drv}_t = (f + \tilde f)\odot\left(x^i - X^i_t\right)\Delta t \tag{12}$$
With f̃(u, v) ∼ Unif(0, 0.8) (Gaussian low-pass filtered to avoid spatial discontinuities) and ⊙ indicating the Hadamard product. This serves to ensure some approximate ε-ball separation between the samples of X_t.
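A sketch of the stochastic drive in Eq. (12). Two simplifications are assumed here: a repeated box blur stands in for the Gaussian low-pass filter, and all function names are illustrative.

```python
import numpy as np

def smooth(n, passes=4):
    """Cheap periodic spatial low-pass (box-blur stand-in for the Gaussian filter)."""
    for _ in range(passes):
        n = (n + np.roll(n, 1, 0) + np.roll(n, -1, 0) +
             np.roll(n, 1, 1) + np.roll(n, -1, 1)) / 5.0
    return n

def noisy_drive(X, f, feed, dt, amp=0.8, rng=None):
    """Eq. (12): flow-reactor drive with smoothed, uniformly distributed flow noise."""
    rng = rng or np.random.default_rng(0)
    f_tilde = smooth(rng.uniform(0.0, amp, size=X.shape[-2:]))
    # elementwise (Hadamard) product of the noisy rate field with the feed deficit
    return (f + f_tilde) * (feed[:, None, None] - X) * dt
```

Because the noise field is shared across species but varies over space (and is redrawn each step), nearby trajectories are perturbed apart, which is what forces the decoder task to be non-trivial.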
Minimizing the sum of (10) and (11) yields θ* corresponding to a distribution of states which maintain their uniqueness over time, despite the randomly fluctuating drive (Figure 4). In the example shown, the decoder is able to reconstruct the 16-bit z vector with 100% accuracy for over 1000 time points, implying that this dynamical system is capable of transmitting information through time with a channel capacity proportional to the Shannon entropy of z, despite the noisy environment.

Figure 4: Distributions of dissipative structures in a 5-chemical-species Dense RDN found by simultaneously maximizing diffusive entropy production and a loss that requires the z vector used to generate the initial conditions be reconstructable at every time point. Each row is a sample from z and the columns correspond to t = 10, 100, 1000 s from left to right.
Dynamic Structure
Thus far we have solved for specific states X t of dynamical systems or for properties of their transitions ∆X t . It may be interesting instead to optimize directly for properties of the full time evolution X . Dissipative structures in reaction-diffusion models have been previously shown to undergo particle-like motion [43] [44], and even 'replication' of simple spot patterns [1][45] [46]. Here we seek structural reorganization and motion without explicitly specifying the dynamical model or any of the states X t . We do this by optimizing a loss which requires similarity between X 0 and two shifted versions of X T (Figure 5a):
$$L(\theta) = \left\|X_0 - T_{-w}(X_T)\right\|^2 + \left\|X_0 - T_{+w}(X_T)\right\|^2 \tag{13}$$
Where T −w (X) indicates a vertical translation of X by a distance −w. There is however a trivial solution to (13) which consists of uniform, time-invariant concentrations of all chemical species. We therefore introduce a loss term which encourages the spatial standard deviation in concentrations at the end of the time series to be close to or above a target value β * :
$$L_{STD}(\theta, \beta^*) = \frac{1}{N_s}\sum_i \mathrm{Max}\left[0,\ \beta^* - \mathrm{Var}\left[X^i_T\right]\right]^2 \tag{14}$$
Where the variance is over spatial positions (u, v). We then minimize the sum of (13) and (14). By requiring this similarity between a central region at the beginning of the time series and two distinct regions of X T at the end of the time series (Figure 5a), the optimization must converge not simply to translation but to something vaguely akin to replication in order to minimize this loss. This requires optimization over longer timescales, necessitating a modified 'incremental' optimization procedure (see Appendix D.3). Nonetheless the optimization finally converges to a combination of a chemical reaction network (Appendix Figure 6b) and initial conditions which roughly matches the desired dynamics (Figure 5b). The replication fidelity is modest, with an average Pearson's correlation of .893 between the 'parent' structure at t = 0 and the two 'daughter' structures at t = 352 s. This is of course not true replication. The two 'daughter' patterns on the right of Figure 5b are not capable themselves of dividing, and if they were, this loss structure would not handle the inevitable crowding and collisions between their offspring. Nonetheless, this serves as an example of relatively complex dynamical reorganization which emerges completely from the form of the loss and the properties of the physicochemical model.
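The losses in Eqs. (13) and (14) can be sketched with periodic shifts playing the role of the translation operator T (a simplifying assumption; the paper's boundary handling may differ), for a trajectory endpoint of shape (N_s, H, W):

```python
import numpy as np

def replication_loss(X0, XT, w, beta_star=0.0):
    """Eq. (13) plus the spatial-variance floor of Eq. (14)."""
    up = np.roll(XT, -w, axis=-2)       # T_{-w}: vertical translation by -w
    down = np.roll(XT, +w, axis=-2)     # T_{+w}: vertical translation by +w
    match = np.sum((X0 - up) ** 2) + np.sum((X0 - down) ** 2)
    var = XT.reshape(XT.shape[0], -1).var(axis=1)        # per-species spatial variance
    std_term = np.mean(np.maximum(0.0, beta_star - var) ** 2)
    return match + std_term
```

With beta_star = 0 the spatially uniform state is the trivial minimizer noted above; a positive beta_star penalizes it, forcing structured end states.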
Discussion
We have demonstrated through several examples that if a hypothesis about the possible states and dynamics of a complex system can be appropriately represented mathematically as a loss function, the search through the combinatorially vast space of possible behaviors of the system can be guided by nonconvex optimization.
This approach has a fundamental limitation; failure of the optimization to converge does not constitute falsification of the hypothesis. Rather, this is an absence of proof of the hypothesis, not a proof of absence of the hypothesized behavior.
Still, this approach allowed the optimization-driven identification of diffusion-coupled chemical reaction networks which can stably support predetermined spatial patterns (3.1).
While the examples we show here are not emergent but rather dependent on optimized initial conditions, flavors of our approach may nonetheless find use as improvements on previously demonstrated techniques in e.g. micro/nanofabrication [47][48] [24][49] [26]. Importantly, our approach does not require pre-specification of the specific reactions to include but rather allows the optimization to select them from within an initial dense reaction network.
Rather than optimizing for specific states we can also search for dynamics, and we've shown here that we can do this without specifying a priori either the shape or motion (3.3).
While the form of the loss we've chosen gives "replication-esque" dynamics, in reality the similarity to replication is quite superficial: The replication is low-fidelity and unstable, and likely requires longer timescales, more complex CRNs, losses which specifically encourage stability, or a combination of these, in order to make the analogy to true replication less flimsy. Still, the ability to derive purely from optimization a non-isotropic concentration distribution that can dramatically re-organize in such a manner is at least a step towards demonstrating the feasibility of complex true replicators in a system without compartmentalization but rather only reaction and diffusion.
Perhaps most interestingly, this approach allows us to examine the dissipative structures that emerge when we search for models with extreme thermodynamic properties e.g. maximal entropy production rates (3.2). Here we show only qualitative observations, and note that for emergent spatial structure in this model only the diffusion component of the dissipation rate should be maximized. These structures were found in the presence of a simple, spatiotemporally invariant drive. However it has been hypothesized that the adaptations relevant for the emergence of persistent nonequilibrium phenomena (such as life) are induced while dissipating energy from more complex, difficult-to-exploit drives [50] [51]. It is possible that, if combined with appropriate spatiotemporally variable and chemically complex drives, this optimization approach could help shed light on these hypotheses.
Finally, while we focus on a specific reaction-diffusion type model here, the method presented is applicable to any differentiable model and loss function. The intersection of physics, chemistry and biology is littered with systems where reductionism has produced a simple and accurate differential model of a complex system, and yet this model has, as yet, failed to yield a comprehensive understanding of the phenomenon it describes [52]. This knowledge gap is mirrored in an observation made by Turing himself, one of the first to investigate mathematical models of RD systems:
This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false [53].
In analogy, when the 'fact' of the mechanistic model of a complex system is known, and yet existing methods struggle to map the universe of dynamics that are consequences of this fact, we hope that the method described here provides a complementary approach to closing that gap in understanding.
All source code is available at https://github.com/hunterelliott/dense-rdn
A.1 Minimizing Numerical Integration Error
With highly nonlinear differential models and a simple Euler integration scheme, we must be careful to ensure that the results of the optimizations are still reflective of the physicochemical model and not simply artifacts of numerical integration error. We minimize the effect of these errors in several ways.
First, we control the step size ∆X t via penalty terms in the loss. In 3.1 this is a natural part of the loss (Eq. 8) which encourages stable pattern formation, and we set λ ∆ = 100 in those experiments. For the remaining experiments we introduce a penalty only for step sizes that exceed a threshold δ M ax :
$$L_\delta(\theta, \delta_{Max}) = \lambda_\delta\,\frac{1}{UVT}\sum_{(u,v),\,t} \mathrm{Max}\left[\,\left|\Delta X_t(u,v)\right| - \delta_{Max},\ 0\,\right] \tag{15}$$
Where we used λ δ = 1000 and δ M ax = .05 for all experiments except those in 3.3 which used λ δ = 1.0 and δ M ax = .3. The full loss then is the sum of L δ and the loss described for each experiment.
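A sketch of the thresholded step-size penalty in Eq. (15), for a stacked array of per-timestep changes ΔX:

```python
import numpy as np

def step_penalty(dX, delta_max, lam):
    """Eq. (15): penalize per-element step magnitudes that exceed delta_max."""
    excess = np.maximum(np.abs(dX) - delta_max, 0.0)
    return lam * np.mean(excess)   # mean implements the 1/(U V T) normalization
```

Steps inside the threshold contribute nothing, so the penalty only activates when the Euler integration risks becoming inaccurate.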
Additionally, for all results presented we verified that the results were unchanged if we ran forward simulations with the time step ∆t decreased by two orders of magnitude (from 1.0 to .01). This decreased time step should dramatically reduce numerical error and so the fact that our results are unchanged indicates they are unlikely to be artifactual.
A.2 Local Concentration Visualizations
When the number of chemical species to be visualized $N_s$ is greater than 3, we first perform a principal component analysis (PCA) projection of the concentration dimensions. We retain only the largest 3 components and map these to the intensities of the red, green and blue channels, respectively, of an RGB image for visualization. In all cases presented here these three components explained >90% of the concentration variance. Note that this approach allows approximate visualization of arbitrarily complex chemical species mixtures, but precludes inclusion of a concentration color scale bar. This is, however, unimportant: because the concentrations and reaction rate constants are determined simultaneously in the optimization, both could be arbitrarily re-scaled, so we are effectively working in arbitrary concentration units.
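A minimal sketch of this projection using SVD-based PCA (function names are illustrative; the paper's code may differ, and the rescaling to [0, 1] here is an assumption about the display mapping):

```python
import numpy as np

def concentrations_to_rgb(X):
    """Project an (H, W, N_s) concentration field onto its top 3 principal
    components and min-max rescale to [0, 1] for display as an RGB image."""
    H, W, Ns = X.shape
    flat = X.reshape(-1, Ns)
    centered = flat - flat.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ Vt[:3].T                 # scores on the top 3 components
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    rgb = (proj - lo) / np.where(hi > lo, hi - lo, 1.0)  # per-channel rescale
    return rgb.reshape(H, W, 3)
```

The explained-variance check mentioned above can be read off the singular values: the squared top-3 singular values divided by the sum of all squared singular values.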
A.3 Diffusion Modeling
We used an isotropic discrete approximation of the Laplacian operator [54] to approximate Eq. (5). In all cases diffusion coefficients were constrained to lie between .05 and .2 × 10⁻⁵ m²/s. The lower bound serves to prevent runaway accumulation of chemical species and spatial discontinuities in concentration. The upper bound was chosen to prevent numerical artifacts (checkerboard patterns, oscillation) given the 1s time step and discrete Laplacian approximation. Using these units for the diffusion coefficients, the physical dimension of a single spatial element of X (the 'pixel size') is .01m, but given that the diffusion coefficients and concentrations are all optimized simultaneously and could be arbitrarily re-scaled, all the units are effectively arbitrary. All spatial domains were 64 × 64 pixels except in 3.3, where the dimensions were 48 × 96.
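As an illustrative sketch, one common 9-point isotropic Laplacian kernel and an explicit Euler diffusion step look like the following; the exact kernel of [54] and the periodic boundary handling are assumptions here, not taken from the paper:

```python
import numpy as np

# One common 9-point isotropic approximation of the 2D Laplacian
# (entries sum to zero, so a uniform field has zero Laplacian).
KERNEL = np.array([[0.25,  0.5, 0.25],
                   [0.5,  -3.0, 0.5],
                   [0.25,  0.5, 0.25]])

def laplacian(field):
    """Apply the kernel with periodic boundaries (an assumption here)."""
    out = np.zeros_like(field)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += KERNEL[di + 1, dj + 1] * np.roll(field, (di, dj), axis=(0, 1))
    return out

def diffuse_step(X, D, dt=1.0):
    """One explicit Euler diffusion step for a single species field:
    X_{t+1} = X_t + dt * D * lap(X_t)."""
    return X + dt * D * laplacian(X)
```

Because the kernel sums to zero and the boundary is periodic, each step conserves total concentration, which makes this a convenient sanity check on the implementation.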
A.4 Neural Networks
The generator model G(z, θ_G) is a simple architecture consisting of repeated blocks of stride-2 convolution transpose, batch normalization [55] and a tanh activation. The final generated spatial domain is therefore $2^{N_{blocks}}$ pixels in both width and height, and the number of blocks varies within the presented results accordingly. The output is also filtered with a fixed 2D Gaussian kernel (σ = 1.0 pixels) to avoid introducing spatial discontinuities, and therefore numerical error, in diffusion simulations and diffusion entropy rate calculations. Finally, we exclude batch norm from the last layer (as in DCGAN [38]) and instead add only an optimizable scaling that is constrained to prevent negative concentrations and limit the maximum generated concentration to 10.0 (in arbitrary units).
The encoder models used in 3.2.1 use a similar structure, with stride 2 convolutions followed by a tanh activation and batch normalization.
Experimentation was not extensive and was constrained by hardware limitations, but in general we saw no obvious improvement in convergence properties or final optimized values from using ReLU activations or higher capacity generator or encoder architectures.
B Chemical Reaction Networks
B.1 Introductory Example Reaction Network
The dynamics shown in Figure 1 are derived from a 'coupled Gray-Scott' reaction system, which consists of two standard Gray-Scott reaction systems [56] linked by an additional reaction which allows interconversion of the two autocatalysts.
B.2 Dense Chemical Reaction Networks
The reaction networks used here are "dense" in the sense that they contain all possible reactions matching a particular prototype. That is, if a reaction network contained species S = {A, B, C, D} then for the first reaction in (3) we would include:
$$A + B \rightleftharpoons 2C \qquad A + C \rightleftharpoons 2B \qquad B + C \rightleftharpoons 2A \qquad A + B \rightleftharpoons 2D \qquad \ldots \tag{16}$$
While for the second reaction $A \rightleftharpoons B$ we would include every possible reversible unimolecular interconversion, e.g. $A \rightleftharpoons B$, $A \rightleftharpoons C$, $A \rightleftharpoons D$, $B \rightleftharpoons C$, and so on. More formally, we would generate reaction sets matching each of the reaction prototypes given in (3):
$$\begin{aligned}
R_{3.1} &= \{X + Y \rightleftharpoons 2Z \mid \forall X, Y, Z \in S,\; X \neq Y,\; Y \neq Z,\; X \neq Z\} \\
R_{3.2} &= \{X \rightleftharpoons Y \mid \forall X, Y \in S,\; X \neq Y\} \\
R_{3.3} &= \{X + 2Y \rightleftharpoons 3Y \mid \forall X, Y \in S,\; X \neq Y\}
\end{aligned} \tag{17}$$
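This enumeration can be sketched with itertools. Note that the deduplication convention (e.g. whether {X, Y} in X + Y ⇌ 2Z is treated as ordered) determines the totals, so the counts below reflect this sketch's convention and are not necessarily the counts of the paper's implementation:

```python
from itertools import combinations, permutations

def dense_reactions(species):
    """Enumerate reversible reactions matching the prototypes in (17).
    A sketch: the deduplication choices here are assumptions."""
    S = list(species)
    # X + Y <-> 2Z : {X, Y} unordered, Z distinct from both.
    r31 = [((x, y), (z, z)) for x, y in combinations(S, 2)
           for z in S if z not in (x, y)]
    # X <-> Y : unordered, since the reaction is reversible.
    r32 = [((x,), (y,)) for x, y in combinations(S, 2)]
    # X + 2Y <-> 3Y : ordered, X is consumed and Y autocatalyzed.
    r33 = [((x, y, y), (y, y, y)) for x, y in permutations(S, 2)]
    return r31, r32, r33

r31, r32, r33 = dense_reactions("ABCD")
print(len(r31), len(r32), len(r33))  # 12 6 12 reversible reactions for four species
```

For n species this gives C(n,2)·(n−2), C(n,2), and n(n−1) reversible reactions for the three prototypes, respectively, under this convention.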
Every reaction in these sets is governed by independent forward and reverse reaction rates, which are free parameters determined during the optimization. This gives e.g. a total of 40 reactions for a 5-species dense CRN (including forward and reverse), and 24 reactions for a 4-species dense CRN. To model the dynamics of these chemical reactions we assume standard mass-action kinetics. Using the matrix representations commonly used in chemical reaction network theory, the reaction rate calculations ∆X_t^Rxn are matrix multiplications [16] [57], which are efficiently calculated on the GPUs used for both forward simulation and optimization.
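As a minimal sketch of this matrix form of mass-action kinetics (names are illustrative; the example encodes A ⇌ B as two directed reactions):

```python
import numpy as np

def mass_action_rates(X, R, P, k):
    """Net concentration change under mass-action kinetics in the matrix
    form common in CRN theory (a sketch, not the paper's code).

    X: (Ns,) species concentrations.
    R: (Nr, Ns) reactant stoichiometric coefficients per directed reaction.
    P: (Nr, Ns) product stoichiometric coefficients.
    k: (Nr,) rate constants.
    Fluxes: v_j = k_j * prod_i X_i^R_ji ; net change: dX = (P - R)^T v.
    """
    v = k * np.prod(X[None, :] ** R, axis=1)   # mass-action flux of each reaction
    return (P - R).T @ v                       # net stoichiometry times flux

# Example: A <-> B as two directed reactions.
R = np.array([[1.0, 0.0],    # forward consumes A
              [0.0, 1.0]])   # reverse consumes B
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
k = np.array([2.0, 1.0])
dX = mass_action_rates(np.array([1.0, 1.0]), R, P, k)
print(dX)  # [-1.  1.] : net flow from A to B
```

Because every directed reaction conserves atoms in this toy example, the entries of dX sum to zero, which is a useful invariant to check.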
B.3 CRN Motivation
We chose the reaction network described above both for the variety of reactions which comprise it and for its potential for diverse behavior. This gives the optimization process a broad space of possible reaction sets and dynamics to explore, allowing for the possibility of interesting behavior without pre-specifying it.
The reactions include many if not most stereotypical uni-, bi- and tri-molecular reactions. Given sufficient timescale, more complex reaction mechanisms can be approximated with these more elementary reactions. More specifically, they include as a subset the reactions from well-studied reaction-diffusion models exhibiting complex behavior, such as the Gray-Scott reaction system [56] or the Brusselator [4]. It is therefore not surprising that chemical reaction network theory tells us, on the basis of the topology and stoichiometry of these reactions, that complex dynamics are at least possible [16].
B.4 Optimized Reaction Networks
The optimization process effectively chooses a specific reaction network from the many possible networks the model described above can represent. We can visualize this reaction network as a graph, with arrows pointing from reactants to products, and with the style of arrow indicating the 'strength' of that reaction: Darker, larger arrows indicate larger reaction rate constants, and the graph is laid out such that strongly reacting species should be closer together (shorter arrows).
This allows us to compare the optimized reaction network from, for example, the dissipation-maximizing loss used in 3.2 (Figure 6a) to that from the replication-like-dynamics-inducing loss used in 3.3 (Figure 6b). It is worth noting that these are just the reaction rate constants; the net flux through these reaction pathways depends on the state X and its time progression.
C Thermodynamics
The instantaneous entropy production rate of a single volume element X(u, v) due to chemical reactions is given by [37] [58]:
$$\left(\frac{\partial S}{\partial t}\right)_{Rxn} = \sigma_{Rxn} = k_B \sum_j \left(\nu_j^{+} - \nu_j^{-}\right) \ln\frac{\nu_j^{+}}{\nu_j^{-}} \tag{18}$$
where $\nu_j^{+}$ and $\nu_j^{-}$ are the local forward and reverse reaction rates of reaction $j$ in that volume element and $k_B$ is Boltzmann's constant, giving $\partial S / \partial t$ units of $\mathrm{J/(K \cdot L \cdot s)}$. In practice we replace $\nu_j$ with $\max(\nu_j, 1 \times 10^{-5})$ for numerical stability.
The instantaneous local entropy production rate due to diffusion in a volume element X(u, v) is [58]:
$$\left(\frac{\partial S}{\partial t}\right)_{Dif} = \sigma_{Dif} = k_B \sum_i \frac{D_i}{X_i(u, v)} \left|\nabla X_i(u, v)\right|^2 \tag{19}$$
where again $D_i$ is the diffusion coefficient for species $i$ and $X_i(u, v)$ is the concentration of species $i$ at position $(u, v)$. We add a small numerical stabilizer (0.1) to the denominator of (19) to avoid division by zero. The total entropy production rate is simply the sum of these two components:
$$\sigma_{Tot} = \sigma_{Rxn} + \sigma_{Dif} \tag{20}$$
Rather than integrate this rate to produce a net change in entropy $\Delta S_{Tot} = \sigma_{Tot}\,\Delta t$, we report and maximize the average of the instantaneous rates to avoid explicit dependence on the time and length scale of optimization.
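As a rough sketch (function and variable names are illustrative, not from the paper's code), Eqs. (18)-(19) for a single volume element and a single diffusing species can be written as:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def sigma_rxn(v_fwd, v_rev, eps=1e-5):
    """Reaction entropy production rate (Eq. 18) in one volume element.
    v_fwd, v_rev: arrays of forward/reverse rates of each reaction,
    clamped at eps for numerical stability as described in the text."""
    vf = np.maximum(v_fwd, eps)
    vr = np.maximum(v_rev, eps)
    return K_B * np.sum((vf - vr) * np.log(vf / vr))

def sigma_dif(X, D, stabilizer=0.1):
    """Diffusion entropy production rate (Eq. 19) for one species field X
    with diffusion coefficient D, using finite-difference gradients."""
    gy, gx = np.gradient(X)
    return K_B * D * (gy**2 + gx**2) / (X + stabilizer)

# Total rate (Eq. 20) for an element: sigma_rxn(...) + sigma_dif(...) summed.
```

Two properties make this easy to sanity-check: at equilibrium (ν⁺ = ν⁻) each term of (18) vanishes, and since (a − b)·ln(a/b) ≥ 0 the reaction term is always non-negative; a spatially uniform field likewise gives zero diffusion entropy production.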
D Optimization
D.1 Initialization
As is commonly observed in neural network optimization, we found that initialization was important for convergence. Specifically, in our case models initialized with uniformly random but low reaction rates ($k_{f_i}, k_{r_i} < 1 \times 10^{-3}\ \forall i$) and similarly low drive flow rates and concentrations failed to converge. Models initialized to more highly dissipative regimes, with higher reaction rates and higher drive flow rates, converged more reliably. Exhaustive exploration of initialization dependence is prohibitively computationally intensive, so we instead mimicked the kinetic parameters used in [1], generalizing them to our dense CRNs: one autocatalytic reaction per chemical species was initialized with a rate constant of 1. All other reaction rates were set to $1 \times 10^{-3} + r$ with $r \sim \mathrm{Unif}(-1 \times 10^{-4}, 1 \times 10^{-4})$, as this was found to be low enough to prevent large $\Delta X$ and the associated numerical inaccuracies and/or runaway reaction rates. The small uniform random noise $r$ breaks symmetry in the reaction rates.
Feed concentrations $x_i$ for the flow-reactor drive were initialized to $1/(4 N_{a_i})$, where $N_{a_i}$ is the number of autocatalytic reactions producing species $i$. This scaling was found to produce a highly dissipative yet somewhat kinetically and numerically stable initial state.
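A sketch of this initialization scheme (names are illustrative; the reading of the feed scaling as 1/(4·N_a_i) is an assumption, noted in the comments):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_rate_constants(n_reactions, autocatalytic_idx):
    """Initialize directed rate constants as in D.1: low uniform rates with
    small symmetry-breaking noise, except one designated autocatalytic
    reaction per species, which starts at 1."""
    k = 1e-3 + rng.uniform(-1e-4, 1e-4, size=n_reactions)
    k[list(autocatalytic_idx)] = 1.0
    return k

def init_feed_concentrations(n_autocatalytic_per_species):
    """Feed concentration for species i: 1 / (4 * N_a_i) - an assumed
    reading of the scaling described in the text."""
    Na = np.asarray(n_autocatalytic_per_species, dtype=float)
    return 1.0 / (4.0 * Na)
```

The noise term keeps otherwise-identical reactions from evolving in lockstep during the first optimization steps, which is the symmetry-breaking role described above.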
Diffusion coefficients were initialized from $\mathrm{Unif}(.05, .2) \times 10^{-5}\ \mathrm{m^2/s}$, uniformly within the numerically stable regime.

We used the ADAM optimizer [59], with learning rates and moving averages β₁ as given in Table 1 (a β₂ of .999 was used in all experiments). With these highly nonlinear dynamical models we found that, especially for optimizations with longer timescales, gradient clipping was essential for stability of convergence. We used gradient norm clipping at 0.5 as implemented in Keras [60]. Learning rates were automatically decreased and optimization was automatically terminated via Keras' callbacks, with patience parameters as given in Table 1. In some experiments we used RMSD instead of the L2 norm given in Eq. 7 to simplify changing the size of the spatial domain without altering the magnitude of the loss, and found this gave similar results to the L2 norm, as expected.
D.2 Heuristics and Hyperparameters
Calculating gradients with these models requires backpropagation through the entire time series X, introducing significant memory overhead. For this reason, and because in most cases we are optimizing for a single X₀, we set the batch size to 1, except in Sect. 3.2.1 where we used 2. For experiments where dissipation losses were included, λ_σ was set to 1.0, except in 3.2.1 where it was 0.04. During optimization the reaction rate constants and feed concentrations were constrained to be non-negative, and the flow rate was constrained to between .01 and 1.0. Diffusion coefficients were constrained to within their initialization range (A.3).
D.3 Incremental Optimization
For the experiments shown in 3.3 we must optimize over timescales long enough for the full replication-like dynamics to occur. Optimization from a random initialization above ∼250 time points (equivalent to 250 seconds) was highly unstable, in large part due to an inability to balance vanishing and exploding gradients in the highly nonlinear CRN. Convergence was finally achieved via an "incremental" optimization approach, where we first optimize at shorter timescales and then use the parameters θ* from shorter timescales to initialize optimization at longer timescales. Additionally, at the longest timescales the variance target β* seemed to be at odds with the replication fidelity loss (Eq. 13), and decreasing this value gave better parent-daughter correlations at convergence. It is worth noting that the large number of iterations required resulted in long optimizations; the final result presented here required >600k weight updates in total, for a wall clock time of more than 26 days. The hyperparameters used for each optimization that produced Figure 5 are given in Table 2.

Optimization   T     β*    lr          lr decrease by 1/2 at
1              256   1.0   1 × 10⁻³    (1, 2, 3, 50, 100, 150) × 10²
2              320   1.0   5 × 10⁻⁶    (5, 10) × 10²
3              352   0.5   1 × 10⁻⁶    (20, 40) × 10²

Table 2: The incremental optimization stages that produced the results in Figure 5. The result from each optimization was used to initialize the next.
The ADAM optimizer β 1 was set to .95 for all optimizations.
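The incremental schedule in Table 2 amounts to a simple warm-starting loop. In the sketch below, `optimize` is a placeholder for the actual training stage and all names are illustrative:

```python
# Skeleton of the incremental optimization in D.3: each stage optimizes at a
# longer timescale T, warm-starting from the previous stage's parameters.
STAGES = [              # (T, beta_star, learning_rate), mirroring Table 2
    (256, 1.0, 1e-3),
    (320, 1.0, 5e-6),
    (352, 0.5, 1e-6),
]

def optimize(theta, T, beta_star, lr):
    # Placeholder: in the real system this runs gradient descent on the
    # replication loss over a T-step simulation and returns updated weights.
    return dict(theta, T=T, beta_star=beta_star, lr=lr)

theta = {"stage": "init"}
for T, beta_star, lr in STAGES:
    theta = optimize(theta, T, beta_star, lr)  # warm start from previous stage
```

The key design choice is that only the timescale and hyperparameters change between stages; the parameters themselves are carried forward, so each stage starts from dynamics that already nearly work at a slightly shorter horizon.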
D.4 Control Analyses
We performed additional analyses to test the necessity and sufficiency of the optimization of each component of the models. These computational "ablation experiments" correspond to several conditions:
• "Optimized" -All components optimized.
• "Random X 0 " -Optimized initial conditions X 0 replaced with normally distributed random noise with the same mean and variance as the optimized initial conditions (truncated to prevent negative concentrations).
• "Initial θ Ψ " -Optimized reaction network parameters and diffusion coefficients replaced with their values from the start of optimization (as described in D.1).
• "Random θ Ψ " -Optimized reaction network parameters and diffusion coefficients replaced with normally distributed random noise with the same mean and variance as the optimized parameters (truncated to prevent negative reaction rates or diffusion coefficients outside the numerically stable range).
The results of these experiments are shown in Figure 2c and Figure 3c, with additional results shown here and described below. These results confirm that all optimized components are required for persistent correlation with the target and for persistently high dissipation rates.
In Figure 7a we show the control experiments as in the main text but for the second example (from Figure 2b). We then repeat the same analyses for both examples but with the time step now two orders of magnitude shorter, to further reinforce that the stability and fidelity is not a result of the optimization exploiting numerical error (Figure 7b-c).
In Figure 8a we show the same result as the main text but for the second example (from Figure 3b). We also run both examples at a longer timescale to demonstrate that the optimized states are stably dissipative and produce entropy at consistently higher rates than those with non-optimized components (Figure 8b-c). Note that in this case the models with mean and standard deviation-matched random kinetic parameters ("Random θ_Ψ", shown in red) are highly unstable and display significant numerical integration error (as evidenced by their behavior changing with a smaller time step). This further reinforces both the necessity and effectiveness of the numerical error avoidance methods described in Section A.1.
Figure 1: An example of the complex dynamics possible in reaction-diffusion systems. Each panel shows the local concentration of 5 chemical species, from t = 0s (left) to t = 1000s (right). All concentration visualizations are PCA projected to 3 colors and in arbitrary units (Appendix A.2).
Figure 3: Maximizing the rate of entropy production from diffusion gives interesting filamentous structures (a-b) in the local concentrations of a 5-chemical species Dense RDN. We confirm the contribution of each optimized component in (c) by a comparison of the dissipation rate vs. time for the optimized example in a) (blue) to those with randomized initial conditions (orange), the parameters the optimization was initialized to (green), and models with randomized kinetics (red). Solid lines show means, bands show +/- one standard deviation over n = 32 randomizations.
Figure 5: An example of the optimization-derived structure and associated replication-like dynamics in a 5-chemical species Dense RDN. (a) shows the structure of the loss, which requires similarity between the region in orange at t = 0 and the two regions in red and green at t = T. (b) shows the resulting local concentration vs. time from t = 0s (left) to t = 352s (right). Compare the structure in the leftmost panel of b) to the two structures in the rightmost panel.
Figure 6: Visualization of the optimized reaction network graphs from dissipation maximization (a), corresponding to the results in Figure 3a, as well as the reaction network from replication-like dynamics (b), corresponding to the results in Figure 5. Darker, thicker arrows indicate larger reaction rate constants.
Figure 7: (a) Correlation with target vs. time for the example in Figure 2b. Both this panel and Figure 2c use one tenth of the 1s time step used during optimization. (b-c) Further decreasing the time step to .01s for the model from Figure 2b (shown in b) and for Figure 2a (shown in c) provides additional evidence the differences are not due to numerical error. Solid lines show means, bands show +/- one standard deviation over n = 32 randomizations.
Figure 8: (a) Dissipation rate vs. time for the example in Figure 3b. Both this panel and 3c use 1/100th the 1s time step used during optimization, to confirm the differences are not due to numerical error. (b-c) Increasing the timescale by more than a factor of 10 confirms that these are stable nonequilibrium states and that the increase in dissipation rate provided by the optimization is persistent for the models from both Figure 3b (shown in b) and Figure 3a (shown in c). Solid lines show means, bands show +/- one standard deviation over n = 32 randomizations. Note that with the .1s time step in b) and c) the models with random kinetics show high variability, often due to numerical error (broad red bands).
Figure   T     lr           lr decrease by 1/2 at    lr patience   es patience   ADAM β₁
2a       32    1 × 10⁻³     (5, 10, 15) × 10²        5 × 10²       1 × 10³       .95
2b       64    1 × 10⁻³     (5, 10, 15) × 10²        5 × 10²       1 × 10³       .95
3a       64    1 × 10⁻³     -                        2 × 10³       6 × 10³       .995
3b       64    1 × 10⁻³     -                        2 × 10³       6 × 10³       .995
4        128   2.5 × 10⁻⁴   (1, 2) × 10⁴             2 × 10³       6 × 10³       .995

Table 1: Hyperparameters for presented results. lr: learning rate; lr decrease: iterations at which learning rate was decreased on a fixed schedule; lr patience: patience parameter for automatic learning rate decreases; es patience: patience parameter for automatic early stopping. lr decrease and patience columns are in units of iterations ('epoch' has no meaning in this context).
References

[1] John E Pearson. "Complex patterns in a simple system". In: Science 261.5118 (1993), pp. 189-192.
[2] Shigeru Kondo and Takashi Miura. "Reaction-diffusion model as a framework for understanding biological pattern formation". In: Science 329.5999 (2010), pp. 1616-1620.
[3] Amit N Landge et al. "Pattern formation mechanisms of self-organizing reaction-diffusion systems". In: Developmental Biology 460.1 (2020), pp. 2-11.
[4] Ilya Prigogine and René Lefever. "Symmetry breaking instabilities in dissipative systems. II". In: The Journal of Chemical Physics 48.4 (1968), pp. 1695-1700.
[5] Ilya Prigogine and Gregoire Nicolis. "Biological order, structure and instabilities". In: Quarterly Reviews of Biophysics 4.2-3 (1971), pp. 107-148.
[6] Péter Szabó et al. "In silico simulations reveal that replicators with limited dispersal evolve towards higher efficiency and fidelity". In: Nature 420.6913 (2002), pp. 340-343.
[7] Paul Adamski et al. "From self-replication to replicator systems en route to de novo life". In: Nature Reviews Chemistry 4.8 (2020), pp. 386-403.
[8] Kyoung J Lee et al. "Pattern formation by interacting chemical fronts". In: Science 261.5118 (1993), pp. 192-194.
[9] Gavin R Armstrong et al. "Modelling wave propagation across a series of gaps". In: Physical Chemistry Chemical Physics 6.19 (2004), pp. 4677-4681.
[10] Gerhard Ertl. "Oscillatory kinetics and spatio-temporal self-organization in reactions at solid surfaces". In: Science 254.5039 (1991), pp. 1750-1755.
[11] Chad T Hamik and Oliver Steinbock. "Excitation waves in reaction-diffusion media with non-monotonic dispersion relations". In: New Journal of Physics 5.1 (2003), p. 58.
[12] M Or-Guil et al. "Spot bifurcations in three-component reaction-diffusion systems: The onset of propagation". In: Physical Review E 57.6 (1998), p. 6432.
[13] AH Khater et al. "The tanh method, a simple transformation and exact analytical solutions for nonlinear reaction-diffusion equations". In: Chaos, Solitons & Fractals 14.3 (2002), pp. 513-522.
[14] Stephen Smith and Neil Dalchau. "Beyond activator-inhibitor networks: the generalised Turing mechanism". In: arXiv preprint arXiv:1803.07886 (2018).
[15] Shigeru Kondo. "An updated kernel-based Turing model for studying the mechanisms of biological pattern formation". In: Journal of Theoretical Biology 414 (2017), pp. 120-127.
[16] Martin Feinberg. Foundations of Chemical Reaction Network Theory. Vol. 202. Springer, 2019.
[17] Vitaly Volpert and Sergei Petrovskii. "Reaction-diffusion waves in biology". In: Physics of Life Reviews 6.4 (2009), pp. 267-310.
[18] Reuben H Simoyi, Alan Wolf, and Harry L Swinney. "One-dimensional dynamics in a multicomponent chemical reaction". In: Physical Review Letters 49.4 (1982), p. 245.
[19] Otto E Rössler. "Chemical turbulence: chaos in a simple reaction-diffusion system". In: Zeitschrift für Naturforschung A 31.10 (1976), pp. 1168-1172.
[20] Ly Phan and Cindy Grimm. "Sketching reaction-diffusion texture". In: Proceedings of the Third Eurographics Conference on Sketch-Based Interfaces and Modeling. 2006, pp. 107-114.
[21] Ezio Bartocci et al. "A formal methods approach to pattern recognition and synthesis in reaction diffusion networks". In: IEEE Transactions on Control of Network Systems 5.1 (2016), pp. 308-320.
[22] Alexander Mordvintsev, Ettore Randazzo, and Eyvind Niklasson. "Differentiable Programming of Reaction-Diffusion Patterns". In: arXiv preprint arXiv:2107.06862 (2021).
[23] Alexander Mordvintsev, Eyvind Niklasson, and Ettore Randazzo. "Texture Generation with Neural Cellular Automata". In: arXiv preprint arXiv:2105.07299 (2021).
[24] Grigory Tikhomirov, Philip Petersen, and Lulu Qian. "Fractal assembly of micrometre-scale DNA origami arrays with arbitrary patterns". In: Nature 552.7683 (2017), pp. 67-71.
[25] Natalie S Scholes and Mark Isalan. "A three-step framework for programming pattern formation". In: Current Opinion in Chemical Biology 40 (2017), pp. 1-7.
[26] Dominic Scalise and Rebecca Schulman. "Designing modular reaction-diffusion programs for complex pattern formation". In: Technology 2.01 (2014), pp. 55-66.
[27] Dominic Scalise and Rebecca Schulman. "Emulating cellular automata in chemical reaction-diffusion networks". In: Natural Computing 15.2 (2016), pp. 197-214.
[28] Niall Murphy et al. "Synthesizing and tuning stochastic chemical reaction networks with specified behaviours". In: Journal of The Royal Society Interface 15.145 (2018), p. 20180283.
[29] Leon O Chua et al. "Autonomous cellular neural networks: a unified paradigm for pattern formation and active wave propagation". In: IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 42.10 (1995), pp. 559-577.
[30] Angran Li et al. "Reaction diffusion system prediction based on convolutional neural network". In: Scientific Reports 10.1 (2020), pp. 1-9.
[31] Jaideep Pathak et al. "Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach". In: Physical Review Letters 120.2 (2018), p. 024102.
[32] Jaideep Pathak et al. "Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data". In: Chaos: An Interdisciplinary Journal of Nonlinear Science 27.12 (2017), p. 121102.
[33] Behzad Zakeri et al. "Weakly supervised learning technique for solving partial differential equations; case study of 1-d reaction-diffusion equation". In: International Congress on High-Performance Computing and Big Data Analysis. Springer, 2019, pp. 367-377.
[34] Kyongmin Yeo and Igor Melnyk. "Deep learning algorithm for data-driven simulation of noisy dynamical system". In: Journal of Computational Physics 376 (2019), pp. 1212-1231.
[35] Yann N Dauphin et al. "Identifying and attacking the saddle point problem in high-dimensional non-convex optimization". In: Advances in Neural Information Processing Systems 27 (2014).
[36] István Scheuring et al. "Spatial models of prebiotic evolution: soup before pizza?" In: Origins of Life and Evolution of the Biosphere 33.4 (2003), pp. 319-355.
[37] Dilip Kondepudi and Ilya Prigogine. Modern Thermodynamics: From Heat Engines to Dissipative Structures. John Wiley & Sons, 2014.
[38] Alec Radford, Luke Metz, and Soumith Chintala. "Unsupervised representation learning with deep convolutional generative adversarial networks". In: arXiv preprint arXiv:1511.06434 (2015).
[39] Vladimir K Vanag and Irving R Epstein. "Pattern formation mechanisms in reaction-diffusion systems". In: International Journal of Developmental Biology 53.5-6 (2009), pp. 673-681.
[40] AM Turing. "The Chemical Basis of Morphogenesis". In: Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 237.641 (1952), pp. 37-72.
[41] Ilya Prigogine. "Time, structure, and fluctuations". In: Science 201.4358 (1978), pp. 777-785.
[42] Hitoshi Mahara, Tomohiko Yamaguchi, and Masatsugu Shimomura. "Entropy production in a two-dimensional reversible Gray-Scott system". In: Chaos: An Interdisciplinary Journal of Nonlinear Science 15.4 (2005), p. 047508.
[43] Henry C Tuckwell. "Solitons in a reaction-diffusion system". In: Science 205.4405 (1979), pp. 493-495.
[44] Mathias Bode and H-G Purwins. "Pattern formation in reaction-diffusion systems: dissipative solitons in physical systems". In: Physica D: Nonlinear Phenomena 86.1-2 (1995), pp. 53-63.
[45] Kyoung-Jin Lee et al. "Experimental observation of self-replicating spots in a reaction-diffusion system". In: Nature 369.6477 (1994), pp. 215-218.
[46] Nathaniel D Virgo. "Thermodynamics and the structure of living systems". PhD thesis. University of Sussex, Brighton, 2011.
[47] Irving R Epstein and Bing Xu. "Reaction-diffusion processes at the nano- and microscales". In: Nature Nanotechnology 11.4 (2016), pp. 312-319.
[48] Marcin Fialkowski et al. "Principles and implementations of dissipative (dynamic) self-assembly". 2006.
[49] Gabriele M Coli et al. "Inverse design of soft materials via a deep learning-based evolutionary strategy". In: Science Advances 8.3 (2022), eabj6731.
[50] Nikolay Perunov, Robert A Marsland, and Jeremy L England. "Statistical physics of adaptation". In: Physical Review X 6.2 (2016), p. 021036.
[51] Jeremy L England. "Dissipative adaptation in driven self-assembly". In: Nature Nanotechnology 10.11 (2015), pp. 919-923.
[52] Philip W Anderson. "More is different: broken symmetry and the nature of the hierarchical structure of science". In: Science 177.4047 (1972), pp. 393-396.
[53] A. M. Turing. "Computing Machinery and Intelligence". In: Mind LIX.236 (1950), pp. 433-460.
The numerical treatment of differential equations. Lothar Collatz, SpringerBerlinLothar Collatz. "The numerical treatment of differential equations". In: Berlin: Springer (1966).
Batch normalization: Accelerating deep network training by reducing internal covariate shift. Sergey Ioffe, Christian Szegedy, PMLR. 2015International conference on machine learning. Sergey Ioffe and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift". In: International conference on machine learning. PMLR. 2015, pp. 448-456.
Autocatalytic reactions in the isothermal, continuous stirred tank reactor: isolas and other forms of multistability. P Gray, Scott, In: Chemical Engineering Science. 381P Gray and SK Scott. "Autocatalytic reactions in the isothermal, continuous stirred tank reactor: isolas and other forms of multistability". In: Chemical Engineering Sci- ence 38.1 (1983), pp. 29-43.
A tutorial on Chemical Reaction Networks dynamics. David Angeli, IEEEEuropean Control Conference (ECC)David Angeli. "A tutorial on Chemical Reaction Networks dynamics". In: 2009 Euro- pean Control Conference (ECC). IEEE. 2009, pp. 649-657.
Calculation of the entropy balance equation in a non-equilibrium reaction-diffusion system. Hitoshi Mahara, Tomohiko Yamaguchi, In: Entropy 12. 12Hitoshi Mahara and Tomohiko Yamaguchi. "Calculation of the entropy balance equa- tion in a non-equilibrium reaction-diffusion system". In: Entropy 12.12 (2010), pp. 2436- 2449.
Adam: A Method for Stochastic Optimization. P Diederik, Jimmy Kingma, Ba, ICLR (Poster). Diederik P Kingma and Jimmy Ba. "Adam: A Method for Stochastic Optimization". In: ICLR (Poster). 2015.
Additional detail can be found in the source code. François Chollet, François Chollet et al. Keras. https://keras.io. 2015. the meth- ods. Additional detail can be found in the source code, available at https://github.com/ hunterelliott/dense-rdn.
| zyda_arxiv-1576000 |
THE WEIGHTED COMPOSITION OPERATORS ON THE LARGE WEIGHTED BERGMAN SPACES
10 May 2018
Inyoung Park
arXiv:1708.05934v2 [math.FA]
In this paper, we characterize bounded, compact, or Schatten class weighted composition operators acting on Bergman spaces with exponential type weights. Moreover, we give the proof of the necessary part for the boundedness of C φ on large weighted Bergman spaces given in [8].
1. INTRODUCTION
Let φ be an analytic self-map of the open unit disk D in the complex plane and u be an analytic function on D. The weighted composition operator with respect to u is defined by (uC φ )f (z) := u(z)f (φ(z)) where f belongs to the holomorphic function space H(D). There was much effort to characterize those analytic maps φ which induce bounded or compact weighted composition operators on the different holomorphic function spaces (see for example, [3,4,6,10,13,15]). When u(z) ≡ 1, it is well known that every composition operator is bounded on the standard weighted Bergman spaces by the Littlewood subordination principle.
On the other hand, in [8], Kriete and MacCluer showed that if there is a boundary point ζ such that the modulus of the angular derivative |φ ′ (ζ)| is less than 1, then the map φ induces an unbounded operator C φ on the Bergman space with fast weights. Moreover, by giving an explicit example map, they showed that the boundedness of C φ cannot be decided in general when there is a boundary point with |φ ′ (ζ)| = 1.
In this paper, we study the boundedness, the compactness, and the Schatten class membership of weighted composition operators uC φ on the Bergman space with rapidly decreasing weights. For an integrable radial function ω, let L p (ωdA) be the space of all measurable functions f on D such that
f p p,ω := D |f (z)| p ω(z)dA(z) < ∞, 0 < p < ∞,
where dA(z) is the normalized area measure on D. For a given 0 < p < ∞, the weighted Bergman space A p (ω) consists of f in the class of H(D) ∩ L p (ωdA). Throughout this paper, we consider the radial weights of the form ω(z) = e −ϕ(z) such that ϕ ∈ C 2 (D) and ∆ϕ(z) ≥ C > 0 for some constant C. When ϕ(z) = α log(1 − |z|) where α > −1, it represents the standard weighted Bergman spaces A p α (D). In this paper, we assume that ϕ satisfies the following conditions (∆ϕ(z)) − 1 2 =: τ (z) ց 0, and τ ′ (z) → 0 when |z| → 1 − so, we easily check that we exclude the standard weights since τ (r) = 1 − r when ϕ(r) = log(1 − r). In fact, these weights described above decrease faster than the standard weight (1 − |z|) α , α > 0. We can refer to Lemma 2.3 of [12] for the proof. Furthermore, we assume that τ (z)(1− |z|) −C increases for some C > 0 or lim |z|→1 − τ ′ (z) log 1 τ (z) = 0. Now, we say that the weight ω is in the class W if ϕ satisfies all conditions above. If ω belongs to the class W then the associated function τ (z) has the following properties (L):
(i) There exists a constant C 1 > 0 such that
τ (z) ≤ C 1 (1 − |z|), for z ∈ D.
(ii) There exists a constant C 2 > 0 such that
|τ (z) − τ (ξ)| ≤ C 2 |z − ξ|, for z, ξ ∈ D.
A typical weight in the class W is of the form
ω(r) = (1 − r) β exp −c (1 − r) α , β ≥ 0, α, c > 0,
and its associated function τ (z) = (1 − |z|) 1+ α 2 . We can find more examples of our weight functions and their τ functions in [12]. Additionally, we assume that fast weights ω ∈ W have the regularity imposing the rate of decreasing condition in [8],
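As a quick consistency check (our computation, not in the original text), the stated τ can be recovered from the exponential factor of this weight: writing ϕ radially and taking β = 0,

```latex
\varphi(r) = c(1-r)^{-\alpha}
\ \Longrightarrow\
\varphi''(r) = c\,\alpha(\alpha+1)(1-r)^{-\alpha-2},
\qquad
\Delta\varphi(r) = \varphi''(r) + \tfrac{1}{r}\,\varphi'(r)
\approx (1-r)^{-\alpha-2}
\quad (r \to 1^-),
```

so τ(r) = (∆ϕ(r))^{−1/2} ≈ (1 − r)^{1+α/2}, up to a constant depending on c and α.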
lim t→0 ω(1 − δt) ω(1 − t) = 0, 0 < δ < 1. (1.1)
In [8], Kriete and Maccluer gave the following sufficient condition for the boundedness of C φ and the equivalent conditions for the compactness of C φ on A 2 (ω).
Theorem 1.1. Let ω be a regular weight in the class W. If
lim sup_{|z|→1} ω(z)/ω(φ(z)) < ∞, (1.2)
then C φ is bounded on A 2 (ω). Moreover, the following conditions are equivalent:
(1) C φ is compact on A 2 (ω). (2) lim |z|→1 ω(z) ω(φ(z)) = 0. (3) d φ (ζ) > 1, ∀ζ ∈ ∂D where d φ (ζ) = lim inf z→ζ 1−|φ(z)| 1−|z| .
In the same paper, they show that (1.2) is even the equivalent condition for the boundedness of C φ on A 2 (ω) when ω(z) = e − 1 (1−|z|) α , α > 0. Therefore, we can expect that the condition (1.2) is still the equivalent condition for the boundedness of C φ on the Bergman spaces with weights ω in the class W. In fact, there is almost the converse of Theorem 1.1 in [5, Corollary 5.10] as follows:
Theorem 1.2. If lim inf r→1 ω(r) ω(M φ (r)) = ∞ then C φ is unbounded on A 2 (ω). Here, M φ (r) = sup θ |φ(re iθ )|.
In Section 3, we show that the condition (1.2) is a necessary condition of the boundedness of C φ on A 2 (ω), thus we have the following result for general p with the help of the Carleson measure theorem.
Theorem 1.3. Let ω be a regular weight in the class W and φ be an analytic self-map of D. Then the composition operator C φ is bounded on A p (ω), 0 < p < ∞, if and only if lim sup_{|z|→1^−} ω(z)/ω(φ(z)) < ∞.
In Section 4, we study the weighted composition operator uC φ on A 2 (ω). In [6], one can find characterizations of the boundedness, compactness, and membership in the Schatten classes of uC φ defined on the standard weighted Bergman spaces in terms of the generalized Berezin transform. Using the same technique, we extend the results of [6] to the case of large weighted Bergman spaces A 2 (ω). These results are given in Theorem 4.3 and Theorem 4.5 of Section 4. Finally, we study the membership of uC φ in the Schatten classes S p , p > 0, in Section 5.
Constants.
In the rest of this paper, we use the notation X Y or Y X for nonnegative quantities X and Y to mean X ≤ CY for some inessential constant C > 0. Similarly, we use the notation X ≈ Y if both X Y and Y X hold.
2. SOME PRELIMINARIES
In this section, we recall some well-known notions and collect related facts to be used in our proofs in later sections.
2.1. Carleson type measures. Let τ be the positive function on D satisfying the properties (L) introduced in Section 1. We denote by D(δτ (z)) the Euclidean disk centered at z with radius δτ (z). Let
m τ := min(1, C −1 1 , C −1 2 ) 4 , (2.1)
where C 1 , C 2 are the constants in properties (i), (ii) of (L). In [12], it is shown that for 0 < δ < m τ and a ∈ D,
(1/2) τ (a) ≤ τ (z) ≤ 2 τ (a), for z ∈ D(δτ (a)). (2.2)
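The estimate (2.2) follows in one line from the Lipschitz property (ii) of (L) together with the choice (2.1) of m τ:

```latex
z \in D(\delta\tau(a)),\ \ \delta < m_\tau \le \tfrac{1}{4}\,C_2^{-1}
\ \Longrightarrow\
|\tau(z)-\tau(a)| \le C_2\,|z-a| < C_2\,\delta\,\tau(a) \le \tfrac{1}{4}\,\tau(a),
```

which gives (3/4)τ(a) < τ(z) < (5/4)τ(a), and in particular (1/2)τ(a) ≤ τ(z) ≤ 2τ(a).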
The following lemma is the generalized sub-mean value theorem for |f | p ω.
|f (z)| p ω(z) β ≤ M δ 2 τ (z) 2 D(δτ (z)) |f | p ω β dA
for a sufficiently small δ > 0, and f ∈ H(D).
A positive Borel measure µ in D is called a (vanishing) Carleson measure for
A p (ω) if the embedding A p (ω) ⊂ L p (ωdµ) is (compact) continuous where L p (ωdµ) := f ∈ M(D) D |f (z)| p ω(z)dµ(z) < ∞ ,(2.3)
and M(D) is the set of µ-measurable functions on D. Now, we introduce the Carleson measure theorem on A p (ω) given by [11].
Theorem 2.2 (Carleson measure theorem). Let ω ∈ W and µ be a positive Borel measure on D.
Then for 0 < p < ∞, we have:
(1) The embedding I : A p (ω) → L p (ωdµ) is bounded if and only if, for a sufficiently small δ ∈ (0, m τ ), sup_{z∈D} µ(D(δτ(z)))/τ(z)² < ∞.
(2) The embedding I : A p (ω) → L p (ωdµ) is compact if and only if, for a sufficiently small δ ∈ (0, m τ ), lim_{|z|→1} µ(D(δτ(z)))/τ(z)² = 0.
From the above theorem, we note a Carleson measure is independent of p so that if µ is a Carleson measure on L 2 (ωdµ) then µ is a Carleson measure on L p (ωdµ) for all p.
2.2. The Reproducing Kernel. Let the system of functions {e k (z)} ∞ k=0 be an orthonormal basis of A 2 (ω). It is well known that the reproducing kernel for the Bergman space A 2 (ω) is given by
K(z, ξ) = K z (ξ) = ∞ k=0 e k (z)e k (ξ).
Unlike the standard weighted Bergman spaces, the explicit form of K(z, ξ) of A 2 (ω) has been unknown. However, we have the precise estimate near the diagonal given by Lemma 3.6 in [9],
|K(z, ξ)| 2 ≈ K(z, z)K(ξ, ξ) ≈ ω(z) −1 ω(ξ) −1 τ (z) 2 τ (ξ) 2 , ξ ∈ D(δτ (z)),(2.4)
where δ ∈ (0, m τ /2). Recently, in [1], they introduced the upper estimate for the reproducing kernel as follows: for z, ξ ∈ D there exist constants C, σ > 0 such that
|K(z, ξ)|ω(z) 1/2 ω(ξ) 1/2 ≤ C 1 τ (z)τ (ξ) exp −σ inf γ 1 0 |γ ′ (t)| τ (γ(t)) dt , (2.5)
where γ is a piecewise C 1 curves γ : [0, 1] → D with γ(0) = z and γ(1) = ξ. In the same paper, they remarked that the distance β ϕ denoted by
β ϕ (z, ξ) = inf γ 1 0 |γ ′ (t)| τ (γ(t)) dt
is a complete distance because of the property (i) of (L). By the completeness of β ϕ and the kernel estimate (2.5), we obtain the following lemma.
Lemma 2.3. Let ω ∈ W. Then the normalized kernel function k z (ξ) converges to 0 uniformly on every compact subset of D as |z| → 1^−.
Proof. Given a compact subset K of D and z ∈ D, (2.4) and (2.5) yield
|k z (ξ)| = |K z (ξ)| / ‖K z‖_{2,ω} ≲ ω(ξ)^{−1/2} τ(ξ)^{−1} e^{−σβ_ϕ(z,ξ)} ≤ C_K e^{−σβ_ϕ(z,ξ)}, ∀ξ ∈ K.
By the completeness of β ϕ , we conclude that k z (ξ) converges to 0 uniformly on compact subsets of D when z approaches to the boundary.
2.3. The Julia-Caratheodory Theorem. For a boundary point ζ and α > 1, we define the nontangential approach region at ζ by
Γ(ζ, α) = {z ∈ D : |z − ζ| < α(1 − |z|)}.
A function f is said to have a nontangential limit at ζ if
∠ lim z→ζ z∈Γ(ζ,α) f (z) < ∞, for each α > 1.
Definition 2.4. We say φ has a finite angular derivative at a boundary point ζ if there is a point η on the circle such that
φ ′ (ζ) := ∠ lim z→ζ z∈Γ(ζ,α) φ(z) − η z − ζ < ∞, for each α > 1.
Theorem 2.5 (Julia-Caratheodory Theorem). For φ : D → D analytic and ζ ∈ ∂D, the following is equivalent:
(1) d φ (ζ) = lim inf_{z→ζ} (1 − |φ(z)|)/(1 − |z|) < ∞.
(2) φ has a finite angular derivative φ ′ (ζ) at ζ.
(3) Both φ and φ ′ have (finite) nontangential limits at ζ, with |η| = 1 for η = lim_{r→1} φ(rζ).
Moreover, when these conditions hold, we have φ ′ (ζ) = d φ (ζ) ζ̄ η and d φ (ζ) = ∠lim_{z→ζ} (1 − |φ(z)|)/(1 − |z|).
In addition to the Julia-Caratheodory theorem, we use the Julia's lemma which gives a useful geometric result. For k > 0 and ζ ∈ ∂D, let
E(ζ, k) = {z ∈ D : |ζ − z|² ≤ k(1 − |z|²)}. A computation shows that E(ζ, k) = {z ∈ D : |z − ζ/(k+1)| ≤ k/(1+k)}, a closed Euclidean disk.
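The computation behind this disk description is elementary: expanding |ζ − z|² ≤ k(1 − |z|²) and completing the square,

```latex
(1+k)|z|^2 - 2\operatorname{Re}(z\bar\zeta) + 1 - k \le 0
\iff
\Bigl| z - \tfrac{\zeta}{1+k} \Bigr|^2
\le \frac{1}{(1+k)^2} - \frac{1-k}{1+k}
= \frac{k^2}{(1+k)^2},
```

so E(ζ, k) is the closed disk centered at ζ/(1+k) with radius k/(1+k); for k = (1−r)/(1+r) this is the disk centered at ((1+r)/2)ζ with radius (1−r)/2 used in Section 3.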
Theorem 2.6 (Julia's Lemma). Let ζ be a boundary point and φ : D → D be analytic. If d φ (ζ) < ∞ then |φ(ζ)| = 1 where lim n→∞ φ(a n ) =: φ(ζ) and {a n } is a sequence along which this lower limit is achieved.
Moreover, φ(E(ζ, k)) ⊆ E(φ(ζ), k d φ (ζ)) for every k > 0, that is,
|φ(ζ) − φ(z)|² / (1 − |φ(z)|²) ≤ d φ (ζ) |ζ − z|² / (1 − |z|²) for all z ∈ D. (2.6)
Note that (2.6) shows that d φ (ζ) = 0 if and only if φ is a unimodular constant. Moreover, when d φ (ζ) ≤ 1, φ(E(ζ, k)) ⊆ E(φ(ζ), k); thus the set E(ζ, k) contains its image under φ when φ(ζ) = ζ.
2.4. Schatten Class. For a positive compact operator T on a separable Hilbert space H, there exist orthonormal sets {e k } in H such that
T x = Σ_k λ k ⟨x, e k⟩ e k , x ∈ H,
where the points {λ k } are nonnegative eigenvalues of T . This is referred to as the canonical form of a positive compact operator T . For 0 < p < ∞, a compact operator T belongs to the Schatten class S p on H if the sequence {λ k } belongs to the sequence space l p ,
T p Sp = k |λ k | p < ∞.
When 1 ≤ p < ∞, S p is the Banach space with the above norm and S p is a metric space when 0 < p < 1. In general, if T is a compact linear operator on H, we say that T ∈ S p if (T * T ) p/2 ∈ S 1 , 0 < p < ∞. Moreover,
(T * T ) p/2 ∈ S 1 ⇐⇒ T * T ∈ S p/2 . (2.7)
In particular, when T ∈ S 2 , we say that T is a Hilbert-Schmidt integral operator. It is well known that every Hilbert-Schmidt operator on L 2 (X, µ) is an integral operator induced by a function K ∈ L 2 (X × X, µ × µ) such that
T f (z) = X K(x, y)f (y) dµ(y), ∀f ∈ L 2 (X, dµ).
The converse is also true. To study all of the basic properties of Schatten class operators above, we can refer to §1.4 and §3.1 in [16].
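For completeness (a standard fact, not stated explicitly in the text): the S₂-norm of such a Hilbert-Schmidt integral operator is exactly the L²-norm of its kernel,

```latex
\|T\|_{S_2}^2 \;=\; \int_X \int_X |K(x,y)|^2 \, d\mu(x)\, d\mu(y),
```

which is the identity implicitly used when testing membership in S₂ in Section 5.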
3. COMPOSITION OPERATORS C φ ON A p (ω)
In [8], they also gave the necessary condition for the boundedness of C φ on large Bergman spaces in terms of the angular derivative,
d φ (ζ) = lim inf z→ζ 1 − |φ(z)| 1 − |z| ≥ 1, ∀ζ ∈ ∂D. (3.1)
The following Lemma enables us to consider only the boundary points d φ (ζ) = 1 in the proof of Theorem 3.2.
Lemma 3.1. Let ω be a regular fast weight. If d φ (ζ) > 1 for some ζ ∈ ∂D, then lim_{z→ζ} ω(z)/ω(φ(z)) = 0.
Proof. We first assume that lim inf_{z→ζ} (1 − |φ(z)|)/(1 − |z|) = d φ (ζ) = 1 + 2ǫ < ∞ for some ζ ∈ ∂D. Then for ǫ > 0, there exists an open disk D r (ζ) centered at ζ with radius r such that
(1 − |φ(z)|)/(1 − |z|) > d φ (ζ) − ǫ, ∀z ∈ D r (ζ) ∩ D.
Thus, using the relation
x A ≥ 1 − A(1 − x) when A > 1 and 0 < x < 1, we have |φ(z)| ≤ 1 − (d φ (ζ) − ǫ)(1 − |z|) < 1 − (1 + ǫ)(1 − |z|) ≤ |z| 1+ǫ ,(3.2)
for z ∈ D r (ζ) ∩ D. On the other hand, if we take a sufficiently small t such that (1 − t) 1 1+ǫ ≥ 1 − 1+ǫ 2 1+ǫ t, then from the assumption (1.1) we obtain the following result,
ω(z) ω(|φ(z)|) ≤ ω(z) ω(|z| 1+ǫ ) = ω((1 − t) 1 1+ǫ ) ω(1 − t) ≤ ω(1 − 1+ǫ 2 1+ǫ t) ω(1 − t) −→ 0,
since ω(|φ(z)|) ≥ ω(|z|^{1+ǫ}) by (3.2). When d φ (ζ) = lim inf_{z→ζ} (1 − |φ(z)|)/(1 − |z|) = ∞, the result is immediate: by definition there exists M > 1 such that
log(1/|φ(z)|) > M log(1/|z|)
for |z| near 1, that is, |φ(z)| < |z|^M near ζ. This completes the proof.
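The elementary inequality x^A ≥ 1 − A(1 − x) invoked in (3.2) is just the tangent-line bound for a convex function:

```latex
f(x) = x^A \ (A > 1) \ \text{is convex on } (0,1], \ \text{so}\quad
f(x) \;\ge\; f(1) + f'(1)(x-1) \;=\; 1 - A(1-x).
```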
We now prove our first theorem.
Theorem 3.2. Given a regular weight ω in W, let φ be an analytic self-map of D. If the composition operator C φ is bounded on A 2 (ω), then lim sup_{|z|→1} ω(z)/ω(φ(z)) < ∞.
Proof. Since the composition operator is bounded, we have d φ (ζ) ≥ 1 for all boundary points ζ by (3.1). By Lemma 3.1, it suffices to check only the boundary points ζ satisfying d φ (ζ) = 1. For such a point ζ, Theorem 2.6 yields the inclusion
φ(E(ζ, (1 − r)/(1 + r))) ⊆ E(φ(ζ), (1 − r)/(1 + r)), r ∈ (0, 1).
Here, the set E(ζ, 1−r 1+r ) introduced in Section 2.3 is the closed disk centered at 1+r 2 ζ with radius 1−r 2 so, rζ is the point on the boundary of E(ζ, 1−r 1+r ) closest to 0. Thus, we have
|((1 + r)/2) φ(ζ) − φ(rζ)| ≤ (1 − r)/2,
that is, |φ(rζ)| ≥ r for 0 < r < 1 since |φ(ζ)| = 1 by Theorem 2.5. Therefore, we have r 0 ∈ (0, 1) such that
τ (φ(|z|ζ)) ≤ τ (z) for r 0 ≤ |z| < 1,(3.3)
since τ (z) is a radial decreasing function when |z| → 1 − . Now, define the following radial function M φ (r) = sup η∈∂D |φ(rη)|, 0 < r < 1.
Then (3.3) follows that
τ (M φ (|z|)) ≤ τ (φ(|z|ζ)) ≤ τ (z), for r 0 < |z| < 1. (3.4)
Now, we assume that there is a sequence {z n } converging to ζ with ω(z n)/ω(φ(z n)) → ∞ as n → ∞. We can choose {ξ n } in D such that |ξ n | = |z n | and |φ(ξ n )| = M φ (|z n |). Using the relation C*_φ K z (ξ) = ⟨C*_φ K z , K ξ⟩ = ⟨K z , C φ K ξ⟩ = K φ(z) (ξ), we have
‖C*_φ k z‖²_{2,ω} = K φ(z) (φ(z)) / ‖K z‖²_{2,ω} ≈ (τ(z)²/τ(φ(z))²) (ω(z)/ω(φ(z))), ∀z ∈ D,
thus by (3.4), we obtain
‖C φ‖² ≳ ‖C*_φ k ξ_n‖²_{2,ω} ≈ (ω(ξ n)/ω(φ(ξ n))) (τ(ξ n)²/τ(φ(ξ n))²) = (ω(z n)/ω(M φ (|z n |))) (τ(z n)²/τ(M φ (|z n |))²) ≥ ω(z n)/ω(φ(z n)),
for large n > N. This contradicts the boundedness of C φ and completes the proof.
From Theorem 3.2 together with Theorem 1.1[KM], we conclude that (1.2) is the equivalent condition for the boundedness of C φ on A 2 (ω). Furthermore, by the change of variables in the measure theory in Theorem C in §39 of [7], we have
C φ f p p,ω = D |f • φ(z)| p ω(z) dA(z) = D |f (z)| p [ωdA] • φ −1 (z). Denote dµ φ ω (z) := ω(z) −1 [ωdA] • φ −1 (z).
Then C φ is (compact) bounded on A p (ω) for 0 < p < ∞ if and only if the measure µ φ ω is a (vanishing) Carleson measure. Therefore the Carleson measure theorem shows that the condition (1.2) is still valid for the boundedness for all ranges of p.
Theorem 3.3. Given a regular weight ω in W, let φ be an analytic self-map of D. Then the composition operator C φ is bounded on A p (ω), 0 < p < ∞, if and only if
lim sup_{|z|→1} ω(z)/ω(φ(z)) < ∞. (3.5)
4. WEIGHTED COMPOSITION OPERATORS uC φ ON A 2 (ω)
For a positive Borel measure µ, we introduce the Berezin transform µ on D given by
µ(z) = D |k z (ξ)| 2 ω(ξ)dµ(ξ), z ∈ D,
where k z is the normalized kernel of A 2 (ω). For δ ∈ (0, 1), the averaging function µ δ over the disks D(δτ (z)) is defined by
µ δ (z) = µ(D(δτ (z))) |D(δτ (z))| , z ∈ D.
In [2], we already calculated the following relation between the Berezin transform and the averaging function,
µ δ (z) ≤ C µ(z), z ∈ D. (4.1)
Moreover, we can see that (2),(3), and (4) in the following Proposition are equivalent in [2]. Before proving Proposition 4.2, we introduce the covering lemma which plays an essential role in the proof of many theorems including Carleson measure theorem.
Lemma 4.1. [11] Let τ be a positive function satisfying properties (L) and δ ∈ (0, m τ ). Then there exists a sequence of points {a k } ⊂ D such that (1) a j / ∈ D(δτ (a k )), for j = k.
(2) k D(δτ (a k )) = D. (3) D(δτ (a k )) ⊂ D(3δτ (a k )), where D(δτ (a k )) = z∈D(δτ (a k )) D(δτ (z)) (4) {D(3δτ (a k ))} is a covering of D of finite multiplicity N .
Here, we call the sequence {a k } ∞ k=1 a δ-sequence. Now, we characterize Carleson measures on A 2 (ω) in terms of the averaging function and the Berezin transform.
Proposition 4.2. Let ω ∈ W and δ ∈ (0, m τ /2). For µ ≥ 0, the following conditions are equivalent:
(1) µ is a Carleson measure on A 2 (ω).
(2) µ is bounded on D.
(3) µ δ is bounded on D.
(4) The sequence { µ δ (a k )} is bounded for every δsequence {a k }.
Proof. By Lemma 2.1 and (2.2), for each k,
sup_{ξ∈D(δτ(a k))} |f(ξ)|² ω(ξ) ≲ sup_{ξ∈D(δτ(a k))} (1/τ(ξ)²) ∫_{D(δτ(ξ))} |f|² ω dA ≲ (1/τ(a k)²) ∫_{D(3δτ(a k))} |f|² ω dA.
Therefore, by Lemma 4.1 and the inequality above, we have the desired result,
D |f (ξ)| 2 ω(ξ)dµ(ξ) ≤ ∞ k=1 D(δτ (a k )) |f (ξ)| 2 ω(ξ)dµ(ξ) ≤ ∞ k=1 µ(D(δτ (a k ))) sup ξ∈D(δτ (a k )) |f (ξ)| 2 ω(ξ) ∞ k=1 µ(D(δτ (a k ))) |D(δτ (a k ))| D(3δτ (a k )) |f | 2 ωdA sup a∈D µ δ (a)N f 2 2,ω . (4.2)
For an analytic self-map φ of D and a function u ∈ L 1 (D), we define the φ-Berezin transform of u by
B φ u(z) = ∫_D |k z (φ(ξ))|² u(ξ) ω(ξ) dA(ξ).
Theorem 4.3. Let ω ∈ W and u be an analytic function on D. Then the weighted composition operator uC φ is bounded on A 2 (ω) if and only if B φ (|u|²) ∈ L^∞(D). Moreover, ‖uC φ‖ ≈ sup_{z∈D} B φ (|u|²)(z).
Proof. Given an analytic function u, we let dµ_{|u|²}(z) = |u(z)|² ω(z) dA(z). Now, we define the positive measure
dµ^φ_{ω,u}(z) := ω(z)^{−1} dµ_{|u|²} ∘ φ^{−1}(z). (4.3)
By the change of variables in the measure theory, we have
D |u(z)(f • φ)(z)| 2 ω(z) dA(z) = D |(f • φ)(z)| 2 dµ |u| 2 (z) = D |f (z)| 2 ω(z) dµ φ ω,u (z)
Thus, the fact that the measure dµ φ ω,u is a Carleson measure is equivalent to that the Berezin transform µ φ ω,u is bounded on D by Proposition 4.2. On the other hand,
µ φ ω,u (z) = D |k z (ξ)| 2 ω(ξ) dµ φ ω,u (ξ) = D |k z (φ(ξ))| 2 |u(ξ)| 2 ω(ξ) dA(ξ) = B φ (|u| 2 )(z),(4.4)
for all z ∈ D. Thus, the proof is complete. Furthermore, from (4.2) and (4.1), we have
uC φ sup z∈D B φ (|u| 2 )(z). Finally, we obtain uC φ ≈ sup z∈D B φ (|u| 2 )(z) since B φ (|u| 2 )(z) ≤ uC φ for all z ∈ D.
As an immediate consequence of Theorem 4.3 we obtain more useful necessary condition for the boundedness of uC φ .
Corollary 4.4. Let ω ∈ W and u be an analytic function on D. If uC φ is bounded on
A 2 (ω), then sup z∈D τ (z) τ (φ(z)) ω(z) 1/2 ω(φ(z)) 1/2 |u(z)| < ∞. (4.5)
Proof. For any z ∈ D, Theorem 4.3 follows that
∞ > B φ (|u|²)(φ(z)) = ∫_D |k φ(z) (φ(ξ))|² |u(ξ)|² ω(ξ) dA(ξ) ≥ ∫_{D(δτ(z))} |k φ(z) (φ(ξ))|² |u(ξ)|² ω(ξ) dA(ξ) ≳ τ(z)² |k φ(z) (φ(z))|² |u(z)|² ω(z) ≳ (τ(z)²/τ(φ(z))²) (ω(z)/ω(φ(z))) |u(z)|².
Before characterizing the compactness of uC φ , we recall the definition of the essential norm T e for a bounded operator T on a Banach space X as follows:
T e = inf{ T − K : K is any compact operator on X}.
For a bounded operator uC φ on A 2 (ω), we use the following formula introduced in [14] to estimate the essential norm of uC φ ,
uC ϕ e = lim n→∞ uC ϕ R n ,(4.6)
where R n is the orthogonal projection of A 2 (ω) onto z n A 2 (ω) defined by
R n f (z) = ∞ k=n a k z k for f (z) = ∞ k=0 a k z k .
The formula (4.6) can be obtained by the similar proof of Proposition 5.1 in [14].
In fact, we can find the proof of (4.6) adjusting to large weighted Bergman spaces in [8, p.775]. Now, we prove the following estimate for the essential norm of a bounded weighted composition operator uC φ on A 2 (ω).
Theorem 4.5. Let ω ∈ W and u be an analytic function on D. If uC φ is bounded on A 2 (ω), then there is an absolute constant C ≥ 1 such that
lim sup |z|→1 B φ (|u| 2 )(z) ≤ uC φ e ≤ C lim sup |z|→1 B φ (|u| 2 )(z).
Proof. For f 2,ω ≤ 1 and a fixed 0 < r < 1, we have
(uC φ R n )f 2 2,ω = D |R n f (ξ)| 2 ω(ξ) dµ φ ω,u (ξ) = D\rD |R n f (ξ)| 2 ω(ξ) dµ φ ω,u (ξ) + rD |R n f (ξ)| 2 ω(ξ) dµ φ ω,u (ξ),
where rD = {z ∈ D : |z| ≤ r}. By the orthogonality of R n , we have |R n f(ξ)| ≤ ‖f‖_{2,ω} ‖R n K ξ‖_{2,ω}, and the following series is uniformly bounded on |ξ| ≤ r < 1:
‖R n K ξ‖²_{2,ω} ≲ Σ_{k=n}^∞ (1/p k) r^{2k} < ∞, where p k = ∫_0^1 s^{2k+1} ω(s) ds.
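The displayed bound is the standard monomial-basis computation for radial weights: since ‖z^k‖²_{2,ω} = 2∫₀¹ s^{2k+1} ω(s) ds = 2p_k (with dA the normalized area measure), one has, with the factor 2 absorbed into ≲,

```latex
K_\xi(z) = \sum_{k\ge 0} \frac{\bar\xi^{\,k}\, z^k}{2p_k},
\qquad
\|R_n K_\xi\|_{2,\omega}^2 = \sum_{k\ge n} \frac{|\xi|^{2k}}{2p_k}
\ \le\ \sum_{k\ge n} \frac{r^{2k}}{2p_k}
\quad (|\xi| \le r).
```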
Thus, ‖R n K ξ‖ → 0 as n → ∞, so that |R n f(ξ)| also converges to 0 uniformly on rD as n → ∞. Therefore the second integral vanishes as n → ∞ since µ φ ω,u is a Carleson measure. For the first integral, we denote µ φ ω,u,r := µ φ ω,u | D\rD . For a fixed r > 0, we easily see that (D \ rD) ∩ D(δτ(z)) = ∅ whenever |z| + δτ(z) ≤ r. Thus by (4.1) and (4.2), the first integral is dominated by
D |R n f | 2 ω dµ φ ω,u,r sup z∈D µ φ ω,u,r δ (z) R n f 2 2,ω ≤ sup |z|+δτ (z)>r µ φ ω,u δ (z) sup |z|+δτ (z)>r µ φ ω,u (z),
for all n > 0. Here, letting r → 1, then |z| → 1 − since τ (z) → 0. Thus we obtain the following upper estimate by (4.6) and the above inequality,
uC φ 2 e = lim n→∞ sup f 2,ω ≤1 (uC φ R n )f 2 2,ω lim sup |z|→1 µ φ ω,u (z) = lim sup |z|→1 B φ (|u| 2 )(z).
For the lower estimate, for any compact operator K on A 2 (ω), we have
uC φ 2 e ≥ uC φ − K 2 ≥ lim sup |z|→1 (uC φ )k z 2 2,ω = lim sup |z|→1 B φ (|u| 2 )(z).
In addition to Theorem 4.5, we have the following useful necessary condition for the compactness of uC φ on A 2 (ω). Corollary 4.6. Let ω ∈ W and u be an analytic function on D. If the weighted composition operator uC φ is compact on A 2 (ω) then
lim |z|→1 − τ (z) 2 τ (φ(z)) 2 ω(z) ω(φ(z)) |u(z)| 2 = 0. (4.7)
Proof. By Lemma 2.3, we know that the normalized kernel sequence {k z } converges to 0 weakly when |z| → 1. Since (uC φ ) * K z = u(z)K φ(z) , we have
0 = lim_{|z|→1} ‖(uC φ)* k z‖²_{2,ω} = lim_{|z|→1} |u(z)|² K(φ(z), φ(z))/K(z, z) ≳ lim_{|z|→1} |u(z)|² (τ(z)²/τ(φ(z))²) (ω(z)/ω(φ(z))).
5. SCHATTEN CLASS WEIGHTED COMPOSITION OPERATORS
In order to study Schatten class composition operators, we use some known results on Toeplitz operators acting on A 2 (ω). First, we recall the definition and some facts about Toeplitz operators on A 2 (ω). For a finite complex Borel measure µ on D, the Toeplitz operator T µ is defined by
T µ f (z) = D f (ξ)K(z, ξ)ω(ξ)dµ(ξ),
for f ∈ A 2 (ω). Here, it is not clear the integrals above will converge, so we give the following additional condition on µ, D |K(z, ξ)| 2 ω(ξ)d|µ|(ξ) < ∞, (5.1) for all z ∈ D. The following lemma is from Lemma 2.2 in [2].
Lemma 5.1. Let ω ∈ W, µ ≥ 0 and µ be a Carleson measure. Then we have T µ f, g ω = D f (ξ)g(ξ)ω(ξ)dµ(ξ), f, g ∈ A 2 (ω),
where f, g ω = D f (z)g(z)ω(z) dA(z).
It is well known that composition operators are closely related to Toeplitz operators on weighted Bergman spaces. From Lemma 5.1, we have the relation, (uC φ ) * (uC ϕ )f, g ω = D f (φ(z))g(φ(z))|u(z)| 2 ω(z)dA(z)
= D f (z)g(z)ω(z)dµ φ ω,u (z) = T µ φ ω,u f, g ω ,(5.2)
where dµ φ ω,u is defined by (4.3). Thus, we can use the results of T µ φ ω,u to see when the composition operators uC ϕ belong to S p . In other words, it suffices to show when T µ φ ω,u is in S p/2 since uC ϕ ∈ S p is equivalent to (uC ϕ ) * (uC ϕ ) ∈ S p/2 as we studied in Section 2.4.
The following lemma is the characterization of the membership in the Schatten ideals of a Toeplitz operator acting on A 2 (ω).
Lemma 5.2 (Theorem 4.6 [2]). Let ω ∈ W, δ ∈ (0, m τ /4) and 0 < p < ∞. For µ ≥ 0, the following conditions are equivalent:
(1) T µ ∈ S p (A 2 (ω)).
(2) µ ∈ L p (∆ϕdA).
(3) µ δ ∈ L p (∆ϕdA).
Theorem 5.3. Let 0 < p < ∞. Let ω ∈ W and u be an analytic function on D.
Then uC φ ∈ S p if and only if B φ (|u| 2 ) ∈ L p/2 (∆ϕdA).
Proof. If uC φ ∈ S p then (uC φ)*(uC φ) = T_{µ^φ_{ω,u}} ∈ S_{p/2} by (5.2). Thus, we have the equivalent condition µ̂^φ_{ω,u} = B φ (|u|²) ∈ L^{p/2}(∆ϕdA) from (4.4) and Lemma 5.2.
Corollary 5.4. Let u be an analytic function on D and let φ be an analytic self-map of D. Then uC φ is a Hilbert-Schmidt operator if and only if
∫_D (τ(z)²/τ(φ(z))²) (ω(z)/ω(φ(z))) |u(z)|² ∆ϕ(z) dA(z) < ∞.
Proof. By Theorem 5.3 and (2.4), we have
∫_D B φ (|u|²)(z) ∆ϕ(z) dA(z) ≈ ∫_D ∫_D |K(z, φ(ξ))|² |u(ξ)|² ω(ξ) dA(ξ) ω(z) dA(z) = ∫_D K(φ(ξ), φ(ξ)) |u(ξ)|² ω(ξ) dA(ξ) ≈ ∫_D (τ(ξ)²/τ(φ(ξ))²) (ω(ξ)/ω(φ(ξ))) |u(ξ)|² ∆ϕ(ξ) dA(ξ).
Acknowledgements. The author would like to thank the referee for indicating various mistakes and giving helpful comments.
[1] S. Asserda, A. Hichame, Pointwise estimate for the Bergman kernel of the weighted Bergman spaces with exponential type weights, C. R. Acad. Sci. Paris, Ser. I 352 (2014), 13-16.
[2] H. Arroussi, I. Park, J. Pau, Schatten class Toeplitz operators acting on large weighted Bergman spaces, Studia Math. 352 (2015), 203-221.
[3] I. Chalendar, E. A. Gallardo-Gutiérrez, J. R. Partington, Weighted composition operators on the Dirichlet space: Boundedness and Spectral properties, Math. Annalen. 363 (2015), 1265-1279.
[4] M. D. Contreras, A. G. Hernandez-Diaz, Weighted composition operators on Hardy spaces, J. Math. Anal. Appl. 263 (2001), 224-233.
[5] C. Cowen, B. MacCluer, Composition Operators on Spaces of Analytic Functions, CRC Press, Inc. (1995).
[6] Z. Cuckovic, R. Zhao, Weighted composition operators on the Bergman space, J. London Math. Soc. 70 (2004), 499-511.
[7] P. Halmos, Measure Theory, Springer (1974).
[8] T. Kriete, B. MacCluer, Composition Operators on Large Weighted Bergman Spaces, J. Indiana Univ. Math. 41 No. 3 (1992), 755-788.
[9] P. Lin, R. Rochberg, Hankel Operators on the weighted Bergman spaces with exponential type weights, Integral Equations Operator Theory 21 (1995), 460-483.
[10] S. Ohno, R. Zhao, Weighted composition operators on the Bloch space, Bull. Austral. Math. Soc. 63 (2001), 177-185.
[11] V. L. Oleinik, Embedding theorems for weighted classes of harmonic and analytic functions, J. Soviet Math. 9 (1978), 228-243.
[12] J. Pau, J. A. Peláez, Embedding theorems and integration operators on Bergman spaces with rapidly decreasing weights, J. Funct. Anal. 259 (2010), 2727-2756.
[13] J. A. Peláez, J. Rättyä, Trace class criteria for Toeplitz and Composition operators on small Bergman spaces, Adv. Math. 293 (2016), 606-643.
[14] J. H. Shapiro, The essential norm of a composition operator, Annals of Math. 125 (1987), 375-404.
[15] A. K. Sharma, S. I. Ueki, Composition operators between weighted Bergman spaces with admissible Bekollé weights, Banach J. Math. Anal. 8 (2014), 64-88.
[16] K. Zhu, Operator theory in function spaces, Marcel Dekker, New York, 1990.
BK21-MATHEMATICAL SCIENCES DIVISION, PUSAN NATIONAL UNIVERSITY, BUSAN 46241, REPUBLIC OF KOREA
E-mail address: [email protected]
| zyda_arxiv-1578000 |
A Strategy-proof Mechanism For Networked Housing Markets
Youjia Zhang [email protected]
Institute for Interdisciplinary Information Sciences
Tsinghua University
Pingzhong Tang
Institute for Interdisciplinary Information Sciences
Tsinghua University
This paper studies a house allocation problem in a networked housing market, where agents can invite others to join the system in order to enrich their options. Top Trading Cycle is a well-known matching mechanism that achieves a set of desirable properties in a market without invitations. However, under a tree-structured networked market, existing agents may propagate the barter market strategically, since their invitees may compete for the same house with them. Our impossibility result shows that TTC cannot work properly in a networked housing market. Hence, we characterize the possible competitions between inviters and invitees that lead agents to refer others untruthfully. We then present a novel mechanism based on TTC that avoids these competitions, ensuring that all agents report their preferences and propagate the barter market truthfully (strategy-proofness). Unlike existing mechanisms, agents' preferences are less restricted under our mechanism. Furthermore, we show by simulations that our mechanism outperforms existing matching mechanisms in terms of the number of swaps and agents' satisfaction.
Introduction
Market design has been greatly influenced by the theory of house allocation mechanisms that allow agents to express preferences over houses and trade them without monetary compensation. Such a mechanism can be applied to various areas such as kidney exchange [Roth et al., 2004;Sönmez et al., 2020], house allocation [Shapley and Scarf, 1974;Abdulkadiroglu and Sönmez, 1999], and so on. Therefore, it has attracted researchers from various fields, including economics, mathematics, and computer science.
In their groundbreaking paper, [Shapley and Scarf, 1974] first formulated the house allocation problem as a mechanism design problem and developed a well-known matching mechanism, Top Trading Cycle, which is strategy-proof (truthfully reporting preferences is a dominant strategy) and Pareto efficient (resources are allocated at the maximum level of efficiency). However, they considered the case in which all agents in the housing market are invited only by the organizer. With the significant improvement in communication tools, people are interacting with others more frequently and easily than ever before. It is therefore natural to develop such a mechanism over social networks. Indeed, agents might be interested in inviting their friends to the housing market in order to enrich their options.
The study of mechanism design over social networks was initiated by [Li et al., 2017]. They revealed that increasing the number of participants can improve the revenue of an auction, which is consistent with the result of [Bulow et al., 1996]. Taking social networks into consideration in mechanism design is promising and has been developed in various fields such as resource allocation [Li et al., 2017], task collaboration [Golle et al., 2001], and matching [Kawasaki et al., 2021].
An important open question in matching over a networked housing market is how to develop a mechanism that ensures agents report their information truthfully. For example, an agent might not invite his friends because they would compete for a house he prefers, which reduces other agents' options. Such an issue was first discovered by [Kawasaki et al., 2021]. They restricted the preference domain and found that TTC simultaneously satisfies strategy-proofness and Pareto efficiency under such settings; otherwise, it fails to achieve these properties. However, restricting the preference domain contradicts the purpose of matching mechanisms over a networked housing market, which is to enrich agents' options in order to obtain a better allocation. This paper proposes a novel matching mechanism that ensures strategy-proofness without sacrificing all agents' preference domains. Indeed, we reveal the possible competitions between inviters and invitees that lead agents to benefit from misreporting. Inspired by the success of Top Trading Cycle [Shapley and Scarf, 1974] in traditional housing markets, we develop a matching mechanism based on TTC for networked housing markets, called Top Trading Cycle with Diffusion (TTCD). Aside from being strategy-proof, the allocation of TTCD is also stable, in the sense that agents cannot improve through coalitions with their ancestors and descendants. We further show that TTCD produces a promising number of swaps, which is preferable for organizers who charge for a swap.
The remainder of the paper is organized as follows. Section 2 reviews the relevant literature. Section 3 describes the model and a set of desirable properties. In Section 4, we briefly review the existing mechanisms and discuss the impossibilities. Section 5 proposes a novel matching mechanism and analyzes its properties. Section 6 provides the performance of TTCD by simulations. Finally, we conclude with some closing remarks and discuss possible future research in Section 7.
Literature Review
The seminal work [Shapley and Scarf, 1974] introduced house allocation as a mechanism design problem and proposed the Top Trading Cycle (TTC) mechanism as a solution with several desirable properties. Since then, the design of house allocation mechanisms and TTC have received much attention from both researchers and practitioners. [Roth, 1982] proved that TTC is strategy-proof, individually rational, and Pareto efficient. Furthermore, [Ma, 1994] verified the result of [Roth, 1982] and showed that TTC is the only mechanism that satisfies all those properties.
Several variations of TTC have been studied in the literature. For instance, [Alcalde-Unzu and Molis, 2011] generalized the TTC algorithm to the case in which agents are allowed to report indifference in preference. [Morrill, 2015] characterized the TTC in terms of justness, which allows students with higher priority to veto an objection. [Hakimov and Kesten, 2018] proposed an Equitable TTC in order to eliminate avoidable justified envy situations.
Mechanism design over social networks has been well studied in various fields such as marketing and auctions. For example, [Emek et al., 2011] proposed a geometrical reward mechanism for marketing in which agents are rewarded for successfully referring others to purchase a product. [Li et al., 2017] introduced an auction over social networks that satisfies several important properties. For more details, readers can refer to the related literature and the references therein. These studies demonstrate the potential of mechanism design with social networks as a valuable direction of research.
The study of mechanism design in networked housing markets was pioneered by [Kawasaki et al., 2021]. They revealed that it is impossible for a mechanism to be strategy-proof and Pareto efficient over a networked housing market. As a response, they proposed a modified TTC, which restricts the preference domain of all agents. [Gourvès et al., 2017;Zheng et al., 2020] studied the networked housing market where agents are only allowed to trade with their neighbors. [You et al., 2022] modified the algorithm of [Abdulkadiroglu and Sönmez, 1999] for a house allocation problem with existing tenants and social networks. Later on, [Yang et al., 2022] extended the work of [Kawasaki et al., 2021] into a graph network and enlarged the preference domain of agents who fail to invite others.
Model and Preliminaries
Consider a house allocation problem in a social network that consists of an organizer o and a set of n agents N = {1, 2, ..., n}. Each agent i ∈ N is endowed with a house h_i, and the set of all houses is denoted as H = {h_1, h_2, ..., h_n}. Note that the organizer o is not endowed with any house.
Each agent i ∈ N has a set of children r_i ⊆ N and a strict preference ≻_i over the houses H, where h_j ≻_i h_i represents that agent i prefers house h_j over house h_i. Therefore, we denote θ_i = (≻_i, r_i) as the type of agent i.
Agents are asked to report their types as part of the mechanism. We denote θ'_i = (≻'_i, r'_i) as the reported type of agent i under the mechanism. In particular, it is impossible to spread the information of the barter market to a non-existing child; hence, r'_i ⊆ r_i. Let θ' = (θ'_1, θ'_2, ..., θ'_n) = (θ'_i, θ'_{-i}) be the reported types of all agents, where θ'_{-i} is the reported types of all agents excluding agent i. Let Θ = (Π × r) be the reported type space of all agents, where Π is the set of preference lists and r the action space for reporting children.
For a given report profile θ', we generate a directed graph G(θ') = (V(θ'), E(θ')), where V(θ') ⊆ N ∪ {o}, and an edge e(i, j) ∈ E(θ') means that agent i invites agent j to join the barter market (j ∈ r'_i). In particular, an agent can only join the market if all his ancestors are in the market and decide to invite their children. Without loss of generality, we assume that the organizer o invites all his children r_o.
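As a rough illustration (our own sketch, not part of the paper's formal machinery), the rule that an agent joins only when all his ancestors have joined and invited him amounts to reachability from the organizer in the reported graph; the function and dictionary names below are illustrative assumptions:

```python
# Sketch: constructing the set of market participants from reported
# children sets r'_i. An agent enters only if every ancestor on his
# path from the organizer has entered and reported him.
def market_participants(reported_children, organizer="o"):
    """Traverse reported invitation edges starting from the organizer;
    return the set of agents that actually join the market."""
    joined = set()
    frontier = [organizer]
    while frontier:
        agent = frontier.pop()
        for child in reported_children.get(agent, []):
            if child not in joined:
                joined.add(child)
                frontier.append(child)
    return joined

# Example: o invites 1; 1 invites 2 but hides 3; 2 invites 4.
reports = {"o": [1], 1: [2], 2: [4], 3: [5]}
print(market_participants(reports))  # {1, 2, 4}; agents 3 and 5 never enter
```

Note how hiding a child (agent 3 above) removes that child's entire subtree from the market, which is exactly the strategic lever the paper is concerned with.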
The organizer o aims to design a mechanism with a matching policy that incentivizes agents to invite their children to join the barter market in order to provide a broader range of options for the exchange. A matching policy x = (x_i)_{i∈N} is a redistribution of houses to the agents, where x_i(θ') ∈ H is the house allocated to agent i under matching x. Let X be the set of all possible allocations.
Given the above settings, the networked housing market is a tuple (N, Π, r), and a matching mechanism under a networked market is formally defined as follows. Definition 1. The networked matching mechanism M is defined by a matching policy x : Θ → X.
Properties
In this section, we define a set of important properties that a matching mechanism M on the social network should satisfy. All these properties are similar and inspired by related works [Kawasaki et al., 2021;Yang et al., 2022].
We begin with the formal definition of individual rationality, which ensures that if all agents report truthfully, they have no loss from joining the barter market. Definition 2 (Individual Rationality). The networked matching mechanism M is individually rational (IR) if
x i (θ i , θ −i ) i h i for all agents i ∈ N .
Strategy-proofness is also a desirable property for the matching mechanism; it guarantees that reporting both children and preferences truthfully is a dominant strategy for all agents. Definition 3 (Strategy-proof). The networked matching mechanism M is strategy-proof (SP) if x_i(θ_i, θ'_{-i}) ⪰_i x_i(θ'_i, θ'_{-i}) for all agents i ∈ N and all reports θ'_i.
A Pareto efficient mechanism provides an outcome such that there is no other allocation where an agent can be better off without worsening other agents.
Definition 4 (Pareto Efficient). An allocation µ Pareto dominates another feasible allocation ν if the following conditions hold:
• µ_i ⪰_i ν_i for all i ∈ N,
• µ_j ≻_j ν_j for some j ∈ N.
The networked matching mechanism M is Pareto Efficient (PE) if there is no other feasible allocation that Pareto dominates x(θ).
The core property is widely used as a stability concept in cooperative game theory. The following are the standard notions of the core from the matching literature [Pycia, 2012]. Definition 5 (Blocking Coalition). Given an allocation x(θ) ∈ X (with item set H_S ⊆ H), we say a set of agents S ⊆ N is a blocking coalition for x(θ) if there exists an allocation x'(θ) ∈ X such that
• x'_i(θ) ∈ H_S for all i ∈ S,
• x'_i(θ) ⪰_i x_i(θ) for all i ∈ S,
• x'_j(θ) ≻_j x_j(θ) for some j ∈ S.
Intuitively, agents in S reallocate the houses among themselves to obtain better allocations. Therefore, if there is no blocking coalition for an allocation, the allocation is stable and belongs to the core. Definition 6 (Core). An allocation x(θ) is in the core if there exists no blocking coalition for it. Lemma 1. If an allocation x(θ) is in the core, then it is also PE.
Proof. Assume x(θ) is not PE. Then there exists another feasible allocation y(θ) that Pareto dominates x(θ), and the subset S = N of all agents blocks x(θ) with y(θ), which contradicts the definition of the core.
Existing Mechanisms and Impossibility Results
So far, we have defined the set of desirable properties that a matching mechanism should satisfy. In this section, we briefly review the existing matching mechanisms over networked housing markets. Moreover, we demonstrate that it is not possible for a matching mechanism over a networked housing market to simultaneously achieve IR, SP, and PE without restrictions on agents' preferences. Additionally, we also characterize the competition between inviters and invitees, leading to a mechanism that fails to satisfy SP.
Before introducing the matching mechanisms in detail, the following are some fundamental definitions and notations:
• A directed edge points from a parent node to a child node (e.g., a is the parent of c, and c is the child of a in Figure 1).
• An ancestor (descendant) node of a node is either the parent (child) of the node or the parent (child) of some ancestor (descendant) of the node (e.g., a is an ancestor of e, and e is a descendant of a).
• Nodes with the same parent are called siblings (e.g., c is a sibling of d).
Top Trading Cycle
Top Trading cycle (TTC) is a well-known algorithm for a house allocation problem, which was first proposed in [Shapley and Scarf, 1974].
Definition 7 (Top Trading Cycle). The TTC algorithm works as follows:
1. each agent points to his most preferred house;
2. there must exist at least one cycle of minimum length 1;
3. for each cycle, assign each house to the agent pointing to it and remove the cycle from the market;
4. return to step 1 until no agents remain in the market.
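The four steps above can be sketched in Python. This is a minimal rendering of the standard TTC algorithm, with houses identified by their initial owners; all variable names are our own:

```python
# Minimal TTC sketch. prefs[i] is agent i's strict preference list over
# houses (written as their initial owners' ids), best first, and must
# include agent i's own house.
def top_trading_cycle(prefs):
    """Return a dict mapping each agent to the house he is allocated."""
    remaining = set(prefs)
    allocation = {}
    while remaining:
        # Step 1: each remaining agent points to the owner of his
        # favorite house still in the market.
        points_to = {i: next(j for j in prefs[i] if j in remaining)
                     for i in remaining}
        # Step 2: follow pointers from any agent until a node repeats;
        # the repeated suffix is a cycle (possibly of length 1).
        seen, i = [], next(iter(remaining))
        while i not in seen:
            seen.append(i)
            i = points_to[i]
        cycle = seen[seen.index(i):]
        # Step 3: trade along the cycle and remove it from the market.
        for a in cycle:
            allocation[a] = points_to[a]
            remaining.remove(a)
        # Step 4: loop until no agents remain.
    return allocation

prefs = {1: [3, 2, 1], 2: [1, 2, 3], 3: [1, 3, 2]}
print(top_trading_cycle(prefs))  # agents 1 and 3 trade; agent 2 keeps h_2
```

Following pointers always terminates in a cycle because every agent points somewhere and the market is finite, which is why step 2 of the definition is guaranteed to succeed.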
Moreover, without social networks, TTC satisfies all the properties mentioned in Section 3.1, namely IR, SP, and PE [Ma, 1994]. Nevertheless, it is neither SP nor PE under a networked housing market, as explained in the first impossibility result.
Modified Top Trading Cycle
Building on the success of TTC in the general case, [Kawasaki et al., 2021] extended the algorithm to a networked housing market; the result is called modified TTC.
Definition 8 (modified TTC). The modified TTC works as follows: 1. each agent points to his most preferred house owned by his parents, himself, or his descendants
Leave and Share
Later on, [Yang et al., 2022] extended modified TTC to a graph network and enlarged the preference domain of particular agents whose parents are removed from the market. Definition 9 (Leave and Share). Leave and Share works as follows:
1. find the agent i with the minimum distance to the organizer;
2. agent i points to his most preferred house h_j (owned by an agent j who is one of agent i's parents, children, or agent i himself); then agent j does the same, iteratively, until a cycle of minimum length 1 is formed;
3. for each cycle, assign each house to the agent pointing to it and remove the cycle from the market;
4. reconnect the remaining agents and return to step 1 until no agents are left in the market.
Although Leave and Share works well in a graph network, most agents can only exchange with their neighbors (parents and children).
YRMH-IGYT
[You et al., 2022] studied the house allocation problem in which there exist some initially-vacant houses in the networked market. In other words, the number of houses is greater than the number of agents. Note that such a setting makes the problem less complicated than the traditional housing market. The mechanism is called You Request My House - I Get Your Turn (YRMH-IGYT). To keep consistency, we assume there are no vacant houses in the market. Definition 10 (YRMH-IGYT). YRMH-IGYT works as follows:
1. find the agent i with the minimum distance to the organizer;
2. agent i points to his most preferred house h_j (owned by an agent j who is one of agent i's ancestors, children, or agent i himself); then agent j does the same, iteratively, until a cycle of minimum length 1 is formed;
3. for each cycle, assign each house to the agent pointing to it and remove the cycle from the market;
4. reconnect the remaining agents and return to step 1 until no agents are left in the market.
Similar to modified TTC, YRMH-IGYT restricts the preferences of agents, and there may exist an allocation that Pareto dominates the allocation of YRMH-IGYT.
Impossibility Results
We propose a novel matching mechanism for a networked housing market in response to the following impossibility results and aforementioned challenges, which are also discussed in [Kawasaki et al., 2021;Yang et al., 2022]. Theorem 1. For a networked housing market (N, Π, r) with n ≥ 3, no mechanism can achieve IR, SP, and PE simultaneously without restricting the preference domain.
Due to space constraints, proofs are given in the Appendix. Since it is impossible for a matching mechanism to be IR, SP, and PE simultaneously over the networked housing market, we introduce a weaker definition of the core that takes the network settings into account.
Definition 11 (Core for Paths). For a networked market with θ, there exists a path p_i from the organizer o to each agent i ∈ N; denote the set of all agents on p_i as P_i. Given an allocation x(θ) ∈ X, we say x(θ) is in the core for paths if, for every agent i ∈ N, no subset of agents in P_i can form a blocking coalition.
The definition of Core for Paths (CP) is similar to that of the Strict Core for Neighbors (SC4N) in [Kawasaki et al., 2021]. However, SC4N restricts coalitions to two agents in a parent-child relationship, while CP considers coalitions formed by any agents on the same path (agents who share a common ancestor, excluding the organizer).
Example 1 (CP and SC4N). Consider four agents N = {s, 1, 2, 3}, where s is the market owner; they have the relationships r_s = {1}, r_1 = {2}, r_2 = {3}, and r_3 = ∅. Consider the following preferences:
• h_3 ≻_1 h_2 ≻_1 h_1,
• h_1 ≻_2 h_2 ≻_2 h_3,
• h_1 ≻_3 h_3 ≻_3 h_2.
We have the following two allocations:
1. (SC4N) x_1 = h_1, x_2 = h_3, and x_3 = h_2,
2. (CP) x_1 = h_2, x_2 = h_3, and x_3 = h_1.
Allocation (1) is SC4N, as neither agents 1 and 2 nor agents 2 and 3 can form a blocking coalition to improve their allocations. However, this allocation is not CP, as agents 1, 2, and 3 can form a larger coalition to obtain a better outcome, namely allocation (2).
Corollary 1. If an allocation is CP, it is also SC4N; however, if an allocation is SC4N, it may not be CP.
The following theorem highlights the key challenge for a matching mechanism over a networked housing market: guaranteeing that agents invite all their children to join the barter market. In the theorem, we use the term 'compete' to refer to the situation where agents i and j both point to the same house as their top preference.
Theorem 2. For a networked housing market (N, Π, r) with n ≥ 3, a matching mechanism is not SP if it allows agents i and j ∈ descendant(i) to compete for a house owned by
• an agent k who is an ancestor of agent i (i.e., k ∈ ancestor(i)),
• an agent k who is a descendant of agent j (i.e., k ∈ descendant(j)),
• an agent k who is both a descendant of agent i and an ancestor of agent j (i.e., k ∈ descendant(i) ∩ ancestor(j)),
without restricting the preferences of their ancestors and descendants when agents i and j are not selected by an agent between them, where ancestor(i) denotes the set of ancestors of agent i and descendant(i) the set of descendants of agent i. Furthermore, if any agent i can select a house owned by an agent j with no relationship to him (j ∉ ancestor(i) ∪ descendant(i) ∪ sibling(i)), such a mechanism is not SP.
Top Trading Cycle with Diffusion
Given the impossibility results in Section 4.5, TTC fails to achieve IR, SP, and PE over a networked housing market. Moreover, we also explain how to restrict agents' preferences in order to keep the matching mechanism SP. Therefore, we propose a novel algorithm based on traditional TTC, which is called Top Trading Cycle with Diffusion (TTCD), in order to overcome the aforementioned challenges.
As highlighted by [Kawasaki et al., 2021], the presence of multiple paths to an agent can result in strategic behavior and incompatibility. For example, agents may strategically accept invitations from others. To simplify our analysis, we focus on social networks that form a directed tree rooted at the organizer o. (For graph networks, see the Appendix.) Furthermore, we allow the organizer to invite multiple agents, which is not well discussed in the related literature. Definition 12 (Top Trading Cycle with Diffusion). TTCD works as follows:
1. each agent i ∈ r_o (an agent invited by the organizer o) points to his most preferred house owned by his siblings, himself, or his descendants;
2. each agent i ∈ N \ r_o points to his most preferred house owned by his ancestors, himself, or his descendants;
3. if agents i and j ∈ descendant(i) point to the same house owned by an agent k ∈ ancestor(i), agent j updates his pointer to his next preferred house;
4. if agents i and j ∈ descendant(i) point to the same house owned by an agent k ∈ descendant(j), agent i updates his pointer to his next preferred house;
5. if agents i and j ∈ descendant(i) point to the same house owned by an agent k ∈ descendant(i) with k ∉ descendant(j), then agent k points to his most preferred house, and that house's owner points to his most preferred house, iteratively, following the rules below, until a cycle is formed:
• if an agent points to a house owned by an agent x ∈ {i} ∪ ancestor(i), agent x points to his most preferred house owned by agent i or ancestor(i);
• if an agent points to a house owned by an agent x ∈ {j} ∪ descendant(j), agent x points to his most preferred house owned by agent j or descendant(j);
6. repeat steps 3, 4, and 5 until there are no conflicts;
7. there must exist at least one cycle of minimum length 1;
8. for each cycle, assign each house to the agent pointing to it and remove the cycle from the market;
9. return to steps 1 and 2 until no agents remain in the market.
TTCD is easy to understand, and it works similarly to traditional TTC. Agents i ∈ r_o invited by the organizer are free to select any house owned by their descendants and siblings. For the other agents, if there are no conflicts with their ancestors or descendants, they are free to select any house in the corresponding tree branch. (Recall that our analysis focuses on a directed tree rooted at the organizer o.) Moreover, steps 3, 4, and 5 prevent the conflicts identified in Theorem 2 and guarantee that no agent can be worse off from inviting others. Compared with the mechanisms in [Kawasaki et al., 2021; Yang et al., 2022], the preference restriction in our mechanism is less strict. This makes our mechanism more efficient and flexible than other existing mechanisms.
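Steps 3 and 4 of Definition 12 assign a priority between an agent and his descendant when both point to the same house, depending on where the house's owner sits in the tree. A hedged sketch of this priority rule (our reading of the definition; all names are hypothetical) is:

```python
# resolve_conflict: agents i and j (a descendant of i) both point to the
# house of agent k. Returns the agent who must move on to his next
# preferred house, or None when step 5's cycle-building rule applies.
def resolve_conflict(i, j, k, ancestors):
    """ancestors[a] is the set of a's ancestors in the invitation tree."""
    if k in ancestors[i]:      # step 3: k is an ancestor of i,
        return j               # so the descendant j yields
    if j in ancestors[k]:      # step 4: k is a descendant of j,
        return i               # so the ancestor i yields
    return None                # step 5: k between i and j; build a cycle

# Tiny path 1 -> 2 -> 3 under the organizer, plus a sibling branch 4.
ancestors = {1: set(), 2: {1}, 3: {1, 2}, 4: set()}
print(resolve_conflict(2, 3, 1, ancestors))  # 3: h_1 belongs to 2's ancestor
print(resolve_conflict(1, 2, 3, ancestors))  # 1: h_3 belongs to 2's descendant
```

The `None` branch corresponds exactly to the third case of Theorem 2 (k between i and j), which is why Definition 12 handles it with the more involved cycle-construction rule rather than a simple yield.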
We demonstrate a running process of TTCD using the example shown in Figure 2. The preference list is given in Table 1.
• Figure 2a is the directed tree rooted at organizer o.
• All agents point to their most preferred house available in the market, as shown in Figure 2b. Note that agent 9 cannot point to h_2, as it is pointed to by his ancestor, agent 5. (Recall steps 1, 2, 3, and 4.)
• After the first iteration, agents 1, 3, 4, 6, 7, and 9 are removed from the market.
• Agent 2's most preferred house remaining in the market is now h_8.
• The final allocation for agents N = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} is X = {h_7, h_8, h_6, h_4, h_2, h_1, h_3, h_10, h_9, h_5}.
Properties of TTCD
In this section, we show that TTCD satisfies all the desirable properties we mentioned in Section 3.1.
Lemma 2. The Top Trading Cycle with Diffusion mechanism is individually rational.
IR is obvious: since agent i never points to a house that is worse than h_i for him, he is never allocated a house worse than h_i under TTCD. Therefore, it is always beneficial for agents to join the system. Lemma 3. The Top Trading Cycle with Diffusion mechanism is strategy-proof.
Proof. Note that agent i's reported type θ'_i consists of two pieces of information: his preference ≻'_i and his children set r'_i.
(True preference: ≻'_i = ≻_i.) For a fixed r'_i, we show that agent i cannot obtain a better allocation by misreporting his preference, i.e., x((≻_i, r'_i), θ'_{-i}) ⪰_i x((≻'_i, r'_i), θ'_{-i}).
Case 1: If agent i's favorite house is his own house h_i, he can keep h_i immediately by reporting ≻_i. Moreover, misreporting ≻'_i leads him to point to a less preferred house and possibly form a cycle with other agents; as a result, he is allocated a less preferred house. Hence, it is never optimal for agents to misreport in this case. (This also supplements the proof of IR.)
Case 2: Assume agent i's favorite house is h_j, owned by agent j.
If there is no conflict (no other agent points to h_j), then under TTCD agent i always points to h_j. Hence, the formation of a trading cycle including agents i and j depends on agent j and other agents, which is irrelevant to ≻_i. If agent i misreports ≻'_i, he may form a trading cycle with a less preferred house.
If there is a conflict (another agent points to h_j), then by the rules of TTCD agent i's preference may be restricted, depending on the conflict type. If agent i is not allowed to point to h_j, there exists an agent with a higher priority for selecting h_j; thus, whatever ≻'_i agent i misreports, he is never allocated h_j. If agent i is allowed to point to h_j, the formation of the trading cycle depends on agent j, which is irrelevant to ≻_i, and the problem reduces to the no-conflict case.
(All children: r'_i = r_i.) So far, we have proved that all agents benefit from reporting true preferences. In this part, we need to show that x((≻_i, r_i), θ'_{-i}) ⪰_i x((≻_i, r'_i), θ'_{-i}), where r'_i ⊆ r_i.
We only need to consider the situation in which agent i competes with his descendants.
Case 1: If agent i's favorite house is his own house h_i, the allocation of h_i is irrelevant to r'_i, and he keeps h_i under TTCD.
Case 2: Assume agent i's favorite house is h_j, owned by agent j.
If there is no conflict, under TTCD agent i always points to h_j. Hence, the formation of a trading cycle including agents i and j depends on agent j and other agents. If agent i misreports r'_i, it may affect the availability of houses for other agents and prevent a trading cycle including agents i and j from forming.
If there is a conflict and agent i is not allowed to point to h_j, then whatever r'_i he misreports, he can never form a trading cycle including agent j. Moreover, misreporting r'_i may affect the availability of his next favorite house.
If agent i is allowed to point to h_j, the formation of the trading cycle depends on agent j, and the problem reduces to the no-conflict case.
Thus, reporting θ'_i = (≻_i, r_i) is optimal under TTCD.
Lemma 3 reveals that reporting private information truthfully is the dominant strategy under TTCD. Indeed, misreporting either the preference ≻_i or the children set r_i leads to a worse allocation.
According to Theorem 1, since our mechanism is IR and SP, it is not Pareto efficient. Recall the allocation X in Figure 2: agents 4 and 9 can swap their houses to obtain a better result without affecting other agents' allocations. Corollary 2. Top Trading Cycle with Diffusion is not Pareto efficient.
Although TTCD is not Pareto efficient considering the entire social network, we show that agents cannot collude with their ancestors or descendants to improve their allocations. Lemma 4. Over a directed tree networked market, the outcome of the Top Trading Cycle with Diffusion mechanism is in the core for paths.
Lemma 4 states that the allocation of TTCD is stable in each path of the tree network such that agents cannot improve their allocations by forming a small coalition group with their ancestors and descendants. Even though there may exist a coalition group in the allocation of TTCD, the agents in the group are not directly connected and thus cannot collude with each other under the network's settings.
As mentioned in [Kempe et al., 2003; Kawasaki et al., 2021], the number of swaps is a significant measure for evaluating a matching mechanism from a third-party organizer's perspective, particularly when agents pay the organizer for each swap to a new house. Besides satisfying the desirable properties mentioned in Section 3.1, the organizer aims to maximize the number of swaps as much as possible. Lemma 5. The number of swaps under TTCD is higher than that under the modified TTC [Kawasaki et al., 2021].
Empirical Evaluations
In this section, we start with a random example to show the advantages of TTCD compared with other existing mechanisms (modified TTC, Leave and Share, and YRMH-IGYT). Then, we numerically compare these mechanisms by simulations.
Figure 3: Preferences: h_5 ≻_1 h_4 ≻_1 h_2 ≻_1 h_1, h_3 ≻_2 h_2, h_2 ≻_3 h_3, h_1 ≻_4 h_5 ≻_4 h_4, and h_1 ≻_5 h_4 ≻_5 h_5.
Consider the social network in Figure 3. The following are the allocations under each mechanism.
• modified TTC: x_1 = h_1, x_2 = h_3, x_3 = h_2, x_4 = h_5, and x_5 = h_4.
• Leave and Share (LaS): x_1 = h_4, x_2 = h_3, x_3 = h_2, x_4 = h_1, and x_5 = h_5.
• YRMH-IGYT: x_1 = h_4, x_2 = h_3, x_3 = h_2, x_4 = h_1, and x_5 = h_5.
• TTCD: x_1 = h_5, x_2 = h_3, x_3 = h_2, x_4 = h_1, and x_5 = h_4.
Although TTCD does not always promise a Pareto efficient allocation, in Figure 3, the allocation of TTCD Pareto dominates the allocations of other existing mechanisms.
Simulations
We evaluate the performance of the mechanisms by two criteria: the total number of swaps and the average improvement of each agent. The number of swaps is intuitive: it indicates how many agents exchange their houses with others. As we discussed in Section 5.1, the organizer may aim to maximize the number of swaps. The second criterion, the average improvement of each agent, reflects how far the allocated house is from the initial house. For instance, suppose agent 1's preference is h_3 ≻_1 h_2 ≻_1 h_1. If he is allocated h_3, which is in the 1st position of his preference, while his initial house h_1 is in the 3rd position, he has made a 3 − 1 = 2 position improvement. It is worth mentioning that all the mechanisms considered are IR; therefore, position improvements can never be negative. Moreover, we analyze the performance of the matching mechanisms under different sizes of tree networks. To keep consistency, we generate 50 random networks for each scale of agents. Figure 4 illustrates the performance of the four mechanisms in terms of the number of swaps. As modified TTC only allows agents to swap with their parents and descendants, it might be difficult to form a trading cycle with others. As a result, some agents keep their initial houses, leading to a lower number of swaps under modified TTC.
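The two criteria can be computed as follows (our own helper, not the paper's simulation code; houses are identified with their initial owners):

```python
# evaluate: compute the two simulation criteria for one allocation.
def evaluate(prefs, allocation):
    """prefs[i]: agent i's strict preference list, best first, with
    houses written as their initial owners' ids (so agent i's initial
    house is i). allocation[i]: the house assigned to agent i.
    Returns (number of swaps, average position improvement)."""
    # A swap happened whenever an agent ends up away from his own house.
    swaps = sum(1 for i in allocation if allocation[i] != i)
    # Position improvement: rank of the initial house minus rank of the
    # allocated house (0-indexed ranks cancel out in the difference).
    improvement = sum(prefs[i].index(i) - prefs[i].index(allocation[i])
                      for i in allocation) / len(allocation)
    return swaps, improvement

prefs = {1: [3, 2, 1], 2: [1, 2, 3], 3: [1, 3, 2]}
# Agents 1 and 3 trade; agent 2 keeps his own house.
print(evaluate(prefs, {1: 3, 2: 2, 3: 1}))  # (2, 1.0)
```

Since every mechanism under comparison is IR, each term of the improvement sum is non-negative, matching the remark above that position improvements can never be negative.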
Although the restriction of LaS (allowing swaps with parents and children) is stricter than that of modified TTC, it reconstructs the network after removing certain agents, which enlarges some agents' availability.
Additionally, YRMH-IGYT works similarly to LaS but enlarges the preference domain by allowing agents to select houses owned by their ancestors. Therefore, it generates more swaps than LaS.
In comparison to these mechanisms, TTCD has the least restriction on the preference domain, which results in more swaps, as evident in Figure 4. This observation also supports Lemma 5. Figure 5 shows the improvement in allocation for each agent. As previously stated, the other three mechanisms restrict all agents' preference domains; hence, the probability of forming a large trading cycle is low, and it is impossible for each agent to obtain the most preferred house. Consequently, the position improvement of each agent under these mechanisms is also limited. However, under TTCD, agents are allowed to select houses owned by their ancestors or descendants, which is not possible in the other three mechanisms. Thus, TTCD also outperforms all other mechanisms on position improvements.
Conclusion
In this paper, we study matching mechanisms over a networked housing market and propose a novel mechanism called Top Trading Cycle with Diffusion (TTCD). Existing matching mechanisms limit all agents' preference domains in order to ensure the truthfulness of the mechanism. We characterize the possible competitions between inviters and invitees that make a mechanism untruthful. Under TTCD, we update the policy based on the traditional TTC in order to avoid all those competitions. As a result, TTCD is strategy-proof while minimizing the restrictions on the preference domain. Besides other desirable properties, it maximizes the agents' satisfaction and the number of swaps.
Promising future work includes considering allocation problems over social networks with monetary transfers and budgets.
agent 1 can misreport his children to improve their utilities. Therefore, we update step 3 (the rule does not hold if agents 1 and 2 are siblings).
(Case 2) Consider the graph network in Figure 6 with the preferences
• h3 ≻1 h2 ≻1 h1,
• h3 ≻2 h1 ≻2 h2,
• h2 ≻3 h3.
If the mechanism allows both agents 1 and 2 to compete for h 3 , based on TTCD for tree network, the allocation is {h 1 , h 3 , h 2 }. However, if agent 1 misreports agent 3, resulting in only agents 1 and 2 in the network, agents 1 and 2 swap their houses. For agent 1, the allocation is better by misreporting. Therefore, we add one new rule (step 5) to ensure the mechanism is strategy-proof.
The rest of the proof is the same as that of TTCD for tree networks.

Proof of Theorem 1

Proof. Consider the example given in Figure 7. There are 6 possible allocations, which are
1. x1 = h1, x2 = h2 and x3 = h3.
2. x1 = h1, x2 = h3 and x3 = h2.
3. x1 = h2, x2 = h1 and x3 = h3.
4. x1 = h2, x2 = h3 and x3 = h1.
5. x1 = h3, x2 = h1 and x3 = h2.
6. x1 = h3, x2 = h2 and x3 = h1.
Obviously, allocations (2), (4), and (5) fail to achieve IR, as some agents are worse off from joining the barter market. For instance, given the allocation (2), agent 3 might refuse to join the market and keep h 3 .
Moreover, allocation (3) Pareto dominates allocation (1), as both agents 1 and 2 can have a better result in allocation (3). There exist two Pareto optimal allocations, which are allocations (3) and (6).
However, given allocation (6), agent 2 can obtain a better outcome by not inviting agent 3 and forcing agent 1 to exchange with him, which yields allocation (3). As a result, a mechanism that outputs allocation (6) is not SP. Now consider a mechanism that outputs allocation (3). According to the preference lists, there always exists a cycle between agents 1 and 3. In order to output allocation (3), the mechanism has to force agent 3 to point to houses other than h1. Therefore, allocation (3) can only be obtained from a non-SP mechanism or by restricting the preference domain of particular agents; for example, allowing agent 3 to choose only h3.
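The claims about the six allocations can be checked mechanically. The enumeration below (helper names are illustrative) verifies which allocations are IR and which are Pareto optimal among the IR ones, for the Figure 7 preferences:

```python
from itertools import permutations

# Figure 7 preferences; agent i initially owns house i. Smaller rank = more preferred.
prefs = {1: [3, 2, 1], 2: [1, 2, 3], 3: [1, 3, 2]}
rank = {i: {h: prefs[i].index(h) for h in prefs[i]} for i in prefs}

def is_ir(alloc):
    # individually rational: no agent worse off than keeping his own house
    return all(rank[i][alloc[i]] <= rank[i][i] for i in prefs)

def dominates(a, b):
    # a Pareto dominates b: nobody worse off, somebody strictly better off
    return all(rank[i][a[i]] <= rank[i][b[i]] for i in prefs) and \
           any(rank[i][a[i]] < rank[i][b[i]] for i in prefs)

allocs = [dict(zip((1, 2, 3), p)) for p in permutations((1, 2, 3))]
ir = [a for a in allocs if is_ir(a)]
pareto_ir = [a for a in ir if not any(dominates(b, a) for b in allocs)]
```

Running this confirms the text: allocations (2), (4), and (5) fail IR, allocation (3) dominates allocation (1), and the Pareto optimal IR allocations are exactly (3) and (6).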
More similar results can be found in [Kawasaki et al., 2021; Yang et al., 2022]. [Kawasaki et al., 2021] show that Top Trading Cycle fails to achieve IR, SP, and PE simultaneously in a social network. [Yang et al., 2022] prove there is no SP mechanism that outputs a Pareto optimal allocation over a networked housing market.
Proof of Theorem 2
Proof. (A house owned by an ancestor.) Consider the example given in Figure 7. If a mechanism allows both agents 2 and 3 to compete for h1, agent 2 may misreport r2 = ∅, so that no one can compete for h1 with him and he is allocated h1 in the end, which is better than h2. Therefore, it is beneficial for agents to misreport if there exists a competition between agents and their descendants for a house owned by their ancestors, which contradicts SP.
(A house owned by a descendant.) Consider the example given in Figure 7 with the preference list h3 ≻2 h1 ≻2 h2 for agent 2. If a mechanism allows both agents 1 and 2 to compete for h3, agent 2 may misreport r2 = ∅ in order to be allocated h1, which is better than h2.
(A house owned by an agent between two competitors.) Figure 8: Preference: h2 ≻1 h1, h4 ≻2 h3 ≻2 h2, h2 ≻3 h3 and h1 ≻4 h4.
Consider the example given in Figure 8. Both agents 1 and 3 prefer h 2 . If agent 3 invites agent 4, h 2 is allocated to agent 1. Otherwise, agent 3 obtains h 2 . As a result, agent 3 never invites others.
(A house owned by an agent in another chain.) Figure 9: Preference: h4 ≻1 h3 ≻1 h2 ≻1 h1, h3 ≻2 h1 ≻2 h2 ≻2 h4, h4 ≻3 h1 ≻3 h2 ≻3 h3 and h3 ≻4 h2 ≻4 h4 ≻4 h1.
Consider the example given in Figure 9. If a mechanism allows both agents 1 and 3 to compete for h4, agent 1 can misreport r1 = {2}; hence, agent 3 is not in the market and no one can compete for h4 with him.
Therefore, it is beneficial for agents to misreport if there exists a competition between agents and their descendants for a house owned by an agent in another chain, which contradicts SP.
Proof of Lemma 4
Proof. We prove Lemma 4 by contradiction. Assume there exists a set of agents S in a path p_i (S ⊆ P_i) such that at least one of them has a better allocation, without hurting the others, than under TTCD (i.e., y_i(θ) ⪰_i x_i(θ) for all i ∈ S and y_j(θ) ≻_j x_j(θ) for some j ∈ S).
We start with the case |S| = 2. Note that an agent can only join the system via referrals; thus, agents in the coalition group S are fully connected to each other. Consider the case where agents i and j (S = {i, j}) form a blocking coalition group, and r_i = {j}. By Definition 5, one of the following holds:
i. y_i(θ) ≻_i x_i(θ) and y_j(θ) ⪰_j x_j(θ),
ii. y_i(θ) ⪰_i x_i(θ) and y_j(θ) ≻_j x_j(θ).
As agents i and j form a blocking pair, they are in one trading cycle. Hence, the only solution is to exchange their houses such that y_i = h_j and y_j = h_i. Furthermore, we can derive that the preference lists satisfy
i. h_j ≻_i h_i and h_i ⪰_j h_j,
ii. h_j ⪰_i h_i and h_i ≻_j h_j.
Based on these preference lists, under TTCD agent i also points to h_j and agent j points to h_i at the same time, so the allocation is the same as that in the blocking pair, which contradicts the definition of a blocking coalition.
Due to space constraints, we omit the proof of the case |S| > 2, which is similar to |S| = 2. For example, if S = {i, j, k} with r_i = j and r_j = k, then, since under TTCD agents can also point to a house owned by an ancestor, we can merge agents i and j into a single agent i′ with house h_{i′} and r_{i′} = k, where h_{i′} = h_i if h_i ≻_k h_j and h_{i′} = h_j otherwise. The problem then reduces to the case |S| = 2.
Proof of Lemma 5
Proof. The main difference between TTCD and modified TTC is the preference domain. Specifically, modified TTC only allows agents to point to a house owned by their parents, descendants, or themselves, whereas under TTCD agents can point to any house owned by their ancestors, descendants, or themselves. For each agent, the set of ancestors contains the set of parents. Moreover, TTCD also allows agents who are invited by the organizer to select any house owned by their siblings.
Hence, agents who remain unchanged under modified TTC may be allocated to a better house under TTCD, which increases the number of swaps.
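The containment argument in the proof can be illustrated on a small rooted tree (the tree structure below is assumed for illustration only): each agent's parent is always among its ancestors, so the ancestor-based domain of TTCD contains the parent-based domain of modified TTC.

```python
# child -> parent map of an illustrative rooted tree; agent 0 is the organizer/root
parent = {1: 0, 2: 0, 3: 1, 4: 3}

def ancestors(i):
    # walk up the tree collecting every ancestor of agent i
    out = set()
    while i in parent:
        i = parent[i]
        out.add(i)
    return out

# the parent-based domain is contained in the ancestor-based domain for every agent
for child, p in parent.items():
    assert {p} <= ancestors(child)
```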
Figure 1: Basic notations.
Figure 2: A running example of TTCD.
Figure 4: Total number of swaps with different sizes of networks.
Figure 5: Average number of position improvements for each agent with different sizes of networks.
Figure 7: The example to prove Theorem 1. Relationship: r_o = {1}, r_1 = {2}, r_2 = {3} and r_3 = ∅. Preference: h3 ≻1 h2 ≻1 h1, h1 ≻2 h2 ≻2 h3 and h1 ≻3 h3 ≻3 h2.
2. there must exist at least one cycle with a minimum length of 1;
3. for each cycle, assign each house to the agent pointing to it and remove the cycle from the market;
4. return to step 1 until no agents remain in the market.

Despite the fact that modified TTC achieves IR and SP simultaneously, it restricts the preferences of agents. Indeed, agents can only choose houses owned by their parents, descendants, and themselves. Furthermore, in Section 6, we show that there may exist an allocation which Pareto dominates the allocation of modified TTC.
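The cycle-clearing core that both modified TTC and TTCD build on is the classic Shapley-Scarf top trading cycle. A minimal unrestricted-domain sketch (illustrative only, not the network-restricted algorithm of the paper) is:

```python
def top_trading_cycle(prefs):
    """Classic Shapley-Scarf TTC; agent i initially owns house i.
    prefs[i] is agent i's preference list over houses, best first."""
    remaining = set(prefs)
    alloc = {}
    while remaining:
        # every remaining agent points at the owner of his best remaining house
        point = {i: next(h for h in prefs[i] if h in remaining) for i in remaining}
        # walk the pointer graph until an agent repeats: that closes a trading cycle
        path, i = [], next(iter(remaining))
        while i not in path:
            path.append(i)
            i = point[i]  # house h is owned by agent h, so the pointed agent is point[i]
        cycle = path[path.index(i):]
        for agent in cycle:
            alloc[agent] = point[agent]
        remaining -= set(cycle)
    return alloc
```

On the Figure 7 preferences (h3 ≻1 h2 ≻1 h1, h1 ≻2 h2 ≻2 h3, h1 ≻3 h3 ≻3 h2) this unrestricted TTC outputs allocation (6), which is exactly the allocation shown in the proof of Theorem 1 to be manipulable by agent 2.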
6. if agents i and j ∈ descendant(i) point to the same house owned by agent k ∈ descendant(i) and k ∉ descendant(j), then agent k points to his most preferred house, and each such house owner points to his most preferred house iteratively, with the following rules, until a cycle is formed:
• if an agent points to a house owned by agent x ∈ {i, ancestor(i)}, agent x points to the most preferred house owned by agent i or ancestor(i);
• if an agent points to a house owned by agent x ∈ {j, descendant(j)}, agent x points to the most preferred house owned by agent j or descendant(j);
7. repeat steps 3, 4, and 5 until there are no conflicts;
8. there must exist at least one cycle with a minimum length of 1;
9. for each cycle, assign each house to the agent pointing to it and remove the cycle from the market;
10. return to steps 1 and 2 until no agents remain in the market.

TTCD for graph networks is strategy-proof.

Proof. We add two rules (steps 3 and 5) to TTCD. These rules are used to prevent the following special cases.
(Case 1) Consider the graph network in Figure 6 with the stated preferences. Agent 2 is the sibling of agent 1 and also his descendant. If both agents 2 and 3 compete for h1, and the mechanism restricts the preference of agent 2 (step 3 in TTCD for tree networks), agent 2 can reject the invitation from agent 3 or
References

[Abdulkadiroglu and Sönmez, 1999] Atila Abdulkadiroglu and Tayfun Sönmez. House allocation with existing tenants. Journal of Economic Theory, 88(2):233-260, 1999.
[Alcalde-Unzu and Molis, 2011] Jorge Alcalde-Unzu and Elena Molis. Exchange of indivisible goods and indifferences: The top trading absorbing sets mechanisms. Games and Economic Behavior, 73(1):1-16, 2011.
[Bulow et al., 1996] Jeremy Bulow, Paul Klemperer, et al. Auctions versus negotiations. American Economic Review, 86(1):180-194, 1996.
[Emek et al., 2011] Yuval Emek, Ron Karidi, Moshe Tennenholtz, and Aviv Zohar. Mechanisms for multi-level marketing. In Proceedings of the 12th ACM Conference on Electronic Commerce, pages 209-218, 2011.
[Golle et al., 2001] Philippe Golle, Kevin Leyton-Brown, Ilya Mironov, and Mark Lillibridge. Incentives for sharing in peer-to-peer networks. In International Workshop on Electronic Commerce, pages 75-87. Springer, 2001.
[Gourvès et al., 2017] Laurent Gourvès, Julien Lesca, and Anaëlle Wilczynski. Object allocation via swaps along a social network. In 26th International Joint Conference on Artificial Intelligence (IJCAI'17), pages 213-219, 2017.
[Hakimov and Kesten, 2018] Rustamdjan Hakimov and Onur Kesten. The equitable top trading cycles mechanism for school choice. International Economic Review, 59(4):2219-2258, 2018.
[Kawasaki et al., 2021] Takehiro Kawasaki, Ryoji Wada, Taiki Todo, and Makoto Yokoo. Mechanism design for housing markets over social networks. In Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems, pages 692-700, 2021.
[Kempe et al., 2003] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 137-146, 2003.
[Li et al., 2017] Bin Li, Dong Hao, Dengji Zhao, and Tao Zhou. Mechanism design in social networks. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
[Ma, 1994] Jinpeng Ma. Strategy-proofness and the strict core in a market with indivisibilities. International Journal of Game Theory, 23(1):75-83, 1994.
[Morrill, 2015] Thayer Morrill. Making just school assignments. Games and Economic Behavior, 92:18-27, 2015.
[Pycia, 2012] Marek Pycia. Stability and preference alignment in matching and coalition formation. Econometrica, 80(1):323-362, 2012.
[Roth et al., 2004] Alvin E. Roth, Tayfun Sönmez, and M. Utku Ünver. Kidney exchange. The Quarterly Journal of Economics, 119(2):457-488, 2004.
[Roth, 1982] Alvin E. Roth. Incentive compatibility in a market with indivisible goods. Economics Letters, 9(2):127-132, 1982.
[Shapley and Scarf, 1974] Lloyd Shapley and Herbert Scarf. On cores and indivisibility. Journal of Mathematical Economics, 1(1):23-37, 1974.
[Sönmez et al., 2020] Tayfun Sönmez, M. Utku Ünver, and M. Bumin Yenmez. Incentivized kidney exchange. American Economic Review, 110(7):2198-2224, 2020.
[Yang et al., 2022] Tianyi Yang, Yuxiang Zhai, Dengji Zhao, Miao Li, and Xinwei Song. One-sided matching with permission. arXiv preprint arXiv:2201.05787, 2022.
[You et al., 2022] Bo You, Ludwig Dierks, Taiki Todo, Minming Li, and Makoto Yokoo. Strategy-proof house allocation with existing tenants over social networks. In Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems, pages 1446-1454, 2022.
[Zhao, 2021] Dengji Zhao. Mechanism design powered by social interactions. In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems, pages 63-67, 2021.
[Zheng et al., 2020] Yue Zheng, Tianyi Yang, Wen Zhang, and Dengji Zhao. Barter exchange via friends' friends. arXiv preprint arXiv:2010.04933, 2020.
Gauge and Matter Condensates in Realistic String Models

arXiv:hep-th/9110023v1 8 Oct 1991

S. Kalara, Jorge L. Lopez, and D. V. Nanopoulos

Center for Theoretical Physics, Department of Physics, Texas A&M University, College Station, TX 77843-4242, USA
Astroparticle Physics Group, Houston Advanced Research Center (HARC), The Woodlands, TX 77381, USA

September, 1991

* Supported by an ICSC-World Laboratory Scholarship.
We examine the inter-relationship of the superpotential containing hidden and observable matter fields and the ensuing condensates in free fermionic string models. These gauge and matter condensates of the strongly interacting hidden gauge groups play a crucial role in the determination of the physical parameters of the observable sector. Supplementing the above information with the requirement of modular invariance, we find that a generic model with only a trilinear superpotential allows for a degenerate (and sometimes pathological) set of vacua. This degeneracy may be lifted by higher order terms in the superpotential. We also point out some other subtle points that may arise in calculations of this nature. We exemplify our observations by computing explicitly the modular invariant gaugino and matter condensates in the flipped SU(5) string model with hidden gauge group SO(10) × SU(4).
Introduction and General Remarks
Nonperturbative dynamics of strongly interacting supersymmetric gauge theories has been explored using a wide variety of tools with great success [1,2]. A better understanding of dynamics may lead to solutions to some deep-rooted problems like supersymmetry breaking [3], the gauge hierarchy problem [4], and even string compactification [5]. In the context of string theory, the task of incorporating strongly interacting gauge theory dynamics becomes even more compelling since many crucial properties of the string theory may not be discerned until a deeper grasp of the vacuum structure of this kind of gauge theories is obtained [6].
Typically a theory which aims to go beyond the Standard Model and explain some of its features may contain many different mass scales beyond M_Z and may involve many other gauge degrees of freedom, some of which may even be strongly interacting. Examples of such theories include all grand unified theories [7], technicolor theories [8], and supergravity theories with or without strongly interacting hidden sectors [9]. In any of these theories one may propose the existence of a strongly interacting gauge theory for no other reason save the problem at hand, i.e., elimination of elementary Higgs fields [8], supersymmetry breaking [10], etc. However, in string theory one finds that the consistency of the theory forces upon us the existence of a completely hidden and/or semi-hidden gauge group which has to be reckoned with. Furthermore, one finds that in a large class of models the theory also contains hidden matter fields. The interaction of the hidden matter fields with themselves and with the observable matter fields is in principle completely calculable [11]. It is the interplay between nonperturbative dynamics of the hidden sector and its possible effect on the observable sector which is of great interest.
In this paper we examine the reciprocal influence of the nonperturbative dynamics of strongly interacting gauge theories and the interaction among the matter fields in a class of models. The type of models we consider are characterized by two mass scales: Λ G where the gauge theory becomes strong, and the mass scale M (M ≫ Λ G ). The key ingredient that allows us to probe the theory in great detail is the tamed ultraviolet behavior of the theory due to supersymmetry and the presence of genuinely stringy discrete symmetries.
In a typical supergravity theory, without the benefit of any string input, the dynamics of the gaugino condensate depends primarily on the gauge coupling constant g and the masses of the matter fields charged under the strongly interacting gauge group. However, in string theory the gauge coupling g is related to the vacuum expectation value of the dilaton field S as 1/g 2 = 4Re S and consequently becomes a dynamical variable. Furthermore, the interactions of the matter fields are only dictated by the string dynamics and, in a class of theories, are calculable [12]. Additionally, string theory imposes very strong discrete symmetries (the so-called target space modular symmetries [13]) on the theory, making the structure of the gaugino condensate very tightly constrained.
In the preparatory examples treated in the literature [14,15,16,17], for the case of SU(N) with M flavors in the fundamental representation and matter mass M ≪ Λ_G, one finds [15]

$$\frac{Y^3}{32\pi^2} = (32\pi^2 e)^{M/N-1}\,[c\,\eta(T)]^{2M/N-6}\,[\det\mathcal{M}]^{1/N}\, e^{-32\pi^2 S/N}, \qquad (1.1a)$$
$$\Pi_{ij} = \frac{Y^3}{32\pi^2}\,\mathcal{M}^{-1}_{ij}, \qquad (1.1b)$$

where T is the modulus field and η the Dedekind function, c is an unknown constant, the composite superfield Y is defined as Y^3 = W_α W^α/S_0^3 where S_0 is the chiral compensator, and Π_{ij} = ⟨C_i C_j⟩ are the matter condensates. Using one-loop renormalization group equations, the complementary case of M ≫ Λ_G can also be incorporated [16]. For an alternate approach using an effective Lagrangian see Ref. [17]. However, a generic full-fledged string theory example introduces its own set of complications. It is the purpose of this paper to examine some of these involutions.
In a realistic example, to obtain a coherent picture of the vacuum structure of the theory requires a multipronged approach. Generically one finds that some of the matter fields which carry nontrivial gauge quantum numbers of the strongly interacting theory do not acquire mass at the trilinear level of the superpotential. As it can be easily shown that a strongly interacting supersymmetric gauge theory with massless matter fields is beset with the problem of pathological vacuum structure [1], this necessitates that the mass calculations for the matter fields be carried up to quartic order and beyond.
The presence of the modular invariance further restricts the type of terms that can arise [5]. In a class of models higher order nonrenormalizable terms can be calculated [12] and their modular invariance properties inferred [18]. The key point to note here is that the requirement of a stable vacuum mandates that the superpotential be probed up to the level at which the determinant of the mass matrix is nonzero. The presence of a zero eigenvalue in the mass matrix invariably leads to both theoretical (unstable vacua [1]) and experimental (breakdown of the equivalence principle [19]) difficulties.
Furthermore, strongly interacting supersymmetric gauge theories are notorious for containing a large class of degenerate vacua. Unless the effects of the superpotential are taken into account, the degeneracy remains unbroken. Additionally, many of the expositions in strongly interacting gauge theory dynamics are based on the global and/or local symmetries of the theory [1], thus in the absence of complete knowledge of the superpotential an unambiguous identification of the vacuum structure may not be possible.
The free-fermionic formulation of string theory [20] is specially suited to explore these questions, since the nonrenormalizable terms can be explicitly calculated [12] and the modular properties of these terms can also be determined. Succinctly put, a nonrenormalizable term Φ_1 ... Φ_N can be calculated by evaluating the correlator ⟨Φ_1 ... Φ_N⟩. After taking into account the massless exchanges, a nonzero piece at low momenta signifies the presence of such a term in the superpotential. A priori, the existence of such a term in the superpotential would be inconsistent with modular invariance. However, it can be shown [18] that the nonzero value of the correlator ⟨Φ_1 ... Φ_N⟩ implies a nonzero value of the following series of correlators ⟨T^p Φ_1 ... Φ_N⟩, where p ≥ N − 3.

Thus we see that the question of the gaugino condensate cannot be answered independently of the superpotential. The effect of the superpotential is felt in determining the vacuum structure of the theory, lifting the degeneracy of the vacuum, and in the specific value of the gaugino condensate.
Application to Flipped SU (5)
We now present an explicit example of the calculation of hidden massive matter condensates and the various subtle points that arise in realistic string models. To this end we study the flipped SU(5) string model [21], which has a rich hidden sector spectrum composed of two gauge groups, namely SO(10) and SU(4). Since SO(10) is expected to become strongly interacting at a scale (Λ_10 ≈ 10^{14-16} GeV [22]) much higher than the respective SU(4) scale (Λ_4 ≈ 10^{10-12} GeV [22]), in the following we deal exclusively with SO(10) condensates. Besides Λ_10, two other mass scales come into play: the SU(5) × U(1) breaking scale, ⟨V⟩, ⟨V̄⟩ ∼ 10^{15-16} GeV; and the scale of singlet vevs needed to cancel the anomalous D-term, ⟨φ⟩ ∼ 10^{17} GeV [21,6]. All these scales are to be compared with the string unification scale M_SU ≈ 1.24 × g × 10^{18} GeV [23,24] and the scale of nonrenormalizable terms in the superpotential M ≈ 10^{18} GeV [12].
We first need to determine the mass matrix for the SO(10) fields. These belong to the 10 of SO(10) and are denoted by T i , i = 1 → 5. There are generally three sources of mass terms for these fields:
(i) T_i T_j φ^{N-2}: at N = 3 we have the following terms [6,25]

$$\Phi_{23} T_1^2,\ \Phi_{31} T_2^2,\ \Phi_{23} T_4^2,\ \Phi_{31} T_5^2,\ \phi_3 T_4 T_5, \qquad (2.1)$$
where the various φ's are singlet fields which will generally get vevs ⟨φ⟩ ∼ 10^{17} GeV, but could vanish for consistency or phenomenological reasons. Following the methods in [26], it can be shown that no new terms arise at any order N ≥ 4, except for small corrections to the above terms (e.g., T_1^2 Φ_{31} φ̄_1^2/M^2).

(ii) T_i T_j F_{1,3} F̄_5 φ^{N-4}: here no such terms occur at N = 4, 6, 8. At N = 5 we get T_1 T_4 F_1 F̄_5 φ_3/M^2, and at N = 7

$$T_3 T_4 F_3 \bar F_5 \{\phi_2\phi_3\bar\phi_-,\ \phi_2\bar\phi_3\phi_+,\ \phi_3\bar\phi_2\phi_+\}/M^4, \qquad (2.2a)$$
$$T_3 T_5 F_3 \bar F_5 \{\phi_3\bar\phi_-\Phi_{31},\ \bar\phi_3\phi_+\Phi_{31}\}/M^4. \qquad (2.2b)$$
Yet higher order terms (N ≥ 9) are further suppressed by powers of ( φ /M ) ∼ 10 −2 .
(iii) T_i T_j T_k T_l φ^{N-4}: no such terms occur at N = 4, 5, 7. At N = 6 we get

$$T_3 T_3 T_4 T_4\, \phi_{45}\phi_+/M^3, \qquad (2.3a)$$
$$T_1 T_1 T_4 T_5\, \{\phi_1\bar\phi_2,\ \phi_2\bar\phi_1\}/M^3, \qquad (2.3b)$$
$$T_1 T_1 T_4 T_5\, \phi_1\Phi_{31}/M^3. \qquad (2.3c)$$
We should point out that a very useful constraint in searching for non-vanishing higherorder nonrenormalizable terms in the superpotential is given by the modular invariance of the all-orders superpotential, as discussed above.
The resulting T_i mass matrix can be written as follows:

$$\mathcal{M}_{ij} = \begin{pmatrix} \Phi_{23} & 0 & 0 & \eta & 0 \\ 0 & \Phi_{31} & 0 & 0 & 0 \\ 0 & 0 & \delta_{44} & \delta_{34}+\epsilon_4 & \epsilon_5 \\ \eta & 0 & \delta_{34}+\epsilon_4 & \delta_{33}+\Phi_{23} & \phi_2 \\ 0 & 0 & \epsilon_5 & \phi_2 & \Phi_{31} \end{pmatrix} \qquad (2.4)$$

where

$$\delta_{ij} = \langle T_i T_j\rangle\,\phi_{45}\phi_+\,\frac{1}{M^3} \equiv \frac{\delta}{M}\,\Pi_{ij}, \qquad (2.5a)$$
$$\epsilon_{4,5} = \langle F_3\bar F_5\rangle\,\frac{\phi^3_{4,5}}{M^4}, \qquad (2.5b)$$
$$\eta = \langle F_1\bar F_5\rangle\,\frac{\phi_3}{M^2}. \qquad (2.5c)$$

Since Π_{ij} = (Y^3/32π^2) M^{-1}_{ij}, and M_{ij} depends on Π_{ij} also, the solution of the resulting equations can be rather nontrivial. We now invert the matrix M in various levels of approximation to exhibit the subtleties that can arise. As a first approximation let us drop all nonrenormalizable contributions to M. In this limit the matrix breaks up into three blocks with a zero eigenvalue for T_3. The solution is Π_{11,22,44,45,55} ∼ (Y^3/32π^2)(1/⟨φ⟩) and Π_{33} → ∞, with all the other condensates vanishing. Clearly this is an unphysical situation and we are forced to consider nonrenormalizable terms.
Next we keep the δ ij terms and neglect the ǫ 4,5 and η terms. Schematically the matrix becomes
$$\begin{pmatrix} \langle\phi\rangle & 0 & 0 & 0 & 0 \\ 0 & \langle\phi\rangle & 0 & 0 & 0 \\ 0 & 0 & \delta_{44} & \delta_{34} & 0 \\ 0 & 0 & \delta_{34} & \delta_{33}+\langle\phi\rangle & \langle\phi_2\rangle \\ 0 & 0 & 0 & \langle\phi_2\rangle & \langle\phi\rangle \end{pmatrix}. \qquad (2.6)$$
The 2 × 2 submatrix involving T_{1,2} remains unchanged, i.e., Π_{11} ∼ Π_{22} ∼ (Y^3/32π^2)(1/⟨φ⟩).
To simplify the calculation without loss of generality, let us set ⟨φ_2⟩ = 0, thus decoupling the T_5 field, i.e., Π_{45} = 0, Π_{55} ∼ (Y^3/32π^2)(1/⟨φ⟩). The remaining 2 × 2 matrix can be easily inverted and the following three equations result:

$$\Pi_{44}\left(1 - \frac{Y^3}{32\pi^2}\,\frac{1}{D}\,\frac{\delta}{M}\right) = 0, \qquad (2.7a)$$
$$\Pi_{33}\left(1 - \frac{Y^3}{32\pi^2}\,\frac{1}{D}\,\frac{\delta}{M}\right) = \frac{Y^3}{32\pi^2}\,\frac{\langle\phi\rangle}{D}, \qquad (2.7b)$$
$$\Pi_{34}\left(1 + \frac{Y^3}{32\pi^2}\,\frac{1}{D}\,\frac{\delta}{M}\right) = 0, \qquad (2.7c)$$

where

$$D = \left(\langle\phi\rangle + \frac{\delta}{M}\Pi_{33}\right)\frac{\delta}{M}\Pi_{44} - \left(\frac{\delta}{M}\Pi_{34}\right)^2. \qquad (2.8)$$
Equation (2.7c) implies that either (i) Π_{34} = 0, or (ii)

$$D = -\frac{Y^3}{32\pi^2}\,\frac{\delta}{M}, \qquad (2.9)$$

and from (2.7b)

$$\Pi_{33} = \frac{1}{2}\,\frac{Y^3}{32\pi^2}\,\frac{\langle\phi\rangle}{D} = -\frac{1}{2}\,\frac{M}{\delta}\,\langle\phi\rangle \sim M^2. \qquad (2.10)$$
Also, Π_{34} gets determined through D:

$$\Pi_{34} = \left(\frac{Y^3}{32\pi^2}\,\frac{M}{\delta}\right)^{1/2}. \qquad (2.11)$$
In case (i), Eqn. (2.7a) gives Π_{44} = 0 and/or (Y^3/32π^2) δ/(DM) = 1, both of which lead to unphysical or inconsistent solutions. Thus solution (ii) is preferred. Note that the field T_3 remains very light (δ_{44} = 0), while Π_{33} ∼ M^2 is finite.
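As a quick numerical sanity check, one can verify that the branch (2.9)-(2.11) with Π_{44} = 0 indeed solves Eqs. (2.7)-(2.8). The values below for Y^3/32π^2, δ/M and ⟨φ⟩ are arbitrary illustrative numbers, not outputs of the model:

```python
import math

Y3 = 2.0   # stands for Y^3/(32 pi^2); arbitrary illustrative value
d = 0.5    # stands for delta/M
phi = 3.0  # stands for <phi>

# solution branch (ii): Eqs. (2.9)-(2.11) with Pi_44 = 0
Pi44 = 0.0
Pi33 = -0.5 * (phi / d)       # Eq. (2.10)
Pi34 = math.sqrt(Y3 / d)      # Eq. (2.11)
D = (phi + d * Pi33) * d * Pi44 - (d * Pi34) ** 2   # Eq. (2.8)

assert abs(D - (-Y3 * d)) < 1e-12                   # Eq. (2.9) holds
x = Y3 * d / D                                      # the combination (Y^3/32pi^2)(1/D)(delta/M)
assert abs(Pi44 * (1 - x)) < 1e-12                  # Eq. (2.7a)
assert abs(Pi33 * (1 - x) - Y3 * phi / D) < 1e-12   # Eq. (2.7b)
assert abs(Pi34 * (1 + x)) < 1e-12                  # Eq. (2.7c): x = -1 on this branch
```

The check makes the structure of branch (ii) explicit: D = −(Y^3/32π^2)(δ/M) forces the combination x to equal −1, which simultaneously kills Π_{44} and Π_{34}(1 + x) while fixing Π_{33}.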
For the case when all matter fields are mass eigenstates, the usual clustering argument, which is based on the symmetries of the superpotential, would imply that Π_{34} = 0. However, we see that T_3, T_4 are not mass eigenstates, due to the mixing term ǫ_4. The solutions of the equations are then more complicated, but they differ negligibly from the case ǫ_4 = 0. That is, even though ǫ_4 is negligible numerically, its presence determines the correct vacuum of the theory.
The above example shows the importance of knowing the higher-order superpotential to obtain physically acceptable results. We should point out that it may happen that some of the singlet vevs vanish, implying δ = 0 and/or ǫ 4,5 = 0. As discussed above, this scenario may lead to unphysical solutions and thus these vev choices appear disfavored.
Calculation of the Gaugino Condensate
We now calculate the only unknown left in the theory, namely the gaugino condensate Y 3 . Equation (1.1a) has been obtained by imposing the modular symmetry of SL(2, Z) on the effective action. Generally the modular group is much larger. For the class of free-fermionic models under consideration one finds that the symmetry group is SL(2, Z) 3 accompanied by three moduli fields [18]. Correspondingly the gaugino condensate is straightforwardly generalized, and in the case of SO(10) with M massive flavors we obtain
$$\frac{Y^3}{32\pi^2} = (32\pi^2 e)^{M/8-1}\,[c\,\eta(T_1)\eta(T_2)\eta(T_3)]^{-2}\,[\det\mathcal{M}]^{1/8}\, e^{-32\pi^2 S/8}, \qquad (3.1)$$
where det M is the determinant of the mass matrix M calculated with modular invariant mass terms. This way Y 3 has modular weight (−1, −1, −1) and W ef f ∝ Y 3 [17] has the correct modular properties in this generalized case.
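The Dedekind η function entering these modular invariant expressions is straightforward to evaluate numerically from its product representation η(τ) = q^{1/24} ∏_{n≥1}(1 − q^n) with q = e^{2πiτ}. A minimal sketch (function name and truncation are illustrative choices):

```python
import cmath

def dedekind_eta(tau, terms=200):
    """Dedekind eta via its q-product; tau must have positive imaginary part."""
    q = cmath.exp(2j * cmath.pi * tau)
    result = cmath.exp(2j * cmath.pi * tau / 24)  # the q^{1/24} prefactor
    for n in range(1, terms + 1):
        result *= 1 - q ** n
    return result
```

Since |q| = e^{-2π Im(τ)} is tiny for moduli of order the self-dual point, the product converges extremely fast; the known special value η(i) = Γ(1/4)/(2π^{3/4}) ≈ 0.7683 provides a convenient check.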
Let us now compute det M for the case analyzed in Eqs. (2.9)-(2.11) above, where

$$\det\mathcal{M} = \Phi_{23}\,\Phi_{31}^2\,[\delta_{44}\delta_{33} - \delta_{34}^2 + \delta_{44}\Phi_{23}]. \qquad (3.2)$$
The trilinear masses Φ_{23} and Φ_{31} have modular weights (−1, 0) and (0, −1) respectively, and thus we take instead η^2(T_1)Φ_{23} and η^2(T_2)Φ_{31}. The δ_{ij} terms come from the following superpotential term

$$a\, T_3 T_3 T_4 T_4\, \phi_{45}\phi_+\, \eta^2(T_1)\eta^4(T_2), \qquad (3.3)$$
where a is an O(1) calculable constant and the η functions insure that this contribution to the superpotential has the appropriate modular weight (i.e., (−1, −1)) [18]. This implies that δ/M defined in Eq. (2.5a) should be multiplied by η 2 (T 1 )η 4 (T 2 ) as well. With this information and the modular weights of the fields involved, one can readily determine whether additional η functions are needed to make the mass terms in (3.2) modular invariant. The final result is
$$\det\mathcal{M} = \Phi_{23}\Phi_{31}^2\,\eta^2(T_1)\eta^4(T_2)\,[(\delta_{33}\delta_{44} - \delta_{34}^2)\,\eta^2(T_1) - \delta_{44}\Phi_{23}] \qquad (3.4a)$$
$$= -\Phi_{23}\Phi_{31}^2\, a\,\frac{\phi_{45}\phi_+}{M^3}\,\frac{Y^3}{32\pi^2}\,\eta^6(T_1)\eta^8(T_2), \qquad (3.4b)$$
where Eqs. (2.9) and (2.11) have been used.
To make our results for Y^3/32π^2 more transparent, we insert the missing units in Eq. (3.1) and set M = 5. The exponential factor then becomes M^{19/8} e^{−32π^2 S/8} with M ≈ 10^{18} GeV. Using 1/g^2 = 4S and introducing the SO(10) condensation scale Λ_10 = M e^{8π^2/(βg^2)}, with β = −3 × 8 + 5 = −19, this factor becomes Λ_10^{19/8}, and thus we can write

$$\frac{Y^3}{32\pi^2} = (32\pi^2 e)^{-3/8}\,[c\,\eta(T_1)\eta(T_2)]^{-2}\,\Lambda_{10}^3\left(\frac{\det\mathcal{M}}{\Lambda_{10}^5}\right)^{1/8}, \qquad (3.5)$$
which is a modular invariant generalization of the usual gaugino condensate expression [1].
Note that since det M itself depends on Y^3/(32π^2), the final self-consistent expression for Y^3/(32π^2) is different; it is given in Eq. (3.7).
Numerically, the Λ_10^3 (det M′/Λ_10)^{1/7} terms determine the scale of Y^3/(32π^2). For typical values of the parameters we obtain Λ_10 ∼ 10^15 GeV, det M′/Λ_10 ∼ 10, and Y^3/(32π^2) ∼ (10^15 GeV)^3. The matter condensates then become Π_11 ∼ Π_22 ∼ Π_55 ∼ (10^14 GeV)^2, Π_44 ∼ 0, Π_33 ∼ (10^18 GeV)^2, and Π_34 ∼ (10^16 GeV)^2.
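As a rough numerical cross-check of these scales (a sketch; the value of the hidden-sector coupling g^2 is an assumed input, not taken from the text):

```python
import math

# SO(10) condensation scale Lambda_10 = M * exp(8*pi^2 / (beta*g^2)),
# with beta = -3*8 + 5 = -19 for SO(10) with M = 5 flavors in the fundamental.
M_scale = 1e18   # GeV, the scale M quoted in the text
beta = -3 * 8 + 5
g2 = 0.5         # hypothetical value of the gauge coupling squared

Lambda_10 = M_scale * math.exp(8 * math.pi ** 2 / (beta * g2))
print(f"Lambda_10 ~ {Lambda_10:.2e} GeV")
```

For g^2 between roughly 0.45 and 0.6 this gives Λ_10 in the 10^14–10^15 GeV range; the steep exponential dependence on g^2 means the scale is determined only to within an order of magnitude or so.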
Conclusions
Gauge and matter condensates are very important in the analysis of realistic string-derived models, since these typically involve hidden gauge groups which communicate with the observable sector, although rather feebly. The calculations presented in this paper address some of the difficulties that are likely to arise in typical models, such as the degeneracy of the vacuum and the need to explore the superpotential to high orders. The latter requires an understanding of the modular invariance properties of the superpotential to all orders in nonrenormalizable terms. We have made the above points explicit in the particular case of the flipped SU(5) model with SO(10) hidden gauge group and matter fields in the fundamental representation. In this case the gaugino condensate has to be solved for self-consistently together with the matter condensates, a feature that may only be appreciated in a calculation in an explicit string model.
, i.e., examples with only one modulus field and the case where all matter fields of the hidden gauge group acquire masses through trilinear superpotential terms, most of the stringy information has been incorporated. Specifically, for the case of SU(N) with M flavors in the fundamental representation, when the mass M of the matter fields satisfies M ≪ Λ_G, one finds [15]
and T p is a definite product of p moduli fields of the theory. Supplementing this observation with the requirement of modular invariance, we find that the coefficient of a given nonrenormalizable term can be summed up into a modular covariant function made up of products of Dedekind η functions of the moduli fields involved. A modular invariant gaugino condensate will also have to be reconciled with the nontrivial modular properties of nonrenormalizable terms.
F_{1,3} and F̄_5 are the SU(5) fields which get vevs V, V̄ respectively.
in (2.2). It is important to estimate the sizes of the various entries in M_ij. With the above mentioned scales we obtain: φ ∼ 10^{−1}M, η ∼ 10^{−5}M, ε_{4,5} ∼ 10^{−7}M, and δ ∼ 10^{−2}.
(2.7c) can be solved if (i) Π_34 = 0 and/or (ii)
ε_4 T̄_3 T_4 ∼ 10^{−7}M T̄_3 T_4. The presence of such a term breaks the degeneracy of the vacuum: Π_44 = 0, Π_34 ≠ 0 versus Π_44 ≠ 0, Π_34 = 0. One can repeat the above calculations taking ε_4 ≠ 0 (and ε_5 = 0 for simplicity), and the only change in Eqs. (2.7) is in the right-hand side of Eq. (2.7c), which becomes −(…) η^4(T_1)η^6(T_2).
The modular weights of fields in free-fermionic models are given in [24]. Here we also use T̄_3, φ_45, φ_+ : (−1/2, −1/2) and T_4 : (0, −1/2). Note that we only consider the modular properties under the T_{1,2} moduli fields. The modular transformations under T_3 are less obvious and are still under investigation.
Acknowledgments: This work has been supported in part by DOE grant DE-FG05-91-ER-40633.
D. Amati et al., Phys. Rep. 162 (1988) 169 (review).
K. Konishi, Phys. Lett. B 135 (1984) 439.
L. Dixon, in Proceedings of The Rice Meeting, ed. B. Bonner and H. Miettinen (World Scientific, 1990), p. 811, and references therein.
I. Affleck, M. Dine, and N. Seiberg, Nucl. Phys. B 256 (1985) 557, and references therein.
S. Ferrara, D. Lüst, A. Shapere, and S. Theisen, Phys. Lett. B 225 (1989) 363;
S. Ferrara, D. Lüst, and S. Theisen, Phys. Lett. B 233 (1989) 147; Phys. Lett. B 242 (1990) 39.
J. L. Lopez and D. V. Nanopoulos, Phys. Lett. B 251 (1990) 73.
G. G. Ross, Grand Unified Theories (Benjamin/Cummings, MA, 1983);
C. Kounnas, A. Masiero, D. V. Nanopoulos, and K. A. Olive, Grand Unification With and Without Supersymmetry and Cosmological Implications (World Scientific, Singapore, 1984).
E. Farhi and L. Susskind, Phys. Rep. 74 (1981) 277 (review).
A. Lahanas and D. V. Nanopoulos, Phys. Rep. 145 (1987) 1.
J. P. Derendinger, L. Ibáñez, and H. Nilles, Phys. Lett. B 155 (1985) 65;
M. Dine, R. Rohm, N. Seiberg, and E. Witten, Phys. Lett. B 156 (1985) 55.
J. Ellis, J. L. Lopez, and D. V. Nanopoulos, Phys. Lett. B 247 (1990) 257.
S. Kalara, J. Lopez, and D. V. Nanopoulos, Phys. Lett. B 245 (1990) 421; Nucl. Phys. B 353 (1991) 650.
See e.g., J. Schwarz, Caltech preprint CALT-68-1581 (1989).
I. Antoniadis, J. Ellis, A. B. Lahanas, and D. V. Nanopoulos, Phys. Lett. B 241 (1990) 24;
C. P. Burguess and F. Quevedo, Phys. Rev. Lett. 64 (1990) 2611;
S. Ferrara, N. Magnoli, T. R. Taylor, and G. Veneziano, Phys. Lett. B 245 (1990) 409;
H. P. Nilles and M. Olechowsky, Phys. Lett. B 248 (1990) 268;
P. Binetruy and M. K. Gaillard, Phys. Lett. B 253 (1991) 119;
M. Cvetic et al., Nucl. Phys. B 361 (1991) 194.
D. Lüst and T. Taylor, Phys. Lett. B 253 (1991) 335.
B. de Carlos, J. A. Casas, and C. Muñoz, Phys. Lett. B 263 (1991) 248;
J. A. Casas and C. Muñoz, CERN preprint CERN-TH.6187/91.
J. Louis, SLAC preprint SLAC-PUB-5645 (1991).
S. Kalara, J. L. Lopez, and D. V. Nanopoulos, in preparation.
J. Ellis, S. Kalara, K. Olive, and C. Wetterich, Phys. Lett. B 228 (1989) 264.
I. Antoniadis, C. Bachas, and C. Kounnas, Nucl. Phys. B 289 (1987) 87;
I. Antoniadis and C. Bachas, Nucl. Phys. B 298 (1988) 586;
H. Kawai, D. C. Lewellen, and S. H.-H. Tye, Phys. Rev. Lett. 57 (1986) 1832; Phys. Rev. D 34 (1986) 3794; Nucl. Phys. B 288 (1987) 1;
R. Bluhm, L. Dolan, and P. Goddard, Nucl. Phys. B 309 (1988) 330;
H. Dreiner, J. L. Lopez, D. V. Nanopoulos, and D. Reiss, Nucl. Phys. B 320 (1989) 401.
I. Antoniadis, J. Ellis, J. Hagelin, and D. V. Nanopoulos, Phys. Lett. B 231 (1989) 65.
G. Leontaris, J. Rizos, and K. Tamvakis, Phys. Lett. B 243 (1990) 220.
V. Kaplunovsky, Nucl. Phys. B 307 (1988) 145;
I. Antoniadis, J. Ellis, R. Lacaze, and D. V. Nanopoulos, CERN preprint CERN-TH.6136/91 (to appear in Phys. Lett. B).
S. Kalara, J. L. Lopez, and D. V. Nanopoulos, Texas A&M University preprint CTP-TAMU-46/91 (to appear in Phys. Lett. B).
J. Rizos and K. Tamvakis, Phys. Lett. B 251 (1990) 369.
J. L. Lopez and D. V. Nanopoulos, Phys. Lett. B 256 (1991) 150.
On invariant subspaces of dissipative operators in a space with indefinite metric
6 Dec 2004
A A Shkalikov
The theorem on the existence of maximal nonnegative invariant subspaces for a special class of dissipative operators in Hilbert space with indefinite inner product is proved in the paper. It is shown in addition that the spectra of the restrictions of these operators on the corresponding invariant subspaces lie in the closed upper half-plane. The obtained theorem is a generalization of well-known results of L. S. Pontrjagin, H. K. Langer, M. G. Krein and T. Ja. Azizov devoted to this subject.
Introduction
Let H be a separable Hilbert space with usual scalar product (x, y) and indefinite one [x, y] = (Jx, y), where J = P + − P − , and P + , P − are the orthoprojectors such that P + P − = P − P + = 0, P + + P − = I and I is the identity operator. Obviously, J admits such a representation if and only if J = J * and J 2 = I. The space {H, J} is called the Pontrjagin space and is denoted by Π κ , if either rank P + or rank P − is finite and equals κ. In the sequel, we work only with operators A for which the sum D + ⊕ D − is dense in H, where D ± = D(A) ∩ H ± . We will always assume that D(A) = D + ⊕ D − , otherwise we can consider the restriction of A to this domain. In this case the operator A can be represented as an operator matrix with respect to the decomposition H = H + ⊕ H − :
(1) A = [P_+AP_+  P_+AP_−; P_−AP_+  P_−AP_−] := [A_11  A_12; A_21  A_22].
The vectors x = x_+ + x_− ∈ H with x_± ∈ H_± are identified in this representation with the columns x = (x_+; x_−), and the action of A is determined by the formula
Ax = A(x_+; x_−) = (A_11x_+ + A_12x_−; A_21x_+ + A_22x_−),  x_+ ∈ D_+,  x_− ∈ D_−.
Pontrjagin [2] proved in 1944 the following fundamental result.
Pontrjagin Theorem. Let A be a self-adjoint operator in the space {H, J} and rank P + = κ < ∞. Then there exists a maximal nonnegative A-invariant subspace L (dim L = κ) such that the spectrum of the restriction A| L lies in the closed upper half-plane.
Starting from paper [2], the problem of the existence of maximal definite invariant subspaces has been a keystone of operator theory in Pontrjagin and Krein spaces. Krein [3] obtained an analogue of the Pontrjagin theorem for unitary operators in Π_κ and developed a new approach to the problem in question. An important generalization of the Pontrjagin theorem was obtained by Langer [4, 5] and Krein [6]. Let us present here the result of [5]. Later on, theorems on the existence of A-invariant subspaces were obtained for other classes of operators. Krein brought into consideration and investigated the class of definite operators, and Langer [7, 8] proved the theorem on the existence of maximal definite invariant subspaces for a wider class of the so-called definitizable operators and obtained for these operators an analogue of the spectral theorem. Krein and Langer [9] and, independently, Azizov [10] showed that the Pontrjagin theorem remains valid (as before, in the Pontrjagin space {H, J} with rank P_+ = κ < ∞) if the condition that A be self-adjoint is replaced by the condition that it be maximal dissipative. Later on, Azizov and Khoroshavin [11] proved an analogue of the Langer theorem for a class of nonstretching operators in Krein space, and Azizov [12, Ch. 2] proved that the Langer theorem [5] remains valid for maximal dissipative operators in Krein space. A direct and shorter proof of the latter result was suggested by the author [13].
The Langer condition D(A) ⊃ H_+ (or equivalently the boundedness of the operators A_11, A_21) is rather restrictive. In particular, often in concrete problems (see [14, 15], for example) the operator A_21 is unbounded.
Main result
The goal of the present paper is to obtain a generalization of the Pontrjagin-Krein-Langer-Azizov theorem by dropping the Langer condition D(A) ⊃ H_+, i.e. the condition that the operators A_11 and A_21 be bounded. The essence of the assumptions formulated below can be expressed as follows: the operator A_22 is dominant with respect to the interlacing operators A_21 and A_12, and the so-called transfer function of the operator matrix (1) is bounded. Let us formulate the main result. First we make two remarks on the conditions of the theorem. It is useful to keep in mind that condition (ii) is valid if the operator A_21 is closable (hence the adjoint operator A*_21 is densely defined) and D(A*_21) ⊃ D(A*_22) (it is known [1, Ch. 5] that the adjoint of the dissipative operator −A_22 is densely defined). In fact, if the condition D(A*_21) ⊃ D(A*_22) holds, then F*(µ) = A*_21(A*_22 − µ)^{−1} is
Preliminary propositions
We shall premise several lemmas to the proof of Theorem. Lemmas 4 and 5 play the key role.
Lemma 1. A subspace L is maximal nonnegative (uniformly positive) if and only if it can be represented in the form
(2) L = { x = x_+ + Kx_+ : x_+ ∈ H_+ },
where K : H_+ → H_− is a linear operator with the norm ‖K‖ ≤ 1 (‖K‖ < 1). A nonnegative subspace L is maximal if and only if there exists no nonzero element y_+ ∈ H_+ such that [x, y_+] = (x, y_+) = 0 for all x ∈ L.
Proof. (See [2].) Assuming that L is a nonnegative subspace in {H, J}, we have
‖x_−‖ ≤ ‖x_+‖ for all x = x_+ + x_− ∈ L.
Then the restriction Q = P_+|_L : L → P_+(L) is a bijection, and ‖Q^{−1}‖ ≤ √2. Therefore,
L = { x = x_+ + Kx_+ : x_+ ∈ P_+(L) },  K = P_−Q^{−1}.
Here ‖K‖ ≤ 1 if L is nonnegative and ‖K‖ < 1 if L is uniformly positive. Obviously, L is maximal if and only if P_+(L) = H_+. The second assertion of the Lemma is also obvious.
The operator K participating in representation (2) is said to be the angle operator of the subspace L.
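As a concrete finite-dimensional illustration of the angle operator (a sketch in pure Python, with H = C^2, H_± one-dimensional, J = diag(1, −1); the scalar k plays the role of K):

```python
# Krein space toy model: H = C^2, H_+ = span{e1}, H_- = span{e2}, J = diag(1, -1).
# A subspace with angle operator k, L = { (x_plus, k*x_plus) }, is nonnegative
# for the indefinite inner product [x, y] = (Jx, y) exactly when |k| <= 1.

def indefinite_form(x, y):
    """[x, y] = (Jx, y) with J = diag(1, -1) and (u, v) = sum u_i * conj(v_i)."""
    return x[0] * y[0].conjugate() - x[1] * y[1].conjugate()

def subspace_vector(k, x_plus):
    """Element x = x_plus + K x_plus of the subspace L with angle operator k."""
    return (x_plus, k * x_plus)

# |k| <= 1: every vector of L satisfies [x, x] = (1 - |k|^2)|x_plus|^2 >= 0.
for k in [0, 0.5, 1.0, -0.9j]:
    for x_plus in [1, 2 - 1j, -3j]:
        x = subspace_vector(k, x_plus)
        assert indefinite_form(x, x).real >= -1e-12

# |k| > 1: the subspace contains negative vectors, so it is not nonnegative.
x = subspace_vector(2.0, 1.0)
assert indefinite_form(x, x).real < 0
```

Maximality of L in this picture corresponds to the angle operator being defined on all of H_+, matching the second assertion of Lemma 1.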
(3) G = A_12(A_22 − µ)^{−1},  F = (A_22 − µ)^{−1}A_21,  S = A_11 − A_12F
be bounded for some µ ∈ ρ(A_22). Then A is closable and its closure is given by the relation
(4) A = µ + [1  G; 0  1] [S − µ  0; 0  A_22 − µ] [1  0; F  1].
More precisely, the domain and the action of A are defined by the relations
D(A) = { x = x_+ + x_− ∈ H : x_+ ∈ H_+, Fx_+ + x_− ∈ D_− ⊂ D(A_22) },
A(x_+; x_−) = ( Sx_+ + G(A_22 − µ)(Fx_+ + x_−) ; (A_22 − µ)(Fx_+ + x_−) + µx_− ).
Proof. (Cf. [14].) One can easily check the validity of representation (4) for x = x_+ + x_− ∈ D(A). Since the operators G, S, F are bounded, we conclude that the first and the third matrices on the right-hand side of (4) are invertible, and the second one represents a closed operator. Therefore, A is closable and representation (4) is valid. The description of D(A) and the formula for the action of A follow from (4).
Lemma 3. Suppose that −A_22 is a maximal dissipative operator in H_− and G(µ) = A_12(A_22 − µ)^{−1} is compact for some µ ∈ C_+. Then ‖G(µ)‖ → 0 as Im µ → +∞.
Proof. It follows from the equation
G(λ) = G(µ) + (λ − µ)G(µ)(A_22 − λ)^{−1}
that G(λ) is compact for all λ ∈ C_+. Further, we make use of the relation
G(µ) = G(i)(A_22 + i)(A_22 − µ)^{−1}.
The norm of the operator function (A_22 + i)(A_22 − µ)^{−1} is uniformly bounded in the half-plane Im µ ≥ ε. The compact operator G(i) : H_− → H_+ can be approximated with arbitrary accuracy in the operator norm topology by a finite rank operator. Hence it suffices to prove that ‖Q(A_22 + i)(A_22 − µ)^{−1}‖ → 0 as Im µ → ∞ for any operator Q of rank 1, namely, for Q = (·, ϕ_−)ϕ_+ where ϕ_± ∈ H_±. Observe that Q can be approximated with arbitrary accuracy in the operator norm topology by an operator of the form Q_0 = (·, ϕ_0)ϕ_+ where ϕ_0 ∈ D(A*_22) (we already noted that the adjoint of a dissipative operator is densely defined). Now, the operator Q_0(A_22 + i) is bounded, and ‖(A_22 − µ)^{−1}‖ ≤ 1/Im µ for µ ∈ C_+. This gives the assertion of the Lemma.
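The resolvent estimate used above, ‖(A_22 − µ)^{−1}‖ ≤ 1/Im µ for µ ∈ C_+ when −A_22 is dissipative, can be checked on a small example (a sketch; the 2×2 matrix below is a hypothetical choice with Im(A_22 x, x) ≤ 0):

```python
import math

# A22 chosen so that Im(A22 x, x) <= 0, i.e. -A22 is dissipative:
# Im(A22 x, x) = -|x1|^2 - 2|x2|^2 + 0.3*Im(x2*conj(x1)) <= 0 for all x.
A22 = [[-1j, 0.3], [0.0, -2j]]

def resolvent(A, mu):
    """(A - mu)^{-1} for a 2x2 complex matrix."""
    B = [[A[0][0] - mu, A[0][1]], [A[1][0], A[1][1] - mu]]
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [[B[1][1] / det, -B[0][1] / det], [-B[1][0] / det, B[0][0] / det]]

def opnorm2(A):
    """Spectral norm (largest singular value) of a 2x2 complex matrix."""
    # Largest eigenvalue of the Hermitian Gram matrix A^H A equals sigma_max^2.
    a = abs(A[0][0]) ** 2 + abs(A[1][0]) ** 2
    c = abs(A[0][1]) ** 2 + abs(A[1][1]) ** 2
    b = A[0][0].conjugate() * A[0][1] + A[1][0].conjugate() * A[1][1]
    lam_max = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + abs(b) ** 2)
    return math.sqrt(lam_max)

# ||(A22 - mu)^{-1}|| <= 1/Im(mu) throughout the open upper half-plane.
for mu in [1j, 0.5 + 2j, -1.0 + 0.25j]:
    assert opnorm2(resolvent(A22, mu)) <= 1.0 / mu.imag + 1e-12
```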
(5) L = K(S − µ + GL),
and then the restriction A| L is represented in the form
(6) A|_L = Q^{−1}(S + GL)Q,  where Q = P_+|_L : L → H_+,  ‖Q^{−1}‖ ≤ 1 + ‖K‖.
Proof. Let L = A_21 + (A_22 − µ)K be defined on D_+ and admit a bounded closure. Then
(A_22 − µ)^{−1}Lx_+ = (F + K)x_+ ∈ D_−
for all x_+ ∈ H_+. Recalling the description of D(A) obtained in Lemma 2 we find L ⊂ D(A) and
(7) (A − µ)(x_+; Kx_+) = ((S − µ + GL)x_+; Lx_+).
Conversely, if L ⊂ D(A), then (F + K)x_+ ∈ D_−. Hence L = (A_22 − µ)(F + K) is defined on the whole of H_+. Since the operator A : H → H is closed, its restriction A : L → H is also closed. The latter operator is defined on the whole of L; therefore, it is bounded by virtue of the closed graph theorem, and it follows from (7) that the operator L is bounded. Now, suppose that the subspace L is A-invariant. Then given x_+ ∈ H_+ there exists an element y_+ ∈ H_+ such that
(8) ((S − µ + GL)x_+; Lx_+) = (y_+; Ky_+).
This implies equation (5). Conversely, suppose that L ⊂ D(A) and equation (5) holds. Then relation (8) is valid, and it is equivalent to the A-invariance of the subspace L. The last assertion of the Lemma follows from (7). We remark only that the estimates ‖Q‖ ≤ 1 and ‖Q^{−1}‖ ≤ 1 + ‖K‖ follow from the definition of Q.
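In the simplest case where all four blocks of (1) are scalars (H = C^2), the invariance condition of Lemma 4 reduces to a quadratic (Riccati) equation for the angle operator; the following sketch uses a hypothetical dissipative example:

```python
import cmath

# Scalar blocks of the matrix (1); for this choice JA = [[1j, 1], [0, 1j]]
# satisfies Im(JAx, x) >= |x|^2/2 > 0, so A is dissipative in {H, J}.
a11, a12, a21, a22 = 1j, 1.0, 0.0, -1j

# Invariance of L = span{(1, k)} under A means A(1, k) is proportional to (1, k):
#   a21 + a22*k = k*(a11 + a12*k)  <=>  a12*k^2 + (a11 - a22)*k - a21 = 0,
# the scalar counterpart of equation (5).
b, c = a11 - a22, -a21
disc = cmath.sqrt(b * b - 4 * a12 * c)
roots = [(-b + disc) / (2 * a12), (-b - disc) / (2 * a12)]

# The maximal nonnegative invariant subspace is picked out by the root with |k| <= 1.
k = min(roots, key=abs)
assert abs(k) <= 1.0

# The eigenvalue of A on L lies in the closed upper half-plane, as expected.
eig = a11 + a12 * k
assert abs((a21 + a22 * k) - eig * k) < 1e-12
assert eig.imag >= 0
```

The discarded root (here k = −2i, with |k| = 2) is the angle operator of the complementary maximal nonpositive invariant subspace.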
(11) [x, x] ≥ 2ε(π‖A_+‖)^{−1}‖x‖²  for x ∈ L,
where A_+ = A|_L. If for some µ ∈ C_+ the estimate ‖G(µ)‖ = γ < 1 holds, then
(12) ‖A_+‖ ≤ 2(‖S‖ + γ(1 − γ)^{−1}(‖S‖ + |µ|)),  S = A_11 − G(µ)A_21.
Proof. We already noted, referring to [12, Ch. 2, § 2], that A is maximal dissipative in {H, J} under the assumptions of this Lemma (in fact, if JA admits a nontrivial dissipative extension in H, then the condition D(A) ⊃ H_+ implies that −A_22 admits nontrivial dissipative extensions in H_−, and we come to a contradiction). The other assertions of the Lemma, except the last estimates, were proved in the author's paper [13]. To prove estimate (12) we have to partially repeat the arguments from [13]. We do this in several steps.
Step 1. It follows from the condition D(A) ⊃ H_+ that the operator AP_+ is bounded. Take a number a > 2‖AP_+‖. Denote, as before, G(λ) = A_12(A_22 − λ)^{−1} and show that
(13) ‖G(λ)‖ ≤ 2 + 2aε^{−1}  for all λ ∈ C_+.
Consider the operator
T(λ, a) = [ia  A_12; 0  −A_22 + λ] = (JA + ia) + (λ − ia)P_− − JAP_+.
The operators JA + ia and T(λ, a) + JAP_+ are maximal dissipative for λ ∈ C_+. Moreover, we have the estimate
Im(T(λ, a)x, x) ≥ (a/2)‖x‖²  for x ∈ D(T) = H_+ ⊕ D(A_22),
provided that λ ∈ C_a^+ = {λ : Im λ ≥ a}. Therefore T(λ, a) is invertible for λ ∈ C_a^+ and ‖T^{−1}(λ, a)‖ ≤ 2a^{−1}. Since G(λ) = aP_+T^{−1}(λ, a)P_−, we have ‖G(λ)‖ ≤ 2 for λ ∈ C_a^+. Using the equation G(λ) = G(λ + ia) + iaG(λ + ia)(A_22 − λ)^{−1} we get estimate (13). Here we keep in mind that Im(A_22x, x) ≤ −ε(x, x) and hence ‖(A_22 − λ)^{−1}‖ ≤ ε^{−1} for λ ∈ C_+. Step 2. It follows from representation (4) that λ ∈ ρ(A) ∩ C_+ if and only if the operator S(λ) − λ is invertible. Since the operator A_11 is bounded, we have (A_11 − λ)^{−1} = −λ^{−1} + O(λ^{−2}) as λ → ∞. Keeping in mind that A_21 is bounded and G(λ) is subject to estimate (13) in C_+, we find
(14) (S(λ) − λ)^{−1} = (A_11 − λ)^{−1}(1 − G(λ)A_21(A_11 − λ)^{−1})^{−1} = −λ^{−1} + O(λ^{−2}),
as λ ∈ C_+ and λ → ∞. Hence the spectrum of A in C_+ is bounded.
Step 3. Take a contour Γ_+ consisting of the segment [−R, R] and the half C_R of the circle in C_+ of radius R centered at zero. Taking R sufficiently large we may ensure that the spectrum of A in C_+ lies inside Γ_+. Consider the Riesz projector (10). Obviously, the subspace L = Q_+(H) is A-invariant, and the restriction A_+ = A|_L is a bounded operator. We can replace A by A_+ in (10). Then we have, as R → ∞,
(2πi)^{−1} ∫_{C_R} (λ − A_+)^{−1} dλ = (2πi)^{−1} ∫_{C_R} (λ^{−1} + O(λ^{−2})) dλ = (1/2)I + O(R^{−1}).
Let x = Q_+x ∈ L and y = (λ − A_+)^{−1}x. Then
[x, x] = Re[Q_+x, x] = (1/2)[x, x] + (1/2π) ∫_{−R}^{R} Im[y, (λ − A)y] dλ + O(R^{−1}).
For λ ∈ ℝ,  Im[y, (λ − A)y] = Im[Ay, y] ≥ ε(y, y).
Passing to the limit as R → ∞ and taking into account the inequality
‖x‖ = ‖(λ − A_+)y‖ ≤ (|λ| + ‖A_+‖)‖y‖,
we get
[x, x] ≥ (1/π) ∫_{−∞}^{∞} Im[Ay, y] dλ ≥ (ε/π)‖x‖² ∫_{−∞}^{∞} (‖A_+‖ + |λ|)^{−2} dλ = 2ε(π‖A_+‖)^{−1}‖x‖².
This proves that L is a uniformly positive subspace and that estimate (11) is valid.
Step 4. Let us prove that L is a maximal uniformly positive subspace. It easily follows from (4) that
(λ − A)^{−1} = [(λ − S(λ))^{−1}  *; *  *],
where by * we denote operators whose representation is not used in the sequel. For z ∈ H_+ we have
(15) ((λ − A)^{−1}z, z) = ((λ − S(λ))^{−1}z, z).
Integrating the function (2πi)^{−1}((λ − A)^{−1}z, z) along the contour Γ_+, using relations (14) and (15) for the integrals along the half-circle C_R, and passing to the limit as R → ∞, we obtain
(Q_+z, z) = [Q_+z, z] = (1/2)(z, z) + (1/2π) ∫_{−∞}^{∞} Im[Ay, y] dλ,
where y = (λ − A)^{−1}z. Consequently, 2(Q_+z, z) ≥ (z, z) for all z ∈ H_+. Hence there is no nonzero element z ∈ H_+ such that z ⊥ L = Q_+(H). By Lemma 1 the subspace L is maximal positive.
Step 5. Finally, let us prove estimate (12) provided that ‖G(µ)‖ = γ < 1 for some µ ∈ C_+. Since L ⊂ D(A) and L is an A-invariant subspace, by Lemma 4 we have
L = K(S − µ + GL),
where K is the angle operator of the subspace L, and L = A_21 + (A_22 − µ)K. Consequently,
L = (1 − KG)^{−1}K(S − µ),  ‖L‖ ≤ (1 − γ)^{−1}(‖S‖ + |µ|).
From (6) we get the inequality
‖A_+‖ ≤ 2(‖S‖ + γ‖L‖),
which implies estimate (12). The Lemma is proved.
Lemma 6. Let a sequence of linear operators T_n in the space H converge in the operator norm topology to an operator T. Suppose that the spectrum of T in a domain Ω ⊂ C is discrete. If σ(T_n) ∩ Ω = ∅ for all n, then σ(T) ∩ Ω = ∅.
Proof. Given µ ∈ Ω ∩ ρ(T), there exists a neighbourhood U_δ(µ) such that U_δ(µ) ⊂ ρ(T) ∩ ρ(T_n) for all sufficiently large n, and
(16) ‖(T_n − λ)^{−1} − (T − λ)^{−1}‖ → 0 as n → ∞
uniformly for λ ∈ U_δ(µ). Take an arbitrary contour Γ in Ω which does not intersect the discrete spectrum of T. Taking a finite subcover from the cover {U_δ(µ)}_{µ∈Γ}, using relation (16) and keeping in mind that the spectra of the operators T_n are empty inside Γ, we obtain that the Riesz projector of T along the contour Γ equals zero (since the corresponding Riesz projectors of T_n equal zero). Consequently, the spectrum of T inside Γ is empty. By the arbitrary choice of Γ the same is true inside Ω.
Proof of Theorem
Take a system of linearly independent elements {ϕ_n}_1^∞ belonging to D_+ = H_+ ∩ D(A) such that the linear span of this system is dense in H_+. Denote by H_n^+ the finite-dimensional subspaces with the bases {ϕ_k}_1^n and by P_n the orthoprojectors onto these subspaces. Consider the operator
A_{n,ε} = [P_nA_11P_n  P_nA_12; A_21P_n  A_22] + iεJ,  ε > 0,
where µ ∈ C_+ and, here and in equation (17), G = G(iε + µ), S_n = P_n(iε + S(iε + µ))P_n, L_n = A_21P_n + (A_22 − iε − µ)K_n.
We remark that we can write G in equation (17) instead of G_n = P_nG, since K_nG_n = K_nG.
It will be shown that one can pass to the limit in the weak operator topology in equation (17) choosing a subsequence n_k → ∞. The limit equation
(18) (1 − KG)L = K(S − µ)
holds with ‖K‖ < 1, L = A_21 + (A_22 − iε − µ)K, ‖L‖ ≤ const. By virtue of Lemmas 1 and 4 the subspace L with the angle operator K is (A + iεJ)-invariant and maximal uniformly positive. We remark that one can hardly realize a direct proof of the analogue of Lemma 5 for operators of the form A + iεJ, ε > 0, since there is no simple way to get representation (15) for S(λ) − λ. Further, the operators K, G, L, and S in equation (18) depend on ε. Choosing a proper subsequence ε_n → 0 one can pass again to the limit in the weak operator topology and obtain equation (18) with an operator K, ‖K‖ ≤ 1, and the operators L = A_21 + (A_22 − µ)K and S = S(µ). Here L is bounded, and by Lemmas 1 and 4 the subspace L with the angle operator K is A-invariant and maximal nonnegative. On this way we also have to prove that the spectra of the restrictions of A + iεJ onto the invariant subspaces L_ε lie in the open upper half-plane C_+ for ε > 0 and in its closure for ε = 0. From now on we realize the above plan.
By virtue of Lemma 3 we can choose a number µ ∈ C_+ such that ‖G(iε + µ)‖ < 1/2 for all 0 ≤ ε ≤ 1. The operator function iε + S(iε + µ) is continuous for 0 ≤ ε ≤ 1 in the operator norm topology. Hence there is a constant c such that
(19) ‖iε + S(iε + µ)‖ ≤ c  for all 0 ≤ ε ≤ 1.
It follows from (17) that ‖L_n‖ ≤ 2(c + |µ|). We remark that
[x, x] ≥ δ(x, x)
for all x ∈ L_n if and only if ‖K_n‖ ≤ 1 − δ, where K_n is the angle operator of the subspace L_n. By Lemma 5 there is a number δ > 0 such that ‖K_n‖ ≤ 1 − δ. The operators K_n and L_n acting from H_n^+ into H_− can be treated as operators from H_+ into H_− after their zero extension to the orthogonal complement H_+ ⊖ H_n^+. Certainly, the norms of these operators are preserved. Since H_+ and H_− are separable spaces and ‖K_n‖ ≤ 1 − δ, one can choose a weakly convergent subsequence K_{n_j} ⇀ K (here we make use of the fact that the unit ball of a separable Hilbert space is compact in the weak topology). Since the norms of the operators {L_{n_j}} are bounded by the constant 2(c + |µ|), one can choose from the sequence {L_{n_j}} a weakly convergent subsequence. Hence there are indices m = n_k → ∞ such that K_m ⇀ K, L_m ⇀ L. Let us prove that
(20) L = A_21 + (A_22 − iε − µ)K.
We have (A_22 − iε − µ)^{−1}L_m = F(µ + iε)P_m + K_m ⇀ F(µ + iε) + K. Consequently,
(A_22 − iε − µ)^{−1}L = F(µ + iε) + K,
and this implies relation (20). Now we remark that the weak convergence K_n ⇀ K implies K_nG ⇀ KG and GK_n ⇀ GK for any bounded operator G. One cannot guarantee the convergence K_nGL_n ⇀ KGL provided only that the sequences {K_n} and {L_n} are weakly convergent. However, the convergence K_nGL_n ⇀ KGL does hold if G is a compact operator (in this case the convergence holds even in the operator norm topology). In fact, a compact operator G can be approximated with arbitrary accuracy in the operator norm topology by a finite rank operator, therefore it suffices to prove the convergence for an operator G = (·, v)u of rank 1. In the latter case we have, for all x ∈ H_+ and y ∈ H_−,
(K_nGL_nx, y) = (L_nx, v)(K_nu, y) → (Lx, v)(Ku, y) = (KGLx, y).
Hence, one can pass to the weak limit in (17) and obtain the relation
(21) (1 − KG)L = K(S(iε + µ) + iε − µ),
where L = A_21 + (A_22 − iε − µ)K is a bounded operator and ‖K‖ ≤ 1 − δ with some δ > 0. As we already mentioned, by Lemmas 1 and 4 the subspace L with the angle operator K is (A + iεJ)-invariant and maximal uniformly positive. The restriction A_+^ε = (A + iεJ)|_L is a bounded uniformly dissipative operator on the subspace L with the inner product [·, ·], which is equivalent to the usual inner product in L, since the subspace L is uniformly positive. Consequently the spectrum of this restriction lies in the open upper half-plane C_+. Now, we shall pass to the limit choosing a subsequence ε_n → 0. Observe that
(22) G(µ + iε) = G(µ) + iεG(µ)(A_22 − iε − µ)^{−1},  S(µ + iε) = S(µ) − iεG(µ + iε)F(µ).
Since ‖G(µ + iε)‖ < 1/2 for ε ≥ 0, it follows from (21) that
‖L(ε)‖ ≤ 2(‖S(µ)‖ + ε(1 + ‖G(µ)‖‖F(µ)‖) + |µ|),
i.e., the norms of L(ε) are uniformly bounded for 0 < ε ≤ 1. Take any sequence K = K(ε_n) and choose a weakly convergent subsequence K(ε_{n_j}). Further, choose a weakly convergent subsequence from the sequence L(ε_{n_j}). On this way we find numbers ε_m → 0 such that K_m = K(ε_m) ⇀ K, L_m = L(ε_m) ⇀ L. We can repeat the arguments applied while making the first limit procedure and obtain the relation L(µ) = A_21 + (A_22 − µ)K.
Taking into account relations (22) and recalling that the operator G(µ) is compact, we can pass to the weak limit in relation (21) as ε_m → 0. Thus we obtain that relation (21) holds with ε = 0 and operators K, L with ‖K‖ ≤ 1, ‖L‖ ≤ const. By Lemma 4 the subspace L with the angle operator K is A-invariant and maximal nonnegative. From Lemma 4 we also have
A|_L = Q^{−1}(S(µ) + G(µ)L(µ))Q.
It was already proved that the spectra of the operators T(ε) = S(µ + iε) + G(µ + iε)L(µ + iε) + iε lie in C_+ for each ε > 0. It follows from relations (22) that T(ε) − iε = T(0) + C, where C is a compact operator. Hence the spectrum of T(0) in the half-plane Im λ ≤ −ε is discrete. Here ε > 0 is an arbitrary number; therefore the spectrum of T(0) in the open lower half-plane is discrete. From (22) we obtain T(ε_n) ⇒ T(0), taking into account that K_nGL_n ⇒ KGL if G is a compact operator. By Lemma 6 the spectrum of T(0) (and hence the spectrum of A|_L) lies in the closed upper half-plane.
It remains to prove that A is a maximal dissipative operator in the space {H, J}. It follows from (4) that
A − µ + iαP_+ = [1  G(µ); 0  1][S(µ) − µ + iα  0; 0  A_22 − µ][1  0; F(µ)  1],
provided that µ ∈ C_+. Here the number α > Im µ can be chosen sufficiently large to guarantee the invertibility of the operator S(µ) − µ + iα. In this case the operator J(A − µ + iα) is dissipative in H and invertible. Therefore the dissipative operator JA is maximal dissipative. This ends the proof of the Theorem.
It is called the Krein space if both latter numbers are infinite. A subspace L in {H, J} is called nonnegative (uniformly positive) if [x, x] ≥ 0 (≥ ε(x, x) with some ε > 0 independent of x) for all x ∈ L. A nonnegative (uniformly positive) subspace L is said to be maximal if there are no nontrivial nonnegative (uniformly positive) extensions of this subspace. Maximal nonpositive and uniformly negative subspaces are defined analogously. Let us represent the space H in the form H = H_+ ⊕ H_−, where H_± = P_±(H) are the ranges of the orthogonal projectors P_±. Consider a linear operator A in H with domain of definition D(A). The spectrum and the resolvent set of A are denoted further by σ(A) and ρ(A). An operator A is called dissipative in H if Im(Ax, x) ≥ 0 for all x ∈ D(A). A dissipative operator is called maximal dissipative if there are no nontrivial dissipative extensions of this operator. It is known [1, Ch. V, § 3.10] that the latter condition holds if and only if ρ(A) ⊃ C_−, where C_− is the open lower half-plane (we denote by C_+ the open upper half-plane). An operator A is called dissipative (maximal dissipative) in the space {H, J} if JA is dissipative (maximal dissipative) in H. Analogously, A is called symmetric (self-adjoint) in the space {H, J} if JA is symmetric (self-adjoint) in the space H.
Theorem. Let A be a dissipative operator in Krein space {H, J} and let its domain D(A) = D_+ ⊕ D_− be dense in H = H_+ ⊕ H_−. Let (1) be the matrix representation of A in H_+ ⊕ H_− and let the following conditions hold:
(i) the operator −A_22 is maximal dissipative in the space H_− (and hence the resolvent (A_22 − µ)^{−1} exists for all µ ∈ C_+);
(ii) the operator F(µ) = (A_22 − µ)^{−1}A_21 admits a bounded closure for some (and hence for all) µ ∈ C_+;
(iii) the operator G(µ) = A_12(A_22 − µ)^{−1} is compact for some (and hence for all) µ ∈ C_+;
(iv) the operator S(µ) = A_11 − A_12(A_22 − µ)^{−1}A_21 admits a bounded closure for some (and hence for all) µ ∈ C_+.
Then the closure A of the operator A is maximal dissipative in the space {H, J}, and there exists a maximal nonnegative A-invariant subspace L such that the spectrum of the restriction A|_L lies in the closed upper half-plane. Moreover, L ⊂ D(A), i.e. the operator A|_L is bounded.
defined on the whole of H_−, and the adjoint of this operator is the closure of the densely defined operator F(µ) = (A_22 − µ)^{−1}A_21. Consequently, both operators F*(µ) and F(µ) are bounded. The second remark concerns condition (i), which has not appeared in the formulations of the previous theorems on this subject. However, it follows from [12, Ch. 2, Th. 2.9] that if D(A) ⊃ H_+, then A is maximal dissipative in {H, J} if and only if −A_22 is maximal dissipative in H_−. Hence conditions (i)-(iv) are weaker than those in the theorems of Pontrjagin, Krein, Langer and Azizov.
Lemma 2. Let A be an operator with dense domain D(A) = D_+ ⊕ D_−, let the resolvent set ρ(A_{22}) be nonempty, and the operators
Lemma 4. Let the conditions of Lemma 2 be preserved for an operator A, as well as the notations (3) for the operators G, F, S, and Ā for the closure of A. Then a subspace L = {x = x_+ + Kx_+ : x_+ ∈ H_+} with the angle operator K : H_+ → H_− lies in D(Ā) if and only if the operator L = A_{21} + (A_{22} − μ)K : H_+ → H_− is well defined on D_+ and admits a bounded closure. If the latter condition holds, then the subspace L is Ā-invariant if and only if
Lemma 5. Let A be a uniformly dissipative operator in the space {H, J}, i.e.
(9) Im[Ax, x] ≥ ε(x, x) for x ∈ D(A), where ε > 0. Let D(A) ⊃ H_+, and let −A_{22} be a maximal dissipative operator in H_−. Then the operator A is maximal dissipative in {H, J}, the real axis belongs to the resolvent set ρ(A), and its spectrum in C_+ is bounded. If a Jordan contour Γ_+ surrounds the set σ(A) ∩ C_+ and

(10) Q_+ = \frac{1}{2\pi i} \oint_{\Gamma_+} (\lambda - A)^{-1}\, d\lambda

is the corresponding Riesz projector, then L = Q_+(H) is an A-invariant maximal uniformly positive subspace. Moreover,
acting in the space H_n = H_+^n ⊕ H_− with the domain D(A_{n,ε}) = H_+^n ⊕ D_− ⊂ D(A) = D_+ ⊕ D_−. Let us sketch the plan of the proof. The conditions of Lemma 5 are fulfilled for the operators A_{n,ε}, since Im[A_{n,ε}x, x] = Im[Ax, x] + ε(x, x) for x ∈ D(A_{n,ε}). By virtue of Lemmas 4 and 5 there exist maximal uniformly positive subspaces L_n with the angle operators K_n : H_+^n → H_−, ‖K_n‖ < 1, such that

(17) (1 − K_n G) L_n = K_n (S_n − μ),
Langer Theorem. Let A be a self-adjoint operator in a Krein space {H, J} and D(A) ⊃ H_+ (the latter condition holds if and only if A admits the representation (1) where A_{11} and A_{12} are bounded). If in addition the operator A_{12} = P_+AP_− is compact, then there exists a maximal A-invariant subspace L such that the spectrum of the restriction A|_L lies in the closed upper half-plane.
T. Kato. Perturbation theory for linear operators (2nd edition). Springer-Verlag, New York, 1976.
L. S. Pontrjagin. Hermitian operators in spaces with indefinite metric // Izv. Acad. Nauk SSSR, Ser. Matem., 8 (1944), P. 243-280 (in Russian).
M. G. Krein. On an application of the fixed point principle in the theory of operators in a space with indefinite metric // Uspekhi Matem. Nauk, 50 (1950), P. 180-190 (in Russian).
H. Langer. On J-hermitian operators // Dokl. Acad. Nauk SSSR, 134 (2) (1962), P. 263-266 (in Russian).
H. Langer. Eine Verallgemeinerung eines Satzes von L. S. Pontrjagin // Math. Ann., 152 (5) (1963), S. 434-436.
M. G. Krein. On a new application of the fixed point principle in the theory of operators in a space with indefinite metric // Dokl. Acad. Nauk SSSR, 154 (5) (1966), P. 1023-1026 (in Russian).
H. Langer. Invariant subspaces of a class of operators in spaces with indefinite metric // J. Funct. Anal., 19 (3) (1975), P. 232-241.
H. Langer. Spectral functions of definitizable operators in Krein spaces // Lect. Notes in Math., 948 (1982), P. 1-46.
M. G. Krein, H. Langer. On definite subspaces and generalized resolvents of Hermitian operators in spaces Π_κ // Funkt. Anal. i Prilozh., 5 (2) (1971), P. 59-71; 5 (3) (1971), P. 54-69 (in Russian). English transl. in Funct. Anal. and Appl., 5 (1971).
T. Ja. Azizov. Dissipative operators in Hilbert space with indefinite metric // Izv. Acad. Nauk SSSR, Ser. Mat., 37 (3) (1973), P. 639-662 (in Russian). English transl. in Math. USSR Izv., 7 (1973).
T. Ya. Azizov, S. A. Khoroshavin. On invariant subspaces of operators acting in a space with indefinite metric // Funkt. Anal. i Prilozhenija, 14 (4) (1980), P. 1-7 (in Russian). English transl. in Funct. Anal. and Appl., 14 (4) (1980).
T. Ja. Azizov, I. S. Iokhvidov. Linear operators in spaces with indefinite metric. John Wiley, Chichester, 1989.
A. A. Shkalikov. On the existence of invariant subspaces of dissipative operators in spaces with indefinite metric // Fund. i prikl. matem., 5 (2) (1999), P. 627-635 (in Russian).
F. V. Atkinson, H. Langer, R. Mennicken, A. Shkalikov. The essential spectrum of some matrix operators // Math. Nachr., 167 (1994), P. 5-20.
R. Mennicken, A. A. Shkalikov. Spectral decomposition of symmetric operator matrices // Math. Nachr., 179 (1996), P. 259-273.
| zyda_arxiv-1616000 |
7 Mar 2018
Iván Schmidt
Marat Siddikov
Departamento de Física, Universidad Técnica Federico Santa María
Centro Científico-Tecnológico de Valparaíso
Casilla 110-V, Valparaíso, Chile
Digluon contribution to J/ψ production

The paper is structured as follows. In Section II we discuss the framework used for the evaluations. In Section III we introduce the parametrizations of gluon PDFs and DPDFs used for our estimates. In Section IV we present our
In this paper we study the contribution of the double parton distributions of gluons to charmonium production. Despite being suppressed in the heavy quark mass limit, numerically this contribution gives a sizeable correction to the leading order kT-factorization result in LHC kinematics, due to the enhancement of gluonic densities in the small Bjorken-xB limit. This contribution is not suppressed at large J/ψ momenta pT and thus presents one of the complementary mechanisms of charmonium production in this kinematics.

I. INTRODUCTION

The description of charmonium hadroproduction has remained one of the long-standing puzzles almost since its discovery. The large mass m_c of the charm quark inspired applications of perturbative methods and consideration in the formal limit of infinitely heavy quark mass [1]. However, in reality the coupling α_s(m_c) ∼ 1/3 is not very small, so potentially some mechanisms suppressed in the large-m_c limit might numerically give a sizeable contribution.

The Color Singlet Model (CSM) of charmonium production [2-4] assumes that the dominant mechanism is gluon-gluon fusion supplemented by the emission of an additional gluon, as shown in diagram 1 of Figure 1. Early evaluations in the collinear factorization framework led to incorrect results at large transverse momenta p_T of charmonia and to premature conclusions about the inability of the CSM to describe the experimental data. The failure of the expansion over α_s, due to the milder suppression of higher order terms at large p_T [5,6] and the co-production of additional quark pairs [7,8], motivated the introduction of phenomenological color octet contributions [9,10]. The modern NRQCD formulation [11-14,17] constructs a systematic expansion over the Nonperturbative Matrix Elements (NMEs) of different charmonia states, which can be extracted from fits of experimental data.
However, the currently extracted matrix elements depend significantly on the technical details of the fit [14], which casts doubt on the universality of the extracted NMEs. At the same time, it was suggested that the results of the CSM evaluated in the kT-factorization framework (kT-CSM for short) might agree better with experimental data at large pT if the feed-down contributions from χc and ψ(2S) decays are taken into account [18-23]. The inclusion of color octet contributions in the kT-CSM framework improves the agreement with data [24]. However, the uncertainty of the unintegrated parton distribution function (uPDF) is large in this kinematics, and for this reason the situation with NRQCD contributions still remains ambiguous [24,25]. It was suggested that at large pT a sizeable contribution might come from other mechanisms, such as gluon fragmentation into J/ψ [26-29].

The aforementioned analyses did not take into account that in the small Bjorken-xB limit, as we approach the saturation regime, the gluon densities grow rapidly, and more than one gluon from each hadron might interact with the heavy quarks. In this paper we focus on the first correction, which probes the Double Parton Distribution Function (DPDF) of gluons. According to recent theoretical [30-34] and experimental [35-43] studies, these objects might have a rich internal structure due to possible correlations between partons [44], and in view of the various sum rules which the DPDFs should satisfy [33]. The DPDFs are usually studied in double parton scattering (DPS) [30,32,45-47] and double Drell-Yan processes [48]. However, the DPDFs might also contribute to single hadron production, which is usually interpreted as being due to single-gluon distributions only. In the case of charmonium production, as was noticed in [49], the DPDFs might contribute already at the same order in O(α_s), as shown in diagram 2 of Figure 1.
The relative contribution of the DPDF-induced process grows with energy and in LHC kinematics gives a sizeable contribution, up to twenty per cent of the theoretical prediction for prompt J/ψ hadroproduction. At large momenta this contribution is suppressed due to the additional convolution of the third gluon with the kT-dependent gluon PDF. In this paper we suggest another mechanism, with emission of an additional gluon, as shown in diagram 3 of Figure 1. Formally, the cross-section of this process is suppressed by O(α_s) compared to that of diagram 1; however, as we will see below, it gives a sizeable contribution, on par with the contribution of diagram 2. In contrast to the mechanism of [49], our contribution is not suppressed in the large-pT kinematics, and for this reason should be taken into account in comparisons with experimental data. If one or both hadrons are polarized, the interference with the leading order diagram gives rise to transverse spin asymmetries, which have been studied in detail theoretically [50-52] and experimentally [53]. In this paper we focus on the case of unpolarized protons, for which the interference term does not contribute.
Figure 1. Diagram (1): The conventional Color Singlet Model (CSM) gluon-gluon fusion mechanism of J/ψ production. In our evaluations we also take into account feed-down contributions from χc and ψ(2S) decays, whose production amplitudes have a similar topology (no gluon emission from the quark loop in the case of χc). Diagram (2): the higher twist mechanism suggested in [49]. Diagram (3): Example of the subprocess in which digluons may produce the same final state as the CSM process (this paper, see Section II for details). The two-gluon contribution may stem from either hadron. In all three diagrams summation over all permutations of gluons in the heavy quark loop is implied.
numerical results and finally in Section V we draw conclusions.
II. EVALUATION OF THE AMPLITUDES
The cross-section of charmonium production in the kT-factorization framework reads [18-23]

d\sigma = \frac{\alpha_s^3(\mu)}{512\,\pi^4 \hat{s}^2} \sum_{\text{polarization, spin, color}} \left| M_{gg\to g\,J/\psi}\big(\hat{s}, \hat{t}\big) \right|^2 F(x_1, k_{1\perp})\, F(x_2, k_{2\perp})\, d^2k_{1\perp}\, d^2k_{2\perp}\, dy\, d^2p_T\, dy_g,   (1)

where we introduced the shorthand notation \hat{s} = x_1 x_2 s; the variables (y, p_T) are the rapidity and transverse momentum of the produced charmonium, (y_g, k_{g\perp}) are the rapidity and transverse momentum of the emitted gluon, and (x_i, k_{i\perp}) are the light-cone fractions and transverse momenta of the incident gluons, with

x_{1,2} = \frac{\sqrt{M_{J/\psi}^2 + p_T^2}}{\sqrt{s}}\, e^{\pm y} + \frac{|k_{g\perp}|}{\sqrt{s}}\, e^{\pm y_g},   (2)

k_{g\perp} = p_T - k_{1\perp} - k_{2\perp}.   (3)
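The kinematics (2)-(3) is simple enough to evaluate directly. The sketch below (illustrative numbers, not a fit; the collider energy is an assumption) computes the light-cone fractions from the charmonium transverse mass and the massless emitted gluon.

```python
import math

# Light-cone fractions of the incident gluons, following eq. (2):
# transverse mass of the J/psi plus the emitted massless gluon term,
# each scaled by sqrt(s) and the corresponding rapidity exponential.
M_JPSI = 3.097          # GeV, J/psi mass
SQRT_S = 7000.0         # GeV, assumed collider energy (illustration)

def light_cone_fractions(pT, y, kg_perp, y_g):
    mT = math.hypot(M_JPSI, pT)                    # sqrt(M^2 + pT^2)
    x1 = (mT * math.exp(+y) + kg_perp * math.exp(+y_g)) / SQRT_S
    x2 = (mT * math.exp(-y) + kg_perp * math.exp(-y_g)) / SQRT_S
    return x1, x2

# typical LHC-like point: central J/psi with pT = 10 GeV, soft extra gluon
x1, x2 = light_cone_fractions(pT=10.0, y=0.0, kg_perp=2.0, y_g=0.5)
```

At such a point both fractions come out at the level of x ~ 10^-3, which is exactly the small-xB region where the gluon-density enhancement discussed in the introduction sets in.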
F(x_i, k_{i\perp}) in (1) are the unintegrated gluon parton distributions (uPDFs). The parton-level amplitude M_{gg\to gJ/\psi} in (1) is given by the sum of diagrams with all possible permutations of the gluon vertices in the heavy quark loop in diagram 1 of Figure 1. We fix the renormalization scale μ as \mu = \sqrt{M_{J/\psi}^2 + p_T^2}. For the J/ψ vertex the standard approximation is to neglect the internal motion of the quarks (formally an O(α_s(m_c)) effect) and use [2-4]

J_{^3S_1} = g\, \hat{\epsilon}_{J/\psi}\, \frac{\hat{p} + m_c}{2},   (4)

where \epsilon_{J/\psi} is the polarization vector of the J/ψ and the normalization constant g is fixed from the leptonic decay width \Gamma_{J/\psi\to e^+e^-},

g = \sqrt{\frac{3\, m_{J/\psi}\, \Gamma_{J/\psi\to e^+e^-}}{16\pi\, \alpha_{em}^2\, Q_c^2}}, \qquad Q_c = \frac{2}{3}.   (5)
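Plugging measured numbers into the normalization condition (5) gives a quick sanity check of its magnitude. The inputs below (leptonic width ~5.55 keV, α_em ≈ 1/137) are PDG-like values assumed for illustration.

```python
import math

# Normalization constant g of the J/psi vertex from the leptonic width, eq. (5).
# Numerical inputs are assumptions for illustration (PDG-like values).
M_JPSI = 3.097            # GeV, J/psi mass
GAMMA_EE = 5.55e-6        # GeV, leptonic width (~5.55 keV)
ALPHA_EM = 1.0 / 137.036  # fine-structure constant
Q_C = 2.0 / 3.0           # charm quark charge

g = math.sqrt(3 * M_JPSI * GAMMA_EE / (16 * math.pi * ALPHA_EM**2 * Q_C**2))
```

With these inputs g comes out at roughly 0.2 GeV, which sets the overall normalization of every amplitude in this section.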
For the gluon polarization vectors we used the light-cone gauge A⁺ = 0, in which the parton distributions have a simple probabilistic interpretation. The evaluation of the Feynman diagrams is straightforward in the kT-factorization framework and was done with the help of the FeynCalc package [15,16]. The code for the evaluation of the cross-section (1) is available on demand. The process which we study in this paper has the same final state as the CSM mechanism and may interfere with it, as shown in diagram 1 of Figure 2. As was discussed in detail in [50-52], the interference contributes only if one of the incident hadrons is transversely polarized, and leads to a transverse spin asymmetry sensitive to the three-gluon correlators suggested in [54]. This asymmetry has been measured by the PHENIX collaboration [53], and its very small value, compatible with zero, implies that the three-gluon correlators are negligible. For the same reason we omit the interference diagrams shown in Figure 2: they might contribute only if both hadrons are polarized. For unpolarized protons, the digluons should stem from the same hadron in the amplitude and in its conjugate, as shown in diagram 3 of Figure 2. The diagram with the digluon stemming from the lower proton differs from diagram 3 in Figure 2 only by the inversion of the sign of the rapidity y of the J/ψ, so the final result has the symmetric form
d\sigma_{J/\psi}(y) = d\sigma_{gg+g\to J/\psi\, g}(y) + d\sigma_{gg+g\to J/\psi\, g}(-y),   (6)
where d\sigma_{gg+g\to J/\psi\, g} is given by

d\sigma_{gg+g\to J/\psi\, g} = \frac{\alpha_s^4(\mu)}{8192\,\pi^8 \hat{s}^2} \sum_{\text{polarization, spin, color}} \left| M_{gg+g\to g\, J/\psi} \right|^2 F(x_{1a}, k_{1a\perp}, x_{1b}, k_{1b\perp}, \Delta_\perp)\, F(x_2, k_{2\perp})\, d^2k_{1a\perp}\, d^2k_{1b\perp}\, d^2\Delta_\perp\, d^2k_{2\perp}\, dy\, d^2p_T\, dy_g\, \frac{dx_{1a}}{x_{1a}},   (7)

and the unintegrated double gluon distribution F(x_{1a}, k_{1a\perp}, x_{1b}, k_{1b\perp}, \Delta_\perp) which appears in (7) is defined as [30,55]

F(x_{1a}, k_{1a\perp}, x_{1b}, k_{1b\perp}, \Delta_\perp) = \int d^2 y_\perp\, e^{i\Delta_\perp\cdot y_\perp} \int \frac{dz_1^-}{2\pi}\, \frac{dz_2^-}{2\pi} \int d^2 z_1^\perp\, d^2 z_2^\perp\, e^{i(x_1 z_1^- + x_2 z_2^-) p^+}\, e^{-i k_1^\perp\cdot z_1^\perp - i k_2^\perp\cdot z_2^\perp}\, \langle p |\, O_a(0, z_1)\, O_a(y_\perp, z_2)\, | p \rangle,   (8)

O_a(y, z) = \Pi^{jj'}_a\, G^{+j'}\!\left(y - \frac{z}{2}\right) G^{+j}\!\left(y + \frac{z}{2}\right),   (9)

and the matrix \Pi^{jj'}_a for the gluon polarization labels a = g, Δg, δg is given by

\Pi^{jj'}_g = \delta^{jj'}, \qquad \Pi^{jj'}_{\Delta g} = i\epsilon^{jj'}, \qquad \Pi^{jj'}_{\delta g} = \tau^{jj',kk'},   (10)

\tau^{jj',kk'} = \frac{1}{2}\left(\delta^{jk}\delta^{j'k'} + \delta^{jk'}\delta^{j'k} - \delta^{jj'}\delta^{kk'}\right).
Diagram 3 in Figure 1 is not gauge covariant on its own and should be supplemented with additional diagrams which contribute at the same order in O(α_s(m_c)), as shown in Figure 3. Diagram 2 in that list corresponds to a feed-down contribution to the gluon uPDF from the digluon uPDF. Diagram 3 is a radiative correction to the process suggested in [49] (diagram 2 in Figure 1); a sum over diagrams with emission from any of the three t-channel gluons is assumed. Diagram 4, a process without the three-gluon vertex, gives a nonzero contribution due to the nontrivial color structure of the gauge group: the color independent part of the diagram with the inverted direction of the quark loop contributes with the opposite sign, so the sum yields a nonzero contribution proportional to the combination in eq. (11). In all diagrams summation over all permutations of the gluon vertices in the quark loop is implied. The evaluation is quite straightforward and was done with the FeynCalc package [56,57]. An important technical observation, which allows us to simplify the evaluations significantly, is that the hard coefficient functions of all the diagrams in Figure 3 effectively reduce to a sum over the permutations of the four gluons in the V_{gggg→J/ψ} vertex, as shown in Figure 4. This allows us to perform the symmetrization numerically instead of evaluating explicitly all possible interference terms which stem from the amplitude and its conjugate. In the numerical evaluations, of particular concern are the diagrams which stem from the interferences of diagram 3 in Figure 3: when squared (see diagram 1 in Figure 5), in the collinear limit they yield, together with the virtual corrections shown in diagram 1' of the same figure, the familiar gluon splitting kernel P_gg [58-60].
When diagram 3 interferes with the other diagrams, as shown in diagrams (2, 3) of Figure 5, it might additionally contain collinear and soft divergences at certain points. Special care is needed near the points where the different singularities start overlapping and pinch the integration contour: in this case individual diagrams might contain real singularities. Due to the complex structure of the integrand, demonstrating the analytic cancellation of the singularities is challenging, and for this reason we used a numerical method which is described in Section IV below. Numerically these diagrams give a very small contribution (see e.g. Figure 6) and could be disregarded. This happens because the average rapidities of the emitted gluons in the amplitude and in its conjugate are different, and only a very small domain in the configuration space contributes to the interference.
\sim\; \mathrm{tr}\left(t^a t^b t^c t^d\right) - \mathrm{tr}\left(t^d t^c t^b t^a\right) = \frac{i}{8}\left(f^{abe} d^{cde} + f^{cde} d^{abe}\right).   (11)
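The index structure of relation (11) can be checked numerically with explicit Gell-Mann matrices. One caveat on conventions: with the common normalization tr(t^a t^b) = δ^{ab}/2 and with f, d defined from traces as below, the proportionality constant comes out as i/4; the i/8 quoted above presumably reflects the normalization adopted in the paper. The sketch verifies the combination f^{abe}d^{cde} + d^{abe}f^{cde} exactly.

```python
import itertools
import numpy as np

# Gell-Mann matrices; fundamental generators T^a = lambda^a / 2,
# normalized so that tr(T^a T^b) = delta^{ab} / 2.
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1;   lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7][0, 0] = lam[7][1, 1] = 1 / np.sqrt(3); lam[7][2, 2] = -2 / np.sqrt(3)
T = lam / 2

# structure constants from traces:
# f^{abc} = -2i tr([T^a, T^b] T^c),  d^{abc} = 2 tr({T^a, T^b} T^c)
f = np.zeros((8, 8, 8))
d = np.zeros((8, 8, 8))
for a, b, c in itertools.product(range(8), repeat=3):
    f[a, b, c] = (-2j * np.trace((T[a] @ T[b] - T[b] @ T[a]) @ T[c])).real
    d[a, b, c] = (2 * np.trace((T[a] @ T[b] + T[b] @ T[a]) @ T[c])).real

# antisymmetric part of the 4-generator trace vs the f d + d f contraction
err = 0.0
for a, b, c, e in itertools.product(range(8), repeat=4):
    lhs = np.trace(T[a] @ T[b] @ T[c] @ T[e]) - np.trace(T[e] @ T[c] @ T[b] @ T[a])
    rhs = 0.25j * (f[a, b] @ d[c, e] + d[a, b] @ f[c, e])
    err = max(err, abs(lhs - rhs))
```

The nonvanishing of this combination for generic color indices is exactly what makes diagram 4 of Figure 3 survive the sum over quark-loop orientations.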
III. PARAMETRIZATION OF GLUON PARTON DISTRIBUTIONS
For the evaluation of the unintegrated gluon parton densities F(x, k_⊥) we use the Kimber-Martin-Ryskin (KMR) parametrization [61] with the collinear HERAPDF NLO [62,63] gluon density used as input. The color structure of the double gluon distribution in the general case is given by [30]

F^{aa',bb'} = \frac{1}{64}\Big[\, {}^{1}F\, \delta^{aa'}\delta^{bb'} - \frac{\sqrt{8}}{3}\, {}^{A}F\, f^{aa'c} f^{bb'c} + \frac{3\sqrt{8}}{5}\, {}^{S}F\, d^{aa'c} d^{bb'c} + 2\sqrt{10}\, {}^{10}F \big(t_{10}^{aa',bb'} + t_{10^*}^{aa',bb'}\big) + 4\sqrt{27}\, {}^{27}F\, t_{27}^{aa',bb'} \,\Big],   (12)
where t_i are the generators of the color group in representation i = 10, \bar{10}, 27. In what follows, for the sake of simplicity, we assume that the color structure is given by only the first term ∼ δ^{aa'}δ^{bb'}, tacitly omitting the other terms with nontrivial color structure. This choice does not violate any of the positivity bounds mentioned in [45]. For the kinematic-dependent terms, we assume that the emission of both gluons is uncorrelated and use the model suggested in [30] with additional kT-dependence,

F(x_{1a}, k_{1a\perp}, x_{1b}, k_{1b\perp}, \Delta_\perp) = F(x_{1a}, k_{1a\perp})\, F(x_{1b}, k_{1b\perp})\, e^{-B_g \Delta_\perp^2},   (13)

where the value of the diffractive slope B_g is taken as the sum of the values of the gluon GPD slope [64],

B_g \approx \left(2 \times 2.58 + 0.15 \ln(1/x_{1a}) + 0.15 \ln(1/x_{1b})\right)\, \mathrm{GeV}^{-2}.   (14)
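A small numerical sketch of the factorized model (13)-(14). Only the slope B_g and the Gaussian Δ⊥ suppression follow the formulas above; the single-gluon uPDF here is a toy placeholder, not the actual KMR parametrization.

```python
import math

def B_g(x1a, x1b):
    # diffractive slope of eq. (14), in GeV^-2
    return 2 * 2.58 + 0.15 * math.log(1 / x1a) + 0.15 * math.log(1 / x1b)

def toy_uPDF(x, kT):
    # placeholder single-gluon uPDF ~ x^{-0.3} exp(-kT^2); illustration only
    return x ** -0.3 * math.exp(-kT ** 2)

def double_gluon(x1a, k1a, x1b, k1b, delta):
    # factorized model (13): product of single-gluon uPDFs times the
    # Gaussian suppression in the transverse-momentum imbalance Delta_perp
    return (toy_uPDF(x1a, k1a) * toy_uPDF(x1b, k1b)
            * math.exp(-B_g(x1a, x1b) * delta ** 2))

slope = B_g(1e-3, 1e-3)   # ~7.2 GeV^-2 at x ~ 10^-3
```

The logarithmic growth of B_g toward small x makes the Δ⊥ distribution progressively narrower, i.e. the two gluons become increasingly correlated in impact parameter as the energy grows.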
For the case of the double parton scattering process pp → h₁h₂X, this parametrization leads to the so-called "pocket formula"

d\sigma_{pp\to h_1 h_2 X} = \frac{d\sigma_{pp\to h_1 X}\; d\sigma_{pp\to h_2 X}}{\sigma_{eff}},   (15)

where the cross-section σ_eff is a functional of the impact-parameter profile of the parton distribution [30]. The experimental estimates of σ_eff from DPS depend on the hadrons h₁, h₂, with typical values σ_eff ≈ 6-15 mb [37,39,65].
In the forward limit (Δ_⊥ → 0), which is much better understood due to the smaller number of variables, after integration over the transverse momenta k_{i⊥} the parametrization (13) yields for the collinear digluon distributions

G\left(x_1, x_2, \mu_F^2\right) = G\left(x_1, \mu_F^2\right) G\left(x_2, \mu_F^2\right).   (16)

Recently, a model of collinear digluon densities which takes into account all known sum rules and evolution equations was suggested in [33]. While in general their result might differ quite significantly from the factorized form (16), for large factorization scales μ_F² ≳ M²_{J/ψ} and small x_{1,2} ≪ 1 the factorized form (16) holds within 10%. This result agrees with the more general observation of [32] that evolution to the higher scales relevant for quarkonium production tends to wash out any correlations present at low scales.
IV. NUMERICAL RESULTS
As was discussed in Section II, the amplitude might contain soft and collinear divergences at certain points. It is quite challenging to demonstrate analytically that the cancellation indeed happens; for this reason we use a numerical method suggested in [66,67] and implemented in the SecDec package [68-70], widely used for numerical multiloop evaluations. This method consists in treating the Feynman regularizer +iδ as a finite parameter,

S(p) = \frac{\hat{p} - m}{p^2 - m^2 + i\delta}.   (17)

Similarly, we treat +iδ as a finite parameter in the gluon propagator, complemented with the Mandelstam-Leibbrandt prescription [71,72],

\frac{1}{k^+} \to \frac{k^-}{k^+ k^- + i\delta}.   (18)
As was discussed in [66,67], the infrared and collinear singularities in individual diagrams translate into poles in δ, which however should eventually cancel in the infrared-stable result. In Figure 7 we plot the ratio

R(\delta) = \frac{d\sigma(\delta)}{d\sigma(5 \times 10^{-3})}   (19)

as a function of the parameter δ. The stability of the result for small δ ensures that the result is free of any infrared divergences.
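The logic of the stability test (19) can be illustrated on a toy integral where the exact δ → 0 limit is known: the regularized 1/(x + iδ) on a symmetric interval tends to −iπ (the principal-value part cancels), and the ratio R(δ) flattens to 1 once δ is small. This is only an illustration of the method, not the paper's actual multidimensional integrand.

```python
import numpy as np

def I(delta, n=200001):
    # trapezoid rule for the delta-regularized integral of 1/(x + i*delta)
    # over [-1, 1]; the exact delta -> 0 limit is -i*pi.
    x = np.linspace(-1.0, 1.0, n)
    v = 1.0 / (x + 1j * delta)
    h = x[1] - x[0]
    return h * (v.sum() - 0.5 * (v[0] + v[-1]))

ref = I(5e-3)                        # reference delta, as in eq. (19)
ratios = [abs(I(dlt) / ref) for dlt in (2e-3, 1e-3, 5e-4)]
max_dev = max(abs(r - 1.0) for r in ratios)   # flatness of R(delta)
```

A spurious, uncancelled singularity would instead show up as a residual ln δ (or 1/δ) drift of R(δ), which is exactly what the scan in Figure 7 is designed to exclude.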
In Figure 8 we compare the contribution of our mechanism with the kT-CSM results for prompt J/ψ production. As we can see, the contribution is enhanced at large pT, where it gives a sizeable contribution to the total result. However, in the pT-integrated cross-section, which is dominated by the small-pT domain, the considered contribution is small (≲ 20 per cent even at forward rapidities) and by order of magnitude agrees with the mechanism of [49]. For the sake of definiteness, we fixed the renormalization and factorization scales as \mu_R = \mu_F = \sqrt{p_\perp^2 + M_{J/\psi}^2} and estimated the higher order loop corrections by varying the scale in the range (0.5, 2)\sqrt{p_T^2 + M_{J/\psi}^2}, in agreement with [73]. However, we would like to mention that for the three-gluon vertex this prescription might not be very accurate, since the effective scale in this case is controlled by the smallest virtuality [74] (which means that the loop corrections could be large).
V. CONCLUSIONS
In this paper we studied the contribution of the double parton gluon densities to J/ψ production. Though formally suppressed in the heavy quark mass limit, the suggested mechanism is significant and constitutes up to twenty per cent of the produced J/ψ, on par with the contribution suggested in [49]. The suggested mechanism is not suppressed at large quarkonium momenta pT, and for this reason presents one of the possible mechanisms of charmonium production in this kinematics.
The considered contribution grows with energy, and we expect that a similar trend holds for the higher order multigluon contributions. At sufficiently small xB we eventually approach the saturation regime, which is usually described by phenomenological small-xB models with built-in saturation, like the dipole model [75-78] or the CGC [79,80]. These models can describe J/ψ production [81-83]; however, the relation of the nonperturbative dipole cross-section to single and multiple gluon distributions in the DGLAP framework might not be straightforward and relies on model-dependent assumptions [75-78,84]. In cases when the model admits an interpretation in terms of gluon distributions, the multigluon distributions are usually hard-coded in the underlying model, frequently as a simple product of single-gluon uPDFs in impact parameter space [76]. At the same time, recent theoretical [30-34] and experimental [35-43] studies suggest that gluon DPDFs might be much more complicated objects due to possible correlations between partons [44], and in view of the various sum rules which the DPDFs should satisfy [33]. In contrast to the small-x models, the suggested approach does not use the eikonal approximation and can be used with arbitrary gluon DPDFs extracted from DPS experiments.

(Figure 8 caption, continued:) The error bars illustrate the uncertainty due to higher order loop corrections and are estimated by varying the renormalization scale μR in the range μR ∈ (0.5, 2) × \sqrt{p_\perp^2 + M_{J/\psi}^2}. Experimental points (green boxes) are from ATLAS [38]. Bottom: ratio of our mechanism to the Color Singlet Mechanism as a function of the J/ψ transverse momentum pT.
Figure 2. Diagram (1): Interference of the LO and NLO correlators, which contributes only when the upper hadron is polarized and leads to the charmonium spin asymmetry studied by PHENIX [53]. Diagram (2): Contribution which probes the three-gluon correlators in both hadrons and contributes only if both hadrons are polarized. Diagram (3): Contribution from gluon DPDFs which gives a nonzero result even if both incident hadrons are unpolarized. In all diagrams summation over all permutations of the gluon vertices in the quark loops is implied.
Figure 3. Diagram (1): A digluon correction to the conventional Color Singlet Model (CSM) J/ψ production. Diagrams (2)-(4): see the description in the text.
Figure 4. The sum of the hard coefficient functions of the diagrams in Figure 3 effectively corresponds to the four gluon-J/ψ vertex V_{gggg→J/ψ}. Summation over all possible permutations P_i of the four gluons is implied.
Figure 5. Diagram (1): Example of a diagram which contains a double log and which after resummation contributes to the gluon splitting kernel Pgg. Another contribution to Pgg comes from the virtual corrections (quark or gluon self-energy insertions into the gluon lines), as shown in diagram (1'). Diagrams (2) and (3): examples of diagrams which possess collinear and soft divergences. Though formally these diagrams should be taken into account, as explained in the text, numerically they give a very small contribution. In all three diagrams summation over all permutations of gluons in the quark loop is implied.
Figure 6. The relative contribution of diagram (2) from Figure 5 to the total result.
Figure 7. Dependence of the ratio R defined in (19) on the parameter δ. The stability of the result at small δ is a numerical manifestation that the collinear divergences cancel in the full sum.
Figure 8. (color online) Top: Cross-section of prompt J/ψ production (sum of the direct and feed-down contributions) evaluated in the CSM framework (upper red band) and the digluon correction (lower blue band).
ACKNOWLEDGEMENTS

We thank our colleagues at UTFSM university for encouraging discussions. Our special thanks go to Stanley Brodsky, who suggested the topic of this research and participated in some discussions. We also thank Sergey Baranov for discussions and technical clarifications regarding references [18,21]. This work was supported in part by Fondecyt (Chile) grants 1140390 and 1140377, by Proyecto Basal FB 0821 (Chile), and by CONICYT grant PIA ACT1406 (Chile). Powered@NLHPC: This research was partially supported by the supercomputing infrastructure of the NLHPC (ECM-02). We also thank Yuri Ivanov for technical support of the USM HPC cluster where part of the evaluations was done.
J. G. Korner and G. Thompson, Phys. Lett. B 264, 185 (1991).
C. H. Chang, Nucl. Phys. B 172, 425 (1980).
R. Baier and R. Ruckl, Phys. Lett. 102B, 364 (1981).
E. L. Berger and D. L. Jones, Phys. Rev. D 23, 1521 (1981).
S. J. Brodsky and J. P. Lansberg, Phys. Rev. D 81, 051502 (2010) [arXiv:0908.0754 [hep-ph]].
P. Artoisenet, J. M. Campbell, J. P. Lansberg, F. Maltoni and F. Tramontano, Phys. Rev. Lett. 101, 152001 (2008) [arXiv:0806.3282 [hep-ph]].
P. Artoisenet, J. P. Lansberg and F. Maltoni, Phys. Lett. B 653, 60 (2007) [hep-ph/0703129 [HEP-PH]].
A. V. Karpishkov, M. A. Nefedov and V. A. Saleev, Phys. Rev. D 96, no. 9, 096019 (2017) [arXiv:1707.04068 [hep-ph]].
P. L. Cho and A. K. Leibovich, Phys. Rev. D 53, 150 (1996) [hep-ph/9505329].
P. L. Cho and A. K. Leibovich, Phys. Rev. D 53, 6203 (1996) [hep-ph/9511315].
G. T. Bodwin, E. Braaten and G. P. Lepage, Phys. Rev. D 51, 1125 (1995); Erratum: Phys. Rev. D 55, 5853 (1997) [hep-ph/9407339].
F. Maltoni, M. L. Mangano and A. Petrelli, Nucl. Phys. B 519, 361 (1998) [hep-ph/9708349].
N. Brambilla, E. Mereghetti and A. Vairo, Phys. Rev. D 79, 074002 (2009); Erratum: Phys. Rev. D 83, 079904 (2011) [arXiv:0810.2259 [hep-ph]].
Y. Feng, J. P. Lansberg and J. X. Wang, Eur. Phys. J. C 75, no. 7, 313 (2015) [arXiv:1504.00317 [hep-ph]].
V. Shtabovenko, R. Mertig and F. Orellana, Comput. Phys. Commun. 207, 432-444 (2016) [arXiv:1601.01167].
R. Mertig, M. Böhm and A. Denner, Comput. Phys. Commun. 64, 345-359 (1991).
N. Brambilla et al., Eur. Phys. J. C 71, 1534 (2011) [arXiv:1010.5827 [hep-ph]].
S. P. Baranov, Phys. Rev. D 66, 114003 (2002).
B. A. Kniehl, D. V. Vasin and V. A. Saleev, Phys. Rev. D 73, 074022 (2006) [hep-ph/0602179].
B. A. Kniehl, V. A. Saleev and D. V. Vasin, Phys. Rev. D 74, 014024 (2006) [hep-ph/0607254].
S. P. Baranov and A. Szczurek, Phys. Rev. D 77, 054016 (2008) [arXiv:0710.1792 [hep-ph]].
S. P. Baranov, A. V. Lipatov and N. P. Zotov, Phys. Rev. D 85, 014034 (2012) [arXiv:1108.2856 [hep-ph]].
V. A. Saleev, M. A. Nefedov and A. V. Shipilova, Phys. Rev. D 85, 074013 (2012) [arXiv:1201.3464 [hep-ph]].
S. P. Baranov and A. V. Lipatov, Phys. Rev. D 96, no. 3, 034019 (2017) [arXiv:1611.10141 [hep-ph]].
S. P. Baranov, A. V. Lipatov and N. P. Zotov, Eur. Phys. J. C 75, no. 9, 455 (2015) [arXiv:1508.05480 [hep-ph]].
G. T. Bodwin, U. R. Kim and J. Lee, JHEP 1211, 020 (2012) [arXiv:1208.5301 [hep-ph]].
G. T. Bodwin, H. S. Chung, U. R. Kim and J. Lee, Phys. Rev. Lett. 113, no. 2, 022001 (2014) [arXiv:1403.3612 [hep-ph]].
E. Braaten, M. A. Doncheski, S. Fleming and M. L. Mangano, Phys. Lett. B 333, 548 (1994) [hep-ph/9405407].
E. Braaten and T. C. Yuan, Phys. Rev. D 52, 6627 (1995) [hep-ph/9507398].
M. Diehl, D. Ostermeier and A. Schafer, JHEP 1203, 089 (2012); Erratum: JHEP 1603, 001 (2016) [arXiv:1111.0910 [hep-ph]].
M. Rinaldi, S. Scopetta and V. Vento, Phys. Rev. D 87, 114021 (2013) [arXiv:1302.6462 [hep-ph]].
M. Diehl, T. Kasemets and S. Keane, JHEP 1405, 118 (2014) [arXiv:1401.1233 [hep-ph]].
K. Golec-Biernat, E. Lewandowska, M. Serino, Z. Snyder and A. M. Stasto, Phys. Lett. B 750, 559 (2015) [arXiv:1507.08583 [hep-ph]].
M. Rinaldi and F. A. Ceccopieri, Phys. Rev. D 95, no. 3, 034040 (2017) [arXiv:1611.04793 [hep-ph]].
. V Khachatryan, CMS CollaborationarXiv:1406.0484JHEP. 140994hep-exV. Khachatryan et al. [CMS Collaboration], JHEP 1409, 094 (2014) [arXiv:1406.0484 [hep-ex]].
. M Aaboud, ATLAS CollaborationarXiv:1608.01857JHEP. 1611110hep-exM. Aaboud et al. [ATLAS Collaboration], JHEP 1611, 110 (2016) [arXiv:1608.01857 [hep-ex]].
. M Aaboud, ATLAS CollaborationarXiv:1612.02950Eur. Phys. J. C. 77276hep-exM. Aaboud et al. [ATLAS Collaboration], Eur. Phys. J. C 77, no. 2, 76 (2017) [arXiv:1612.02950 [hep-ex]].
. G Aad, ATLAS CollaborationarXiv:1104.3038Nucl. Phys. B. 850387hep-exG. Aad et al. [ATLAS Collaboration], Nucl. Phys. B 850, 387 (2011) [arXiv:1104.3038 [hep-ex]].
. G Aad, ATLAS Collaboration]arXiv:1301.6872New J. Phys. 1533038hep-exG. Aad et al. [ATLAS Collaboration], New J. Phys. 15, 033038 (2013) [arXiv:1301.6872 [hep-ex]].
. V M Abazov, D0 CollaborationarXiv:1402.1550Phys. Rev. D. 89772006hep-exV. M. Abazov et al. [D0 Collaboration], Phys. Rev. D 89, no. 7, 072006 (2014) [arXiv:1402.1550 [hep-ex]].
. V M Abazov, D0 CollaborationarXiv:1512.05291Phys. Rev. D. 93552008hep-exV. M. Abazov et al. [D0 Collaboration], Phys. Rev. D 93, no. 5, 052008 (2016) [arXiv:1512.05291 [hep-ex]].
. F Abe, CDF CollaborationPhys. Rev. Lett. 79584F. Abe et al. [CDF Collaboration], Phys. Rev. Lett. 79, 584 (1997).
. S Chatrchyan, CMS CollaborationarXiv:1312.5729JHEP. 140332hep-exS. Chatrchyan et al. [CMS Collaboration], JHEP 1403, 032 (2014) [arXiv:1312.5729 [hep-ex]].
. G Calucci, D Treleani, arXiv:1009.5881Phys. Rev. D. 8316012hep-phG. Calucci and D. Treleani, Phys. Rev. D 83, 016012 (2011) [arXiv:1009.5881 [hep-ph]].
. M Diehl, T Kasemets, arXiv:1303.0842JHEP. 1305150hep-phM. Diehl and T. Kasemets, JHEP 1305, 150 (2013), [arXiv:1303.0842 [hep-ph]].
. S P Baranov, A M Snigirev, N P Zotov, arXiv:1105.6276Phys. Lett. B. 705116hep-phS. P. Baranov, A. M. Snigirev and N. P. Zotov, Phys. Lett. B 705, 116 (2011) [arXiv:1105.6276 [hep-ph]].
. M Diehl, J R Gaunt, arXiv:1710.04408hep-phM. Diehl and J. R. Gaunt, arXiv:1710.04408 [hep-ph].
. M Diehl, J R Gaunt, D Ostermeier, P Plößl, A Schäfer, arXiv:1510.08696JHEP. 160176hep-phM. Diehl, J. R. Gaunt, D. Ostermeier, P. Plößl and A. Schäfer, JHEP 1601, 076 (2016) [arXiv:1510.08696 [hep-ph]].
. L Motyka, M Sadzikowski, arXiv:1501.04915Eur. Phys. J. C. 755hep-phL. Motyka and M. Sadzikowski, Eur. Phys. J. C 75, no. 5, 213 (2015) [arXiv:1501.04915 [hep-ph]].
. F Yuan, J Zhou, arXiv:0806.1932Phys. Lett. B. 668216hep-phF. Yuan and J. Zhou, Phys. Lett. B 668, 216 (2008) [arXiv:0806.1932 [hep-ph]].
. F Yuan, arXiv:0801.4357Phys. Rev. D. 7814024hep-phF. Yuan, Phys. Rev. D 78, 014024 (2008) [arXiv:0801.4357 [hep-ph]].
. Z B Kang, J W Qiu, W Vogelsang, F Yuan, arXiv:0810.3333Phys. Rev. D. 78114013hep-phZ. B. Kang, J. W. Qiu, W. Vogelsang and F. Yuan, Phys. Rev. D 78, 114013 (2008) [arXiv:0810.3333 [hep-ph]].
. A Adare, PHENIX CollaborationarXiv:1009.4864arXiv:1210.6683Phys. Rev. D. 8299904Phys. Rev. D. hep-ex. hep-exA. Adare et al. [PHENIX Collaboration], Phys. Rev. D 82, 112008 (2010) Erratum: [Phys. Rev. D 86, 099904 (2012)] [arXiv:1009.4864 [hep-ex], arXiv:1210.6683 [hep-ex]].
. X D Ji, Phys. Lett. B. 289137X. D. Ji, Phys. Lett. B 289, 137 (1992).
. J R Gaunt, W J Stirling, arXiv:0910.4347JHEP. 10035hep-phJ. R. Gaunt and W. J. Stirling, JHEP 1003, 005 (2010) [arXiv:0910.4347 [hep-ph]].
. R Mertig, M Bohm, A Denner, Comput. Phys. Commun. 64345R. Mertig, M. Bohm and A. Denner, Comput. Phys. Commun. 64, 345 (1991).
. V Shtabovenko, R Mertig, F Orellana, arXiv:1601.01167Comput. Phys. Commun. 207hep-phV. Shtabovenko, R. Mertig and F. Orellana, Comput. Phys. Commun. 207, 432 (2016) [arXiv:1601.01167 [hep-ph]].
. Y L Dokshitzer, Zh. Eksp. Teor. Fiz. 461216Sov. Phys. JETPY. L. Dokshitzer, Sov. Phys. JETP 46, 641 (1977) [Zh. Eksp. Teor. Fiz. 73, 1216 (1977)].
. V N Gribov, L N Lipatov, Sov. J. Nucl. Phys. 15Yad. Fiz.V. N. Gribov and L. N. Lipatov, Sov. J. Nucl. Phys. 15, 438 (1972) [Yad. Fiz. 15, 781 (1972)].
. G Altarelli, G Parisi, Nucl. Phys. B. 126298G. Altarelli and G. Parisi,Nucl. Phys. B 126, 298 (1977).
. M A Kimber, A D Martin, M G Ryskin, hep-ph/0101348Phys. Rev. D. 63114027M. A. Kimber, A. D. Martin and M. G. Ryskin, Phys. Rev. D 63, 114027 (2001), [hep-ph/0101348].
. F D Aaron, H1 and ZEUS CollaborationsarXiv:0911.0884JHEP. 1001109hep-exF. D. Aaron et al. [H1 and ZEUS Collaborations], JHEP 1001, 109 (2010), [arXiv:0911.0884 [hep-ex]].
. H Abramowicz, H1 and ZEUS CollaborationsarXiv:1506.06042Eur. Phys. J. C. 7512hep-exH. Abramowicz et al. [H1 and ZEUS Collaborations], Eur. Phys. J. C 75, no. 12, 580 (2015) [arXiv:1506.06042 [hep-ex]].
. S V Goloskokov, P Kroll, hep-ph/0611290Eur. Phys. J. C. 50829S. V. Goloskokov and P. Kroll, Eur. Phys. J. C 50, 829 (2007), [hep-ph/0611290].
. H S Shao, Y J Zhang, arXiv:1605.03061Phys. Rev. Lett. 117662001hep-phH. S. Shao and Y. J. Zhang, Phys. Rev. Lett. 117, no. 6, 062001 (2016) [arXiv:1605.03061 [hep-ph]].
. F Yuasa, E De Doncker, N Hamaguchi, T Ishikawa, K Kato, Y Kurihara, J Fujimoto, Y Shimizu, arXiv:1112.0637Comput. Phys. Commun. 1832136hep-phF. Yuasa, E. de Doncker, N. Hamaguchi, T. Ishikawa, K. Kato, Y. Kurihara, J. Fujimoto and Y. Shimizu, Com- put. Phys. Commun. 183, 2136 (2012), [arXiv:1112.0637 [hep-ph]].
. E De Doncker, F Yuasa, Y Kurihara, J. Phys. Conf. Ser. 36812060E. de Doncker, F. Yuasa and Y. Kurihara, J. Phys. Conf. Ser. 368, 012060 (2012).
. J Carter, G Heinrich, arXiv:1011.5493Comput. Phys. Commun. 1821566hep-phJ. Carter and G. Heinrich, Comput. Phys. Commun. 182, 1566 (2011), [arXiv:1011.5493 [hep-ph]].
. S Borowka, J Carter, G Heinrich, arXiv:1204.4152Comput. Phys. Commun. 184hep-phS. Borowka, J. Carter and G. Heinrich, Comput. Phys. Commun. 184, 396 (2013) [arXiv:1204.4152 [hep-ph]].
. S Borowka, G Heinrich, S P Jones, M Kerner, J Schlenk, T Zirke, arXiv:1502.06595Comput. Phys. Commun. 196hep-phS. Borowka, G. Heinrich, S. P. Jones, M. Kerner, J. Schlenk and T. Zirke, Comput. Phys. Commun. 196, 470 (2015) [arXiv:1502.06595 [hep-ph]].
. S Mandelstam, Nucl. Phys. B. 213149S. Mandelstam, Nucl. Phys. B 213, 149 (1983).
. G Leibbrandt, Rev. Mod. Phys. 591067G. Leibbrandt, Rev. Mod. Phys. 59 (1987) 1067.
. S J Brodsky, G P Lepage, P B Mackenzie, Phys. Rev. D. 28228S. J. Brodsky, G. P. Lepage and P. B. Mackenzie, Phys. Rev. D 28 (1983) 228.
. M Binger, S J Brodsky, hep-ph/0602199Phys. Rev. D. 7454016M. Binger and S. J. Brodsky, Phys. Rev. D 74, 054016 (2006), [hep-ph/0602199].
. H Kowalski, D Teaney, hep-ph/0304189Phys. Rev. D. 68114005H. Kowalski and D. Teaney, Phys. Rev. D 68, 114005 (2003) [hep-ph/0304189].
. A H Rezaeian, M Siddikov, M Van De Klundert, R Venugopalan, arXiv:1212.2974Phys. Rev. D. 87334002hep-phA. H. Rezaeian, M. Siddikov, M. Van de Klundert and R. Venugopalan, Phys. Rev. D 87, no. 3, 034002 (2013) [arXiv:1212.2974 [hep-ph]].
. K J Golec-Biernat, M Wusthoff, hep-ph/9903358Phys. Rev. D. 60114023K. J. Golec-Biernat and M. Wusthoff, Phys. Rev. D 60, 114023 (1999) [hep-ph/9903358].
. J Bartels, K J Golec-Biernat, H Kowalski, hep-ph/0203258Phys. Rev. D. 6614001J. Bartels, K. J. Golec-Biernat and H. Kowalski, Phys. Rev. D 66, 014001 (2002) [hep-ph/0203258].
. L D Mclerran, R Venugopalan, hep-ph/9309289Phys. Rev. D. 492233L. D. McLerran and R. Venugopalan, Phys. Rev. D 49, 2233 (1994) [hep-ph/9309289].
. L D Mclerran, R Venugopalan, hep-ph/9311205Phys. Rev. D. 493352L. D. McLerran and R. Venugopalan, Phys. Rev. D 49, 3352 (1994) [hep-ph/9311205].
. B Z Kopeliovich, I Schmidt, M Siddikov, arXiv:1701.07134Phys. Rev. C. 95665203hep-phB. Z. Kopeliovich, I. Schmidt and M. Siddikov, Phys. Rev. C 95, no. 6, 065203 (2017) [arXiv:1701.07134 [hep-ph]].
. Z B Kang, Y Q Ma, R Venugopalan, arXiv:1309.7337JHEP. 140156hep-phZ. B. Kang, Y. Q. Ma and R. Venugopalan, JHEP 1401, 056 (2014) [arXiv:1309.7337 [hep-ph]].
. H Fujii, F Gelis, R Venugopalan, hep-ph/0510053Nucl. Phys. A. 774793H. Fujii, F. Gelis and R. Venugopalan, Nucl. Phys. A 774, 793 (2006) [hep-ph/0510053].
. T Altinoluk, N Armesto, G Beuf, M Martínez, C A Salgado, arXiv:1404.2219JHEP. 140768hep-phT. Altinoluk, N. Armesto, G. Beuf, M. Martínez and C. A. Salgado, JHEP 1407, 068 (2014) [arXiv:1404.2219 [hep-ph]].
| zyda_arxiv-1625000 |
Relativistic effects in neutrino-Fermi gas interactions
March 31, 2022
K Vantournhout
Department of Subatomic and Radiation Physics
Ghent University
Belgium
N Jachowicz
Department of Subatomic and Radiation Physics
Ghent University
Belgium
J Ryckebusch
Department of Subatomic and Radiation Physics
Ghent University
Belgium
arXiv:nucl-th/0511067v1 24 Nov 2005
We study neutrino interactions in a hadron gas within a relativistic framework. The hadron matter is described by a non-interacting Fermi gas in beta equilibrium. We show that the introduction of relativistic effects causes a sizable enhancement of the neutrino-scattering cross sections.
Neutrino interactions are pivotal in the dynamics of core-collapse supernova explosions and in the cooling of a newly formed neutron star. In recent years, several studies of neutrino scattering in nucleon matter at supra-nuclear densities have been made [1,2,3], investigating the influence of various aspects of the nuclear dynamics and of the weak nucleon current. In this work, we focus on the impact of the implementation of relativistic effects on the description of the process.
The differential cross section for the interaction of a neutrino ν(p ν ) with a hadron h 1 (p h 1 ) in the relativistic gas can be written as
$$
\mathcal{N}\, d^{6}\sigma = \sum_{\alpha_{\nu},\alpha_{h_1},\alpha_{\ell},\alpha_{h_2}} \frac{F^{h_1}_{\alpha_{h_1}}(\vec{p}_{h_1})\, d^{3}p_{h_1}\, \delta^{4}(p_{\nu}+p_{h_1}-p_{\ell}-p_{h_2})}{(2\pi)^{10}\, v_{\mathrm{rel}}}\, \bigl|\mathcal{M}(\nu,h_1\to\ell,h_2)\bigr|^{2} \times \bigl(1-F^{h_2}_{\alpha_{h_2}}(\vec{p}_{h_2})\bigr)\, d^{3}p_{h_2}\, \bigl(1-F^{\ell}_{\alpha_{\ell}}(\vec{p}_{\ell})\bigr)\, d^{3}p_{\ell}, \tag{1}
$$
where the reaction products are denoted by $\ell(p_\ell)$ for the outgoing lepton and $h_2(p_{h_2})$ for the final nucleon, and $v_{\mathrm{rel}}$ is the relative velocity of the incident particles. The probability distributions are represented by the Fermi distributions $F^{a}_{\alpha_a}(\vec{p}_a)$; the quantum numbers $\alpha_a$ identify the state particle $a$ is occupying. The Fermi distributions are related through the restrictions imposed by the beta-equilibrium conditions for the $n$, $p$ and $e^-$ in the gas. The dynamics of the interaction is contained in the matrix element $\mathcal{M}(\nu, h_1 \to \ell, h_2)$, which is calculated in first-order perturbation theory, using the full expression for the hadron vertex function as given by Ref. [4]. The normalization factor $\mathcal{N}$ is determined such that Eq. (1) represents the scattering cross section per nucleon.
When considering the effects of relativity, our study shows that the dominant differences are caused by the relativistic description of the Fermi distribution and not by the implementation of relativity in the dynamics of the neutrino-nucleon interaction. For higher momenta, the relativistic Fermi distribution $F(p) = [\exp((E(p) - \mu)/kT) + 1]^{-1}$ obtains a slightly larger weight than the non-relativistic one. This is illustrated in the left panel of Fig. 1, showing that especially the tail of the distribution is affected by the difference between the relativistic $E(p) = \sqrt{p^2 + m^2}$ and non-relativistic $E(p) = p^2/2m$ energy expressions. The open symbols correspond to our relativistic calculation; the full ones compare them to the non-relativistic calculation of Ref. [1]. Triangles represent charged-current neutrino processes, squares (circles) represent neutral-current scattering off protons (neutrons). The energy of the incident neutrino was taken to be three times the temperature of the nucleon matter.
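A quick numerical sketch (ours, not from the paper) illustrates this tail effect. The parameters below are illustrative only: nucleon mass m = 939 MeV, kT = 10 MeV, and a kinetic chemical potential μ = 30 MeV chosen for demonstration rather than taken from the β-equilibrium conditions. Since √(p² + m²) − m < p²/2m for all p > 0, the relativistic distribution lies above the non-relativistic one at equal μ, and the gap grows in the tail.

```python
import math

def fermi(e_kin, mu, kT):
    """Fermi distribution F = 1 / (exp((E - mu)/kT) + 1)."""
    return 1.0 / (math.exp((e_kin - mu) / kT) + 1.0)

def e_rel(p, m):
    """Relativistic kinetic energy sqrt(p^2 + m^2) - m (rest mass subtracted)."""
    return math.sqrt(p * p + m * m) - m

def e_nonrel(p, m):
    """Non-relativistic kinetic energy p^2 / (2m)."""
    return p * p / (2.0 * m)

# Illustrative parameters in MeV; mu is NOT the beta-equilibrium value.
m, kT, mu = 939.0, 10.0, 30.0
p = 400.0  # a momentum well into the tail of the distribution
f_rel = fermi(e_rel(p, m), mu, kT)
f_nr = fermi(e_nonrel(p, m), mu, kT)
```

For p = 400 MeV the relativistic occupation comes out roughly 40% larger than the non-relativistic one, consistent with the tail enhancement shown in the left panel of Fig. 1.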
These relativistic effects are carried over into the calculation of cross sections for interactions in the gas and thus have an important impact on neutrino opacities. Although the differences in the energy distributions are relatively small, the energy sensitivity of the cross sections (1), rising fast with increasing energies, ensures that the relativistic effects have a sizable influence on the interactions under study. Relativistic cross sections are generally larger than their non-relativistic counterparts. The right panel of Fig. 1 indeed shows that the mean free paths obtained within a relativistic calculation are considerably smaller than the ones of the non-relativistic study of Ref. [1]. Further work on the influence of relativistic effects, and on the implementation of correlations in the cross-section calculation, is in progress [5].
Figure 1: The left panel compares a relativistic Fermi distribution (solid line) with a non-relativistic one (dashed) at kT = 10 MeV, n = 0.16 fm⁻³. The ratio of both is represented by the dashed-dotted curve. The right panel illustrates the impact of these differences on the neutrino mean free path $\lambda = 1/(n\sigma)$ for nucleon matter in beta equilibrium at density n.
[1] S. Reddy, M. Prakash and J. M. Lattimer, Phys. Rev. D 58 (1998) 013009.
[2] A. Burrows and R. F. Sawyer, Phys. Rev. C 58 (1998) 554.
[3] C. J. Horowitz and K. Wehrberger, Nucl. Phys. A 531 (1991) 665.
[4] C. H. Llewellyn Smith, Phys. Rep. 3C (1972).
[5] K. Vantournhout, N. Jachowicz and J. Ryckebusch, in preparation.
A rigorous stochastic theory for spike pattern formation in recurrent neural networks with arbitrary connection topologies
(Dated: February 8, 2022) 5 Feb 2022
Maik Schünemann
Computational Neurophysics Lab
Institute for Theoretical Physics
University of Bremen
BremenGermany
Udo Ernst
Computational Neurophysics Lab
Institute for Theoretical Physics
University of Bremen
BremenGermany
Marc Kesseböhmer
Institute for Dynamical Systems
Dept. of Mathematics and Computer Science
University of Bremen
BremenGermany
Cortical networks exhibit synchronized activity which often occurs in spontaneous events in the form of spike avalanches. Since synchronization has been causally linked to central aspects of brain function such as selective signal processing and integration of stimulus information, participating in an avalanche is a form of transient synchrony which temporarily creates neural assemblies and hence might especially be useful for implementing flexible information processing. For understanding how assembly formation supports neural computation, it is therefore essential to establish a comprehensive theory of how network structure and dynamics interact to generate specific avalanche patterns and sequences. Here we derive exact avalanche distributions for a finite network of recurrently coupled spiking neurons with arbitrary non-negative interaction weights, which is made possible by formally mapping the model dynamics to a linear, random dynamical system on the $N$-torus and by exploiting self-similarities inherent in the phase space. We introduce the notion of relative unique ergodicity and show that this property is guaranteed if the system is driven by a time-invariant Bernoulli process. This approach allows us not only to provide closed-form analytical expressions for avalanche size, but also to determine the detailed set(s) of units firing in an avalanche (i.e., the avalanche assembly). The underlying dependence between network structure and dynamics is made transparent by expressing the distribution of avalanche assemblies in terms of the induced graph Laplacian. We explore analytical consequences of this dependence and provide illustrative examples. In summary, our framework provides a major extension of previous analytical work which was restricted to regularly coupled or discrete state networks in the infinite network limit.
For systems with a sufficiently homogeneous or translationally invariant coupling topology, we make an explicit link to critical states and the existence of scale-free distributions.
olfactory system [5,6], and vocal control in birdsong [7][8][9].
Spike synchronization constitutes a versatile mechanism for assembly formation. It may occur spontaneously on very short time scales and is much more efficient in driving post-synaptic cells than spikes arriving asynchronously [10,11]. The ability to quickly form or to break up neural ensembles with varying compositions of participating cells supports information processing in different aspects.
For instance, synchronization can indicate global dependencies among distributed local information in complex sensory scenes (e.g. [12]), while mutual synchronization between different brain areas can rapidly establish or suppress communication channels for selective information processing, depending on task demands (e.g. [4,13,14]).
For optimally exploiting these functional opportunities, it has been suggested that cortical networks operate close to a critical state [15][16][17][18][19] in which spontaneous synchronization generates neural avalanches engaging large groups of cells ('assemblies') over large distances [20]. Formation of avalanches is fast since it does not require entrainment over a number of oscillation cycles, as is needed for synchronizing coupled phase oscillators. Indeed, investigations of spontaneous synchronization in the brain revealed typical signatures of a dynamics close to a critical state [21][22][23], such as power-law distributions of avalanche sizes and durations [24][25][26] in combination with the observation of a large dynamic range [27].
However, it is not always size that matters. In a highly structured network like the brain, the specific composition of an activation pattern is of equal importance. It is the topology and efficacy of synaptic connections originating from the presently active neurons which will determine to what destination a signal will propagate, and if it will be enhanced or attenuated. In consequence, network function is defined by the pattern of neural activity only in combination with the microscopic structure of the network. The large reservoir of possible spike patterns in a system near criticality provides a good opportunity for a versatile processing here, but it is unclear how this property can be functionally exploited.
In general, interactions and synergies between a (near-)critical dynamics and the microscopic network structure have barely been addressed by theoretical work and are thus not well understood. Studies on neuronal avalanches commonly assume homogeneous and/or global connectivity, often in the limit of large networks, and focus mainly on determining the critical power-law exponents [28,29]. Structured networks are usually analyzed by focusing on particular connection schemes with certain fundamental statistical properties, such as small-world networks [30,31], scale-free networks [32], or branching processes [33,34], which then make it possible to compute global characteristics of the avalanche dynamics.
For example, assuming a locally tree-like structure [33,35,36] made it possible to analytically investigate the robustness of the critical exponents against changes of the network topology. In parallel, there has been progress in formally understanding the structure-function relationship for recurrent networks with more general coupling structures. Using the theoretical framework of excitatory and linearly coupled Hawkes processes, analytical closed-form relations between the network adjacency matrix and equilibrium rates as well as spike count covariances were developed [37][38][39]. In contrast to studies relying on global statistical properties of network connectivity, these exact relations hold for arbitrary network topologies and make it possible to relate specific graph motifs and the spectral distribution of the network to the strength and structure of the resulting correlations. In general, however, the effect of network topology on particular avalanche patterns and spike assemblies has not yet been fully elucidated.
Here we bridge this gap by developing a formal framework which allows us to rigorously analyze how particular network topologies shape avalanches and spike patterns in randomly driven networks of non-leaky integrate-and-fire units. For this purpose we employ the framework originally introduced by Eurich, Herrmann, and Ernst [28] (in the following termed EHE model), which has successfully been used to formally study neural avalanches in globally coupled homogeneous networks, i.e. with constant or block-constant coupling matrices [12,40,41], and whose basic mathematical properties are well understood [42,43]. We extend the EHE model to arbitrary positive coupling matrices and derive closed-form expressions for the probabilities of arbitrary cell assemblies becoming transiently active in the form of an avalanche [20]. This is possible by means of a suitably defined torus transformation which simplifies the seemingly highly complex spiking dynamics to a random walk on a finite-dimensional torus. At the same time, the transform allows us to establish a mathematical link between the avalanche statistics and graph-theoretical measures of the EHE network in terms of its adjacency matrix.
The article is structured as follows: First, the basic model and its extension to arbitrary network topologies will be introduced. Second, we will show that the dynamics of the model is equivalent to shifts on the $N$-torus and derive simple expressions for mean activation and spike covariances in the network. Next we will focus on analyzing spike patterns and derive various closed-form expressions for the probabilities of particular avalanche sequences from the corresponding state space volumes.
The corresponding mathematical expressions will then be linked to graph-theoretical measures before finally discussing some network examples.
II. MODEL STRUCTURE AND DYNAMICS
We employ a generalization of the Eurich-Herrmann-Ernst (EHE) model, which has been widely used to model neural avalanches [28,41,44], and to study avalanche dynamics analytically [40,42,43].
The model can be described as a randomly driven network of pulse-coupled non-leaky integrate-and-fire units. We formulate this model in discrete time, in which external input arrives at each time step to a randomly chosen unit. Avalanches resulting from units crossing their firing threshold occur on a fast timescale and complete before the next unit receives external input (separation of time scales). The external input dynamics is particularly simple: a random unit $k$ is chosen with probability $p_k$ and its state $u_k$ is increased by an amount $u_0 \in \mathbb{R}_{\ge 0}$. Should this increase push the state of a unit above the firing threshold, an avalanche starts and evolves on a fast timescale. The avalanche dynamics $\mathcal{F}$ consists of repeatedly resetting the currently supra-threshold units by subtracting $\operatorname{diag}(U)A(u)$ and distributing internal activation by $WA(u)$.
Here $A(u) := \delta(\{i \in [N] : u_i \ge U_i\})$ describes the supra-threshold units, using $\delta(I) = \sum_{i \in I} e_i$ with $e_i$ denoting the $i$-th unit vector in $\mathbb{R}^{[N]}$. The avalanche terminates after $\tau$ steps or generations, when all units are below threshold. Note that $U$, $A$ and $u$ are vectors, with $U_i$, $A_i$ and $u_i$ designating their $i$-th components. This will be standard notation from here on until mentioned otherwise.
The avalanche dynamics $\mathcal{F}$ is formalized as follows, with $F$ describing one generation of an avalanche and $\tau$ defining the termination condition:
$$
\mathcal{F} : \mathbb{R}^{[N]}_{\ge 0} \to C, \quad u \mapsto F^{\tau(u)}(u) \tag{1}
$$
with
$$
F : \mathbb{R}^{[N]}_{\ge 0} \to \mathbb{R}^{[N]}_{\ge 0}, \quad u \mapsto u - (\operatorname{diag}(U) - W)A(u) \tag{2}
$$
and
$$
\tau : \mathbb{R}^{[N]}_{\ge 0} \to \mathbb{N}_0, \quad u \mapsto \min\{n \in \mathbb{N}_0 \mid F^n(u) \in C\}. \tag{3}
$$
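Equations (1)–(3) translate directly into code. The following minimal sketch (our own; function names and the 2×2 numerical example are not from the paper) implements one avalanche generation $F$ and the settled dynamics for thresholds $U_i = 1$, also recording the index set of firing units at each generation:

```python
import numpy as np

def A(u, U):
    """Indicator vector of supra-threshold units, delta({i : u_i >= U_i})."""
    return (u >= U).astype(float)

def F(u, U, W):
    """One avalanche generation, Eq. (2): reset firing units by diag(U)A(u)
    and distribute recurrent activation W A(u)."""
    return u - (np.diag(U) - W) @ A(u, U)

def F_hat(u, U, W):
    """Iterate F until all units are sub-threshold (Eqs. (1) and (3));
    return the final state and the generations of the avalanche."""
    generations = []
    while np.any(u >= U):
        generations.append(frozenset(np.flatnonzero(u >= U).tolist()))
        u = F(u, U, W)
    return u, tuple(generations)
```

Provided each row satisfies $u_0 + \sum_j w_{ij} < U_i$, every unit fires at most once per avalanche, so the while loop is guaranteed to terminate.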
External input given to unit $k$ and the resulting avalanche dynamics can be formally combined into a single action $T_k$ given by
$$
T_k : C \to C, \quad u \mapsto \mathcal{F}(u + u_0 e_k), \tag{4}
$$
which also includes the trivial case that no unit crosses threshold and thus there is no avalanche.
Throughout this study, we impose the condition
$$
u_0 + \sum_{j=1}^{N} w_{ij} < U_i \quad \text{for all } i \in [N], \tag{5}
$$
ensuring that each unit can fire at most once during an avalanche (see Proposition A.1). This condition implies that $\operatorname{diag}(U) - W$ is strictly diagonally dominant, hence its inverse
$$
M := (\operatorname{diag}(U) - W)^{-1} \tag{6}
$$
exists.
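Condition (5) and the matrix $M$ are easy to check numerically. The sketch below (illustrative 2×2 values, not from the paper) verifies the condition and computes $M$; with $U_i = 1$, condition (5) bounds the row sums of $W$ below 1, so the Neumann series $\sum_{n \ge 0} W^n$ converges to $M$, which is therefore entrywise non-negative:

```python
import numpy as np

def satisfies_condition(U, W, u0):
    """Check u0 + sum_j w_ij < U_i for every row i, i.e. Eq. (5)."""
    return bool(np.all(u0 + W.sum(axis=1) < U))

# Illustrative example: two symmetrically coupled units.
U = np.array([1.0, 1.0])
W = np.array([[0.0, 0.3],
              [0.3, 0.0]])
u0 = 0.2

# Strict diagonal dominance of diag(U) - W follows from (5), so M exists.
M = np.linalg.inv(np.diag(U) - W)
```

Entrywise non-negativity of $M$ is what later lets its entries be read as aggregated walks through the coupling graph.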
Due to the separation of timescales, it is guaranteed that external input does not arrive during an avalanche, and thus we can track the detailed pattern of an avalanche by index sets listing which units fired at each generation of the avalanche. For this purpose we introduce the avalanche function $a$, which provides a vector of index sets
$$
a(k, u) := \bigl(\{j \in [N] \mid (F^{i-1}(u + u_0 e_k))_j \ge U_j\}\bigr)_{i=1,\dots,\tau(u+u_0 e_k)} \in \mathcal{A} \tag{7}
$$
as the avalanche started in state $u$ by giving external input to unit $k$, where $\mathcal{A}$ is the set of all avalanches (see Definition E.7).
as the avalanche started in state u by giving external input to unit k, where A is the set of all avalanches (see Definition E.7). For a particular avalanche a = (a n ) n=1,...,d ∈ A, the first generation Our goal is to derive the probability distribution of avalanches a ∈ A in dependence of the coupling matrix W and external input probabilities p. To make this mathematically rigorous, we model the dynamics as a random dynamical system, or more precisely as a skew-product dynamical system T :
T : Σ N × C → Σ N × C, T (k, u) := (σ(k), T k 1 (u))
Here, $\mathbf{k} = (k_1, k_2, \dots) \in [N]^{\mathbb{N}} =: \Sigma_N$ is a right-infinite sequence over the alphabet $[N]$ modeling the sequence of units receiving external inputs, and $\sigma((k_1, k_2, \dots)) = (k_2, k_3, \dots)$ is the left shift operator. In order to turn this model into a random dynamical system, we equip $\Sigma_N \times C$ with the Borel $\sigma$-algebra and a measure $\mathbb{P}$. To model the randomness of the external input, $\mathbb{P}$ will be given as a product measure composed of the time-invariant Bernoulli measure $B_p$ with success probabilities $p$ on $\Sigma_N$ and a measure $P$ on $C$. One of our main insights is that if $u_0 \notin \mathbb{Q}$, there exists a unique choice for $P$ such that $T$ is ergodic with respect to $\mathbb{P} = B_p \times P$ for almost all non-negative coupling matrices, if and only if (Theorem C.4) every unit is reachable by a path via non-zero coupling weights starting from a unit receiving external input ($p$-reachability, see Definition A.4).

Thus, we will always assume $p$-reachability of the coupling network in the following. With the unique ergodic measure $\mathbb{P}$ and a slight abuse of notation using $a(\mathbf{k}, u) := a(k_1, u)$, the avalanche function $a$ is a random variable with respect to $\mathbb{P}$, which allows us to study the avalanche probabilities $\mathbb{P}(a = a)$
for $a \in \mathcal{A}$.
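These avalanche probabilities can also be estimated by direct simulation. The sketch below (our own code; weights, $u_0$, and input probabilities are illustrative) drives a two-unit network with Bernoulli input and tallies the empirical avalanche distribution. Since each unit fires at most once per avalanche (Proposition A.1) and external input reaches a single unit per step, only four non-empty avalanches can occur for $N = 2$:

```python
import numpy as np

rng = np.random.default_rng(0)

U = np.array([1.0, 1.0])
W = np.array([[0.0, 0.3],
              [0.3, 0.0]])
u0 = 0.2 * np.sqrt(2)          # irrational input increment; u0 + 0.3 < 1
p = np.array([0.5, 0.5])       # Bernoulli input probabilities

def drive(u, k):
    """Action T_k: external input u0 to unit k, then the avalanche dynamics;
    returns the new state and the avalanche as a tuple of index sets."""
    u = u.copy()
    u[k] += u0
    gens = []
    while np.any(u >= U):
        gens.append(frozenset(np.flatnonzero(u >= U).tolist()))
        u = u - (np.diag(U) - W) @ (u >= U).astype(float)
    return u, tuple(gens)

counts = {}
u = np.array([0.1, 0.2])
n_steps = 20000
for _ in range(n_steps):
    k = rng.choice(2, p=p)
    u, a = drive(u, k)
    counts[a] = counts.get(a, 0) + 1
```

With 0-indexed units, the observed keys of `counts` are a subset of the empty avalanche together with $(\{0\})$, $(\{1\})$, $(\{0\},\{1\})$, and $(\{1\},\{0\})$, and their relative frequencies approximate $\mathbb{P}(a = a)$.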
In the remainder of this section, we illustrate the dynamics of the extended EHE model from a phase-space perspective using the system displayed in Fig. 2: iterations of the dynamics $T$ induce a trajectory in phase space $C$. The trajectory $u^{(1)} \to u^{(2)} \to u^{(3)} \to u^{(4)}$ is obtained by three iterations of $T$ starting in $(\mathbf{k}, u^{(1)})$ with an external input sequence $\mathbf{k} = (2, 1, 2, \dots)$. While external input induces state shifts parallel to the axes, the avalanche dynamics $\mathcal{F}$ results in reinjection of points pushed outside of $C$ by subtracting the thresholds $U_i$ of the activated units, and by distributing internal activation, which induces shifts along column(s) of $W$. Note that $a(k_1, u^{(1)}) = ()$, denoting the empty avalanche, $a(k_2, u^{(2)}) = (\{1\})$ and $a(k_3, u^{(3)}) = (\{2\})$. It becomes apparent that the actions $T_k$ are discontinuous transformations, where simple shifts along the axes are followed by the more complicated avalanche dynamics $\mathcal{F}$ if $u_k + u_0 \ge U_k$.
Note that during an avalanche, the internal, recurrent activation after reset pushes a state out of a region $\Lambda$, which is indicated in gray shading in Fig. 2. As a consequence, the state density becomes zero in $\Lambda$, which we will thus designate as the non-inhabited region. The existence of $\Lambda$ is a general feature of the model, as for all $p$-reachable networks there exists an inhabited region $D$ which depends on $W$. $D$ acts as a uniform attractor on the phase space in the sense that for $B_p$-almost all input sequences $\mathbf{k}$ the projection of $T^n(\mathbf{k}, C)$ onto its second component equals $D$ for all $n \ge n_0 \in \mathbb{N}$ (see Proposition B.3). In particular, any invariant density of states necessarily vanishes on $\Lambda := C \setminus D$, and we can therefore proceed by analyzing the system restricted to $\Sigma_N \times D$ with the associated restricted Borel $\sigma$-algebra.
For the simple two-dimensional system in Fig. 2, there are only four possible non-empty avalanches; in the figure, purple, red, green, and blue designate these avalanches. Since the unique choice of $P$ for this system is always the uniform distribution supported on the inhabited region $D$ (yellow region), also known as the Lebesgue measure on $D$, the probabilities $\mathbb{P}(a = a)$ are proportional to the lengths of the respectively colored boundary segments (Theorem C.1).

Figure 2: (a) State space for a two-dimensional EHE model. States $u_1$ and $u_2$ span the state space $C$ (unit rectangle), which consists of the inhabited region $D$ (yellow shading) and the non-inhabited region $\Lambda$ (gray shading). Black dots and solid arrows indicate a sample trajectory $u^{(1)}, \dots, u^{(4)}$ during which external input $u_0$ is provided first to unit #2, then to unit #1 and finally to unit #2 again. The length of the solid arrows is $u_0$. When the trajectory crosses the right or upper boundary of the unit cube (i.e., the firing thresholds $U_1 = 1$ or $U_2 = 1$), a unit spikes and its state is 'reinjected' at the opposite side of $C$ (spike reset). Simultaneously, recurrent activation is distributed to all connected units, corresponding to shifts by columns of $W$ (dashed arrows). Distribution of recurrent input can continue multiple times until no state is above threshold anymore, thus forming multiple generations of avalanches comprising different numbers of units. (b) Torus transformation for a two-dimensional EHE model. Left: copies (bright yellow) of the inhabited region $D$ (dark yellow) tessellate the $u_1$-$u_2$ plane. Equivalent points to $u^{(3)}, u^{(4)}$ in the example trajectory introduced in (a) are labeled in translated copies of the inhabited region. They are reached by simple shifts $u_0$, while reset and recurrent activation have no effect on the equivalent trajectory (black arrows). The colors of the line segments indicate the avalanche which is triggered when the trajectory crosses the corresponding border.
For ease of notation, we state in the following the main theorems for the special case U i = 1 for all i ∈ [N ] and give references to the general statements and corresponding proofs collected in the appendix.
In this section, we take a closer look at the intricate dynamics of the skew-product dynamical system T. Most importantly, we derive a linear transformation of the system which allows us to represent the complex avalanche dynamics as simple translation dynamics on the N-torus. This central idea allows us to show relative unique ergodicity of the system and to derive the equilibrium measure on the phase space in dependence of the weight matrix W. Using these new mathematical insights we derive closed-form expressions for the mean firing rates and their variance in the network, for the probability that a particular unit i participates in an avalanche started by unit k, for the probability distribution of individual avalanches P(a = a), and for the probability distributions of avalanche assemblies P(U(a) = I).
A. Equivalence to an ergodic translation dynamics on the N-torus
For globally connected, homogeneous systems it has been shown that the uniform state density supported on the inhabited region D is invariant under the dynamics [28, 41-43]. However, ergodicity of the skew-product system has only been conjectured [42, 43]. Ergodicity is important since it allows us to associate (sub)volumes in phase space with the probability to observe particular avalanches. Here, we settle this conjecture and extend it to the generalized system in which the units are coupled by an arbitrary non-negative coupling matrix W. More specifically, in Theorem C.1 we derive necessary and sufficient conditions for unique ergodicity of the Lebesgue measure on D relative to a given measure on the shift space Σ_N for a general class of translation dynamics on the standard N-torus T^{[N]} := R^{[N]}/Z^{[N]}. By constructing a translation dynamics on T^{[N]} which is topologically conjugate to the original dynamics T, we use this theory to establish relative unique ergodicity of the Lebesgue measure supported on the inhabited region D for almost all weight matrices W with p-reachable G(W), as long as u_0 is irrational (see Theorem C.4).
Here we explain this simplification of the dynamics using the geometric intuition illustrated in Fig. 2, while referring to the associated proofs for the general case in the appendix. The main idea is the following: even though the avalanche dynamics is discontinuous (in R^{[N]}) due to the resets of the units' states after firing spikes, we can summarize the effect of the dynamics T_k on the state vector u by using the assembly U of the resulting avalanche as

$$T_k(u) = u + u_0 e_k - (1 - W)\,\delta(U(a(k, u))),$$

where 1 denotes the identity matrix. There are no explicit thresholds in this equation, because the spike reset is expressed as a simple subtraction of 1 for every unit becoming active. After n iterations we have
$$u' := \pi_2 T^n(k, u) = F\Big(u + u_0 \sum_{t=1}^{n} e_{k_t}\Big) = u + u_0 \sum_{t=1}^{n} e_{k_t} - (1 - W)\, N^n(k, u), \tag{11}$$

where $\pi_2$ denotes the projection onto the second component and

$$N^n(k, u) := \sum_{t=0}^{n-1} \delta\big(U(a(T^t(k, u)))\big)$$
is the spike count vector which collects how often each unit fired during n steps of the dynamics T starting from the initial state (k, u).

Since M^{-1} = 1 - W is a diagonally dominant M-matrix [45] by virtue of (5), it has full rank, and its column vectors induce a new coordinate system on R^{[N]}, indicated by the unit cell (dashed parallelogram) in Fig. 2. Effectively, the recurrent dynamics displaces a state u along a linear combination of the column vectors of 1 - W with integer coefficients. Expressed in the new coordinates, the system's state u' after n steps and the external input dynamics u + u_0 Σ_{t=1}^{n} e_{k_t} without recurrent feedback and spike reset are always integer coordinates apart. This can easily be seen by transforming u' in Eq. (11) into the new coordinate system via multiplication by M (Eq. (6)):

$$M u' = M\Big(u + u_0 \sum_{t=1}^{n} e_{k_t}\Big) - N^n(k, u).$$
Geometrically, this property implies that copies of the inhabited region translated by the integer coordinates (1 - W)Z^{[N]} tessellate R^{[N]}.
Considering all points with a difference in (1 - W)Z^{[N]} as equivalent induces a topology which is homeomorphic to the topology of the standard N-torus (which identifies all points with a difference in Z^{[N]}). These considerations imply that the dynamics T is equivalent to a translation dynamics (see Eq. (41)) on T^{[N]}, with the shifts u_0 e_k induced by external input being transformed by M = (1 - W)^{-1}. We prove this equivalence in Theorem B.4. Fig. 2 illustrates the described equivalence: by transforming the region in the dashed unit cell to [0, 1)^{[N]}, we map the state trajectory on the left to its equivalent trajectory on the torus on the right.
• In the old coordinate system, shown on the left, the axes are equivalent to the states of single units. The external input is realized by a shift along the axis of the unit receiving the input, while the recurrent input and spike reset are given by a combination of shifts along the columns of 1 − W .
• In the new coordinate system, the axes are equivalent to the combined effect of recurrent input provided by one unit to all other units. External input is still represented by a shift, but projected onto the new coordinate system via u 0 M e k it is in general no longer parallel to the coordinate axes. However, recurrent input and spike reset are now just mapping the current state to its equivalent point in a different unit cell. On the N -torus with its periodic boundary conditions, recurrent input and spike reset thus map to the identical state, hence making the recurrent dynamics much simpler to handle formally.
The transformation onto the N-torus is invertible since the inhabited region D is the image of the region enclosed by the dashed lines under F (see Theorem B.2 and Proposition B.3). With these insights it becomes possible to assess ergodicity more easily by performing the corresponding analysis on the transformed system first, and then transferring the results to the original system T. In the following, we briefly state our main insights on ergodicity and refer the reader to the Appendix for the detailed formal treatment.
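The mechanics above can be checked numerically. The following is a minimal sketch in Python/NumPy (the 3-unit network, thresholds U_i = 1, and input strength u_0 are our own illustrative choices, not taken from the paper): it resolves avalanches generation by generation, verifies the closed form T_k(u) = u + u_0 e_k - (1 - W)δ(U(a)), and confirms that in the transformed coordinates M u the trajectory differs from the pure input translation only by integer coordinates, i.e. the recurrent dynamics acts trivially on the torus.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_step(k, u, W, u0):
    """One step of the slow dynamics T_k: external input u0 to unit k,
    then the triggered avalanche is resolved generation by generation."""
    N = len(u)
    u = u.copy()
    u[k] += u0
    fired = np.zeros(N, dtype=bool)
    while True:
        above = (u >= 1.0) & ~fired          # firing thresholds U_i = 1 (illustrative)
        if not above.any():
            break
        fired |= above
        u[above] -= 1.0                      # spike reset as a simple subtraction
        u += W @ above.astype(float)         # recurrent activation: columns of W
    return u, fired

# illustrative 3-unit network satisfying the row-sum condition (5)
W = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.10]])
u0 = 0.31
I3 = np.eye(3)
M = np.linalg.inv(I3 - W)

u = rng.uniform(size=3)
u_start = u.copy()
total_input = np.zeros(3)
for _ in range(500):
    k = rng.integers(3)
    u_new, fired = avalanche_step(k, u, W, u0)
    # closed form: T_k(u) = u + u0*e_k - (1 - W) delta(U(a))
    assert np.allclose(u_new, u + u0 * I3[k] - (I3 - W) @ fired.astype(float))
    total_input += u0 * I3[k]
    u = u_new

# in the new coordinates M u, trajectory and pure input translation
# differ only by the (integer) spike count vector
d = M @ u - (M @ u_start + M @ total_input)
assert np.allclose(d, np.round(d))
```

The final assertion is exactly the statement M u' = M(u + u_0 Σ e_{k_t}) - N^n(k, u) with integer N^n.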
It turns out that the Lebesgue measure is invariant since every translation u → u+u 0 M e k is bijective on T [N ] . In Theorem C.1 we show for a more general class of translation dynamics on T [N ] that the Lebesgue measure is also the unique ergodic measure of the system, given a stationary probability distribution of the external input. In addition we find that the system T is uniquely ergodic relative to the external input statistics B p for almost-all coupling matrices W(E) with the edge set E, if and only if E is p-reachable (see Theorem C.4).
Since ergodicity is invariant under topological conjugacy, this ensures that T is also ergodic with respect to P̄ = B_p × P, where P = λ_D denotes the normalised Lebesgue measure supported on the inhabited region D.
Let us give an intuition for this remarkable result. Consider the case that only a single unit k ∈ [N] receives external input. In this case, the equivalent dynamics on T^{[N]} is a simple rotation of the N-torus by the vector u_0 M e_k. For this classical dynamical system (see e.g. [46]), ergodicity of the Lebesgue measure requires the components of this vector to be irrational and rationally independent, since otherwise the orbits would not be dense in T^{[N]}. If only the unit k receives external input, p-reachability requires that there is a directed path from k to every other unit in [N]. This condition alone already ensures that each component of M e_k is positive. For this case, Theorem C.4 states that if you fix a network topology, e.g. the sparsity pattern of a p-reachable coupling matrix W, and construct a coupling matrix by choosing random values for the positive entries in W, the entries of u_0 M e_k will indeed be irrational and rationally independent almost surely (i.e., with probability one), hence the resulting system is ergodic.
However, almost sure ergodicity does not exclude exceptions, as can be seen for the much simpler special case of the previously studied homogeneous coupling matrix W_ij = α/N for all i, j ∈ [N] with α + u_0 < 1. It was already noted (see [42, 43]) that this system is not ergodic if only a single unit receives external input. In fact, our theory shows that this system is ergodic if all units receive external input (Corollary C.5), but not if two or more units do not receive external input (Corollary C.6).
B. Equilibrium rates, spike covariance and mean avalanche sizes

Topological conjugacy to a simple translation dynamics on the N-torus T^{[N]} greatly facilitates analysis of the dynamics of the extended EHE system. In this subsection, we will first derive key properties such as equilibrium rates and spike covariances, and subsequently assess spike propagation probabilities, allowing us to finally compute mean avalanche sizes in closed form.
Let

$$Y_0 := \lim_{n\to\infty} \mathbb{E}(N^n_0)/n \in R^{[N]} \quad\text{and}\quad X_0 := \lim_{n\to\infty} \operatorname{cov}(N^n_0)/n \in R^{[N\times N]}$$

be the stationary firing rates and their covariances in a network of uncoupled EHE units. The number of external inputs received by each unit after n steps of the slow external dynamics follows a multinomial distribution with parameters n and probability vector p. Setting w.l.o.g. the time interval between external inputs to 1, we find Y_0 = u_0 p and X_0 = (u_0)^2 (diag(p) - p p^T) as the covariance matrix for the spike counts of different units in the limit of long observation times.
The dynamics of T generating the spike count vector N^n translates to the torus in the sense that the i-th component (N^n)_i counts how many times the trajectory has wound around side i of the N-torus up to time n. We use this fact to obtain the equilibrium firing rates Y_W and covariances X_W as linear transformations of the firing rates and spike count covariances of the uncoupled system (see Theorem D.1, Theorem D.2) via
$$Y_W = M Y_0 = (1 - W)^{-1} Y_0, \qquad X_W = M^T X_0 M. \tag{12}$$
This functional form is similar to the analytical rates and covariances for linearly coupled Hawkes processes, which have been used to study in detail the influence of network topology on population activity of neural networks (e. g. [37][38][39]47]). In the following, we restate the corresponding implications of this functional form for Y W , which in this model is closely related to the mean avalanche size.
To do so, we study W as a weight matrix of a directed graph allowing us to give an interpretation of the firing rates in terms of weighted paths. We will further investigate this viewpoint for understanding the probability distributions of avalanches in Section IV A.
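The rate formula in Eq. (12) lends itself to a quick Monte Carlo check. The sketch below (the 2-unit coupling matrix, input probabilities and u_0 are our own illustrative choices, with unit firing thresholds) simulates the slow dynamics with random external input and compares empirical firing rates against Y_W = (1 - W)^{-1} u_0 p:

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative 2-unit network and input statistics
W = np.array([[0.10, 0.20],
              [0.15, 0.05]])
u0, p = 0.31, np.array([0.5, 0.5])
N = 2
M = np.linalg.inv(np.eye(N) - W)

def avalanche_step(k, u):
    """External input u0 to unit k, then resolve the avalanche (thresholds 1)."""
    u = u.copy()
    u[k] += u0
    fired = np.zeros(N, dtype=bool)
    while True:
        above = (u >= 1.0) & ~fired
        if not above.any():
            break
        fired |= above
        u[above] -= 1.0
        u += W @ above.astype(float)
    return u, fired

u = rng.uniform(size=N)
steps = 200_000
counts = np.zeros(N)
for _ in range(steps):
    u, fired = avalanche_step(rng.choice(N, p=p), u)
    counts += fired

Y_emp = counts / steps
Y_W = M @ (u0 * p)      # Eq. (12): Y_W = (1 - W)^{-1} Y_0 with Y_0 = u0 p
assert np.allclose(Y_emp, Y_W, rtol=0.03)
```

The agreement reflects that every spike of a unit is either caused directly by external input or amplified through the network, which is exactly the geometric series M = 1 + W + W² + ... applied to the uncoupled rates.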
We denote by G(W) the graph induced by the coupling matrix W with the edge set E(W). The probability that a unit i participates in an avalanche triggered by external input to unit k is given by

$$P_k(i \in U(a)) = \frac{M_{ik}}{M_{kk}}, \tag{13}$$

where we use the abbreviation

$$P_k(A) := P(A \mid a_1 = \{k\}) \tag{14}$$

for the probability of events A conditioned on the event that external input to unit k started an avalanche.
The sum over the equilibrium firing rates 1^T Y_W can be seen as the average number of units firing in each time step, i.e. in each iteration of T, which depends on p and u_0 via Y_0 = u_0 p. However, when we condition on a_1 = {k}, these dependencies vanish and we find a closed form for the mean avalanche size which solely depends on M = (1 - W)^{-1}:

$$\mathbb{E}(S(a) \mid S(a) > 0) = \sum_k p_k\, \mathbb{E}(S(a) \mid a_1 = \{k\}), \quad\text{with}\quad \mathbb{E}(S(a) \mid a_1 = \{k\}) = \sum_{j \in [N]} P_k(j \in U(a)) = \sum_{j \in [N]} \frac{M_{jk}}{M_{kk}}. \tag{15}$$
Note that the association of expressions containing M with putative paths in the graph G(W) does not have a one-to-one correspondence to the actual dynamics on the network. For instance, i → j → i → k is a path in G(W) and the product w_ki w_ij w_ji is one summand in (W^3)_ki. However, since we constrained the weights in our model via Eq. (5), units only interact via avalanches in which each unit can occur at most once, so that an avalanche i → j → i → k is not possible. We will resolve this apparent conflict between the actual dynamics and the interpretation of the mathematical expressions in Section IV, where we show that the states of recurrently connected units are correlated. It turns out that these correlations increase the probability of eliciting a spike in a connected unit beyond the corresponding entry in W, hereby compensating for paths which exist but are never taken by an avalanche.
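Eq. (13) can be probed directly in simulation despite this path subtlety. The sketch below (network values are illustrative, thresholds U_i = 1) runs the stationary dynamics and, conditioning on avalanches started by unit k, estimates the participation probabilities, which should match M_ik/M_kk:

```python
import numpy as np

rng = np.random.default_rng(2)

# illustrative 3-unit network
W = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.10]])
N, u0, k = 3, 0.31, 0
M = np.linalg.inv(np.eye(N) - W)

def avalanche_step(j, u):
    """External input u0 to unit j, then resolve the triggered avalanche."""
    u = u.copy()
    u[j] += u0
    fired = np.zeros(N, dtype=bool)
    while True:
        above = (u >= 1.0) & ~fired
        if not above.any():
            break
        fired |= above
        u[above] -= 1.0
        u += W @ above.astype(float)
    return u, fired

u = rng.uniform(size=N)
started = 0
participation = np.zeros(N)
for _ in range(150_000):
    j = rng.integers(N)
    u, fired = avalanche_step(j, u)
    if j == k and fired[k]:          # condition on a_1 = {k}
        started += 1
        participation += fired

est = participation / started
assert np.allclose(est, M[:, k] / M[k, k], rtol=0.05)   # Eq. (13)
```

Note that the estimate matches M_ik/M_kk even though M formally sums over paths the avalanche can never take; the state correlations discussed above account for the difference.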
C. Geometrical structure and self-similarity of the inhabited region
In the previous subsections we showed results for equilibrium first and second order statistics, which we derived from the conjugacy of the original dynamics T to a simple random walk on T^{[N]}. A deeper look into the statistics of avalanches requires studying the regions of the inhabited region D on which T_k generates specific avalanches. The self-similar geometrical structure of D, which we characterize in this subsection, greatly simplifies the identification of these regions and the computation of their volumes, which will be detailed in the following subsection.
It is convenient to describe regions of interest in phase space as hyperrectangles which are restricted by lower and upper boundaries in the dimensions specified by an index set I ⊆ [N], and unrestricted in all other dimensions:

$$[a_i, b_i)_{i \in I} := \pi_I^{-1}\Big(\times_{i \in I} [a_i, b_i)\Big) = \{u \in C \mid a_i \le u_i < b_i \text{ for } i \in I\}, \tag{16}$$
where π_I : R^{[N]} ⊃ C → R^I denotes the restriction to C of the natural projection onto the coordinates in I.

The geometrical structure of D is most apparent when studying its complement in C, which we termed the non-inhabited region Λ := C \ D. In Theorem E.5 we show that this non-inhabited region is given by a union of cylinder sets Γ_I = [0, (W δ(I))_i)_{i∈I} (see also Definition E.1) over all subsets ∅ ≠ I ⊆ [N]:

$$\Lambda = \Lambda_{[N]} \quad\text{with}\quad \Lambda_I := \bigcup_{\emptyset \neq J \subseteq I} \Gamma_J. \tag{17}$$

In the one-dimensional system, the dynamics has to vanish along the region Γ_{1} = [0, w_11): after unit 1 crossed the threshold and is reset, it immediately receives internal activation w_11 pushing it above this value. Since external activation only increases the states u, the interval [0, w_11) can not be entered by the dynamics.

In two dimensions, the non-inhabited region consists of Γ_{1}, Γ_{2}, and Γ_{1,2}. As in the one-dimensional system, the regions Γ_{1} and Γ_{2} mark the regions which can not be entered after unit 1 or unit 2 crossed the threshold, respectively. The additional feature in two dimensions is that both units can also fire together in an avalanche. In this case they receive internal activation from both units, and the state after such an avalanche has to lie outside of the rectangle Γ_{1,2} = [0, w_11 + w_12) × [0, w_21 + w_22). In general, the upper boundaries of any Γ_I along the dimensions specified by the index set I consist of the total internal activation which the units in I receive in an avalanche with U(a) = I.

In three dimensions, illustrated in subpanel (c), the non-inhabited region consists of the seven different subregions Γ_I, ∅ ≠ I ⊆ [3]. The intersection of the cube with the region colored in green, blue, or red reduces to a product of a zero-, one-, or two-dimensional non-inhabited region with a cube, rectangle, or interval, respectively. Due to the recursive construction there is a striking self-similarity in the non-inhabited region: the projection of Λ_[3] onto the face u_3 = 1 is the two-dimensional Λ_[2] shown in panel (b), and the projection of Λ_[2] onto u_2 = 1 is the interval Λ_[1] shown in panel (a). This self-similarity of Λ is used in the following section to identify the phase-space regions where T_k elicits specific avalanches, and to compute the corresponding volumes with respect to the unique relative ergodic measure λ_D.
Additionally, the self-similarity inherent in D = C \ Λ allows evaluating the cumulative distribution function of P in closed form for coordinates U' with U ≥ U' > W 1 componentwise: since condition (5) is fulfilled for the system with modified firing thresholds U', its inhabited region is given by D' = [0, U'_i)_{i∈[N]} \ Λ_{[N]}. In particular, we have D' = [0, U'_i)_{i∈[N]} ∩ D and thus

$$P(u_1 < U'_1, \ldots, u_N < U'_N) = \frac{\lambda([0, U'_i)_{i\in[N]} \cap D)}{\lambda(D)} = \frac{|\operatorname{diag}(U') - W|}{|\operatorname{diag}(U) - W|}.$$
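Both the cylinder-set description of Λ and the determinant formula for the cumulative distribution are easy to probe numerically in two dimensions. The sketch below (example weights are our own, thresholds U_i = 1) samples the unit square, discards points lying in some Γ_I, and compares empirical volumes with the determinants:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

# illustrative 2-unit network with thresholds U_i = 1
W = np.array([[0.10, 0.20],
              [0.15, 0.05]])
N = 2

# upper corners of the cylinder sets Gamma_I = [0, (W delta(I))_i)_{i in I}
cylinders = []
for r in range(1, N + 1):
    for I in combinations(range(N), r):
        delta = np.isin(np.arange(N), I).astype(float)
        cylinders.append((list(I), W @ delta))

samples = rng.uniform(size=(200_000, N))
in_lambda = np.zeros(len(samples), dtype=bool)
for I, d in cylinders:
    in_lambda |= (samples[:, I] < d[I]).all(axis=1)
in_D = ~in_lambda

# Monte Carlo volume of D vs the determinant |1 - W|
vol_D = in_D.mean()
assert abs(vol_D - np.linalg.det(np.eye(N) - W)) < 0.005

# cumulative distribution P(u < U') = |diag(U') - W| / |diag(U) - W|
Up = np.array([0.8, 0.9])            # requires U >= U' > W 1 componentwise
cdf = np.linalg.det(np.diag(Up) - W) / np.linalg.det(np.eye(N) - W)
emp = (samples[in_D] < Up).all(axis=1).mean()
assert abs(emp - cdf) < 0.01
```

Points kept by the rejection step are uniform on D, i.e. samples from the unique relative ergodic measure λ_D, so the box fraction directly estimates the cumulative distribution.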
Furthermore, this self-similarity relates the inhabited region D of the full system to the inhabited region D_I := π_I(C \ Λ_I) of a lower-dimensional subsystem, whose volume is given by

$$V_I(U) := \lambda_I\big(\pi_I([0, U_i)_{i \in I} \setminus \Lambda_I)\big) = |\operatorname{diag}(U)_I - W_I|. \tag{18}$$
D. Exact avalanche distributions follow from phase space volumes
As for the two-dimensional example in Fig. 2, because of the ergodicity of P = λ D , the probability distribution of the avalanches a is given by the volumes of the corresponding phase space regions.
In this subsection we will identify and evaluate the P-volume of regions corresponding to certain avalanches as a function of an arbitrary coupling matrix W .
For a detailed avalanche a = a(k, u), the corresponding phase space region has a simple structure. Along the dimensions specified by its assembly it factorizes into a hyperrectangle of volume V_a according to the following three conditions:
• The state of the starting unit a_1 = {k} has to be in the interval [U_k − u_0, U_k) for the unit to be activated by the external input.
• Similarly, the states of each unit in the i-th generation of an avalanche have to be sufficiently low such that internal activation received up to generation i − 2 did not bring these units over threshold. At the same time, their states have to be sufficiently high such that the additional internal activation from generation i − 1 succeeds in making the units fire.
• The condition for the states of the units that do not participate in the avalanche is, due to absence of leaks, just that they are sufficiently low such that the total internal activation they receive in the avalanche does not push them above firing threshold.
Due to the self-similar structure of the inhabited region (see previous section and Appendix, Section VII E 1), the subregion corresponding to the avalanche a factorizes into a hyperrectangle along the dimensions U(a) and a lower-dimensional inhabited region along the dimensions [N] \ U(a), with upper boundaries reduced by W δ(U(a)) (see Proposition E.8).
These considerations lead to the probability distribution of avalanches:
$$P_k(a = a) = V_a\, \frac{V_{[N] \setminus U(a)}\big(\operatorname{diag}(U) - W\delta(U(a))\big)}{V_{[N] \setminus \{k\}}(\operatorname{diag}(U))} \quad\text{with}\quad V_a := \prod_{j=2}^{D(a)} \prod_{i \in a_j} \sum_{\ell \in a_{j-1}} w_{i\ell}. \tag{19}$$
Equation (19) completely specifies the probability distribution of detailed avalanches. Distributions over avalanche sizes P k (S(a)), avalanche durations P k (D(a)), and avalanche assemblies P k (U (a)), as well as the probabilities P k (i ∈ U (a)) introduced in Eq. (13) all follow from this distribution by summation over the corresponding detailed avalanche probabilities.
However, due to the exponentially increasing number of detailed avalanches a with growing N, it is much harder to evaluate and investigate the dependence of these probabilities on the coupling structure: the corresponding sum over detailed avalanches is often difficult to bring into a closed-form expression in terms of the variables of interest, as is possible for P_k(i ∈ U(a)).
Nevertheless, we were able to derive a closed-form expression for the avalanche assembly distribution which is given by
$$P_k(U(a) = I) = \frac{V_{I \setminus \{k\}}(W\delta(I))\; V_{[N] \setminus I}\big(\operatorname{diag}(U) - W\delta(I)\big)}{V_{[N] \setminus \{k\}}(\operatorname{diag}(U))}. \tag{20}$$
Note that V I\{k} (W δ(I)) = 0 if activation from unit k can not spread to all units in I. When deriving this expression from Eq. (19), the sum of the V a over the corresponding avalanches is given in closed form by the single determinant V I\{k} (W δ(I)). We give two proofs of this remarkable identity (see Theorem F.4). The first one is a geometric proof that shows that the images of the avalanche regions under the dynamics T k cluster together and completely fill up the inhabited region along dimensions U (a) \ {k} up to the boundaries given by the total internal activation W δ(I) received by the units during the avalanche. The second, combinatorial proof directly uses the graph theoretical interpretation of the term V I\{k} (W δ(I)), which will be established in the next section.
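Eqs. (13), (18) and (20) can be verified exactly (up to floating point) with a few determinants. In the sketch below (example matrix is our own, thresholds U_i = 1) we define V_J(b) := |diag(b)_J − W_J| and check that the assembly probabilities of Eq. (20) sum to one over all assemblies containing the starting unit, and that they reproduce the participation probabilities M_ik/M_kk of Eq. (13):

```python
import numpy as np
from itertools import combinations

# illustrative 3-unit network with thresholds U_i = 1
W = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.10]])
N = 3
U = np.ones(N)
M = np.linalg.inv(np.eye(N) - W)

def V(J, b):
    """V_J(b) = |diag(b)_J - W_J| (cf. Eq. (18)); the empty determinant is 1."""
    J = sorted(J)
    if not J:
        return 1.0
    return np.linalg.det(np.diag(b[J]) - W[np.ix_(J, J)])

def P_assembly(k, I):
    """Eq. (20): probability that input to unit k triggers exactly assembly I."""
    delta = np.isin(np.arange(N), sorted(I)).astype(float)
    rest = [j for j in range(N) if j not in I]
    num = V(set(I) - {k}, W @ delta) * V(rest, U - W @ delta)
    return num / V([j for j in range(N) if j != k], U)

for k in range(N):
    assemblies = {frozenset(c) | {k}
                  for r in range(N) for c in combinations(range(N), r)}
    total = sum(P_assembly(k, I) for I in assemblies)
    assert np.isclose(total, 1.0)        # assemblies containing k exhaust all outcomes
    for i in range(N):
        p_i = sum(P_assembly(k, I) for I in assemblies if i in I)
        assert np.isclose(p_i, M[i, k] / M[k, k])   # reproduces Eq. (13)
```

Marginalizing the assembly distribution over all I containing a given unit i recovers Eq. (13) exactly, which is a useful consistency check when implementing the determinant formulas.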
IV. STRUCTURE-FUNCTION RELATION OF AVALANCHE ASSEMBLY PROBABILITIES
In this section we will interpret the assembly probability distribution in Eq. (20) in the context of graph theory, with the aim to distill the features of network connectivity which makes a given assembly likely to become active in form of an avalanche. To make the connections to graph topology easier to recognize, we will set U = 1 in this section.
Intuitively, assembly probability grows with increasing density of connections within the assembly, and increasing sparseness of the connections between the assembly and the rest of the network. The following considerations will allow us to assess which existing edges in the assembly network, i.e. the subnetwork along U(a), contribute most to its activation probability, and which new edge would be most beneficial for increasing this probability. Such information becomes important when a network needs to be optimized for assembly formation under given biological constraints, such as having to spend energy on forming and strengthening neural connections.
We start by first introducing the related graph-theoretical concepts, and continue by linking these concepts to the assembly probability distribution in Eq. (20). The cut weight appears directly in Eq. (20) in the term V_{[N]\I}(1 − Wδ(I)), which computes the phase space volume along the dimensions [N] \ I of the hyperrectangle [0, 1 − (Wδ(I))_j)_{j∈[N]\I}:

$$V_{[N] \setminus I}(1 - W\delta(I)) = \lambda_{[N] \setminus I}\big(\{u \in D_{[N] \setminus I} \mid u < U_{[N] \setminus I} - \operatorname{cut}(I)\}\big).$$
This term reveals that at the start of an avalanche, the state of all units that did not participate in it had to have a distance from firing threshold which was at least as big as the cut weight.
A. Assembly probabilities are proportional to weighted number of spanning trees
While the probability that units outside of the assembly do not fire is determined by the weight of the graph cut, the probability that the units in the assembly do fire is given by the weighted number of spanning trees in the assembly subnetwork.
A directed graph s k = (V, E ) will be called an (outgoing) spanning tree of G = (V, E) rooted at unit k ∈ V , if s k is a subgraph of G which includes all vertices of G and has |V | − 1 edges E ⊆ E such that every unit except k has an in-degree of 1, i. e. there exists a unique path from k to each unit in V . For every spanning tree s the product (j,i)∈E(s) w ij is the weight of s, and for every subset S of spanning trees we write
$$w(S) := \sum_{s \in S} \prod_{(j,i) \in E(s)} w_{ij}.$$
We further denote the set of all spanning trees rooted at vertex k by S k .
Spanning trees are well-studied objects in graph theory and are closely connected to the graph Laplacian L(W) = diag(W 1) − W. Note that

$$V_{I \setminus \{k\}}(W\delta(I)) = L^{(k)}(W_I) = w(S_k),$$

where L^{(k)} denotes the minor of the Laplacian obtained by deleting row and column k, and S_k is the set of spanning trees rooted at k in the assembly subnetwork. Taken together, we can rephrase Eq. (20) to be proportional to the product of the graph-theoretical terms

$$P_k(U(a) = I) \propto L^{(k)}(W_I)\; \lambda_{[N] \setminus I}\big(\{u \in D_{[N] \setminus I} \mid u < U_{[N] \setminus I} - \operatorname{cut}(I)\}\big). \tag{21}$$
The associated correspondence between assembly probabilities and spanning trees is illustrated in Fig. 4(b) for an assembly rooted at unit k = 5.
There is a natural correspondence between spanning trees rooted at unit k ∈ I of an assembly subgraph with I as set of vertices, and the number of ways in which an avalanche can spread from k through the assembly I. This correspondence was formalized for homogeneous networks in [40] and is extended here to weighted directed graphs: Each avalanche a = a(k, u) specifies which units fire at which generation of the avalanche. Each generation specifies one level of the spanning tree, hence the units in a j are separated from the root by exactly j − 1 edges. The term V a in Eq. (19) has a combinatorial interpretation, since expanding the terms leads to a sum over products of |I| − 1 edge weights, with each product being the total weight of an entire spanning tree rooted at k which is consistent with the level structure imposed by the detailed avalanche a. This correspondence is illustrated in Fig. 4(c). Taken together, the sum over all terms in V a for avalanches a = a(k, u) with U (a) = I is the total, weighted number of spanning trees rooted at k, leading to a combinatorial proof (Appendix, page 68) of Eq. (20).
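The identity V_{I\{k}}(Wδ(I)) = w(S_k) is an instance of the matrix-tree theorem for weighted directed graphs, and it can be checked by brute-force enumeration of rooted spanning trees on a small graph. In the sketch below the weights are our own illustrative choices, and w[i, j] follows the paper's convention of denoting the weight of the edge j → i:

```python
import numpy as np
from itertools import product

# illustrative weighted assembly subnetwork; w[i, j] is the weight of edge j -> i
w = np.array([[0.00, 0.30, 0.20],
              [0.25, 0.00, 0.10],
              [0.15, 0.20, 0.00]])
n = len(w)

def arborescence_weight(root):
    """Brute force: total weight of all outgoing spanning trees rooted at `root`
    (each non-root unit picks exactly one incoming edge; all units reachable)."""
    total = 0.0
    non_root = [i for i in range(n) if i != root]
    for parents in product(range(n), repeat=len(non_root)):
        parent_of = dict(zip(non_root, parents))
        if any(j == i or w[i, j] == 0.0 for i, j in parent_of.items()):
            continue
        # check every unit is reachable from the root via the chosen edges
        reached, frontier = {root}, [root]
        while frontier:
            v = frontier.pop()
            for i in non_root:
                if parent_of[i] == v and i not in reached:
                    reached.add(i)
                    frontier.append(i)
        if len(reached) != n:
            continue
        weight = 1.0
        for i, j in parent_of.items():
            weight *= w[i, j]
        total += weight
    return total

L = np.diag(w @ np.ones(n)) - w       # graph Laplacian L(W) = diag(W 1) - W
for k in range(n):
    keep = [i for i in range(n) if i != k]
    minor = np.linalg.det(L[np.ix_(keep, keep)])
    assert np.isclose(minor, arborescence_weight(k))   # matrix-tree theorem
```

The enumeration scales exponentially and is only meant to illustrate the identity; for larger assemblies one would always evaluate the determinant minor instead.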
Interestingly, the weighted number of spanning trees has a strong connection to graph reliability measures [51], yielding that the uniformly most robust graph maximizes the number of weighted spanning trees. Consequently, the probability for joint firing of an avalanche assembly is optimized when the assembly subnetwork is robust under random edge failure. For example, the connectivity of a synfire chain resembles a bipartite graph which, in the undirected version, maximizes the number of spanning trees under the constraint of fixed number of edges and units [51].
B. Effect of links on assembly probability is measured by effective resistance
We showed that the assembly probability is proportional to its weighted number of spanning trees.
From this exact mathematical relation we derive a measure of the importance of individual edges in the assembly network as well as the optimal new connection to form in order to maximize the assembly probability.
If a single link is removed from the assembly network, the probability P k (U (a) = I) is reduced by the weighted number of spanning trees that contain this edge. Similarly, for all new connections between assembly units, the number of additional spanning trees made possible by incorporating this edge into the assembly network increases assembly probability.
For j, i ∈ I let G^{(j,i)} denote the graph obtained by inserting the (additional) edge (j, i) with weight w_ij > 0 into the weighted graph G(W), and let S^{(j,i)}_k be the set of all spanning trees rooted at unit k in the modified graph G^{(j,i)}. Note that G^{(j,i)} = G if (j, i) ∈ E(G). With these definitions we can generalize the concept of resistance distance to weighted directed graphs by introducing the matrix of directed k-resistances Ω^k via Eq. (22):
$$\Omega^k_{ij} := \frac{w\big(\{s \in S^{(j,i)}_k \mid (j,i) \in E(s)\}\big)}{w_{ij}\, w(S_k)}. \tag{22}$$
The entries of Ω^k specify the effect of adding or removing a single edge in the assembly network on the assembly probability P^W_k(U(a) = I) as follows:

$$P^{W'}_k(U(a) = I) = (1 \pm \omega\, \Omega^k_{ij})\, P^{W}_k(U(a) = I),$$

where W' is the coupling matrix obtained either by adding the directed link with strength ω from unit j ∈ I to unit i ∈ I if w_ij = 0 (plus sign on the r.h.s. of the equation), or by removing the link with strength ω = w_ij (minus sign on the r.h.s.) from the original coupling matrix W.
If the assembly subnetwork is undirected, i.e. w_ij = w_ji for all i, j ∈ I, Ω^k becomes independent of k and reduces to the resistance distance [52, 53]. The matrix entry Ω_ij then corresponds to the effective resistance between units i and j in an equivalent electrical network in which edges represent resistors with conductances given by the edge weights. Ω can be computed efficiently and has many applications extending far beyond electrical networks, for example for studying commute times in random walks [54, 55]. In the example of Fig. 4, the optimal new edge to add to the assembly network in order to maximally increase the probability of the assembly avalanches would be the edge between units 2 and 5 with effective resistance Ω_{2,5} = 2, thus tripling the assembly probability.
In the next two subsections, we switch the focus from how network topology influences assembly probabilities to how it induces correlations between membrane potentials of recurrently connected units, and how these correlations affect the dynamics (branching) of an ongoing avalanche. These investigations allow to state conditions on the networks on which the EHE-model reduces to a simple percolation process, and to identify its universality class.
C. States of recurrently connected units are stochastically dependent
We showed that the unique ergodic measure P of the system is given by the Lebesgue measure λ D supported on the inhabited region D. This not only allowed us to determine phase space volumes that represent certain avalanche probabilities, but can also be used to determine stochastic dependencies between states of different units. For a uniform measure, these dependencies can be deduced entirely from the geometrical structure of the support D.
If the uniform measure is supported on a rectangle, i. e. if it factorizes into simple intervals, then the units' states are statistically independent. However, this is typically not the case for the generalized EHE-model, as can be seen from the structure of the inhabited region displayed in Fig. 2: The interval of possible values for u 1 in the inhabited region is smaller if the state u 2 is close to its allowed minimum, while the interval gets bigger when the state u 2 is near firing threshold. Thus, the geometry of the inhabited region D (or equivalently, the non-inhabited region Λ [N ] ) reflects a negative correlation between the states u 1 , u 2 , which decreases the probability to find both units in low states, and in turn facilitates that the units fire together in an avalanche.
But exactly which features of the graph topology influence the volume and geometric structure of the inhabited region D? It turns out that its volume is completely determined by the eigenvalues of W or, equivalently, by the cycle motifs occurring in G(W), and that the inhabited region factorizes along the strongly connected components of G. Strongly connected components are subnetworks in which each unit is reachable from each other unit. In more detail, the spectrum of the adjacency matrix W determines the phase space volume of the inhabited region by

$$|D| = |1 - W| = \prod_{i=1}^{N} (1 - \lambda_i), \tag{23}$$

where λ_i are the eigenvalues of W. There is a combinatorial interpretation of |1 − W| which allows identifying cycles in W as the relevant feature determining the volume of the inhabited region (see Corollary G.1):

$$|1 - W| = 1 + \sum_{n=1}^{N} \sum_{L \in \mathcal{L}_n(W)} (-1)^{\#(L)}\, w(L),$$

where L_n(W) denotes the collections of vertex-disjoint cycles in G(W) covering n vertices, #(L) is the number of cycles in L, and w(L) is the product of the weights of the edges in L. There is an important distinction between self-loops and directed cycles connecting at least two units.
The effect of self-loops on the inhabited phase space is equivalent to lowering the corresponding firing thresholds (Proposition G.2), whereas a recurrent coupling between more than one unit induces a stochastic dependency between the recurrently connected units (i.e. units in the same strongly connected component). However, if there are recurrent connections in the coupling matrix, the number of paths between two nodes is not necessarily finite anymore. Since we have a limit on the recurrent feedback via Eq. (5), avalanches can also not spread along paths in which units occur more than once. In consequence, there is a discrepancy between the interpretation of M_ik as a sum over all putative paths in the network and the much smaller number of paths that can actually be realized by propagating avalanches. The probability that the set of units a_j becomes active in generation j of the avalanche, given the units which fired in the previous generations a_1, ..., a_{j−1}, is given by the following equation (Theorem G.5):

$$P(a_j = a_j \mid a_1 = a_1, \ldots, a_{j-1} = a_{j-1}) = \prod_{k \in a_j} \sum_{\ell \in a_{j-1}} w_{k\ell} \;\times\; \frac{V_{[N] \setminus U_j(a)}\big(1 - W\delta(U_{j-1}(a))\big)}{V_{[N] \setminus U_{j-1}(a)}\big(1 - W\delta(U_{j-2}(a))\big)}. \tag{24}$$
For the homogeneous EHE-model it was shown [40] that the avalanche size statistics converges in distribution to the statistics obtained from a Galton-Watson branching process. However, the general branching process described by Eq. (24) is much more involved, as it requires memory of the units triggered in previous avalanche steps, and since the updates of individual units P(a_j = a_j | a_1 = a_1, ..., a_{j−1} = a_{j−1}) for a_j ⊆ [N] \ U_{j−1}(a) are not statistically independent due to correlations between their states.
If the coupling matrix W represents a DAG, the inhabited region is the complete cube C and the branching equation simplifies to
$$P(a_j = a_j \mid a_1 = a_1, \ldots, a_{j-1} = a_{j-1}) = \prod_{k \in a_j} \sum_{\ell \in a_{j-1}} w_{k\ell} \cdot \frac{\prod_{k \in [N] \setminus U_j(a)} \left(1 - \left(W \delta(U_{j-1}(a))\right)_k\right)}{\prod_{m \in [N] \setminus U_{j-1}(a)} \left(1 - \left(W \delta(U_{j-2}(a))\right)_m\right)}.$$
In this case, the region of states consistent with the avalanche propagation up to step $j-1$ of the avalanche is given by a simple hyperrectangle, and the probability that unit $k$ fires in step $j$ of the avalanche equals $\sum_{\ell \in a_{j-1}} w_{k\ell} / \left(1 - (W \delta(U_{j-2}(a)))_k\right)$. The dependence of the branching probability for step $j$ on the previously active units $U_{j-2}(a)$ vanishes on networks for which all paths from the starting unit $a_1$ to an arbitrary unit $k$ have the same number of steps. This is the case, for example, if the coupling network is a directed tree or a regular percolation network.
In this case, the branching probability becomes particularly simple:
$$P(a_j = a_j \mid a_{j-1} = a_{j-1}) = \prod_{k \in a_j} \sum_{\ell \in a_{j-1}} w_{k\ell} \prod_{k \in [N] \setminus a_j} \left(1 - \sum_{\ell \in a_{j-1}} w_{k\ell}\right).$$
This indicates a simple branching process, in which each unit $k$ fires (transitions to the active state) independently of the other units with probability $\sum_{\ell \in a_{j-1}} w_{k\ell}$. As an example, consider the (infinite) (1+1)D lattice, in which each unit $v_{t,x}$, $t \in \mathbb{N}$, $x \in \mathbb{Z}$, receives input only from its two 'parents' $v_{t-1,x-1}$, $v_{t-1,x+1}$ with connection strength $w$. In this case, the probability that node $v_{t,x}$ fires at the current step of the avalanche depends on how many of its parent nodes fired in the previous step. If none/exactly one/both of its parents were active in the previous step of the avalanche, the branching probability is $0$/$w$/$2w$, respectively. This shows that the EHE-model on the (1+1)D lattice is equivalent to the Domany-Kinzel model [56,57] with $p_2 = 2p_1$, and thus has its critical point in the limit $w \to 1/2$, where it displays compact directed percolation, which belongs to the exactly solvable universality class of branching-annihilating random walks [58, Section 3.2].
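The additive branching rule on the (1+1)D lattice can be sketched as a small Monte-Carlo simulation. This is an illustration only; the function names are ours, and we implement the additive probability $n \cdot w$ for $n$ active parents, matching $p_2 = 2p_1$:

```python
import random

def lattice_step(active, w, rng):
    """One avalanche generation on the (1+1)D lattice: site x has
    parents x-1 and x+1 in the previous generation, and fires with
    probability n*w when n of its parents were active (p2 = 2*p1)."""
    parents = {}
    for x in active:
        for child in (x - 1, x + 1):
            parents[child] = parents.get(child, 0) + 1
    return {x for x, n in parents.items() if rng.random() < n * w}

def avalanche_size(w, rng, max_steps=10_000):
    """Total number of firings when seeding a single active site."""
    active, size = {0}, 1
    for _ in range(max_steps):
        active = lattice_step(active, w, rng)
        if not active:
            break
        size += len(active)
    return size
```

For $w$ well below $1/2$ avalanches die out quickly; as $w$ approaches $1/2$ the mean size diverges, consistent with the compact directed percolation picture.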
V. APPLICATION TO STRUCTURALLY SIMPLE NETWORKS
Our mathematical framework provides a novel gateway for better understanding collective behavior and synchronization statistics in recurrent excitatory networks. To demonstrate its advantage, we apply our framework in this section to structurally simple examples, including a planar network with periodic boundary conditions and distance-limited connectivity. Here we study deviations from mean field behavior in dependence of changes in coupling topology analytically, and show that scaling exponents of the mean avalanche size depend on the maximal coupling distance. The limiting case of all-to-all couplings leads to a homogeneous system without self-weights, for which we derive not only the mean avalanche size but also the avalanche size distribution analytically. In addition we illustrate topology-induced effects on the avalanche size distribution in small networks by rewiring a ring network into a small world network, and by transforming a ring network into a line network by deletion of a single edge. Furthermore, we quantify the effect of an inhomogeneity between intra-network and inter-network coupling weights on the avalanche size distribution of two all-to-all coupled subnetworks.
A. Homogeneous network without self-weights
In this section we derive the analytical avalanche size distribution and the mean firing rate for a homogeneous network without self-weights. We denote the coupling matrix of this network with zeros on the diagonal and otherwise constant entries α/(N − 1) by W h (α). Using our mathematical framework, it is easy to extend the known results for homogeneous networks with self-weights [28,40] to calculate the avalanche size distribution and the mean avalanche size in dependence of N and α. For completeness, we also show how our framework reproduces the known expressions for the avalanche size distribution and mean avalanche size of the homogeneous network with self-weights in the appendix, section VII H 1.
Due to the symmetry in the homogeneous network, every assembly of size n has equal probability and thus the avalanche size distribution is obtained from the assembly distribution by counting the number of assemblies of a given size.
Let $W_h = ((1 - \delta_{ij}) w)_{i,j \in [N]}$ for some $w \ge 0$. Fixing an arbitrary starting unit, there are $\binom{N-1}{n-1}$ possible assemblies of size $n$. Every assembly graph is itself a complete graph. Due to this symmetry, the probability of $S(a) = n$ follows from the probabilities of assemblies with $|I| = n$ units for the special case of homogeneous matrices. We will give closed-form expressions for the determinants $V_{I \setminus \{k\}}(W_h \delta(I))$ and $V_J(U)$ for general $U \in \mathbb{R}^{[N]}_{>0}$, $\emptyset \ne J \subseteq [N]$, occurring in (20). The first term is the $(k,k)$ minor of the assembly subgraph Laplacian, which is equal to the number of spanning trees in the assembly subgraph rooted at $k$. By Cayley's Theorem, which is the special case of the Matrix Tree Theorem for complete graphs, the number of spanning trees in a complete graph with $n$ units is $n^{n-2}$. Each spanning tree consists of $n-1$ edges and is thus weighted by $w^{n-1}$. For the more general $V_J(U)$ we obtain $V_J(U) = (U + w)^{|I|} - |I| w (U + w)^{|I| - 1}$ by using Proposition G.2 and the corresponding expression (74) for homogeneous matrices with self-loops. Using the parametrisation $w = \frac{\alpha}{N-1}$, $0 \le \alpha < 1$, the avalanche size distribution of nonempty avalanches in the homogeneous network without self-weights is given by:
$$p_h(n) = P_{W_h}(S(a) = n \mid S(a) > 0) = \binom{N-1}{n-1}\, n^{n-2} \left(\frac{\alpha}{N-1}\right)^{n-1} \left(1 - \frac{(n-1)\alpha}{N-1}\right)^{N-n-1} \frac{1-\alpha}{\left(1 + \frac{\alpha}{N-1}\right)^{N-2} \left(1 - \frac{(N-2)\alpha}{N-1}\right)} \tag{25}$$
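Eq. (25), as reconstructed here, can be sanity-checked numerically (a sketch; the function names are ours): the distribution should sum to one over $n = 1, \ldots, N$, and its mean should reproduce the closed-form mean avalanche size derived in this section.

```python
from math import comb

def p_h(n, N, alpha):
    """Avalanche size distribution Eq. (25) of the homogeneous
    network without self-weights, with w = alpha / (N - 1)."""
    w = alpha / (N - 1)
    num = (comb(N - 1, n - 1) * n ** (n - 2) * w ** (n - 1)
           * (1 - (n - 1) * w) ** (N - n - 1) * (1 - alpha))
    den = (1 + w) ** (N - 2) * (1 - (N - 2) * w)
    return num / den

def mean_size(N, alpha):
    """Mean avalanche size as the first moment of p_h."""
    return sum(n * p_h(n, N, alpha) for n in range(1, N + 1))
```

For small $N$ the normalization can be verified by hand (e.g. for $N = 2$ one obtains $p_h(1) = 1 - \alpha$ and $p_h(2) = \alpha$).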
Using the Stirling approximation $n! \approx \sqrt{2\pi n}\,(n/e)^n$ for the factorial and $\binom{L}{l} \approx L^{l}/l!$ for the binomial coefficients, valid for $n \gg 1$ and $L \gg l$, we obtain a power-law scaling of the avalanche size distribution in the limit $N \to \infty$, $\alpha \uparrow 1$ with exponent $-3/2$, which is the expected mean-field limit.
With $M_h$ given by the expression
$$M^h_{ij} = \left((1 - W_h)^{-1}\right)_{ij} = \frac{w + \delta_{ij}\left(1 - (N-1) w\right)}{1 - (N-2) w - (N-1) w^2}$$
and Eq. (15) we obtain the mean avalanche size as
$$E_{W_h}(S(a) \mid S(a) > 0) = \frac{N - 1 + \alpha}{(N-1) - (N-2)\alpha}.$$

B. Two-dimensional torus with limited coupling distance

Consider a two-dimensional $L \times L$ lattice with periodic boundary conditions in which each unit is coupled to all units within coupling distance $l$ with a weight of $w = \alpha/((2l+1)^2 - 1)$, $\alpha \in [0, 1)$.
. We denote this connectivity by the coupling matrix W l (α).
We will examine the avalanche size distribution and the scaling of the mean avalanche size, for which we obtain a closed-form analytic solution for every $l$, in dependence of the coupling distance $l$.
For l < (L − 1)/2 , coupling is not all-to-all anymore. In this case, there is no apparent symmetry between avalanche assemblies of the same size that would simplify computing the avalanche size distribution. Even though we can evaluate each specific assembly probability analytically using Eq. (20), we did not find a closed-form expression for the sum over assemblies of a given size.
However, we found a closed-form expression for the mean avalanche size in dependence of L and the coupling strength w = α/((2l + 1) 2 − 1).
From shift invariance and periodic boundary conditions, matrix-vector multiplication with the $L^2 \times L^2$ matrix $W_l$ represents a two-dimensional convolution with a rectangular $(2l+1) \times (2l+1)$ point spread function. The eigenvalues of $W_l$ are thus given by the two-dimensional Fourier transform of its point spread function and determine the phase space volume through Eq. (23). We denote the eigenvalues of $W_l$ by $\lambda_i^{(l)}$, $i = 1, \ldots, L^2$.
We will now determine the mean avalanche size given by Eq. (15). To do this, we need to know the diagonal elements of $M := (1 - W_l)^{-1}$ and the sum of entries in its rows. Note that all entries of $M$ are positive, since the graph given by $W_l$ is connected, and that $M$ inherits the shift invariance from $W_l$. Thus, all diagonal elements $M_{kk}$ are the same and equal to
$$M_{kk} = \operatorname{trace}(M)/N = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{1 - \lambda_i^{(l)}}.$$
The second equality follows since the eigenvalues of $M = (1 - W_l)^{-1}$ are $\left(1 - \lambda_i^{(l)}\right)^{-1}$. Taken together, we arrive at a closed form expression for the mean (non-empty) avalanche size
$$E_{W_l}(S(a) \mid a_1 = \{k\}) = \frac{(M \mathbf{1})_k}{M_{kk}} = \frac{N (1-\alpha)^{-1}}{\sum_{i=1}^{N} \left(1 - \lambda_i^{(l)}\right)^{-1}} = E_{W_l}(S(a) \mid S(a) > 0). \tag{26}$$
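Eq. (26) can be evaluated efficiently because the eigenvalues $\lambda_i^{(l)}$ are the 2D Fourier transform of the point spread function. A sketch (function name and parameter choices are ours); in the all-to-all limit $l = (L-1)/2$ it reproduces the homogeneous result from the previous subsection:

```python
import numpy as np

def mean_size_torus(L, l, alpha):
    """Mean avalanche size Eq. (26) on an L x L torus where each unit
    is coupled to all units within Chebyshev distance l (no self-
    coupling) with weight w = alpha / ((2l+1)^2 - 1)."""
    w = alpha / ((2 * l + 1) ** 2 - 1)
    psf = np.zeros((L, L))
    for dx in range(-l, l + 1):
        for dy in range(-l, l + 1):
            if (dx, dy) != (0, 0):
                psf[dx % L, dy % L] = w
    lam = np.fft.fft2(psf).real   # eigenvalues of the circulant W_l
    return L * L / ((1 - alpha) * np.sum(1.0 / (1.0 - lam)))
```

For $L = 5$, $l = 2$ the coupling is all-to-all ($N = 25$), and the result coincides with $(N-1+\alpha)/((N-1)-(N-2)\alpha)$.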
Note that the coupling distance does not only change the mean avalanche size in the limit $\alpha \to 1$, it also affects the scaling exponent. As shown above, the mean avalanche size scales according to $(1-\alpha)^{-\gamma}$ with $\gamma_{W_h} = 1$. This scaling exponent shrinks with decreasing coupling distance, such that $0 < \gamma_{W_{l=1}} < \gamma_{W_{l=2}} < \gamma_{W_h} = 1$. This result is intuitively plausible, since a limited coupling distance imposes topological constraints on the spread of an avalanche. This constraint makes larger avalanches less likely to occur, while smaller avalanches are observed more frequently due to the larger interaction strength between two units with decreasing $l$.
Differences in scaling exponents are also apparent in the critical avalanche size distributions (Fig. 6, inset) which exhibit a power-law characteristics. While the slope for the homogeneous network approximates the mean field exponent −3/2, it becomes less negative with limited coupling distance.
C. Analytical avalanche size distributions in small or structurally simple networks
While our framework provides a closed form expression for the mean avalanche size and for the probability of any avalanche assembly for networks with arbitrary non-negative couplings, the avalanche size distribution has to be obtained by summing over all $2^N$ assemblies in the network (and over at most $N$ choices for the unit starting the avalanche). This is only feasible for small networks with fewer than 20 nodes. However, networks with specific structure may allow a more efficient computation of the avalanche size distributions, for example by exploiting symmetries (like in the homogeneous network, where each assembly of a given size is equally likely to occur) or by taking advantage of sparsity in the network, which limits the number of connected assemblies.
In this section, we showcase how the network connectivity changes the avalanche size distributions using example networks which are sufficiently small or structurally simple, using a uniform driving probability p = (1/N ) · 1. Our considerations are accompanied by the formal treatment detailed in section VII H of the Appendix, which is complemented by some analytical insights for the ring network and Erdös-Renyi networks.
We start with examples of small networks of size N = 16, comparing ring networks with l-nearest neighbor couplings to a small world network and a homogeneous network. For the l-nearest neighbor ring model, each unit is connected to its l neighbors on each side. The homogeneous network is connected all-to-all with equal weights. Fig. 7, panel (a) shows the corresponding avalanche size distributions with the insets illustrating the 2-nearest neighbor ring network and the particular realization of the small world network. To illustrate changes induced by the topology, all coupling matrices were normalized such that the sum of incoming edge weights to each unit is equal to α = 0.75. The probabilities for large avalanche sizes increase from the more sparsely coupled ring networks, over the small-world network to the homogeneous network. Since the small world network was generated by randomly rewiring edges with a probability of 0.3 from the l = 2 ring network to form a Watts-Strogatz graph [60], this process decreased the mean path length which in turn also facilitated larger avalanches to occur. Note that the avalanche distribution we show is not an average over an ensemble of small-world networks, since our framework allowed to compute it for a particular realization of a small-world topology, which is displayed in the inset.
In general, altering just a single edge can have large effects on the global dynamics and the associated avalanche statistics. For example, it follows from Kirchhoff's matrix tree theorem that the probability of a global avalanche in the ring network of size $N$ is at least $N$ times as high as the probability for the corresponding line network.
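The ring-versus-line comparison can be illustrated directly with Kirchhoff's matrix tree theorem (a sketch; function names are ours): the cycle $C_N$ has $N$ spanning trees while the line graph has exactly one, which is the factor $N$ referred to above. Cayley's formula $n^{n-2}$ for the complete graph drops out of the same cofactor computation.

```python
import numpy as np

def num_spanning_trees(A):
    """Kirchhoff's matrix tree theorem: any cofactor of the graph
    Laplacian L = D - A counts the (weighted) spanning trees."""
    L = np.diag(A.sum(axis=1)) - A
    return round(np.linalg.det(L[1:, 1:]))

def ring(N):
    A = np.zeros((N, N))
    for i in range(N):
        A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
    return A

def line(N):
    A = ring(N)
    A[0, N - 1] = A[N - 1, 0] = 0.0   # delete a single edge
    return A

def complete(N):
    return np.ones((N, N)) - np.eye(N)
```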
In addition to network connectivity, weight inhomogeneities affect the avalanche size distribution.
To illustrate this, consider two homogeneous subnetworks of size $N$, which are coupled in an all-to-all fashion with intra-network coupling weight $\alpha/N$ and inter-network coupling weight $\beta/N$. Such a network topology is a ubiquitous structure in the brain, where two strongly coupled local populations
VI. DISCUSSION
In this study we generalized a well-established model class for neural avalanches [28,[40][41][42][43][44] to arbitrary network topologies and non-negative connection weights, and performed a thorough analysis of its dynamics.
Mathematical analysis of this neural model has always been a challenge due to the discontinuities of the avalanche dynamics at the spiking threshold. Even though remarkable progress has been made [42,43], ergodicity of the homogeneous skew-product system remained a conjecture, and a formal treatment of avalanche statistics on arbitrary topologies was lacking. Our framework keeps track of avalanches as sequences of index sets specifying which units fired at which avalanche generation. The ergodic measure, along with the self-similar structure of its support, allowed us to derive avalanche distributions analytically by identifying the regions in phase space which lead to specific avalanches. In addition, we found a closed form for the distribution of units involved in avalanches, the assembly distribution. To our knowledge, this approach provides the first detailed investigation of assembly distributions for a recurrent network model with spiking neurons.
For demonstrating the benefits of our approach, we analyzed a structurally simple example for a non-homogeneous coupling topology. Specifically, we considered a shift-invariant uniform connectivity with limited coupling distance on a two-dimensional lattice with periodic boundary conditions (two-dimensional torus). Our framework captures deviations of the scaling exponent of the mean avalanche size from the corresponding mean field value analytically. Furthermore, we assessed the corresponding scaling exponent of the critical avalanche size distributions numerically. These results illustrate changes in scaling exponents in dependence of the coupling topology from a predominantly local coupling exhibiting coalescence [61] to an all-to-all homogeneous network, where the mean field exponents are attained. Future studies could use our analytical framework to study changes in scaling exponents by other features of the coupling topology like synaptic density and feedback loops which have been shown to change scaling exponents in neuronal cultures [62].
Implications for ensemble codes. One main result of our analysis is a closed-form expression of the probability that an ensemble of units fires in short temporal succession in form of the assembly of an avalanche. This form of transient synchronization helps to transmit signals in a fast and reliable manner, since it is more efficient in driving postsynaptic cells than spikes arriving asynchronously [10,11]. Functionally, assemblies can be used for establishing whole coding schemes, as recently formalized in a computational system called Assembly Calculus [63]. The generalized EHE model could in this context serve as a physiologically more realistic realization of such a coding scheme.
But most importantly, reoccurring sequences of spike patterns with a particular composition of participating units were indeed observed in experiments [64][65][66][67][68][69][70][71], indicating a robust formation of assemblies during signal processing.
Having a formal framework is thus essential for interpreting such data, and for understanding how coding with synchronous neural ensembles is enabled by external input and constrained by network connectivity.
Relation to graph theory. One major insight from our analysis is the existence of close links between assembly formation and graph theoretical properties of the synaptic connections, which we will discuss in the following.
We found that the adjacency matrix of an assembly subnetwork determines assembly probabilities as a function of the eigenvalues of the corresponding graph Laplacian. In particular, the probability that external input to unit k starts an avalanche encompassing a given assembly is proportional to the (k, k)-cofactor of the graph Laplacian. Via the well-known Matrix Tree Theorem (also known as Kirchhoff's Theorem, which generalizes Cayley's Theorem to weighted digraphs) this property is related to the graph theoretical concept of spanning trees, making the assembly probability proportional to the (weighted) number of spanning trees for the assembly network. The spanning trees themselves are directly related to the different pathways individual avalanches can spread through the assembly network.
With respect to network function, the weighted number of spanning trees in a graph can be seen as a measure of robustness. Let us consider the elementary setting of a simple graph with a fixed number of units and edges. If every edge can fail independently with a given probability, a uniformly most reliable graph has to maximize the number of spanning trees, i.e. be τ-optimal [72]. In this sense we could extend the Hebbian principle from pairs of neurons to assemblies: what robustly wires together, fires together. Thus our framework provides an explicit objective function for reliable and robust assembly formation. In addition, we analytically determined the impact of single edge failure and the gain of formation of a new edge on assembly probability. For undirected graphs, the resulting measure turns out to be equivalent to the well-known resistance distance [52,53].
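The equivalence to the resistance distance can be checked numerically for undirected graphs (a sketch; names are ours): deleting a unit-weight edge $e = (i, j)$ scales the number of spanning trees by the factor $1 - R_{ij}$, where $R_{ij}$ is the resistance distance computed from the Laplacian pseudoinverse.

```python
import numpy as np

def num_trees(A):
    """Matrix tree theorem: cofactor of the graph Laplacian."""
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.det(L[1:, 1:])

def resistance(A, i, j):
    """Resistance distance via the Moore-Penrose pseudoinverse of L."""
    Lp = np.linalg.pinv(np.diag(A.sum(axis=1)) - A)
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

# complete graph K4; delete the edge (0, 1)
A = np.ones((4, 4)) - np.eye(4)
A_del = A.copy()
A_del[0, 1] = A_del[1, 0] = 0.0
```

On $K_4$, the resistance between adjacent nodes is $1/2$, so edge deletion halves the tree count from $16$ to $8$.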
The consequence of these mathematical results for brain function is a general prediction that the Laplacian spectrum would relate more directly to the occurrence of collective synchronous events than the adjacency matrix of the underlying anatomical network. In other words, the strength of a direct connection between neural populations is less indicative for the magnitude of their effective interaction than the sum of all direct and indirect (weighted) pathways between those two units.
Interestingly, it was demonstrated just recently that functional brain connectivity is indeed best predicted from the Laplacian of the structural connectivity which was extracted from diffusion tensor imaging data [73].
Furthermore, the equations for equilibrium rates and their covariances, Eq. (12), are consistent with the corresponding results for Hawkes processes [37], i.e. (linearly) coupled Poisson processes. For these processes, structure-function relationships have been studied in detail [37][38][39]. Corresponding results, such as which graph motifs most strongly influence equilibrium rates, translate directly to our model.
Relation to branching and percolation processes. Neural avalanches are often studied in simplified models with discrete states and a dynamics defined as a branching process on a graph [...]; there, the edge weights represent branching probabilities. In particular, the probability that a unit becomes active will be proportional to the sum of incoming edge weights from currently active units.
In addition, we provided a direct relationship to percolation processes by showing that avalanches of the EHE-model for the particular choice of the (1+1)D lattice propagate equivalent to the directed compact percolation process which belongs to the universality class of branching-annihilating random walks [58].
Model generalization. Models are constructed for capturing the generic behaviour of a real system, while being ideally as simple as possible to allow for a comprehensive understanding and analysis of the underlying mechanisms. In this sense we believe that our formal framework provides a major advance over previous work. It is still sufficiently simple for a rigorous analysis, but allows studying assembly formation and avalanche dynamics in arbitrary, inhomogeneous networks. This is the generic case for neural systems in the brain, and assuming homogeneity in these situations will lead to misleading results or apply only to small subsystems for which this condition is approximately fulfilled.
For making analytical treatment possible, the extended EHE model retains some simplifying assumptions from the original framework [28]: it does not have leak conductances, there is no "hard" reset after a spike, and it assumes a separation of time scales. In the following, we will consider implications of lifting these assumptions on the mathematical treatment, and discuss how our results
can be expected to generalize to physiologically more realistic neural units and networks.
a) Separation of time scales. In order to unambiguously identify the detailed progression of avalanches we assume a separation of time scales in this model, which means that external input only occurs after an ongoing avalanche has terminated. This is a common assumption in avalanche models [28,29,33]. A weakening of this assumption would allow several avalanches to coexist and to merge. It is known from field-theoretical treatment that allowing external drive during avalanches leads to changes in the scaling relation like for example the avalanche size [34]. This phenomenon was recently studied in detail [75] in a model very similar to our framework with the result that size distribution exponents in the critical state decreased with increasing relaxation of the time scale separation.
b) Spike reset and refractory period. In the EHE-model resetting a unit's state u after emission of a spike is done by simply subtracting the firing threshold. If instead the units were reset to zero as in other integrate-and-fire models, the spike's impact on the progression of u would no longer be linear. In consequence, the colored boundaries in state space (Fig. 2) would still act as portals, but with an additional absorbing condition. This condition would ensure that the state remains 'glued'
to the resting state u = 0 after transitioning through the boundary, thus effectively dissipating the excess synaptic input delivered in the current generation of an avalanche. It would still be possible to study the system on the torus, however, with the penalty of having a discontinuous dynamics at the boundaries. A 'hard' reset would also induce additional state correlations which would need to be countermanded by additional randomness, as e. g. a stochastically varying drive u 0 , for obtaining a smooth invariant measure which potentially can be treated analytically.
Interestingly, by simultaneously lifting time scale separation and introducing a 'hard' spike reset, the state dynamics will again become closer to the EHE system. Since an avalanche will now be spread over several milliseconds, it is likely that a smaller part of its total synaptic input will arrive when the neuron is just spiking and insensitive to those inputs. In consequence, a smaller fraction of recurrent feedback would be lost.
c) Leak conductances. In real neurons leak conductances make the membrane potential decay towards its resting value. Introducing leaks in the EHE model would thus lead to a non-homogeneous invariant measure which increases towards the resting potential. In addition, state trajectories would be able to enter the formerly non-inhabited region and hence violate validity of the torus transformation. However, the resulting effects will be sufficiently small if assuming a strong external drive in comparison to a weak intrinsic leak [40], such that we can expect our main result in Eq. (20) to remain approximately valid. Similar reasoning applies to schemes where the excitatory drive is on average a little higher than the inhibitory suppression.
Perspectives. The main contribution of our study is the extension of an analytical framework for assembly formation in recurrent networks from homogeneous couplings to networks with arbitrary (positive) connectivity, now allowing rigorous treatment of avalanche dynamics in a much larger class of systems than in previous studies. For future research, a logical next step would be to investigate temporal aspects such as the statistics of avalanche duration, correlations between subsequent avalanches, and inter-event statistics and its relation to known brain rhythms such as gamma oscillations [76]. We think that for these aspects, analytical treatment is within reach and could nicely complement our results on assembly formation.
On a more general level, we believe the novel framework introduced here might support a paradigm shift in research on neural criticality. In highly inhomogeneous systems subject to a substantial external drive, clean power laws cannot be expected, even if the system is at the brink of some phase transition or at an optimal point for information processing. However, such a situation is actually the rule, and not the exception, when investigating active processing in the brain. Being able to handle these more general situations is the advantage of our theory. In consequence, power laws and criticality played only a secondary role in our study, while instead we focused on detailed assembly formation. Combined with structured inputs from 'meaningful' external stimuli [12] we expect our tools in future studies to provide new insights into how avalanche formation and, potentially, criticality serve information processing and brain function.
ACKNOWLEDGMENTS
This work was supported by the DFG priority programs SPP 1665 (ER 324/3-2) and SPP 2205 (ER 324/5-1). We thank Federica Capparelli and Nergis Tömen for insightful discussions at the initial stage of this project.
VII. APPENDIX
This appendix is organized as follows: The order of the sections in this part is the same as the order in which the topics are treated in the main text. While the main text focuses on the most important results and related intuitions, the appendix provides the corresponding rigorous mathematical treatment and technical details. Although the appendix itself is structured to be self-contained, we advise the dedicated reader to go through the corresponding sections in the main text and the appendix in conjunction.
For convenience of the readers, we first repeat the basic definitions of the generalized EHE model.
In section A we then state some general properties of the model and its avalanche dynamics F which are important for all subsequent sections.
In section B we show that dynamics of the model is homeomorphic to a simple translation on the N -dimensional torus T [N ] which greatly facilitates any formal treatment, allowing to determine under which exact conditions the system is ergodic (section C), and permitting to compute expected firing rates and spike count covariances explicitly (section D).
Section E develops a description of the self-similar structure of the 'inhabited region' D in the model's state space. This description and the notion of ergodicity (section C) is a prerequisite for calculating avalanche probabilities in section F, which is followed by section G detailing the relation of the obtained equations to graph theoretical terms.
The final section H exemplifies how this mathematical framework can be used to derive avalanche size statistics for various networks with different topology and regular structure.
Notation and definition summary: We start by briefly summarizing the model and notation of its dynamics Eq. (1)-Eq. (4) in the remainder of this section:
$$T_k : C \to C, \quad u \mapsto \bar F(u + u_0 e_k), \tag{27}$$
with
$$\bar F : \mathbb{R}^{[N]}_{\ge 0} \to C, \quad u \mapsto F^{\tau(u)}(u), \tag{28}$$
$$F : \mathbb{R}^{[N]}_{\ge 0} \to \mathbb{R}^{[N]}_{\ge 0}, \quad u \mapsto u - (\operatorname{diag}(U) - W)\, A(u), \tag{29}$$
and
$$\tau : \mathbb{R}^{[N]}_{\ge 0} \to \mathbb{N}_0, \quad u \mapsto \min\left\{ n \in \mathbb{N}_0 \mid F^n(u) \in C \right\}. \tag{30}$$
While $F$ describes one generation of an avalanche, $\bar F$ subsumes an entire avalanche, with $\tau$ being its duration. Using these definitions, $T_k$ describes one iteration of the model upon receiving external input to unit $k$ (which might or might not trigger an avalanche).
The connection weights $w_{ij}$ of the matrix $W$ are subject to the constraints
$$u_0 + \sum_{j=1}^{N} w_{ij} < U_i \quad \text{for all } i \in [N], \tag{31}$$
ensuring that each unit can fire at most once during an avalanche (see Lemma A.1). It also ensures the existence of
$$M := (\operatorname{diag}(U) - W)^{-1}. \tag{32}$$
The avalanche function $a$ is defined as
$$a(k, u) := \left(\left\{ j \in [N] \;\middle|\; F^{i-1}(u + e_k u_0)_j \ge U_j \right\}\right)_{i = 1, \ldots, \tau(u + u_0 e_k)} \in \mathcal{A}, \tag{33}$$
where $\mathcal{A}$ is the set of all avalanches (see Definition E.7) and $a = ()$ denotes the empty avalanche. The length of the sequence $a$ will be denoted by $D(a)$ and called the duration of the avalanche. For $j \le D(a)$ we write $U_j(a) := \bigcup_{i=1}^{j} a_i$ for the union of the first $j$ generations, and call $U(a) := U_{D(a)}(a)$ the assembly of the avalanche.

Lemma A.1. For every $u \in C$ and $k \in [N]$, the generations of $a = a(k, u)$ are pairwise disjoint, i.e., each unit fires at most once during an avalanche.

Proof. We will give a proof by contradiction. Let $u \in C$, $k \in [N]$ be arbitrary and set $a = a(k, u)$.
Let unit $j$ be part of generations $r$ and $s$, $j \in a_r$, $j \in a_s$ with $1 \le r < s \le D(a)$, such that the components of $a_1, \ldots, a_{s-1}$ are pairwise disjoint, i.e., no unit has fired twice in the generations up to $s-1$, and unit $j$ would fire a second time in generation $s$. It follows that
$$F^{s-1}(u + u_0 e_k)_j = \left( u + u_0 e_k - (\operatorname{diag}(U) - W) \sum_{l=1}^{s-1} A\!\left(F^{l-1}(u + u_0 e_k)\right) \right)_j \le u_0 + \sum_{l=1}^{N} w_{jl} < U_j,$$
which contradicts $A_j(F^{s-1}(u + u_0 e_k)) = 1$. It follows that the generations are pairwise disjoint and
$$U(a) = \bigcup_{i=1}^{D(a)} a_i.$$
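Lemma A.1 can be probed numerically with a direct implementation of the generation map $F$ from Eq. (29). This is a sketch with hypothetical parameter values; under constraint (31), every simulated avalanche should contain each unit at most once and terminate with a state in $C$:

```python
import numpy as np

def avalanche(u, k, W, U, u0):
    """Apply external input u0 to unit k, then iterate the generation
    map F(v) = v - (diag(U) - W) A(v) until the state is back in
    C = prod_i [0, U_i). Returns the final state and the generations."""
    v = u.astype(float).copy()
    v[k] += u0
    generations = []
    while True:
        A = (v >= U).astype(float)     # indicator of firing units
        if not A.any():
            return v, generations
        generations.append(list(np.flatnonzero(A)))
        v = v - (np.diag(U) - W) @ A
```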
Lemma A.1 allows us to write $T_k$ in a more compact form using $M^{-1} = \operatorname{diag}(U) - W$:
$$T_k(u) = u + u_0 e_k - M^{-1}\,\delta\!\left(U(a(k, u))\right) \qquad (u \in C,\; k \in [N]). \tag{36}$$
Thus, $\bar F$ projects from $\mathbb{R}^{[N]}_{\ge 0}$ back to $C$ by subtracting integer combinations of columns of $M^{-1}$. In the next lemma, we introduce some properties of $\bar F$:

Lemma A.2. For $u, v \in \mathbb{R}^{[N]}_{\ge 0}$ we have the following properties for $\bar F$:

(1) $\bar F(u) = u - M^{-1} n$ for $n \in \mathbb{N}_0^{[N]}$ if and only if $u - M^{-1} n \in C$ and $u - M^{-1} n' \notin C$ for all $n' \lneq n$.
(2) $\bar F(\bar F(u) + v) = \bar F(u + v)$.
(3) $\bar F\!\left(W \mathbf{1} + M^{-1} \mathbb{R}^{[N]}_{\ge 0}\right) = \bar F\!\left(W \mathbf{1} + M^{-1} [0, 1)^{[N]}\right)$, where $\mathbf{1} = \sum_{k=1}^{N} e_k$.
Proof. (1) We show the 'if' direction by contradiction: let $n' \lneq n$ be such that $\bar F(u) = u - M^{-1} n$ and $u - M^{-1} n' \in C$. After $\ell \le \tau(u)$ iterations we have $F^{\ell}(u) = u - M^{-1} n^{(\ell)}$ with $n^{(\ell)} := \sum_{k=0}^{\ell - 1} A\!\left(F^{k}(u)\right)$. Since $n' \lneq n$, there exist an iteration $t \in \mathbb{N}$ and an index $j \in [N]$ such that $n^{(t)} \le n'$ but $n^{(t+1)}_j > n'_j$. Since then $n'_j = n^{(t)}_j$ and $n^{(t)} \le n'$, we obtain
$$\left(u - M^{-1} n'\right)_j \ge \left(u - M^{-1} n^{(t)}\right)_j = F^{t}(u)_j \ge U_j,$$
which contradicts $u - M^{-1} n' \in C$.

To show the 'only if' direction in (1), let us assume that $x = u - M^{-1} n \in C$ and $u - M^{-1} n' \notin C$ for all $n' \lneq n$. Then the stopping condition for the fixed point iteration defining $\bar F(u)$ is fulfilled for the first time at $x$, thus $x = \bar F(u)$.
(2) Note that $u = \bar F(\bar F(v) + w) = v + w - M^{-1}(n_1 + n_2)$ for some $n_1, n_2 \in \mathbb{N}^{[N]}$, and $u + M^{-1} n \notin C$ for all $0 \ne n \le n_1 + n_2$. Thus with (1), we have $u = \bar F(v + w)$.
(3) We fix $z \in [0, 1)^{[N]}$. Then we deduce from (1) that $\bar F(W \mathbf{1} + M^{-1} z) = W \mathbf{1} + M^{-1}(z - n)$ for some $n$, and $W \mathbf{1} + M^{-1}(z - n + n') \notin C$ for all $0 \ne n' \le n$. Now, every $x \in \mathbb{R}^{[N]}_{\ge 0}$ can be decomposed into $x = \lfloor x \rfloor + z$ with $z = x - \lfloor x \rfloor \in [0, 1)^{[N]}$. Since $\left(W \mathbf{1} + M^{-1}\left(\mathbb{R}^{[N]}_{\ge 0} \setminus [0, 1)^{[N]}\right)\right) \cap C = \emptyset$, we find $\bar F(W \mathbf{1} + M^{-1} z) + M^{-1} n' \notin C$ for all $0 \ne n' \le n + \lfloor x \rfloor$, which implies $\bar F(W \mathbf{1} + M^{-1} z) = \bar F(W \mathbf{1} + M^{-1} x)$.
We will generalize Eq. (36) to multiple steps $s$ of applying $T$ in the following corollary. There we introduce the spike count vector $N_s(k, u)$, which collects how often each unit fired (i.e., participated in an avalanche) during the first $s$ iterations. This quantity will later be used to determine the equilibrium firing rates and spike count covariances of the model in dependence of the interaction matrix $W$.

Corollary A.3. For $k \in \Sigma_N$, $u \in C$ and $s \in \mathbb{N}_0$ let
$$N_s(k, u) := \sum_{t=0}^{s-1} \delta\!\left(U\!\left(a\!\left(T^{t}(k, u)\right)\right)\right), \tag{37}$$
where, by slight abuse of notation, $a(k, u) := a(k_1, u)$ for $k \in \Sigma_N$. Then
$$\pi_2 T^{s}(k, u) = u + u_0 \sum_{t=1}^{s} e_{k_t} - M^{-1} N_s(k, u) = \bar F\!\left(u + u_0 \sum_{t=1}^{s} e_{k_t}\right), \tag{38}$$
where $\pi_2$ denotes the projection onto the second component (here $C$).
Proof. The claim follows by induction. The base case $s = 1$ is provided by Eq. (36). Now suppose Eq. (38) holds for $s - 1$. Then we have
$$\begin{aligned}
\pi_2 T^{s}(k, u) &= \bar F\!\left(u + u_0 e_{k_s} + u_0 \sum_{t=1}^{s-1} e_{k_t} - M^{-1} N_{s-1}(k, u)\right) \\
&= u + u_0 \sum_{t=1}^{s} e_{k_t} - M^{-1}\left(N_{s-1}(k, u) + \delta\!\left(U\!\left(a\!\left(T^{s-1}(k, u)\right)\right)\right)\right) \\
&= u + u_0 \sum_{t=1}^{s} e_{k_t} - M^{-1} N_s(k, u) = \bar F\!\left(u + u_0 \sum_{t=1}^{s} e_{k_t}\right).
\end{aligned}$$
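Corollary A.3 states that driving the system input by input yields the same final state as applying the full avalanche map once to the accumulated input. This can be checked with a small simulation (a sketch; all parameter values are hypothetical choices satisfying constraint (31)):

```python
import numpy as np

def Fbar(v, W, U):
    """Full avalanche map: iterate F(v) = v - (diag(U) - W) A(v)
    until the state lies in C = prod_i [0, U_i)."""
    v = v.astype(float).copy()
    while True:
        A = (v >= U).astype(float)
        if not A.any():
            return v
        v = v - (np.diag(U) - W) @ A

rng = np.random.default_rng(3)
N, u0 = 4, 0.2
U = np.ones(N)
W = 0.1 * rng.random((N, N))          # u0 + row sums < 1: Eq. (31) holds
u = 0.9 * rng.random(N)
ks = rng.integers(N, size=10)

v = u.copy()
for k in ks:                          # step-by-step driving, Eq. (36)
    v[k] += u0
    v = Fbar(v, W, U)

batch = Fbar(u + u0 * np.bincount(ks, minlength=N), W, U)   # Eq. (38)
```

Both procedures subtract the same integer combination of columns of $M^{-1}$, so `v` and `batch` agree up to floating-point rounding.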
The following definitions will allow us to link properties of the avalanche dynamics to the coupling structure contained in the weight matrix, and aid us in assessing ergodicity of the system.

(1) We denote by $G(W)$ the directed graph on the vertex set $[N]$ which contains the edge $(j, i)$ if and only if $w_{ij} > 0$. This graph is naturally weighted through $W$ by assigning $(j, i) \mapsto w_{ij}$.
(2) Further, we define the set of all coupling matrices W with the same sparsity pattern as W to be
W(W ) := W = (w ij ) ∈ R [N ×N ] | W respects Eq. (5) and w ij = 0 ⇐⇒ w ij = 0 .(40)(diag(U ) −1 M −1 ) −1 = (1 − diag(U ) −1 W ) −1 = ∞ k=0 diag(U ) −k W k ,
implying that all entries of M are non-negative. Since the entry (W k ) i,j is the sum of products of edge weights along all paths of length k in G(W ) from unit j to unit i, we conclude that (M δ(L)) i > 0 if and only if there exists a path (which can also be the empty path) in G(W ) from some unit in L to the unit i.
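The path-sum interpretation of M can be made concrete with a small numerical check (a sketch; the 3-unit weight matrix below and the thresholds are illustrative, not taken from the text):

```python
import numpy as np

# Illustrative weights: w_ij = weight from unit j to unit i.
# Edges of G(W): 0 -> 1, 1 -> 0, 1 -> 2; nothing leaves unit 2.
W = np.array([[0.0, 0.1, 0.0],
              [0.2, 0.0, 0.0],
              [0.0, 0.3, 0.0]])
U = np.array([1.0, 1.0, 1.0])
D = np.diag(U)

# M = (diag(U) - W)^{-1}
M = np.linalg.inv(D - W)

# Partial sums of the Neumann series (sum_k (D^{-1} W)^k) D^{-1} approach M,
# since the row sums of W stay below the thresholds.
A = np.linalg.inv(D) @ W
S = sum(np.linalg.matrix_power(A, k) for k in range(60)) @ np.linalg.inv(D)
assert np.allclose(S, M, atol=1e-10)

# All entries are non-negative, and M_ij > 0 exactly when a path j -> i exists:
# no path leads from unit 2 to unit 0, while 0 -> 1 -> 2 gives M[2, 0] > 0.
assert (M >= -1e-12).all()
print(np.round(M, 4))
```

The zero pattern of M mirrors reachability in G(W): here column 2 only has its diagonal entry, because unit 2 has no outgoing edges.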
T : Σ_N × T^[N] → Σ_N × T^[N], T(k, z) := (σ(k), z + u₀ M e_{k₁}), (41)

with σ being the left-shift operator and k₁ designating the first entry of the external input sequence k.
Theorem B.2. We define the inhabited region D by
D := θ(T^[N]), where θ : z ↦ F(W1 + M^{-1}z), (42)
where we equip D with the quotient topology induced by the quotient map θ, in which way θ becomes a homeomorphism. For every k ∈ [N], the map T_k is a bijection from D to D and

T_k ∘ θ(z) = θ(z + u₀ M e_k). (43)

Proof. Since θ and z ↦ z + u₀ M e_k are bijections, we only have to verify Eq. (43), which follows with Lemma A.2 as the following calculation shows:

T_k ∘ θ(z) = F(u₀ e_k + F(W1 + M^{-1}z)) = F(W1 + M^{-1}(z + u₀ M e_k)) = θ(z + u₀ M e_k).
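The conjugacy of Theorem B.2 lends itself to a direct numerical check. The sketch below (U = 1; the weight matrix, input increment, and seed are illustrative) implements F as the usual threshold-reset iteration and verifies T_k ∘ θ(z) = θ(z + u₀ M e_k), with the torus coordinate reduced mod 1:

```python
import numpy as np

N = 3
W = np.array([[0.00, 0.10, 0.05],
              [0.20, 0.00, 0.10],
              [0.05, 0.15, 0.00]])   # row sums + u0 stay below the threshold 1
u0 = 0.42
M = np.linalg.inv(np.eye(N) - W)     # with U = 1 we have M^{-1} = 1 - W

def F(u):
    """Subtract M^{-1} per threshold crossing until u lands in C = [0,1)^N."""
    u = u.copy()
    while (u >= 1.0).any():
        fired = (u >= 1.0).astype(float)
        u = u - fired + W @ fired    # u - (diag(U) - W) n for the crossing units
    return u

def theta(z):
    return F(W @ np.ones(N) + (np.eye(N) - W) @ z)

rng = np.random.default_rng(1)
z = rng.random(N)
for k in range(N):
    lhs = F(theta(z) + u0 * np.eye(N)[k])     # second component of T_k(theta(z))
    rhs = theta((z + u0 * M[:, k]) % 1.0)     # theta of the torus translation
    assert np.allclose(lhs, rhs, atol=1e-9)
print("conjugacy verified on a random sample")
```

The mod-1 reduction on the right hand side uses Lemma A.2, part (3), which makes θ well defined on the torus.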
In the following, we will thus restrict T to Σ N × D. The name inhabited region for D stems from the fact that for all starting points u ∈ C and almost all input sequences k ∈ Σ N , the iterated dynamics will eventually map to D, i. e. T n (k, u) ∈ D, for all n ∈ N large enough. We show this in the following proposition:
Proposition B.3. Let the graph G(W ) be p-reachable. Then for B p -almost all input sequences k ∈ Σ N there exists an iteration number n 0 ∈ N such that π 2 T n (k, C) = D for all n ≥ n 0 .
Proof. From Corollary A.3 we have with c(u) := M (u − W 1)
π₂ T^n(k, u) = F(u + u₀ Σ_{t=1}^{n} e_{k_t}) = F(W1 + M^{-1}(c(u) + u₀ M Σ_{t=1}^{n} e_{k_t})).

Let c_min = inf_{u∈C} c(u) ∈ R^[N]. Since lim_{n→∞} (Σ_{t=1}^{n} e_{k_t})_i = ∞ for all i with p_i > 0 for almost all k ∈ Σ_N, we have lim_{n→∞} (M Σ_{t=1}^{n} e_{k_t})_i = ∞. Thus there exists an n₀ ∈ N such that (c_min + u₀ M Σ_{t=1}^{n₀} e_{k_t})_k ≥ 0 for all k ∈ [N], which implies π₂ T^{n₀}(k, C) ⊆ D with Lemma A.2, part (3). Since each T_k is a bijection on D ⊆ C we also have D ⊆ π₂ T^{n₀}(k, C). Now we show that on this inhabited region, the system T is topologically conjugated to the system T which has the whole N-torus as its phase space.
Theorem B.4. The dynamical systems T on Σ N ×D and T on Σ N ×T [N ] are topologically conjugated via the homeomorphism φ := Id ×θ, i.e. T • φ = φ • T .
Proof. Using Theorem B.2, θ is a homeomorphism from T [N ] to D thus φ is a homeomorphism from
Σ_N × T^[N] to Σ_N × D. Conjugacy follows from

T ∘ φ(k, z) = (σ(k), T_{k₁}(θ(z))) = (σ(k), θ(z + u₀ M e_{k₁})) = φ ∘ T(k, z).
C. Relative unique ergodicity for skew product dynamical systems
General case
Ergodicity is useful for directly relating volumes in phase space to probabilities for particular avalanches. In this section we establish unique ergodicity relative to a given shift-invariant probability measure on T [N ] for a general class of translation dynamics which includes T . Since we have shown in the previous section that the simple translation dynamics T on Σ N × T [N ] is topologically conjugated to the system T , ergodicity of T implies ergodicity of T . The unique relative ergodic measure will turn out to be a product measure given by the shift-invariant probability measure times the normalised Lebesgue measure on the N -torus, which is transported to the normalised Lebesgue measure with support on D for the system T .
Specifically, for a continuous function g : Σ_N → R^[N] we consider the skew product dynamical system
T g : Σ N × T [N ] −→ Σ N × T [N ] , (k, z) → (σ(k), g(k) + z) .
Note that the function g in T_g describes a more general dynamics than in the EHE system T and can depend on more than just the first component of the input sequence k. Furthermore, note that every translation on the N-torus defines a bijection which leaves the normalised Lebesgue measure λ invariant. For a σ-invariant probability measure ν on Σ_N, let M_{T_g}(ν) denote the set of T_g-invariant probability measures µ with marginal µ ∘ π₁^{-1} = ν. We always have ν ⊗ λ ∈ M_{T_g}(ν), and we find that ν ⊗ λ is ergodic for T_g if and only if ν is ergodic for σ and M_{T_g}(ν) = {ν ⊗ λ}. The ergodicity of ν follows from the fact that σ with the invariant measure ν is a measure theoretical factor of T_g with respect to the invariant measure ν ⊗ λ. To infer that M_{T_g}(ν) is a singleton, fix some µ ∈ M_{T_g}(ν). For every
t ∈ T^[N] the translation τ_t : Σ_N × T^[N] → Σ_N × T^[N], (k, x) ↦ (k, x + t) commutes with T_g, and hence µ_t := µ ∘ τ_t^{-1}, as well as its averaged version µ̄ := ∫_{T^[N]} µ_t dλ(t), define again elements of M_{T_g}(ν). For every integrable function f we have

∫ f dµ̄ = ∫_{T^[N]} ∫ f dµ_t dλ(t) = ∫_{T^[N]} ∫ f(k, x + t) dµ dλ(t) = ∫ ∫_{T^[N]} f(k, x + t) dλ(t) dµ(k, x) = ∫ ∫_{T^[N]} f(k, t) dλ(t) dµ(k, x) = ∫ f d(µ ∘ π₁^{-1} ⊗ λ) = ∫ f d(ν ⊗ λ).

Consequently, µ̄ = ν ⊗ λ. Since ν ⊗ λ is assumed to be ergodic and therefore extremal in M_{T_g}(ν), we have that µ_t = ν ⊗ λ for λ-almost all t ∈ T^[N]. That is, for almost all t ∈ T^[N], µ = µ_t ∘ (τ_{−t})^{-1} = ν ⊗ λ, and uniqueness follows.
For the converse implication, suppose that ν ⊗ λ is not ergodic. Then we find two distinct measures µ 1 and µ 2 in M Tg and c ∈ (0, 1) such that ν ⊗ λ = cµ 1 + (1 − c)µ 2 . Then for the first marginal we
have

ν = (cµ₁ + (1 − c)µ₂) ∘ π₁^{-1} = c µ₁ ∘ π₁^{-1} + (1 − c) µ₂ ∘ π₁^{-1}.
Then either ν is not ergodic, or if ν is ergodic, we conclude that ν = µ 1 • π −1 1 = µ 2 • π −1 1 and thus µ 1 and µ 2 actually belong to M Tg (ν). In this case, M Tg (ν) is not a singleton.
This observation gives rise to the following definition: If ν ⊗ λ is ergodic for T g , or equivalently M Tg (ν) = {ν ⊗ λ} and ν is ergodic, we call the random dynamical system uniquely ergodic relative to ν.
To find necessary and sufficient conditions for unique ergodicity relative to an ergodic measure ν, we write functions on T [N ] as Fourier decompositions. This allows us to express shifts induced by external input as simple multiplications (i. e. phase shifts) in Fourier space. In the following all equations involving measurable functions are meant to hold almost everywhere with respect to the relevant measure.
Theorem C.1. With the above notation and such that ν is ergodic for the base transformation σ, we have that T_g is uniquely ergodic relative to ν if and only if, for all k ∈ Z^[N] \ {0}, there is no complex-valued measurable function R ≠ 0 on (Σ_N, B) such that
R = exp 2πik T g · R • σ.(44)
We note that the ergodicity of ν with respect to σ implies that any solution R ≠ 0 of Eq. (44) has constant modulus, and we may therefore assume without loss of generality that |R| = 1.
Proof. For the proof we adopt ideas of Furstenberg from [77], where, unlike here, the base transformation is assumed to be uniquely ergodic and the fibers are given by the circle: To prove that our assumption in Eq. (44) implies that ν ⊗ λ is ergodic with respect to T g , fix a square-integrable
function f ∈ L 2 ν⊗λ with f • T g = f . Setting ζ k (x) := exp(2πik T x)
, we can write f as a Fourier series via f (k, x) = k∈Z [N ] c k (k)ζ k (x) for appropriate square-summable coefficients (c k (k)) k∈Z [N ] . Since ν ⊗ λ is a product measure we have c k ∈ L 2 ν for each k ∈ Z [N ] . The invariance of f gives
f (k, x) = k∈Z [N ] c k (k)ζ k (x) = k∈Z [N ] c k (σ(k)) exp 2πik T g(k) ζ k (x) = f • T g (k, x)
and we deduce c_k(k) = c_k(σ(k)) exp(2πik^T g(k)) for all k ∈ Z^[N]. If c_k = 0 for all k ≠ 0, then f(k, z) = c₀(k) and by the ergodicity of (Σ_N, B, σ, ν) we have that c₀ is a constant function and so is f. If f is not constant, then c_k does not vanish for at least one k ∈ Z^[N] \ {0}; by ergodicity of σ and since |c_k| = |c_k| ∘ σ we have that |c_k| equals a positive constant function. Consequently, our assumption is violated for R := c_k. Our condition therefore implies that f is constant and hence T_g is ergodic with respect to ν ⊗ λ.
Conversely, we assume that our condition is not fulfilled. Then for a solution R ≠ 0 of Eq. (44) for some k ∈ Z^[N] \ {0}, we have that Rζ_k is non-constant and T_g-invariant. Hence, ν ⊗ λ is not ergodic with respect to T_g.
Ergodicity in the EHE model
If the random dynamical system is given by the EHE model (i.e. T_g = T) and the underlying measure in the base is Bernoulli, then our condition in Eq. (44) simplifies as follows. Writing f(ℓ) := exp(2πiu₀ k^T M e_ℓ) and integrating Eq. (44) with respect to ν = B_p yields

∫ R dν = Σ_{ℓ∈[N]} f(ℓ) p_ℓ ∫ R dν.

Since ∫ R dν ≠ 0 we get Σ_{ℓ∈[N]} f(ℓ) p_ℓ = 1. By convexity of the unit circle and 1 being an extremal point, this is only possible if f(ℓ) = 1 for all ℓ ∈ L. This shows that Eq. (45) is fulfilled for k.
The following Corollary states a sufficient condition for unique relative ergodicity of the system T : However, this contradicts the assumption.
Theorem C.4. Assume that u 0 is irrational. The system T is uniquely ergodic relative to B p for almost all W ∈ W(W ) if and only if W is p-reachable.
Proof. If W is not p-reachable, then (M δ(L))_j = 0 for some unit j ∈ [N], which implies that exp(2πiu₀ k^T M e_ℓ) = 1 for the mode k = e_j and all ℓ ∈ L. Thus, T is not uniquely ergodic relative to B_p by Proposition C.2 for all W ∈ W(W). Now assume that W is p-reachable and let E := E(W) denote its edge set. First, let P = {ℓ ∈ L | (M δ(L))_ℓ − M_ℓℓ = 0} be the set of units which directly receive external input, but have no incoming paths starting from units receiving external input.
For each unit m ∈ P and ℓ ∈ L we have e_m^T M e_ℓ = e_m^T e_ℓ, so that the P-coordinates of every k ∈ Z^[N] not fulfilling the condition in Proposition C.2 have to be zero, since u₀ ∉ Q.
Unique relative ergodicity of T with respect to B_p and W ∈ W(W) would follow from Corollary C. if, for all k ∈ Z^[N] \ {0} with k|_P = 0, we have

k^T c(W) ∉ Z,

where c(W) := u₀ M δ(L).
We want to show that this property holds almost everywhere with respect to the |E|-dimensional Lebesgue measure λ on W(W) considered as a subset of R^E. For this we cover the complement with respect to this property by the sets

A_{k,z} := {W ∈ W(W) | k^T c(W) = z}, k ∈ Z^[N] \ {0}, k|_P = 0, z ∈ Z,

and show that each A_{k,z} is an (|E| − 1)-dimensional submanifold of W(W) and thus a null set with respect to λ. Indeed, let

F_k : R^E ⊇ W(W) → R, (w_1, …, w_{|E|}) ↦ k^T c(W),

where W ∈ W(W) is uniquely determined by its non-zero entries (w_1, …, w_{|E|}). For each k ∈ Z^[N] \ {0}, k|_P = 0, the function F_k is continuously differentiable and (by the implicit function theorem) for each z ∈ Z the set A_{k,z} defines an (|E| − 1)-dimensional submanifold if ∇F_k(W) ≠ 0 for every W ∈ W(W) with F_k(W) = z.
In order to show ∇F_k(W) ≠ 0, consider the directional derivatives

∂_W̃ c(W) = (d/ds)|_{s=0} c(W + sW̃) = u₀ M W̃ M δ(L)

in direction of a matrix W̃ with sparsity pattern dominated by W, i.e. W̃_{i,j} = 0 if W_{i,j} = 0 for all i, j ∈ [N]. For i ∈ P we set W̃_{i,j} = (1 − W)_{i,j} v_j/(u₀ M δ(L))_j for all j ∈ [N]. Since v_j = 0 for j ∈ P we have W̃_{i,j} = 0 if W_{i,j} = 0 for all j ∈ [N]. This choice guarantees that W̃ has a sparsity pattern dominated by W and establishes the desired equality, since for i ∈ [N] \ P we have

(W̃ M δ(L))_i = Σ_{j∈[N]} W̃_{i,j} (M δ(L))_j = y_i

and for i ∈ P

(W̃ M δ(L))_i = Σ_{j∈[N]} (1 − W)_{i,j} v_j/(u₀ (M δ(L))_j) · (M δ(L))_j = Σ_{j∈[N]} u₀^{-1} (1 − W)_{i,j} v_j = y_i.
To conclude, almost sure ergodicity follows by considering a countable cover
λ({W ∈ W(W) | ∃k ∈ Z^[N] \ {0}, ∃z ∈ Z : k^T c(W) = z}) ≤ Σ_{k∈Z^[N]\{0}} Σ_{z∈Z} λ(A_{k,z}) = 0.
Special case: the homogeneous EHE model
In the following corollaries, we establish two simple conditions to check unique ergodicity for the homogeneous EHE model with constant coupling matrix.
Corollary C.5. The homogeneous EHE-Model (i.e. w_ij = w ∈ R_{≥0}, U_i = 1 for all i, j = 1, …, N) is uniquely ergodic relative to the Bernoulli measure B(p), if p is a strictly positive probability vector and both u₀, u₀/(1 − Nw) ∉ Q.
Proof. We will use the characterization given in Eq. (45). Suppose that for k ∈ Z [N ] we have that
u 0 k T M e j ∈ Z for all j ∈ [N ]
. In particular, we then have u 0 k T M (e j − e i ) ∈ Z. For the homogeneous EHE-Model, the entries of M are given by
M = (1 − W)^{-1} = 1 + (1/(1 − Nw)) W.
For all i, j ∈ [N ] we have
u 0 k T M (e j − e i ) = u 0 (k j − k i ) ∈ Z.
Since u₀ ∉ Q, it follows that k_j − k_i = 0 for all i, j ∈ [N] and hence k = (ℓ, …, ℓ)^T for some ℓ ∈ Z.
Consequently,

u₀ k^T M e₁ = ℓ u₀/(1 − Nw) ∈ Z,
implying ℓ = 0. This shows that condition (45) can only hold for k = 0, hence T is uniquely ergodic relative to B(p).

D. Firing rates and spike count covariances

In this section we show that the equilibrium firing rates and spike count covariances are linear transformations of the rates and covariances of the external input process. Since the firing rate of unit i is inversely proportional to U_i, we suppress this dependency by w.l.o.g. analyzing the special case U = 1.
In order to clearly indicate its dependency on the weight matrix, we denote the spike count vector after s iterations of the EHE dynamics from Eq. (37) as N^s_W. The results in this section hold for all W and p for which the system T is ergodic, and probabilities are evaluated with respect to the uniquely relative ergodic measure P = B_p × λ_D, with λ_D implicitly depending on W through the structure of the inhabited region D. The same applies to the matrix M = (1 − W)^{-1}.
We denote the asymptotic mean firing rates and the N × N spike count covariance matrix of the ergodic system with coupling matrix W by Y_W := lim_{s→∞} E(N^s_W)/s and X_W := lim_{s→∞} cov(N^s_W)/s. With the choice W = 0, T is equivalent to the random walk induced by the external input. In the next two Theorems we calculate Y_0, X_0 and show that Y_W, X_W for a general W ≠ 0 are given by linear transformations of Y_0, X_0.
Theorem D.1. The equilibrium firing rate is a linear transformation of the firing rate of the uncoupled (W = 0) system.
Y_0 = lim_{s→∞} E(N^s_0)/s = u₀ p, (46)

Y_W = lim_{s→∞} E(N^s_W)/s = M Y_0. (47)
The probability that unit i fires given that an avalanche is started by unit k is given by
P k (i ∈ U (a)) = M ik M kk .(48)
Proof. From Corollary A.3 we get

N^s_W(k, u) = M(u + u₀ Σ_{t=1}^{s} e_{k_t} − π₂ T^s(k, u)).

Since i ∈ U(a(k, u)) ⟺ δ(U(a(k, u)))_i = 1 and a(k, u)₁ = {k} ⟺ (k₁ = k ∧ δ(U(a(k, u)))_k = 1), we have

E(1_{i∈U(a)} | a₁ = {k}) = M_{ik}/M_{kk} = P_k(i ∈ U(a)).
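Theorem D.1 can be probed by simulating the EHE dynamics directly. The sketch below uses U = 1; the 3-unit network, input probabilities, and parameter values are illustrative and chosen so that condition (5) holds:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3
# w_ij = weight from unit j to unit i (illustrative values).
W = np.array([[0.00, 0.10, 0.05],
              [0.20, 0.00, 0.10],
              [0.05, 0.15, 0.00]])
u0 = 0.2 * np.sqrt(2)            # irrational-like external input increment
p = np.array([0.5, 0.3, 0.2])    # external input probabilities
U = np.ones(N)                   # thresholds, U = 1 as in this section

def avalanche(u, k):
    """Deliver u0 to unit k and propagate; mutates u, returns who fired."""
    u[k] += u0
    fired = np.zeros(N, dtype=bool)
    active = np.flatnonzero((u >= U) & ~fired)
    while active.size:
        fired[active] = True
        u[active] -= U[active]
        u += W[:, active].sum(axis=1)   # recurrent input from the firing units
        active = np.flatnonzero((u >= U) & ~fired)
    return fired

steps = 200_000
u = rng.random(N)                # arbitrary initial state in C
counts = np.zeros(N)
for k in rng.choice(N, size=steps, p=p):
    counts += avalanche(u, k)

M = np.linalg.inv(np.eye(N) - W)
predicted = M @ (u0 * p)         # Y_W = M Y_0 with Y_0 = u0 p, Eqs. (46)-(47)
empirical = counts / steps
print(np.round(empirical, 4), np.round(predicted, 4))
```

The empirical per-step firing rates should agree with M u₀ p up to Monte Carlo error.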
Theorem D.2. The asymptotic N × N spike count covariance matrices are given by
X_0 = lim_{s→∞} cov(N^s_0)/s = u₀² (diag(p) − p p^T), (49)

X_W = lim_{s→∞} cov(N^s_W)/s = M X_0 M^T. (50)
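The dominant term behind this covariance relation can be illustrated by sampling: the input counts over s steps are multinomial, and for a linear map, cov(Mv) = M cov(v) M^T (a sketch with illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(7)
N, s, trials = 3, 50, 20_000
W = np.array([[0.00, 0.10, 0.05],
              [0.20, 0.00, 0.10],
              [0.05, 0.15, 0.00]])   # illustrative weights
u0 = 0.3
p = np.array([0.5, 0.3, 0.2])
M = np.linalg.inv(np.eye(N) - W)

# Counts of an i.i.d. categorical input sequence over s steps are multinomial.
counts = rng.multinomial(s, p, size=trials)      # shape: trials x N
X = (u0 * counts) @ M.T                          # rows are u0 * M @ counts
emp = np.cov(X, rowvar=False) / s

X0 = u0**2 * (np.diag(p) - np.outer(p, p))       # per-step input covariance
pred = M @ X0 @ M.T                              # cov(Mv) = M cov(v) M^T
print(np.round(emp, 5))
print(np.round(pred, 5))
```

The empirical covariance of the transformed counts matches M X₀ M^T up to sampling error; the bounded remainder z_s in the proof does not affect the s → ∞ limit.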
Proof. Rearranging Eq. (38) for N^s_W, we have

Var(N^s_W) = Var(u₀ M Σ_{t=1}^{s} e_{k_t} + z_s), where z_s(k, u) := M(u − π₂ T^s(k, u)).

For all k ∈ Σ_N, u ∈ D we have ‖z_s(k, u)‖ < c ∈ R with c independent of s, since π₂ T^s(k, u) ∈ D ⊆ [0, U_i)_{i∈[N]}.

E. Self-similar structure of the non-inhabited region

In this section we show that the invariant space D (or more directly its complement) has a self-similar structure which will be used to simplify expressions for avalanche distributions considerably. We first introduce regions Λ_I and show that they have self-similar properties. We will call these regions 'non-inhabited' and justify this term by showing that Λ_[N] = C \ D.
Γ_I := [0, Σ_{j∈I} w_ij)_{i∈I}, (53)

Λ_H := ∪_{∅≠I⊆H} Γ_I. (54)
We proceed to show self-similar properties of Λ and relate it to the inhabited region D according to the following steps: 3. Use this result to show that Λ is the complement of the inhabited region D in Theorem E.5. Figure 3 illustrates the geometrical structure of the noninhabited region and its self-similarity for dimensions 1 up to 3.
The following Lemma will be used throughout this section and states that the noninhabited region along dimensions H 1 is equal to the lower-dimensional Λ H 1 \H 2 when intersected with a hyperrectangle which has lower boundaries along dimensions H 2 which lie above the corresponding row sums in W .
Lemma E.2. For H₁, H₂ ⊆ [N] and a_i, b_i ≥ Σ_{j∈H₁} w_ij for all i ∈ H₁ ∩ H₂ we have

Λ_{H₁} ∩ [a_i, b_i)_{H₂} = Λ_{H₁\H₂} ∩ [a_i, b_i)_{H₂}.

Proof. Note that by definition Λ_{H₁} = ∪_{∅≠I⊆H₁} Γ_I. The result follows if [a_i, b_i)_{H₂} ∩ Γ_J = ∅ for all ∅ ≠ J ⊆ H₁ such that J ∩ H₂ ≠ ∅. We have

[a_i, b_i)_{H₂} ∩ Γ_J = [a_i, b_i)_{H₂} ∩ [0, Σ_{j∈J} w_ij)_{i∈J} = ∅,

since the intersection along the dimensions J ∩ H₂ is empty due to Σ_{j∈J} w_ij ≤ Σ_{j∈H₁} w_ij < a_i for i ∈ J ∩ H₂.
We introduce the following decomposition of the phase space along dimensions H ⊆ [N] into disjoint hyperrectangles:

[0, Ū_i)_{i∈H} = ∪_{I⊆H} ([0, Σ_{j∈H} w_ij)_{i∈I} ∩ [Σ_{l∈H} w_kl, Ū_k)_{k∈H\I}). (55)

Lemma E.3. For ∅ ≠ H ⊆ [N] and Ū_H = (Ū_i)_{i∈H} with Σ_{j∈H} w_ij < Ū_i ≤ U_i for all i ∈ H, we have

Λ_H ∩ [0, Ū_i)_{i∈H} = ∪_{∅≠I⊆H} (Λ_I ∩ [0, Σ_{j∈H} w_ij)_{i∈I} ∩ [Σ_{l∈H} w_kl, Ū_k)_{k∈H\I}). (56)
Proof. With (W δ(H))_H < Ū_H ≤ U_H and the decomposition from Eq. (55) we have

Λ_H ∩ [0, Ū_i)_{i∈H} = ∪_{∅≠I⊆H} ([0, Σ_{j∈H} w_ij)_{i∈I} ∩ [Σ_{l∈H} w_kl, Ū_k)_{k∈H\I} ∩ Λ_H).

Using Lemma E.2 we have [Σ_{l∈H} w_kl, Ū_k)_{k∈H\I} ∩ Λ_H = [Σ_{l∈H} w_kl, Ū_k)_{k∈H\I} ∩ Λ_I and arrive at

Λ_H ∩ [0, Ū_i)_{i∈H} = ∪_{∅≠I⊆H} (Λ_I ∩ [0, Σ_{j∈H} w_ij)_{i∈I} ∩ [Σ_{l∈H} w_kl, Ū_k)_{k∈H\I}).
Lemma E.3 provides the direct generalization of the corresponding result for homogeneous systems [28,Equation B5] to non-negative weight matrices. This self-similar property of Λ will be used to show that it is the complement of the inhabited region D. In the following Lemma, we give an alternative characterization of D, which is more convenient to establish the relation to Λ. The intuition behind this characterization is illustrated in Fig. 3.
Lemma E.4. D = {u ∈ C | u + M^{-1}n ∉ C for all n ∈ {0,1}^[N] \ {0}}.
Proof. Denote the set on the right hand side by A. Every u ∈ C can be written uniquely as u = W1 + M^{-1}c, since M^{-1} = diag(U) − W is bijective. Note that U = W1 + M^{-1}1, and if c_i ≥ 1 for some i ∈ [N] it follows that W1 + M^{-1}c ∉ C.

Theorem E.5.

D = C \ Λ_[N]. (57)
Proof. Using Lemma E.4, the complement of D in C is given by
B := C \ D = u ∈ C | u + M −1 n ∈ C for some 0 = n ∈ {0, 1} N .
We proceed to show that Λ [N ] = B.
We start by showing B ⊆ Λ_[N]. Let x ∈ B and n ∈ {0,1}^N such that x′ = x + M^{-1}n ∈ C. Considering the coordinates i ∈ I = {i ∈ [N] | n_i = 1}, we find x_i < U_i − (M^{-1}n)_i ≤ Σ_{k∈I} w_ik, thus x ∈ Γ_I ⊆ Λ_[N].

For the converse inclusion, let x ∈ Λ_[N] and choose I such that x ∈ Γ_I. Set n_I = δ(I) and consider x′ = x + M^{-1}n_I = x + diag(U)n_I − W n_I. From the choice of I we have x′_k = x_k − Σ_{i∈I} w_ki ∈ [0, U_k) for all k ∈ [N] \ I. For the coordinates i ∈ I, the choice of I assures that x′_i = x_i + U_i − Σ_{j∈I} w_ij ∈ [0, U_i). Taken together, we have x′ ∈ C, thus x ∈ B, which completes the proof.
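Theorem E.5 can be checked by Monte Carlo: sampling uniformly from the hyperrectangle and classifying points by membership in Λ_[N] recovers the determinant volume of Corollary E.6 (a sketch; the 3-unit weights and thresholds are illustrative):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
N = 3
# Illustrative weights w_ij (from unit j to unit i) and thresholds U_i.
W = np.array([[0.00, 0.10, 0.05],
              [0.20, 0.00, 0.10],
              [0.05, 0.15, 0.00]])
U = np.array([1.0, 0.9, 1.1])      # row sums of W stay below each U_i

# u lies in Lambda_[N] iff, for some nonempty I, u_i < sum_{j in I} w_ij for all i in I.
subsets = [list(c) for r in range(1, N + 1) for c in combinations(range(N), r)]

samples = 200_000
pts = rng.random((samples, N)) * U           # uniform samples in the box [0, U)
in_lambda = np.zeros(samples, dtype=bool)
for I in subsets:
    bound = W[np.ix_(I, I)].sum(axis=1)      # (W delta(I))_i for i in I
    in_lambda |= (pts[:, I] < bound).all(axis=1)

mc_volume = (~in_lambda).mean() * U.prod()   # Monte Carlo volume of the complement of Lambda_[N]
det_volume = np.linalg.det(np.diag(U) - W)   # determinant formula
print(mc_volume, det_volume)
```

The two numbers agree up to Monte Carlo error, illustrating that the inhabited region is exactly the complement of the union of the Γ_I cylinders.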
Corollary E.6 (Volume of inhabited region with upper boundaries Ū). Let 0 < Ū_i ≤ U_i such that Σ_{j∈I} w_ij < Ū_i for i = 1, …, N, let π_I be the projection from R^[N] to R^I and λ_I the Lebesgue measure on R^I. Then

V_I(Ū) := λ_I(π_I([0, Ū_i)_{i∈I} \ Λ_I)) = |diag(Ū_I) − W_I|. (58)

Proof. Consider the lower-dimensional subsystem of T defined on the units in I, with coupling matrix W_I given by rows and columns in I from W, and with firing thresholds given by Ū_I. From Σ_{j∈I} w_ij < Ū_i, this system fulfills condition (5).

F. Avalanche distributions

Since the first generation of an avalanche always contains exactly one element, we use for a particular avalanche a ∈ A_k the notation a₁ both to denote the singleton set and its only member.
We define the phase space region leading to the avalanche a ∈ A by

R(a) := {u ∈ D | a(u, a₁) = a}. (59)

We introduce the shorthand

r^I := W δ(I) (60)
for the total recurrent activation distributed in an avalanche with assembly I.
Proposition E.8. For a ∈ A_k, let I_j = U_j(a) for j = 1, …, D(a) and I = U(a). We have

R(a) = [U_{a₁} − u₀, U_{a₁})_{a₁} ∩ R_U(a) ∩ R_{U^c}(I) (61)

with

R_U(a) := ∩_{j=2}^{D(a)} [U_i − r^{I_j}_i, U_i − r^{I_{j−1}}_i)_{i∈a_j},

R_{U^c}(I) := [0, U_i − r^I_i)_{i∈[N]\I} \ Λ_{[N]\I}.
Proof. Since a 1 = {k}, it follows that
A(u + u 0 e k ) k = 1 ⇐⇒ u k + u 0 ≥ U k ⇐⇒ u k ∈ [U k − u 0 , U k ) .
Using Lemma A.1, the condition that unit i spikes exactly in step j of the avalanche, i ∈ a_j, reduces to A(F^{j−1}(u + u₀e_k))_i = 1, and we have

(F^{j−1}(u + u₀e_k))_i ≥ U_i > (F^{j−2}(u + u₀e_k))_i.

This is fulfilled if and only if

U_i − r^{I_j}_i ≤ u_i < U_i − r^{I_{j−1}}_i.
Similarly, unit l does not fire in the avalanche if and only if A(F^{i−1}(u + u₀e_k))_l = 0 for all i = 1, …, D(a), so u_l ∈ [0, U_l − r^I_l). Using Lemma E.2, we have

R(a) = [U_k − u₀, U_k)_{k} ∩ R_U(a) ∩ ([0, U_i − r^I_i)_{i∈[N]\I} \ Λ_[N]) = [U_k − u₀, U_k)_{k} ∩ R_U(a) ∩ ([0, U_i − r^I_i)_{i∈[N]\I} \ Λ_{[N]\I}),

since U_i − r^I_i ≥ Σ_{l∈[N]\I} w_il for i ∈ [N] \ I.

Lemma F.1 (Empty avalanches). The phase space region on which external input to unit k leads to the empty avalanche is given by

{u ∈ D | a(u, k) = ()} = [0, U_j − u₀δ_{k,j})_{j∈[N]} \ Λ_{[N]\{k}}, (62)
and for the probability of the empty avalanche given external input to unit k we have
P(a(u, k₁) = () | k₁ = k) = V_[N](U − u₀e_k)/V_[N](U) = 1 − u₀ V_{[N]\{k}}(U)/V_[N](U). (63)
Proof. The empty avalanche results upon external input to unit k if and only if u_k < U_k − u₀. Since U_k − u₀ > Σ_{ℓ∈[N]} w_kℓ, the volume of this region is given by Corollary E.6, which yields Eq. (63).

Proposition F.2 (Relation between P(a) and P_k(a)). For a ∈ A_k, k ∈ L we have

P_k(a = a) = P(a = a | k₁ = k) V_[N](U)/(u₀ V_{[N]\{k}}(U)) = P(a = a) V_[N](U)/(p_k u₀ V_{[N]\{k}}(U)).
Proof. The condition that a(u, k 1 ) = a is equivalent to u ∈ R(a) and k 1 = k. By definition P k (a = a) = P(a = a | a 1 = {k}) and the condition a 1 = {k} is equivalent to a = (), k 1 = k.
Thus, we have P_k(a = a) = P(a = a | k₁ = k)/P(a ≠ () | k₁ = k) = P(a = a | k₁ = k)/(1 − P(a = () | k₁ = k)), and using Lemma F.1 this results in the first equality

P_k(a = a) = P(a = a | k₁ = k) V_[N](U)/(u₀ V_{[N]\{k}}(U)).
The second equality follows from P(k 1 = k) = p k .
Proposition F.2 allows to transform between P(a = a) and P k (a = a). In the following we will calculate the latter probabilities. Note that they depend neither on u 0 nor p k but only on the coupling matrix W .
Theorem F.3 (Avalanche distributions). The probability distribution for a nonempty avalanche a ∈ A_k, k ∈ L, U(a) = I is given by

P_k(a = a) = (V_{[N]\U(a)}(U − r^I)/V_{[N]\{k}}(U)) ∏_{j=2}^{D(a)} ∏_{i∈a_j} Σ_{l∈a_{j−1}} w_il. (64)
Proof. Using Proposition E.8, the N-dimensional Lebesgue volume of R(a) is given by

λ(R(a)) = λ([U_{a₁} − u₀, U_{a₁})_{a₁}) λ(R_U(a)) λ(R_{U^c}(I)) = u₀ V_{[N]\U(a)}(U − r^I) ∏_{j=2}^{D(a)} ∏_{i∈a_j} Σ_{l∈a_{j−1}} w_il,
where Corollary E.6 was used to compute the volume of R U c (I).
This leads to P(a = a | k₁ = k) = λ(R(a))/V_[N](U) and with Proposition F.2 to

P_k(a = a) = (V_{[N]\U(a)}(U − r^I)/V_{[N]\{k}}(U)) ∏_{j=2}^{D(a)} ∏_{i∈a_j} Σ_{l∈a_{j−1}} w_il.

Furthermore, we have ∪_{a∈A_k, U(a)=I} T_k(R(a)) = A with

A := [r^I_k, r^I_k + u₀)_{k} ∩ ([0, r^I_i)_{i∈I\{k}} \ Λ_{I\{k}}) ∩ {u | (u − r^I)_{[N]\I} ∈ π_{[N]\I}(R_{U^c}(I))},

and the distribution of avalanche assemblies started by unit k is given by

P_k(U(a) = I) = V_{I\{k}}(r^I) V_{[N]\I}(U − r^I)/V_{[N]\{k}}(U). (65)
Proof. By injectivity of T_k, the sets T_k(R(a)) are disjoint for all a ∈ A_k. First, we show that ∪_{a∈A_k, U(a)=I} T_k(R(a)) ⊆ A. Finally, the states of all i ∈ I \ {k} must be sufficiently close to the threshold such that the recurrent input makes the units fire, U_i − r^I_i ≤ u_i < U_i, and thus

π_{I\{k}}(T_k(R(a))) ⊆ π_{I\{k}}([0, r^I_i)_{i∈I\{k}} \ Λ_{I\{k}}),

which completes the proof of ∪_{a∈A_k, U(a)=I} T_k(R(a)) ⊆ A.
We continue to show that

A ⊆ ∪_{a∈A_k, U(a)=I} T_k(R(a)).
Since A ⊂ D but u − u₀e_k ∈ Γ_I ⊆ Λ_I for u ∈ A, we have k ∈ Ĩ := U(a(ũ, k)), where ũ = T_k^{-1}(u). We will show Ĩ = I by contradiction.

Suppose that Ĩ ≠ I; then either I \ Ĩ or Ĩ \ I has to be nonempty. We proceed by a case distinction:

Let j ∈ I \ Ĩ ≠ ∅ be arbitrary. We have u_j = ũ_j + r^Ĩ_j < r^I_j, hence ũ_j < r^{I\Ĩ}_j, thus ũ ∈ Γ_{I\Ĩ} and hence ũ ∉ D. This is a contradiction. Now let m ∈ Ĩ \ I ≠ ∅ be arbitrary. We have u_m = ũ_m − U_m + r^Ĩ_m < r^Ĩ_m and thus

u_m − r^I_m < r^Ĩ_m − r^I_m ≤ r^{Ĩ\I}_m,

which implies that (u − r^I)_{Ĩ\I} ∈ π_{Ĩ\I}(Γ_{Ĩ\I}) and contradicts (u − r^I)_{[N]\I} ∈ π_{[N]\I}(R_{U^c}(I)). This completes the proof of

A = ∪_{a∈A_k, U(a)=I} T_k(R(a)).
With Corollary E.6 we have
P(U(a) = I | k₁ = k) = λ(A)/V_[N](U) = u₀ V_{I\{k}}(r^I) V_{[N]\I}(U − r^I)/V_[N](U).
Eq. (65) follows with Proposition F.2.
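Eq. (65) can be compared against a direct simulation of the dynamics (a sketch with U = 1; the network, input probabilities, and the choice I = {0, 1}, k = 0 use 0-based unit indices and are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 3
W = np.array([[0.00, 0.10, 0.05],
              [0.20, 0.00, 0.10],
              [0.05, 0.15, 0.00]])   # w_ij: weight from unit j to unit i
u0 = 0.2 * np.sqrt(2)
p = np.array([0.5, 0.3, 0.2])

def avalanche(u, k):
    """Deliver u0 to unit k and propagate; mutates u, returns who fired."""
    u[k] += u0
    fired = np.zeros(N, dtype=bool)
    active = np.flatnonzero((u >= 1.0) & ~fired)
    while active.size:
        fired[active] = True
        u[active] -= 1.0
        u += W[:, active].sum(axis=1)
        active = np.flatnonzero((u >= 1.0) & ~fired)
    return fired

def V(ubar, idx):
    """V_I(ubar) = |diag(ubar_I) - W_I| as in Eq. (58); empty index set gives 1."""
    if len(idx) == 0:
        return 1.0
    return np.linalg.det(np.diag(np.asarray(ubar)[idx]) - W[np.ix_(idx, idx)])

k, I = 0, [0, 1]                       # assembly {0, 1} started by unit 0
rI = W @ np.isin(np.arange(N), I)      # r^I = W delta(I)
rest = [i for i in range(N) if i not in I]
others = [i for i in range(N) if i != k]
pred = V(rI, [1]) * V(1.0 - rI, rest) / V(np.ones(N), others)   # Eq. (65)

u = rng.random(N)
hits = total = 0
for kk in rng.choice(N, size=300_000, p=p):
    f = avalanche(u, kk)
    if kk == k and f[k]:               # nonempty avalanche started by unit k
        total += 1
        hits += set(np.flatnonzero(f)) == set(I)
print(hits / total, pred)
```

The empirical assembly frequency among nonempty avalanches started by unit k converges to the volume ratio of Eq. (65).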
G. Relation to graph topology
Graph properties determine phase space volumes and avalanche probabilities
In addition to the geometrical proof of Eq. (65), we give a combinatorial proof invoking Kirchhoff's theorem which generalizes the corresponding proof for the homogeneous EHE model [42].
Σ_{a∈A_k, U(a)=I} ∏_{j=2}^{D(a)} ∏_{i∈a_j} Σ_{l∈a_{j−1}} w_il = V_{I\{k}}(r^I) = |diag(r^I_{I\{k}}) − W_{I\{k}}|. (67)
The right hand side is the (k, k) cofactor of the W_I graph Laplacian. By Kirchhoff's Theorem [50], this determinant equals the number of spanning trees rooted at k, weighted by the product of weights along their arcs. There is a natural correspondence between an avalanche a = (a_i)_{i=1,…,D(a)} with a₁ = {k}, ∪_{i=1}^{D(a)} a_i = I and spanning trees of the vertices I rooted at k. For j ∈ I, j ∈ a_s corresponds to vertex j being separated from the root k by s steps. In this way, the avalanches a ∈ A_k with U(a) = I partition the spanning trees by their level-structure, i.e. which sets of units are separated from the root by the same number of steps. Let T(I, k) denote the set of weighted spanning trees in I rooted at k. What remains to be shown is that
∏_{j=2}^{D(a)} ∏_{i∈a_j} Σ_{l∈a_{j−1}} w_il = Σ_{t∈T(I,k): dist(i,k)=j for i∈a_j} ∏_{(i,j)∈t} w_ij.
By expanding the products on the left hand side iteratively, we enumerate all ways to connect elements in level/generation j − 1 with elements in level/generation j weighted by the corresponding edge weight. Thus, we have
Σ_{a∈A_k, U(a)=I} ∏_{j=2}^{D(a)} ∏_{i∈a_j} Σ_{l∈a_{j−1}} w_il = Σ_{t∈T(I,k)} ∏_{(i,j)∈t} w_ij = |diag(Σ_{j∈I} w_ij)_{i∈I\{k}} − W_{I\{k}}|.

λ(D) = V_[N](1) = |1 − W| = 1 + Σ_{i=1}^{N} Σ_{L∈L_i(W)} (−1)^{#(L)} w(L), (68)

|λ1 − W| = λ^N + a₁ λ^{N−1} + … + a_N, (69)

with a_i = Σ_{L∈L_i} (−1)^{#(L)} w(L).
Proposition G.2 (Impact of self-loops on the inhabited phase-space volume). Let W′ be the coupling matrix equivalent to W without self-loops, W′ = W − diag(W). Then

V_W(U, I) = V_{W′}(U − diag(W), I), (70)

and

[0, U_i)_{i∈[N]} \ Λ^W_[N] = {u + diag(W) | u ∈ [0, U_i − w_ii)_{i∈[N]} \ Λ^{W′}_[N]}. (71)

Proof. The first equation follows immediately from Corollary E.6 by

V_W(U, I) = |diag(U)_I − W_I| = |(diag(U) − diag(W))_I − W′_I| = V_{W′}(U − diag(W), I).

The second identity follows from the effect of adding diagonal entries to each Γ_I:

Γ^{W′+diag(W)}_I = [0, w_ii + Σ_{j∈I, j≠i} w_ij)_{i∈I}.
Stochastic dependencies between units
In addition to firing-rate correlations, we can use the geometric structure of the inhabited volume to analyze stochastic dependencies between the states of units in relation to the network topology.
We denote the set of units forming the strongly connected components of the graph with adjacency matrix W by scc(W ).
Theorem G.3.

[0, U_h)_{h∈H} \ Λ_H = ∏_{J∈scc(W_H)} ([0, U_j)_{j∈J} \ Λ_J).
Proof. We denote the right hand side by A. The inclusion [0, U_h)_{h∈H} \ Λ_H ⊆ A is trivial, since ∪_{I∈scc(W_H)} Λ_I ⊆ ∪_{∅≠J⊆H} Γ_J = Λ_H. To show A ⊆ [0, U_h)_{h∈H} \ Λ_H, it now suffices to show V(A) = V([0, U_h)_{h∈H} \ Λ_H), since the complement of A in C is a union of cylinder sets. We have

V(A) = ∏_{J∈scc(W_H)} |diag(U_J) − W_J| = |diag(U_H) − W_H| = V([0, U_h)_{h∈H} \ Λ_H),

where the second equality holds since diag(U_H) − W_H can be reordered to form an upper triangular block matrix with respect to the strongly connected components.
The direct product structure implies stochastic independence between units in different strongly connected components:
Corollary G.4. For two index sets I, J ⊆ [N ] which do not share a common strongly connected component in the graph with adjacency matrix W , the components u I = π I u and u J = π J u are stochastically independent with respect to the measure P.
Proof. We have to show that the multivariate random variables u I and u J are independent. The cumulative distribution functions of u I and u J are obtained by marginalizations of P. Since P is the normalised Lebesgue measure supported on D, independence follows if D factorizes into a product of subspaces and no two units i ∈ I, j ∈ J share a common subspace. This is ensured by Theorem G.3
if I and J do not share a common strongly connected component in the graph with adjacency matrix W .
Avalanche branching process
In this section, we study how the transition probabilities from step i to step i + 1 during an avalanche are influenced by network topology.
Theorem G.5. Let a ∈ A with P(a = a) > 0 and let I j = U j (a) for j = 1, . . . , D(a) and I = U (a).
The probability of the generation a j of an avalanche for 2 ≤ j ≤ D(a) given the previous steps of the avalanche is
P(a_j = a_j | a₁ = a₁, …, a_{j−1} = a_{j−1}) = (∏_{k∈a_j} Σ_{ℓ∈a_{j−1}} w_kℓ) × V_{[N]\I_j}(U − r^{I_{j−1}})/V_{[N]\I_{j−1}}(U − r^{I_{j−2}}). (72)
Proof. The region of states consistent with a 1 = a 1 , . . . , a j−1 = a j−1 is like in Proposition E.8 given by a hyperrectangle along dimensions I j−1 , while the remaining coordinates O = [N ] \ I j−1 are below their firing thresholds:
[U_{a₁} − u₀, U_{a₁})_{a₁} ∩ ∩_{ℓ=2}^{j−1} [U_i − r^{I_ℓ}_i, U_i − r^{I_{ℓ−1}}_i)_{i∈a_ℓ} ∩ ([0, U_i − r^{I_{j−1}}_i)_{i∈O} \ Λ_O).
Specifying which units fire at the next step of the avalanche leads to a smaller consistent region of states, which is the same along the dimensions U j−1 (a) but splits up the region of states along the
dimensions O into

[U_i − r^{I_j}_i, U_i − r^{I_{j−1}}_i)_{i∈a_j} ∩ ([0, U_i − r^{I_{j−1}}_i)_{i∈O\a_j} \ Λ_{O\a_j}),
since exactly the units in a j cross the firing threshold.
The probability Eq. (72) is thus the quotient of the consistent volumes along dimensions O which follow by using Corollary E.6.
This branching process needs memory of which units are refractory.
H. Application to structurally simple networks
In this section we apply our framework to homogeneous and non-homogeneous networks with regular structures whose symmetries allow us to simplify the measures and distributions derived in this paper. In consequence, the avalanche size distributions can be given in closed form, since assembly probabilities of a given size do not depend on the detailed assembly subgraph(s) but only on a few global parameters.
In this section we set U = 1 and p = (1/N)1 unless otherwise specified.
Homogeneous network
The homogeneous network is the classical setting for the EHE-model, which was introduced and analyzed in [28]. In the following we will describe in detail how the known avalanche size distribution and its expected value [79] arise naturally from our framework when the coupling matrix is homogeneous.
Let W^hom = (w_ij)_{i,j∈[N]} with w_ij = α/N for all i, j ∈ [N] and α + u₀ < 1. We use the shorthand P = P_{W^hom} in this subsection.
For the special choice of W^hom, the inverse M^hom = (1 − W^hom)^{-1} is given in closed form by

M^hom = 1 + (1/(1 − α)) W^hom.
Thus, the probability that unit i ≠ k fires in an avalanche started by unit k is given by Eq. (20):

P_k(i ∈ U(a)) = M^hom_ik/M^hom_kk = α/(N − (N − 1)α).
The mean firing rate of the homogeneous network is
Y_W = u₀ M^hom p = (1 + α/(1 − α)) (u₀/N) 1 = (u₀/(N(1 − α))) 1.
The mean nonempty avalanche size is given by Eq. (15):

E(S(a) | a₁ = {k}) = 1 + Σ_{j∈[N]\{k}} M_jk/M_kk = 1 + (N − 1)α/(N − (N − 1)α) = N/(N − (N − 1)α). (73)
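The closed form of M^hom and Eq. (73) are easy to confirm numerically (N and α below are illustrative):

```python
import numpy as np

N, alpha = 50, 0.7
W = np.full((N, N), alpha / N)            # homogeneous coupling w_ij = alpha/N

# Closed form of (1 - W)^{-1}: uses W @ W = alpha * W
M_closed = np.eye(N) + W / (1 - alpha)
M = np.linalg.inv(np.eye(N) - W)
assert np.allclose(M, M_closed)

# Probability that unit i != k joins an avalanche started by k
pk = M[1, 0] / M[0, 0]
assert np.isclose(pk, alpha / (N - (N - 1) * alpha))

# Mean nonempty avalanche size, Eq. (73)
size = 1 + (N - 1) * M[1, 0] / M[0, 0]
assert np.isclose(size, N / (N - (N - 1) * alpha))
print(size)
```

The identity M^hom = 1 + W^hom/(1 − α) follows from (W^hom)² = α W^hom, which the first assertion checks implicitly.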
In order to calculate the avalanche size distributions, we first simplify the volume expressions. The general expression V_I(Ū) with constant vector Ū_i = v simplifies to

V_I(Ū) = |diag(Ū)_I − W^hom_I| = |diag(Ū)_I| · |1 − W^hom_I/v| = v^{|I|} (1 − |I|α/(vN)). (74)
With these simplifications, the probability of an empty avalanche (Lemma F.1) is given by

P(a(u, k₁) = () | k₁ = k) = 1 − u₀ V_{[N]\{k}}(1)/V_[N](1) = 1 − u₀ (N − (N − 1)α)/(N − Nα) = P(a = ()).
We will now consider Eq. (65), where r^I is for this network given by r^I = W^hom δ(I) = (|I|α/N) 1:

P_k(U(a) = I) = V_{I\{k}}(r^I) V_{[N]\I}(1 − r^I)/V_{[N]\{k}}(1).
The first term simplifies to

V_{I\{k}}(r^I) = (α/N)^{|I|−1} |I|^{|I|−2},

which is the number of spanning trees in a complete graph of |I| units (Cayley's formula), weighted by the product of the |I| − 1 edge weights of each spanning tree. The second term and the denominator are given by

V_{[N]\I}(1 − r^I) = (1 − |I|α/N)^{N−|I|} (1 − (N − |I|)α/(N(1 − |I|α/N))) = (1 − |I|α/N)^{N−|I|−1} (1 − α),

V_{[N]\{k}}(1) = 1 − (N − 1)α/N.

Setting E(|a₂| | a₁ ≠ ()) = 1 and solving for α < 1 we obtain the critical coupling strength

α_c = (N² − N√(N − 1) − N)/(N² − 3N + 2). (75)

For large N, this expression scales like (1 − N^{−1/2})/N, consistent with the numerical evidence for the homogeneous EHE-model [28].
The same calculation for the homogeneous coupling matrix without self-weights W_h leads to analogous expressions and to the critical coupling strength

α_c^{W_h} = (N² − N√(N − 1) − N)/(N² − 5N + 5).   (76)
Coupled homogeneous networks
In this subsection we will generalize the avalanche distribution of the homogeneous network to coupled homogeneous networks. Let W_block ∈ R^{N×N} be a c × c block matrix, with each block being a homogeneous matrix. Let 0 ≤ w_ij be the weight between units belonging to subnetworks (blocks) i and j, 0 < i, j ≤ c, and let N_i > 0 be the total number of units in subnetwork i with Σ_{i=1}^{c} N_i = N. We denote the c × c matrix with entries w_ij by W_c and require Σ_{j=1}^{c} w_ij N_j + u_0 < 1 for all i = 1, ..., c. In this section we will use the shorthand P = P_{W_block} and set U = 1. The volumes V_I then depend only on the numbers n(I) of assembly units in each block, and we abbreviate the corresponding expression by V_c(v, n(I)).
The assembly distribution is thus given by a corresponding case distinction over the block counts n(U(a)). If the graph W_c is fully connected, i.e., W_c > 0 componentwise, the case distinction in the assembly distribution is not needed. In this case, the probability distribution P_k(n(U(a))) reduces to the expression reported in [41, equation 7].
Two homogeneously coupled subnetworks
As a prototypical example of coupled homogeneous networks we consider two coupled subnetworks with N_s units each and block coupling matrix W_c given by

\[
W_c = \frac{1}{N_s}\begin{pmatrix} \alpha & \beta \\ \beta & \alpha \end{pmatrix}.
\]
Note that each unit in this network receives internal activation of α + β in a global avalanche. Thus we require α + β + u 0 < 1. Simplifying the avalanche statistics according to the steps above and explicitly calculating the determinants for β > 0 leads to the following distribution for sizes of non-empty avalanches:
\[
P_{W_c}(S(a)=n \mid S(a)>0) = \frac{\beta}{P_0\, N_s^{2N_s-1}} \sum_{k=0}^{n} \binom{N_s}{k}\binom{N_s}{n-k}\, \frac{n\, x_1^{k} x_2^{n-k}\, X_1^{l_1-1} X_2^{l_2-1}\,\bigl((X_1-\alpha l_1)(X_2-\alpha l_2)-l_1 l_2 \beta^2\bigr)}{\alpha\beta n^2-(\alpha-\beta)^2 k(k-n)} \tag{77}
\]

where

x_1 = kα + (n − k)β,  x_2 = kβ + (n − k)α,
X_1 = N_s − x_1,  X_2 = N_s − x_2,
l_1 = N_s − k,  l_2 = N_s − n + k,
P_0 = 2N_s − 2α(2N_s − 1) + 2(N_s − 1)(α² − β²).
There is an intuitive explanation for the terms in the simplification: k indicates the number of units from one subnetwork participating in the avalanche, and n − k the corresponding number of units from the other subnetwork. x_1/N_s and x_2/N_s represent the input given to a unit in the two subnetworks, while X_1/N_s and X_2/N_s denote the upper boundaries for the states of units in the subnetworks not participating in the avalanche. l_1 and l_2 are the numbers of silent units in the subnetworks, and P_0 is a normalization constant.
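Since Eq. (77) was reconstructed here from a garbled source, it is worth cross-checking it against a brute-force evaluation of the assembly probabilities from Eq. (65). In the sketch below (our own code; N_s, α, β are arbitrary example values) both sides are normalized, so overall prefactors drop out, and the small size N_s = 3 keeps the subset enumeration feasible:

```python
# Brute-force avalanche size distribution for two coupled homogeneous
# subnetworks vs. the closed-form expression of Eq. (77).
import itertools
from math import comb
import numpy as np

Ns, alpha, beta = 3, 0.3, 0.1
N = 2 * Ns
W = np.block([[np.full((Ns, Ns), alpha), np.full((Ns, Ns), beta)],
              [np.full((Ns, Ns), beta), np.full((Ns, Ns), alpha)]]) / Ns

def vol(diag_vals, idx):
    """V_idx(U) = |diag(U)_idx - W_idx| for the subsystem along idx."""
    idx = list(idx)
    if not idx:
        return 1.0
    return np.linalg.det(np.diag(diag_vals[idx]) - W[np.ix_(idx, idx)])

brute = np.zeros(N + 1)
for I in (s for r in range(1, N + 1) for s in itertools.combinations(range(N), r)):
    r_I = W[:, list(I)].sum(axis=1)          # recurrent input W*delta(I)
    out = [j for j in range(N) if j not in I]
    # |I| equal start units; the constant denominator cancels after normalization
    brute[len(I)] += len(I) * vol(r_I, list(I)[1:]) * vol(1 - r_I, out)
brute /= brute.sum()

formula = np.zeros(N + 1)                    # Eq. (77), up to normalization
for n in range(1, N + 1):
    tot = 0.0
    for k in range(n + 1):
        x1, x2 = k * alpha + (n - k) * beta, k * beta + (n - k) * alpha
        X1, X2 = Ns - x1, Ns - x2
        l1, l2 = Ns - k, Ns - n + k
        denom = alpha * beta * n**2 - (alpha - beta) ** 2 * k * (k - n)
        tot += (comb(Ns, k) * comb(Ns, n - k) * n * x1**k * x2**(n - k)
                * X1**(l1 - 1) * X2**(l2 - 1)
                * ((X1 - alpha * l1) * (X2 - alpha * l2) - l1 * l2 * beta**2)
                / denom)
    formula[n] = beta * tot
formula /= formula.sum()

assert np.allclose(brute, formula)
```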
One-dimensional ring and line networks
Efficiently calculating the avalanche size distribution is possible if all avalanche assemblies of a given size (or pattern, as in the coupled subnetwork case) have the same distribution, or if the number of possible assemblies is restricted by the network. The latter is the case in sparsely coupled networks, like one-dimensional ring or line networks.
In the one-dimensional ring network with N units, each unit is connected bidirectionally to its two nearest neighbors with coupling weight α/2. Thus, the coupling matrix W_ring is a circulant matrix with just two positive entries α/2 in each column, and we require α + u_0 < 1. This simple form of W_ring allows us to specify the volume λ(D) of the inhabited region in closed form:

λ(D) = ∏_{j=0}^{N−1} (1 − α cos(2πj/N)).
Due to the sparsity of these networks, the connected assemblies are always simple line segments. This can be used to find formulas for the avalanche size distributions in the ring and line networks (Eqs. (78) and (79)), expressed in terms of a normalization constant P_0^r and segment weights v(n) for the ring, and of P_0^l and v^l(n) for the line network, where P_0^l and v^l(n) are given by

P_0^l = Σ_{j=0}^{N−1} ((1 + a)^{j+1} − (1 − a)^{j+1}) ((1 + a)^{N−j} − (1 − a)^{N−j}) / (a² 2^{N+1}),

v^l(n) = 1 if n = 0, and v^l(n) = ((1 − α + a)(1 + a)^n + (α + a − 1)(1 − a)^n) / (a 2^{n+1}) otherwise,

with a := √(1 − α²).
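The building block of these expressions is the determinant d_n of the n × n tridiagonal matrix with 1 on the diagonal and −α/2 on the off-diagonals, for which the closed form d_n = ((1+a)^{n+1} − (1−a)^{n+1})/(2^{n+1} a) holds; the symbol a = √(1 − α²) is our reading of the (undefined) constant in the garbled source. A quick numerical sketch:

```python
# Closed-form tridiagonal determinant underlying the line-network formulas.
import numpy as np

alpha = 0.9
a = np.sqrt(1 - alpha**2)

for n in range(1, 10):
    T = (np.eye(n)
         - np.diag([alpha / 2] * (n - 1), 1)
         - np.diag([alpha / 2] * (n - 1), -1))
    closed = ((1 + a) ** (n + 1) - (1 - a) ** (n + 1)) / (2 ** (n + 1) * a)
    assert np.isclose(np.linalg.det(T), closed)
```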
Erdős-Rényi network
For random graphs in which edges are independently sampled from a distribution, here exemplified by an (undirected) Erdős-Rényi graph, the expected avalanche size distribution can be well approximated by the expected probability of an assembly of size n. In this graph, each undirected edge occurs with probability p and weight α/N independently of all other edges.
In order to compute this expected assembly distribution, the expected values of the assembly Laplacian cofactor and of V_I(U) have to be determined. For an Erdős-Rényi graph with n nodes, connection probability p, and weight α/N, the expected Laplacian cofactor is just (α/N)^{n−1} times the expected number of spanning trees in the random graph. This is particularly simple, since there are n^{n−2} spanning trees in the complete graph and each of them occurs in the random graph with probability p^{n−1}. Taken together, we have E(V_{I\{k}}(W δ(I))) = (pα/N)^{|I|−1} |I|^{|I|−2}.
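This expectation can be verified exactly by enumerating all edge configurations of a small Erdős-Rényi graph (a sketch; n, p, α, N are arbitrary example values):

```python
# E(V_{I\{k}}(W delta(I))) = (p*alpha/N)^{n-1} * n^{n-2}, checked by exact
# enumeration of all 2^6 undirected graphs on n = 4 nodes.
import itertools
import numpy as np

n, p, alpha, N = 4, 0.3, 0.5, 10
w = alpha / N
edges = list(itertools.combinations(range(n), 2))

expected = 0.0
for config in itertools.product([0, 1], repeat=len(edges)):
    prob = np.prod([p if b else 1 - p for b in config])
    A = np.zeros((n, n))
    for b, (i, j) in zip(config, edges):
        A[i, j] = A[j, i] = b * w
    L = np.diag(A.sum(axis=1)) - A               # graph Laplacian
    expected += prob * np.linalg.det(L[1:, 1:])  # cofactor = weighted tree count

assert np.isclose(expected, (p * w) ** (n - 1) * n ** (n - 2))
```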
The expected determinant in the more general expression V I (U ) is more difficult to determine, since the diagonal elements U i − w ii have different moments than the off diagonal entries −w ij . A consequence of these different statistics is that in the Leibniz formula of the determinant, expected values for cycles in permutations differ depending on the cycle length (since cycles of length one involve a diagonal element, length two cycles the same edge twice, and longer cycles independent edges). Thus, the expected determinant is given by a cycle index of the permutation group S n for which generating functions are known (see [80,Eq. (5.30)]). With these combinatorial results, the expected value of V I (U ) for independent entries U i can be given in terms of Hermite polynomials H n . As an example, we supply the expression for the expected volume of the inhabited region.
Implementation of the analytical avalanche distribution using this technique will be made available by the authors upon reasonable request. With z := (1 + pα/N)/√(2(α/N)²(p − p²)) we have

E(λ(D)) = E(V_{[N]}(1)) = ((α/N)√(p(1 − p)/2))^N (H_N(z) + N√(2p/(1 − p)) H_{N−1}(z)).
Note that unlike the expected number of spanning trees, the expected volume of the inhabited region is different from the corresponding volume for the homogeneous matrix with entries p(α/N).

Notation:
- a(k, u): avalanche function returning the avalanche upon input to unit k from state u; a(k, u) := a(k_1, u) for k = (k_1, k_2, ...) (a random variable on (Σ_N × C, B))
- N_s(k, u): spike count vector (a random variable) after s iterations from state (k, u)
- V_I(U): volume of the inhabited region, V_I(U) = |diag(U)_I − W_I| (Eq. (18))
- π_1, π_2: projections to the first/second component of the input-state space Σ_N × C
- π_I: natural projection from C to R^I
The model describes a network of N integrate-and-fire neurons. Each unit i ∈ [N] := {1, ..., N} is characterized by a state 0 ≤ u_i < U_i; hence the phase space of the system is given by the N-dimensional cube C := ×_{i∈[N]} [0, U_i). States can be interpreted as membrane potentials, with 0 representing the resting potential and U_i the individual firing threshold for unit i. Units are coupled by the non-negative weight matrix W = (w_ij)_{i,j∈[N]}, with 0 ≤ w_ij specifying the increase of the membrane potential of unit i upon receipt of a spike from unit j. The coupling matrix W induces a weighted directed graph G(W) with vertices [N], edges E(W) := {(j, i) ∈ N × N | w_ij > 0}, and weights (j, i) → w_ij, see Definition A.4. We refer to the induced subgraph with vertices I ⊆ [N] as a subnetwork along I and use W_I for the corresponding weight matrix with rows and columns restricted to I.
The first generation a_1 of a always consists of a singleton a_1 = {k}, k ∈ [N], if a is not the empty avalanche a = (). The length of the sequence a will be denoted by D(a) and called the duration of the avalanche. We call the unions of the generations U_j(a) := ∪_{i=1}^{j} a_i, 1 ≤ j ≤ D(a), and U(a) := U_{D(a)}(a) the avalanche assembly (up to generation j), and the sum of cardinalities S(a) := Σ_{i=1}^{D(a)} |a_i| the size of the avalanche.

Figure 1. Spreading of an avalanche in an example network with four units (circles). Each unit i has a state 0 ≤ u_i < U_i, with 0 representing the resting state and U_i the firing threshold. States are visualized as bar graphs with U_i = 1 for all i. Units are coupled by a directed, weighted graph with coupling matrix W = (w_ij)_{i,j∈[N]}, where w_ij defines the connection strength (non-zero entries shown as arrows). A spike of unit j increases the state of the receiving unit i by the corresponding interaction strength w_ij (red bars). White units have not participated in the avalanche yet, and red units are currently active and send spikes to all connected units (arrows marked in red). Incoming weights are restricted in their magnitude such that no unit can be active twice in an avalanche; hence active units become quasi-'refractory' (gray) in the next step. Panels (a) to (c) depict the spreading of the avalanche a = ({1}, {2, 4}), having size S(a) = 3, duration D(a) = 2, and assembly U(a) = {1, 2, 4}. In panel (a), giving external input (light red) to unit 1 pushes its state beyond the firing threshold and starts the avalanche. The avalanche terminates in panel (c), since the activation from the second generation shown in panel (b) is insufficient to bring any unit above the firing threshold.
Fig. 1 illustrates in detail the avalanche dynamics on an example network after giving external input to unit 1, which leads to the avalanche a = ({1}, {2, 4}).
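The dynamics illustrated in Fig. 1 can be sketched as a minimal simulation (our own code, not the authors' implementation; network size, coupling, and drive u_0 are arbitrary example values with α + u_0 < 1). For the homogeneous network, the empirical mean non-empty avalanche size should approach N/(N − (N − 1)α), cf. Eq. (73):

```python
# Minimal EHE-style avalanche simulation on a homogeneous network.
import numpy as np

rng = np.random.default_rng(0)
N, alpha, u0 = 20, 0.5, 0.05
W = np.full((N, N), alpha / N)   # homogeneous coupling with self-weights
u = rng.uniform(0, 1, N)         # initial states in [0, 1)

def avalanche(u, k):
    """Give external input u0 to unit k and propagate; return avalanche size."""
    u[k] += u0
    fired = np.zeros(N, dtype=bool)
    active = np.flatnonzero((u >= 1) & ~fired)
    while active.size:
        fired[active] = True
        u[active] -= 1.0                 # spike reset
        u += W[:, active].sum(axis=1)    # recurrent activation from spikes
        active = np.flatnonzero((u >= 1) & ~fired)
    return fired.sum()

sizes = np.array([avalanche(u, rng.integers(N)) for _ in range(200_000)])
mean_nonempty = sizes[sizes > 0].mean()  # compare with N/(N - (N-1)*alpha)
```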
Figure 2. (a) State space for a two-dimensional EHE-model. States u_1 and u_2 span the state space C (unit rectangle), which consists of the inhabited region D (yellow shading) and the non-inhabited region Λ (gray shading). Black dots and solid arrows indicate a sample trajectory u^(1), ..., u^(4), during which external input u_0 is provided first to unit #2, then to unit #1, and finally to unit #2 again. The length of the solid arrows is u_0. When the trajectory crosses the right or upper boundary of the unit cube (i.e., the firing thresholds U_1 = 1 or U_2 = 1), a unit spikes and its state is 'reinjected' at the opposite side of C (spike reset). Simultaneously, recurrent activation is distributed to all connected units, corresponding to shifts by columns of W (dashed arrows). Distribution of recurrent input can continue multiple times until no state is above threshold anymore, thus forming multiple generations of avalanches comprising different numbers of units. (b) Torus transformation for a two-dimensional EHE-model. Left: Copies (bright yellow) of the inhabited region D (dark yellow) tesselate the u_1-u_2 plane. Points equivalent to u^(3), u^(4) in the example trajectory introduced in (a) are labeled in translated copies of the inhabited region. They are reached by simple shifts u_0, while reset and recurrent activation have no effect on the equivalent trajectory (black arrows). The colors of the line segments indicate the avalanche which is triggered when the trajectory crosses the corresponding border. In this example, purple, red, green, and blue designate the avalanches ({1}), ({2}), ({1}, {2}), and ({2}, {1}), respectively. The equivalent points lie on a grid spanned by the column vectors of 1 − W, with one unit cell indicated by the dashed gray lines. The inhabited region is the image of this unit cell under F. Right: Applying the inverse M = (1 − W)^{−1} leads to an equivalent dynamical system on the torus which consists of translations by column vectors of M. Points z^(i) on the torus are the images of states u^(i) in D.

In this two-dimensional example there are four possible non-empty avalanches a(k, u), namely ({1}), ({2}), ({1}, {2}), or ({2}, {1}). These avalanches occur exactly when the state trajectory crosses the lines along the boundary of C colored in purple, red, green, or blue, respectively.
Through Taylor expansion, M can be written as a Neumann series M = (1 − W)^{−1} = Σ_{l=0}^{∞} W^l. For the directed graph G(W), (W^l)_ij equals the product of edge weights summed over all paths from unit j to unit i with exactly l edges. Thus one can interpret Eq. (12) for the equilibrium firing rates as summing the influences between units over all possible paths in the network. Moreover, the weighted sum of all paths from node k to node i gives the probability that unit i fires in an avalanche started by unit k (see Theorem D.1).
The diagonal element M_kk in the denominator is equal to the quotient M_kk = |1_{[N]\{k}} − W_{[N]\{k}}| / |1 − W|, which has a geometrical interpretation (see next section) as the quotient of the volumes of the inhabited regions of the [N]\{k} system and the full system, and is proportional to the probability that unit k starts an avalanche. Eq. (15) can again be interpreted in terms of the graph G(W): given that an avalanche is started by unit k, the average size of the avalanche is equal to the weighted sum of all paths in G(W) from k to all units of the graph, normalized by the weighted sum of all paths from k to itself.
Note that relative unique ergodicity of the uniform measure on the N-torus translates to relative unique ergodicity of P = λ_D, with λ being the Lebesgue measure. The geometry of D is thus closely related to the stochastic properties enforced by P. The P-volume of a measurable subset A ⊆ C is given by the quotient of the N-dimensional Lebesgue volumes λ(A ∩ D)/λ(D). Using the conjugacy to the translation dynamics on T^[N], the volume λ(D) for the general case of arbitrary firing thresholds U is given by λ(D) = |diag(U) − W|. The intuition behind this closed-form expression is illustrated in Fig. 2: the inhabited region D is the image of the white dashed parallelepiped, which represents a unit cell, under F. Since F only induces translations, it is volume-preserving, and thus the volume of D is the volume of the unit cell, which is simply |diag(U) − W|, i.e., the determinant of the inverse mapping from T^[N] to C.
Fig. 3(a)-(c) illustrates the self-similar geometry of the noninhabited region Λ for one- to three-dimensional systems. In the phase space of the one-dimensional system shown in (a), the noninhabited region consists of the interval Λ_[1] = Γ_{1} = [0, w_11). It is intuitively clear that the density of states vanishes on this interval, since T_1(C \ [0, w_11]) ∩ [0, w_11) = ∅. Similarly, the noninhabited region for the two-dimensional system shown in (b) is the union of the two-dimensional extension of Γ_{1}, the equivalent region Γ_{2} for unit 2, and a corresponding region Γ_{1,2} associated with both units.
Figure 3. Illustration of the non-inhabited region for the EHE model in (a) one, (b) two, and (c) three dimensions, formed by the union of regions Γ_I for index sets ∅ ≠ I ⊆ [N] defined by Eq. (17). The colors blue, red, and yellow mark regions with index sets of cardinalities 1, 2, and 3, respectively. (d) Decomposition of the three-dimensional state space C = [0, 1)³ into non-overlapping rectangles according to Eq. (55).
Selecting an assembly subnetwork with units I ⊆ [N] describes a directed graph cut in which all outgoing edges from units in I to units in [N]\I are part of the cut set, with (vectorized) cut weight cut(I) ∈ R^{[N]\I}, cut(I) := (W δ(I))_{[N]\I}. The weight of the cut set is equivalent to the recurrent input that the units in subnetwork I provide to the units outside the subnetwork. Fig. 4(a) illustrates the graph cut between an assembly of five units and the rest of the network.
Figure 4. Assembly probabilities relate to spanning trees and resistance distance. (a) Illustration of an avalanche assembly. The dashed red circle represents the graph cut separating the assembly I = {1, 2, 3, 4, 5} from the rest of the network (gray units). Dashed edges are in the cut set and contribute to the cut weight. For simplicity, the assembly graph is undirected. (b) There are four spanning trees rooted at unit 5. They are obtained by deleting one of the edges of the {1, 2, 3, 4}-cycle. (c) The three possible ways in which an avalanche starting at unit 5 can spread through the assembly. Numbers associate the avalanche with the set(s) of corresponding spanning trees. (d) Effective resistances between pairs of units for an electrical network coupled by resistors with unit conductance along the edges of the graph (solid lines). Dashed lines represent edges missing in the assembly network. Numbers at existing edges also indicate the fraction of spanning trees that would be lost upon edge deletion. For example, assembly activation is impossible without edge (4, 5) (blue line). Numbers at non-existing edges indicate the relative number of additional spanning trees emerging when the edge is added to the assembly. For example, adding the edge (2, 5) (red line) would double the assembly probability by doubling the number of spanning trees.
The graph Laplacian L(W) := diag(W1) − W contains the weighted in-degrees of each unit on the diagonal. Similar to the adjacency matrix W, the graph Laplacian is a matrix representation of a graph, and its spectral properties contain information about the connectivity of G(W) [48]. For example, L has a trivial eigenvalue 0 corresponding to the eigenvector 1, while the second smallest eigenvalue is nonzero if and only if the graph is connected. The product of the eigenvalues, except for the trivial one, is related to the weighted number of spanning trees by Kirchhoff's Matrix Tree Theorem [49, 50]. Specifically, the weighted sum of spanning trees in the subnetwork along a subset I ⊆ [N] starting in k ∈ I is equal to L^(k)(W_I), the (k, k)-cofactor, k ∈ I, of the graph Laplacian of the induced subgraph, L^(k)(W_I) := |L(W_I)_{I\{k}}|. This is exactly the term V_{I\{k}}(W δ(I)) = |diag(W δ(I))_{I\{k}} − W_{I\{k}}| in the numerator of Eq. (20), and thus the assembly probability is proportional to the weighted number of spanning trees rooted at the starting unit.
Fig. 4(d) annotates the effective resistances in the illustrated simple assembly network. While a failure of one of the edges in the circle connecting units 1, 2, 3, 4 would destroy three out of the four assembly spanning trees, all spanning trees rely on the existence of the edge between units 4 and 5.
In the corresponding determinant expansion, the sum runs over the set of all linear directed subgraphs L of W with n nodes; #(L) denotes the number of connected components of L, and each subgraph is weighted by the product of all its edge weights. Note that each component of a linear directed subgraph is a directed cycle.
As described previously, only the states of units which are recurrently connected are stochastically dependent in this model. The set of strongly connected components partitions the units of a graph in such a way that the connections between these components form a directed acyclic graph (DAG). In fact, the inhabited region factorizes into a direct product of the inhabited regions along the strongly connected components of W (Theorem G.3). This correspondence between network topology and phase space structure is illustrated in Fig. 5 for different three-unit network motifs. These considerations show that the inhabited region D is the full cube C if the coupling network is a DAG, and that in this case all states are stochastically independent. The conditional branching probability Eq. (13) is particularly easy to interpret in this case: M_ik is the finite sum of paths between nodes k and i, weighted by the product of their edge weights in the DAG, and M_kk = 1. Increasing an edge weight in a DAG only increases the numerator in Eq. (13).
This discrepancy is resolved by the denominator M_kk, which is greater than one in recurrent networks. Geometrically, M_kk is the quotient of the (N−1)-dimensional volume of the (hyper-)face u_k = 1 of the inhabited region and the N-dimensional volume of D. As we have shown above, the inhabited region shrinks with increasing recurrent weights, which increases the correlations in the units' states and in turn increases M_kk. Thus, these correlations lead to an additional increase in the branching probabilities during avalanches, which compensates for the lower number of possible paths along which an avalanche can spread in this model.
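The DAG statements above are straightforward to verify numerically; a sketch (a random strictly upper-triangular coupling matrix serves as an example DAG):

```python
# For an acyclic W: |1 - W| = 1 (inhabited region is the full cube) and
# the diagonal entries M_kk of M = (1 - W)^{-1} all equal 1.
import numpy as np

rng = np.random.default_rng(1)
N = 6
W = np.triu(rng.uniform(0, 0.1, (N, N)), k=1)  # strictly upper triangular -> DAG

M = np.linalg.inv(np.eye(N) - W)
assert np.isclose(np.linalg.det(np.eye(N) - W), 1.0)
assert np.allclose(np.diag(M), 1.0)
```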
Figure 5. Relation between phase space structure, graph motifs, and state correlations. The top row shows the phase space structure for weight matrices realizing the graph motifs depicted in the bottom row (all edge weights are set to 0.2). The boundary between inhabited and non-inhabited region is shaded in gray, with the inhabited region being the complement of the non-inhabited region Λ in the unit cube. (a) Complete digraph. The self-similarity of the phase space structure is apparent at the faces of the cube, on which two-dimensional inhabited regions emerge (cf. Fig. 2). (b) Graph motif without recurrent connections between different units. The inhabited region factors into a product of three intervals (red, blue, and green lines). (c) Strongly connected components are {1, 2} and {3}. Correspondingly, the inhabited region decomposes into a direct product of the two-dimensional {1, 2} inhabited region (green area) and an interval along u_3 (red line). (d) The circle motif connects all units recurrently. As in (a), the inhabited region does not factorize.

D. Avalanche branching process and relation to directed percolation

In this subsection we investigate how state correlations influence the dynamics during an avalanche, i.e., the branching of the avalanche through the network. These considerations allow us to show that the EHE-model reduces to a compact directed percolation process on DAGs. In order to describe the branching process associated to the spreading of an avalanche in the model, we tag the units during the ongoing avalanche as either off, active, or refractory. Let a = a(k, u) with P(a = a) > 0. At generation 2 ≤ j ≤ D(a), the active units a_{j−1} are the units that crossed the threshold in the previous generation. Since recurrent feedback has an upper threshold defined by Eq. (5), all previously active units U_{j−2}(a) cannot fire again in the ongoing avalanche and thus become 'quasi'-refractory from the next generation on.
The remaining units [N] \ U_{j−1}(a) are in the off-state until they eventually become active.
For large N, the expected avalanche size is approximated by E_{W_h}(S(a)) ∼ 1/(1 − α) and scales like a power law with exponent −1 as a function of 1 − α.

B. Planar network with periodic boundary conditions and translation-invariant distance-limited connectivity

Consider a two-dimensional grid of units with periodic boundary conditions and varying coupling distance l. On the L × L periodic grid, an edge exists between each pair of distinct units at positions (i_1, i_2) and (j_1, j_2) if the distance max{|i_1 − j_1 mod L|, |i_2 − j_2 mod L|} ≤ l. For l = 1, each unit is connected to its eight neighbors at distance one, while l ≥ (L − 1)/2 leads to the fully connected graph, which we have treated in the previous section. For simplicity, we impose a uniform edge weight on all connections.
Each eigenvalue λ_i of W corresponds to an eigenvalue (1 − λ_i)^{−1} of M. The special structure of M also leads to a closed form for its column-sum norm ‖M‖_1 := max_{j∈[N]} Σ_{i=1}^{N} |M_ij|: M is the inverse of a diagonally dominant M-matrix, which allows us to use [59, corollary 4] to find ‖M‖_1 = (1 − α)^{−1}. Note that Eq. (26) holds for arbitrary non-negative shift-invariant coupling matrices, with α < 1 being the sum of incoming edge weights to each unit and λ_i the eigenvalues of the corresponding point spread function.
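Both statements can be checked for any shift-invariant example; a sketch using a nearest-neighbor ring as point spread function (arbitrary example values for N and α):

```python
# Eigenvalues of M are (1 - lambda_i)^{-1} for eigenvalues lambda_i of W,
# and the column-sum norm of M equals (1 - alpha)^{-1}.
import numpy as np

N, alpha = 8, 0.6
c = np.zeros(N)
c[1] = c[-1] = alpha / 2                          # ring point spread function
W = np.array([[c[(i - j) % N] for j in range(N)] for i in range(N)])

M = np.linalg.inv(np.eye(N) - W)
lam_W = np.sort(np.linalg.eigvals(W).real)
lam_M = np.sort(np.linalg.eigvals(M).real)
assert np.allclose(np.sort(1 / (1 - lam_W)), lam_M)

col_sum_norm = np.abs(M).sum(axis=0).max()
assert np.isclose(col_sum_norm, 1 / (1 - alpha))
```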
Fig. 6 displays mean avalanche sizes E_{W_h}(n) for the globally homogeneous network without self-interactions, and mean avalanche sizes E_W(n) for networks with limited coupling distance, as functions of 1 − α for grid sizes L = 50, 100, 300. For l = 1, each unit is connected to its 8 nearest neighbors with uniform weight α/8; for l = 2 it is connected to the 24 neighboring units up to distance 2 with weight α/24. While for fixed l the graph remains sparse with strong edge weights, the homogeneous coupling is dense and the edge weights scale like 1/N. Increasing the coupling distance leads, for fixed L, to on average larger avalanches for all α ∈ (0, 1). This is expected due to the greater number of units that are reachable during each step of an avalanche, which lowers the chance for the avalanche to stop. In the limit α → 1, the mean avalanche size reaches the system size L² regardless of the coupling scheme (homogeneous or distance-limited).

Figure 6. Scaling of mean avalanche size E[s] := E(S(a) | S(a) > 0) with coupling strength α for uniform translation-invariant coupling on a periodic two-dimensional grid, evaluated for different grid lengths L and coupling distances l as well as for the homogeneous network. The scaling is depicted as a log-log plot of E[s] in dependence of 1 − α, where α denotes the sum of incoming weights to each unit. Colors red, blue, and gray indicate coupling distances l of 1, 2, and global coupling ('hom'), respectively. Brightness of the colors denotes different grid lengths L (dark, medium, and bright for L = 50, 100, 300, respectively). At the 'critical' values of α for L = 50, indicated by the dashed lines and filled dots, the corresponding avalanche size distributions p(s) := P(S(a) = s | S(a) > 0) (inset) resemble power laws. For l = 1, 2, avalanche size distributions were obtained from a simulation of 10^6 avalanches, while the analytical distribution from Eq. (25) is shown for the homogeneous network at the critical coupling strength given by Eq. (76). The dashed straight line in the inset has slope −3/2.
Figure 7. Avalanche size statistics P(s) := P(S(a) = s | S(a) > 0) in small or structurally simple networks. (a) Effect of rewiring edges from a ring to a small-world network. Analytical avalanche size statistics for the l-nearest-neighbor coupled ring networks, a homogeneous network, and a small-world network. Coupling strengths in all networks are normalized so that each row sum of each matrix is α = 0.75. The l = 2 ring network and the chosen realization of the small-world network are shown in the inset. Panel (b) shows the effect of removing a single edge from the l = 1 ring network (illustrated in the inset; removing the red edge turns the ring into a line network) with α = 0.9999 and N = 1000. Whereas the avalanche size distribution for the ring network is bi-modal with a peak for global avalanches, the distribution of the line model is unimodal and decays exponentially for large avalanches. Panel (c) shows the avalanche size distribution for a network consisting of two homogeneous subnetworks of size N_s = 100 for different levels of inhomogeneity β/α at a constant row sum α + β = α_c(2N_s). Avalanche assemblies of a fixed size differ only in the number of units from each subnetwork. The probability p(f) := P(|U(a) ∩ [N_s]|/N_s = f | S(a) = 97) that an assembly of size s = 97 consists of a certain fraction f of units in one subnetwork is shown in panel (d).

Already small structural changes in a network can thus substantially alter its avalanche distributions. This point is illustrated for the extreme case of deleting a single edge from a strongly coupled one-nearest-neighbor ring network with N = 1000 units in panel (b). Due to the sparsity of this network, assemblies always form connected line segments. This makes it efficient to compute the avalanche size statistics from the assembly probabilities in Eq. (20) for line segments, leading to the avalanche size distribution in Eq. (78).
The avalanche size distribution of the ring network with α = 0.9999 (where α is again the sum of incoming edge weights to each unit) is bimodal, with peaks at around s = 70 and at the global avalanche size s = 1000. We obtained the line network by deleting a single (undirected) edge from the ring network, without changing any of the remaining coupling weights. The avalanche size distribution of this network is given by Eq. (79). In contrast to the ring network, the avalanche size distribution of the line network is uni-modal and decays exponentially for large avalanche sizes. This effect is due to the restriction on the spreading of an avalanche imposed by the missing edge, and can be understood intuitively: if the avalanche starts at one end of the line, it can only spread in one direction and has to complete N − 1 iterations to become a global avalanche. In contrast, an avalanche can always spread in two directions simultaneously in the ring network, and only has to complete about half that number of iterations to activate all units. A formal understanding arises from the observation that there are N spanning trees in the ring network of size N, but only one in the corresponding line network. Since removing recurrent connections can only increase the volume of the inhabited region, we immediately deduce from Eq. (20) that deleting the edge lowers the probabilities of the large assemblies accordingly.
Next we consider networks in which two subpopulations or areas interact globally via (potentially) weaker connections. The coupling matrix for this network is a 2 × 2 block matrix with blocks of size N × N and values α/N on the diagonal and β/N on the off-diagonal blocks. Note that the row sums of this matrix have a value of α + β. Due to its regular structure, each avalanche assembly can be characterized just by the number of participating units from each subnetwork. In addition, the determinants in Eq. (65) for this block matrix can be reduced to determinants of 2 × 2 matrices; this reduction is detailed in the appendix, section VII H 2. By calculating these determinants, we find the avalanche size distribution of this network as the expression given in Eq. (77). Let us discuss some implications of varying the inter-network connection strength β on the avalanche dynamics and assembly formation. With α_c(N) := 1 − 1/√N we denote the total critical coupling strength for a homogeneous network of size N at which it exhibits a power-law avalanche size distribution. For α = β = α_c(2N)/2 we obtain a critical homogeneous network of size 2N. We now introduce an inhomogeneity into the weight matrix by varying β/α while keeping the row sums constant at α + β = α_c(2N). Fig. 7, panel (c) shows the avalanche size distribution for different values of β/α. If β = 0, avalanches cannot spread from one subnetwork to the other, and the avalanche distribution for the full network with 2N units is just the same as for a homogeneous network of size N with a supercritical coupling of α_c(2N) > α_c(N). For non-zero but weak inter-network coupling weights 0 < β ≪ α, avalanches up to s = 2N are possible, and the avalanche size distributions show an inflexion point at around s = N. However, already at β/α = 0.05, which is still a very strong inhomogeneity, the avalanche size distribution (dark gray line) becomes very similar to that of the homogeneous network (black line, α = β = α_c(2N)).
In contrast to the small differences in the size distributions observed for a wide range of β values, weight inhomogeneities have a larger effect on the avalanche assemblies, i.e. how likely an assembly of a given size is composed of a certain fraction of units from a single network. For the avalanche size s = 97, which is where the distributions shown in panel (c) intersect, panel (d) shows this assembly distribution in dependence of the fraction of units from the first subnetwork. For β/α = 0.005 (blue line), the most likely composition of an avalanche of size s = 97 is that all participating units stem from either the first or from the second subnetwork. Increasing β shifts the two peaks of the assembly distribution closer together until the distribution becomes unimodal with a single peak at 0.5 (green and red lines). The assembly distribution for the homogeneous network α = β is simply a hypergeometric distribution (arising from 97 draws out of a population of 200 neurons out of which 100 are from the first subnetwork) since each assembly has the same probability. In contrast, the shape of the assembly distribution for β/α = 0.05 is much wider and is approximately constant from 0.3 to 0.7, indicating a much higher variability of assembly compositions due to the inhomogeneity in the network.
of non-homogeneous coupling topologies and reduced numbers of units receiving external input was out of reach. We were able to simplify the analysis drastically by exploiting an invariance of the fast-scale avalanche dynamics. Formally, our model reduces to a simple translation dynamics with respect to a topology that turns out to be equivalent to the topology of the N-torus for general positive coupling matrices W, as long as their eigenvalues stay below one. This torus transformation removes the discontinuities of the avalanche dynamics and is the central idea behind our study. It allowed us to show that, for almost all coupling matrices, the Lebesgue measure supported on a subset of the phase space [0, 1)^N is the unique ergodic measure relative to the given time-invariant Bernoulli drive if and only if all units can be reached, in the induced graph, by a path starting from a unit receiving external input. In addition, we studied the geometry of the support of the Lebesgue distribution and uncovered its self-similar structure.
units. Similar to the influence of leaks, the inclusion of inhibitory units removes the strictly non-inhabited region in state space and the independence assumption underlying Eq. (20). Inhibition can easily lead to violations of ergodicity. One example is a network of two populations with strong intrinsic excitatory connectivity which are mutually coupled by inhibition. This ubiquitous connection motif could establish a winner-take-all network, in which one of the populations engages in strongly reverberating activity that completely inhibits activation of any unit in the other population. Clearly, extending our framework to networks with inhibition poses the biggest challenge for future studies. However, we believe that in situations without too strong inhibition on a global scale, reasonable approximations can be made. This could be the case e.g. in normalization
U_j(a) := ⋃_{i=1}^{j} a_i, 1 ≤ j ≤ D(a), and U(a) := U_{D(a)}(a) (34) the avalanche assembly (up to generation j), and S(a) the sum of the cardinalities |a_i|, i.e. the avalanche size.

1. General properties of the model

In this section we introduce some common notation and general properties of the model which are used throughout the appendix. We start by showing that the model is well defined, i.e. that the avalanche duration τ(u + u_0 e_k) < ∞ for all u ∈ C, k ∈ [N]. In fact, as long as Eq. (31) holds for the coupling matrix W, each unit can fire at most once during an avalanche. Thus, the generations a_i of an avalanche a = a(k, u) are pairwise disjoint, i.e. U(a) = ⋃_{i=1}^{D(a)} a_i is a disjoint union.

Lemma A.1. Assuming Eq. (31), for u ∈ C, k ∈ [N], each unit can fire at most once during an avalanche and, in particular, its duration satisfies τ(u + e_k u_0) ≤ N.
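To illustrate, the generation-wise avalanche dynamics can be sketched as follows (a simplified Python implementation under the assumption, as in Eq. (31), that weights are weak enough for every unit to fire at most once; all names are ours):

```python
import numpy as np

def avalanche(u, k, W, U, u0):
    """Deliver external input u0 to unit k, then iterate one generation per
    step: all supra-threshold units fire simultaneously, each firing unit j
    is reset by subtracting U[j], and every unit i receives w_ij."""
    u = u.astype(float).copy()
    u[k] += u0
    generations = []
    while True:
        firing = np.flatnonzero(u >= U)
        if firing.size == 0:
            break
        generations.append(set(firing.tolist()))
        u[firing] -= U[firing]             # threshold reset of firing units
        u += W[:, firing].sum(axis=1)      # recurrent input (w_ij: from j to i)
    return generations, u

# Small example: unit 0 is pushed over threshold and recruits unit 1.
U = np.ones(3)
W = 0.2 * (np.ones((3, 3)) - np.eye(3))
gens, u_new = avalanche(np.array([0.95, 0.9, 0.1]), 0, W, U, u0=0.1)
```

In this example the avalanche lasts two generations, {0} then {1}, each unit fires at most once, and the duration never exceeds N, consistent with Lemma A.1.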
Lemma A.2.
(1) F(u) = u − M⁻¹n for some n ∈ N^[N] if and only if u − M⁻¹n ∈ C and, for every n′ ≤ n with n′ ≠ n (component wise), u − M⁻¹n′ ∉ C.
Since, for I ⊆ [N], we have (M⁻¹δ(I))_i > 0 if and only if i ∈ I, it follows
Corollary A.3. Let N^s(k, u) be the spike count vector after s applications of T starting at (k, u).
Definition A.4.
(1) For a given coupling matrix W ∈ R^[N×N] we define the directed graph G(W) with vertices given by the units [N] and edge set E = E(W), G(W) := ([N], E(W)), where E(W) := {(j, i) ∈ [N] × [N] | w_ij > 0}. (39)
(3) For a probability vector p ∈ [0, 1]^[N] we call the coupling matrix W (or equivalently the associated graph G(W) or the set W(W)) p-reachable if and only if for every unit k ∈ [N] there exists a driven unit ℓ ∈ L := {ℓ ∈ [N] | p(ℓ) > 0} and a path (which can also be the empty path) starting in ℓ along edges in E(W) and terminating in k.

Lemma A.5. The coupling matrix W is p-reachable if and only if M δ(L) > 0 (component wise).

Proof. First note that M⁻¹ = diag(U) − W and, by Eq. (5), W < max(U). Thus we obtain with a Neumann series expansion
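Lemma A.5 can be probed numerically: reachability from the driven set L via a graph search must coincide with componentwise positivity of M δ(L). A sketch (assuming thresholds U = 1, so M = (I − W)⁻¹, and a spectral radius of W below one; function names are ours):

```python
import numpy as np
from collections import deque

def p_reachable_bfs(W, L):
    """Is every unit reachable from some driven unit along edges (j, i) with w_ij > 0?"""
    N = W.shape[0]
    seen, queue = set(L), deque(L)
    while queue:
        j = queue.popleft()
        for i in np.flatnonzero(W[:, j] > 0):
            if int(i) not in seen:
                seen.add(int(i))
                queue.append(int(i))
    return seen == set(range(N))

def p_reachable_M(W, L):
    """Lemma A.5: p-reachable iff (M delta(L)) > 0 component wise."""
    N = W.shape[0]
    delta_L = np.zeros(N)
    delta_L[list(L)] = 1.0
    M = np.linalg.inv(np.eye(N) - W)   # Neumann series I + W + W^2 + ...
    return bool(np.all(M @ delta_L > 0))

# Feed-forward chain 0 -> 1 -> 2 (w_ij is the weight from j to i).
W = np.zeros((3, 3))
W[1, 0] = W[2, 1] = 0.3
```

Driving unit 0 reaches the whole chain; driving only unit 2 does not, and both criteria agree.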
B. Equivalence to a simple translation dynamics on the N-torus

The non-smooth dynamics of spike propagation and membrane potential reset represented by F complicates mathematical analysis of the model. However, Eq. (36) shows that the whole effect of the internal dynamics is summarized by a shift along integer combinations of the columns of M⁻¹. We use this central observation to significantly simplify our dynamical system by restricting its phase space to the inhabited region D, which we set to the image of the N-torus T^[N] under the quotient map θ (see Fig. 2; D is the image of the unit cell marked with a dashed gray outline under F). On D, each iteration step T_k is a bijection and is conjugated via the mapping θ to the shift z → z + u_0 M e_k on T^[N]. This is formalized in Theorem B.2, which establishes topological equivalence between the complex dynamics T and a much simpler translation T̄ on the N-torus.

Definition B.1. We define the skew-product dynamical system T̄ on the N-torus T^[N] by
Proof. Since θ is a surjective map from T^[N] to D, we can endow D with the quotient topology induced by θ, i.e. the open sets on D are the images of open sets on T^[N] under θ. In addition, θ is injective: since F(u) translates only by integer combinations of the columns of M⁻¹, no two z_1, z_2 ∈ T^[N] can be mapped to the same point by θ. This makes θ a homeomorphism from T^[N] to D.
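Under the conjugacy of Theorem B.2, an external input to unit k acts on the torus as the plain translation z → z + u_0 M e_k (mod 1). A small sketch (again assuming U = 1 and a subcritical W; variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
N, u0 = 3, 0.05
W = 0.1 * rng.random((N, N))
np.fill_diagonal(W, 0.0)
M = np.linalg.inv(np.eye(N) - W)   # M = (diag(U) - W)^{-1} with U = 1

def torus_step(z, k):
    """Equivalent dynamics on the N-torus: input to unit k is a rigid translation."""
    return (z + u0 * M[:, k]) % 1.0

z = rng.random(N)
for k in rng.integers(0, N, size=1000):
    z = torus_step(z, int(k))
```

Because rigid translations preserve Lebesgue measure, this picture already suggests why the normalized Lebesgue measure on D is the natural candidate for the invariant measure.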
λ on the N-torus invariant. Let us denote the set of T̄_g-invariant Borel probability measures by M_{T̄_g}. For a fixed shift-invariant probability measure ν on (Σ_N, B) we denote the subset of M_{T̄_g} of elements with marginal ν by M_{T̄_g}(ν) := {µ ∈ M_{T̄_g} : µ ∘ π_1⁻¹ = ν}.
Proposition C.2. If the shift space Σ_N is equipped with the Bernoulli measure ν := B_p with L = {ℓ ∈ [N] | p_ℓ > 0} and g(k) := u_0 M e_{k_1}, then the condition in Eq. (44) is equivalent to the condition that for all k ∈ Z^[N] \ {0} there exists an ℓ ∈ L such that

exp(2πi u_0 k^T M e_ℓ) ≠ 1. (45)

Proof. We first show that the condition in Eq. (44) implies the condition of the corollary by contraposition: if for some k ∈ Z^[N] \ {0} we have exp(2πi u_0 k^T M e_ℓ) = 1 for all ℓ ∈ L, then R = 1 solves condition (44). Conversely, suppose that for some k ∈ Z^N \ {0} there exists a measurable function R on (Σ_N, B) with |R| = 1 such that Eq. (44) holds. Let us set f: ℓ → exp(2πi u_0 k^T M e_ℓ); then we have R(σ(k)) = f(k_1) R(k). Integrating both sides with respect to the Bernoulli measure ν gives ∫ R dν = Σ_{ℓ∈[N]}
Corollary C.3. The system T̄ is uniquely ergodic relative to B_p if the components of u_0 M δ(L) and 1 are rationally independent, i.e. u_0 k^T M δ(L) ∈ Z for k ∈ Z^[N] implies k = 0.

Proof. Suppose that ergodicity does not hold for T̄. With Proposition C.2 it follows that there exists k ∈ Z^[N] \ {0} such that exp(2πi u_0 k^T M e_ℓ) = 1 for all ℓ ∈ L. In particular, this implies that Π_{ℓ∈L} exp(2πi u_0 k^T M e_ℓ) = exp(2πi u_0 k^T M δ(L)) = 1.
all k ∈ Z^[N] \ {0} with k|_P = 0, and c(W) := u_0 M δ(L) with M := (1 − W)⁻¹, the scalar product

j = 0, and we can consider W as an element of R^E. If we could show {∂_W c(W) : W ∈ R^E} = R^{[N]\P}, then, for all k ∈ Z^[N] \ {0} with k|_P = 0, we would clearly have ∇F_k(W) ≠ 0. To verify the latter equality, we fix an arbitrary v ∈ R^[N] with v|_P = 0 and construct a matrix W̃ with sparsity pattern dominated by W such that u_0 M W̃ M δ(L) = v, or equivalently W̃ M δ(L) = u_0⁻¹ (1 − W) v =: y, as follows: for every unit i ∈ [N] \ P pick exactly one j ∈ [N] with W_{i,j} > 0 and set W̃_{i,j} = y_i / (M δ(L))_j; all other entries W̃_{i,[N]\{i}} of the i-th row are chosen to be zero. In particular, for all i ∈ [N] \ P and j ∈ [N], we have W̃_{i,j} = 0 if W_{i,j} = 0. For the remaining rows indexed by
is fulfilled.

Corollary C.6. The homogeneous EHE-model is not uniquely ergodic relative to B_p if p_i = p_j = 0 for two distinct indices i, j ∈ [N].

Proof. Fix i, j ∈ [N] with i ≠ j, p_i = p_j = 0 and set k := e_i − e_j. Then we have u_0 k^T M e_ℓ = 0 for every ℓ ∈ [N] with p_ℓ > 0.

D. Expected firing rates and spike count covariances
E(N_W^s) is obtained by solving Eq. (38) for N_W^s. Let H(s)(k, u) := M(u + u_0 Σ_{t=1}^s e_{k_t}) − M u. By compactness of D, we have |N_W^s(k, u) − H(s)(k, u)| < c uniformly for all k ∈ Σ_N, u ∈ D, s ∈ N, and (u_0 M Σ_{t=1}^s e_{k_t})/s → u_0 M p. Since M is the identity matrix for W = 0 (and U = 1), this asserts Y_0 = u_0 p and Y_W = M Y_0. In addition, N_W(s) and H(s) are Birkhoff sums of f(k, u) = δ(U(a(k, u))) and g(k, u) = ⌊M(u + u_0 e_{k_1})⌋ − ⌊M u⌋, respectively. By Hopf's ratio ergodic theorem [78, Thm. 2.4.24] we have E(f) = E(g). The identity g(k, u)_i = 1 is equivalent to completing a revolution around direction i of the N-torus, (M u mod 1)_i + (u_0 M e_{k_1})_i ≥ 1. Since M u ∼ λ_{[0,1)^N}, the probability of g(k, u)_i = 1 given g(k, u)_{k_1} = 1 is M_{i k_1} / M_{k_1 k_1}.
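The closed-form rates can be evaluated directly; the sketch below also evaluates the asymptotic covariance derived at the end of this subsection (we write the sandwich as M C Mᵀ, consistent with Var(MX) = M Var(X) Mᵀ; thresholds U = 1 and our own variable names are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
W = 0.1 * rng.random((N, N))
np.fill_diagonal(W, 0.0)
p = np.full(N, 1.0 / N)              # uniform external-input probabilities
u0 = 0.02
M = np.linalg.inv(np.eye(N) - W)     # M = (diag(U) - W)^{-1} with U = 1

Y = u0 * M @ p                       # mean rates: Y_W = M Y_0 with Y_0 = u0 p
C = np.diag(p) - np.outer(p, p)      # multinomial covariance per input step
Sigma = u0**2 * M @ C @ M.T          # lim Var(N_W^s)/s
```

Both objects are cheap to compute: the rates are a single matrix-vector product, and the covariance is positive semidefinite by construction.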
Σ_{t=1}^s e_{k_t} is multinomially distributed with success probabilities p and s trials, and thus Var(Σ_{t=1}^s e_{k_t}) = s (diag(p) − p pᵀ). From the boundedness of z_s we have Var(z_s) < c_1 ∈ R independent of s, and with the Cauchy–Schwarz inequality the cross terms grow at most like √s. Hence the two last terms in Eq. (51) vanish in the limit s → ∞ and we get

lim_{s→∞} Var(N_W^s)/s = u_0² M (diag(p) − p pᵀ) Mᵀ.

E. Geometrical structure and self-similarity of the inhabited region

1. Geometrical description and self-similarity of the noninhabited region
Figure 3, panel (d) shows this decomposition for the three-dimensional case H = [N] = [3]. Note that the intersection of the blue region with the noninhabited region is empty. Similarly, the enclosed noninhabited region is just a single Γ_{k} for k ∈ [3] in each blue region, and the unions of Γ_I generating two-dimensional noninhabited regions are enclosed in the red regions. The next Lemma formalizes this self-similar structure of Λ_H for arbitrary subsets H ⊆ [N].
First we show that D ⊆ A. Let u = θ(z) be the image of z ∈ T^[N] in D. Thus we have u = F(W1 + M⁻¹z) = W1 + M⁻¹(z − n) for some n ∈ {0, 1}^N. Now suppose that there exists an n′ ∈ {0, 1}^N such that u + M⁻¹n′ ∈ C. We will show that this implies n′ = 0. First, n′ ≤ n component wise, since u + M⁻¹n′ = W1 + M⁻¹(z − n + n′) and (z − n + n′)_j ≥ 1 if n′_j > n_j. However, using Lemma A.2, n′ ≤ n implies n′ = 0.

To show that A ⊆ D, let x ∈ A be arbitrary. From the condition on A there is a unique way to write x as x = W1 + M⁻¹(n(x) + z(x)) for some n(x) ∈ {0, 1}^N and z(x) ∈ [0, 1)^N, and for x_1, x_2 ∈ A with x_1 ≠ x_2 we have z(x_1) ≠ z(x_2). Since F only subtracts integer combinations of the columns of M⁻¹, this implies that x = θ(z(x)) and thus x ∈ D.

Theorem E.5. The inhabited region D is the complement of Λ_[N] in C = ×_{i∈[N]} [0, U_i).
for some u_0. With Theorem E.5, the inhabited region D_I of this subsystem is given by D_I = π_I(×_{i∈I} [0, U_i)) \ Λ_I. With Theorem B.2, the inhabited region is the image θ(T^[N]). Since F only consists of translations, it is volume-preserving and we have λ_I(D_I) = λ_I(θ(T^I)) = λ_I((diag(U)_I − W_I) T^I) = |diag(U)_I − W_I|.

2. Phase-space regions leading to avalanches

In the previous subsections we have shown unique ergodicity relative to B_p of the normalised Lebesgue measure on the inhabited region D = C \ Λ_[N] and established an understanding of the self-similar geometry of Λ_[N] as well as the corresponding Lebesgue volumes. These insights allow us now to derive probabilities for specific avalanches by identifying the pre-images of the avalanche function a(k, u) and by calculating their phase space volumes with respect to the ergodic measure P.

Definition E.7. We call a vector a = (a_1, ..., a_d) of non-empty pairwise disjoint subsets a_i ⊂ [N], 1 ≤ i ≤ d, with a_1 = {k} an avalanche with duration d (≤ N) starting in k. The coordinate a_j will be called generation j of the avalanche a. The set of all avalanches starting in k ∈ [N] is denoted by A_k and we define the set of all avalanches (including the empty avalanche ()) by A := ⋃_{k∈[N]} A_k ∪ {()}.
F. Avalanche distributions

To arrive at probabilities P(a = a), the volumes of the preimages R(a) have to be normalized by the volume of the region where external input does not result in an avalanche, i.e. the preimages for which a(k, u) = (). The following Lemma specifies these regions and the probability of an empty avalanche given external input to unit k ∈ [N].
Avalanches a with the same set of participating units U(a) thus have the same volume along the [N] \ U(a) dimensions. We will derive a closed form expression for the phase space volume of the union of all such avalanches.

Theorem F.4. The I \ {k} components of the images of all avalanche regions R(a) with avalanche units U(a) = I and started by unit k fill up the inhabited region along dimensions I \ {k} up to the upper boundaries r(I) + u_0 δ({k}):

⋃_{a∈A_k, U(a)=I} T_k(R(a)) ⊆ A.

By Eq. (36), T_k induces a shift by u_0 e_k − M⁻¹δ(I) = u_0 e_k − diag(U)δ(I) + r^I on all states u in R(a) with a ∈ A_k. Since π_{k}(R(a)) = [U_k − u_0, U_k) for all a ∈ A_k, we have π_{k}(T_k(R(a))) = [r^I_k, r^I_k + u_0). The states of all the remaining units which do not participate in the avalanche are just shifted by r^I_{[N]\I}, so that for all a ∈ A_k, π_{[N]\I}(T_k(R(a))) = π_{[N]\I}(R_{U^c(I)} + r^I).
Combinatorial proof of Eq. (65): From Theorem F.3 we have

P_k(U(a) = I) = Σ_{a∈A_k, U(a)=I} P_k(a = a) (66)
= (V_{[N]\I}(U − r^I) / V_{[N]\{k}}(U)) Σ_{a∈A_k, U(a)=I} Π_{j=2}^{D(a)} Π_{i∈a_j} Σ_{l∈a_{j−1}} w_{il}.
Next we expand on the implications of Eqs. (64), (65), and (58).
Corollary G.1 (Phase space volume in dependence of loops). Let U_i = 1 for all i. The volume of the inhabited region depends on the set of all linear directed subgraphs L of W, weighted by the product of their arc weights. Every component of a linear directed subgraph is a directed cycle. Let L_i(W) be the set of all linear directed subgraphs L of W with i nodes and let #(L) be the number of components of L. The product of all arc weights in L is denoted by Π(L).
volume of the inhabited region λ(D) = V_[N](1) = |1 − W_hom|. Since W_hom has only one nonzero eigenvalue, equal to α, Eq. (23) gives λ(D) = 1 − α.
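The determinant identity λ(D) = |1 − W_hom| = 1 − α is easy to confirm numerically (a sketch: W_hom has the constant entry α/N everywhere, so its single nonzero eigenvalue is α):

```python
import numpy as np

N, alpha = 50, 0.7
W_hom = (alpha / N) * np.ones((N, N))   # homogeneous coupling, row sum alpha
vol = np.linalg.det(np.eye(N) - W_hom)  # volume of the inhabited region
```

The N × N determinant collapses to the scalar value 1 − α, independent of N.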
V_{[N]\{k}}(1) = 1 − (N − 1)α/N.

Thus, P_k(U(a) = I) depends in the homogeneous network only on |I| and is independent of the starting unit k. Putting these results together, the distribution of nonempty avalanches is given by

P(S(a) = n | S(a) > 0) = Σ_{∅≠I⊆[N], |I|=n} Σ_{k∈I} P_k(U(a) = I).

For the homogeneous EHE-model it was shown [40] that the avalanche size statistics converges in distribution to the statistics obtained from a Galton–Watson branching process. In this way, the homogeneous EHE-model behaves like a branching process and we may use the branching factor, approximated by the expected number of units in the second step of the avalanche E(S(a_2) | a_1 ≠ ()), to find the critical coupling α_c at which large (but finite) networks display power-law avalanche size statistics. At this coupling, the branching factor should be one, i.e. one unit causes on average one additional unit to fire in the next step of the avalanche. The expected number of units in the second step of the avalanche can be calculated using Eq. (72) for homogeneous networks to be

E(|a_2| | a_1 = {k}) = (N − 1)α(N − (N − 2)α) / (N(N − (N − 1)α)) = E(|a_2| | a_1 ≠ ()).
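The branching factor given above is easy to evaluate; at the critical coupling α_c(N) = 1 − 1/√N it approaches one as N grows (a sketch, our function names):

```python
from math import sqrt

def branching_factor(N, alpha):
    """Expected number of second-generation units in the homogeneous network,
    E(|a_2| | a_1 = {k}), for coupling row sum alpha."""
    return (N - 1) * alpha * (N - (N - 2) * alpha) / (N * (N - (N - 1) * alpha))

bf = branching_factor(10_000, 1 - 1 / sqrt(10_000))
```

The branching factor increases monotonically with α, so "branching factor = 1" singles out one coupling value per network size.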
studied in section V, and for which V^{W_h}_I(U) = V^{W_hom}_I(U + α/N) (Proposition G.2), leads to

E_{W_h}(|a_2| | a_1 = {k}) = (N − 1)α((N − 3)α − N) / ((N − 2)α² + N(N − 3)α − N²) = E_{W_h}(|a_2| | a_1 ≠ ()).
Due to the block matrix structure, each assembly I ⊆ [N] is characterized by the number of participating units in each subnetwork, which we denote by n(I) = (|I ∩ {1, ..., N_1}|, ..., |I ∩ {N − N_c + 1, ..., N}|). The index set of positive entries in the pattern is given by pos(n) = {i ∈ [c] | n_i > 0}. For coupled homogeneous networks, avalanche assemblies are thus described by the vector containing the number of participating units in each subnetwork. Note that the rank of the matrix W_block is the same as the rank of W_c diag(n([N])) and both matrices have the same set of nonzero eigenvalues; thus the volume of the inhabited region can be calculated by a c × c determinant,

λ(D) = V_[N](1) = |1_N − W_block| = |1_c − W_c diag(n([N]))|.

For simplification of the assembly probabilities, we need to compute phase space volumes V_I(U) for block constant vectors U, which we can characterize by a vector v ∈ R^c with U_{1,...,N_1} = v_1, ..., U_{N−N_c,...,N} = v_c. As for the volume of the inhabited region, we have, assuming v > 0 component wise,

V_I(U) = |diag(U)_I − (W_block)_I| = (Π_{i∈pos(n(I))} v_i^{n(I)_i − 1}) |diag(v)_{pos(n(I))} − (W_c)_{pos(n(I))} diag(n(I))_{pos(n(I))}|.
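The rank/eigenvalue argument above can be checked directly: the full 2N × 2N determinant equals the reduced c × c determinant (a sketch for c = 2 equal subnetworks):

```python
import numpy as np

N, alpha, beta = 10, 0.3, 0.1
A = (alpha / N) * np.ones((N, N))
B = (beta / N) * np.ones((N, N))
W_block = np.block([[A, B], [B, A]])                     # full 2N x 2N matrix
W_c = np.array([[alpha / N, beta / N], [beta / N, alpha / N]])
n = np.array([N, N])                                     # units per subnetwork

det_big = np.linalg.det(np.eye(2 * N) - W_block)
det_small = np.linalg.det(np.eye(2) - W_c @ np.diag(n))  # c x c reduction
```

For these parameters both determinants equal (1 − α − β)(1 − α + β).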
P_k(U(a) = I) = 0 if (W_c n(I))_j = 0 for some j ∈ c(I), and otherwise

P_k(U(a) = I) = V_c(W_c n(I), I \ {k}) V_c(1_c − W_c n(I)) / V_c(1_c, [N] \ {k}).

The condition in the first case is true if and only if the subgraph formed by the nodes in I is not connected, and in this case there are no spanning trees of the assembly network. Calculating phase space volumes with V_c only requires evaluating determinants of matrices with at most c × c dimensions.
P_{W_ring}(S(a) = n | S(a) > 0) = … α/2.

Note that there are N line segments of size n < N on the ring. For n = N there is only one line segment, which is the full ring; however, there are then N possible spanning trees instead of only one spanning tree for each line segment with n < N. The coupling matrix W_line of the line network, which arises from the ring network by deletion of a single (undirected) edge, is a tridiagonal matrix with zeros on the diagonal and α/2 on the off-diagonals. The avalanche size distribution for the line network has a similar form as for the ring network, but note that, in contrast to the ring network, the factor V_{[N]\I}(W δ(I)) in Eq. (20) depends in the line network on the number of units to the left and to the right of the line segment corresponding to an assembly:

P_{W_line}(S(a) = n | S(a) > 0) = … v_l(j) v_l(N − n − j),
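The spanning-tree counts quoted above (one per proper line segment, N for the full ring) follow from the matrix-tree theorem; a quick unweighted check (helper name ours):

```python
import numpy as np

def spanning_tree_count(adj):
    """Matrix-tree theorem: any cofactor of the graph Laplacian
    counts the spanning trees of an undirected graph."""
    L = np.diag(adj.sum(axis=1)) - adj
    return round(np.linalg.det(L[1:, 1:]))   # delete one row/column

N = 6
ring = np.zeros((N, N))
for i in range(N):
    ring[i, (i + 1) % N] = ring[i, (i - 1) % N] = 1.0

line = ring.copy()
line[0, N - 1] = line[N - 1, 0] = 0.0        # delete one undirected edge
```

A cycle of N nodes has exactly N spanning trees (remove any one edge), while the resulting line graph has exactly one.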
[N] — neurons/units, set {1, ..., N}
I, J, H — non-empty subsets of [N]
X^Y — set of functions from Y to X, e.g. R^I, T^[N]
i, j, k — indices for units
u, z — states in phase space, u ∈ C, z ∈ T^[N]
s, t — iteration indices
W, w_ij — coupling matrix, interaction weight from j to i
U_I, W_I — restriction of the vector U to the index set I and of the matrix W to I × I
G(W), E(W) — directed graph and edge set induced by W
S, S_k — set of directed (outgoing) spanning trees (rooted at k)
w(S) — sum of products of edge weights for all trees in S
cut(I) — weight of directed graph cut, cut(I) = (W δ(I))_{[N]\I}
L(W) — directed graph Laplacian diag(W1) − W
L^(k)(W) — (k, k) cofactor of the graph Laplacian
Ω_k — matrix of generalized effective resistances, Eq. (22)
|·| — cardinality of sets, determinant of square matrices
A, A_k — set of all avalanches / nonempty avalanches started by unit k
a, a_i — avalanche, generation i of an avalanche
D(a), S(a) — avalanche duration, size
U(a), U_j(a) — set of units (assembly) in avalanche a (until generation j)
e_i, δ(I) — unit vector in R^[N] in direction i, Σ_{i∈I} e_i
1 (vector), 1 (matrix) — constant 1-vector Σ_{i∈[N]} e_i, identity matrix diag(1)
U, 𝕌 — firing thresholds U ∈ R^[N], 𝕌 := diag(U)
M — M = (diag(U) − W)⁻¹; maps from state space to torus
r^I — vector W δ(I) of internal activation during an avalanche with assembly I
C — phase space C := ×_{i∈[N]} [0, U_i)
D — inhabited region D = C \ Λ = Θ(T^[N])
Λ — non-inhabited region, Eq. (17)
A — indicator vector of supra-threshold units, A ∈ R^[N]
F, F̄ — (one generation of the) avalanche dynamics
T_k — T_k(u) = F(u + e_k u_0), maps to the new state after external input to unit k
Σ_N — space of right-infinite (input unit index) sequences, Σ_N = [N]^N
B — Borel σ-algebra
k — element of Σ_N, k = (k_1, k_2, ...)
σ — left shift operator on Σ_N
T — model dynamics formalised as the skew product T(k, u) = (σ(k), T_{k_1}(u))
T^[N] — N-torus
Θ — quotient map from the N-torus to the inhabited region D
T̄, T̄_g — (g-extended) equivalent dynamics on the N-torus
p, B_p — vector of input probabilities, Bernoulli measure on Σ_N with respect to p
L — support of p (set of units receiving external input)
λ, λ_D — Lebesgue measure, normalized Lebesgue measure supported on D
P, E, cov — probability, expectation, covariance operator on (Σ_N × C, B, B_p × λ_D)
a(k, u) — avalanche function
thus offering opportunities for a rigorous analytical treatment. In comparison, the dynamics of the EHE model is far more complex. Although we demonstrated that an equivalent branching process for the EHE model can in principle be defined, stochastic dependencies between the membrane potentials of units belonging to the same strongly connected component of the network make its formal description complicated. On directed acyclic graphs these dependencies disappear, and (only) [29, 33, 74],
Definition E.1. Let ∅ ≠ H ⊆ [N] be an index set. Define the non-inhabited region Λ_H along dimensions H by
To show Λ_[N] ⊆ B, let x ∈ Λ_[N] be arbitrary. Using the decomposition into disjoint sets in Lemma E.3, there exists exactly one set ∅ ≠ I ⊆ [N] such that x_i < Σ_{j∈I} w_ij for all i ∈ I and x_k ≥ Σ_{l∈[N]} w_kl for k ∈ [N] \ I.
Σ_{ℓ∈[N]\{k}} w_kℓ, we can apply Lemma E.2 to get Eq. (62). Eq. (63) follows from Eq. (62) together with Eq. (58).
Proof. Eq. (68) follows from the combinatorial interpretation of the characteristic polynomial of a weighted digraph W [48, Section 1.4].
Theorem G.3. For every H ⊆ [N], the inhabited region decomposes into a direct product of inhabited regions along the strongly connected components of the subgraph H with adjacency matrix W_H.
D. O. Hebb, The Organization of Behavior: A Neuropsychological Theory (Psychology Press, 2005).
W. A. Freiwald and D. Y. Tsao, Functional compartmentalization and viewpoint generalization within the macaque face-processing system, Science 330, 845 (2010).
D. Y. Tsao, S. Moeller, and W. A. Freiwald, Comparing face patch systems in macaques and humans, Proceedings of the National Academy of Sciences 105, 19514 (2008).
A. M. Bastos, J. Vezoli, C. A. Bosman, J.-M. Schoffelen, R. Oostenveld, J. R. Dowdall, P. De Weerd, H. Kennedy, and P. Fries, Visual areas exert feedforward and feedback influences through distinct frequency channels, Neuron 85, 390 (2015).
O. Mazor and G. Laurent, Transient dynamics versus fixed points in odor representations by locust antennal lobe projection neurons, Neuron 48, 661 (2005).
G. Laurent, Dynamical representation of odors by oscillating and evolving neural assemblies, Trends in Neurosciences 19, 489 (1996).
A. Leonardo and M. S. Fee, Ensemble coding of vocal control in birdsong, Journal of Neuroscience 25, 652 (2005).
G. F. Lynch, T. S. Okubo, A. Hanuschkin, R. H. Hahnloser, and M. S. Fee, Rhythmic continuous-time coding in the songbird analog of vocal motor cortex, Neuron 90, 877 (2016).
D. Lipkind, A. T. Zai, A. Hanuschkin, G. F. Marcus, O. Tchernichovski, and R. H. Hahnloser, Songbirds work around computational complexity by learning song vocabulary independently of sequence, Nature Communications 8, 1 (2017).
G. Hahn, A. Ponce-Alvarez, G. Deco, A. Aertsen, and A. Kumar, Portraits of communication in neuronal networks, Nature Reviews Neuroscience 20, 117 (2019).
G. Buzsáki, Neural syntax: cell assemblies, synapsembles, and readers, Neuron 68, 362 (2010).
N. Tomen and U. Ernst, The role of criticality in flexible visual information processing, in The Functional Role of Critical Dynamics in Neural Systems (Springer, 2019), pp. 233-264.
I. Grothe, S. D. Neitzel, S. Mandon, and A. K. Kreiter, Switching neuronal inputs by differential modulations of gamma-band phase-coherence, Journal of Neuroscience 32, 16172 (2012).
D. Harnack, U. A. Ernst, and K. R. Pawelzik, A model for attentional information routing through coherence predicts biased competition and multistable perception, Journal of Neurophysiology 114, 1593 (2015).
J. M. Beggs, The criticality hypothesis: how local cortical networks might optimize information processing, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 366, 329 (2008).
S. Bottani, Pulse-coupled relaxation oscillators: from biological synchronization to self-organized criticality, Physical Review Letters 74, 4189 (1995).
N. Bertschinger and T. Natschläger, Real-time computation at the edge of chaos in recurrent neural networks, Neural Computation 16, 1413 (2004).
S. H. Gautam, T. T. Hoang, K. McClanahan, S. K. Grady, and W. L. Shew, Maximizing sensory dynamic range by tuning the cortical state to criticality, PLOS Computational Biology 11, 1 (2015).
W. L. Shew and D. Plenz, The functional benefits of criticality in the cortex, The Neuroscientist 19, 88 (2013).
D. Plenz and T. C. Thiagarajan, The organizing principles of neuronal avalanches: cell assemblies in the cortex?, Trends in Neurosciences 30, 101 (2007).
S. Papanikolaou, F. Bohn, R. L. Sommer, G. Durin, S. Zapperi, and J. P. Sethna, Universality beyond power laws and the average avalanche shape, Nature Physics 7, 316 (2011).
O. Perković, K. Dahmen, and J. P. Sethna, Avalanches, Barkhausen noise, and plain old criticality, Physical Review Letters 75, 4528 (1995).
J. P. Sethna, K. A. Dahmen, and C. R. Myers, Crackling noise, Nature 410, 242 (2001).
J. M. Beggs and D. Plenz, Neuronal avalanches in neocortical circuits, Journal of Neuroscience 23, 11167 (2003).
T. Petermann, T. C. Thiagarajan, M. A. Lebedev, M. A. Nicolelis, D. R. Chialvo, and D. Plenz, Spontaneous cortical activity in awake monkeys composed of neuronal avalanches, Proceedings of the National Academy of Sciences 106, 15921 (2009).
S. Yu, A. Klaus, H. Yang, and D. Plenz, Scale-invariant neuronal avalanche dynamics and the cut-off in size distributions, PLoS One 9, e99761 (2014).
W. L. Shew, H. Yang, T. Petermann, R. Roy, and D. Plenz, Neuronal avalanches imply maximum dynamic range in cortical networks at criticality, Journal of Neuroscience 29, 15595 (2009).
C. W. Eurich, J. M. Herrmann, and U. A. Ernst, Finite-size effects of avalanche dynamics, Physical Review E 66, 066137 (2002).
O. Kinouchi and M. Copelli, Optimal dynamical range of excitable networks at criticality, Nature Physics 2, 348 (2006).
L. de Arcangelis and H. Herrmann, Self-organized criticality on small world networks, Physica A: Statistical Mechanics and its Applications 308, 545 (2002).
P. Massobrio, V. Pasquale, and S. Martinoia, Self-organized criticality in cortical assemblies occurs in concurrent scale-free and small-world networks, Scientific Reports 5, 1 (2015).
R. Cohen, D. ben-Avraham, and S. Havlin, Percolation critical exponents in scale-free networks, Physical Review E 66, 036113 (2002).
D. B. Larremore, W. L. Shew, and J. G. Restrepo, Critical dynamics in complex networks, in Criticality in Neural Systems, edited by D. Plenz and E. Niebur (John Wiley & Sons, Ltd, 2014), Chap. 17, pp. 365-392.
S. di Santo, P. Villegas, R. Burioni, and M. A. Muñoz, Simple unified view of branching process statistics: Random walks in balanced logarithmic potentials, Physical Review E 95, 032115 (2017).
D. B. Larremore, W. L. Shew, and J. G. Restrepo, Predicting criticality and dynamic range in complex networks: effects of topology, Physical Review Letters 106, 058101 (2011).
D. B. Larremore, M. Y. Carpenter, E. Ott, and J. G. Restrepo, Statistical properties of avalanches in networks, Physical Review E 85, 066131 (2012).
V. Pernice, B. Staude, S. Cardanobile, and S. Rotter, How structure determines correlations in neuronal networks, PLOS Computational Biology 7, 1 (2011).
S. Jovanović and S. Rotter, Interplay between graph topology and correlations of third order in spiking neuronal networks, PLOS Computational Biology 12 (2016).
Y. Hu, S. L. Brunton, N. Cain, S. Mihalas, J. N. Kutz, and E. Shea-Brown, Feedback through graph motifs relates structure and function in complex networks, Physical Review E 98, 062312 (2018).
A. Levina, A mathematical approach to self-organized criticality in neural networks, Ph.D. thesis, University of Göttingen (2008).
T. Leleu and K. Aihara, Unambiguous reconstruction of network structure using avalanche dynamics, Physical Review E 91, 022804 (2015).
Unambiguous reconstruction of network structure using avalanche dynamics. T Leleu, K Aihara, Physical Review E. 9122804T. Leleu and K. Aihara, Unambiguous reconstruction of network structure using avalanche dynamics, Physical Review E 91, 022804 (2015).
Ergodicity of avalanche transformations. M Denker, A Rodrigues, 10.1080/14689367.2014.947244Dyn. Syst. 29517M. Denker and A. Rodrigues, Ergodicity of avalanche transformations, Dyn. Syst. 29, 517 (2014).
Avalanche dynamics. M Denker, A Levina, Stochastics and Dynamics. 161660005M. Denker and A. Levina, Avalanche dynamics, Stochastics and Dynamics 16, 1660005 (2016).
Avalanche size distribution of an integrate-and-fire neural model on complex networks. N Jung, Q A Le, K.-E Lee, J W Lee, Chaos: An Interdisciplinary Journal of Nonlinear Science. 3063118N. Jung, Q. A. Le, K.-E. Lee, and J. W. Lee, Avalanche size distribution of an integrate-and-fire neural model on complex networks, Chaos: An Interdisciplinary Journal of Nonlinear Science 30, 063118 (2020).
M -matrix characterizations. I. Nonsingular M -matrices. R J Plemmons, 10.1016/0024-3795(77)90073-8Linear Algebra Appl. 18175R. J. Plemmons, M -matrix characterizations. I. Nonsingular M -matrices, Linear Algebra Appl. 18, 175 (1977).
Introduction to the modern theory of dynamical systems. A Katok, B Hasselblatt, 10.1017/CBO9780511809187Encyclopedia of Mathematics and its Applications. Katok and Leonardo Mendoza54802Cambridge University PressA. Katok and B. Hasselblatt, Introduction to the modern theory of dynamical systems, Encyclopedia of Mathematics and its Applications, Vol. 54 (Cambridge University Press, Cambridge, 1995) pp. xviii+802, with a supplementary chapter by Katok and Leonardo Mendoza.
Recurrent interactions in spiking networks with arbitrary topology. V Pernice, B Staude, S Cardanobile, S Rotter, Physical Review E. 8531916V. Pernice, B. Staude, S. Cardanobile, and S. Rotter, Recurrent interactions in spiking networks with arbitrary topology, Physical Review E 85, 031916 (2012).
D M Cvetković, M Doob, H Sachs, Spectra of graphs. Theory and applications. Leipzig: J. A. Barth Verlag4473rd ed.D. M. Cvetković, M. Doob, and H. Sachs, Spectra of graphs. Theory and applications., 3rd ed. (Leipzig: J. A. Barth Verlag, 1995) p. 447.
B Bollobás, Modern graph theory. Springer Science & Business Media184B. Bollobás, Modern graph theory, Vol. 184 (Springer Science & Business Media, 2013).
Matrix tree theorems. S Chaiken, D J Kleitman, 10.1016/0097-3165(78)90067-5J. Combinatorial Theory Ser. A. 24377S. Chaiken and D. J. Kleitman, Matrix tree theorems, J. Combinatorial Theory Ser. A 24, 377 (1978).
K Khosoussi, G S Sukhatme, S Huang, G Dissanayake, arXiv:1604.01116Maximizing the weighted number of spanning trees: Near-t-optimal graphs. arXiv preprintK. Khosoussi, G. S. Sukhatme, S. Huang, and G. Dissanayake, Maximizing the weighted number of spanning trees: Near-t-optimal graphs, arXiv preprint arXiv:1604.01116 (2016).
Resistance distance. D J Klein, M Randić, Journal of Mathematical Chemistry. 1281D. J. Klein and M. Randić, Resistance distance, Journal of Mathematical Chemistry 12, 81 (1993).
A simple method for computing resistance distance. R B Bapat, I Gutmana, W Xiao, Zeitschrift für Naturforschung A. 58494R. B. Bapat, I. Gutmana, and W. Xiao, A simple method for computing resistance distance, Zeitschrift für Naturforschung A 58, 494 (2003).
The electrical resistance of a graph captures its commute and cover times. A K Chandra, P Raghavan, W L Ruzzo, R Smolensky, P Tiwari, computational complexity. 6312A. K. Chandra, P. Raghavan, W. L. Ruzzo, R. Smolensky, and P. Tiwari, The electrical resistance of a graph captures its commute and cover times, computational complexity 6, 312 (1996).
R Lyons, Y Peres, Probability on trees and networks. Cambridge University Press42R. Lyons and Y. Peres, Probability on trees and networks, Vol. 42 (Cambridge University Press, 2017).
Equivalence of cellular automata to ising models and directed percolation. E Domany, W Kinzel, 10.1103/PhysRevLett.53.311Phys. Rev. Lett. 53311E. Domany and W. Kinzel, Equivalence of cellular automata to ising models and directed percolation, Phys. Rev. Lett. 53, 311 (1984).
Phase transitions of cellular automata. W Kinzel, Zeitschrift für Physik B Condensed Matter. 58229W. Kinzel, Phase transitions of cellular automata, Zeitschrift für Physik B Condensed Matter 58, 229 (1985).
Non-equilibrium critical phenomena and phase transitions into absorbing states, Advances in. H Hinrichsen, Physics. 49815H. Hinrichsen, Non-equilibrium critical phenomena and phase transitions into absorbing states, Ad- vances in Physics 49, 815 (2000).
Bounds for norms of the matrix inverse and the smallest singular value. N Morača, Linear Algebra and its Applications. 4292589N. Morača, Bounds for norms of the matrix inverse and the smallest singular value, Linear Algebra and its Applications 429, 2589 (2008).
Collective dynamics of 'small-world'networks. D J Watts, S H Strogatz, nature. 393440D. J. Watts and S. H. Strogatz, Collective dynamics of 'small-world'networks, nature 393, 440 (1998).
Description of spreading dynamics by microscopic network models and macroscopic branching processes can differ due to coalescence. J Zierenberg, J Wilting, V Priesemann, A Levina, Physical Review E. 10122301J. Zierenberg, J. Wilting, V. Priesemann, and A. Levina, Description of spreading dynamics by mi- croscopic network models and macroscopic branching processes can differ due to coalescence, Physical Review E 101, 022301 (2020).
Neuronal avalanche dynamics indicates different universality classes in neuronal cultures. M Yaghoubi, T De Graaf, J G Orlandi, F Girotto, M A Colicos, J Davidsen, Scientific Reports. 83417M. Yaghoubi, T. de Graaf, J. G. Orlandi, F. Girotto, M. A. Colicos, and J. Davidsen, Neuronal avalanche dynamics indicates different universality classes in neuronal cultures, Scientific Reports 8, 3417 (2018).
Brain computation by assemblies of neurons. C H Papadimitriou, S S Vempala, D Mitropolsky, M Collins, W Maass, Proceedings of the National Academy of Sciences. 11714464C. H. Papadimitriou, S. S. Vempala, D. Mitropolsky, M. Collins, and W. Maass, Brain computation by assemblies of neurons, Proceedings of the National Academy of Sciences 117, 14464 (2020).
Reliable sequential activation of neural assemblies by single pyramidal cells in a three-layered cortex. M Hemberger, M Shein-Idelson, L Pammer, G Laurent, Neuron. 104353M. Hemberger, M. Shein-Idelson, L. Pammer, and G. Laurent, Reliable sequential activation of neural assemblies by single pyramidal cells in a three-layered cortex, Neuron 104, 353 (2019).
Synchronous spike patterns in macaque motor cortex during an instructed-delay reach-to-grasp task. E Torre, P Quaglio, M Denker, T Brochier, A Riehle, S Grün, Journal of Neuroscience. 368329E. Torre, P. Quaglio, M. Denker, T. Brochier, A. Riehle, and S. Grün, Synchronous spike patterns in macaque motor cortex during an instructed-delay reach-to-grasp task, Journal of Neuroscience 36, 8329 (2016).
Selective participation of single cortical neurons in neuronal avalanches. T Bellay, W L Shew, S Yu, J J Falco-Walter, D Plenz, Frontiers in Neural Circuits. 90T. Bellay, W. L. Shew, S. Yu, J. J. Falco-Walter, and D. Plenz, Selective participation of single cortical neurons in neuronal avalanches, Frontiers in Neural Circuits , 90 (2021).
Long-term stability of avalanche scaling and integrative network organization in prefrontal and premotor cortex. S R Miller, S Yu, S Pajevic, D Plenz, Network Neuroscience. 5505S. R. Miller, S. Yu, S. Pajevic, and D. Plenz, Long-term stability of avalanche scaling and integrative network organization in prefrontal and premotor cortex, Network Neuroscience 5, 505 (2021).
An ultra-sparse code underliesthe generation of neural sequences in a songbird. R H Hahnloser, A A Kozhevnikov, M S Fee, Nature. 41965R. H. Hahnloser, A. A. Kozhevnikov, and M. S. Fee, An ultra-sparse code underliesthe generation of neural sequences in a songbird, Nature 419, 65 (2002).
Support for a synaptic chain model of neuronal sequence generation. M A Long, D Z Jin, M S Fee, Nature. 468394M. A. Long, D. Z. Jin, and M. S. Fee, Support for a synaptic chain model of neuronal sequence generation, Nature 468, 394 (2010).
Choice-specific sequences in parietal cortex during a virtualnavigation decision task. C D Harvey, P Coen, D W Tank, Nature. 48462C. D. Harvey, P. Coen, and D. W. Tank, Choice-specific sequences in parietal cortex during a virtual- navigation decision task, Nature 484, 62 (2012).
Internally generated cell assembly sequences in the rat hippocampus. E Pastalkova, V Itskov, A Amarasingham, G Buzsáki, Science. 3211322E. Pastalkova, V. Itskov, A. Amarasingham, and G. Buzsáki, Internally generated cell assembly se- quences in the rat hippocampus, Science 321, 1322 (2008).
Uniformly most-reliable graphs and antiholes. G Rela, F Robledo, P Romero, International Conference on Machine Learning, Optimization, and Data Science. SpringerG. Rela, F. Robledo, and P. Romero, Uniformly most-reliable graphs and antiholes, in International Conference on Machine Learning, Optimization, and Data Science (Springer, 2019) pp. 434-444.
Functional brain connectivity is predictable from anatomic network's Laplacian eigen-structure. F Abdelnour, M Dayan, O Devinsky, T Thesen, A Raj, NeuroImage. 172728F. Abdelnour, M. Dayan, O. Devinsky, T. Thesen, and A. Raj, Functional brain connectivity is pre- dictable from anatomic network's Laplacian eigen-structure, NeuroImage 172, 728 (2018).
Effects of network topology, transmission delays, and refractoriness on the response of coupled excitable systems to a stochastic stimulus. D B Larremore, W L Shew, E Ott, J G Restrepo, Chaos: An Interdisciplinary Journal of Nonlinear Science. 2125117D. B. Larremore, W. L. Shew, E. Ott, and J. G. Restrepo, Effects of network topology, transmission delays, and refractoriness on the response of coupled excitable systems to a stochastic stimulus, Chaos: An Interdisciplinary Journal of Nonlinear Science 21, 025117 (2011).
Critical neuronal models with relaxed timescale separation. A Das, A Levina, Physical Review X. 921062A. Das and A. Levina, Critical neuronal models with relaxed timescale separation, Physical Review X 9, 021062 (2019).
The scale-invariant, temporal profile of neuronal avalanches in relation to cortical γ-oscillations. S R Miller, S Yu, D Plenz, Scientific Reports. 91S. R. Miller, S. Yu, and D. Plenz, The scale-invariant, temporal profile of neuronal avalanches in relation to cortical γ-oscillations, Scientific Reports 9, 1 (2019).
Strict ergodicity and transformation of the torus. H Furstenberg, 10.2307/2372899Amer. J. Math. 83573H. Furstenberg, Strict ergodicity and transformation of the torus, Amer. J. Math. 83, 573 (1961).
Infinite Ergodic Theory of Numbers. M Kesseböhmer, S Munday, B O Stratmann, 10.1515/9783110439427De Gruyter191BerlinM. Kesseböhmer, S. Munday, and B. O. Stratmann, Infinite Ergodic Theory of Numbers, De Gruyter Graduate (De Gruyter, Berlin, 2016) pp. xiii+191.
The abelian distribution. A Levina, J M Herrmann, Stochastics and Dynamics. 141450001A. Levina and J. M. Herrmann, The abelian distribution, Stochastics and Dynamics 14, 1450001 (2014).
. R P Stanley, 10.1017/CBO9780511609589Cambridge Studies in Advanced Mathematics. Sergey Fomin2581Cambridge University PressEnumerative combinatorics.R. P. Stanley, Enumerative combinatorics. Vol. 2 , Cambridge Studies in Advanced Mathematics, Vol. 62 (Cambridge University Press, Cambridge, 1999) pp. xii+581, with a foreword by Gian-Carlo Rota and appendix 1 by Sergey Fomin.
A SIMPLE MODEL OF 4D-TQFT
22 May 2014
RINAT KASHAEV
We show that, associated with any complex root of unity ω, there exists a particularly simple 4d-TQFT model M_ω defined on the cobordism category of Delta complexes. For an oriented closed 4-manifold X of Euler characteristic χ(X), it is conjectured that the quantity N^{3χ(X)/2} M_ω(X), where N is the order of ω, takes only finitely many values as a function of ω. In particular, it is equal to 1 for S^4, to (3 + (−1)^N)/2 for S^2 × S^2, and to N^{−1/2} Σ_{k=1}^{N} ω^{k²} for CP^2.
Introduction
Pachner or bistellar moves are known to form a finite set of operations on triangulations such that arbitrary triangulations of a piecewise linear (PL) manifold can be related by a finite sequence of Pachner moves [12,11]. As a result, the combinatorial framework of triangulated PL manifolds, combined with algebraic realizations of Pachner moves, can be useful for constructing combinatorial 4-dimensional topological quantum field theories (TQFT) [16,1]. Realization of this scheme in three dimensions was initiated in the Regge-Ponzano model [13], where the Pachner moves are realized algebraically in terms of the angular momentum 6j-symbols satisfying the five-term Biedenharn-Elliott identity [3,7], which eventually led to the Turaev-Viro TQFT model [15] and subsequent generalizations based on the theory of linear monoidal categories [14]. The same scheme is more difficult to realize in four dimensions, mainly because of the complicated nature of the algebraic constructions generalizing those of linear monoidal categories, though some realizations are known [6,5,4,9,10]. In this paper, to any complex root of unity ω, we associate a rather simple model M_ω of 4d-TQFT defined on the cobordism category of Delta complexes [8]. The definition is as follows.
We denote by N ≡ ord(ω) the order of ω, and we recall that in any Delta complex realizing an oriented d-manifold, each d-simplex S comes equipped with a sign ǫ(S), taking the positive value 1 if the orientation induced by the linear order on the vertices of S agrees with the orientation of the manifold. We specify M_ω by associating the vector space C^N to each positive tetrahedron and the dual vector space C^N* to each negative tetrahedron. For a pentachoron (4-simplex) P realizing an oriented 4-ball, we associate the vector

(1)  M_ω(P) ∈ M_ω(∂P) = ⊗_{i=0}^{4} M_ω(∂_i P)

defined by the formula

(2)  M_ω(P) = Q if ǫ(P) = 1, and M_ω(P) = Q̄ otherwise,

where

(3)  Q = N^{−1/2} Σ_{k,l,m∈Z/NZ} ω^{km} e_k ⊗ ē_{k+l} ⊗ e_l ⊗ ē_{l+m} ⊗ e_m,

(4)  Q̄ = N^{−1/2} Σ_{k,l,m∈Z/NZ} ω^{−km} ē_k ⊗ e_{k+l} ⊗ ē_l ⊗ e_{l+m} ⊗ ē_m,
with {e k } k∈Z/N Z and {ē k } k∈Z/N Z being the canonical dual bases of C N and C N * respectively.
Let X be an arbitrary Delta complex representing an oriented 4-manifold. We define

(5)  M_ω(X) = N^{−|X_0^{int}|} Ev(⊗_{P∈X} M_ω(P)),

where the tensor product is taken over all pentachora of X, Ev is the operation of contracting along all the internal tetrahedra of X, and |X_0^{int}| is the number of internal vertices of X. Our main result is the following theorem.
Theorem 1. M ω is a well defined 4d-TQFT.
The paper is organized as follows. In the next two sections we prove Theorem 1 by showing the independence of M_ω of the branching of Delta triangulations and its invariance under the Pachner moves. In the last section, we provide examples of calculation which hint that, despite the simplicity of the model, the associated invariant might be interesting.
Behavior under branching changes
Any Delta triangulation comes equipped with a branching, meaning that the vertices of each triangle are linearly ordered.

Proposition 1. For any two compact oriented 4-manifold Delta triangulations X and Y differing by a change of branching, one has the equality

(6)  M_ω(X) = b(M_ω(Y)),

where b : M_ω(∂X) → M_ω(∂Y) is an isomorphism of vector spaces.
Let us fix a square root √ω. Following [2], we define a function

(7)  Φ : Z/NZ → C,  Φ(k) = (√ω)^{k(k+N)},

which has the obvious properties

(8)  Φ(k)² = ω^{k²},  Φ(−k) = Φ(k),  Φ(k+l) = Φ(k)Φ(l)ω^{kl}.

We also denote

(9)  Φ̄(k) ≡ 1/Φ(k).
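Since the arguments below manipulate Φ repeatedly, the properties (8) can also be verified numerically. The sketch below is an illustrative check (not part of the original derivation), using the assumed concrete choices ω = e^{2πi/N} and √ω = e^{iπ/N}:

```python
import cmath

N = 5
w = cmath.exp(2j * cmath.pi / N)        # assumed concrete primitive N-th root of unity
sqrt_w = cmath.exp(1j * cmath.pi / N)   # a fixed square root of w

def Phi(k):
    # Phi(k) = (sqrt(w))^{k(k+N)}; well defined on Z/NZ for this choice of sqrt(w)
    return sqrt_w ** (k * (k + N))

# the three properties (8)
props_hold = all(
    abs(Phi(k) ** 2 - w ** (k * k)) < 1e-9
    and abs(Phi(-k) - Phi(k)) < 1e-9
    and abs(Phi(k + l) - Phi(k) * Phi(l) * w ** (k * l)) < 1e-9
    for k in range(N) for l in range(N))
print(props_hold)  # True
```

The exponent k(k+N) (rather than k²) is exactly what makes Φ periodic modulo N for this choice of square root.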
Next, we define two vector space isomorphisms

(10)  S, T : C^N* → C^N

by the formulae

(11)  Sē_k = N^{−1/2} Σ_{l∈Z/NZ} Φ(k−l) e_l,  Tē_k = Φ(k) e_{−k}.

Notice that their inverses are given by the Hermitian conjugate maps:

(12)  S^{−1} e_k = S̄ e_k = N^{−1/2} Σ_{l∈Z/NZ} Φ̄(k−l) ē_l,  T^{−1} e_k = T̄ e_k = Φ̄(k) ē_{−k}.

We also define the permutation maps

(13)  P : C^N* ⊗ C^N → C^N ⊗ C^N*,  P̄ = P^{−1} : C^N ⊗ C^N* → C^N* ⊗ C^N.
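The claim (12), that the inverses of S and T are their Hermitian conjugates (i.e. that both maps are unitary), can be checked numerically. Below is a small sketch of such a check, again under the assumed concrete choices ω = e^{2πi/N}, √ω = e^{iπ/N}; the matrices encode (11) in the bases {e_l}, {ē_k}:

```python
import cmath

N = 5
w = cmath.exp(2j * cmath.pi / N)        # assumed concrete root of unity
sqrt_w = cmath.exp(1j * cmath.pi / N)   # a square root of w

def Phi(k):
    return sqrt_w ** (k * (k + N))

# S and T as N x N matrices: column k holds the image of e-bar_k, per (11)
S = [[Phi(k - l) / N ** 0.5 for k in range(N)] for l in range(N)]
T = [[Phi(k) if l == (-k) % N else 0.0 for k in range(N)] for l in range(N)]

def dagger(a):
    return [[a[j][i].conjugate() for j in range(N)] for i in range(N)]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def is_id(a, tol=1e-9):
    return all(abs(a[i][j] - (i == j)) < tol
               for i in range(N) for j in range(N))

# (12): Hermitian conjugates invert S and T
S_unitary = is_id(mul(dagger(S), S))
T_unitary = is_id(mul(dagger(T), T))
```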
The proof of Proposition 1 is based on the following lemma.
Lemma 1. One has the equalities

(14)  Q = (P ⊗ T ⊗ T̄ ⊗ T)Q̄ = (T ⊗ P̄ ⊗ S̄ ⊗ S)Q̄ = (S ⊗ S̄ ⊗ P ⊗ T)Q̄ = (T ⊗ T̄ ⊗ T ⊗ P̄)Q̄,

where the vectors Q and Q̄ are defined in (3) and (4). (In each product, the factor P or P̄ acts on two adjacent tensor factors of Q̄, and the remaining operators act on one factor each.)
Proof. Let us prove the first equality:

(15)  √N (P ⊗ T ⊗ T̄ ⊗ T)Q̄
  = Σ_{k,l,m∈Z/NZ} ω^{−km} e_{k+l} ⊗ ē_k ⊗ Tē_l ⊗ T̄e_{l+m} ⊗ Tē_m
  = Σ_{k,l,m∈Z/NZ} ω^{−km} Φ(l)Φ̄(l+m)Φ(m) e_{k+l} ⊗ ē_k ⊗ e_{−l} ⊗ ē_{−l−m} ⊗ e_{−m}
  = Σ_{k,l,m∈Z/NZ} ω^{−km−lm} e_{k+l} ⊗ ē_k ⊗ e_{−l} ⊗ ē_{−l−m} ⊗ e_{−m}
  = Σ_{k,l,m∈Z/NZ} ω^{−km} e_k ⊗ ē_{k−l} ⊗ e_{−l} ⊗ ē_{−l−m} ⊗ e_{−m}
  = Σ_{k,l,m∈Z/NZ} ω^{km} e_k ⊗ ē_{k+l} ⊗ e_l ⊗ ē_{l+m} ⊗ e_m
  = √N Q,

where in the third equality we used the last property in (8), in the fourth equality we shifted the summation variable k → k − l, and in the fifth equality we negated the summation variables l and m. The other relations are proved in a similar manner.
Proof of Proposition 1. For a triangle f of a Delta triangulation, let C(f) be the set of tetrahedra containing f. Let X and Y be two Delta triangulations differing in the orientation of only one edge e. The change of the orientation of e results in changing the sign of each pentachoron of X containing e. By applying the appropriate equality of Lemma 1 to each such pentachoron in M_ω(X), we observe that for each triangle f containing e, there is a cancellation of a pair of S or T operators for each internal tetrahedron of C(f). In this way, we immediately obtain the equality M_ω(X) = b(M_ω(Y)), where b is given by the tensor product of the non-canceled S or T operators acting on the boundary tetrahedra. We finish the proof by remarking that any branching change can be obtained as a finite sequence of single edge orientation changes.
Invariance under the Pachner moves
A Pachner move in dimension 4 is associated with a splitting of the boundary of a 5-simplex into two non-empty disjoint sets of 4-simplices (pentachora). A Pachner move is called of type (k,l), with k + l = 6, if the two disjoint subsets of pentachora consist of k and l elements, respectively. Thus, altogether, we have Pachner moves of the three possible types (3,3), (2,4) and (1,5). Let us discuss in more detail their algebraic realizations in terms of polynomial identities for the matrix coefficients of the vectors (3) and (4), defined by the formulae

(16)  Q^{i,j,k}_{l,m} ≡ ⟨ē_i ⊗ e_l ⊗ ē_j ⊗ e_m ⊗ ē_k, Q⟩ = N^{−1/2} ω^{ik} δ_{l,i+j} δ_{m,j+k}

and

(17)  Q̄^{l,m}_{i,j,k} ≡ ⟨e_i ⊗ ē_l ⊗ e_j ⊗ ē_m ⊗ e_k, Q̄⟩ = N^{−1/2} ω^{−ik} δ_{l,i+j} δ_{m,j+k}.
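The closed formula (16) can be checked against the definition (3) by expanding Q as a 5-index array and comparing coefficients. The script below is an illustrative check (not from the original text), again with the assumed choice ω = e^{2πi/N}:

```python
import cmath
from itertools import product

N = 4
w = cmath.exp(2j * cmath.pi / N)

# expand the vector Q of (3): components indexed by (slot1, ..., slot5)
Q = {}
for k, l, m in product(range(N), repeat=3):
    idx = (k, (k + l) % N, l, (l + m) % N, m)
    Q[idx] = Q.get(idx, 0) + w ** (k * m) / N ** 0.5

def q_formula(i, j, k, l, m):
    # the closed formula (16) for Q^{i,j,k}_{l,m}
    if l == (i + j) % N and m == (j + k) % N:
        return w ** (i * k) / N ** 0.5
    return 0

# the pairing in (16) reads the five slots in the order (i, l, j, m, k)
match = all(
    abs(Q.get((i, l, j, m, k), 0) - q_formula(i, j, k, l, m)) < 1e-9
    for i, j, k, l, m in product(range(N), repeat=5))
```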
3.1. The type (3,3). This is the most fundamental Pachner move, as it is the only one which can be written in a form involving only pentachora of one and the same sign and, in a sense, it implies all the other types. Consider a 5-simplex with linearly ordered vertices A = {v_0, v_1, . . . , v_5}. Its boundary is composed of six pentachora ∂_i A = A \ {v_i}, of which three are positive, corresponding to even i's, and three are negative, corresponding to odd i's. All even (respectively odd) pentachora compose a 4-ball, to be called the even (respectively odd) 4-ball, so that the boundaries of the two balls are naturally identified as simplicial complexes. Both of these balls, when considered separately, are composed only of positive pentachora, and the corresponding algebraic condition on the vector Q takes the form

(18)  Σ_{s,t,u∈Z/NZ} Q^{i,l,m}_{s,t} Q^{s,j,n}_{p,u} Q^{t,u,k}_{q,r} = Σ_{s,t,u∈Z/NZ} Q^{m,n,k}_{s,t} Q^{l,j,t}_{u,r} Q^{i,u,s}_{p,q},

where the left hand side corresponds to the even 4-ball and the right hand side to the odd one, while the summations on both sides run over their own interior tetrahedra. Namely, denoting the tetrahedron A \ {v_i, v_j} by A_{ij}, the indices s, t, u correspond to the tetrahedra A_{02}, A_{04} and A_{24} in the even 4-ball, and to the tetrahedra A_{15}, A_{35} and A_{13} in the odd 4-ball, while the exterior indices i, j, k, l, m, n, p, q, r on both sides correspond to the boundary tetrahedra A_{01}, A_{23}, A_{45}, A_{03}, A_{05}, A_{25}, A_{12}, A_{14}, A_{34}, respectively. All other forms of the Pachner relation of the type (3,3) can be obtained from (18) by applying the symmetry relations (14).

Lemma 2. The Pachner relation (18) holds true for the weights (16).

Proof. By substituting one after another the explicit forms from (16), we have

N^{3/2}(l.h.s. of (18)) = N Σ_u ω^{im} Q^{i+l,j,n}_{p,u} Q^{l+m,u,k}_{q,r} = N^{1/2} ω^{im+(i+l)n} δ_{p,i+l+j} Q^{l+m,j+n,k}_{q,r} = ω^{im+(i+l)n+(l+m)k} δ_{p,i+l+j} δ_{q,l+m+j+n} δ_{r,j+n+k},

and, similarly,

N^{3/2}(r.h.s. of (18)) = N Σ_u ω^{mk} Q^{l,j,n+k}_{u,r} Q^{i,u,m+n}_{p,q} = N^{1/2} ω^{mk+l(n+k)} δ_{r,j+n+k} Q^{i,l+j,m+n}_{p,q} = ω^{mk+l(n+k)+i(m+n)} δ_{r,j+n+k} δ_{p,i+l+j} δ_{q,l+j+m+n}.

Comparing the obtained expressions, we see that they are the same.
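The identity verified in the proof can also be confirmed by brute force: with the weights (16), the two sides of the type (3,3) relation, written with interior labels s, t, u as in the proof of Lemma 2, agree for every choice of exterior indices. A minimal sketch (N = 2 for speed; larger N works the same way, only slower):

```python
import cmath
from itertools import product

N = 2
w = cmath.exp(2j * cmath.pi / N)   # here simply -1

def q(i, j, k, l, m):
    # the matrix coefficients (16): Q^{i,j,k}_{l,m}
    if l == (i + j) % N and m == (j + k) % N:
        return w ** (i * k) / N ** 0.5
    return 0

def lhs(i, j, k, l, m, n, p, qq, r):
    # even 4-ball: interior tetrahedra labelled s, t, u
    return sum(q(i, l, m, s, t) * q(s, j, n, p, u) * q(t, u, k, qq, r)
               for s, t, u in product(range(N), repeat=3))

def rhs(i, j, k, l, m, n, p, qq, r):
    # odd 4-ball
    return sum(q(m, n, k, s, t) * q(l, j, t, u, r) * q(i, u, s, p, qq)
               for s, t, u in product(range(N), repeat=3))

ok33 = all(abs(lhs(*ix) - rhs(*ix)) < 1e-9
           for ix in product(range(N), repeat=9))
```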
Remark 1. It is interesting to note that, by defining three linear maps

(19)  L^i, M^j, R^k : C^N ⊗ C^N → C^N ⊗ C^N,
      Q^{i,j,k}_{l,m} = ⟨ē_j ⊗ ē_k, L^i(e_l ⊗ e_m)⟩ = ⟨ē_i ⊗ ē_k, M^j(e_l ⊗ e_m)⟩ = ⟨ē_i ⊗ ē_j, R^k(e_l ⊗ e_m)⟩,

we can rewrite the system (18) as a 3-index family of matrix Yang-Baxter relations in C^N ⊗ C^N ⊗ C^N:

(20)  L^i_{12} M^j_{13} R^k_{23} = R^k_{23} M^j_{13} L^i_{12},

with the standard meaning of the subscripts, for example, L^i_{12} ≡ L^i ⊗ id_{C^N}, etc. It would be interesting to understand the significance of this fact for the relationship of 4d-TQFT with lattice integrable models of statistical mechanics.
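The Yang-Baxter relations (20) can be verified numerically by realizing L^i, M^j, R^k from (19) as sparse matrices on the three-fold tensor product. The following sketch is an illustrative check (not part of the original text), for N = 3 with ω = e^{2πi/N}:

```python
import cmath
from itertools import product

N = 3
w = cmath.exp(2j * cmath.pi / N)

def qc(i, j, k, l, m):
    # the matrix coefficients (16)
    if l == (i + j) % N and m == (j + k) % N:
        return w ** (i * k) / N ** 0.5
    return 0

basis = list(product(range(N), repeat=3))  # basis of C^N (x) C^N (x) C^N

def two_site(coeff, p1, p2):
    # embed a two-site operator acting on tensor factors p1 and p2
    op = {}
    for x in basis:
        for o1, o2 in product(range(N), repeat=2):
            c = coeff(o1, o2, x[p1], x[p2])
            if abs(c) > 1e-15:
                y = list(x)
                y[p1], y[p2] = o1, o2
                op[(tuple(y), x)] = op.get((tuple(y), x), 0) + c
    return op

def mul(a, b):
    # sparse matrix product a @ b (b acts first)
    out = {}
    for (i, k1), av in a.items():
        for (k2, j), bv in b.items():
            if k1 == k2:
                out[(i, j)] = out.get((i, j), 0) + av * bv
    return out

def same(a, b, tol=1e-9):
    return all(abs(a.get(t, 0) - b.get(t, 0)) < tol for t in set(a) | set(b))

# (19): L^i outputs slots (j,k), M^j outputs (i,k), R^k outputs (i,j)
L = lambda i: two_site(lambda o1, o2, i1, i2: qc(i, o1, o2, i1, i2), 0, 1)
M = lambda j: two_site(lambda o1, o2, i1, i2: qc(o1, j, o2, i1, i2), 0, 2)
R = lambda k: two_site(lambda o1, o2, i1, i2: qc(o1, o2, k, i1, i2), 1, 2)

yang_baxter = all(
    same(mul(mul(L(i), M(j)), R(k)), mul(mul(R(k), M(j)), L(i)))
    for i, j, k in product(range(N), repeat=3))
```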
Remark 2. Another equivalent form of the system (18) is given by a 3-index family of "twisted" pentagon relations, either for the R-matrices

(21)  R^m_{12} R^n_{13} R^k_{23} = Σ_{s,t} Q^{m,n,k}_{s,t} R^t_{23} R^s_{12} = N^{−1/2} ω^{mk} R^{n+k}_{23} R^{m+n}_{12},

or for the L-matrices

(22)  L^m_{23} L^l_{13} L^i_{12} = Σ_{s,t} Q^{i,l,m}_{s,t} L^s_{12} L^t_{23} = N^{−1/2} ω^{im} L^{i+l}_{12} L^{l+m}_{23},

where we use the matrices defined in (19).

3.2. The type (2,4). We split the pentachora of the 5-simplex A = {v_0, v_1, . . . , v_5} into the subset of the two pentachora ∂_1 A and ∂_3 A and the complementary subset of the other four pentachora. The corresponding algebraic relation takes the form

(23)  Σ_{k,m,n,v,w,x} Q^{i,l,m}_{v,w} Q^{v,j,n}_{p,x} Q^{w,x,k}_{q,r} Q̄^{s,t}_{m,n,k} = Σ_u Q^{i,u,s}_{p,q} Q^{l,j,t}_{u,r},

and all other forms can be obtained from it by using the symmetry relations (14).
Lemma 3. The relation (23) holds true for the weights (16) and (17).

Proof. We rewrite (23) in the equivalent matrix form

(24)  Σ_{k,m,n} R^m_{12} R^n_{13} R^k_{23} Q̄^{s,t}_{m,n,k} = R^t_{23} R^s_{12},

and easily prove it by using (21):

(25)  Σ_{k,m,n} R^m_{12} R^n_{13} R^k_{23} Q̄^{s,t}_{m,n,k} = N^{−1/2} Σ_n ω^{−(s−n)(t−n)} R^{s−n}_{12} R^n_{13} R^{t−n}_{23} = N^{−1} Σ_n R^t_{23} R^s_{12} = R^t_{23} R^s_{12}.
Remark 3. As the proof of Lemma 3 shows, the Pachner relation of the type (2,4), given by equation (23), is clearly weaker than the Pachner relation of the type (3,3), given by equation (18). Namely, we cannot reverse the argument of the proof to obtain an equivalence between the two relations.
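The type (2,4) contraction can likewise be checked directly from the coefficients (16) and (17); the index pattern below is the one appearing in the proof of Lemma 4 (cf. (27)), and the precise labelling of the summed indices is my reading of that proof, flagged as an assumption. A brute-force sketch for N = 2:

```python
import cmath
from itertools import product

N = 2
w = cmath.exp(2j * cmath.pi / N)   # here simply -1

def q(i, j, k, l, m):
    # coefficients (16)
    if l == (i + j) % N and m == (j + k) % N:
        return w ** (i * k) / N ** 0.5
    return 0

def qbar(i, j, k, l, m):
    # coefficients (17): Qbar^{l,m}_{i,j,k}, with lower indices (i,j,k)
    if l == (i + j) % N and m == (j + k) % N:
        return w ** (-i * k) / N ** 0.5
    return 0

def lhs(i, j, l, p, qq, r, s, t):
    # four pentachora; interior labels k, m, n, v, vv, x
    return sum(q(i, l, m, v, vv) * q(v, j, n, p, x) * q(vv, x, k, qq, r)
               * qbar(m, n, k, s, t)
               for k, m, n, v, vv, x in product(range(N), repeat=6))

def rhs(i, j, l, p, qq, r, s, t):
    # two pentachora; interior label u
    return sum(q(i, u, s, p, qq) * q(l, j, t, u, r) for u in range(N))

ok24 = all(abs(lhs(*ix) - rhs(*ix)) < 1e-9
           for ix in product(range(N), repeat=8))
```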
3.3. The type (1,5). We split the pentachora of the 5-simplex A = {v_0, v_1, . . . , v_5} into the set composed of only one pentachoron ∂_1 A and the complementary set of the other five pentachora. The corresponding algebraic relation takes the form

(26)  N^{−1} Σ_{j,k,l,m,n,r,t,v,w,x} Q^{i,l,m}_{v,w} Q^{v,j,n}_{p,x} Q^{w,x,k}_{q,r} Q̄^{s,t}_{m,n,k} Q̄^{u,r}_{l,j,t} = Q^{i,u,s}_{p,q},

where we have taken into account the fact that the vertex v_1 is in the interior of the 4-ball corresponding to the left hand side of (26) and, according to our TQFT rules, we have divided the corresponding sum by N. As before, all other forms of the Pachner relations of the type (1,5) can be obtained from (26) by using the symmetry relations (14).

Lemma 4. The relation (26) holds true for the weights (16) and (17).

Proof. By using (23), we write

(27)  N^{−1} Σ_{j,k,l,m,n,r,t,v,w,x} Q^{i,l,m}_{v,w} Q^{v,j,n}_{p,x} Q^{w,x,k}_{q,r} Q̄^{s,t}_{m,n,k} Q̄^{u,r}_{l,j,t}
    = N^{−1} Σ_{j,l,r,t,x} Q^{l,j,t}_{x,r} Q^{i,x,s}_{p,q} Q̄^{u,r}_{l,j,t}
    = N^{−2} Σ_{j,l,r,t,x} δ_{x,u} δ_{x,l+j} δ_{r,j+t} Q^{i,x,s}_{p,q}
    = Q^{i,u,s}_{p,q} N^{−2} Σ_{j,l,r,t} δ_{u,l+j} δ_{r,j+t}
    = Q^{i,u,s}_{p,q} N^{−2} Σ_{j,l,t} δ_{u,l+j}
    = Q^{i,u,s}_{p,q} N^{−2} Σ_{l,t} 1
    = Q^{i,u,s}_{p,q}.

Examples of calculation

Let us illustrate the calculation of the partition function M_ω(S^4) by using the standard triangulation of the 4-sphere with two pentachora of opposite signs and five vertices; this is the easiest example to calculate. Similar but much more lengthy calculations are needed for the other examples. Leaving the details to a separate publication, we collect the results into Table 1, where χ(X) is the Euler characteristic.

Table 1.
  X                    χ(X)    N^{3χ(X)/2} M_ω(X)
  S^4                   2       1
  S^2 × S^2             4       (3 + (−1)^N)/2
  CP^2                  3       N^{−1/2} Σ_{k=1}^{N} ω^{k²}
  S^3 × S^1             0       1
  S^2 × S^1 × S^1       0       (3 + (−1)^N)/2

This table is consistent with the following conjecture.

Conjecture 1. For a given oriented closed 4-manifold X, the normalized quantum invariant N^{3χ(X)/2} M_ω(X), considered as a function on the set of all complex roots of unity, takes only finitely many different values.

Date: May 22, 2014. Supported in part by Swiss National Science Foundation.
References

[1] Michael Atiyah. Topological quantum field theories. Inst. Hautes Études Sci. Publ. Math., (68):175-186, 1988.
[2] V. V. Bazhanov and R. J. Baxter. New solvable lattice models in three dimensions. J. Statist. Phys., 69(3-4):453-485, 1992.
[3] L. C. Biedenharn. An identity by the Racah coefficients. J. Math. Physics, 31:287-293, 1953.
[4] J. Scott Carter, Louis H. Kauffman, and Masahico Saito. Structures and diagrammatics of four-dimensional topological lattice field theories. Adv. Math., 146(1):39-100, 1999.
[5] Louis Crane and Igor B. Frenkel. Four-dimensional topological quantum field theory, Hopf categories, and the canonical bases. J. Math. Phys., 35(10):5136-5154, 1994.
[6] Louis Crane and David Yetter. A categorical construction of 4D topological quantum field theories. In Quantum topology, volume 3 of Ser. Knots Everything, pages 120-130. World Sci. Publ., River Edge, NJ, 1993.
[7] J. P. Elliott. Theoretical studies in nuclear structure. V. The matrix elements of non-central forces with an application to the 2p-shell. Proc. Roy. Soc. London. Ser. A., 218:345-370, 1953.
[8] Allen Hatcher. Algebraic topology. Cambridge University Press, Cambridge, 2002.
[9] I. G. Korepanov. Euclidean 4-simplices and invariants of four-dimensional manifolds. I. Surgeries 3 → 3. Teoret. Mat. Fiz., 131(3):377-388, 2002.
[10] Igor G. Korepanov and Nurlan M. Sadykov. Parameterizing the simplest Grassmann-Gaussian relations for Pachner move 3-3. SIGMA Symmetry Integrability Geom. Methods Appl., 9:Paper 053, 19, 2013.
[11] W. B. R. Lickorish. Simplicial moves on complexes and manifolds. In Proceedings of the Kirbyfest (Berkeley, CA, 1998), volume 2 of Geom. Topol. Monogr., pages 299-320. Geom. Topol. Publ., Coventry, 1999.
[12] Udo Pachner. P.L. homeomorphic manifolds are equivalent by elementary shellings. European J. Combin., 12(2):129-145, 1991.
[13] G. Ponzano and T. Regge. Semiclassical limit of Racah coefficients. In Spectroscopic and group theoretical methods in physics, pages 1-58. North-Holland Publ. Co., Amsterdam, 1968.
[14] V. G. Turaev. Quantum invariants of knots and 3-manifolds, volume 18 of de Gruyter Studies in Mathematics. Walter de Gruyter & Co., Berlin, 1994.
[15] V. G. Turaev and O. Ya. Viro. State sum invariants of 3-manifolds and quantum 6j-symbols. Topology, 31(4):865-902, 1992.
[16] Edward Witten. Topological quantum field theory. Comm. Math. Phys., 117(3):353-386, 1988.
One-Vote Veto: A Self-Training Strategy for Low-Shot Learning of a Task-Invariant Embedding to Diagnose Glaucoma

Rui Fan, Christopher Bowd, Nicole Brye, Mark Christopher, Robert N. Weinreb, David Kriegman, Linda Zangwill
UC San Diego
Convolutional neural networks (CNNs) are a promising technique for automated glaucoma diagnosis from images of the fundus, and these images are routinely acquired as part of an ophthalmic exam. Nevertheless, CNNs typically require a large amount of well-labeled data for training, which may not be available in many biomedical image classification applications, especially when diseases are rare and where labeling by experts is costly. This paper makes two contributions to address this issue: (1) it introduces a new network architecture and training method for low-shot learning when labeled data are limited and imbalanced, and (2) it introduces a new semi-supervised learning strategy that uses additional unlabeled training data to achieve high accuracy. Our multi-task twin neural network (MTTNN) can use any backbone CNN, and we demonstrate with ResNet-50 and MobileNet-v2 that its accuracy with limited training data approaches the accuracy of a finetuned backbone trained with a dataset that is 50 times larger. We also introduce One-Vote Veto (OVV) self-training, a semi-supervised learning strategy designed specifically for MTTNNs. By taking both self-predictions and contrastive-predictions of the unlabeled training data into account, OVV self-training provides additional pseudo labels for finetuning a pretrained MTTNN. Using a large dataset with more than 50,000 fundus images acquired over 25 years, extensive experimental results demonstrate the effectiveness of low-shot learning with MTTNN and semi-supervised learning with OVV. Three additional, smaller clinical datasets of fundus images acquired under different conditions (cameras, instruments, locations, populations) are used to demonstrate the generalizability of the methods. Source code and pretrained models will be publicly available upon publication.
Introduction
Glaucoma is a blinding but treatable disease in which damage to the optic nerve can result in progressive and irreversible vision loss [56]. In 2013, glaucoma affected an estimated 64.3 million individuals worldwide [51]. Because of the rapid global increase in aging populations, 111.8 million cases are expected by 2040 [51]. Improvement in the management of glaucoma would have a major human and socio-economic impact. Early identification would significantly reduce the economic burden of this disease in its late stages [52]. In addition, studies have shown that visual impairment in glaucoma patients is associated with overall decreases in self-reported vision-related physical activity and mental health [18,38] as well as an increased risk of involvement in motor vehicle accidents [26,35].
With recent advances in machine learning, convolutional neural networks (CNNs), trained by supervised learning, have shown their potential to become an effective tool to diagnose glaucoma from fundus images, photographs of the back of the eye [28]. To achieve this, very large amounts of empirical data are required for supervised training (see Fig. 1). For this work, we use 66715 fundus photographs from the Ocular Hypertension Treatment Study (OHTS) [15,21,14], a 22-site, multi-center, longitudinal (1994-2019) randomized clinical trial of 1636 subjects. The primary goal of the OHTS [15] was to determine whether topical ocular hypotensive medications could delay or prevent the onset of glaucoma in eyes with high intraocular pressure. Conversion to glaucoma was decided by a masked endpoint committee of three glaucoma specialists using fundus photographs and visual fields. Owing to its well-characterized ground truth labels, the OHTS dataset [15,21] provides us a basis to explore an effective way of training CNNs to diagnose glaucoma with low-shot learning (LSL) and/or semi-supervised learning (SSL), when only a small quantity of labeled data are available. This is a promising research area that requires more attention because of its potential to be applied to other biomedical image analysis tasks, where supervised learning often suffers from small sample size issues. However, as shown in Fig. 1, (inductive) SSL typically requires a reliable pretrained CNN (trained on a small sample) as prior knowledge. Providing such a CNN is often challenging due to the overfitting problem. Moreover, there is a strong motivation to design a feasible SSL strategy that is capable of determining confident predictions and generating pseudo labels for unlabeled data. Therefore, this paper aims at answering the following questions:
1. Can a CNN be developed to accurately diagnose glaucoma compared to the expert graders of the OHTS [15,21,14]? Will the model be generalizable to other datasets? 2. Is it necessary to train CNNs with thousands of labeled fundus images to diagnose glaucoma? Can we use only one image from each patient (around 1.1K fundus images in total in the training set)? 3. Can performance be improved further when the CNN trained using a small sample is finetuned by SSL with more unlabeled training data?
To answer these questions, we first use ResNet-50 [16], the most commonly utilized CNN for glaucoma diagnosis [29,39,36,8,9], as the backbone CNN to explore the feasibility of training a multi-task twin neural network (MTTNN) to diagnose glaucoma with only 1147 fundus images (one image from each patient). Additionally, developing low-cost and real-time embedded glaucoma diagnosis systems for mobile devices is also an emerging area [19,33,50]. In this regard, we also use MobileNet-v2 [42] as an alternative backbone in a MTTNN. Furthermore, we propose an effective SSL strategy, referred to as One-Vote Veto (OVV) self-training, to generate confident pseudo labels of the unlabeled training data, which are then added to the labeled training data to finetune the CNNs for better performance and generalization ability. Extensive experimental results demonstrate that by using such a MTTNN, the overfitting problem can be reduced, and the achieved area under the receiver operating characteristic curve (AUROC, or simply AUC) is only slightly lower than that of the backbone CNN trained with 53K fundus images under full supervision. Furthermore, the MTTNNs finetuned with OVV self-training perform similarly to the corresponding backbone CNNs trained via supervised learning on the OHTS [14] test set, and with higher AUC on three additional independent clinical datasets.
Related Work
Image Classification
AlexNet [25] is one of the modern CNNs that pushed image classification accuracy significantly. VGG architectures [43] improved over AlexNet [25] by increasing the network depth, which enabled them to learn more complicated image features. However, VGG architectures consist of hundreds of millions of parameters, making them very memory-consuming. GoogLeNet [46] (also known as Inception-v1) and Inception-v3 [47] go deeper in parallel paths with different receptive field sizes, so that the Inception module can act as a multi-level image feature extractor. Compared to VGG [43] architectures, GoogLeNet [46] and Inception-v3 [47] have lower computational complexity.
However, with the increase of network depth, accuracy gets saturated and then degrades rapidly [16], due to vanishing gradients. To tackle this problem, [16] introduces residual neural network (ResNet). It is comprised of a stack of building blocks, each of which utilizes identity mapping (skip connection) to add the output from the previous layer to the layer ahead. This helps gradients propagate. Due to its robustness, ResNet-50 [16] has been extensively used for biomedical image analysis, and it is a common choice [29,39,36,8,9] for fundus image classification.
In recent years, researchers have turned their focus to light-weight CNNs, which can be embedded in mobile devices to perform real-time image classification. MobileNet-v2 [42], ShuffleNet-v2 [31], and MNASNet [49] are three of the most popular CNNs of this kind. MobileNet-v2 [42] mainly introduces the inverted residual and linear bottleneck layer, which allows high accuracy and efficiency in mobile vision applications. We chose to use ResNet-50 [16] and MobileNet-v2 [42] as the backbone CNNs in our experiments as they are two representative CNNs used in medical image analysis and disease detection problems.
Low-Shot Learning
Machine/deep learning has achieved compelling performance in data-intensive applications, but it often struggles when the training set is small [55]. LSL came into existence to address this. Existing LSL algorithms can be categorized into four classes: 1) multi-task learning, 2) embedding learning, 3) learning with external memory, and 4) generative modeling [55].

Figure 2: An illustration of our multi-task twin neural network for learning glaucoma diagnosis.

Multi-task learning simultaneously learns multiple related tasks by exploiting task-generic and task-specific information [4]. Embedding learning learns a feature (embedding) h from each image x, as illustrated in Fig. 2, such that the image features from the same class are closer in the embedding space, and vice versa [20]. Embedding learning algorithms are typically classified as either task-specific embedding models or task-invariant embedding models. The first group learns an embedding for each task, using the information only from that task, while the second category learns a general embedding with various outputs. The twin (or Siamese) neural network [24] is one of the most representative models of the second type. It is typically used for one/few/low-shot image recognition, e.g., of faces or digits. In Sec. 3.1 and 4.2.2, we explore the feasibility of learning glaucoma diagnosis using a MTTNN. Learning with external memory extracts prior knowledge from a low-shot training set and stores it in an external memory, such as a key-value memory [37]. Each test image is then predicted based on a weighted average of contents extracted from the memory. Our proposed OVV self-training strategy, as introduced in Sec. 3.2, is also motivated by the mechanism of learning with external memory, where the labels of unlabeled training data are predicted by a classifier trained on a collection of images in the low-shot training set. Finally, generative modeling algorithms aim at estimating the probability distribution from observed data with the help of prior knowledge, typically based on Bayes' theorem [27,13].
Semi-Supervised Learning
In comparison to LSL, which uses only a small amount of labeled data, SSL also includes a large amount of unlabeled data for training [5], as illustrated in Fig. 1. It falls between unsupervised learning and supervised learning. The most common taxonomy categorizes existing methods as either inductive or transductive [54]. The former requires a pretrained model (typically yielded via supervised learning) to produce pseudo labels of the unlabeled data, while the latter does not require such a model to do so. Self-training [53] and co-training [2] are the two most popular inductive SSL approaches. The essential distinction between them is the number of pretrained networks used to produce pseudo labels: self-training uses one, while co-training uses two. Developing an effective policy to identify confident predictions to produce pseudo labels is, therefore, the key to SSL. Moreover, preparing a reliable classifier with only a small amount of labeled data is notably demanding. Here, we apply a combination of SSL and LSL to biomedical image classification tasks, such as glaucoma diagnosis from fundus images.
Methodology
Multi-Task Twin Neural Network
As mentioned in Sec. 2.2, a twin neural network is a typical task-invariant embedding model, commonly utilized for metric learning and one/few/low-shot image recognition [24]. As its name implies, a twin neural network contains two identical sub-networks. A given pair of images x_i and x_j are separately fed into these sub-networks, which then output two 1D embeddings (features) h_i and h_j, respectively. Φ(·) measures the distance between h_i and h_j by computing their vector absolute difference (AD) or squared difference (SD) h_{i,j}, which is then followed by a fully connected (FC) layer to produce a scalar q(x_i, x_j) ∈ [0, 1] indicating the similarity between x_i and x_j. When x_i and x_j belong to different classes, q(x_i, x_j) approaches 1, and vice versa. y_i ∈ {0, 1} and y_j ∈ {0, 1} denote the ground truth labels of x_i and x_j, respectively, where 0 is healthy and 1 is glaucomatous optic neuropathy (GON).
However, such a twin neural network can only determine whether x i and x j belong to the same category, i.e., healthy or GON, instead of predicting their independent categories. A straightforward solution is to separately connect h i and h j with a FC layer to produce the scalars p(x i ) and p(x j ) indicating the probabilities that x i and x j are GON, respectively. See Fig. 2 and note that the two FC layers use the same weights. In this paper, we refer to the CNN architecture in Fig. 2 as a MTTNN, which is capable of simultaneously classifying a given pair of fundus images into either healthy or GON as well as measuring their similarity. Using such a task-invariant embedding model, we can have C(n, 2) different combinations of image pairs, where n represents the low-shot sample size. For example, the 1147 fundus images we used in our experiments can provide over 657K different pairs, which can overcome the small sample size issue and prevent overfitting.
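The pairing head described above can be sketched in a few lines. This is a hypothetical minimal sketch: the backbone (e.g., ResNet-50) is abstracted away, a randomly initialized linear layer stands in for each FC layer, and the names `TwinHead`, `w_cls`, and `w_sim` are ours, not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TwinHead:
    """Head of the MTTNN after the shared backbone (illustrative sketch).

    h_i and h_j are the 1D embeddings the backbone produces for an image pair.
    """
    def __init__(self, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.w_cls = rng.normal(size=dim) * 0.1   # shared FC producing p(x)
        self.w_sim = rng.normal(size=dim) * 0.1   # FC on the distance h_{i,j}

    def forward(self, h_i, h_j, distance="AD"):
        # Phi(.): absolute difference (AD) or squared difference (SD)
        d = np.abs(h_i - h_j) if distance == "AD" else (h_i - h_j) ** 2
        q = sigmoid(self.w_sim @ d)        # similarity: ~1 if classes differ
        p_i = sigmoid(self.w_cls @ h_i)    # P(x_i is GON), shared weights
        p_j = sigmoid(self.w_cls @ h_j)    # P(x_j is GON), same weights
        return p_i, p_j, q
```

Note that because the two classification FC layers share weights, identical embeddings always yield identical predictions, and a zero distance yields q = 0.5 in this bias-free sketch.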
In this paper, we use n_0 and n_1 to denote the numbers of healthy and GON fundus images, respectively, used for LSL. n_0 is usually much greater than n_1, because there are fewer patients with glaucomatous disease than healthy patients, resulting in a severe dataset imbalance problem. In this regard, we apply two weights,

ω_cla = n_0 / (n_0 + n_1) and ω_sim = [n_0(n_0 − 1) + n_1(n_1 − 1)] / [(n_0 + n_1)(n_0 + n_1 − 1)],

to weigh the losses L_cla and L_sim that are used for the fundus image classification and input similarity measurement tasks, respectively. ω_cla represents the proportion of healthy fundus images in the training set, while ω_sim represents the proportion of cases in which a given labeled fundus image pair z = [(x_i, y_i), (x_j, y_j)] belongs to the same category.

Figure 3: An illustration of our One-Vote Veto self-training strategy. h^d_{1,k} and h^d_{m,k} are two 1D embeddings, followed by a FC layer to produce scalars indicating the similarities between the given pairs of reference and target fundus images.

We train our MTTNN by minimizing a combined weighted (binary) cross-entropy loss, as follows:
L(z) = λ L_cla(z) + L_sim(z),    (1)

where

L_cla(z) = −ω_cla [y_i log(p(x_i)) + y_j log(p(x_j))] − (1 − ω_cla) [(1 − y_i) log(1 − p(x_i)) + (1 − y_j) log(1 − p(x_j))],    (2)

L_sim(z) = −ω_sim |y_i − y_j| log(q(x_i, x_j)) − (1 − ω_sim)(1 − |y_i − y_j|) log(1 − q(x_i, x_j)),    (3)
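The imbalance weights and the combined loss (1)-(3) can be sketched for a single image pair as follows. This is a minimal numpy sketch under our reading of the equations; the function names `class_weights` and `mttnn_loss` are ours.

```python
import numpy as np

def class_weights(n0, n1):
    """Imbalance weights from the counts (n0 healthy, n1 GON images)."""
    w_cla = n0 / (n0 + n1)
    w_sim = (n0 * (n0 - 1) + n1 * (n1 - 1)) / ((n0 + n1) * (n0 + n1 - 1))
    return w_cla, w_sim

def mttnn_loss(p_i, p_j, q, y_i, y_j, w_cla, w_sim, lam=0.3):
    """Combined weighted cross-entropy L = lam * L_cla + L_sim, eqs. (1)-(3)."""
    l_cla = -(w_cla * (y_i * np.log(p_i) + y_j * np.log(p_j))
              + (1 - w_cla) * ((1 - y_i) * np.log(1 - p_i)
                               + (1 - y_j) * np.log(1 - p_j)))
    t = abs(y_i - y_j)                     # 1 iff the pair's labels differ
    l_sim = -(w_sim * t * np.log(q)
              + (1 - w_sim) * (1 - t) * np.log(1 - q))
    return lam * l_cla + l_sim
```

For the low-shot split reported in Sec. 3.1 (995 healthy vs. 152 GON images), `class_weights(995, 152)` gives ω_cla ≈ 0.867 and ω_sim ≈ 0.770, so the rare GON terms are up-weighted in both tasks.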
λ is a hyper-parameter utilized to balance L_cla and L_sim. The motivations for using such a combined weighted cross-entropy loss function to train our MTTNN, instead of the conventionally used triplet loss [17] or contrastive loss [23], are:
1. Most datasets for glaucoma diagnosis are imbalanced. As detailed in Sec. 4.1, the OHTS [14] training set is severely imbalanced (50208 healthy images vs. 2416 GON images for supervised learning, and 995 healthy images vs. 152 GON images for LSL). Learning from such an imbalanced dataset without weights on the different classes can result in many incorrect predictions (most GON images are likely to be predicted as healthy images). In this regard, different weights should be used for different classes (a higher weight for the minority class) to prevent the CNN from predicting all images as the majority class.

2. It is usually difficult to weigh different types of losses, e.g., regression and classification, in multi-task learning [22]. An undesirable weight can leave one task performing poorly while the other tasks converge to satisfactory results. Therefore, formulating L_sim as a weighted cross-entropy loss function is a simple but effective solution. However, due to the dataset imbalance problem discussed above, ω_sim is included in (3).

As shown in Fig. 3, our proposed self-training strategy requires both labels and probabilities (of being GON images), predicted by a pretrained model, to produce pseudo labels of the unlabeled data. Such a network architecture and training loss can efficiently and effectively provide both "self-predicted" and "contrastively-predicted" labels and probabilities, as explained in Sec. 3.2.
It should be noted here that the images from the same patient are not used as an image pair for MTTNN training. The selection of λ and Φ(·) is discussed in Sec. 4.2.2.
One-Vote Veto Self-Training
As introduced in Sec. 2.3, self-training aims to improve the performance of a pretrained model, by incorporating confident predictions of the unlabeled data to obtain useful additional information that can be used during training. A feasible strategy to determine such confident predictions is, therefore, the key to self-training [34].
For conventional SSL algorithms, given an image classification model pretrained through supervised learning, a straightforward way to determine whether an unlabeled image is confident enough to be included for model finetuning is to compare the probability assigned to its most likely class with a predetermined threshold. If that probability exceeds the threshold, the prediction is considered a pseudo label. The image and its pseudo label are then incorporated into the labeled data to finetune the pretrained model. However, relying on probability distributions alone to generate pseudo labels is often not sufficient [53]. Inspired by learning with external memory [45], we introduce OVV self-training in this paper, as illustrated in Fig. 3. Similar to learning with external memory [45], we use a collection of m reference (labeled) fundus images {x^r_1, ..., x^r_m} ∈ X^r to provide "contrastive predictions" for the target (unlabeled) fundus images {x^t_1, ..., x^t_m} ∈ X^t, where the superscripts r and t represent "reference" and "target", respectively. The contrastive predictions then vote to veto the unconfident "self-predictions" {ỹ^t_1, ..., ỹ^t_m} produced by the MTTNN. Our OVV self-training algorithm is detailed in Algorithm 1, where the target model updates its parameters during self-training but the reference model does not.

Algorithm 1: One-Vote Veto self-training strategy.
Data: X^r, Y^r, and X^t
 1  while training do
 2      given a mini-batch consisting of {x^r_1, ..., x^r_m} ∈ X^r, {y^r_1, ..., y^r_m} ∈ Y^r and {x^t_1, ..., x^t_m} ∈ X^t;
 3      P ← ∅;
 4      for i ← 1 to m do
 5          if |p(x^t_i) − 1/2| > 1/2 − κ_2 then
 6              w ← 0;
 7              for j ← 1 to m do
 8                  if |p(x^r_j) − y^r_j| < κ_2 then
 9                      w ← w + 1;
10                      P ← P ∪ {x^r_j, y^r_j};
11                      v_1(j) ← ỹ^{r→t}_i(x^r_j, x^t_i);
12                      v_2(j) ← p^{r→t}(x^r_j, x^t_i);
13              if (Σ_j v_1(j) ≤ κ_1 ∨ Σ_j v_1(j) ≥ w − κ_1) ∧ all |v_2(j) − 1/2| > 1/2 − κ_2 then
14                  P ← P ∪ {x^t_i, ỹ^t_i};
15      finetune the target model using P;
16      if the target model outperforms the reference model then
17          update the reference model parameters;
When finetuning a MTTNN pretrained through LSL, each mini-batch contains a discrete set of m reference fundus images {x^r_1, ..., x^r_m} ∈ X^r, their ground truth labels {y^r_1, ..., y^r_m} ∈ Y^r, and the same number m of target fundus images {x^t_1, ..., x^t_m} ∈ X^t without labels. h^r_k and h^t_k represent the 1D embeddings learned from x^r_k and x^t_k (k ∈ [1, m] ∩ Z), respectively. Given a pair of reference and target fundus images, x^r_q and x^t_k, the pretrained MTTNN can "self-predict" the scalars p(x^r_q) and p(x^t_k), which indicate the probabilities that x^r_q and x^t_k are GON, respectively, as well as their labels ỹ^r_q = δ(p(x^r_q)) and ỹ^t_k = δ(p(x^t_k)) using its fundus image classification functionality (δ(p) = 1 when p > 0.5, and δ(p) = 0 otherwise). p(x^r_q) is then used to determine whether x^r_q is qualified to veto unconfident predictions. In the meantime, the pretrained MTTNN can also "contrastively predict" the scalar
p^{r→t}(x^r_q, x^t_k) = |p(x^r_q) − q(x^r_q, x^t_k)|,    (4)

indicating the GON probability, as well as the label

ỹ^{r→t}_k(x^r_q, x^t_k) = |δ(p(x^r_q)) − δ(q(x^r_q, x^t_k))|    (5)
of x^t_k from x^r_q using its input similarity measurement functionality. Please note: ỹ^{r→t}_k(x^r_q, x^t_k) is typically unequal to δ(p^{r→t}(x^r_q, x^t_k)). In order to determine whether ỹ^t_k is confident and can be used as the pseudo label of x^t_k, all the reference fundus images {x^r_1, ..., x^r_m} ∈ X^r in the mini-batch are used to provide additional judgements. Each contrastively predicted GON probability and label together form a vote (p^{r→t}(x^r_q, x^t_k), ỹ^{r→t}_k(x^r_q, x^t_k)). If |p(x^r_q) − y^r_q| > κ_2, we omit its vote, where κ_2 is a threshold used to select qualified reference fundus images.
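Equations (4)-(5) and the thresholding function δ can be written directly; this small sketch uses our own function names:

```python
def delta(p):
    """Hard-threshold a probability into a label (the paper's delta)."""
    return 1 if p > 0.5 else 0

def contrastive_prediction(p_ref, q_ref_tgt):
    """Eqs. (4)-(5): predict the target's GON probability and label from a
    reference image's self-prediction p_ref and the pair similarity q_ref_tgt."""
    p_rt = abs(p_ref - q_ref_tgt)                  # eq. (4)
    y_rt = abs(delta(p_ref) - delta(q_ref_tgt))    # eq. (5)
    return p_rt, y_rt
```

The sketch also illustrates the remark that ỹ^{r→t} is typically unequal to δ(p^{r→t}): e.g., for p_ref = 0.6 and q = 0.4 the contrastive probability is 0.2 (so δ gives 0), while the contrastive label from eq. (5) is 1.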
With all votes collected from the qualified reference fundus images, the OVV self-training algorithm determines whetherỹ t k should be used as the pseudo label for x t k based on the following criteria:
• Identical to the manner of determining qualified reference fundus images, if any p^{r→t}(x^r_j, x^t_k) (j ∈ [1, m] ∩ Z) or p(x^t_k) is not close to either 0 (healthy) or 1 (GON), as evaluated by the threshold κ_2, ỹ^t_k will not be assigned to x^t_k.
• If a minority of more than κ_1 qualified reference fundus images disagree with the majority of the qualified reference fundus images, ỹ^t_k will not be assigned to x^t_k.

As discussed in Sec. 4.2.2, κ_1 = 0 (all the qualified reference images vote for the same category) achieves the best overall performance. In this regard, the above-mentioned strategy is referred to as "One-Vote Veto" in this paper. Since each target fundus image must be compared with all the reference fundus images in the same mini-batch, the proposed self-training strategy has a computational complexity of O(n²), which is relatively memory-consuming. The confident target fundus images and their pseudo labels are then added to the low-shot data to finetune the pretrained MTTNN with supervised learning by minimizing a weighted cross-entropy loss. The OVV performance with respect to different κ_1, κ_2 and m is shown in Sec. 4.2.2.
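The veto criteria above, combined with the confidence checks of Algorithm 1, can be sketched as one mini-batch step in plain Python. This is a simplified sketch: we assume the contrastive votes have already been computed with eqs. (4)-(5), the κ defaults are illustrative, and all names are ours.

```python
def ovv_pseudo_labels(p_t, p_r, y_r, votes, kappa1=0, kappa2=0.2):
    """One-Vote Veto pseudo-labeling for one mini-batch (sketch of Algorithm 1).

    p_t:   self-predicted GON probabilities of the m target images
    p_r:   self-predicted GON probabilities of the m reference images
    y_r:   ground-truth labels of the reference images
    votes: votes[j][i] = (p_rt, y_rt), the contrastive prediction of target i
           from reference j, per eqs. (4)-(5)
    Returns {target index: pseudo label}.
    """
    pseudo = {}
    m = len(p_t)
    for i in range(m):
        # the target's own prediction must be confident
        if abs(p_t[i] - 0.5) <= 0.5 - kappa2:
            continue
        # collect votes from qualified references only
        qual = [votes[j][i] for j in range(m) if abs(p_r[j] - y_r[j]) < kappa2]
        # every contrastive probability must also be confident
        if not qual or any(abs(p - 0.5) <= 0.5 - kappa2 for p, _ in qual):
            continue
        # one-vote veto: at most kappa1 dissenters allowed on either side
        s = sum(y for _, y in qual)
        if s <= kappa1 or s >= len(qual) - kappa1:
            pseudo[i] = 1 if p_t[i] > 0.5 else 0
    return pseudo
```

With κ_1 = 0, a single dissenting qualified reference vetoes the pseudo label, which is the behavior the name "One-Vote Veto" describes.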
Experimental Results
Datasets and Experimental Setups
The datasets used in our experiments were collected at various intervals by different clinicians from different institutes using different fundus cameras. Their details are as follows:
• OHTS: The OHTS [15,21,14] recruited 1636 ocular hypertensive participants with elevated intraocular pressure [14]. The OHTS [14]

The aforementioned four test sets are visualized with t-SNE [32], as shown in Fig. 4. Since healthy and GON images are distributed similarly between the OHTS [14] and LAG [28] datasets, we expect models to perform similarly on these data. The dissimilar distribution of the ACRIMA [11] dataset led us to believe that the performance of models on this dataset would be somewhat worse. Using these four datasets, we conduct three experiments:

Footnote 1: The number of fundus images published is fewer than what was reported in publication [28].
• Supervised learning experiment: we utilize transfer learning [48] to train ResNet-50 [16] and MobileNet-v2 [42] (pretrained on the ImageNet database [10]) on the entire OHTS [14] training set. The best-performing models of these two CNNs are selected using the OHTS [14] validation set. Their performances are evaluated on the OHTS [14] test set, the ACRIMA [11] dataset, the LAG [28] dataset, and the DIGS/ADAGES [41] test set.
• LSL experiment: ResNet-50 [16] and MobileNet-v2 [42] (pretrained on the ImageNet database [10])

Table 1: AUC (shown along with 95% CI) and training time per epoch t (min) of supervised learning, low-shot learning and semi-supervised learning for glaucoma diagnosis. These results suggest that the OVV strategy, which requires a smaller sample size of labeled images, performs similarly to, and in some cases significantly better than, the backbone CNNs trained with a larger number of labeled images under full supervision.

The fundus images are resized to 224 × 224 pixels. The initial learning rate is set to 0.001, which decays gradually after the 100th epoch. Due to the dataset imbalance problem, F-score is utilized to select the best-performing models during the validation stage. Moreover, we adopt
an early stopping mechanism during the validation stage to reduce the overfitting problem: training is terminated if the achieved F-score has not increased for 30 epochs. In addition, we use four metrics: 1) accuracy (ACC), 2) F-score, 3) Matthews correlation coefficient (MCC), and 4) AUC to quantify the performance of the trained models. Additional performance evaluation with more details is provided in our supplementary material.
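The early stopping rule above (terminate when the validation F-score has not improved for 30 epochs, keeping the best epoch's model) can be sketched as follows; this is a generic sketch, not the authors' training code:

```python
class EarlyStopping:
    """Stop training when the validation F-score has not improved for
    `patience` consecutive epochs (30 in the experiments above)."""
    def __init__(self, patience=30):
        self.patience = patience
        self.best = float("-inf")   # best F-score seen so far
        self.best_epoch = -1        # epoch of the best F-score
        self.bad = 0                # epochs since the last improvement

    def step(self, epoch, f_score):
        """Record one epoch's validation F-score; return True to stop."""
        if f_score > self.best:
            self.best, self.best_epoch, self.bad = f_score, epoch, 0
        else:
            self.bad += 1
        return self.bad >= self.patience
```

The model checkpoint saved at `best_epoch` is the one carried forward to the test stage.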
Performance Evaluation
Performance comparison of supervised learning, LSL, and SSL for glaucoma diagnosis
The comparisons of supervised learning, LSL, and SSL for glaucoma diagnosis are provided in Tab. 1.
First, these results suggest that our OVV self-training strategy that requires a smaller sample size of labeled images performs similarly (AUC 95% CI overlaps considerably) and, in some cases, significantly better (AUC 95% CI does not overlap) than the backbone CNNs trained with a larger number of labeled images under full supervision. Specifically, ResNet-50-OVV performs similarly to ResNet-50 [16] on the DIGS/ADAGES [41] test set, with AUC (95% CI) of 0.763 (0.695, 0.820) and 0.744 (0.696, 0.792), respectively, on the OHTS [14] test set, with AUC (95% CI) of 0.898 (0.857, 0.928) and 0.904 (0.865, 0.935), respectively, and on the ACRIMA [11] test set, with AUC (95% CI) of 0.775 (0.741, 0.808) and 0.736 (0.698, 0.771), respectively. ResNet-50-OVV performs significantly better on the LAG [28] dataset than ResNet-50 [16] with AUC (95% CI) of 0.881 (0.870, 0.891) and 0.794 (0.780, 0.807), respectively. MobileNet-v2-OVV also performs similarly to MobileNet-v2 [42] on all the four test sets.
Second, as expected, the AUC scores achieved by LSL are in most, but not all, cases slightly lower than those achieved by the backbone CNNs on the OHTS [14] test set (∼0.035 lower for ResNet-50 [16] and ∼0.036 lower for MobileNet-v2 [42]). Moreover, since our LSL uses only a small amount of training data, MTTNN training is much faster than supervised learning.
Hyper-Parameter and Threshold Selection
We next discuss the selection of λ and Φ(·). In our experiments, we set λ to 0.1, 0.2, 0.3, 0.4 and 0.5, respectively, and compare the MTTNN performance when Φ(·) computes AD and SD. The comparisons in terms of F-score, MCC and AUC on the OHTS [14] test set are provided in Fig. 5, where ResNet-50-LS-AD and MobileNet-v2-LS-AD perform AD in Φ(·), while ResNet-50-LS-SD and MobileNet-v2-LS-SD perform SD in Φ(·), respectively. From Fig. 5, it is clear that the MTTNN achieves the best overall performance when λ = 0.3, which is reasonable, as a higher λ weighs more heavily on the image classification task, easily resulting in overfitting. Also, ResNet-50-LS-SD typically outperforms ResNet-50-LS-AD, while MobileNet-v2-LS-AD typically outperforms MobileNet-v2-LS-SD. To determine the generalizability of the trained models, we further evaluate them on three additional datasets, as shown in Tab. 2. Compared with SD, when Φ(·) applies AD, the MTTNN generally performs better or very similarly on the additional test sets, especially for ResNet-50-LS on the ACRIMA [11] dataset. AD is therefore used in the following experiments. Furthermore, Tab. 2 shows a baseline supervised learning experiment with the low-shot training set. The results suggest that LSL performs much better than its backbone CNN (trained with supervised learning) when the training size is small. Furthermore, we discuss the selection of the thresholds κ_1 and κ_2 used in OVV self-training as well as the impact of different mini-batch sizes 2m on the performance of OVV self-training. Tab. 3 provides the performances of ResNet-50-OVV and MobileNet-v2-OVV with respect to different κ_1, κ_2 and m. It can be seen that the ACC and F-score increase slightly while the MCC and AUC remain almost unchanged. When m decreases, the generalizability of the MTTNN degrades dramatically, especially the F-score (decreases by around 9-19%) and MCC (decreases by around 6-12%).
Therefore, increasing the mini-batch size can improve the MTTNN generalizability, as more reference fundus images are used to provide contrastive-predictions for the target fundus images, which can veto more unconfident predictions on the unlabeled data. Since our threshold selection experiments cover a very limited number of discrete sets of κ 1 , κ 2 , and m, we believe better performance can be achieved when more values are tested.
Conclusion
The main contributions of this paper are: 1) a multi-task twin neural network that can learn glaucoma diagnosis from very limited labeled training data; and 2) an effective semi-supervised learning strategy, referred to as One-Vote Veto self-training, designed specifically for low-shot learning of task-invariant embeddings to diagnose glaucoma. Extensive experiments conducted on four glaucoma datasets demonstrate the effectiveness of these techniques: the AUC achieved by low-shot learning is only slightly lower than that produced by the backbone CNN trained under full supervision. In addition, with One-Vote Veto self-training, the multi-task twin neural networks perform similarly to their backbone CNNs on the OHTS [14] test set but show better generalization performance on three additional test sets. With more thresholds and hyper-parameters tested, we believe our proposed One-Vote Veto self-training strategy for low-shot learning of task-invariant embeddings can yield better overall performance. The techniques introduced in this paper also have the potential to be applied to other binary biomedical image classification applications, which often suffer from small sample size issues with class imbalance.
Supplementary Material
Performance Evaluation Metrics
This section provides details on the metrics used in this paper. TP, TN, FP, and FN denote the numbers of true positive, true negative, false positive, and false negative classifications, respectively.
• Accuracy (ACC) refers to the proportion of correct classifications:
ACC = (TP + TN) / (TP + TN + FP + FN).    (6)
It can be misleading when the dataset is imbalanced.

Table 5: Comparisons of U-Net [40], FCN [30], SegNet [1] and DeepLabv3+ [7] for optic nerve head extraction. The best results are shown in bold type.

• Precision (PRE) measures the proportion of positive classifications that are actually correct:
PRE = TP / (TP + FP).    (7)
• Sensitivity (SEN), also known as recall (REC) or true positive rate (TPR), measures the proportion of positive pixels or images that are correctly classified:
SEN = REC = TP / (TP + FN).    (8)
• Specificity (SPE), also known as true negative rate (TNR), measures the proportion of negative pixels or images that are correctly classified:
SPE = TN / (TN + FP).    (9)
• F-score is the harmonic mean of PRE and REC:
F-score = 2 × (PRE × REC) / (PRE + REC).    (10)
It is commonly used in image classification and semantic segmentation tasks, especially when the classes are imbalanced. • Matthews correlation coefficient (MCC) is a widely used metric to measure the binary classification quality. It is generally regarded as a balanced measurement which can be utilized even if the dataset is imbalanced. Its expression is:
MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)).    (11)
• Intersection over union (IoU), also known as Jaccard index, is commonly used in semantic segmentation and object detection to measure the similarity and diversity of sample sets:
IoU = TP / (TP + FN + FP).    (12)
• AUC refers to the area underneath the receiver operating characteristic (ROC) curve, whose horizontal and vertical axes are false positive rate (FPR) and TPR, respectively, where FPR = 1 − TNR.

Table 6: AUC (95% CI), sensitivity, accuracy, FP, FN, TP and TN with respect to 90% specificity and 95% specificity. These results suggest that low-shot learning with a multi-task twin neural network and semi-supervised learning with One-Vote Veto self-training perform similarly to supervised learning with respect to different specificities. [Table body garbled in extraction; counts omitted.]
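The count-based metrics (6)-(12) can be collected into one helper; this is a small sketch (AUC is excluded, since it requires the full score distribution rather than confusion-matrix counts):

```python
import math

def metrics(TP, TN, FP, FN):
    """Classification metrics (6)-(12) from confusion-matrix counts."""
    acc = (TP + TN) / (TP + TN + FP + FN)                   # (6)
    pre = TP / (TP + FP)                                    # (7)
    sen = TP / (TP + FN)                                    # (8), = REC = TPR
    spe = TN / (TN + FP)                                    # (9), = TNR
    f = 2 * pre * sen / (pre + sen)                         # (10)
    mcc = (TP * TN - FP * FN) / math.sqrt(                  # (11)
        (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
    iou = TP / (TP + FN + FP)                               # (12)
    return dict(ACC=acc, PRE=pre, SEN=sen, SPE=spe, F=f, MCC=mcc, IoU=iou)
```

A perfect classifier scores 1 on every metric; the zero-division cases (e.g., no predicted positives) are left unguarded in this sketch for brevity.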
Dataset Preparation
The glaucoma datasets used in our experiments were collected at various intervals by different clinicians from different institutes using different fundus cameras. Since they were gathered for clinical research purposes and not for evaluating computer vision algorithms, there is significant variability in image quality and resolution, which can make CNN training much more challenging than expected. To this end, prior to training CNNs for glaucoma diagnosis, we first cropped a region centered on the optic nerve head from each raw fundus image using a semantic segmentation CNN trained for optic nerve head extraction.
In our experiments, we trained four semantic segmentation CNNs: 1) U-Net [40], 2) fully convolutional network (FCN) [30], 3) SegNet [1] and 4) DeepLabv3+ [7], on three public optic nerve head segmentation datasets: 1) DRIONS-DB [3], 2) Drishti-GS [44], and 3) RIM-ONE [12], which provide the pixel-level optic nerve head ground truth. The quantitative comparisons are shown in Tab. 5, where it can be seen that DeepLabv3+ [7] achieves the best PRE, ACC, F-score and IoU. We then randomly selected 200 fundus images from the OHTS [14] dataset and manually labeled their optic nerve head ground truth to finetune our pretrained DeepLabv3+ [7]. It was subsequently applied to the entire OHTS [14], LAG [28] and DIGS/ADAGES [41] datasets to extract regions centered on the optic nerve heads from raw fundus images. The ACRIMA [11] dataset is already aligned. Examples of the extracted regions are shown in Fig. 6.

[Spilled fragment of the Figure 7 caption: (a) OHTS [14] healthy fundus images; (c) ACRIMA [11] healthy fundus images; (ii)-(vii) class activation maps of (i), with respect to ResNet-50 [16], ResNet-50-LS (ours), ResNet-50-OVV (ours), MobileNet-v2 [42], MobileNet-v2-LS (ours), MobileNet-v2-OVV (ours), respectively. ResNet-50 [16] and MobileNet-v2 [42] are trained with supervised learning using 53K labeled fundus images; ResNet-50-LS and MobileNet-v2-LS are trained with low-shot learning using 1147 labeled fundus images; ResNet-50-OVV and MobileNet-v2-OVV respectively finetune the pretrained ResNet-50-LS and MobileNet-v2-LS with semi-supervised learning using 53K fundus images (without ground-truth labels). The red regions correspond to high attention, while the blue regions correspond to low attention.]
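The segmentation scores reported in Tab. 5 can be reproduced from pixel-level counts; the helper below is an illustrative sketch of the standard F-score and IoU definitions, not the authors' evaluation code.

```python
# Hedged sketch: F-score and IoU from pixel-level true/false positive counts.
def f_score_and_iou(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)   # intersection over union
    return f, iou

# Made-up counts for illustration:
f, iou = f_score_and_iou(tp=90, fp=10, fn=10)
```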
Supplementary Comparisons of Supervised Learning, Low-Shot Learning and Semi-Supervised Learning
Tab. 1 in the full paper is a subset of Tab. 6 in the supplement. The supplementary comparisons of supervised learning, low-shot learning and semi-supervised learning are shown in Tab. 6. These results suggest that our low-shot learning with multi-task twin neural network and semi-supervised learning with One-Vote Veto (OVV) selftraining that require only a small quantity of labeled data, perform similarly and, in some cases, better than the backbone CNNs trained on the entire OHTS [14] training set with supervised learning. Specifically, both ResNet-50-LS and ResNet-50-OVV perform similarly to ResNet-50 [16] on all the four test sets in ACC (varies by less than 3%) with respect to both 90% specificity and 95% specificity. MobileNet-v2-LS and MobileNet-v2-OVV perform similarly on the OHTS [14] and LAG [28] test sets in ACC with respect to both 90% specificity and 95% specificity. Although their achieved ACC is lower (decreases by 5 − 8%) than that of MobileNet-v2 [42] on the DIGS/ADAGES [41] test set, the ACC increases by up to 4% on the ACRIMA [11] dataset.
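Operating points such as "90% specificity" are obtained by thresholding the network's output score. The sketch below shows one simple way to pick such a threshold from the scores of the negative (healthy) class; it is our illustration, not the paper's procedure, and the score list is synthetic.

```python
# Hedged sketch: pick the smallest threshold such that the fraction of
# healthy (negative-class) scores strictly below it reaches the target
# specificity.
def threshold_at_specificity(scores_healthy, target_specificity):
    s = sorted(scores_healthy)
    idx = int(round(target_specificity * len(s)))
    idx = min(idx, len(s) - 1)
    return s[idx]

healthy_scores = [i / 100 for i in range(100)]   # hypothetical negative scores
t = threshold_at_specificity(healthy_scores, 0.90)
```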
Class Activation Map Visualization
We next utilize Grad-CAM++ [6] to explain the models compared in Tab. 6, as shown in Fig. 7. These results suggest that the optic nerve head areas have the greatest impact on model decisions. The neuroretinal rim areas are identified as most important, while the periphery contributes comparatively little to model decisions for both healthy and glaucomatous optic neuropathy (GON) eyes [8].
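Conceptually, a class activation map is a weighted combination of the last convolutional feature maps. Grad-CAM++ [6] derives the channel weights from higher-order gradients of the class score; in the sketch below the weights are arbitrary placeholders, so only the combination and normalisation steps are illustrated.

```python
import numpy as np

# Hedged sketch of the CAM combination step (weights are placeholders; the
# actual Grad-CAM++ weights come from gradients of the class score).
def activation_map(feature_maps, weights):
    # feature_maps: (C, H, W), weights: (C,)
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0)                           # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalise to [0, 1] for display
    return cam

np.random.seed(0)
feats = np.random.rand(8, 7, 7)   # mock last-layer feature maps
w = np.random.rand(8)             # placeholder channel weights
cam = activation_map(feats, w)
```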
Figure 1: Supervised learning vs. semi-supervised learning.
Figure 4: Comparison of dataset visualization produced by t-SNE [32], where • and • represent the healthy and GON images in the OHTS [14] test set, respectively; • and • in (a) represent the healthy and GON images in the ACRIMA [11] dataset, respectively; • and • in (b) represent the healthy and GON images in the LAG [28] dataset, respectively; • and • in (c) represent the healthy and GON images in the DIGS/ADAGES [41] dataset, respectively.
) are separately used as the backbone CNNs of the MTTNN, introduced in Sec. 3.1. The corresponding MTTNNs are referred to as ResNet-50-LS and MobileNet-v2-LS, respectively. These MTTNNs are subsequently trained on the OHTS [14] low-shot training set. The validation and testing procedures are identical to those in the supervised learning experiment. • SSL experiment: the well-trained ResNet-50-LS and MobileNet-v2-LS are finetuned on the entire OHTS [14] training set, but only the fundus images in the OHTS [14] low-shot training set have ground-truth labels. The finetuned ResNet-50-LS and MobileNet-v2-LS are referred to as ResNet-50-OVV and MobileNet-v2-OVV, respectively. The validation and testing procedures are identical to those in the supervised learning experiment.
Figure 5: λ selection: (a) ResNet-50-LS results and (b) MobileNet-v2-LS results suggest that when λ = 0.3, LSL performs best.
(b) Comparisons of MobileNet-v2, MobileNet-v2-LS, and MobileNet-v2-OVV.
Figure 6: Examples of semantic fundus image segmentation for optic nerve head extraction: (a) raw fundus images; (b) semantic segmentation results, where the optic nerve head areas are shown in red; (c) cropped fundus images used for CNN training.
Figure 7: Class activation map visualization examples: (i) fundus images;
dataset utilized in our experiments contains 66715 raw fundus images. In our experiments, a square region centered on the optic nerve head (ONH) was first extracted from each raw fundus image using a well-trained DeepLabv3+ [7] model. A small part of the raw data are stereoscopic fundus images, each of which was split to produce two individual fundus images. This image preprocessing resulted in a total number of 74768 fundus images. Moreover, ENPOAGDISC (endpoint committee attributable to primary open angle glaucoma based on optic disc changes from photographs) labels are used as the classification ground truth. The total number of fundus images is divided into a training set (50208 healthy images and 2416 GON images), a validation set (7188 healthy images and 426 GON images), and an independent test set (13780 healthy images and 792 GON images) by participant. Additionally, we select one image from each patient in the training set to create the low-shot training set (995 healthy images and 152 GON images). Please note: all images from each patient are in only one of these three subsets.

• ACRIMA: The ACRIMA [11] dataset consists of 309 healthy images and 396 GON images. It was collected as part of an initiative by the government of Spain. Classification was based on review by a single experienced glaucoma expert. Images were excluded if they did not provide a clear view of the ONH region [9].

• LAG: The LAG [28] dataset contains 3143 healthy images and 1711 GON images, obtained from Beijing Tongren Hospital. Similar to the OHTS [14] dataset, we also use the well-trained DeepLabv3+ [7] model to extract a square region centered on the ONH from each fundus image.

• DIGS/ADAGES: The UCSD-based Diagnostic Innovations in Glaucoma Study (DIGS) and African Descent and Glaucoma Evaluation Study (ADAGES) [41] are longitudinal studies designed to detect and monitor glaucoma based on optical imaging and visual function testing that, when combined, have generated tens of thousands of test results from over 4000 healthy, glaucoma suspect or glaucoma eyes over the course of up to 25 years. In our experiments, we use the DIGS/ADAGES [41] test set to evaluate the performance of our proposed methods. It contains 5184 healthy images and 4289 GON images.
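The low-shot training set described above keeps exactly one image per patient, and the patient-level split guarantees that no patient contributes images to more than one subset. A hedged sketch of the per-patient selection (patient IDs and file names are hypothetical):

```python
# Hedged sketch: pick exactly one image per patient from the training set.
def low_shot_subset(training_images):
    # training_images: list of (patient_id, image_path) pairs
    chosen = {}
    for patient_id, path in training_images:
        chosen.setdefault(patient_id, path)  # keep the first image per patient
    return sorted(chosen.values())

imgs = [("p1", "p1_a.png"), ("p1", "p1_b.png"), ("p2", "p2_a.png")]
subset = low_shot_subset(imgs)
```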
Table 2: Supervised learning of the backbone network with small sample size vs. LSL: ResNet-50 [16] and MobileNet-v2 [42] are trained using supervised learning on the OHTS [14] low-shot training set. ResNet-50-LS-AD/SD and MobileNet-v2-LS-AD/SD are trained using LSL. λ is set to 0.3. The best results are shown in bold font. These results suggest that in the small sample case, LSL improves the performance of the backbone CNNs.

Network           κ1  κ2    m   ACC (%)   F-score (%)  MCC (%)   AUC
ResNet-50-OVV     0   0.01  20  91.415 ↑  41.148 ↑     39.875 ↑  0.898 ↑
ResNet-50-OVV     0   0.01  15  92.113 ↑  41.316 ↑     39.341 ↑  0.898 ↑
ResNet-50-OVV     0   0.01  10  94.199 ↑  43.759 ↑     40.831 ↑  0.890 ↑
ResNet-50-OVV     0   0.1   20  90.516 ↑  38.139 ↑     36.874 ↑  0.898 ↑
ResNet-50-OVV     2   0.01  20  90.717 ↑  35.818 ↑     33.686 ↑  0.885 ↑
ResNet-50-OVV     2   0.1   20  92.656 ↑  32.017 ↑     28.390 ↓  0.851 ↓
ResNet-50-OVV     4   0.01  20  92.610 ↑  29.668 ↓     25.911 ↓  0.854 ↓
ResNet-50-OVV     4   0.1   20  92.672 ↑  28.463 ↓     24.675 ↓  0.842 ↓
MobileNet-v2-OVV  0   0.01  25  90.609 ↑  36.960 ↑     35.240 ↑  0.887 ↑
MobileNet-v2-OVV  0   0.01  20  93.470 ↑  36.976 ↑     33.658 ↑  0.893 ↑
MobileNet-v2-OVV  0   0.01  15  93.742 ↑  37.779 ↑     34.535 ↑  0.863 ↑
MobileNet-v2-OVV  0   0.1   25  88.825 ↑  34.351 ↑     33.401 ↑  0.878 ↑
MobileNet-v2-OVV  2   0.01  25  93.463 ↑  35.204 ↑     31.814 ↑  0.862 ↑
MobileNet-v2-OVV  2   0.1   25  90.772 ↑  32.616 ↑     29.701 ↓  0.858 ↑
MobileNet-v2-OVV  2   0.01  25  92.268 ↑  31.852 ↑     28.242 ↓  0.854 ↑
MobileNet-v2-OVV  2   0.1   25  90.803 ↑  32.690 ↑     29.772 ↓  0.859 ↑
Table 3: Evaluation of the OVV self-training on the OHTS [14] test set with respect to different κ1, κ2 and m, where ResNet-50-OVV finetunes ResNet-50-LS, and MobileNet-v2-OVV finetunes MobileNet-v2-LS. The best results are shown in bold font. ↑ indicates SSL outperforms LSL.

Additionally, ResNet-50-OVV and MobileNet-v2-OVV, trained under different m, are also tested on the three additional test sets, as shown in Tab. 4. It can be seen that the network trained under a larger m typically shows [...] same, with the decrease of m. Moreover, with the increase of κ1 and κ2, the standard to determine confident predictions becomes lower, which makes the SSL performance degrade. It should be noted here that when κ1 = 0, our self-training strategy should not be named as "One-Vote Veto", as mentioned in Sec. 3.2. Based on this experiment, we believe OVV self-training benefits from smaller κ1 and κ2.

Table 4: The evaluation of OVV self-training with respect to different m on the additional test sets: (a) ResNet-50-OVV and (b) MobileNet-v2-OVV. The best results are shown in bold font. ↑ indicates SSL outperforms LSL.
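One plausible reading of the confident-prediction filter discussed above (the precise rule is defined in Sec. 3.2 of the full paper, which is not reproduced in this supplement): a pseudo-label proposed for an unlabeled image survives only if at most κ1 of the m stored reference predictions veto it, where a veto is a reference score differing from the proposal by more than κ2. The function below is our hedged sketch of that idea, not the authors' implementation.

```python
# Hedged sketch (our reading, not the paper's code): keep a pseudo-label only
# if at most kappa1 of the stored reference predictions veto it.
def keep_pseudo_label(proposal, references, kappa1=0, kappa2=0.1):
    vetoes = sum(1 for r in references if abs(r - proposal) > kappa2)
    return vetoes <= kappa1

refs = [0.99, 0.98, 0.99]   # hypothetical reference scores (m = 3)
ok = keep_pseudo_label(0.99, refs, kappa1=0, kappa2=0.1)   # agrees with peers
bad = keep_pseudo_label(0.50, refs, kappa1=0, kappa2=0.1)  # vetoed by all peers
```

With κ1 = 0, a single dissenting reference rejects the pseudo-label, which matches the observation above that larger κ1 and κ2 loosen the standard and degrade SSL performance.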
(d) ACRIMA [11] GON fundus images. (e) LAG [28] healthy fundus images. (f) LAG [28] GON fundus images. (g) DIGS/ADAGES [41] healthy fundus images. (h) DIGS/ADAGES [41] GON fundus images.

[The remainder of Figure 7 consists of per-panel prediction labels (TP/TN/FP/FN with predicted probabilities p) for the ResNet-50 and MobileNet-v2 backbones; the extracted label text is omitted here.]
References

[1] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12):2481-2495, 2017.
[2] Avrim Blum and Tom Mitchell. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 92-100, 1998.
[3] Enrique J Carmona, Mariano Rincón, Julián García-Feijoó, and José M Martínez-de-la Casa. Identification of the optic nerve head with genetic algorithms. Artificial Intelligence in Medicine, 43(3):243-259, 2008.
[4] Rich Caruana. Multitask learning. Machine Learning, 28(1):41-75, 1997.
[5] Olivier Chapelle, Bernhard Scholkopf, and Alexander Zien. Semi-supervised learning (Chapelle, O. et al., eds.; 2006) [book reviews]. IEEE Transactions on Neural Networks, 20(3):542-542, 2009.
[6] Aditya Chattopadhay, Anirban Sarkar, Prantik Howlader, and Vineeth N Balasubramanian. Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 839-847. IEEE, 2018.
[7] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 801-818, 2018.
[8] Mark Christopher, Akram Belghith, Christopher Bowd, James A Proudfoot, Michael H Goldbaum, Robert N Weinreb, Christopher A Girkin, Jeffrey M Liebmann, and Linda M Zangwill. Performance of deep learning architectures and transfer learning for detecting glaucomatous optic neuropathy in fundus photographs. Scientific Reports, 8(1):1-13, 2018.
[9] Mark Christopher, Kenichi Nakahara, Christopher Bowd, James A Proudfoot, Akram Belghith, Michael H Goldbaum, Jasmin Rezapour, Robert N Weinreb, Massimo A Fazio, Christopher A Girkin, et al. Effects of study population, labeling and training on glaucoma detection using deep learning algorithms. Translational Vision Science & Technology, 9(2):27-27, 2020.
[10] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[11] Andrés Díaz Pinto, Sandra Morales Martínez, Valeriana Naranjo Ornedo, Thomas Köhler, José Manuel Mossi García, Amparo Navea Tejerina, et al. CNNs for automatic glaucoma assessment using fundus images: an extensive validation. BioMedical Engineering OnLine, 18, 2019.
[12] Francisco Fumero, Silvia Alayón, José L Sanchez, Jose Sigut, and M Gonzalez-Hernandez. RIM-ONE: An open retinal image database for optic nerve evaluation. In 2011 24th International Symposium on Computer-Based Medical Systems (CBMS), pages 1-6. IEEE, 2011.
[13] Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, and Richard Turner. Meta-learning probabilistic inference for prediction. In International Conference on Learning Representations, 2018.
[14] Mae O Gordon, Eve J Higginbotham, Dale K Heuer, Richard K Parrish II, Alan L Robin, Patricia A Morris, Deborah A Dunn, Bradley S Wilson, Michael A Kass, and the Ocular Hypertension Treatment Study. Assessment of the impact of an endpoint committee in the ocular hypertension treatment study. American Journal of Ophthalmology, 199:193-199, 2019.
[15] Mae O Gordon and Michael A Kass. The ocular hypertension treatment study: design and baseline description of the participants. Archives of Ophthalmology, 117(5):573-583, 1999.
[16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[17] Elad Hoffer and Nir Ailon. Deep metric learning using triplet network. In International Workshop on Similarity-Based Pattern Recognition, pages 84-92. Springer, 2015.
[18] Wenbin Huang, Kai Gao, Yaoming Liu, Mengyin Liang, and Xiulan Zhang. The adverse impact of glaucoma on psychological function and daily physical activity. Journal of Ophthalmology, 2020.
[19] Devesh Jain, Tristan Swedish, Bailey Shen, David Y Kim, Shizuo Mukai, and Ramesh Raskar. Open-source, ultra-low-cost smartphone attachment for non-mydriatic fundus photography: open indirect ophthalmoscope. Investigative Ophthalmology & Visual Science, 57(12):1685-1685, 2016.
[20] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, pages 675-678, 2014.
[21] Michael A Kass, Dale K Heuer, Eve J Higginbotham, Chris A Johnson, John L Keltner, J Philip Miller, Richard K Parrish, M Roy Wilson, and Mae O Gordon. The ocular hypertension treatment study: a randomized trial determines that topical ocular hypotensive medication delays or prevents the onset of primary open-angle glaucoma. Archives of Ophthalmology, 120(6):701-713, 2002.
[22] Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7482-7491, 2018.
[23] Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. arXiv preprint arXiv:2004.11362, 2020.
[24] Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, volume 2. Lille, 2015.
[25] Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014.
[26] MiYoung Kwon, Carrie Huisingh, Lindsay A Rhodes, Gerald McGwin Jr, Joanne M Wood, and Cynthia Owsley. Association between glaucoma and at-fault motor vehicle collision involvement among older drivers: a population-based study. Ophthalmology, 123(1):109-116, 2016.
[27] Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
[28] Liu Li, Mai Xu, Xiaofei Wang, Lai Jiang, and Hanruo Liu. Attention based glaucoma detection: A large-scale database and CNN model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10571-10580, 2019.
[29] Sidong Liu, Stuart L Graham, Angela Schulz, Michael Kalloniatis, Barbara Zangerl, Weidong Cai, Yang Gao, Brian Chua, Hemamalini Arvind, John Grigg, et al. A deep learning-based algorithm identifies glaucomatous discs using monoscopic fundus photographs. Ophthalmology Glaucoma, 1(1):15-22, 2018.
[30] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431-3440, 2015.
[31] Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. ShuffleNet V2: Practical guidelines for efficient CNN architecture design. In Proceedings of the European Conference on Computer Vision (ECCV), pages 116-131, 2018.
[32] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008.
[33] Everett Matthew Lawson and Ramesh Raskar. Smart phone administered fundus imaging without additional imaging optics. Investigative Ophthalmology & Visual Science, 55(13):1609-1609, 2014.
[34] David McClosky, Eugene Charniak, and Mark Johnson. Effective self-training for parsing. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 152-159, 2006.
[35] Gerald McGwin Jr, Carrie Huisingh, Shelly Gupta Jain, Christopher A Girkin, and Cynthia Owsley. Binocular visual field impairment in glaucoma and at-fault motor vehicle collisions. Journal of Glaucoma, 24(2):138, 2015.
[36] Felipe A Medeiros, Alessandro A Jammal, and Eduardo B Mariottoni. Detection of progressive glaucomatous optic nerve damage on fundus photographs with deep learning. Ophthalmology, 2020.
[37] Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1400-1409, 2016.
[38] Richard K Parrish, Steven J Gedde, Ingrid U Scott, William J Feuer, Joyce C Schiffman, Carol M Mangione, and Alejandra Montenegro-Piniella. Visual function and quality of life among patients with glaucoma. Archives of Ophthalmology, 115(11):1447-1455, 1997.
[39] An Ran Ran, Carol Y Cheung, Xi Wang, Hao Chen, Luyang Luo, Poemen P Chan, Mandy OM Wong, Robert T Chang, Suria S Mannil, Alvin L Young, et al. Detection of glaucomatous optic neuropathy with spectral-domain optical coherence tomography: a retrospective training and validation deep-learning analysis. The Lancet Digital Health, 1(4):e172-e182, 2019.
[40] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234-241. Springer, 2015.
[41] Pamela A Sample, Christopher A Girkin, Linda M Zangwill, Sonia Jain, Lyne Racette, Lida M Becerra, Robert N Weinreb, Felipe A Medeiros, M Roy Wilson, Julio De León-Ortega, et al. The African Descent and Glaucoma Evaluation Study (ADAGES): Design and baseline data. Archives of Ophthalmology, 127(9):1136-1145, 2009.
[42] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4510-4520, 2018.
[43] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR), 2015.
[44] Jayanthi Sivaswamy, SR Krishnadas, Gopal Datt Joshi, Madhulika Jain, and A Ujjwaft Syed Tabish. Drishti-GS: Retinal image dataset for optic nerve head (ONH) segmentation. In 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI), pages 53-56. IEEE, 2014.
[45] Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2440-2448, 2015.
[46] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1-9, 2015.
[47] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818-2826, 2016.
[48] Chuanqi Tan, Fuchun Sun, Tao Kong, Wenchang Zhang, Chao Yang, and Chunfang Liu. A survey on deep transfer
| zyda_arxiv-1721000 |
Linear time algorithm for computing the rank of divisors on cactus graphs

Phan Thi Ha Duong

12 Jan 2016
The rank of a divisor on a graph was introduced in 2007 and has quickly attracted much attention. Recently, in 2015, the problem of computing this quantity was proved to be NP-hard. In this paper, we describe a linear time algorithm for this problem restricted to cactus graphs.
Introduction
The notion of the rank of a divisor on a graph was introduced by Baker and Norine in a paper on Abel-Jacobi theory on graphs [1], in which the authors stated the link between this notion and the analogous notion on Riemann surfaces. Moreover, the authors developed a theorem for divisors on graphs analogous to the classical Riemann-Roch theorem. Since then, many works have studied the computation of the rank of a divisor on a graph (see for example [3]). The most important result is the recent theorem on the NP-hardness of the rank of divisor problem on general graphs [9]. The proof of this result was based on the proof of NP-hardness of the minimum recurrent configuration problem of the Chip Firing Game on directed graphs, studied by Perrot and Pham [12]. On the other hand, the rank of divisor problem can be studied on special classes of graphs. In [4], the author proposed a linear time algorithm for this problem on complete graphs; the idea of this algorithm is based on Dyck words and parking functions, notions closely related to the Chip Firing Game, a well-known combinatorial model [2,6,13]. In this paper we investigate this problem in the case of cactus graphs. This class was introduced in the 1950s [7], and can be used for modeling in different research domains, for example electrical circuits [10,14] or comparative genomics [11]. Several problems which are NP-hard on general graphs can be solved in polynomial time on cactus graphs [5,8,15].
Our main idea is to contract a graph by eliminating edges and cycles, and to deduce the rank of a divisor on the initial graph from that on the contracted graph. For a general graph, such an elimination does not always exist, but it does for cactus graphs. We show that a block (edge or cycle) elimination scheme can be found in linear time for a cactus graph, and from this scheme we construct a linear time algorithm for computing the rank of divisors.

In Section 2, we present the key features of the Riemann-Roch theory on graphs. We then discuss the rank of divisors on trees and cycles. We propose the contraction operator on a graph, which eliminates an edge or a cycle, and establish the relation between the rank of divisors on the initial graph and that on its contraction. Section 3 focuses on cactus graphs, on the construction of a block elimination scheme, and from there on a linear time algorithm for computing the rank of divisors.
Divisors on graphs and Riemann-Roch like theorem
Let G be an undirected multigraph without loops. We always denote by V(G) the vertex set of G and by n its cardinality, and by E(G) the edge set of G and by m its cardinality. For each vertex v ∈ V(G), we write deg(v) for the degree of v, and for all vertices u, v ∈ V(G), we write e(u, v) for the number of edges between u and v. The genus g of G is the quantity g = m − n + 1. For a subset U of V, we denote by G(U) the subgraph of G induced by U.
The group of divisors of G, Div(G), is the free abelian group on V(G). A divisor f ∈ Div(G) can be considered as a function f : V → Z, or as a vector f ∈ Z^{V(G)} whose coordinates are indexed by the vertices of G.

The degree of f is defined by deg(f) = Σ_{v∈V(G)} f(v). The index vector ǫ_v is the vector with all entries 0 except ǫ_v(v) = 1.
The Laplacian matrix (∆_G)_{n×n} of the graph G, with coordinates indexed by V(G) × V(G), is defined by:

∆_G(u, v) = deg(u) if u = v, and ∆_G(u, v) = −e(u, v) if u ≠ v.

We write ∆_G(v) for the row of the matrix indexed by vertex v. A divisor f ∈ Div(G) is called effective if f(v) ≥ 0 for all v ∈ V. Linear equivalence is the relation on Div(G) defined by: f ∼ g if there exists x ∈ Z^{V(G)} such that g = f + x∆_G. If f is linearly equivalent to an effective divisor g, we say f is L-effective.
We give here the definition of the rank of a divisor, which was introduced by Baker and Norine [1].

Definition 1. For a divisor f ∈ Div(G), the rank ρ(f) of f is

• −1 if f is not L-effective,

• otherwise, the largest integer r such that for every effective divisor λ of degree r the divisor f − λ is L-effective.
It is useful to state a straightforward property of the rank.
Lemma 1. Let f and f′ be two divisors of non-negative degree on G. Then

ρ(f + f′) ≤ ρ(f) + deg(f′).

In particular, for every v ∈ V(G), we have ρ(f) − 1 ≤ ρ(f − ǫ_v) ≤ ρ(f).
In their first paper on the rank of divisors, Baker and Norine proved the following theorem, which is analogous to the Riemann-Roch theorem on Riemann surfaces.

Theorem 2. Let G be a graph with n vertices and m edges. Let κ be the divisor such that κ(v) = deg(v) − 2 for all v ∈ V(G), so that deg(κ) = 2(m − n). Then any divisor f satisfies:

ρ(f) − ρ(κ − f) = deg(f) − g + 1,

where g is the genus of G.

Let us remark that for any divisor f with deg(f) < 0, f is not L-effective, and ρ(f) = −1. Moreover, deg(κ − f) = deg(κ) − deg(f) = 2(m − n) − deg(f). Hence if deg(f) > 2(m − n), we have deg(κ − f) < 0 and ρ(κ − f) = −1, which implies that ρ(f) = deg(f) − g.
Rank on trees and cycles
We now investigate some elementary cases of graphs.
Tree
A tree is an acyclic connected graph. In a tree, we have m = n − 1, so deg(κ) = 2(m − n) = −2. Hence for every divisor f of non-negative degree, deg(f) > deg(κ), which implies that ρ(f) = deg(f).
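For illustration (our sketch, not part of the paper), the tree case reduces to computing the degree:

```python
def rank_tree(f):
    """Rank of a divisor on a tree: rho(f) = deg(f) when deg(f) >= 0,
    and -1 otherwise (a divisor of negative degree is never L-effective).
    f: dict mapping vertices to integer values."""
    d = sum(f.values())
    return d if d >= 0 else -1
```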
Cycle
A cycle is a connected graph in which every vertex has degree 2. In a cycle C_n = {v_1, . . . , v_n} with n vertices, we have m = n and g = 1. Hence for every divisor f of positive degree, deg(f) > deg(κ), which implies that ρ(f) = deg(f) − 1. In the case deg(f) = 0, ρ(f) = 0 if f ∼ 0 (that is, if f is L-effective), and otherwise ρ(f) = −1.

We call a good divisor a divisor of degree 0 which is L-effective, and a bad divisor a divisor of degree 0 which is not L-effective. For the cycle C_n, we can write a divisor f on C_n as a vector f = (f_1, f_2, . . . , f_n).
Proposition 3. Let f = (f_1, f_2, . . . , f_n) be a divisor on the cycle C_n. Then the rank of f is computed as follows:

ρ(f) = −1 if deg(f) ≤ −1,
ρ(f) = −1 if deg(f) = 0 and f is bad,
ρ(f) = 0 if deg(f) = 0 and f is good,
ρ(f) = deg(f) − 1 if deg(f) ≥ 1.
Now, we analyze the characterization of good divisors on cycles.
Let f = (f_1, f_2, . . . , f_n) be a divisor on the cycle C_n. We have f ∼ 0 if and only if there exists x = (x_1, x_2, . . . , x_n) ∈ Z^n such that f − x∆_{C_n} = 0. Because Σ_{i=1}^{n} ∆_{C_n}(v_i) = 0, we may take x_n = 0, so that

f ∼ 0 ⇔ ∃x = (x_1, x_2, . . . , x_{n−1}, 0) ∈ Z^{n−1} × {0} : f = x∆_{C_n}.

Writing out the coordinates (the cycle Laplacian has 2 on the diagonal and −1 between consecutive vertices, including between v_n and v_1), this system reads

f_1 = 2x_1 − x_2,
f_2 = −x_1 + 2x_2 − x_3,
f_3 = −x_2 + 2x_3 − x_4,
. . .
f_{n−1} = −x_{n−2} + 2x_{n−1},
f_n = −x_{n−1} − x_1.

Solving successively for x_2, x_3, . . . in terms of x_1 gives

x_2 = 2x_1 − f_1,
x_3 = 3x_1 − (2f_1 + f_2),
x_4 = 4x_1 − (3f_1 + 2f_2 + f_3),
. . .
x_{n−1} = (n − 1)x_1 − ((n − 2)f_1 + . . . + 2f_{n−3} + f_{n−2}),
0 = x_n = nx_1 − ((n − 1)f_1 + (n − 2)f_2 + . . . + 2f_{n−2} + f_{n−1}).

Such an integer x_1 exists if and only if

(n − 1)f_1 + (n − 2)f_2 + . . . + 2f_{n−2} + f_{n−1} ≡ 0 mod n,

which, since deg(f) = 0, is equivalent to

f_1 + 2f_2 + . . . + (n − 2)f_{n−2} + (n − 1)f_{n−1} ≡ 0 mod n.
So we have the following result.
Proposition 4. Let f = (f_1, f_2, . . . , f_n) be a divisor of degree 0 on the cycle C_n. Then f is good if and only if

f_1 + 2f_2 + . . . + (n − 2)f_{n−2} + (n − 1)f_{n−1} ≡ 0 mod n.
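Proposition 4 gives a linear-time membership test for good divisors. The sketch below (ours, not the paper's code) implements the criterion and, for tiny cases, cross-checks it against a bounded brute-force search for a vector x with f = x∆_{C_n}; the search bound is a heuristic for illustration only, so `is_good_bruteforce` may miss witnesses that need large entries of x.

```python
from itertools import product

def is_good(f):
    """Proposition 4: a degree-0 divisor f = (f_1, ..., f_n) on the cycle C_n
    is good (linearly equivalent to 0) iff
    f_1 + 2 f_2 + ... + (n-1) f_{n-1} == 0 (mod n)."""
    n = len(f)
    assert sum(f) == 0, "goodness is only defined for degree-0 divisors"
    return sum((i + 1) * f[i] for i in range(n - 1)) % n == 0

def is_good_bruteforce(f, bound=3):
    """Search directly for x with f_j = 2 x_j - x_{j-1} - x_{j+1} (indices
    taken mod n), i.e. f = x * Laplacian(C_n), with all |x_i| <= bound."""
    n = len(f)
    for x in product(range(-bound, bound + 1), repeat=n):
        if all(f[j] == 2 * x[j] - x[j - 1] - x[(j + 1) % n] for j in range(n)):
            return True
    return False
```

For example, (1, 1, −2) on C_3 is good (1 + 2·1 = 3 ≡ 0 mod 3), while (1, −1, 0) is bad (1 − 2 = −1 ≢ 0 mod 3).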
Operators and rank of divisors
The two simple cases of trees and cycles give us the idea of decomposing a graph into smaller graphs in such a way that the rank on the initial graph can be deduced from that on the smaller graphs. For this purpose, we introduce two operators on a graph and on its divisors.
Definition 2. Let G be a connected graph. A vertex v of G is called a cut vertex if removing v from G disconnects G. Moreover, if one can decompose V(G) = V_1 ∪ U such that V_1 ∩ U = {v} and such that the induced graphs G_1 = G(V_1) and H = G(U) are connected, we say that v decomposes G into G_1 and H. We denote by G/H, and call the contraction of G by H at vertex v, the subgraph G_1. Furthermore, if H is a block (a maximal subgraph without a cut vertex), we call v a block cut vertex and H a free block of G.

Definition 3. Let G be a graph, and let v be a cut vertex which decomposes G into G_1 = G(V_1) and H = G(U). Let f be a divisor on G. We define the contraction of f by H, denoted f_{G/H}, as the following divisor on G/H:

f_{G/H}(u) = f(u) if u ∈ V_1\{v}, and f_{G/H}(v) = Σ_{u∈U} f(u).

We define the zero of f on H, denoted f_{N(H)}, as the following divisor on H:

f_{N(H)}(u) = f(u) if u ∈ U\{v}, and f_{N(H)}(v) = −Σ_{u∈U\{v}} f(u).
One directly has the following relations between a divisor, its contraction, and its zero:

f_G = f_{G/H} + f_{N(H)}, deg(f_{G/H}) = deg(f_G), deg(f_{N(H)}) = 0.
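Definition 3 and these degree relations can be sketched as follows (our illustration, not the paper's code; divisors are represented as dicts from vertices to integers, and `H_vertices` plays the role of U, including the cut vertex v):

```python
def split_divisor(f, H_vertices, v):
    """Return (f_{G/H}, f_{N(H)}): the contraction of f by H at the cut
    vertex v, and the zero of f on H (Definition 3)."""
    total_on_H = sum(f[u] for u in H_vertices)            # includes f(v)
    contraction = {u: val for u, val in f.items()
                   if u not in H_vertices or u == v}
    contraction[v] = total_on_H                           # mass of H collapses onto v
    zero = {u: f[u] for u in H_vertices}
    zero[v] = -sum(f[u] for u in H_vertices if u != v)    # balances H to degree 0
    return contraction, zero
```

One can then check that deg(f_{G/H}) = deg(f), that deg(f_{N(H)}) = 0, and that f = f_{G/H} + f_{N(H)} when both parts are extended by 0.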
More generally, let U be a subset of V(G) and let H = G(U). We can consider a divisor on H as a divisor on G by giving value 0 to all vertices in V(G)\U. Similarly, we consider a matrix indexed by the vertices of H as a matrix indexed by the vertices of G by giving value 0 to all entries indexed by vertices in V(G)\U. It is easy to check that ∆_{G/H} + ∆_H = ∆_G. Nevertheless, the rank of a divisor on G and on H need not coincide; that is, if f is a divisor on H, then f can be seen as a divisor on G, but in general ρ_G(f) ≠ ρ_H(f).
Now we show that the rank of a divisor can be computed from that of its contraction.
Proposition 5. Let G be a graph and let v be a cut vertex which decomposes G into H and G_1. If H is a tree, then for every divisor f on G we have ρ(f) = ρ(f_{G/H}).

Proof. Let r = ρ(f_{G/H}); we prove that ρ(f) = r. Let λ be a divisor on G. We have:

f − λ = (f − λ)_{G/H} + (f − λ)_{N(H)}.

Because (f − λ)_{N(H)} is of degree 0 on a tree, it is L_G-effective. This implies that f − λ is L_G-effective if and only if (f − λ)_{G/H} is L_G-effective, so ρ(f) = ρ(f_{G/H}).
From the above result, we observe that one can contract a graph by a tree, and the rank of a divisor does not change under this contraction. After such contractions, the resulting graph has no vertex of degree 1. It follows that we only need to consider connected graphs whose vertices all have degree at least two.
The situation is more complicated for contraction by a cycle, because a divisor of degree 0 on a cycle can be good or bad.

Proposition 6. Let G be a graph and let v be a cut vertex which decomposes G into H and G_1, where H is a cycle. Let f be a divisor on G. If f_{N(H)} is bad, then ρ(f) = ρ(f_{G/H} − ǫ_v).

Proof. Put r = ρ(f_{G/H}), and consider ρ(f_{G/H} − ǫ_v), which can be r or r − 1.

If ρ(f_{G/H} − ǫ_v) = r, consider any effective divisor λ of degree r. We have f − λ = (f_{G/H} − ǫ_v − λ_{G/H}) + (f_{N(H)} + ǫ_v − λ_{N(H)}). Now f_{G/H} − ǫ_v − λ_{G/H} is L_G-effective because ρ(f_{G/H} − ǫ_v) = r and deg(λ_{G/H}) = r, and f_{N(H)} + ǫ_v − λ_{N(H)} is a divisor of degree 1 on the cycle H, hence L_G-effective. Therefore f − λ is L_G-effective, which gives rank r for f.

Now if ρ(f_{G/H} − ǫ_v) = r − 1, there exists an effective divisor λ on G/H of degree r such that f_{G/H} − λ − ǫ_v is not L_G-effective. Consider λ as a divisor on G; then f_{N(H)} − λ_{N(H)} = f_{N(H)}, which is bad. So to make the part on H non-negative, at least one unit must be moved over from the other side: the part f_{G/H} − λ on G/H must give at least ǫ_v to the part on H. But we know that (f_{G/H} − λ) − ǫ_v is not L_G-effective, so this is impossible. We conclude that there exists a divisor λ of degree r such that f − λ is not L_G-effective, and hence ρ(f) = r − 1.
Proposition 7. Let G be a graph and let v be a cut vertex which decomposes G into H and G_1, where H is a cycle. Let f be a divisor on G such that f_{N(H)} is good, and put r = ρ(f_{G/H}). Then we can compute ρ(f) as follows:

ρ(f) = r if ρ(f_{G/H} − 2ǫ_v) ≥ r − 1,
ρ(f) = r − 1 if ρ(f_{G/H} − 2ǫ_v) = r − 2.

Proof. Put r = ρ(f_{G/H}), and consider ρ(f_{G/H} − 2ǫ_v); this value can be r, r − 1 or r − 2.

If ρ(f_{G/H} − 2ǫ_v) ≥ r − 1, then for every effective divisor θ on G/H of degree r − 1, f_{G/H} − 2ǫ_v − θ is L_G-effective. That means there exists α such that

(f_{G/H} − 2ǫ_v − θ) − Σ_{u∈V(G/H)\{v}} α_u ∆_G(u) ≥ 0.

Now consider any effective divisor λ of degree r; we prove that f − λ is L_G-effective. Consider λ_{G/H}. If λ_{G/H}(v) = 0, then Σ_{u∈U} λ(u) = 0, which implies that λ(u) = 0 for all u ∈ U. Then f_{N(H)} − λ_{N(H)} = f_{N(H)}, which is good and hence L_G-effective. On the other hand, f − λ = (f_{G/H} − λ_{G/H}) + (f_{N(H)} − λ_{N(H)}), and f_{G/H} − λ_{G/H} is L_G-effective by the definition of the rank of f_{G/H} (note that deg(λ_{G/H}) = r); therefore f − λ is L_G-effective.

Now suppose λ_{G/H}(v) ≥ 1. Put θ = λ_{G/H} − ǫ_v. Then deg(θ) = r − 1, and f_{G/H} − 2ǫ_v − θ is L_G-effective. That means f_{G/H} − ǫ_v − λ_{G/H} = f_{G/H} − 2ǫ_v − θ is L_G-effective. On the other hand, f − λ = (f_{G/H} − ǫ_v − λ_{G/H}) + (f_{N(H)} + ǫ_v − λ_{N(H)}); since f_{N(H)} + ǫ_v − λ_{N(H)} is of degree 1 on the cycle H, it is L_G-effective, and therefore f − λ is L_G-effective.

We now prove that if ρ(f_{G/H} − 2ǫ_v) = r − 2, then ρ(f) = r − 1. Indeed, because ρ(f_{G/H} − 2ǫ_v) = r − 2, there exists a divisor θ on G/H with θ ≥ 0, deg(θ) = r − 1, and f_{G/H} − 2ǫ_v − θ not L_G-effective, which is equivalent to (f_{G/H} − (θ + ǫ_v)) − ǫ_v not being L_G-effective. Now define the divisor λ on G by λ = θ + ǫ_w with w ∈ U\{v}; then

f − λ = (f_{G/H} − (θ + ǫ_v)) + (f_{N(H)} + ǫ_v − ǫ_w).

Consider the divisor f_{N(H)} + ǫ_v − ǫ_w: this is a divisor of degree 0 on H which is not good, because f_{N(H)} is good. So to change this divisor into a non-negative one, at least one unit must be moved over from G/H: the part f_{G/H} − (θ + ǫ_v) on G/H must give at least ǫ_v to the part on H. But we know that (f_{G/H} − (θ + ǫ_v)) − ǫ_v is not L_G-effective, so this is impossible.

We conclude that there exists an effective divisor λ of degree r such that f − λ is not L_G-effective, and hence ρ(f) = r − 1.
The two propositions above suggest computing the rank of a divisor by an elimination scheme of trees and cycles (when it exists). Unfortunately, for a general graph, we cannot reduce the graph to a single vertex by a sequence of contractions of trees and cycles. Moreover, even when we can, we must consider two cases for the elimination of a good cycle, and the latter could force the algorithm to perform an exponential number of computations.

For cactus graphs, however, one can overcome both difficulties. We will show the existence of an elimination scheme of trees and cycles on a cactus, and we will prove that only one case of Proposition 7 can occur.
3 Rank of divisors on cactus graphs

3.1 Cactus graphs and block elimination schemes

Definition 4. A cactus graph (sometimes called a cactus tree) is a connected graph in which any two simple cycles have at most one vertex in common. Equivalently, every edge in such a graph belongs to at most one simple cycle. Equivalently, every block (maximal subgraph without a cut vertex) is an edge or a cycle.
It is easy to see that in a cactus every cycle is simple, and that the number of simple cycles of a cactus G is equal to its genus g = m − n + 1.

In our study, we are interested in special cut vertices which give us an elimination scheme for a cactus.
Definition 5. We say that a graph G has a block elimination scheme if one can construct a sequence of graphs G_0, G_1, . . . , G_k, k ≥ 0, such that G_0 = G, G_k has only one vertex, denoted by r or v_{k+1}, and for all 1 ≤ i ≤ k, G_i is obtained from G_{i−1} by contracting a free block B_i of G_{i−1} at the vertex v_i. We denote this scheme by E = (G_i, v_i, B_i), 1 ≤ i ≤ k.

Moreover, on the vertex set {v_1, . . . , v_k, r = v_{k+1}}, we define the BES tree as the tree rooted at r in which, for every vertex v_i, 1 ≤ i ≤ k, the parent of v_i is the vertex v_j (with smallest index j) such that v_i belongs to the block B_j.
We remark that if a graph admits a block elimination scheme, then it generally admits several; in particular, there are at least two choices for the first block of a scheme.

Proposition 8. A connected graph G has a block elimination scheme if and only if G is a cactus graph.
Proof. a) Let G be a cactus graph. We show that G has a block cut vertex v (with its corresponding free block B); after forming G_1 from G by contracting B at v, we then show that G_1 is again a cactus graph, so we can continue this process to construct a block elimination scheme.

Suppose, for contradiction, that G has no block cut vertex. First, this implies that every vertex of G has degree at least 2. Second, every cycle C of G contains at least two vertices of degree greater than 2: indeed, if a cycle C had only one vertex v of degree greater than 2, then v would be a block cut vertex and C its corresponding block.

Let us construct the following path p. Begin at a vertex u_1 of degree greater than 2 on a cycle C_1, and go to a second vertex v_1 of degree greater than 2 by a path connecting u_1 to v_1 inside C_1. Then from v_1 go out of C_1 (this is possible because deg(v_1) > 2). Continue the path: each time it enters a new cycle C_i at a vertex u_i of degree greater than 2, it goes inside C_i to a second vertex v_i of degree greater than 2, and then goes out. This process stops when either i) the path returns to a vertex w already on p, outside the cycles it has crossed, or ii) it returns to a cycle C_i at a vertex w_i (which may differ from u_i and v_i).

Consider case i): the path p closes a cycle which is different from all the cycles having an intersection with p. Nevertheless, this cycle has two vertices in common with C_1, which contradicts the cactus property of G.

Consider case ii). If w_i equals u_i or v_i, then p contains a cycle which has two common vertices with C_i. If w_i differs from u_i and v_i, consider the path q obtained by taking the subpath of p from u_i to w_i and adding the path from w_i to u_i inside C_i which does not contain v_i. This path q closes a cycle, different from C_i, which has two common vertices with C_i. In both cases we have a contradiction.

Finally, if G has a block cut vertex v (with free block B), then the graph obtained from G by contracting B at v is clearly a cactus graph. We can conclude that every cactus graph G has a block elimination scheme.

b) Now, if G is not a cactus graph, we prove that G has no block elimination scheme. If G is not a cactus, there exists an edge (u, v) which belongs to two simple cycles C_1 and C_2. Consider the first contraction step which eliminates the edge (u, v): if C_1 is contracted then C_2 remains; but contracting C_1 sends u and v to the same vertex, while the remaining cycle C_2 requires u and v to stay distinct (and symmetrically with C_1 and C_2 exchanged). So G cannot have a block elimination scheme.
The recognition problem for cactus graphs can be solved in linear time [15] using a depth-first search. We use a similar idea to prove the following result.
Lemma 9. A block elimination scheme of a cactus graph G can be found in linear time.
Proof. We construct a tree and prove that it corresponds to a BES tree.

Let r be any vertex of G, and run a depth-first search (DFS) of G from r. From the DFS tree we construct our tree T as follows. Each cycle C has a unique vertex v that appears first in the DFS, and we represent this cycle by the node v. Similarly, each edge e which does not belong to any cycle has an extremity u that appears first in the DFS, and we represent this edge by the node u. A node x is a child of a node y if either the vertex x is a son of the vertex y in the DFS tree and the edge (x, y) does not belong to any cycle, or x belongs to the cycle represented by y.

After this contraction of the DFS tree, we obtain a tree T in which each node represents a block (a cycle or an edge) of the cactus G. Moreover, each leaf v of T represents a free block B having v as its block cut vertex in G. We can then construct a block elimination scheme of G by contracting blocks consecutively, leaf by leaf.

Finally, a DFS takes O(m) time, so this construction takes O(n) = O(m) time, as claimed.
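The DFS construction above can be sketched as follows. This is our own illustration, not the paper's code: assuming a simple connected cactus given as an adjacency dict (no multi-edges, so every cycle has length at least 3), it returns the blocks of the cactus — each cycle as its vertex sequence in cyclic order, each bridge as a pair. Ordering the blocks leaf by leaf along the block tree T then yields a scheme.

```python
def cactus_blocks(adj):
    """Block decomposition of a simple connected cactus via DFS.
    adj: dict vertex -> list of neighbours.  Returns a list of blocks:
    cycles as tuples of vertices in cyclic order, bridge edges as pairs."""
    start = next(iter(adj))
    parent, depth = {}, {}
    stack = [(start, None)]
    while stack:                                  # iterative DFS tree
        u, p = stack.pop()
        if u in depth:
            continue
        parent[u] = p
        depth[u] = 0 if p is None else depth[p] + 1
        for w in adj[u]:
            if w not in depth:
                stack.append((w, u))
    blocks, on_cycle = [], set()
    for u in adj:                                 # back edges close the cycles
        for w in adj[u]:
            if parent[u] != w and parent[w] != u and depth[u] > depth[w]:
                cyc, x = [u], u
                while x != w:                     # walk tree path from u up to w
                    x = parent[x]
                    cyc.append(x)
                for a, b in zip(cyc, cyc[1:]):
                    on_cycle.add(frozenset((a, b)))
                on_cycle.add(frozenset((u, w)))
                blocks.append(tuple(cyc))
    for u in adj:                                 # remaining tree edges are bridges
        p = parent[u]
        if p is not None and frozenset((u, p)) not in on_cycle:
            blocks.append((u, p))
    return blocks
```

For instance, two triangles {a, b, c} and {d, e, f} joined by the bridge c-d decompose into two cycle blocks and one edge block.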
Rank of divisors on cactus graphs
As remarked after Proposition 7, for a general graph there are two cases when computing the rank of a divisor from its contraction; the situation is simpler for a cactus. For this purpose, we first prove the following result.
Lemma 10. Let G be a cactus graph and let v be a vertex of G. Let f be a divisor on G. Then ρ(f_G − 2ǫ_v) < ρ(f_G).

Proof. We proceed by induction on the genus g(G).

If G has genus 0, then G is a tree, and we have

ρ(f_G − 2ǫ_v) = deg(f_G − 2ǫ_v) < deg(f_G) = ρ(f_G).

Suppose now that the statement holds for all cacti of genus smaller than k, for some k ≥ 1; we prove it for a cactus G of genus k. Consider a block cut vertex v_1 which decomposes G into H and G_1 and such that v ∉ H (such a block always exists by the remark after Definition 5). Now, applying the induction hypothesis to the graph G_1, the vertex v_1, and the divisor f_{G_1}, we have ρ(f_{G_1} − 2ǫ_{v_1}) < ρ(f_{G_1}).

If H is a tree, then ρ(f_G − 2ǫ_v) = ρ(f_{G_1} − 2ǫ_v) < ρ(f_{G_1}) = ρ(f_G).

If H is a bad cycle, then ρ(f_G − 2ǫ_v) = ρ(f_{G_1} − 2ǫ_v − ǫ_{v_1}) = ρ((f_{G_1} − ǫ_{v_1}) − 2ǫ_v) < ρ(f_{G_1} − ǫ_{v_1}) = ρ(f_G).

If H is a good cycle, then, because ρ(f_{G_1} − 2ǫ_{v_1}) < ρ(f_{G_1}), we get ρ(f_G − 2ǫ_v) = ρ((f_{G_1} − 2ǫ_v) − 2ǫ_{v_1}) + 1 = ρ((f_{G_1} − 2ǫ_{v_1}) − 2ǫ_v) + 1 < ρ(f_{G_1} − 2ǫ_{v_1}) + 1 = ρ(f_G).

So in every case ρ(f_G − 2ǫ_v) < ρ(f_G) for G of genus k, which completes the induction.
From Lemma 10 and Proposition 7, we directly obtain the following result.

Corollary 11. Let G be a cactus graph and let v be a vertex which decomposes G into two graphs H and G_1, where H is a cycle. Let f be a divisor on G such that f_{N(H)} is good. Then ρ(f_G) = ρ(f_{G/H} − 2ǫ_v) + 1.
We can now prove our main result.
Theorem 12. Let G be a cactus, and let E = (G_i, v_i, H_i), 1 ≤ i ≤ k, be a block elimination scheme of G. Then we can compute the rank of any divisor f on G in linear time by the following recursion. For all 0 ≤ i < k:

ρ(f_{G_i}) = ρ(f_{G_{i+1}}) if H_{i+1} is an edge,
ρ(f_{G_i}) = ρ(f_{G_{i+1}} − ǫ_{v_{i+1}}) if H_{i+1} is a cycle and f_{N(H_{i+1})} is bad,
ρ(f_{G_i}) = ρ(f_{G_{i+1}} − 2ǫ_{v_{i+1}}) + 1 if H_{i+1} is a cycle and f_{N(H_{i+1})} is good,

with, on the one-vertex graph G_k, the base case ρ(f_{G_k}) = f_{G_k}(r) if f_{G_k}(r) ≥ 0, and ρ(f_{G_k}) = −1 otherwise.
Proof. The correctness of this algorithm follows from the propositions and the corollary above. We now prove the complexity bound.

Given a block elimination scheme, at step i one must compute f_{G_{i+1}}. It takes constant time to check whether H_{i+1} is an edge or a cycle, and, if H_{i+1} is a cycle, O(|H_{i+1}|) time to check whether f_{N(H_{i+1})} is good or bad. In each case, one then recurses with ρ(f_{G_{i+1}} − ǫ_{v_{i+1}}) or ρ(f_{G_{i+1}} − 2ǫ_{v_{i+1}}) on the new graph G_{i+1}, whose size is smaller than that of G_i by O(|H_{i+1}|).

In total, the algorithm takes O(|G|) = O(n) time.
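The recursion of Theorem 12 can be sketched as follows. This is our illustration, under stated assumptions: a simple cactus, a precomputed scheme whose cycle blocks are listed in cyclic order, and the one-vertex base case ρ = d if d ≥ 0 and −1 otherwise. For the goodness test we use the form Σ_{i=1}^{n} i·f_i ≡ 0 (mod n) of Proposition 4, which is equivalent and does not depend on which cycle vertex is labelled first (rotating the labels changes the sum by a multiple of deg(f) = 0 modulo n).

```python
def rank_cactus(f, scheme):
    """Rank of a divisor f (dict vertex -> int) on a cactus, following the
    recursion of Theorem 12.  scheme: list of (block, v) pairs, where block
    is a tuple of the free block's vertices (cycles in cyclic order) and v
    is its block cut vertex; the scheme ends with one remaining vertex."""
    f = dict(f)
    shift = 0                                   # +1 per good cycle (Corollary 11)
    for block, v in scheme:
        n = len(block)
        total = sum(f[u] for u in block)        # value of the contraction at v
        if n > 2:                               # cycle block
            zero = {u: f[u] for u in block if u != v}
            zero[v] = -sum(zero.values())       # zero part f_{N(H)}, degree 0
            good = sum((i + 1) * zero[u] for i, u in enumerate(block)) % n == 0
            for u in block:
                if u != v:
                    del f[u]
            f[v] = total - (2 if good else 1)   # subtract 2*eps_v (good) or eps_v (bad)
            shift += 1 if good else 0
        else:                                   # edge block: rank unchanged
            for u in block:
                if u != v:
                    del f[u]
            f[v] = total
    (d,) = f.values()                           # one vertex left: base case
    return (d if d >= 0 else -1) + shift
```

On a single triangle this reproduces Proposition 3: a good degree-0 divisor has rank 0, a bad one has rank −1, and a divisor of degree d ≥ 1 has rank d − 1.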
References

[1] Matthew Baker and Serguei Norine. Riemann-Roch and Abel-Jacobi theory on a finite graph. Adv. Math., 215(2):766-788, 2007.

[2] A. Bjorner, L. Lovász, and W. Shor. Chip-firing games on graphs. European J. Combinatorics, 12:283-291, 1991.

[3] Lucia Caporaso, Yoav Len, and Margarida Melo. Algebraic and combinatorial rank of divisors on finite graphs. Journal de Mathématiques Pures et Appliquées, 104(2):227-257, 2015.

[4] Robert Cori and Yvan Le Borgne. The Riemann-Roch theorem for graphs and the rank in complete graphs. http://arxiv.org/abs/1308.5325, 2014.

[5] Kalyani Das. An optimal algorithm to find maximum independent set and maximum 2-independent set on cactus graphs. AMO - Advanced Modeling and Optimization, 12(2):239-248, 2010.

[6] E. Goles, M. Latapy, C. Magnien, M. Morvan, and H. D. Phan. Sandpile models and lattices: a comprehensive survey. Theoret. Comput. Sci., 322(2):383-407, 2004.

[7] Frank Harary and George E. Uhlenbeck. On the number of Husimi trees, I. Proceedings of the National Academy of Sciences, 39(4):315-322, 1953.

[8] Arthur M. Hobbs. Hamiltonian squares of cacti. Journal of Combinatorial Theory, Series B, 26(1):50-65, 1979.

[9] Viktor Kiss and Lilla Tothmeresz. Chip-firing games on Eulerian digraphs and NP-hardness of computing the rank of a divisor on a graph. arXiv:1407.6958v3 [cs.CC], 2015.

[10] Tetsuo Nishi. On the number of solutions of a class of nonlinear resistive circuit. Proceedings of the IEEE International Symposium on Circuits and Systems, 1991.

[11] Benedict Paten, Mark Diekhans, Dent Earl, John St. John, Jian Ma, Bernard Suh, and David Haussler. Research in computational molecular biology. Lecture Notes in Computer Science, 6044:766-769, 2010.

[12] Kevin Perrot and Trung Van Pham. Feedback arc set problem and NP-hardness of minimum recurrent configuration problem of chip-firing game on directed graphs. Annals of Combinatorics, 19(2):373-396, 2015.

[13] Trung Van Pham and Thi Ha Duong Phan. Lattices generated by chip firing game models: criteria and recognition algorithms. European Journal of Combinatorics, 34(5):812-832, 2013.

[14] K. E. Yu. Representation of temporal knowledge. Proc. 8th International Joint Conference on Artificial Intelligence, 1983.

[15] Blaz Zmazek and Janez Zerovnik. Computing the weighted Wiener and Szeged number on weighted cactus graphs in linear time. Croatica Chemica Acta, 76(2):137-143, 2003.
Unity of Fundamental Interactions

Ramchander R Sastry
Center for Particle Physics, University of Texas at Austin, Austin, Texas 78712-1081

arXiv:hep-ph/0004099v3, 9 May 2000
The vector representation of the linearized gravitational field (the graviton field), or the so-called quantum gravitodynamics, which describes the motion of masses in a weak gravitational field, is employed to understand the unity of the four known interactions. We propose a gauge group SU (3) × SU (2) × U (1) × U (1) for such a unified field theory. In this paper we study the SU (2) × U (1) × U (1) sector of the theory and, in analogy to the electroweak mixing angle, we define a gravitoweak mixing angle. The unified gauge field theory predicts the existence of three massive vector bosons, the Y ± and the X 0 , and two massless vector bosons, the photon and the graviton (in its vector representation). We determine the mass spectrum of the Y ± and the X 0 and predict a modification to the fine structure constant under unified field conditions. Furthermore, we briefly discuss the implications of the extended object formulation for the gauge hierarchy problem.
I. INTRODUCTION
The quantum mechanics of extended objects [1] and its infinite dimensional generalization, namely, the quantum field theory of extended objects — in particular φ^6 scalar field theory [2], quantum electrodynamics with the Pauli term [3], and quantum gravitodynamics [4] — have been presented by the author. In quantum gravitodynamics, the author develops an approach to understanding the response of a lepton to a weak (linearized) gravitational field by making use of the vector representation of the linearized gravitational field [5]. In the covariant perturbation theory approach, the quantum theory of gravity is rendered finite by making use of a Euclidean, retarded, graviton propagator given by:

(δ_{µρ} δ_{νσ} + δ_{µσ} δ_{νρ} − δ_{µν} δ_{ρσ}) e^{−k²/m²} / k²   (1.1)

where 1/m is the graviton Compton wavelength, given by 6.7 × 10^{−4} R, where R = c/H is the "Hubble radius" of the universe and H is the Hubble constant. The graviton propagator is defined in the linear approximation, since the notion of mass and spin of a field requires the presence of a flat background metric η_{µν}, which one does not have in the full theory. The full theory of general relativity may then be viewed as that of a graviton field which undergoes a nonlinear self-interaction. The propagator in Eq. (1.1) will render such a full theory finite to all orders. It is the discovery of this propagator which motivates us to study the possibility of unifying the graviton field with the existing electroweak theory. It is known that linearized gravity predicts that the motion of masses produces magnetic gravitational effects very similar to electromagnetism [5]. The effective interaction between the electron and the graviton field can be understood in the vector representation, where we make use of a propagator with the functional dependence given in Eq. (1.1) but with suitable vector indices. The author has calculated the order α correction to the magnetic gravitational moment by using such a propagator [4].
Therefore, we are motivated to propose a gauge group SU (3) × SU (2) × U (1) × U (1) for the unified field theory which incorporates the strong force, the weak and electromagnetic interactions, and the graviton field. In this paper, we focus on the SU (2) × U (1) × U (1) sector of the gauge theory. We study the feasibility of such a gauge structure, its implications for the existence of massive vector bosons, the Y ± and the X 0 , and the determination of their mass spectrum. We also predict a modification to the fine structure constant under unified field conditions. Furthermore, the consequences of the extended object formulation for the gauge hierarchy problem are examined.
II. SU(2) × U(1) × U(1)
Let us consider the electronic-type lepton fields which consist of only the left- and right-handed parts of the electron field e:
e_L = (1/2)(1 + γ_5) e,   e_R = (1/2)(1 − γ_5) e   (2.1)
and a purely left-handed electron-neutrino field ν eL :
γ 5 ν eL = ν eL . (2.2)
In any representation of the gauge group, the fields must all have the same Lorentz transformation properties, so the representations of the gauge group must divide into a left-handed doublet (ν eL , e L ) and a right handed singlet e R . Thus, the largest possible gauge group is then
SU (2) × U (1) × U (1) (2.3)
under which the fields transform as
δ (ν_e, e)^T = i [ ǫ · t + ǫ_L t_L + ǫ_R t_R ] (ν_e, e)^T   (2.4)
where the generators are
t = (g/4)(1 + γ_5) { ( 0 1 ; 1 0 ), ( 0 −i ; i 0 ), ( 1 0 ; 0 −1 ) },   (2.5)
t_L ∝ (1 + γ_5) ( 1 0 ; 0 1 ),   (2.6)
t_R ∝ (1 − γ_5)   (2.7)
with g an unspecified constant. It will be convenient, instead of t_L and t_R, to consider the generators
y = g′ [ (1 + γ_5)/4 · ( 1 0 ; 0 1 ) + (1 − γ_5)/2 ]   (2.9)
and
n_e = g′′ [ (1 + γ_5)/2 · ( 1 0 ; 0 1 ) + (1 − γ_5)/2 ],   (2.10)
where g ′ and g ′′ are unspecified constants like g. The generator y (the hypercharge) appears along with t 3 (the isospin operator) in a linear combination to define the charge q of the pair (ν eL , e L ):
q = e ( 0 0 ; 0 −1 ) = (e/g) t_3 − (e/g′) y.   (2.11)
Also, n_e is the electron-type lepton number and it defines the mass of the left-handed pair (ν_eL, e_L) and the right-handed singlet e_R:
m_ab = m ( 1 0 ; 0 1 ) = (m/g′′) n_e   (2.12)
where m is the electron mass. Thus, the charge couples to the electromagnetic field and the mass (in geometrized units) couples to the weak gravitational field. We want to include charge changing weak interactions (like beta decay), electromagnetism, and the graviton field in our theory, so we will assume there are gauge fields A µ , B µ , and C µ coupled to t, y, and n e respectively. Before we include the graviton field in our theory we must ensure that it satisfies the stringent limits on long range forces that would be produced by a massless gauge field coupled to n e 6 . Since the gravitational interaction is much weaker than the weak or electromagnetic interactions we are free to include a gauge field C µ with strength g ′′ coupled to n e . The gauge group is then
G = SU (2) L × U (1) × U (1) (2.13)
where the generators t, y, and n e are given by Eq. (2.5), Eq. (2.9), and Eq. (2.10) respectively.
The most general gauge-invariant and renormalizable Lagrangian that involves gauge-fields and electronic leptons is
L_YM + L_LG + L_e = − (1/4)( ∂_µ A_ν − ∂_ν A_µ + g A_µ × A_ν )² − (1/4)( ∂_µ B_ν − ∂_ν B_µ )² − (1/4)( ∂_µ C_ν − ∂_ν C_µ )² − l̄ ( /∂ − i /A · t − i /B y − i /C n_e ) l.   (2.14)
The coupling constants g and g ′ are to be adjusted so that the gauge fields A µ , B µ , and C µ coupled to these generators are canonically normalized. Now, of these five gauge fields coupled to t, y, and n e , only two linear combinations, the electromagnetic field A µ , and the graviton field (vector representation) A G µ are actually massless. We therefore must assume that
SU(2)_L × U(1) × U(1) is spontaneously broken into U(1)_em × U(1)_gravity, with generators given by the hypercharge y and the electron-type lepton number n_e. The details of the symmetry-breaking mechanism will be considered a little later. However, whatever this mechanism may be, we know that the canonically normalized vector fields corresponding to particles of spin one and definite mass consist of one field of charge +e with mass m_Y
Y^µ = (1/√2)(A^µ_1 + i A^µ_2)   (2.15)
and another of charge −e and the same mass
Y^{µ*} = (1/√2)(A^µ_1 − i A^µ_2)   (2.16)
and three electrically neutral fields of mass m X , zero, and zero respectively given by orthonormal linear combinations of A µ 3 , B µ , and C µ :
X^µ = cos φ A^µ_3 + sin φ B^µ   (2.17)
A^µ = −cos θ sin φ A^µ_3 + cos θ cos φ B^µ + sin θ C^µ   (2.18)
A^µ_G = sin θ sin φ A^µ_3 − sin θ cos φ B^µ + cos θ C^µ   (2.19)
where φ is the electroweak mixing angle (the Weinberg angle) and θ is the gravitoweak mixing angle. These linear combinations employ the Euler angles for a transformation from space axes to body coordinates with the third rotation set to zero; the third rotation vanishes because both the electromagnetic and graviton fields are massless U(1) gauge fields. In the limit as the gravitoweak angle θ goes to zero, we recover the linear combinations necessary to generate the electroweak mass spectrum 7 . In the above linear combinations we observe that the massive fields Y^µ_± and X^µ are specified entirely in terms of the gauge fields A^µ and B^µ.
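As a consistency check, the combinations in Eqs. (2.17)-(2.19) should form an orthonormal rotation of (A^µ_3, B^µ, C^µ) for any mixing angles. A short pure-Python sketch (the angle values are illustrative, not taken from the text) verifies this:

```python
import math

def neutral_mixing_matrix(phi, theta):
    """Rows express (X, A, A_G) in terms of (A_3, B, C); Eqs. (2.17)-(2.19)."""
    cp, sp = math.cos(phi), math.sin(phi)
    ct, st = math.cos(theta), math.sin(theta)
    return [
        [cp,       sp,       0.0],   # X
        [-ct * sp, ct * cp,  st],    # A   (photon)
        [st * sp,  -st * cp, ct],    # A_G (graviton, vector representation)
    ]

def is_orthonormal(m, tol=1e-12):
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return all(abs(dot(m[i], m[j]) - (1.0 if i == j else 0.0)) < tol
               for i in range(3) for j in range(3))

# Illustrative values: sin^2(phi) = 0.23 (Weinberg angle), arbitrary theta.
phi = math.asin(math.sqrt(0.23))
theta = 0.3
assert is_orthonormal(neutral_mixing_matrix(phi, theta))
```

Orthonormality holds for every (φ, θ), consistent with the Euler-angle construction described above.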
Since the electrogravity mixing angle is zero, spontaneous symmetry breaking, which generates the vector meson term, occurs only in the electroweak sector of the theory. However, the coupling constants g and g′ of the electroweak sector are specified in terms of the coupling constant g′′ of the gravity sector as shown below. Thus, the spontaneous symmetry breaking of SU(2)_L × U(1) × U(1) into U(1) × U(1) will generate two massless particles, namely, the photon and the graviton. Now, the generators of the unbroken symmetries, which are here electromagnetic and gravitodynamic gauge invariance, are given by a linear combination of generators in which the coefficients are the same as the coefficients of the canonically normalized gauge fields coupled to these generators 7 . Inspecting Eqs. (2.20)-(2.22) shows that
q = −cos θ sin φ t_3 + cos θ cos φ y,   (2.23)
m_ab = cos θ n_e.   (2.24)
Comparing this with Eq. (2.11) and Eq. (2.12) then gives
g = −e/(cos θ sin φ),   g′ = −e/(cos θ cos φ),   g′′ = m/cos θ.   (2.25)
To complete the theory, we must now make some assumption about the mechanism of symmetry breaking. This mechanism must give masses not only to the Y ± and X 0 , but to the electron as well. Thus, we assume a 'Yukawa' coupling
L_φ = −G_e ( ν̄_e  ē ) (φ⁺, φ⁰)^T e_R + H.c.,   (2.26)
where (φ + , φ 0 ) is a doublet on which the SU (2) L × U (1) generators are represented by the matrices:
t^{(φ)} = (g/2) { ( 0 1 ; 1 0 ), ( 0 −i ; i 0 ), ( 1 0 ; 0 −1 ) },   (2.27)
y^{(φ)} = −(g′/2) ( 1 0 ; 0 1 )   (2.28)
so that the charge matrix is
q = e ( 1 0 ; 0 0 ) = (e/g) t^{(φ)}_3 − (e/g′) y^{(φ)}.   (2.29)
The most general form of the gauge-invariant term involving scalar and gauge fields consistent with the SU (2) L × U (1) sector of the theory is:
L_φ = −(1/2) | ( ∂_µ − i A_µ · t^{(φ)} − i B_µ y^{(φ)} ) φ |² − (µ²/2) φ†φ − (λ/4)(φ†φ)²   (2.30)
where λ > 0 and
φ = φ + φ 0 . (2.31)
For µ² < 0, there is a tree-approximation vacuum expectation value at the stationary point of the Lagrangian:
⟨φφ†⟩ = v² = |µ²|/λ.   (2.32)
In unitarity gauge the vacuum expectation values of the components of φ are
φ + = 0, φ 0 = v > 0. (2.33)
The scalar Lagrangian Eq. (2.30) then yields a vector meson mass term of the form
−(v² g²/4) Y†_µ Y^µ − (v²/8)(g² + g′²) X_µ X^µ,   (2.34)
where
g/g′′ = −e/(m sin φ),   (2.35)
g′/g′′ = −e/(m cos φ),   (2.36)
g′′ = m/cos θ.   (2.37)
Here, φ is the Weinberg angle and θ is the gravitoweak mixing angle. We see that the photon mass is zero corresponding to an unbroken gauge symmetry U (1) em and the graviton mass is also zero corresponding to another unbroken gauge symmetry U (1) gravity while the Y ± and X 0 have the masses
m_Y = v|g|/2,   m_X = (v/2) √(g² + g′²).   (2.38)
Now, consider the relation 7
g²/m²_Y = 4√2 G_F   (2.39)
where G F = 1.16639(2) × 10 −5 GeV −2 is the Fermi constant. This relation is obtained by comparing the effective interaction between low energy, e-type and µ-type leptons with the effective 'V-A' theory which is known to give a good description of muon decay. This allows an immediate determination of the vacuum expectation value as
v = 2 m_Y/|g| = 247 GeV.   (2.40)
By making use of the known value of the electroweak mixing angle, given by sin² φ = 0.23, we can determine the masses of Y ± and X 0 in terms of the gravitoweak mixing angle as:
m_Y = e_µ v/(2|cos θ||sin φ|) = 80.2 GeV/|cos θ|   (2.41)
m_X = e_µ v/(2|cos θ||sin φ||cos φ|) = 91.3 GeV/|cos θ|   (2.42)
where e µ is the electric charge defined at a sliding scale µ comparable to the energies of interest.
We observe that as θ → 0 we regain the W and Z boson masses which is a result we expect.
Thus, a unified field theory predicts the existence of massive vector bosons Y ± and X 0 with the mass spectrum given in Eqs.(2.41)-(2.42). If we express the covariant derivative in Eq. (2.30) in terms of the mass eigenstate fields Y ± µ , X µ , and A µ we find that the coefficient of the electromagnetic interaction is not the electron charge e, but rather the effective electron charge
e′ = e/|cos θ| = g g′/√(g² + g′²).   (2.43)
We observe that e ′ ≥ e with equality being achieved when the gravitoweak mixing angle θ is zero. The mixing between the weak interaction and the graviton field causes an increase in the electromagnetic coupling strength. This is because the electromagnetic coupling is a function of the gauge couplings g and g ′ which have a dependence on θ. If α G is the fine structure constant of an electron in the unified field, then we have:
α_G/α = 1/cos² θ,   (2.44)
implying that the fine structure constant suffers a modification. This would mean that if we were to measure the Lamb shift under unification conditions, the correction to the g-factor of the electron would be
a_e = α_G/(2π) = 0.0011597/cos² θ.   (2.45)
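Under the relations above, the boson masses, effective charge, and g-factor correction all depend on the gravitoweak angle θ only through cos θ. A small numerical sketch (the θ = 0 values 80.2 GeV, 91.3 GeV, and a_e = 0.0011597 are taken directly from Eqs. (2.41)-(2.45); the function name is our own):

```python
import math

def unified_spectrum(theta):
    """Masses (GeV) and coupling shifts as functions of the gravitoweak
    mixing angle theta, per Eqs. (2.41)-(2.45)."""
    c = abs(math.cos(theta))
    return {
        "m_Y": 80.2 / c,             # Eq. (2.41)
        "m_X": 91.3 / c,             # Eq. (2.42)
        "e_ratio": 1.0 / c,          # e'/e,           Eq. (2.43)
        "alpha_ratio": 1.0 / c**2,   # alpha_G/alpha,  Eq. (2.44)
        "a_e": 0.0011597 / c**2,     # g-factor corr., Eq. (2.45)
    }

# theta -> 0 recovers the ordinary W and Z masses and the usual g-factor.
s0 = unified_spectrum(0.0)
assert abs(s0["m_Y"] - 80.2) < 1e-9 and abs(s0["m_X"] - 91.3) < 1e-9
# Any nonzero mixing raises every quantity above its electroweak value.
s = unified_spectrum(0.2)
assert s["m_Y"] > 80.2 and s["a_e"] > 0.0011597
```

This makes explicit the claim that measurable deviations of m_Y, m_X, or a_e from their electroweak values would all be controlled by a single parameter, cos θ.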
III. THE GAUGE HIERARCHY PROBLEM
We begin with the reasonable observation that if SU(2) × U(1) is broken by the vacuum expectation value of an elementary scalar field, then that scalar field should be part of the grand unification. In order to produce a vacuum expectation value of the right size to give the observed W and Z boson masses, the Higgs scalar field must obtain a negative mass term of the size 8
−µ² ∼ −(100 GeV)².   (3.1)
Now, the mass term can be expressed in terms of the vacuum expectation value v as
|µ 2 | = λv 2 (3.2)
where λ is the renormalizable coupling in (φ † φ) 2 charged scalar field theory. Therefore, the (mass) 2 receives additive renormalizations. In a theory with a cutoff scale Λ, µ 2 can be much smaller than Λ 2 only if the bare mass of the scalar field is of the order −Λ 2 and this value is canceled down to −µ 2 by radiative corrections. If our theory of nature contains very large scales of grand unification, then the appropriate value for Λ is 10 16 GeV or larger and it would require bizarre cancellations in the renormalized value of µ 2 . Thus, the Higgs boson mass is very small compared to the grand unification scale. It is a mystery as to why the (mass) 2 of the Higgs boson has a value 28 orders of magnitude or more below its natural value and this question is referred to as the gauge hierarchy problem. However, at grand unification energy scales the contributions of hitherto nonrenormalizable terms such as the Pauli term become significant 3 .
The description of quantum electrodynamics with the Pauli term necessitates the introduction of the quantum field theory of extended objects in which the finite extent of a particle defined via its Compton wavelength is incorporated into the field structure and leads to a finite interaction.
Since hitherto nonrenormalizable terms become important at grand unification scales, it would be more correct if we consider SU (2) × U (1) to be broken by the vacuum expectation value of a hitherto nonrenormalizable (φ † φ) 3 scalar field which can be rendered finite in the extended object formulation 2 . The coupling λ now becomes a finite coupling and the (mass) 2 does not receive additive renormalizations. Consider the potential
V (φ) = −µ 2 (φ † φ) + λ(φ † φ) 3 (3.3)
which has a tree-approximation vacuum expectation value at
⟨φφ†⟩ = (|µ²|/λ)^{1/2},   (3.4)
implying that
|µ²| = λ v⁴,   (3.5)
where λ is now a finite coupling. Therefore, we can now expect the Higgs boson mass to be of the order of 100 GeV without any conceptual difficulty.
IV. CONCLUSION
The general gauge group SU(3) × SU(2) × U(1) × U(1) appears to describe the four known interactions in a consistent fashion. We are able to predict the existence of gauge bosons Y ± and X 0 for the SU(2) × U(1) × U(1) sector of this unified theory and determine the mass spectrum of the gauge bosons. We have also shown that the fine structure constant is modified under unified field conditions. In addition, a possible resolution of the gauge hierarchy problem has been discussed. The results of this paper need to be subjected to experimental tests.
[Displaced from Section II.] The two U(1)'s are independent of each other; hence, the mixing angle between the electromagnetic and graviton fields (the electrogravity angle) is zero. The electromagnetic field mixes with the weak interaction via the Weinberg angle, and the weak interaction in turn mixes with the graviton field via the gravitoweak mixing angle. By making use of the inverse transformation back to space axes we have:
A^µ_3 = cos φ X^µ − cos θ sin φ A^µ + sin θ sin φ A^µ_G   (2.20)
B^µ = sin φ X^µ + cos θ cos φ A^µ − sin θ cos φ A^µ_G   (2.21)
C^µ = sin θ A^µ + cos θ A^µ_G   (2.22)
1. R. R. Sastry, quant-ph/9903025.
2. R. R. Sastry, hep-th/9903171.
3. R. R. Sastry, hep-th/9903179.
4. R. R. Sastry, hep-th/9905060.
5. R. M. Wald, General Relativity, The University of Chicago Press, 1984.
6. T. D. Lee and C. N. Yang, Phys. Rev. 98, 1501 (1955).
7. S. Weinberg, The Quantum Theory of Fields, Volume I, Cambridge University Press, 1995.
8. M. E. Peskin and D. V. Schroeder, An Introduction to Quantum Field Theory, Addison-Wesley, 1995.
ESTIMATION OF POLICY-RELEVANT CAUSAL EFFECTS IN THE PRESENCE OF INTERFERENCE WITH AN APPLICATION TO THE PHILADELPHIA BEVERAGE TAX

A PREPRINT

February 2, 2023
Gary Hettinger, Division of Biostatistics, University of Pennsylvania, Philadelphia, PA, U.S.A.
Christina Roberto, Department of Medical Ethics and Health Policy, University of Pennsylvania, Philadelphia, PA, U.S.A.
Youjin Lee, Department of Biostatistics, Brown University, Providence, RI, U.S.A.
Nandita Mitra, Division of Biostatistics, University of Pennsylvania, Philadelphia, PA, U.S.A.
arXiv:2301.06697v2 [stat.ME]

Keywords: Difference-in-Differences · Doubly Robust · Health Policy · Spillover
To comprehensively evaluate a public policy intervention, researchers must consider the effects of the policy not just on the implementing region, but also nearby, indirectly-affected regions. For example, an excise tax on sweetened beverages in Philadelphia was shown to not only be associated with a decrease in volume sales of taxed beverages in Philadelphia, but also an increase in sales in bordering counties not subject to the tax. The latter association may be explained by cross-border shopping behaviors of Philadelphia residents and indicate a causal effect of the tax on nearby regions, which may offset the total effect of the intervention. To estimate causal effects in this setting, we extend difference-in-differences methodology to account for such interference between regions and adjust for potential confounding present in quasi-experimental evaluations. Our doubly robust estimators for the average treatment effect on the treated and neighboring control relax standard assumptions on interference and model specification. We apply these methods to evaluate the change in volume sales of taxed beverages in 231 Philadelphia and bordering county stores due to the Philadelphia beverage tax. We also use our methods to explore the heterogeneity of effects across geographic features.
Introduction
In January 2017, the City of Philadelphia, Pennsylvania (PA) implemented an excise tax of 1.5 cents per ounce on sugar- and artificially-sweetened beverages to raise revenue for educational initiatives including the city's Pre-Kindergarten expansion and Community Schools program (City of Philadelphia [2016]). The decision was motivated also in part by studies that associated excise taxes with reduced intake of taxed beverages (Brownell et al. [2009], Cabrera Escobar et al. [2013]). City-level policy makers hoped to reduce the consumption of such beverages, given evidence linking sweetened beverage consumption to negative health outcomes such as obesity and type 2 diabetes (Hu [2013]). Despite generating over $330 million in revenue from January 2017 to June 2021 (Rhynhart [2022]), there have been recent efforts to repeal the tax, motivated by claims of disproportionate economic burden and loss of retailer profits. On the other hand, several studies have shown the benefits of the Philadelphia beverage tax (PBT) in reducing sales, which presumably have led to a reduction in intake (Roberto et al. [2019], Lawman et al. [2020], Bleich et al. [2021], Edmondson et al. [2021], Petimar et al. [2022]).
To assess the causal effects of public policies and excise taxes, researchers often use a difference-in-differences (DiD) approach, which estimates the effect of the intervention by taking the difference in outcome trends between comparable regions with and without the intervention of interest (Ashenfelter [1978], Ashenfelter and Card [1984]). Previous studies employed DiD methods to compare Philadelphia (treated region) to Baltimore, Maryland (control region), finding that city-level sweetened beverage volume sales declined by 51% in Philadelphia in the year following tax initiation (Roberto et al. [2019]) with evidence of sustained declines two years after tax initiation (Petimar et al. [2022]).
Underlying the causal interpretation of the DiD framework are strong identification assumptions, including the key but untestable counterfactual parallel trends assumption, which necessitates that the average outcomes of the treated group would have evolved in parallel with the average outcomes of the control group had the intervention never occurred (Heckman et al. [1997]). Previous authors have developed methods to relax this assumption in order to estimate the average treatment effect on the treated (ATT), requiring that counterfactual parallel trends hold only after conditioning on observed pre-intervention confounding variables. Heckman et al. [1997] presented methods to adjust for confounding with outcome regression modeling or propensity score matching, requiring correctly specified outcome or propensity score models, respectively. Abadie [2005] used a propensity score model to develop an inverse probability of treatment weighting (IPW) estimator. Recent work by Li and Li [2019] and Sant'Anna and Zhao [2020] further relaxed model specification assumptions by developing doubly robust estimators for the ATT which require only that at least one of the outcome and propensity score models is correctly specified.
Many policy evaluations face the added complexity that individuals may avert taxes and restrictions by crossing into neighboring regions where the policy is not in place. Evidence of this behavior is common in practice for excise and sales taxes (Asplund et al. [2007]), gun policies (Raifman et al. [2020]), marijuana restrictions (Hao and Cowan [2020]), and more. Evaluations of excise taxes on sweetened beverages have generally found evidence of significant cross-border purchasing, with a few exceptions (Andreyeva et al. [2022]). When studying the effects of the PBT, Roberto et al. [2019] and Petimar et al. [2022] used DiD methods to compare Philadelphia-bordering counties to Baltimore and estimated that 25 − 30% of the total effect of the PBT on volume sales was offset by cross-border shopping. These behaviors fall under the umbrella labeled interference and violate the Stable Unit Treatment Value Assumption (SUTVA) from Rubin's formulation of the potential outcomes framework for causal inference (Rubin [1980]). While the literature has synonymously referred to this violation as spillover, we find it more intuitive to reserve that term for situations where a neighboring region experiences the direct extension of the effect on the intervened region, such as when a vaccination mandate provides an additive layer of protection to nearby regions. Conversely, the desired policy effects in the aforementioned policy studies are likely reduced by individuals crossing regional boundaries to bypass impositions. Accordingly, here we call this subset of interference a bypass effect.
Major advances have been developed to identify and estimate direct and spillover effects in controlled and observational studies where interference is believed to occur between individuals of a particular group, but not across these groups (Hudgens and Halloran [2008], Tchetgen and VanderWeele [2012], Liu et al. [2019], Papadogeorgou et al. [2019], Huber and Steinmayr [2021]). However, little has been published on DiD methods that specifically target bypass effects in policy evaluations, where an intervention is introduced to an entire group and bypass effects occur between groups. Working papers by Clarke [2017] and Butts [2021] have defined causal estimands of interest and identification assumptions for general potential outcomes under interference, but rely on a two-way fixed effects model (TWFE), which imposes strict parametric and effect homogeneity assumptions to identify an unbiased treatment effect (de Chaisemartin and D'Haultfoeuille [2022]).
In this work, we develop flexible methodology to robustly estimate the causal effects of the PBT on both Philadelphia and its neighboring counties under interference while also adjusting for confounding. This methodology is doubly robust in that our estimators are consistent if at least one of the propensity score or outcome models, which can be estimated non-parametrically, are well-specified. Our ensuing analysis serves to provide a framework for practitioners studying policies susceptible to bypass effects as well as further evidence of the effect of the PBT on volume sales in Philadelphia and its neighboring regions.
In what follows, Section 2 introduces the PBT study data. In Section 3, we present relevant notation, review a potential outcomes framework under interference, propose a modified SUTVA, and present doubly robust estimators for the treated and bypass effects. In Section 4, we conduct an analysis of the PBT to provide new insights on the comprehensive causal effect of the tax. We conduct simulation studies to compare the finite sample performance and empirically verify robustness properties of different DiD methods under realistic scenarios in Section 5. We conclude with a discussion in Section 6.
Philadelphia Beverage Tax Data
Beverage price and sales data were purchased from the market research firm Information Resources Inc (IRI), which obtains data from major US retailers described elsewhere (Muth et al. [2016]). For our study, we used retail sales data reported in 4-week periods for beverages sold from January 1, 2016 to December 31, 2017 in stores from Philadelphia, other PA counties, and Baltimore. Baltimore was chosen to be a comparison city due to its demographic and geographic similarities to Philadelphia and was not directly or indirectly affected by a beverage excise tax. Data were provided at the individual beverage level based on a unique universal product code and aggregated at the store-level. Store and beverage categorization, as well as price and sales aggregations, were conducted as described in Roberto et al. [2019].
For each store, we observed price and sales records at 26 time points (13 before and 13 after tax implementation) for multiple taxed and non-taxed beverage categories, which accounted for $14.3 billion and $19.9 billion in sales, respectively. We further merged store classifications into two categories: one encompassing supermarkets, grocery stores, and mass merchandisers (SGMs), and one for pharmacies, which often demonstrate different consumer purchasing behaviors than SGMs. Among the 558 stores in our study, 180 were Philadelphia stores (40 SGMs, 140 pharmacies), 318 were stores from other PA counties (123 SGMs, 195 pharmacies), and 60 were Baltimore stores (15 SGMs, 45 pharmacies). We additionally linked zip code-level socioeconomic and racial census data from 2016 to each of the stores in our study. The IRI data contained no missing data.
DiD Methodology for Causal Effects on Treated and Neighboring Control Regions
Here we develop doubly robust estimators for both the policy intervention effect and the bypass effect. To introduce our proposed framework, we first introduce a potential outcomes representation under interference, called an exposure mapping (Aronow and Samii [2017]), and define relevant estimands under this representation. We then provide conditions necessary to identify these estimands and present our proposed DiD estimators.
Exposure Mapping for Potential Outcomes under Interference
Assume we have a collection of units, i = 1, . . . , n, observed across a pre-and post-treatment period, t = 0, 1. For simplicity, we will first introduce methodology in the setting with a single observation per treatment period, i.e. one time point before and after the tax implementation, and thus the only relevant time dimension spans across different treatment periods, which we refer to as t-time. We will then extend this method to the setting with multiple observations per treatment period in Section 3.5.
For each unit, we observe a baseline covariate vector, X i , and a binary treatment group indicator, A i . Then, let Z it = tA i represent the treatment status of unit i in period t. We label the treatment and treatment status vectors for the entire population as A and Z t . Finally, we denote the outcome in each period for each unit as Y it .
Each unit has a potential outcome in each treatment period under each population treatment assignment, Y_it^{(Z_t)}, resulting in 2^n potential outcomes per unit per period. This number is typically reduced to 2 by invoking SUTVA, which mandates that the potential outcome of a particular unit is unaffected by the treatment status of other units, i.e., that Y_it^{(Z_t)} = Y_it^{(Z_it)}. However, the presence of cross-border shopping violates this assumption in our setting, as stores would seemingly have different sales depending on the tax policy of nearby stores.
To address this concern, Aronow and Samii [2017] introduced a modified SUTVA that reduces the number of potential outcomes while still accounting for the presence of interference. Their framework assumes that the population treatment status can only affect the potential outcome of unit i at time t through the unit's own treatment status at t, Z_it, and some known scalar function, h_it : {0, 1}^n → R. This function, referred to as the exposure mapping, represents the exposure level received by a unit that is not directly through their own treatment status. Letting g_it(Z_t) = (Z_it, h_it(Z_t))^T represent the exposure status of unit i at time t and invoking this modified SUTVA, we can then write the potential outcomes for any possible treatment assignment vector, z_t, as:
Y_it^{(z_t)} = Y_it^{(g_it(z_t))}   (A1)
In our study, we assume that the sales of store i at time t only depend on the tax policy through the store's own tax status and the presence of a nearby store with a different tax status. Specifically, we assume that (A1) holds under:
h_it(Z_t) =
  1, if Z_it = 0 and unit i is adjacent to a taxed region;
  0, if Z_it = 0 and unit i is not adjacent to any taxed region;
  0, if Z_it = 1 and unit i is adjacent to an untaxed region;
  1, if Z_it = 1 and unit i is not adjacent to any untaxed region.   (2)
This exposure mapping reduces the number of potential outcomes per store from 2^n to 4 while still allowing for sweetened beverage sales in a given store to be affected by cross-border shopping to or away from neighboring regions. In the post-tax period of our study, we observe the control exposure status, g_i1(Z_1) = (0, 0), for Baltimore and other PA county stores not adjacent to Philadelphia (Non-Border); the treated exposure status, g_i1(Z_1) = (1, 0), for Philadelphia stores; and the neighboring control exposure status, g_i1(Z_1) = (0, 1), for other PA county stores adjacent to Philadelphia (Border).
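The exposure mapping in Eq. (2) is simple enough to state directly in code. The sketch below (function and argument names are our own) assigns each store its exposure status g_it(Z_t) = (Z_it, h_it(Z_t)) from its own tax status and its adjacency to differently-taxed regions:

```python
def h(z_own, adjacent_taxed, adjacent_untaxed):
    """Exposure mapping of Eq. (2): 1 when a differently-taxed region
    is within cross-border shopping reach, else 0."""
    if z_own == 0:
        return 1 if adjacent_taxed else 0
    return 0 if adjacent_untaxed else 1

def exposure_status(z_own, adjacent_taxed, adjacent_untaxed):
    """g_it(Z_t) = (Z_it, h_it(Z_t))."""
    return (z_own, h(z_own, adjacent_taxed, adjacent_untaxed))

# Post-tax exposure statuses observed in the study design:
assert exposure_status(1, False, True) == (1, 0)   # Philadelphia store
assert exposure_status(0, True, False) == (0, 1)   # Border PA store
assert exposure_status(0, False, False) == (0, 0)  # Non-Border PA / Baltimore
```

The fourth status, (1, 1), corresponds to a taxed store with no untaxed neighbor and is not observed for any unit in this study.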
Policy-Relevant Causal Estimands
Our goal here is to define causal estimands representing two policy-relevant questions. We first ask, what would be the average difference between sweetened beverage sales for Philadelphia stores in 2017 with and without the implemented PBT? This question corresponds to the ATT and can be defined in terms of potential outcomes as:
ATT := E[Y_1^{(1,0)} − Y_1^{(0,0)} | g(A) = (1, 0)] = E[Y_1^{(1,0)} − Y_1^{(0,0)} | A = 1]   (3)
where we drop the unit-specific subscript, i, for convenience and define g_i(A) analogously to g_it(Z_t). The second equality holds by noting that no units in our study have g_i(A) = (1, 1).
We additionally ask, what would be the average difference between sweetened beverage sales for Border stores in 2017 with and without the implemented PBT? This question corresponds to what we call the Average Treatment Effect on the Neighboring Control (ATN) and can be defined in terms of potential outcomes as:
ATN := E[Y_1^{(0,1)} − Y_1^{(0,0)} | g(A) = (0, 1)]   (4)
Identifiability under the DiD Framework
Since we do not observe the post-tax potential outcomes for unit i under all possible combinations of g i (A), we cannot directly identify the ATT and ATN without further assumptions. Under the DiD framework and our proposed exposure mapping, we require the aforementioned counterfactual parallel trends assumptions between both the treated and control exposure groups to identify (3):
E[Y_1^{(0,0)} − Y_0^{(0,0)} | A = 1, X] = E[Y_1^{(0,0)} − Y_0^{(0,0)} | g(A) = (0, 0), X]   (A2)
as well as between the neighboring control and control exposure groups to identify (4):
E[Y_1^{(0,0)} − Y_0^{(0,0)} | g(A) = (0, 1), X] = E[Y_1^{(0,0)} − Y_0^{(0,0)} | g(A) = (0, 0), X]   (A3)
whereby invoking the assumptions conditional on observed covariates, we allow for observed confounding often present in quasi-experimental settings. In our study, the counterfactual parallel trends assumptions would be violated if, for example, (i) wealthier populations are less likely to consume sweetened beverages in the post-tax period than the pre-tax period regardless of tax status, (ii) the distribution of wealth varies by region, and (iii) at least one of the following is true: (a) we do not observe this measure of wealth or (b) the distribution of wealth changes between tax periods. Otherwise, the assumptions would still hold.
In addition to (A1), (A2), and (A3), we also require the consistency and positivity assumptions, which respectively state that the observed outcome is equal to the potential outcome under the observed exposure status and that all units in the study have a non-zero probability of assignment to each of the exposures given the observed covariates.
DiD Estimators for the ATT and ATN
Here, we extend existing doubly robust estimators for the ATT under SUTVA in order to estimate both the ATT and ATN under our modified SUTVA with a binary exposure mapping, h_it. To do so, we denote R_i as a binary indicator representing assignment to the exposure group of interest for the particular estimand and Z^r_it = tR_i as an indicator representing the status of this relevant exposure at time t. Specifically, R_i = 1 refers to the treatment group in the ATT comparison and the neighboring control group in the ATN comparison, whereas R_i = 0 refers to the control group in both. We focus on methods for longitudinal panel data, corresponding to our data example. However, these methods can be readily adapted to the setting where pre- and post-tax data are collected on different populations, i.e. cross-sectional data (Sant'Anna and Zhao [2020]).
Two-Way Fixed Effects
We begin by reviewing the commonly applied TWFE approach, which posits the linear outcome model:
Y_it = β_0 + β^T X_i + α_1 t + α_2 R_i + τ_fe Z^r_it + ε_it   (7)
where ε_it can be correlated within unit i. In the setting with two treatment periods, two exposure groups, and a single observation per treatment period, the coefficient τ_fe has been shown to be equivalent to the classical difference-in-means DiD estimator (Bertrand et al. [2004]) and thus can estimate the ATT or ATN under the unconditional analogues of (A2) and (A3), where the expectations are not conditioned on X.
However, the linear additive outcome model is limited by strict parametric assumptions. Further, numerous recent works have cited issues with its implicit assumption of a homogeneous treatment effect (de Chaisemartin and D'Haultfoeuille [2022]). We provide a simple example of the bias induced by this approach in the presence of time-varying confounding and heterogeneous treatment effects in Appendix A.
Outcome Regression Estimators
Alternatively, one can attempt to impute the counterfactual outcome trends for the exposed group by modeling outcome dynamics under the control exposure. Here, we can apply the outcome regression (OR) estimator, first developed for the ATT in Heckman et al. [1997], to estimate the ATT and ATN in our setting. The estimator plugs an estimate for
µ_∆(X) = E[Y_1 − Y_0 | g(A) = (0, 0), X] into:

τ_or = E_nr[(Y_1 − Y_0) − µ̂_∆(X)]    (8)
where E nr denotes the empirical mean over the exposed group population. In contrast to (7),μ ∆ is generally estimated using only control group data to avoid specifying treatment effect dynamics and can be estimated with more flexible models. Still, this approach relies entirely on the correct specification of a model relating covariates to complex outcome dynamics.
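As a concrete illustration, the following sketch (synthetic data; a simple linear model for µ_∆ stands in for whatever flexible model one might use) fits the trend model on control units only and applies equation (8):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
X = rng.normal(size=n)
R = rng.binomial(1, 1 / (1 + np.exp(-X)))        # exposure depends on X
# Outcome trend also depends on X (time-varying confounding); true ATT = 3
dY = 1 + 2 * X + 3 * R + rng.normal(0, 0.1, n)   # dY = Y_1 - Y_0

# Fit mu_Delta(X) = E[Y_1 - Y_0 | control, X] on control units only
Wc = np.column_stack([np.ones((R == 0).sum()), X[R == 0]])
beta = np.linalg.lstsq(Wc, dY[R == 0], rcond=None)[0]
mu_hat = beta[0] + beta[1] * X                   # predicted trend for all units

tau_or = np.mean(dY[R == 1] - mu_hat[R == 1])    # equation (8)
```

Because the trend model is fit only on controls, no treatment effect dynamics need to be specified.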
Inverse Probability Weighting Estimators
Instead of modeling outcome dynamics, one can use a weighted estimator to balance confounding between the exposed and control groups. Here, we can apply the semi-parametric IPW estimator for the ATT by Abadie [2005] to estimate the ATT and ATN in our setting. The estimator relies on the propensity score, or probability of assignment to the exposure of interest, π r (X) = P (R = 1|X). In the case of panel data, the weights are calculated as:
w_i = (R_i − π_r(X_i)) / (P(R_i = 1)(1 − π_r(X_i)))    (9)
and are used to estimate the causal effect after estimating π r (X) as:
τ_ipw = E_n[ŵ(Y_1 − Y_0)]    (10)
where the empirical mean is now taken over the study population of both the exposed and control groups. While flexible, IPW approaches can be unstable in finite samples or cases of nonoverlap when the propensity score is close to one for certain units.
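A minimal numerical sketch of the weighting scheme on synthetic data; a small hand-rolled Newton solver stands in for any standard logistic regression routine:

```python
import numpy as np

def fit_logistic(W, y, iters=25):
    """Newton-Raphson logistic regression; W includes an intercept column."""
    b = np.zeros(W.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-W @ b))
        H = (W * (p * (1 - p))[:, None]).T @ W   # negative Hessian of log-lik.
        b += np.linalg.solve(H, W.T @ (y - p))
    return b

rng = np.random.default_rng(2)
n = 5000
X = rng.normal(size=n)
R = rng.binomial(1, 1 / (1 + np.exp(-0.5 * X)))
dY = 1 + 2 * X + 3 * R + rng.normal(0, 0.1, n)   # true ATT = 3

W = np.column_stack([np.ones(n), X])
pi_hat = 1 / (1 + np.exp(-W @ fit_logistic(W, R)))

w = (R - pi_hat) / (R.mean() * (1 - pi_hat))     # weights from equation (9)
tau_ipw = np.mean(w * dY)                        # equation (10)
```

Here P(R = 1) is estimated by the sample mean of R; note the weights blow up as π_r(X) approaches one, which is the instability mentioned above.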
Doubly Robust Estimators
Rather than choosing between the OR and IPW approaches, we propose applying the influence function (IF)-based DiD estimators developed for the ATT by Li and Li [2019] and Sant'Anna and Zhao [2020] to estimate both the ATT and ATN under our binary exposure mapping. In addition to doubly robust (DR) properties, these estimators are also asymptotically normal and approach the semi-parametric efficiency bound when both nuisance functions are well-specified. For a deeper technical discussion as well as proofs of the properties of these estimators, we point the reader to Sant'Anna and Zhao [2020].
The doubly robust (DR) plug-in estimator then incorporates estimates for both the propensity score and the outcome trend under no treatment to estimate the causal effect using:
τ_dr = E_n[ŵ((Y_1 − Y_0) − µ̂_∆(X))]    (11)
where w and µ ∆ (X) are defined as in the preceding estimators. As opposed to the TWFE approach, the IPW, OR, and DR approaches can easily incorporate non-parametric estimation of these nuisance functions using machine learning.
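The double robustness property can be seen directly in a toy example. Below, the outcome model is deliberately misspecified (intercept only) while the propensity score is correct, so the OR estimate is biased but the DR estimate in (11) remains on target. All data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
X = rng.normal(size=n)
pi = 1 / (1 + np.exp(-X))                        # true propensity score
R = rng.binomial(1, pi)
dY = 2 * X + 3 * R + rng.normal(0, 0.1, n)       # true ATT = 3

# Deliberately misspecified outcome model: intercept only, fit on controls
mu_hat = np.full(n, dY[R == 0].mean())
# Correctly specified propensity score (true values, for clarity)
w = (R - pi) / (R.mean() * (1 - pi))

tau_or = np.mean(dY[R == 1] - mu_hat[R == 1])    # biased: model ignores X
tau_dr = np.mean(w * (dY - mu_hat))              # equation (11): still consistent
```

The symmetric case (correct outcome model, wrong weights) also recovers the ATT, which is what "doubly robust" means.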
Extension to Multiple Observations Setting
Until now, we have presented methods for data with a single observation per treatment period. However, as noted earlier, our study comprises 13 different time points in both the pre-and post-tax periods. In this section, we present a simple yet robust approach for extending these methods to the multiple time point setting. The proposed approach turns out to be a special case of the general framework proposed by Callaway and Sant'Anna [2021] when there is no variation in treatment initiation time.
Let m = 1, ..., n_m index the observation times in the post-tax period. We then refer to the time dimension within a treatment period as m-time. Denoting m-time specific observations by adding an m-subscript to our previous notation, we can define m-time specific effects as:
ATT(m) := E[Y^(1,0)_{1,m} − Y^(0,0)_{1,m} | A = 1],  ATN(m) := E[Y^(0,1)_{1,m} − Y^(0,0)_{1,m} | g(A) = (0, 1)]    (12)
We can then average across these m-time specific effects to summarize the entire effect:
ATT = (1/n_m) Σ_{m=1}^{n_m} ATT(m),  ATN = (1/n_m) Σ_{m=1}^{n_m} ATN(m)    (13)
In our study, observations are observed at 4-week intervals and occur at the same calendar time in the pre-(2016) and post-tax (2017) periods. Thus, we consider data at each of these n m pairs as a two treatment period, single observation per period comparison and therefore require conditional counterfactual parallel trends assumptions at each m-time to identify (12), and thus (13), as causal effects. Finally, we specify our nuisance functions across m-time as
µ_∆,m = E[Y_{1,m} − Y_{0,m} | g(A) = (0, 0), X] and π_{r,m} = P(R_m = 1 | X).
Figure 1: Relative SGM sale trends of taxed beverages by region. Regional average of taxed beverage sales per store taken relative to average 2016 sales for supermarkets, grocery stores, and mass merchandisers.
Descriptive Analyses
We started by grouping stores by region (Philadelphia, Baltimore, Border, Non-Border) and type (SGM, pharmacy). We analyzed SGMs and pharmacies separately due to expected differences in sales dynamics that may be difficult to properly model (Roberto et al. [2019]). Pre-treatment covariates for our study are summarized in Table 1. Notably, regional-level covariates are similar between Philadelphia and Baltimore as well as between Border and Non-Border regions, but not between Philadelphia and Non-Border or Border and Baltimore regions. Therefore, we used Baltimore stores as the control group for Philadelphia stores in the ATT comparison and Non-Border stores as the control group for Border stores in the ATN comparison.
Regions affected by the PBT either directly or through bypass effects demonstrate a clear disruption in volume sales of sweetened beverages between the year before and after tax implementation relative to our control regions, as visualized in Figure 1. The sales in Philadelphia SGMs and pharmacies respectively decreased by 54.90% and 21.57% from 2016 to 2017, whereas neighboring SGMs and pharmacies increased by 44.75% and 21.60%. This comes in stark contrast to the relatively constant SGM beverage sales in Baltimore (0.96% decrease) and Non-Border stores (0.86% decrease) and slightly decreasing pharmacy beverage sales (10.01% Baltimore, 11.74% Non-Border).
Estimation of Treatment Effects
We estimated the ATT and ATN using both the standard TWFE approach, which requires the standard unconditional counterfactual parallel trends, and our proposed DR approach, which requires conditional counterfactual parallel trends. To estimate the ATT and ATN using the TWFE approach, we estimated time-specific treatment effects using the linear model in (7) for each m = 1, ..., 13. To estimate the ATT and ATN using our proposed doubly robust methods, we fit a different linear regression model for each µ ∆,m and logistic regression model for each π r,m , with terms for our observed covariates (Stuart et al. [2014]).
To estimate 95% confidence intervals (CIs) for our effect estimates, we implemented a stratified nonparametric bootstrap sampling approach (Efron and Tibshirani [1993]). For each of the four regions, we re-sampled with replacement from the empirical distribution of the regional subsample, where a store's entire observed data vector is re-sampled. Stratifying by region limits extreme bootstrap samples where certain regions may only have a few representative members, which is the case in our study. We then estimated the 2.5 and 97.5 percentiles among the 500 bootstrap replicates as our interval bounds. A brief discussion and comparison of possible CI approaches is presented in Appendix D.
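A sketch of the stratified bootstrap described above, with a hypothetical dataset and estimator (500 replicates, as in the study):

```python
import numpy as np

def stratified_bootstrap_ci(estimator, data, strata, n_boot=500, seed=0):
    """Percentile CI, re-sampling whole store records within each stratum.

    `data` is a dict of equal-length arrays; `strata` labels each unit's region.
    """
    rng = np.random.default_rng(seed)
    reps = []
    for _ in range(n_boot):
        # Re-sample with replacement separately within each region
        idx = np.concatenate([
            rng.choice(np.flatnonzero(strata == s), size=(strata == s).sum(),
                       replace=True)
            for s in np.unique(strata)
        ])
        reps.append(estimator({k: v[idx] for k, v in data.items()}))
    return np.percentile(reps, [2.5, 97.5])

# Hypothetical usage: mean post-pre sales difference across all stores
rng = np.random.default_rng(4)
strata = np.repeat(["Philadelphia", "Baltimore"], [60, 40])
data = {"dY": np.where(strata == "Philadelphia",
                       rng.normal(-2, 1, 100), rng.normal(0, 1, 100))}
est = lambda d: d["dY"].mean()
lo, hi = stratified_bootstrap_ci(est, data, strata)
```

Because each stratum is re-sampled to its own size, every bootstrap replicate keeps the regional sample sizes of the original data.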
To bolster the credibility of Assumption (A1) under (2), we removed Non-Border stores in PA zip codes within 6 miles of the Philadelphia border (138 stores). We made this decision after estimating a small but nonzero ATN on these stores using our DR methodology in a preliminary analysis (Table 4), suggesting this group may contain a mixture of stores with control and neighboring control exposures. Assumption (A1) further implies that the treatment has no causal effect before its implementation, often formalized as a no anticipation assumption, which would be violated in our study if Philadelphia residents stock-piled sweetened beverages in the months leading up to the tax. To evaluate this assumption, we used our DR methodology to estimate the ATT on SGMs and pharmacies between the first pre-tax observation time and the m-th pre-treatment observation time, AT T (pre) (m), for m = 12, 13 (Table 5). Our 95% CIs included the null effect except for the AT T (pre) (12) of pharmacies, which was statistically significant but negative, suggesting, if anything, that consumers were pre-emptively cross-border shopping rather than stock-piling.
Researchers commonly use tests on pre-treatment parallel trends to assess the plausibility of the counterfactual parallel trends assumption in DiD studies (Bilinski and Hatfield [2018]). In our setting these tests can be conducted robustly to assess Assumptions (A2) and (A3) by using our proposed methodology to estimate AT T (pre) (m) and the analogous AT N (pre) (m) for m = 2, . . . , 13. If the pre-treatment outcome trends are parallel between the exposure groups, these effects would be zero. As Bilinski and Hatfield [2018] note, practitioners should be wary of conducting these tests under the null hypothesis of parallel trends, as this would reward highly uncertain tests that fail to rule out large violations of parallel trends. Therefore, we report 95% CIs of such tests as evidence to reject violations of parallel trends outside the estimated bounds. These intervals, provided in Table 5, largely include 0. A visual example of how conditional parallel pre-trends may be more plausible than the unconditional analogue is given in Figure 2. Still, these tests are a limited proxy for the required counterfactual parallel trends and rely on the assumption that parallel trends within the pre-treatment period can be extrapolated to counterfactual trends between tax periods.
ATT and ATN estimates for sweetened beverage sales at SGMs and pharmacies aggregated by season and year appear in Table 2. In the year after tax initiation, SGMs see an average loss of 2.26 million oz. (95% CI: (1.54, 2.95)) in Philadelphia per 4-week period and gain of 1.20 million oz. (95% CI: (0.79, 1.70)) in neighboring stores. Pharmacy stores see an average loss of 22.9 thousand oz. (95% CI: (11.2, 32.5)) in Philadelphia per 4-week period and a 38.3 thousand oz. (95% CI: (23.6, 56.9)) gain in neighboring stores.
Interestingly, the ATN is larger in magnitude than the ATT for pharmacies. Since our analyses are at the store-level, we would need to consider the total number of stores in each region to assess the relative magnitudes at the regional level. Additionally, some Philadelphia consumers may respond to the PBT by switching their sweetened beverage purchases from Philadelphia SGMs to Border county pharmacies.
Our results show considerable effect heterogeneity between seasons for both the ATT and ATN, with effects strongest in warmer seasons. ATT effects are strongest in the Summer for SGMs and Pharmacies, with estimated decreases of 2.47 million oz. (95% CI: (1.64, 3.29)) and 34.8 thousand oz. (95% CI: (20.8, 46.1)) due to the tax, respectively. ATN effects are strongest for SGMs in the Summer (increase of 1.35 million oz., 95% CI: (0.86, 1.90)) but the largest effects are estimated for pharmacies in the Fall (increase of 42.0 thousand oz., 95% CI: (26.3, 60.9)). Relatedly, ATT effects are weakest in the Winter with decreases of 1.93 million oz. for SGMs (95% CI: (1.37, 2.52)) and 12.7 thousand oz. for pharmacies (95% CI: (0.7, 23.5)). This holds true for the ATN of SGMs (increase of 0.90 million oz., 95% CI: (0.57, 1.27)) and pharmacies (increase of 35.4 thousand oz., 95% CI: (20.5, 53.2)). Such heterogeneity matches our expectations, as the warmer temperatures and general increase of sweetened beverage sales in warmer months may further incentivize consumers to travel to bypass the PBT. However, the subdued tax effects in Winter may also be indicative of a gradual consumer response to the PBT.
The DR estimates closely resemble the estimates using the standard TWFE methodology for SGMs. This may indicate some plausibility of the unconditional parallel trends assumption between exposure groups for these stores but also reflects that our available covariates were not very informative of the outcome for this set of stores. For pharmacies, the DR methods produce higher magnitude effect estimates than their TWFE counterparts, which may suggest that confounding is masking some of the tax effect in standard analyses. For example, our outcome models for µ ∆,m associate lower percentages of White residents with higher declines in beverage sales at Baltimore pharmacies between 2016 and 2017. Since Philadelphia pharmacies are in zip codes with higher percentages of White residents, we would underestimate the post-tax counterfactual sweetened beverage sales of Philadelphia stores by not properly adjusting for race. Notably, by accounting for confounders the DR CIs for the Winter ATT on pharmacies do not include zero unlike the TWFE CIs.
Estimation of Effects by Geographical Proximity
While population-level effects are helpful, policies may affect subpopulations within each region differentially. Understanding who policies are affecting most is especially helpful for policy makers when deciding whether to continue policies, how to address disparities induced by policies, and how to implement policies in regions with different population compositions.
To help understand how policy effects may vary by geographic proximity to non-taxed regions, we first defined subgroups of Philadelphia zip codes according to their border status (PA-bordering, New Jersey (NJ)-bordering, and Non-Bordering), which are visualized in Figure 3 in yellow, purple, and orange, respectively. We then used our proposed DR methodology to estimate an annual relative sales effect, AT T rs (defined in Appendix E), for pharmacies in each of these subgroups. The sales ratio provides a comparable scale for subpopulations that may differ in magnitude of sweetened beverage sales, with more discussion presented in Appendix E. The subgroups contained 58, 26, and 56 pharmacies with estimated effects of -33% (95% CI: (-39%, -26%)), -10% (-24%, +5%), and -6% (-17%, +6%), respectively, suggesting that PA-bordering Philadelphia pharmacies experienced a larger decrease than those adjacent only to NJ or other Philadelphia zip codes, for which bypass may be less practical (e.g., requires a toll to enter). These findings complement previous studies which found reduced tax effects on Philadelphia residents further from the city border (Cawley et al. [2019]).
Figure 3: Clustering zip codes by geographic factors (border status for Philadelphia stores and proxies for available traffic and sales from proximal taxed stores for Border county stores) suggests heterogeneity in tax effects.
To further understand the influence of geographic heterogeneity on bypass effects, we also defined subgroups of Border county zip codes according to measures of proximal taxed population and the year-to-year (YtY) differences in taxed beverage sales of these populations. Specifically, we took the total population and YtY difference of each PA-bordering Philadelphia zip code and divided these measures by the number of Border county zip codes they were adjacent to, as proxies for the amount of traffic and sales "available" from a taxed zip code. For each Border county zip code, we then took the sum of these measures from all adjacent Philadelphia zip codes as a proxy for the amount of available traffic and sales from proximal taxed stores. We used K-means clustering to assign these zip codes to groups with low and high amounts of available traffic and sales. Using our proposed method, we estimated substantially lower effects in Border county zip codes with low available traffic and sales from taxed stores (27% increase, 95% CI: (18%, 36%)) than those with high available traffic and sales from taxed stores (56% increase, 95% CI: (30%, 85%)), although the CIs are quite wide in the latter group which is likely due to the small subgroup size (10 pharmacies) and the large heterogeneity of our proxy measures within this subgroup. While these exploratory analyses demonstrate potential geographic heterogeneity in policy effects, we cannot differentiate between effect heterogeneity due to geographical factors and heterogeneity due to other population dynamics associated with geography by estimating causal effects on different subgroup populations.
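The K-means step can be sketched as follows; this is a minimal two-cluster Lloyd's algorithm on synthetic proxy measures, not the study's actual clustering code:

```python
import numpy as np

def kmeans2(X, iters=50):
    """Minimal two-cluster Lloyd's algorithm on an (n, d) array."""
    # Deterministic init at the coordinate-sum extremes avoids empty clusters
    centers = X[[np.argmin(X.sum(1)), np.argmax(X.sum(1))]]
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(2)])
    return labels, centers

# Hypothetical proxy measures per Border county zip code: (available traffic,
# available YtY sales difference) from adjacent taxed Philadelphia zip codes
rng = np.random.default_rng(5)
proxies = np.vstack([rng.normal([1.0, 0.5], 0.2, (40, 2)),    # "low" group
                     rng.normal([4.0, 3.0], 0.5, (10, 2))])   # "high" group
labels, centers = kmeans2(proxies)
```

With two well-separated clouds like these, the algorithm recovers the low/high split used to define the subgroups.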
Simulation Studies
Design
We performed simulation studies motivated by the PBT study to evaluate the performance of different estimators under realistic scenarios. To generate our samples, we first simulate X^(orig)_i ∼ N(0, I_4) for each unit i = 1, . . . , n. We coerce one of the covariates to vary across m-time by setting X^(orig)_{i,4,m} ∼ N(X^(orig)_{i,4,m−1} + 0.25, 0.25) for m = 2, 3, . . . , n_m. Then, we apply the transformation from Kang and Schafer [2007] on X^(orig) to get X^(obs):
X^(obs)_{i,1} = (0.6 + X^(orig)_{i,1} X^(orig)_{i,3} / 25)^3,  X^(obs)_{i,2} = 10 + X^(orig)_{i,2} / (1 + exp(X^(orig)_{i,3})),
X^(obs)_{i,3} = exp(0.5 X^(orig)_{i,3}),  X^(obs)_{i,4,m} = (20 + X^(orig)_{i,2} · X^(orig)_{i,4,m})^2

In each simulation, our estimators use the observed covariates, X^(obs). However, the covariate sets used to generate the exposure, X^(a), and outcome, X^(µ), vary depending on the scenario. Units are split between the ATT and ATN comparisons according to a 43:57 ratio to mimic some of the imbalance seen in our dataset. Exposure within each comparison is then simulated according to binomial models with P(g_i(A) = (1, 0) | X_i) = expit(β^(T)′ X^(a)_i) and P(g_i(A) = (0, 1) | X_i) = expit(β^(N)′ X^(a)_i). For units with g_i(A) = (0, 1), we additionally simulate a variable representing distance to the Philadelphia border, D_i ∼ N(1/(1 + exp(X^(obs)_{i,2} + X^(obs)_{i,3} + X^(obs)_{i,4,1})), 0.05), and bound it between 0.1 and 0.9. This variable is used to induce treatment effect heterogeneity in the ATN.
Outcomes are generated with the linear model as:
Y_{itm} = 20 + α_i + α_{g_i(A)} + γ_m + γ_t + t{λ′_{tm} X^(µ)_i + τ^(ATT)_m ½(g_i(A) = (1, 0)) + τ^(ATN,∗)_m ½(g_i(A) = (0, 1))(1 − D_i)} + ǫ_{itm}.
Here, α_i correspond to unit-specific random intercepts, α_{g_i(A)} ∼ N(0, 1) is a fixed effect for the exposure group, γ_m and γ_t are fixed effects per observation time and treatment period, ½ is an indicator function, and ǫ_{itm} ∼ N(0, 0.5) are iid error terms. The ATT for the entire study period is given as the average of the specified parameters τ^(ATT)_m across m = 1, . . . , n_m. The population-level ATN varies according to D_i and is given by averaging τ^(ATN,∗)_m (1 − D_i) over m = 1, . . . , n_m. Notably, our framework allows for heterogeneous confounding (λ_tm) and treatment effects (τ_m) over m-time, as well as heterogeneous treatment effects over confounders, like distance in this setting. Parameter settings used for the simulations can be found in Appendix F.
Simulations were generated according to three different combinations of sample size (n) and number of observation times (n_m): (1) n = 250, n_m = 13, (2) n = 2000, n_m = 2, and (3) n = 500, n_m = 4. We consider four different scenarios depending on the covariate set used to specify the treatment (X^(a)) and outcome (X^(µ)) models: (a) X^(a) = X^(obs), X^(µ) = X^(obs); (b) X^(a) = X^(orig), X^(µ) = X^(obs); (c) X^(a) = X^(obs), X^(µ) = X^(orig); and (d) X^(a) = X^(orig), X^(µ) = X^(orig). Thus, our outcome model is correctly specified in (a) and (b) but misspecified in (c) and (d), whereas our propensity score model is correctly specified in (a) and (c) but misspecified in (b) and (d).
In each scenario, we generate 1000 replicates to examine the performance of the described TWFE, IPW, OR, and DR methods. To estimate µ ∆,m , we fit a linear regression model on the difference in outcomes, ∆Y im = Y i1m − Y i0m , for the control group using a separate model for m = 1, . . . , n m . To estimate π r,m , we fit a single time-invariant logistic regression model. CIs are estimated with the aforementioned stratified bootstrap approach.
Results
Simulation results are summarized in Table 3, with results from scenario 3 presented in Table 8. We evaluate each method according to the average bias and standard error of our point estimates as well as the coverage of our CIs.
For both the ATT and ATN, the estimates using the standard TWFE approach are highly biased for all scenarios as the method does not account for time-varying confounding. The estimates using the OR and IPW approaches are unbiased in the scenarios where the respective model is correctly specified, whereas those from the DR approach are unbiased in scenarios (a)-(c). All approaches are biased in scenario (d) when both models are incorrectly specified. While the IPW approach appears relatively unbiased for the ATT in scenario (b) when the propensity score model is misspecified, we caution that this is a product of the specific data generating mechanism and note that this chance behavior is not expected to hold in general, as seen in the ATN comparison. However, the slight bias of the DR approach in scenario (c) in smaller sample settings is something that has been noted in previous works (Li and Li [2019], Sant'Anna and Zhao [2020]), as the DR approach appears more dependent on the outcome model specification than that of the propensity score model.
The OR and DR approaches result in the smallest standard errors, with the OR approach slightly more efficient in these finite sample settings but significantly less robust to misspecification. The higher efficiency of the outcome model when correctly specified has been noted by Li and Li [2019] and Sant'Anna and Zhao [2020] as well. Notably, the IPW approach has relatively large standard errors even with a well-specified model, which is well-documented in the literature as a result of unstable weights in finite sample settings.
Bootstrap approaches for CIs work well in these simulations, roughly achieving the nominal coverage probability when models are correctly specified. The intervals are slightly inflated in small sample settings (Scenario 1) and for the ATN. The latter observation may result from unmodeled treatment effect heterogeneity leading to efficiency loss. Still, when the models are correctly specified, the CIs tend to be conservative.
Discussion
Bypass effects often occur when a policy imposes restrictions on individuals, and they can substantially offset the intended effects of the policy. Understanding these effects is critical for policy makers and evaluators. In this work, we propose a framework to estimate policy-relevant causal effects in the presence of such interference by joining together the ideas of exposure mappings and doubly robust DiD estimators. We applied our methods to estimate the causal effects of the PBT on Philadelphia and its bordering counties, accounting for both interference and confounding. Notably, we estimated more pronounced effects of the tax on pharmacies than methods used in previous studies. Additionally, we have used our methods to reveal new insights concerning effect heterogeneity according to season and geographic proximity.
It is important to note that we did not have access to sales data from NJ border stores, which may also see bypass effects, but perhaps to a lesser degree. Further, our estimates do not tell us what would have happened in a counterfactual scenario where a tax was implemented in both Philadelphia and its surrounding counties, a situation that may be quite relevant to policy makers. In future analyses it would be of interest to robustly account for residual spatial correlation between stores of the same region or auto-correlation between time-specific effect estimates. Finally, doubly robust methods that handle continuous and multi-dimensional exposure mappings instead of a known binary exposure mapping would be valuable for this study and many others. In addition to increasing efficiency, such methods would strengthen subpopulation analyses by allowing investigators to flexibly and efficiently understand how policy effects vary across space and/or other factors. In practice, these insights may shed light on why certain cities, such as Seattle, have seen less substantial bypass effects, which has been thought to be a product of geographical borders between neighboring counties (Powell and Leider [2020]). However, collecting and incorporating accurate and relevant geographical and transportation (e.g. access to a car or public transportation) data may still pose a challenge.
A Concerns with TWFE approach in the Presence of Heterogeneous Treatment Effects
Many works have cited issues with the TWFE approach in studies where different groups within the population receive treatment at different times, i.e. staggered adoption, and the treatment effects are heterogeneous across these groups. While the concern of staggered treatment adoption does not apply to our setting, since all stores are exposed to the tax at the same point in time, it is noteworthy that the straightforward extension of the TWFE model to account for time-varying confounding:
Y_it = β_0 + β′X_i + θ′(t·X_i) + α_1 t + α_2 R_i + τ_fe Z^r_it + ǫ_it    (14)
is not robust to treatment effects that are heterogeneous across X.
To demonstrate this limitation, we consider a simple simulation setting. We first generate X_i ∼ N(0, 1). Then, we simulate A_i ∼ Bernoulli(1/(1 + exp(−0.5 + 0.5X_i))). Finally, we simulate Y_it ∼ N(10 + t + 2A_i + 2X_i + tX_i + 4tθ_i A_i, 0.1).
In the setting with a homogeneous treatment effect, θ_i = 1 for all i. In the setting with a heterogeneous treatment effect, we set θ_i = 1 + ½{X_i ≥ 0.5}.
Then, the true ATT is 4 in settings with a homogeneous treatment effect but 4 + 4·P(X ≥ 0.5 | A = 1) in settings with heterogeneous treatment effects. We run 1000 simulations with n = 10000 and fit the linear model Y_it = β_0 + β_t t + β_a A_i + β_x X_i + β_xt (t·X_i) + τ(t·A_i) + ǫ_it, where the estimate for τ is the estimated ATT.
When the treatment effect is homogeneous, the extended TWFE model in (14) identifies the proper effect, with biases less than 0.1%. However, when the treatment effect is heterogeneous, the TWFE model in (14) identifies an effect with bias greater than 8%. The bias results from misspecified effect dynamics in the model, as both τ and β xt absorb some of the effect of θ i , rather than just τ . The OR and DR approaches avoid specifying a treatment effect in the model as they only model the outcome under the control exposure, making them more robust to this scenario.
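This comparison can be reproduced in a few lines. The sketch below assumes the heterogeneous effect enters the outcome as 4tθ_i A_i (i.e., only treated units receive it, which a homogeneous-case ATT of 4 requires), and uses the two-period shortcut that the t-interacted OLS coefficients of (14) equal those from regressing the outcome difference on [1, X, A]:

```python
import numpy as np

def fit_tau(X, A, dY):
    # With two periods, OLS on the long-format model (14) yields the same tau
    # as regressing the outcome difference dY = Y_i1 - Y_i0 on [1, X_i, A_i].
    W = np.column_stack([np.ones(len(X)), X, A])
    return np.linalg.lstsq(W, dY, rcond=None)[0][2]

rng = np.random.default_rng(6)
n = 10000
X = rng.normal(size=n)
A = rng.binomial(1, 1 / (1 + np.exp(-0.5 + 0.5 * X)))
noise = rng.normal(0, 0.1 * np.sqrt(2), n)   # difference of the two period errors

# Homogeneous effect (theta_i = 1): model (14) is correctly specified
tau_hom = fit_tau(X, A, 1 + X + 4 * A + noise)

# Heterogeneous effect: theta_i = 1 + 1{X_i >= 0.5}
theta = 1 + (X >= 0.5)
tau_het = fit_tau(X, A, 1 + X + 4 * theta * A + noise)
att_true = 4 * theta[A == 1].mean()          # sample ATT under this model
```

In the heterogeneous case the omitted t·½{X ≥ 0.5}·A term is partly absorbed by both τ and β_xt, producing the bias discussed above.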
B Testing for an Effect on Nearby PA Stores not Bordering Philadelphia

C Pre-Parallel Trends Testing
D Generating Confidence Intervals
In our real data analyses and simulations, we employ a bootstrap approach to generate CIs, which we find quite beneficial in our work. First, it is flexible to the data modeling approach and captures the uncertainty in our estimates due to nuisance function estimation. Second, the approach allows us to seamlessly estimate additional, complex estimands like the multiplicative estimand presented in Appendix E. Finally, by re-sampling a store's entire observed vector, bootstrapping automatically accounts for correlation between time-specific effect estimates when aggregating for seasonal or annual estimates.
Still, the approach may be unstable in studies with small samples and computationally demanding in others. Sant'Anna and Zhao [2020] provide parametric variance estimators for consistent CI estimation under strict model assumptions, as long as one of the nuisance functions is correctly specified. However, it is not straightforward to derive a formula for the variance in our setting with multiple time-specific effects without assuming that the time-specific effects are independent and normally distributed. Lastly, as the stratified bootstrap approach can improve stability in finite samples by limiting extreme samples, a Bayesian bootstrap approach may similarly help while avoiding manual definition of strata of interest.
A comparison of CI lengths from these three approaches when estimating annual effects is provided in Table 6. The parametric variance approach is notably tighter than either of the bootstrap approaches due to the additional assumptions on model form and effect independence over time, which may not hold in practice. Our proposed stratified bootstrap approach produces narrower CIs than the Bayesian bootstrap, except for the Pharmacy ATT. By stratifying on region, we ensure consistent sample sizes in each exposure group, which may explain the tighter confidence bounds compared to the potentially more variable exposure group sample sizes generated by the Bayesian bootstrap. However, the Bayesian bootstrap does outperform the standard bootstrap, which often fails due to generated samples with zero or close to zero exposure group sizes and thus is not shown.
E Estimation of a Relative Effect
In the real data analysis, we noted seasonal trends in the effect of the Philadelphia Beverage Tax. While these trends may be due to seasonal or temporal patterns in consumer behavior, they may also result from trends in volume beverage sales. For example, the tax may more consistently affect the percentage of sales rather than the raw volume of sales at a given store over time. As such, focusing solely on an additive effect may paint an incomplete picture of the effect of the tax policy. In such a case, it is of interest to estimate a relative sales effect, e.g.,
AT T_rs := E[Y^(1,0)_1 | g(A) = (1, 0)] / E[Y^(0,0)_1 | g(A) = (1, 0)]    (15)
and the respective analogue for AT N_rs. As these are relative effects, they may also be useful when comparing effects between regions, as in Section 4.3.
In order to identify such effects, we note that the numerator is observed and can be estimated with a sample mean, τ^dr_{rs,1} = E_n[R Y_1]. Noting that the difference of the numerator and denominator in (15) equals the additive treatment effect, the same IF estimator and identifying assumptions can be used to estimate the denominator term. Rearranging terms, we see:
τ^dr_{rs,0} = E_n[R Y_1 − w((Y_1 − Y_0) − µ_∆(X))]
is a doubly-robust estimate for the denominator. Our estimator for the multiplicative effect is then τ dr rs = τ dr rs,1 /τ dr rs,0 . Since this estimator relies on a division of the two components, the bootstrapping method becomes especially helpful to estimate confidence intervals.
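A numerical sketch of the ratio construction on synthetic data; the true nuisance functions are plugged in for clarity, and the numerator is normalized by the empirical exposure rate so that it estimates E[Y_1 | exposed]:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10000
X = rng.normal(size=n)
pi = 1 / (1 + np.exp(-X))                        # true propensity score
R = rng.binomial(1, pi)
Y0 = 10 + X + rng.normal(0, 0.1, n)
Y1 = Y0 + 1 + X + 2 * R + rng.normal(0, 0.1, n)  # additive exposure effect of 2

# Nuisance functions: true values used for clarity of the construction
mu_hat = 1 + X                                   # E[Y1 - Y0 | control, X]
w = (R - pi) / (R.mean() * (1 - pi))

num = np.mean(R * Y1) / R.mean()                 # observed E[Y1 | exposed]
att = np.mean(w * ((Y1 - Y0) - mu_hat))          # additive DR estimate
den = num - att                                  # DR counterfactual mean
tau_rs = num / den                               # relative sales effect
```

Because tau_rs is a ratio of two estimates, the bootstrap is a convenient way to propagate uncertainty through the division, as noted above.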
The estimates for the Philadelphia and Border county regions are summarized in Table 7. We estimate the annual AT T_rs for SGMs as 0.46 (95% CI: (0.37, 0.60)) and for pharmacies as 0.85 (95% CI: (0.78, 0.92)), corresponding to 54% and 15% reductions in sales respectively. We estimate the annual AT N_rs for SGMs as 1.44 (95% CI: (1.28, 1.59)) and for pharmacies as 1.39 (95% CI: (1.26, 1.54)), corresponding to 44% and 39% increases in sales respectively. The seasonal estimates display a similar pattern to those of the additive effect, which may suggest that the tax is influenced more by temporal patterns in consumer behavior than by temporal patterns in sweetened beverage sales.

F Simulation Parameter Settings

For all scenarios, we set β^(T) = (0.25, 0.4, −0.5, −0.35, 0.2), β^(N) = ..., and τ_m = 1 ∀m. Additionally, we set α_{g_i(A)} = 1 and 6 for the ATT control and exposed groups, and 0 and 2 for the ATN control and exposed groups.
For scenario 1, we set γ m = {(0.5, 0.7, 0.8, 1, 1.1, 1.2, 1.4, 1.5, 1.6, 1.4, 1.1, 0.9, 0.8)} and λ 0m = {(0.8, 0.85, 0.9, 0.95, 1, 1.05, 1.1, 1.15, 1.2, 1.15, 1, 0.95, 0.9)}. For scenario 2, we set γ m = {(0.5, 1)} and λ 0m = {(0.95, 1.05)}. For scenario 3, we set γ m = {(0.5, 1, 1.5, 1)} and λ 0m = {(0.85, 1, 1.1, 0.95)}. We set λ 1m = 2 * λ 0m for each scenario.
G Model Covariates
The covariates chosen for the various nuisance function models in the real data analysis are given here. For the outcome model for the ATT SGM study, we use the average house value in the zip code (house value), an indicator for mass merchandiser, and pre-tax sales of taxed beverages. For the propensity score model in this study, we use a pre-tax weighted price of taxed beverages (weighted price), the percent of the store's zip code identified as White (percent White), and the average income per household in the zip code (income). For the outcome model for the ATN SGM study, we use weighted price, an indicator for mass merchandiser, pre-tax sales, house value, percent White, and an interaction between the weighted price and mass merchandiser status. For the propensity score model in this study, we use weighted price, percent White, and house value. For the outcome model for the ATT Pharmacy study, we use percent White, income, and pre-tax sales. For the propensity score model in this study, we use weighted price, percent White, and income. In subgroup analyses of this study, we reduced the propensity score model covariate set to percent White and income to adjust for the smaller sample sizes. For the outcome model for the ATN Pharmacy study, we use weighted price, income, and pre-tax sales. For the propensity score model in this study, we use weighted price, percent White, house value, and an interaction between weighted price and house value. For the propensity score model in subgroup analyses of this study, we used weighted price, percent White, and house value for the group with low available traffic and sales from adjacent taxed zip codes, and percent Black and house value for the group with high measures.
H Additional Simulation Results
In Table 8, we provide simulation results for scenario (3), where n = 500 and n_m = 4. The conclusions are consistent with our other scenarios, with asymptotic properties stronger, as expected, than in scenario (1), where n = 250, but weaker than in scenario (2), where n = 2000.
… = E[Y_m | g(A) = (0, 0), X] and π_{r,m} = P(R_m = 1 | X).
Figure 2: An example of how conditioning on confounders can affect estimation. IPW-weighting makes pre-tax parallel trends more plausible between Border and Non-Border county pharmacies.
… tax observation time and the m-th pre-treatment observation time, ATT^(pre)(m) := E[… | A = 1], for m = 12, 13. Covariates X_i ∼ N(0, I_4) for each unit i = 1, …, n; we coerce one of the covariates to vary across m-time by setting X_… . Here α_{g_i(A)} ∼ N(0, 1) is a fixed effect for the exposure group, γ_m and γ_t are fixed effects per observation time and treatment period, 𝟙 is an indicator function, and ǫ_itm ∼ N(0, 0.5) are iid error terms. The ATT for the entire study period is given as the average of the specified parameters τ^(ATT)_m across m = 1, …, n_m. The population-level ATN varies according to D_i and is given by averaging τ_… . In all scenarios, we set β^(T) = (0.25, 0.4, −0.5, −0.35, 0.2), β^(N) = …
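The data-generating pieces described above can be sketched in a few lines. This is a simplified illustration under stated assumptions: the fixed effect α is drawn per unit here rather than per exposure group, no treatment term is added, and N(0, 0.5) is read as variance 0.5:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_m = 250, 13  # scenario-1 sizes from the text

# Baseline covariates X_i ~ N(0, I_4) for each unit.
X = rng.normal(size=(n, 4))

# Fixed effects and errors as described in the text; gamma_m uses the
# scenario-1 values listed in Appendix F.
alpha = rng.normal(size=n)  # ~ N(0, 1); per-unit here for simplicity
gamma_m = np.array([0.5, 0.7, 0.8, 1, 1.1, 1.2, 1.4, 1.5, 1.6, 1.4, 1.1, 0.9, 0.8])
# N(0, 0.5) interpreted as variance 0.5, so scale = sqrt(0.5).
eps = rng.normal(scale=np.sqrt(0.5), size=(n, n_m))

beta_T = np.array([0.25, 0.4, -0.5, -0.35, 0.2])  # treatment coefficients from the text

# Untreated potential outcomes: unit effect + time effect + noise.
Y = alpha[:, None] + gamma_m[None, :] + eps
```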
Table 1: Descriptive statistics for the Philadelphia beverage tax study. Mean (standard deviation) metrics are calculated from 2016 data across stores in a given subset. Price of sweetened beverages is first averaged across 2016 time periods per store.

|                       | Philadelphia  | Baltimore    | Border        | Non-Border   |
|-----------------------|---------------|--------------|---------------|--------------|
| SGMs                  | (n=40)        | (n=15)       | (n=19)        | (n=51)       |
| Price ($/oz)          | 5.85 (0.62)   | 5.56 (0.47)  | 5.41 (0.52)   | 5.76 (0.81)  |
| White (%)             | 54.9 (29.5)   | 42.2 (27.3)  | 81.8 (15.2)   | 89.2 (6.1)   |
| Black (%)             | 34.2 (31.3)   | 52.8 (30.1)  | 10.3 (15.5)   | 5.1 (3.8)    |
| Income ($1000)        | 43.8 (12.4)   | 51.2 (13.3)  | 79.9 (19.4)   | 81.3 (20.0)  |
| House Value ($1000)   | 173.3 (76.6)  | 174.3 (57.5) | 318.4 (86.5)  | 298.8 (91.8) |
| Mass Merchandiser (%) | 35.0          | 13.3         | 21.1          | 33.3         |
| Pharmacies            | (n=140)       | (n=45)       | (n=32)        | (n=78)       |
| Price ($/oz)          | 7.52 (0.58)   | 6.85 (0.59)  | 7.00 (0.58)   | 7.55 (0.49)  |
| White (%)             | 48.1 (28.9)   | 36.4 (24.7)  | 69.1 (24.9)   | 89.7 (5.5)   |
| Black (%)             | 40.1 (31.2)   | 59.3 (26.9)  | 23.5 (25.2)   | 4.9 (3.4)    |
| Income ($1000)        | 43.6 (16.2)   | 44.9 (15.5)  | 73.2 (21.3)   | 81.9 (18.5)  |
| House Value ($1000)   | 182.4 (103.3) | 174.9 (60.2) | 272.3 (104.9) | 308.8 (82.5) |
Table 2: Beverage tax effect estimates. Seasons are defined chronologically, with Winter as the first 3 observations of the calendar year, Spring as the next 3, Summer as the following 3, and Fall as the final 4. Cells show point estimates with confidence intervals.

|        | ATT TWFE             | ATT DR               | ATN TWFE          | ATN DR            |
|--------|----------------------|----------------------|-------------------|-------------------|
| SGMs (million oz.) | | | | |
| Winter | -1.90 (-2.44, -1.41) | -1.83 (-2.50, -1.14) | 0.88 (0.56, 1.26) | 0.87 (0.55, 1.23) |
| Spring | -2.41 (-3.10, -1.74) | -2.57 (-3.47, -1.86) | 1.14 (0.66, 1.71) | 1.13 (0.68, 1.66) |
| Summer | -2.64 (-3.47, -1.87) | -2.50 (-3.39, -1.71) | 1.29 (0.77, 1.89) | 1.29 (0.79, 1.89) |
| Fall   | -2.28 (-3.00, -1.60) | -2.30 (-3.11, -1.55) | 1.23 (0.72, 1.78) | 1.23 (0.78, 1.77) |
| Annual | -2.30 (-2.99, -1.66) | -2.30 (-2.98, -1.64) | 1.14 (0.68, 1.66) | 1.14 (0.72, 1.64) |
| Pharmacies (thousand oz.) | | | | |
| Winter | -5.7 (-16.1, 5.0)    | -14.1 (-26.8, 0.1)   | 29.3 (16.3, 46.4) | 35.4 (17.4, 55.5) |
| Spring | -24.6 (-36.0, -12.1) | -27.8 (-39.0, -15.0) | 34.3 (20.4, 51.9) | 40.8 (24.0, 61.2) |
| Summer | -34.1 (-46.5, -21.6) | -36.3 (-48.0, -24.5) | 32.5 (18.6, 51.6) | 40.5 (22.2, 66.2) |
| Fall   | -18.3 (-30.1, -6.4)  | -18.9 (-31.5, -5.5)  | 35.1 (21.2, 54.3) | 41.0 (23.5, 64.8) |
| Annual | -20.5 (-30.3, -10.3) | -23.9 (-34.4, -12.1) | 33.0 (19.8, 50.9) | 39.5 (23.1, 61.1) |
Table 3: Simulation results. Columns report percent bias, standard error, and coverage (%) for the TWFE, OR, IPW, and DR estimators.

ATT:

| Scenario | Bias% TWFE | Bias% OR | Bias% IPW | Bias% DR | SE TWFE | SE OR | SE IPW | SE DR | Cov% TWFE | Cov% OR | Cov% IPW | Cov% DR |
|----------|------------|----------|-----------|----------|---------|-------|--------|-------|-----------|---------|----------|---------|
| 1a | -12.784 | -0.069 | 0.929  | -0.103 | 0.564 | 0.045 | 0.389 | 0.047 | 91.9 | 94.1 | 98.5 | 95.0 |
| 1b | -39.051 | -0.067 | -2.251 | -0.096 | 0.571 | 0.045 | 0.49  | 0.047 | 72.0 | 95.3 | 96.5 | 96.3 |
| 1c | -7.286  | -9.348 | -0.225 | -2.754 | 0.597 | 0.386 | 0.492 | 0.472 | 93.2 | 93.5 | 97.7 | 95.3 |
| 1d | 22.566  | 28.382 | 35.917 | 32.824 | 0.624 | 0.397 | 0.538 | 0.453 | 88.0 | 72.8 | 73.3 | 70.3 |
| 2a | -10.887 | -0.002 | -0.003 | 0.032  | 0.226 | 0.038 | 0.209 | 0.040 | 83.4 | 94.7 | 95.3 | 94.3 |
| 2b | -45.221 | -0.062 | -1.576 | -0.059 | 0.224 | 0.037 | 0.148 | 0.038 | 1.7  | 94.7 | 91.0 | 95.0 |
| 2c | -5.278  | -8.913 | -0.391 | -0.745 | 0.208 | 0.138 | 0.199 | 0.187 | 90.6 | 75.2 | 94.6 | 93.7 |
| 2d | 19.201  | 29.893 | 35.902 | 33.978 | 0.206 | 0.134 | 0.171 | 0.141 | 35.8 | 0.6  | 1.1  | 0.0  |

ATN:

| Scenario | Bias% TWFE | Bias% OR | Bias% IPW | Bias% DR | SE TWFE | SE OR | SE IPW | SE DR | Cov% TWFE | Cov% OR | Cov% IPW | Cov% DR |
|----------|------------|----------|-----------|----------|---------|-------|--------|-------|-----------|---------|----------|---------|
| 1a | -58.815 | 0.078   | 0.005   | 0.186   | 0.502 | 0.037 | 0.347 | 0.042 | 75.2 | 100.0 | 98.6 | 100.0 |
| 1b | -13.883 | -0.166  | -24.357 | -0.082  | 0.497 | 0.039 | 0.801 | 0.043 | 93.2 | 99.7  | 98.2 | 99.7  |
| 1c | 45.954  | -28.936 | -0.928  | -4.004  | 0.519 | 0.359 | 0.561 | 0.526 | 84.8 | 88.6  | 97.0 | 93.7  |
| 1d | 10.542  | -53.447 | -63.872 | -49.881 | 0.539 | 0.355 | 0.750 | 0.405 | 93.8 | 64.4  | 85.2 | 72.8  |
| 2a | -49.862 | -0.051  | 0.064   | 0.013   | 0.194 | 0.031 | 0.128 | 0.033 | 27.2 | 98.2  | 95.7 | 98.0  |
| 2b | -26.739 | 0.026   | -28.104 | 0.018   | 0.193 | 0.032 | 0.420 | 0.035 | 71.7 | 97.0  | 66.8 | 97.5  |
| 2c | 52.672  | -28.980 | 0.132   | -0.578  | 0.174 | 0.127 | 0.139 | 0.193 | 14.7 | 37.1  | 94.6 | 93.2  |
| 2d | -5.510  | -51.219 | -58.066 | -42.967 | 0.175 | 0.120 | 0.233 | 0.200 | 84.8 | 0.4   | 2.7  | 11.8  |
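The percent-bias, standard-error, and coverage columns in Tables 3 and 8 can be computed from Monte Carlo replicates by a routine like the following. This is a generic sketch (percent bias assumes a nonzero true effect, and 95% normal intervals are assumed); the actual estimators are those described in the paper:

```python
import numpy as np

def summarize(estimates, ses, truth, z=1.96):
    """Summarize Monte Carlo replicates as (percent bias, empirical SE,
    coverage %) of z-based confidence intervals around each estimate."""
    estimates, ses = np.asarray(estimates, float), np.asarray(ses, float)
    pct_bias = 100 * (estimates.mean() - truth) / truth  # assumes truth != 0
    emp_se = estimates.std(ddof=1)
    covered = (estimates - z * ses <= truth) & (truth <= estimates + z * ses)
    return pct_bias, emp_se, 100 * covered.mean()
```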
Table 4: Effect estimates on Non-Border stores in zip codes within 6 miles of Philadelphia.

|        | SGM (million oz.)  | Pharmacy (thousand oz.) |
|--------|--------------------|-------------------------|
| Winter | 0.08 (-0.02, 0.18) | 7.5 (2.3, 12.2)         |
| Spring | 0.12 (0.01, 0.26)  | 2.0 (-2.6, 6.5)         |
| Summer | 0.20 (0.08, 0.33)  | 6.2 (1.8, 10.4)         |
| Fall   | 0.15 (0.06, 0.26)  | 8.8 (4.6, 13.1)         |
| Annual | 0.14 (0.05, 0.25)  | 6.3 (3.2, 9.5)          |
Table 5: Pre-trends testing. Period-specific confidence intervals for SGM (million oz.) and Pharmacy (thousand oz.) outcomes, each reporting ATT and ATN.
Table 6: A comparison of confidence interval lengths for annual effects. SGM units are million oz.; Pharmacy units are thousand oz.

|                      | SGM ATT | SGM ATN | Pharmacy ATT | Pharmacy ATN |
|----------------------|---------|---------|--------------|--------------|
| Stratified Bootstrap | 1.41    | 0.91    | 21.3         | 33.3         |
| Parametric Variance  | 0.41    | 0.28    | 7.0          | 11.4         |
| Bayesian Bootstrap   | 1.62    | 1.27    | 21.2         | 41.5         |
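The "Stratified Bootstrap" row refers to resampling units within strata. A minimal sketch of a percentile-interval version, assuming hypothetical stratum labels (e.g. store groups) and a user-supplied statistic:

```python
import numpy as np

def stratified_bootstrap_ci(data, strata, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile CI from a bootstrap that resamples observations within each
    stratum, preserving stratum sizes. A generic sketch, not the paper's
    exact procedure."""
    rng = np.random.default_rng(seed)
    data, strata = np.asarray(data), np.asarray(strata)
    groups = [np.flatnonzero(strata == g) for g in np.unique(strata)]
    reps = np.empty(n_boot)
    for b in range(n_boot):
        # Resample indices within each stratum, then pool.
        idx = np.concatenate([rng.choice(g, size=g.size, replace=True) for g in groups])
        reps[b] = stat(data[idx])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

The interval length reported in Table 6 would then be `hi - lo` for the annual effect statistic.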
Table 7: Relative effect estimates. SGM units are million oz.; Pharmacy units are thousand oz.

|        | SGM ATT           | SGM ATN           | Pharmacy ATT      | Pharmacy ATN      |
|--------|-------------------|-------------------|-------------------|-------------------|
| Winter | 0.54 (0.47, 0.64) | 1.35 (1.23, 1.47) | 0.91 (0.85, 1.00) | 1.36 (1.22, 1.50) |
| Spring | 0.42 (0.34, 0.56) | 1.42 (1.24, 1.58) | 0.82 (0.74, 0.89) | 1.36 (1.23, 1.49) |
| Summer | 0.43 (0.33, 0.58) | 1.46 (1.29, 1.63) | 0.78 (0.71, 0.86) | 1.37 (1.23, 1.52) |
| Fall   | 0.44 (0.34, 0.59) | 1.51 (1.33, 1.68) | 0.87 (0.78, 0.96) | 1.47 (1.31, 1.65) |
| Annual | 0.46 (0.37, 0.60) | 1.44 (1.28, 1.59) | 0.85 (0.78, 0.92) | 1.39 (1.26, 1.54) |
F Simulation parameter settings
Table 8: Simulation results, continued (scenario 3). Columns report percent bias, standard error, and coverage (%) for the TWFE, IPW, OR, and DR estimators.

ATT:

| Scenario | Bias% TWFE | Bias% IPW | Bias% OR | Bias% DR | SE TWFE | SE IPW | SE OR | SE DR | Cov% TWFE | Cov% IPW | Cov% OR | Cov% DR |
|----------|------------|-----------|----------|----------|---------|--------|-------|-------|-----------|----------|---------|---------|
| 3a | -11.976 | 0.128  | 0.713  | 0.139  | 0.423 | 0.057 | 0.353 | 0.059 | 90.5 | 93.1 | 97.1 | 93.8 |
| 3b | -43.696 | 0.152  | -1.364 | 0.187  | 0.436 | 0.058 | 0.33  | 0.059 | 45.4 | 92.5 | 94.8 | 94.0 |
| 3c | -5.75   | -9.288 | -0.239 | -2.410 | 0.394 | 0.279 | 0.620 | 0.451 | 93.7 | 88.7 | 95.0 | 92.8 |
| 3d | 18.497  | 27.495 | 34.590 | 32.256 | 0.400 | 0.253 | 0.352 | 0.288 | 83.2 | 42.4 | 42.9 | 34.1 |

ATN:

| Scenario | Bias% TWFE | Bias% IPW | Bias% OR | Bias% DR | SE TWFE | SE IPW | SE OR | SE DR | Cov% TWFE | Cov% IPW | Cov% OR | Cov% DR |
|----------|------------|-----------|----------|----------|---------|--------|-------|-------|-----------|----------|---------|---------|
| 3a | -51.2   | -0.080  | 0.005   | -0.166  | 0.357 | 0.048 | 0.229 | 0.051 | 72.2 | 98.7 | 96.7 | 98.4 |
| 3b | -23.631 | -0.040  | -24.953 | -0.021  | 0.369 | 0.044 | 0.590 | 0.048 | 89.4 | 99.0 | 94.5 | 98.9 |
| 3c | 50.125  | -30.159 | -0.983  | -7.144  | 0.353 | 0.247 | 0.249 | 0.304 | 69.9 | 77.8 | 95.9 | 93.1 |
| 3d | -1.775  | -51.697 | -59.399 | -46.790 | 0.352 | 0.243 | 0.426 | 0.330 | 94.6 | 41.8 | 49.0 | 55.6 |
Acknowledgements

This work was supported by NSF Grant 2149716 (PIs: Mitra and Lee).

Available Code

Code and an example simulated dataset are provided on GitHub at https://github.com/garyhettinger/DiD-interference.

References
Lisa M. Powell and Julien Leider. The impact of Seattle's Sweetened Beverage Tax on beverage prices and volume sold. Economics & Human Biology, 37:100856, 5 2020. ISSN 1570677X. doi:10.1016/j.ehb.2020.100856.

Pre-trends confidence intervals by period (Table 5 body; columns: SGM ATT, SGM ATN, Pharmacy ATT, Pharmacy ATN):

| Period | SGM ATT        | SGM ATN        | Pharmacy ATT  | Pharmacy ATN |
|--------|----------------|----------------|---------------|--------------|
| 2      | (-0.60, -0.11) | (-0.08, 0.37)  | (-9.0, 3.9)   | (-6.0, 2.9)  |
| 3      | (-0.34, 0.25)  | (-0.15, 0.35)  | (-18.7, 8.9)  | (-8.8, 11.4) |
| 4      | (-0.62, 0.08)  | (-0.16, 0.43)  | (-7.6, 10.9)  | (-7.0, 7.4)  |
| 5      | (-0.42, 0.25)  | (-0.15, 0.37)  | (-6.0, 7.0)   | (-3.4, 9.6)  |
| 6      | (-0.32, 0.41)  | (-0.35, 0.28)  | (-8.4, 11.1)  | (-2.7, 13.9) |
| 7      | (-0.23, 0.63)  | (-0.25, 0.30)  | (-8.0, 13.2)  | (-0.4, 14.6) |
| 8      | (-0.53, 0.29)  | (-0.28, 0.42)  | (-11.3, 13.3) | (0.7, 14.2)  |
| 9      | (-0.96, 0.07)  | (-0.22, 0.45)  | (-10.0, 7.6)  | (-1.9, 14.7) |
| 10     | (-0.81, 0.03)  | (-0.19, 0.38)  | (-9.8, 9.0)   | (0.5, 13.1)  |
| 11     | (-0.58, 0.07)  | (-0.23, 0.25)  | (-12.8, 2.9)  | (-9.4, 7.0)  |
| 12     | (-0.64, 0.28)  | (-0.24, 0.13)  | (-18.6, -0.3) | (-6.7, 5.6)  |
| 13     | (-0.49, 0.20)  | (-0.32, 0.20)  | (-29.8, 0.9)  | (-28.0, 6.6) |
Constraint Satisfaction Problems over semilattice block Mal'tsev algebras

Andrei A. Bulatov

10 Jan 2017
There are two well-known types of algorithms for solving CSPs: local propagation and generating a basis of the solution space. For several years the focus of CSP research has been on 'hybrid' algorithms that somehow combine the two approaches. In this paper we present a new method of such hybridization that allows us to solve certain CSPs that have been out of reach for quite a while. We demonstrate this method on a fairly restricted class of CSPs given by algebras we will call semilattice block Mal'tsev. An algebra A is called semilattice block Mal'tsev if it has a binary operation f, a ternary operation m, and a congruence σ such that the quotient A/σ with operation f is a semilattice, f is a projection on every block of σ, and every block of σ is a Mal'tsev algebra with Mal'tsev operation m. We show that the constraint satisfaction problem over a semilattice block Mal'tsev algebra is solvable in polynomial time.
Introduction
The study of the complexity of the Constraint Satisfaction Problem (CSP, for short) has been initiated by Schaefer [29]. Schaefer studied the complexity of CSP(Γ), the CSP parametrized by a set Γ of allowed constraints, a constraint language, over a certain set. More precisely he determined the complexity of CSP(Γ) for constraint languages on a 2-element set. The complexity of problems CSP(Γ) for constraint languages over finite sets has been attracting much attention since then. This research is guided by the Dichotomy Conjecture proposed by Feder and Vardi [17,18] that states that every CSP of the form CSP(Γ) for a constraint language Γ on a finite set is either solvable in polynomial time or is NP-complete. This conjecture has been restated and made more precise in different languages, see, e.g. [11,27]. Also, several powerful approaches to the problem have been developed, through algebra, logic, and graph theory. So far the most successful method of studying the complexity of the CSP has been the algebraic approach introduced by Jeavons et al. [10,11,13,21]. This approach relates the complexity of CSP(Γ) to the properties of a certain universal algebra A Γ associated with Γ. In particular it allows one to expand CSP(Γ) to the problem CSP(A Γ ) depending only on the associated algebra without changing its complexity. It therefore suffices to restrict ourselves to the study of the complexity of problems of the form CSP(A), where A is a finite universal algebra.
Although the dichotomy conjecture remains open in general, it has been confirmed in a number of cases: for constraint languages on 2- and 3-element sets [6,29] (a dichotomy result was also announced for languages over 4-, 5-, and 7-element sets [23,30,31]), for constraint languages containing all unary relations [1,7,8], and several others, see, e.g. [2,3,20]. One of the most remarkable phenomena discovered is that, generally, there are only two types of algorithms applicable to CSPs solvable in polynomial time. The first one has long been known to researchers in Artificial Intelligence as constraint propagation [16]. Algorithms of the other type resemble Gaussian elimination in the sense that they construct a small generating set of the set of all solutions [9,20]. The scope of both types of algorithms is precisely known [2,20].
General dichotomy results, however, cannot be proved using only algorithms of a single 'pure' type. In all such results, see, e.g. [1,6,7,8], a certain mix of the two types of algorithms is needed. In some cases, such as [6], the hybrid algorithm is somewhat ad hoc; in other cases [1,7,8] it is based on intricate decompositions of the problem instance. It is clear, however, that ad hoc hybridization and the decomposition techniques developed in the mentioned works are not sufficient. Therefore trying to identify new polynomial time solvable cases of the CSP by combining the two types of algorithms is the key to approaching the Dichotomy Conjecture. There have been several further attempts to design hybrid algorithms; however, most of them were not quite successful. In the more successful cases, such as [24,25,26,28], the researchers tackled somewhat limited settings, in which the combination of local consistency properties and Gaussian elimination type fragments is very explicit. To set the context for our results, we explain those cases in detail.
Suppose an idempotent algebra A has a congruence σ with the property that the CSP over its factor A/σ can be solved by, say, a local propagation algorithm, while for every σ-block B (a subalgebra of A) the CSP over B can be solved by the small generating set algorithm; or the other way round, see Figure 1. How can one solve the CSP over A itself? Maroti in [25] considered the second case, when the CSP over A/σ can be solved by the small generating set algorithm, say, A/σ is Mal'tsev. This case turns out to be easier because of a property of the σ-blocks we can exploit. Suppose for simplicity that every σ-block B is a semilattice, as shown in Figure 1. Then every CSP instance on B has some sort of a canonical solution that assigns the maximal element of the semilattice (that is, the element a ∈ B such that ab = a for all b ∈ B) to every variable. It can then be shown that if we find a solution ϕ : V → A/σ of the instance on A/σ, where V is the set of variables, and then assign to every variable v the maximal element of the σ-block ϕ(v), we obtain a solution of the original instance. Figure 1: (a) Algebra A such that A/σ is Mal'tsev; (b) an SBM algebra. Rectangles represent σ-blocks, dots represent elements, lines show the semilattice structure, and ⊕ represents a Mal'tsev operation acting on elements or σ-blocks.
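The canonical-solution idea, picking in each semilattice block the element that absorbs all others, can be illustrated for finite blocks. This is a toy sketch of the notion, not the paper's algorithm:

```python
def max_element(block, f):
    """Return the maximal element of a finite semilattice: the a with
    f(a, b) == a for every b in the block."""
    for a in block:
        if all(f(a, b) == a for b in block):
            return a
    raise ValueError("no absorbing (top) element found")

# Toy example: {0, 1, 2} under max is a semilattice whose top element is 2,
# so the canonical solution would assign 2 to every variable over this block.
top = max_element({0, 1, 2}, max)
```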
The case when A/σ is a semilattice, while every σ-block is Mal'tsev, is much more difficult. We will call such algebras semilattice block Mal'tsev algebras (SBM algebras, for short). More precisely, we consider idempotent algebras A with the following property: there are a binary operation f and a ternary operation m, and a congruence σ of A, such that A/σ is a semilattice with semilattice operation f, every σ-block B is a Mal'tsev algebra with Mal'tsev operation m, and f restricted to each block B is a projection. The main difficulty with this kind of algebra is that the only solution of a CSP over a semilattice we can reliably find is the canonical one assigning the maximal elements. Finding a second solution is already hard. On the other hand, if we restrict our instance to the maximal σ-block B, it may have no solution there, even though the original instance has a solution that simply does not belong to the maximal block. If this is the case, it has been unclear for nearly 10 years how the maximal block can be eliminated from consideration and the domain reduced.
The problem has been resolved in some special cases. Firstly, Maroti in [26] showed that it suffices to consider SBM algebras of a certain restricted type. We will use this result in this paper. Marcovic and McKenzie suggested an algorithm that solves the CSP over an SBM algebra A when A/σ is a chain, that is, ab ∈ {a, b} for any a, b ∈ A/σ. In this case their algorithm is capable of eliminating the maximal block, using the fact that if a semilattice is a chain, any of its subsets is a subalgebra. Finally, very recently Payne in [28] suggested an algorithm that works for a more general class of algebras than SBM, but algebras in this class have to satisfy an extra condition, which in SBM algebras manifests itself as the existence of certain well-behaved mappings between σ-blocks. In particular, this condition guarantees that the instance restricted to the maximal σ-block has a solution whenever the original problem has a solution.
In this paper we continue the effort started in [24,26,28] and present an algorithm that solves the CSP over an arbitrary SBM algebra.
Theorem 1 If A is an SBM algebra, then CSP(A) is solvable in polynomial time.
The algorithm is based upon a new local consistency notion that we call block minimality (although in our case it is necessarily not quite local, since it has to deal with Mal'tsev algebras). More specifically, our algorithm first separates the set V of variables of a CSP instance into overlapping subsets, so-called coherent sets, and considers subproblems on these sets of variables. For block minimality these subproblems have to be minimal, that is, every tuple from every constraint relation has to be a part of a solution. This can be achieved by solving the problem many times with additional constraints. However, this is not very straightforward, because coherent sets may contain all the variables from V. To overcome this we show that the subproblems restricted to coherent sets are either over a Mal'tsev domain, and therefore can be solved efficiently, or split up into a collection of disjoint instances, each of which has a strictly smaller domain. In the latter case we can recurse on these smaller instances. Finally, we prove that any block-minimal instance has a solution.
The results of this paper can easily be made more general by removing some of the restrictions on the basic functions of SBM algebras. However, we hope that these results can be generalized well beyond SBM-like algebras and so we stop short of giving more general but also more technically involved proofs just restricting ourselves to demonstrating the general idea.
In Section 2 we recall the basic definitions and study certain properties of SBM algebras. In Section 3 we strengthen the results of [4] about the structure of relations over Mal'tsev algebras and extend them to SBM algebras 1 . In Section 4 we extend these notions to CSP instances. Finally, in Section 5 we prove the main results and present a solution algorithm.
The necessary facts about universal algebra and the link between algebra and the CSP can be found in [10,11,13,15,19].
Preliminaries
Multisorted Constraint Satisfaction Problem
By [n] we denote the set {1, . . . , n}. Let A 1 , . . . , A n be finite sets. Tuples from A 1 × . . .× A n are denoted in boldface, say, a, and their entries by a [1], . . . , a[n]. A relation R over A 1 , . . . , A n is a subset of A 1 × · · · × A n . We refer to n as the arity of tuple a and relation R. Let I = (i 1 , . . . , i k ) be an (ordered) multiset, a subset of [n]. Then let pr I a = (a[i 1 ], . . . , a[i k ]) and pr I R = {pr I a | a ∈ R}. Relation R is said to be a subdirect product of A 1 , . . . , A n if pr i R = A i for i ∈ [n]. In some cases it will be convenient to consider tuples and relations whose entries are indexed by sets other than [n], most often those will be sets of variables. Then we either assume the index set is somehow ordered, or consider tuples as functions from the index set to the domain and relations as sets of such functions.
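These conventions are easy to make concrete. The following sketch (our illustration, not from the paper, and using 0-based indices instead of the paper's [n]) implements pr I for tuples and relations together with the subdirectness test:

```python
# Toy rendering of the notation above: tuples are Python tuples,
# relations are sets of tuples; indices are 0-based here.
def pr(I, a):
    """Projection pr_I a of a tuple onto the ordered index list I."""
    return tuple(a[i] for i in I)

def pr_rel(I, R):
    """Projection pr_I R of a relation."""
    return {pr(I, a) for a in R}

def is_subdirect(R, domains):
    """R is a subdirect product of the domains iff every unary projection is onto."""
    return all(pr_rel([i], R) == {(x,) for x in domains[i]}
               for i in range(len(domains)))

R = {(0, 0), (0, 1), (1, 0)}
print(pr_rel([0], R))
print(is_subdirect(R, [{0, 1}, {0, 1}]))  # True
```

Here the relation R is a subdirect product of {0, 1} × {0, 1} even though it is a proper subset of the full product.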
Let A be a set of sets; in this paper A is usually the set of universes of finite algebras derived from an SBM algebra; we clarify 'derived' later. An instance of a Constraint Satisfaction Problem (CSP) over A is given by P = (V, D, C), where V is a set of variables, D is a collection of domains D v , and C is a set of constraints; every constraint ⟨s, R⟩ is a pair consisting of an ordered multiset s = (v 1 , . . . , v k ) of elements of V , called the constraint scope, and a relation R over D v 1 , . . . , D v k , called the constraint relation.
Let A be a class of finite algebras of the same type and A the set of universes of algebras from A. Then CSP(A) is the class of instances (V, D, C) of CSPs over A such that every constraint relation R from ⟨s, R⟩ ∈ C, s = (v 1 , . . . , v k ), is a subalgebra of D v 1 × · · · × D v k , where D v , v ∈ V , are viewed as algebras from A. Let W ⊆ V . By P W we denote the instance (W, D W , C W ) defined as follows:
D W v = D v for each v ∈ W ;
for every constraint C = ⟨s, R⟩ ∈ C, the set C W includes the constraint C W = ⟨s ′ , R ′ ⟩, where s ′ = s ∩ W and R ′ = pr s ′ R. A solution of P W is called a partial solution of P on W . The set of all such solutions is denoted by S W . If W = {v} or W = {u, v}, we simplify notation to P v , S v and P uv , S uv , respectively.
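The restriction P W amounts to intersecting scopes with W and projecting the constraint relations. The encoding below is a sketch of ours (not the paper's notation); it drops constraints whose scope misses W entirely:

```python
def restrict(variables, domains, constraints, W):
    """Build the restricted instance P_W = (W, D_W, C_W)."""
    new_constraints = []
    for scope, R in constraints:
        # positions of scope variables that survive the restriction to W
        keep = [k for k, v in enumerate(scope) if v in W]
        if keep:
            new_scope = tuple(scope[k] for k in keep)
            new_R = {tuple(a[k] for k in keep) for a in R}
            new_constraints.append((new_scope, new_R))
    return ([v for v in variables if v in W],
            {v: domains[v] for v in W},
            new_constraints)

vs, ds, cs = restrict(['x', 'y', 'z'],
                      {'x': {0, 1}, 'y': {0, 1}, 'z': {0, 1}},
                      [(('x', 'y', 'z'), {(0, 0, 1), (1, 1, 0)})],
                      {'x', 'z'})
print(cs[0][0])  # ('x', 'z'): the ternary constraint became binary
```

A solution of the restricted instance is exactly a partial solution on W in the sense above.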
Instance P is called minimal if every tuple a ∈ R for any constraint ⟨s, R⟩ ∈ C can be extended to a solution of P; that is, there is ϕ ∈ S such that ϕ(v) = a[v] for v ∈ s. Instance P is called k-minimal if P W is minimal for all k-element W ⊆ V . For any fixed k every instance can be reduced to a k-minimal instance in polynomial time by a standard algorithm [12]: cycle over all k-element subsets W ⊆ V , solve the problem P W , and for every constraint ⟨s, R⟩ exclude from R all tuples inconsistent with S W . If P ∈ CSP(A) for some class of finite algebras A closed under subalgebras, the resulting problem also belongs to CSP(A). In particular, from now on we will assume that all the instances we deal with are 1-minimal. For such problems we can also tighten the instance, reducing the domains D v , v ∈ V , to the sets S v . Every constraint relation will therefore be assumed to be a subdirect product of the respective domains. If A consists of idempotent algebras, then any problem from CSP(A) can be reduced to a minimal one by solving polynomially many instances of CSP(A). First of all, constant relations R a = {(a)}, a ∈ A ∈ A, are subalgebras of A and therefore can be used in constraints. Then the algorithm proceeds as follows: cycle over all constraints C = ⟨s, R⟩ ∈ C and all a ∈ R; replace C with the collection of unary constraints ⟨(s[i]), R a[s[i]] ⟩, i ∈ [k]; solve the resulting instance P C,a ; remove a from R if P C,a has no solutions.
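The reduction to a minimal instance just described admits a direct rendering. In the sketch below (ours, for illustration), `solve` is an oracle for CSP(A); the exhaustive `brute_solve` stand-in is only there to make the example self-contained, not a claim about the actual polynomial-time algorithm:

```python
from itertools import product

def enforce_minimality(variables, domains, constraints, solve):
    """Remove every tuple that extends to no solution, as described above:
    pin the scope of a constraint to a tuple via constant unary constraints
    and ask the oracle whether the pinned instance is solvable."""
    changed = True
    while changed:
        changed = False
        for scope, R in constraints:
            for a in list(R):
                pins = [((v,), {(x,)}) for v, x in zip(scope, a)]
                if solve(variables, domains, constraints + pins) is None:
                    R.discard(a)      # a is not part of any solution
                    changed = True
    return constraints

def brute_solve(variables, domains, constraints):
    """Exhaustive stand-in for the polynomial-time oracle."""
    for vals in product(*(sorted(domains[v]) for v in variables)):
        phi = dict(zip(variables, vals))
        if all(tuple(phi[v] for v in s) in R for s, R in constraints):
            return phi
    return None

R = {(0, 1), (1, 0), (1, 1)}
constraints = [(('x', 'y'), R), (('x',), {(0,)})]
enforce_minimality(['x', 'y'], {'x': {0, 1}, 'y': {0, 1}}, constraints, brute_solve)
print(R)  # only (0, 1) survives
```

The unary constraint pinning x to 0 makes the tuples (1, 0) and (1, 1) inextendible, so they are removed and the resulting instance is minimal.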
Congruences and minimal sets
The set (lattice) of congruences of an algebra A will be denoted by Con(A). The smallest congruence of A, the equality relation, is denoted by 0 A , and the greatest congruence, the total relation, is denoted by 1 A . If a, b are related by a congruence α, we write a α ≡ b; the α-block containing a is denoted a α . Let R be a subdirect product of A 1 , . . . , A k , and α i ∈ Con(A i ), i ∈ [k]. Then by α R , or simply α if R is clear from the context, we denote the congruence α 1 × · · · × α k of R given by
a α ≡ b if and only if a[i] α i ≡ b[i] for all i ∈ [k]. Also, if I = {i 1 , . . . , i ℓ } ⊆ [k]
then by α I we denote the congruence α i 1 × · · · × α i ℓ of pr I R.
Let P = (V, S, C) be an instance and α v a congruence of S v . By P α we denote the instance (V, S α , C α ), in which S α v = S v / αv , and a constraint s, R ′ , s = (v 1 , . . . , v k ), belongs to C α if and only if a constraint s, R , where
R ′ = R/ α = {a α = (a[1] αv 1 , . . . , a[k] αv k ) : a ∈ R}, belongs to C.
A pair of congruences α, β ∈ Con(A) is said to be a prime quotient, denoted α ≺ β, if α < β and α < γ < β for no congruence γ ∈ Con(A). Then α ⪯ β means that α ≺ β or α = β. For an operation f on A we write f (β) ⊆ α if, for any a, b ∈ A with a β ≡ b, f (a) α ≡ f (b). As usual, by an idempotent unary polynomial we mean a polynomial f (x) such that f 2 = f or, equivalently, such that f (x) = x for any x from its range. An (α, β)-minimal set is a minimal (under inclusion) set U such that U = Im(f ) for a unary idempotent polynomial f of A satisfying f (β) ⊈ α.
Recall that algebra A is called Mal'tsev if it has a ternary term operation m(x, y, z) satisfying the equations m(x, y, y) = m(y, y, x) = x. Every algebra from the variety generated by a Mal'tsev algebra is congruence permutable, that is any two of its congruences α, β satisfy the condition α • β = β • α. In particular, the congruence lattice of a Mal'tsev algebra is modular.
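For instance (our example, not the paper's), x − y + z on Z 5 is the standard Mal'tsev term of an affine algebra, and the defining identities can be checked directly:

```python
# Verify the Mal'tsev identities m(x, y, y) = m(y, y, x) = x on Z_5
# for the affine operation m(x, y, z) = x - y + z mod 5.
def m(x, y, z, p=5):
    return (x - y + z) % p

assert all(m(x, y, y) == x and m(y, y, x) == x
           for x in range(5) for y in range(5))
print("Mal'tsev identities hold on Z_5")
```

Any group gives such a term via x · y⁻¹ · z, which is why congruence permutability holds in all groups, rings, and modules.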
Let R be a subdirect product of A 1 , . . . , A k . Similar to tuples from R, polynomials of R are also denoted in boldface, say, f . The polynomial f can be represented as f (x 1 , . . . , x k ) = g(x 1 , . . . , x k , a 1 , . . . , a ℓ ), where g is a term operation of R and a 1 , . . . , a ℓ ∈ R. Then the polynomial g(x 1 , . . . , x k , a 1 [i], . . . , a ℓ [i]) of A i is denoted by f i , and for I = {i 1 , . . . , i s } ⊆ [k], f I denotes the polynomial g(x 1 , . . . , x k , pr I a 1 , . . . , pr I a ℓ ) of pr I R. For any i, and any polynomial f of A i , there is a polynomial g of R such that g i = f . We shall call g an extension of f to a polynomial of R. Finally, for I ⊆ [k], and a ∈ ∏ i∈I A i and b ∈ ∏ i∈[k]−I A i , (a, b) denotes the tuple c such that c[i] = a[i] for i ∈ I and c[i] = b[i] if i ∈ [k] − I. To distinguish such concatenation of tuples from pairs of tuples, we will denote pairs of tuples by ⟨a, b⟩.
The two propositions below list the main basic properties of relations over Mal'tsev algebras, their congruences, and minimal sets.
Proposition 1 (Folklore) Let R be a subdirect product of Mal'tsev algebras A 1 , . . . , A k and I ⊆ [k]. Then the following properties hold.
(1) R is rectangular, that is, if a, b ∈ pr I R, c, d ∈ pr [k]−I R and (a, c), (a, d), (b, c) ∈ R, then (b, d) ∈ R.
(2) The relation ν I = {⟨a, b⟩ ∈ (pr I R) 2 | there is c ∈ pr [k]−I R such that (a, c), (b, c) ∈ R} is a congruence of pr I R.
(3) R is a disjoint union of sets of the form B × C where B is a ν I -block and C is a ν [k]−I -block.
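Property (1) is easy to test on small relations. The checker below (an illustration of ours, 0-based indices) verifies the rectangularity condition for a given split I of the coordinates:

```python
def is_rectangular(R, I):
    """Check Proposition 1(1): (a,c),(a,d),(b,c) in R imply (b,d) in R,
    where a,b are I-parts and c,d are parts over the remaining coordinates."""
    J = [j for j in range(len(next(iter(R)))) if j not in I]
    proj = lambda K, t: tuple(t[k] for k in K)
    pairs = {(proj(I, t), proj(J, t)) for t in R}
    return all((b, d) in pairs
               for (a, c) in pairs for (a2, d) in pairs if a == a2
               for (b, c2) in pairs if c2 == c)

# The graph of x + y mod 2 (an affine, hence Mal'tsev, relation) is rectangular:
print(is_rectangular({(x, y, (x + y) % 2) for x in range(2) for y in range(2)}, [0]))  # True
# A "broken rectangle" is not:
print(is_rectangular({(0, 0), (0, 1), (1, 0)}, [0]))  # False
```

The second relation fails because (0, 0), (0, 1), (1, 0) are present but (1, 1) is missing, which is exactly why it admits no Mal'tsev polymorphism.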
Proposition 2 ([19]) Let A be a finite algebra and α ≺ β for α, β ∈ Con(A).
(1) Any two (α, β)-minimal sets U, V are polynomially isomorphic; that is, there are unary idempotent polynomials f, g of A such that f (U ) = V , g(V ) = U , and f • g, g • f are identity mappings on V and U , respectively.
(2) If η ≺ θ is a prime factor projective to α ≺ β in Con(A), then every (α, β)-minimal set is also (η, θ)-minimal, and conversely, every (η, θ)-minimal set is also (α, β)-minimal.
(3) If A is Mal'tsev, then for any a, b ∈ A with ⟨a, b⟩ ∈ β − α, there is an (α, β)-minimal set containing both a and b.
Semilattice block Mal'tsev algebras
Since the fewer operations an algebra has, the richer the corresponding constraint language, we assume that the algebras we are dealing with have only two basic operations, just enough to guarantee the required properties. An algebra A is called a semilattice block Mal'tsev (SBM) algebra if it has two basic operations: a binary operation · that we will often omit, and a ternary operation m, satisfying the following conditions. There is a congruence σ A of A such that A/ σ A is term equivalent to a semilattice with semilattice operation ·, and every σ A -block B is a Mal'tsev algebra, where m is a Mal'tsev operation and · is the first projection. For elements a, b ∈ A such that ab = ba = b we write a ≤ b.
Lemma 1 Let A be an SBM algebra. By choosing a reduct of A we may assume that
(1) Operation · satisfies the equation x(xy) = xy; in particular, for any a, b ∈ A a ≤ ab.
(2) For any a, b, c ∈ A, m(a, b, c) σ A = (abc) σ A .
Proof: (1) Follows from [5,14].
(2) The operation m ′ (x, y, z) = m(x, y, z)xyz is Mal'tsev on every σ A -block and satisfies the condition of the lemma. ✷

Next we show several useful properties of SBM algebras. Let A be an SBM algebra and max(A) the maximal block of σ A , that is, the block such that max(A) · a ⊆ max(A) for all a ∈ A.
Lemma 2 (1) The equivalence relation θ A whose blocks are max(A) and all the remaining elements form singleton blocks, is a congruence.
(2) Let R be a subdirect product of SBM algebras A 1 , . . . , A n and the equivalence relation θ R is such that its blocks are R ∩ (max(A 1 ) × · · · × max(A n )), and all the remaining elements form singleton blocks. Then θ R is a congruence.
Proof: (1) It suffices to observe that for any a ∈ max(A) we have ax, xa, m(a, x, y), m(x, a, y), m(x, y, a) ∈ max(A) for any x, y, and therefore all polynomials of A preserve max(A).
(2) is similar to (1). ✷
Lemma 3 Every (α, β)-minimal set, for α ≺ β ≤ θ A , is a subset of max(A).

Proof: Let V be an (α, β)-minimal set and f a polynomial with f (A) = V and f (β) ⊈ α. We may assume f is idempotent. Since β ≤ θ A , V ∩ max(A) ≠ ∅. Take a ∈ max(A) and set g(x) = f (x)a. Then, as before, g(β) ⊈ α and g(A) ⊆ max(A). Finally, f (max(A)) ⊆ max(A), therefore f g(A) ⊆ V ∩ max(A) and f g(x) = x for x ∈ V ∩ max(A). As V is minimal, V = V ∩ max(A). ✷

Lemma 4 Let R be a subdirect product of A 1 , . . . , A n . The interval (0, θ R ) in Con(R) is modular.

Proof: Let 0 ′ i , θ ′ i denote the restrictions of 0 i , θ i to max(A i ), and let R ′ = R ∩ (max(A 1 ) × . . . × max(A n )) with added unary operations that are restrictions of unary polynomials of R. This definition is proper, as every polynomial of R preserves R ′ . Since all the non-trivial blocks of θ R are contained in R ′ , the interval (0, θ R ) in Con(R) is isomorphic to (0 ′ , θ ′ ). As R ′ is a Mal'tsev algebra, the claim follows. ✷
Maroti's reduction
In this section we describe a reduction introduced by Maroti in [26] that allows us to reduce CSPs over SBM algebras to CSPs over SBM algebras of a certain restricted type. More precisely, it allows us to assume that every domain A contains a minimal element a, that is, an element such that ab
= ba = b for all b ∈ A.
Moreover, as is easily seen, such an element is unique and forms a σ A -block, which is also the smallest element of the semilattice A/ σ A . Let f be an idempotent unary polynomial of an algebra A, where A is the universe of A. The retract f (A) of A is the algebra with universe f (A) whose basic operations are of the form f g, given by f g(x 1 , . . . , x n ) = f (g(x 1 , . . . , x n )) for x 1 , . . . , x n ∈ f (A), where g is a basic operation of A.
Lemma 5 A retract of an SBM algebra through an idempotent polynomial is an SBM algebra.
Proof: Let f be an idempotent polynomial. Let g 1 (x, y) = f (xy) and m 1 (x, y, z) = f (m(x, y, z)) be the basic operations of the retract, A 1 = f (A), and σ 1 the restriction of σ A to A 1 . Firstly, note that σ 1 is a congruence of A 1 and A 1 is an idempotent algebra. Since A/ σ A is term equivalent to a semilattice and any retract of a semilattice is a semilattice, so is A 1 / σ 1 . Finally,
m 1 (x, y, y) = f (m(x, y, y)) = f (x) = x and m 1 (y, y, x) = f (m(y, y, x)) = f (x) = x for any x, y ∈ A 1 with x σ 1 ≡ y.
✷

The results of [26] imply the following. Let A be a class of finite algebras of similar type closed under subalgebras, and retracts via idempotent unary polynomials. Suppose that A has a term operation f satisfying the following conditions for some B ∈ A:
(1) f (x, f (x, y)) = f (x, y) for any x, y ∈ B;
(2) for each a ∈ B the mapping x → f (a, x) is not surjective;
(3) the set C of a ∈ B such that x → f (x, a) is surjective generates a proper subalgebra of B. Then CSP(A) is polynomial time reducible to CSP(A − {B}).
As is easily seen, the operation · of the class of SBM algebras from A satisfies condition (1). If the operation a · x is surjective for some a, then a ≤ x for all x ∈ B. Therefore the only case when condition (2) is not satisfied is when B has a minimal element. Finally, condition (3) is satisfied whenever B is not a Mal'tsev algebra. Therefore, choosing B to be a maximal (in terms of cardinality) algebra from A satisfying conditions (1)-(3) we may only consider instances of CSP(A), in which every domain has a minimal element or is a Mal'tsev algebra.
Corollary 1 Every instance P ∈ CSP(A) can be reduced in polynomial time to polynomially many instances over algebras with minimal elements.
Throughout the rest of the paper A is a finite class of finite SBM algebras closed under taking subalgebras, homomorphic images, and retracts through unary idempotent polynomials.
Coherent sets
In this section we introduce and study a method of decomposition of subdirect products of SBM algebras. Throughout the section R ≤ A 1 × . . . × A n is a subdirect product of SBM algebras A 1 , . . . , A n from A, and α i ⪯ β i ≤ θ A i , unless otherwise stated.
Separation
We will say that an index i ∈ [n] can be separated from j ∈ [n] with respect to α, β if there exists a unary polynomial f of R such that f i (β i ) ⊈ α i and f j (β j ) ⊆ α j . If f satisfies this property we will also say that f separates i from j with respect to α, β.
Lemma 6
If i can be separated from j then there is a polynomial f that separates i from j and such that f (A ℓ ) ⊆ max(A ℓ ) for every ℓ ∈ [n].
Proof: Let g separate i from j. Choose a tuple a ∈ R ∩ (max(A 1 ) × . . . × max(A n )) and consider the polynomial f (x) = g(x) · a. As is easily seen, f (A ℓ ) ⊆ max(A ℓ ) for every ℓ. Since g j (β j ) ⊆ α j , we have f j (β j ) ⊆ α j . Finally, take a, b ∈ max(A i ) ∩ Im(g i ) that are not α i -equivalent, and a ′ , b ′ ∈ max(A i ) with a ′ β i ≡ b ′ such that g i (a ′ ) = a, g i (b ′ ) = b. Such elements exist, because g i (β i ) ⊈ α i and all the nontrivial β i -blocks are inside max(A i ). Then, as · is the first projection on the σ-block max(A i ), f i (a ′ ) = g i (a ′ )a[i] = aa[i] = a and f i (b ′ ) = g i (b ′ )a[i] = ba[i] = b, so f i (β i ) ⊈ α i and f separates i from j. ✷

From now on we assume that all polynomials separating coordinate positions satisfy the conditions of Lemma 6.
Lemma 7
If i can be separated from j then, for any (α i , β i )-minimal set U , there is an idempotent unary polynomial g such that g i (A i ) = U , and g separates i from j.
Proof: Let f separate i from j. Then f i (A i ) contains an (α i , β i )-minimal set V , and there is an idempotent polynomial h i with h i (A i ) = V . The polynomial h i can be extended to a polynomial h of R. Then f ′ = hf separates i from j and f ′ i (A i ) = V . There is an (α i , β i )-minimal set W with f ′ i (W ) = V and an idempotent polynomial h ′ i with h ′ i (V ) = W . As above the polynomial h ′ i can be extended to a polynomial h ′ of R. For a certain k, (h ′ f ′ ) k is idempotent, separates i from j, and (h ′ i f ′ i ) k (A i ) = W . Now the lemma follows easily from the fact that any two (α i , β i )-minimal sets are polynomially isomorphic. ✷
Lemma 8
If i can be separated from j and α j ≺ β j then j can be separated from i.
Proof: Let U 1 , . . . , U k be all the (α i , β i )-minimal sets. By Lemma 7, for every U ℓ , there is an idempotent unary polynomial g (ℓ) separating i from j and such that g (ℓ) i (A i ) = U ℓ . Take a β j -block B that contains more than one α j -block, a tuple a ∈ R such that a[j] ∈ B, and set a (ℓ) = g (ℓ) (a). By Lemmas 3 and 6, a (1) , . . . , a (k) ∈ R ∩ (max(A 1 ) × . . . × max(A n )), U 1 , . . . , U k ⊆ max(A i ), and B ⊆ max(A j ). The operation h (ℓ) (x) = m(x, g (ℓ) (x), a (ℓ) ) satisfies the following conditions:
• h (ℓ) i (x) = m(x, g (ℓ) i (x), a (ℓ) [i]) = m(x, x, a (ℓ) [i]) = a (ℓ) [i] for every x ∈ U ℓ ;
• h (ℓ) j (x) = m(x, g (ℓ) j (x), a (ℓ) [j]) α j ≡ m(x, a (ℓ) [j], a (ℓ) [j]) = x for every x ∈ B;
• h (ℓ) (R) ⊆ R ∩ (max(A 1 ) × . . . × max(A n )).
We are going to compose the polynomials h (ℓ) so that the composition collapses β i . To this end take a sequence ℓ 1 = 1, ℓ 2 , . . . such that U ℓ 2 is a subset of the range of h (1) = h (ℓ 1 ) i , and, for s > 2, U ℓ s is a subset of the range of h (s−1) = h (ℓ s−1 ) i · · · h (ℓ 1 ) i . Since |Im(h (s) )| < |Im(h (s−1) )|, there is r such that Im(h (r) ) contains no (α i , β i )-minimal sets. Therefore, setting h(x) = h (ℓ r ) (h (ℓ r−1 ) (. . . h (ℓ 1 ) (x) . . .)), we have that h i collapses all the (α i , β i )-minimal sets, and h j acts identically on B/ α j . Therefore, h separates j from i. ✷

Let J ⊆ [n] be the set of all i ∈ [n] with α i ≺ β i . The relation η α,β on J consisting of all pairs (i, j) such that i cannot be separated from j with respect to α, β is obviously reflexive and transitive, and is symmetric by Lemma 8. So, η α,β is an equivalence relation. We call its classes coherent sets with respect to α, β.
We show that coherent sets possess some additional useful properties.
Lemma 9
Let I ⊆ [n], and I 1 , . . . , I k the coherent sets of R with respect to α, β.
Then the coherent sets of pr I R with respect to α I , β I are I 1 ∩ I, . . . , I k ∩ I.
Lemma 9 follows from the observation that whether i can be separated from j or not depends entirely on the algebra pr i,j R.
Lemma 10 If I ⊆ [n] is a coherent set then there is an idempotent unary polynomial f of R satisfying the following condition:
( * ) f (R) ⊆ R ∩ (max(A 1 ) × . . . × max(A n )); f i (β i ) ⊈ α i if and only if i ∈ I; and f i (A i ) is an (α i , β i )-minimal set whenever i ∈ I.
Moreover, for any ℓ ∈ I, and any (α ℓ , β ℓ )-minimal set U ℓ , the polynomial f can be chosen so that f ℓ (A ℓ ) = U ℓ .
Proof: Take i ∈ I and an (α i , β i )-minimal set U . By Lemma 7, for every j ∉ I, there is an idempotent polynomial f (j) (x) such that f (j) separates i from j and f (j) i (A i ) = U . Composing all these polynomials we obtain an operation f such that f j (β j ) ⊆ α j for every j ∉ I, and f i (x) = x for x ∈ U . Since for every ℓ ∈ I the polynomial f does not separate i from ℓ, we have f ℓ (β ℓ ) ⊈ α ℓ .
Let now f be an operation separating each i ∈ I from each j ∉ I and such that the sum of |f i (A i )|, for i ∈ I, is minimal among the operations with this property.
Then, for every i ∈ I, f i (A i ) is an (α i , β i )-minimal set. Indeed, suppose that there is i ∈ I such that f i (A i ) is not a minimal set. Then f i (A i ) contains an (α i , β i )-minimal set V and there is an idempotent unary polynomial g i with the range V . As is easily seen, for an extension g of g i to a polynomial of R, the operation gf still separates every ℓ ∈ I from every j ∉ I, but the sum of |(gf ) ℓ (A ℓ )|, ℓ ∈ I, is less than that for f ; a contradiction.
Finally, arguing as in the proof of Lemma 7 we may derive an idempotent polynomial with the required properties. ✷
Separation in a single algebra
We say that a prime interval α ≺ β in Con(A) can be separated from a prime interval γ ≺ δ if there is a unary polynomial f of A such that f (β) ⊈ α, but f (δ) ⊆ γ.
We say that a congruence ξ ≥ α is e-minimal for the interval α ≺ β if for every α ≤ γ ≺ δ ≤ ξ, the interval γ ≺ δ cannot be separated from α ≺ β. Separation as just introduced is closely related to separation as defined in Section 3.1.
Lemma 11
Let Q be the binary equality relation on A. Prime interval α ≺ β ≤ θ A can be separated from γ ≺ δ ≤ θ A as intervals in Con(A) if and only if 1 can be separated from 2 in Q with respect to (α, γ) and (β, δ).
Proof: Note that for any polynomial f of Q its action on the first and second projections of Q is the same polynomial of A. Therefore α ≺ β can be separated from γ ≺ δ in Con(A) if and only if there is a unary polynomial f of A with f (β) ⊈ α while f (δ) ⊆ γ. This condition can be expressed as follows: there is a unary polynomial f of Q with f 1 (β) ⊈ α while f 1 (δ) ⊆ γ, which precisely means that 1 can be separated from 2 in Q. ✷
Corollary 2 Let A be any SBM algebra.
(1) If α ≺ β ≤ θ A can be separated from γ ≺ δ ≤ θ A , then γ ≺ δ can be separated from α ≺ β.
(2) If prime intervals α ≺ β and γ ≺ δ are projective, then they cannot be separated from each other.
(3) If γ ≺ δ ≤ θ A and α ≺ β ≤ θ A cannot be separated, then a set U is an (α, β)-minimal set if and only if it is a (γ, δ)-minimal set.
Proof: (1) follows from Lemmas 8 and 11.
(2) follows from Proposition 2(2), as (α, β)- and (γ, δ)-minimal sets are the same.
(3) Consider the binary equality relation Q on A. By Lemma 11, 1 cannot be separated from 2 with respect to (α, γ) and (β, δ). By Lemma 10, for any (α, β)-minimal set U there is a polynomial f of Q such that f 1 (A) = U and f 2 (A) is a (γ, δ)-minimal set. However, f 2 (A) = f 1 (A) = U ; the result follows. ✷
Lemma 12 (1) If γ is e-minimal for α ≺ β ≤ θ A , then γ ≤ θ A . (2) If γ 1 , γ 2 are e-minimal for α ≺ β ≤ θ A , then so is γ 1 ∨ γ 2 .
Proof: (1) Take a ∈ max(A) and consider the polynomial f (x) = x · a. This polynomial does not collapse β to α, but it collapses A onto max(A), and so it collapses every interval η ≺ ζ with η ≤ θ A and ζ ≰ θ A .
(2) By Lemma 4 the interval [0 A , θ A ] in Con(A) is modular. Therefore every prime interval in [α, γ], γ = γ 1 ∨ γ 2 , is projective to an interval in [α, γ 1 ] or to an interval in [α, γ 2 ]. Therefore, for every prime factor η ≺ ζ ≤ γ, every (η, ζ)-minimal set U is a minimal set of some prime factor δ ≺ π ≤ γ i , i = 1 or i = 2. Thus if some f collapses (δ, π), we have U ⊈ Im(f ) for any (η, ζ)-minimal set U . Therefore, f collapses (η, ζ). ✷

By Lemma 12(2) there is the greatest congruence of A e-minimal for α ≺ β; it will be denoted by emin(α, β).
Minimal sets and direct decompositions
For a set I ⊆ [n], let us denote the equality relations on R and pr I R by 0 and 0 I , respectively; and for any γ i ∈ Con(A i ), i ∈ [n], set γ = γ 1 × · · · × γ n and γ I = ∏ i∈I γ i . The following lemma shows a connection between minimal sets of R and its coherent sets.
Lemma 13
Let ξ i = emin(α i , β i ) if α i ≺ β i , and ξ i = α i otherwise. Let α ≤ γ ≺ δ ≤ ξ, and let f (R) be a (γ, δ)-minimal set for an idempotent polynomial f . Then f satisfies ( * ) for a certain coherent set with respect to α, β.
Proof: First we prove the claim of the lemma in a particular case. Let γ 1 , . . . , γ n and j ∈ [n] be such that α j ≺ β j , and let δ 1 , . . . , δ n be such that γ j ≺ δ j and γ i = δ i for i ≠ j. Then, by the rectangularity of R ′ = R ∩ (max(A 1 ) × . . . × max(A n )) (Proposition 1), either γ = δ or γ ≺ δ. Notice that in the latter case the rectangularity of R ′ implies that for any ⟨a, b⟩ ∈ δ j with a ≠ b there are a, b ∈ R with a[j] = a, b[j] = b, and such that ⟨a, b⟩ ∈ δ − γ.
CLAIM 1. For any γ 1 , . . . , γ n with α i ≤ γ i ≤ ξ i for i ∈ [n], any j ∈ [n] with α j ≺ β j , and any δ 1 , . . . , δ n with α i ≤ δ i ≤ ξ i such that γ j ≺ δ j and γ i = δ i for i ≠ j, every idempotent polynomial f of R such that f (R) is a (γ, δ)-minimal set satisfies ( * ) for the coherent set I containing j.
As is easily seen, f j (δ j ) ⊈ γ j ; therefore, there is a (γ j , δ j )-minimal set U ⊆ f j (A j ), which is also an (α j , β j )-minimal set. Take a polynomial g satisfying ( * ) for I, and such that g j (A j ) = U . Let k be such that f ′ = (f gf ) k is idempotent. Then f ′ j (U ) = U and f ′ ℓ (β ℓ ) ⊈ α ℓ if and only if ℓ ∈ I, that is, f ′ satisfies ( * ) for I, and f ′ (δ) ⊈ γ. Since Im(f ′ ) ⊆ Im(f ), and Im(f ) is a (γ, δ)-minimal set, we have Im(f ′ ) = Im(f ). Moreover, as Ker(f ) ⊆ Ker(f ′ ), we have f ′ = f , and the claim is proved.
Then we prove that, for any prime quotient α ≤ η ≺ θ ≤ ξ, there are γ = γ 1 × · · · × γ n , δ = δ 1 × · · · × δ n , and j ∈ [n] with α j ≺ β j such that α j ≤ γ j ≺ δ j ≤ ξ j and γ i = δ i for i ≠ j, and such that (η, θ) is projective to (γ, δ); therefore, f (R) is an (η, θ)-minimal set if and only if it is a (γ, δ)-minimal set. This, however, follows from the fact that among congruences of the form γ we can find a maximal chain from α to ξ. ✷
Corollary 3 (1) Let I ⊆ [n] be a coherent set, α I ≤ γ < δ ≤ ξ I , γ ′ = {⟨a, b⟩ ∈ ξ ′ | pr I a γ ≡ pr I b}, δ ′ = {⟨a, b⟩ ∈ ξ ′ | pr I a δ ≡ pr I b}, and γ ′ ≺ η ≤ δ ′ in Con(R). Then any idempotent polynomial f of R such that f (R) is a (γ ′ , η)-minimal set satisfies ( * ) for I.
(2) Let a ∈ max(R) be a tuple from a nontrivial ξ ′ -block and such that pr I a is in a nontrivial ξ ′ I -block. Then there is a polynomial f satisfying ( * ) for I and such that a ∈ f (R).
Proof: (1) By Lemma 13, f satisfies ( * ) for a certain coherent set I ′ . If I ′ ≠ I, then f i (ξ i ) ⊆ α i for any i ∈ I. Therefore f I (δ) ⊆ α I ⊆ γ, and f (δ ′ ) ⊆ γ ′ ; a contradiction.
(2) Let γ be the greatest congruence of pr I R such that α I ≤ γ ≤ ξ ′ I and the γ-block of pr I a contains only one α I -block, and let δ be such that α I ≤ γ ≺ δ ≤ ξ ′ I . Since pr I a lies in a nontrivial ξ ′ I -block, γ ≠ ξ ′ I , and by Proposition 2(3) pr I a belongs to a (γ, δ)-minimal set. Let γ ′ , δ ′ , η be congruences constructed as in part (1) of the corollary. Then there is an idempotent polynomial f satisfying ( * ) for I, and such that pr I a ∈ f I (pr I R).
Set h(x) = m(f (x), f (a), a). Since f i (ξ i ) ⊆ α i whenever i ∉ I, we have h i (ξ i ) ⊆ α i and h i (a[i]) = a[i] for such i. If i ∈ I and x ∈ f i (A i ) then f i (x) = x, and therefore h i (x) = m(f i (x), f i (a[i]), a[i]) = m(x, a[i], a[i]) = x. This means h i (ξ i ) ⊈ α i . Moreover, as f i (A i ) is an (α i , β i )-minimal set, h i (A i ) is also an (α i , β i )-minimal set. Thus, for a certain k, h k is idempotent and satisfies ( * ) for I. Finally, h i (a[i]) = m(f i (a[i]), f i (a[i]), a[i]) = a[i], so a ∈ h k (R); the corollary is proved. ✷
Lemma 14 Let B be a ξ ′ -block of R such that pr i B ⊆ max(A i ) for i ∈ [n]. Then
B/ α = pr I 1 B/ α I 1 × . . . × pr I k B/ α I k × pr K B,
where I 1 , . . . , I k are the coherent sets with respect to α, β and K is the set of the remaining coordinates.
Proof: Let I be one of the coherent sets, J = [n] − I, and set
ν = {⟨a, b⟩ ∈ (pr I R) 2 | there is c ∈ pr J R such that (a, c), (b, c) ∈ R},
ζ = {⟨a, b⟩ ∈ (pr I R) 2 | there are c, e ∈ pr J R such that (a, c), (b, e) ∈ R, ⟨c, e⟩ ∈ ξ J },
and ν ′ = ν ∩ θ pr I R , ζ ′ = ζ ∩ θ pr I R . Notice that if for a 1 , a 2 ∈ pr I B there is b ∈ pr J R with (a 1 , b), (a 2 , b) ∈ R, then the rectangularity of R ′ = R ∩ (max(A 1 ) × · · · × max(A n )) implies that, for any c ∈ pr J B such that (a 1 , c) ∈ R, the tuple (a 2 , c) also belongs to R. Analogously, for any a 1 , a 2 ∈ pr I B with a 1 ζ ≡ a 2 , there are b 1 , b 2 ∈ pr J B such that (a 1 , b 1 ), (a 2 , b 2 ) ∈ R. We are to prove that if (a 1 , b 1 ), (a 2 , b 2 ) ∈ B then (a 1 , b 2 ), (a 2 , b 1 ) ∈ R or, equivalently, the equality ν ′ ∩ ξ ′ I = ζ ′ ∩ ξ ′ I .
Denote the congruences appearing in this equality by γ and δ, respectively.
If γ ≠ δ then, for congruences γ ′ , δ ′ , η constructed as in Corollary 3(1), there is an idempotent unary polynomial f satisfying ( * ) for I and such that f (R) is a (γ ′ , η)-minimal set. However, this leads to a contradiction, because on the one hand f (η) ⊈ γ ′ , hence f I (δ) ⊈ γ, but on the other hand f I (ζ) ⊆ ν for any f satisfying ( * ) for I.
Repeating the same argument for each coherent set we get what is required. ✷
Splits and alignments
An element a ∈ A i is called α i β i -split if there is a β i -block B and b, c ∈ B such that ab and ac are not α i -equivalent. Note that no element from max(A i ) is α i β i -split, while the minimal element is α i β i -split. We say that i, j ∈ [n] are not αβ-aligned if there is a ∈ R such that a[i] is not α i β i -split and a[j] is α j β j -split, or the other way round.
Lemma 15
If i, j are not αβ-aligned then they are in different coherent sets with respect to α, β.
Proof: It suffices to consider the case n = 2, i = 1, j = 2. Let (a, b) ∈ R be such that a is α 1 β 1 -split, while b is not α 2 β 2 -split. Let also (c, d) ∈ R ′ = R ∩ (max(A 1 ) × max(A 2 )). Consider the operation f ((x 1 , x 2 )) = (a, b) · ((x 1 , x 2 ) · (c, d)). We claim that f 1 (β 1 ) ⊈ α 1 while f 2 (β 2 ) ⊆ α 2 , that is, f separates 1 from 2.
First, observe that all the values of the operation g((x 1 , x 2 )) = (x 1 , x 2 ) · (c, d) belong to R ′ , and g((x 1 , x 2 )) = (x 1 , x 2 ) for any (x 1 , x 2 ) ∈ R ′ . Then, for any nontrivial β 2 -block B 2 and any a ′ , b ′ ∈ B 2 we have b(a ′ d) α 2 ≡ b(b ′ d), as b is not α 2 β 2 -split. Thus f 2 (β 2 ) ⊆ α 2 .
On the other hand, since a is α 1 β 1 -split, there is a β 1 -block B 1 and a ′′ , b ′′ ∈ B 1 such that a(a ′′ c) = aa ′′ and a(b ′′ c) = ab ′′ are not α 1 -equivalent. Therefore f 1 (β 1 ) ⊈ α 1 . ✷
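The split condition lends itself to a direct check. The two-element semilattice below is our toy example (α the equality relation, β the full relation); it matches the remark that the minimal element is split while elements of max(A) are not:

```python
# Hedged toy illustration of the split definition; the operation table and
# the congruences alpha, beta (given as block partitions) are made up.
def is_split(a, op, alpha, beta):
    """a is split iff some beta-block contains b, c with op[a,b], op[a,c]
    lying in different alpha-blocks."""
    block_of = {x: i for i, blk in enumerate(alpha) for x in blk}
    for B in beta:
        classes = {block_of[op[a, x]] for x in B}
        if len(classes) > 1:
            return True
    return False

# Two-element semilattice {0, 1} with 0 < 1; alpha is equality, beta is total.
op = {(x, y): max(x, y) for x in (0, 1) for y in (0, 1)}
alpha = [[0], [1]]
beta = [[0, 1]]
print(is_split(0, op, alpha, beta))  # the minimal element 0 is split
print(is_split(1, op, alpha, beta))  # the top element 1 is not
```

Multiplying by the top element collapses the whole β-block, which is exactly why elements of max(A) can never witness a split.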
Collapsing polynomials
Let α, β ∈ Con(A) with α ≺ β ≤ θ A , and let α ′ be the smallest congruence such that, for some β ′ , α ′ ≺ β ′ and α ≺ β cannot be separated from each other. Since the interval [0 A , θ A ] in Con(A) is modular, such an α ′ exists: it is the intersection of all α ′′ ∈ [0 A , θ A ] such that α ′′ ≺ β ′′ cannot be separated from α ≺ β for some β ′′ . For γ ∈ Con(A), γ ≤ θ A , if α ′ ≤ γ, let ζ αβ (γ) denote the smallest congruence such that there is an irreducible chain ζ αβ (γ) = γ 0 ≺ γ 1 ≺ · · · ≺ γ k = γ and every prime interval γ i ≺ γ i+1 can be separated from α ≺ β. By ν αβ (γ) we denote the smallest congruence such that there is an irreducible chain ν αβ (γ) = γ 0 ≺ γ 1 ≺ · · · ≺ γ k = γ and every prime interval γ i ≺ γ i+1 cannot be separated from α ≺ β. Again, the modularity of [0 A , θ A ] implies that such smallest congruences exist.
A unary idempotent polynomial f is called (α, β)-collapsing with respect to a ∈ max(A) if the following conditions hold:
(C1) for any γ ∈ Con(A), α ′ ≤ γ ≤ θ A , it holds that f (a γ ) ⊆ a ν αβ (γ) ;
(C2) for any γ ∈ Con(A), γ ≤ θ A , and any x ∈ A with x γ ≡ a, f (x) ζ αβ (γ) ≡ x.
Lemma 16
For every SBM algebra A, any α, β ∈ Con(A) with α ≺ β ≤ θ A , and any a ∈ A from a β-block containing more than one α-block, there is an (α, β)-collapsing polynomial f with respect to a.
Proof: Let α ′ be the smallest congruence such that, for some β ′ , α ′ ≺ β ′ and α ≺ β cannot be separated from each other. First, we show that A has a polynomial f satisfying the following conditions.
(U1) f (β) ⊆ α;
(U2) f (γ) ⊆ ζ αβ (γ) for any γ ∈ Con(A), γ ≤ θ A .
For any prime interval γ ≺ δ that can be separated from α ≺ β, there is a unary polynomial f γδ such that f γδ (β) ⊆ α, but f γδ (δ) ⊈ γ. We may assume that all the f γδ are idempotent and have the same (α, β)-minimal set as image. As is easily seen, composing all such polynomials we obtain the result. Let us denote the resulting polynomial by f .
We may assume f is idempotent and U = Im(f ) is an (α, β)-minimal set such that a ∈ U . Set g U (x) = m(x, f (x), a); then g U (U ) = {a}. Take γ ∈ Con(A), γ ≤ θ A , and x ∈ a γ . Since f (γ) ⊆ ζ αβ (γ), we obtain f (x) ζ αβ (γ) ≡ a. Therefore g U (x) ζ αβ (γ) ≡ x.
Now let U 1 , . . . , U k be a list of all (α, β)-minimal sets, and g U i a function with |g U i (U i )| = 1 and g U i (x) ζ αβ (γ) ≡ x for all γ ∈ Con(A) and x ∈ a γ . Composing these polynomials as in the proof of Lemma 8 we obtain h(x) = g U ℓ 1 (. . . g U ℓ m (x) . . .) satisfying the following conditions: |h(U )| = 1 for any (α, β)-minimal set U containing a, and h(x) ζ αβ (γ) ≡ x for all γ ∈ Con(A) and x ∈ a γ . The first condition implies h(a γ ) ⊆ a ν αβ (γ) . ✷
Lemma 17
Let R ≤ A 1 × · · · × A n be a subdirect product of SBM algebras, α = α 1 × · · · × α n for α i ∈ Con(A i ), and β = β 1 × · · · × β n for α i ≺ β i . Let also I be a coherent set with respect to α, β and let a ∈ pr I R be such that a[i] belongs to a β i -block containing more than one α i -block for some i ∈ I. Then there is an idempotent unary polynomial g of R such that g j is (α j , β j )-collapsing with respect to a[j] for all j ∈ I.
Proof:
We repeat the proof of Lemma 16 for relations rather than individual algebras. Without loss of generality assume I = [n].
First, for every j and every γ, δ ∈ Con(A j ) with γ ≺ δ ≤ θ A j such that γ ≺ δ can be separated from α j ≺ β j , find a polynomial f (j) γδ of R such that f (j) j,γδ (δ) ⊆ γ and f (j) j,γδ (β j ) ⊆ α j . Note that, as [n] is a coherent set with respect to (α, β), f (j) ℓ,γδ (β ℓ ) ⊆ α ℓ for any ℓ ∈ I. By Lemma 10 we may assume that f (j) ℓ,γδ (A ℓ ) is a (α ℓ , β ℓ )-minimal set for all ℓ ∈ I. We can also assume that for some i 0 ∈ I the images of f (j) i 0 ,γδ are equal to the same (α i 0 , β i 0 )-minimal set U i 0 . As in the proof of Lemma 16, by composing all such polynomials we obtain an idempotent polynomial f satisfying the following conditions for every j ∈ I:
(U'1) f (β j ) ⊆ α j ; (U'2) f (γ) ⊆ ζ α ′ β ′ (γ) for any γ ∈ Con(A j ), γ ≤ θ A j .
Let η be a maximal congruence of R from [α, β] such that a η = a α and θ ∈ [α, β] such that η ≺ θ. Let U = Im(f ) be a (η, θ)-minimal set such that a ∈ U . Note that by Lemma 13 pr j U = f j (A j ) is an (α j , β j )-minimal set and a[j] ∈ pr j U for j ∈ I. Set g U = m(x, f (x), a); then g U (U ) = a. Take γ ∈ Con(A j ), γ ≤ θ A j ,
j ∈ I, and x ∈ a[j] γ . Since f j (γ) ⊆ ζ α j β j (γ), we obtain f j (x) ζ α j β j (γ) ≡ a.
Therefore g U j (x) ζ α j β j (γ) ≡ x. Now let U 1 , . . . , U k be a list of all (α j , β j )-minimal sets for j ∈ I, and g U ℓ a function with |g U ℓ j (U ℓ )| = 1 and g U ℓ j (x) ζ α ℓ β ℓ (γ) ≡ x for all γ ∈ Con(A ℓ ), ℓ ∈ I, and x ∈ a γ . Composing these polynomials we obtain a polynomial h such that |h j (U )| = 1 for any (α j , β j )-minimal set U containing a[j], j ∈ I, and h j (x) ζ α j β j (γ) ≡ x for all γ ∈ Con(A j ) and x ∈ a[j] γ . The first condition implies h j (a[j] γ ) ⊆ a[j] ν α j β j (γ) . We then complete the proof by again composing all such polynomials for all j ∈ I. ✷

A polynomial g constructed as in Lemma 17 will be called (α I , β I )-collapsing.
From relations to instances
Let P = (V, S, C) be a 3-minimal instance of CSP(A). A partition V 1 ∪ . . . ∪ V k = V of the set of variables is called a link partition if the following condition holds:
• For every v ∈ V there is a partition A v1 ∪ . . . ∪ A vkv = S v such that whenever v, w ∈ V i for some i, we have k v = k w , and there is a bijection ϕ vw : [k v ] → [k w ] such that for any (a, b) ∈ S vw and any j ∈ [k v ], a ∈ A vj if and only if b ∈ A wϕvw(j) (in other words, every A vi is a union of blocks of the link congruence with respect to S vw ).
Observe that, since P is 3-minimal, the mappings ϕ vw are consistent, that is, for any u, v, w from the same class V i it holds that ϕ uv • ϕ vw = ϕ uw . Without loss of generality we will assume that ϕ vw is the identity mapping whenever v, w belong to the same class of the partition.
Since coherent sets depend only on binary projections of a relation, coherent sets can be defined for 3-minimal instances as well. More precisely, let P = (V, S, C) and α v , β v ∈ Con(S v ), α v ≺ β v ≤ θ Sv ; we say that v is not separated from w, v, w ∈ V , with respect to α, β, if this is the case for S vw . Due to 3-minimality (we can consider ternary sets of solutions) this relation is transitive. It is also reflexive and symmetric. The equivalence classes will be called coherent sets of P with respect to α, β.
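Since "not separated" is reflexive, symmetric and, by 3-minimality, transitive, computing the coherent sets of an instance amounts to partitioning V into equivalence classes. A minimal sketch follows; the predicate `not_separated`, standing in for the actual algebraic separation test on the binary projections S vw , is a hypothetical placeholder:

```python
def coherent_sets(variables, not_separated):
    """Partition `variables` into classes of the equivalence relation
    given by the reflexive, symmetric, transitive predicate."""
    classes = []
    for v in variables:
        for cls in classes:
            # by transitivity, testing one representative suffices
            if not_separated(v, cls[0]):
                cls.append(v)
                break
        else:
            classes.append([v])
    return classes

# toy example: variables 0..5, "not separated" iff equal parity
parity = lambda v, w: v % 2 == w % 2
print(coherent_sets(range(6), parity))  # [[0, 2, 4], [1, 3, 5]]
```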
Lemma 18
Let P = (V, S, C) be an instance of CSP(A) and V 1 , . . . , V k the coherent sets of P with respect to α, β, where α = (α v ) v∈V , β = (β v ) v∈V , and α v ≺ β v ≤ θ v in Con(S v ), v ∈ V . If P β has a solution ϕ such that ϕ(v) ∈ max(S v / βv ) and for every i ∈ [k] the problem (P α ) V i has a solution ϕ i such that ϕ i (v) ∈ ϕ(v) for every v ∈ V i , then P α has a solution ψ, where ψ(v) = ϕ i (v) whenever v ∈ V i , i ∈ [k].
Proof: Replacing S v with the factor S v / αv we can assume α v = 0 v . Let ⟨s, R⟩ ∈ C and V ′ i = s ∩ V i . By Lemma 14,
(pr V ′ 1 R ∩ Π v∈V ′ 1 ϕ(v)) × · · · × (pr V ′ k R ∩ Π v∈V ′ k ϕ(v)) ⊆ R.
Therefore ψ(s) ∈ R, and ψ is a solution. ✷ In a similar way we define another partition of V based on αβ-alignment properties. Variables v, w ∈ V are αβ-aligned if they are αβ-aligned in S vw . As is easily seen, this property defines an equivalence relation; let V 1 , . . . , V ℓ be the classes of this relation. By Lemma 15 the αβ-alignment partition is coarser than the partition into coherent sets with respect to α, β.
Lemma 19
(1) If variables v, w ∈ V of an instance P = (V, S, C) are αβ-aligned and S v has a minimal element, then S w also has a minimal element. (2) If every domain of an instance P = (V, S, C) has a minimal element, the partition of V into aligned sets is a link partition.
Proof: For every v ∈ V let L v denote the set of α v β v -split elements of S v and let N v denote the set of α v β v -non-split elements. As we observed before Lemma 15 both sets are nonempty if S v has a minimal element, and L v = ∅ if S v is a Mal'tsev algebra.
(1) If S w is a Mal'tsev algebra then v, w cannot be αβ-aligned, since L w = ∅ while L v , N v ≠ ∅.
(2) For any v, w ∈ V i and any pair (a, b) ∈ S vw , a ∈ L v if and only if b ∈ L w . Therefore S vw is link-partitioned, as well as pr s∩V i R for any constraint C = ⟨s, R⟩ ∈ C. ✷
The algorithm
Let A v be a domain of P. Choose an irreducible chain of congruences 0 Av = α v1 ≺ α v2 ≺ · · · ≺ α vkv = θ Av in Con(A v ). Let k = max{k v | v ∈ V }. If k v < k we set α v(kv+1) = · · · = α vk = θ Av . We use the following notation. For i ∈ [k v ]:
• ξ ′ v (i) is the greatest j ∈ [k v ] such that j ≥ i and every prime interval in α vi ≺ α vi+1 ≺ · · · ≺ α vj cannot be separated from (α vi , α vi+1 ) if i < k v , and ξ ′ v (i) = i for i ≥ k v ;
• next v (i) is the minimal j ∈ [k v ] such that j > ξ ′ v (i) and (α vj , α vj+1 ) cannot be separated from (α vi , α vi+1 ), if such a j exists and i < k v ; next v (i) = k otherwise.
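To illustrate the bookkeeping, here is a small sketch of ξ ′ v and next v over a chain indexed 1, . . . , k v . The predicate separable(i, j), reporting whether the prime interval (α vj , α vj+1 ) can be separated from (α vi , α vi+1 ), is a toy stand-in, not the algebraic test of the paper:

```python
def xi_prime(i, k_v, separable):
    # greatest j >= i such that no prime interval (m, m+1) with
    # i <= m < j can be separated from (i, i+1); equals i for i >= k_v
    j = i
    while j < k_v and not separable(i, j):
        j += 1
    return j

def next_v(i, k_v, k, separable):
    # minimal j > xi_prime(i) with (j, j+1) not separated from (i, i+1),
    # and k if no such j exists
    for j in range(xi_prime(i, k_v, separable) + 1, k_v):
        if not separable(i, j):
            return j
    return k

# toy predicate: intervals at distance >= 2 along the chain are separable
toy = lambda i, j: abs(j - i) >= 2
print(xi_prime(1, 4, toy), next_v(1, 4, 5, toy))  # 3 5
```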
For β v ∈ {α v1 , . . . , α vk }, say β v = α viv , let β + v denote α viv +1 , and β + v = α vk if β v = α vk .

Lemma 20
Let β = (β v ) v∈V be such that β v ∈ {α v1 , . . . , α vk−1 }, say β v = α viv . Let also η v = α vnextv(iv) .
(1) the coherent sets with respect to β, β + , and η, η + are the same.
(2) for any such coherent set W the (β W , β + W )- and (η W , η + W )-collapsing polynomials are the same.
Proof: The lemma follows straightforwardly from the definitions. ✷ Let P = (V, A, C) be a 3-minimal instance such that each of its domains is either a Mal'tsev algebra with respect to m or has a minimal element. For every v ∈ V and i ∈ [k v − 1] let W vi be the set of w ∈ V such that, for some j ∈ [k w − 1], v cannot be separated from w in S vw with respect to (α vi , α wj ), (α vi+1 , α wj+1 ). For w ∈ W vi let ℓ wvi = j, where j is as above. Let β wvi = α wℓ wvi for w ∈ W vi and β wvi = α w1 for w ∈ V − W vi . As is easily seen, W vi is a coherent set of P with respect to β vi , β + vi and does not depend on the choice of j. Instance P is said to be block-minimal if for any v ∈ V and i ∈ [k v − 1] the instance P W vi is minimal.
Theorem 2 Every block-minimal instance P ∈ CSP(A) with nonempty constraint relations has a solution.
We start with an auxiliary lemma. As is easily seen, for any γ such that γ v ∈ {α v1 , . . . , α vk } for each v ∈ V , any coherent set W of P with respect to γ, γ + is a subset of W wj for any w ∈ W and j such that γ w = α wj .
Lemma 21
Let γ be such that γ v ∈ {α v1 , . . . , α vk } for each v ∈ V , let W be a coherent set of P with respect to γ, γ + , and let w, j be such that W ⊆ W wj . For v ∈ W , if γ v = α viv , let j v = ξ ′ v (i v ) and δ v = α vjv . Suppose ϕ is a solution of P W,δ such that, if next v (i v ) = k for some v ∈ W , then there is a (γ W , γ + W )-collapsing polynomial f of S ′ W = pr W S W wj with ϕ ∈ f (S ′ W )/ δ . Then there is a solution ψ to P W,γ W such that ψ ∈ ϕ.
Proof: If next v (i v ) ≠ k for all v ∈ W , then there is a (γ W , γ + W )-collapsing polynomial f such that f (x) δv ≡ x for all x ∈ max(S v ), that is, for all x ∈ ϕ(v) α vnextv (iv ) , because ζ α viv α viv +1 (θ Sv ) ≤ α vjv . If next v (i v ) = k for some v ∈ W , then we consider the (γ W , γ + W )-collapsing polynomial f given in the lemma. For the same reason this polynomial satisfies the condition f v (x) δv ≡ x for all x ∈ ϕ(v) α vnextv (iv ) .
For every v ∈ W , if we view ϕ(v) as a subset of S v / γv , then |f v (ϕ(v))| = 1, as f is (γ W , γ + W )-collapsing and ν α viv α viv +1 (δ v ) ≤ γ v . Consider a constraint C = ⟨s, R⟩ of P W,γ . There is a tuple a ∈ R such that a[v] ∈ ϕ(v) for v ∈ W ∩ s. Then pr s∩W f (a) ∈ pr s∩W R and the γ s∩W -block containing this tuple does not depend on the choice of a. Therefore f (ϕ) is a solution of P W,γ W . ✷ Now we are in a position to prove Theorem 2. Proof (of Theorem 2): We prove two claims by induction for any γ = (γ v ) v∈V such that γ v ∈ {α v1 , . . . , α vk } for v ∈ V :
(1) the instance P γ has a solution ϕ γ such that if γ ≤ δ then ϕ γ ∈ ϕ δ ;
(2) solution ϕ γ can be chosen such that for any coherent set W with respect to γ, γ + , there is a (γ W , γ + W )-collapsing polynomial f of pr W S W wj for w, j with W ⊆ W wj such that ϕ(W ) ∈ f (S W )/ γ W .
If γ v = α v1 = 0 Sv for every v then P γ = P and the result follows. The base case of the induction is given by γ v = α vk = θ Sv ; then the mapping ϕ(v) = max(S v ) is a solution, since S v / σ Sv is a semilattice.
Suppose γ is such that for any δ with γ v ≤ δ v for v ∈ V , where at least one inequality is strict, claims (1) and (2) are true. By the induction hypothesis there is a solution ψ of P γ + . Let W be a coherent set with respect to γ, γ + . For v ∈ W let γ v = α viv , δ v = α vξ ′ v (iv ) , and η v = α vnextv(iv) . Let W ′ ⊆ W be the set of those v for which next v (i v ) = k. Let also η v = γ v for v ∉ W .
As is easily seen, W ′ is a coherent set with respect to η, η + . Again by the induction hypothesis, ψ(W ′ )/ η W ′ is a solution of P W ′ ,η W ′ and belongs to the range of some (η W ′ , η + W ′ )-collapsing polynomial f ′ of pr W ′ S W wj . Note that f ′ is also (γ W ′ , γ + W ′ )-collapsing. Therefore, there is an extension of f ′ to a (γ W , γ + W )-collapsing polynomial f of pr W S W wj . Then
ψ(v) ∈ f ′ v (S v )/ ηv = f v (S v )/ ηv for v ∈ W ′ , and ψ(v) ∈ S v = f v (S v )/ ηv for v ∈ W − W ′ . Also, ζ γ v γ + v (η v ) ≤ δ v . Therefore, as f is (γ W , γ + W )-collapsing, f (ψ) δ W ≡ ψ. By Lemma 21 there is a solution ϕ of P W,γ W such that ϕ ∈ ψ(W ). Since, as in the proof of Lemma 21, ϕ is an image under the polynomial f , this also proves (2) for W . Now (1) follows by Lemma 18. ✷
To show that Theorem 2 gives rise to a polynomial-time algorithm for CSP(A) we need to show how block-minimality can be established. We prove that establishing block-minimality can be reduced to solving polynomially many smaller instances of CSP(A).
Proposition 3
Transforming an instance P = (V, S, C) ∈ CSP(A) to a block-minimal instance can be reduced to solving polynomially many instances P i = (V i , S i , C i ) ∈ CSP(A) such that V i ⊆ V and for all v ∈ V i either S i v is a Mal'tsev algebra, or |S i v | < |S v |. Since the cardinalities of algebras in A are bounded, this gives a polynomial-time algorithm for CSP(A).
Proof: Using the standard propagation algorithm and Maroti's reduction (Section 2.4) we may assume that P is 3-minimal and every S v is either Mal'tsev or has a minimal element. Let W vi be the coherent sets as in the definition of block-minimality. We need to show how to make the problems P W vi minimal. If every S w for w ∈ W vi is Mal'tsev, P W vi can be made minimal using the algorithm from [9]. If S w has a minimal element for some w ∈ W vi , then by Lemma 19 P W vi is link-partitioned, that is, it is a disjoint union of instances P 1 ∪ · · · ∪ P m , where P i = (W vi , S i , C i ) are such that S w = S w1 ∪ · · · ∪ S wm is a disjoint union. We then transform them to minimal instances separately.
If at any stage there is a tuple from a constraint relation that does not extend to a solution of a certain subinstance, we tighten the original problem P and start all over again.
✷
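Establishing block-minimality relies on repeatedly solving subinstances and tightening the original instance. The basic tightening loop, deleting tuples and domain values without support as in the standard propagation algorithm mentioned above, can be sketched as follows; this is a simplified 1-minimality pass on relations stored as sets of tuples, not the full 3-minimality or block-minimality procedure of the paper:

```python
def propagate(domains, constraints):
    """Delete tuples inconsistent with the current domains and shrink
    domains to values supported by every constraint, until a fixpoint."""
    changed = True
    while changed:
        changed = False
        for scope, rel in constraints:
            # drop tuples whose values no longer lie in the domains
            keep = {t for t in rel
                    if all(t[p] in domains[v] for p, v in enumerate(scope))}
            if keep != rel:
                rel.clear(); rel.update(keep); changed = True
            # shrink each domain to values supported by this constraint
            for p, v in enumerate(scope):
                support = {t[p] for t in rel}
                if domains[v] - support:
                    domains[v] &= support; changed = True
    return domains, constraints

doms = {'x': {0, 1}, 'y': {0, 1}}
cons = [(('x', 'y'), {(0, 0), (1, 0)})]
propagate(doms, cons)
print(doms)  # {'x': {0, 1}, 'y': {0}}
```

If at any point a domain or a relation becomes empty, the instance has no solution, matching the tightening-and-restart step in the proof above.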
. . . , I k are the coherent sets with respect to α, β, and K = {i ∈ [n] | α i = β i }. Proof: Set I = I 1 , J = [n] − I. We prove that B = B I / α I × B J / α J . By taking factors modulo α i we may assume α i = 0 i , i ∈ [n]. Consider two congruences on pr I R:
Kearnes and Szendrei in [22] developed a technique based on so-called critical relations that resembles in certain aspects what can be achieved through coherent sets. However, [22] only concerns congruence modular algebras, and so cannot be used for SBM algebras.
References
[1] Libor Barto. The dichotomy for conservative constraint satisfaction problems revisited. In LICS, pages 301-310, 2011.
[2] Libor Barto and Marcin Kozik. Constraint satisfaction problems solvable by local consistency methods. J. ACM, 61(1):3, 2014.
[3] Libor Barto, Marcin Kozik, and Todd Niven. The CSP dichotomy holds for digraphs with no sources and no sinks (a positive answer to a conjecture of Bang-Jensen and Hell). SIAM J. Comput., 38(5):1782-1802, 2009.
[4] Andrei A. Bulatov. Three-element Mal'tsev algebras. Acta Sci. Math. (Szeged), 71(3-4):469-500, 2002.
[5] Andrei A. Bulatov. A graph of a relational structure and constraint satisfaction problems. In LICS, pages 448-457, 2004.
[6] Andrei A. Bulatov. A dichotomy theorem for constraint satisfaction problems on a 3-element set. J. ACM, 53(1):66-120, 2006.
[7] Andrei A. Bulatov. Complexity of conservative constraint satisfaction problems. ACM Trans. Comput. Log., 12(4):24, 2011.
[8] Andrei A. Bulatov. Conservative constraint satisfaction re-revisited. CoRR, abs/1408.3690, 2014.
[9] Andrei A. Bulatov and Víctor Dalmau. A simple algorithm for Mal'tsev constraints. SIAM J. Comput., 36(1):16-27, 2006.
[10] Andrei A. Bulatov and Peter Jeavons. An algebraic approach to multi-sorted constraints. In CP, pages 183-198, 2003.
[11] Andrei A. Bulatov, Peter Jeavons, and Andrei A. Krokhin. Classifying the complexity of constraints using finite algebras. SIAM J. Comput., 34(3):720-742, 2005.
[12] Andrei A. Bulatov, Andrei A. Krokhin, and Benoit Larose. Dualities for constraint satisfaction problems. In Complexity of Constraints - An Overview of Current Research Themes [Result of a Dagstuhl Seminar], pages 93-124, 2008.
[13] Andrei A. Bulatov and Matthew Valeriote. Recent results on the algebraic approach to the CSP. In Complexity of Constraints - An Overview of Current Research Themes [Result of a Dagstuhl Seminar], pages 68-92, 2008.
[14] Andrei A. Bulatov. Graphs of finite algebras, edges, and connectivity. CoRR, abs/1601.07403, 2016.
[15] S. Burris and H.P. Sankappanavar. A Course in Universal Algebra, volume 78 of Graduate Texts in Mathematics. Springer-Verlag, New York-Berlin, 1981.
[16] Rina Dechter. Constraint Processing. Elsevier Morgan Kaufmann, 2003.
[17] T. Feder and M.Y. Vardi. Monotone monadic SNP and constraint satisfaction. In Proceedings of the 25th ACM Symposium on Theory of Computing (STOC), pages 612-622, 1993.
[18] T. Feder and M.Y. Vardi. The computational structure of monotone monadic SNP and constraint satisfaction: A study through Datalog and group theory. SIAM Journal on Computing, 28:57-104, 1998.
[19] D. Hobby and R.N. McKenzie. The Structure of Finite Algebras, volume 76 of Contemporary Mathematics. American Mathematical Society, Providence, R.I., 1988.
[20] Pawel M. Idziak, Petar Markovic, Ralph McKenzie, Matthew Valeriote, and Ross Willard. Tractability and learnability arising from algebras with few subpowers. SIAM J. Comput., 39(7):3023-3037, 2010.
[21] Peter Jeavons, David A. Cohen, and Marc Gyssens. Closure properties of constraints. J. ACM, 44(4):527-548, 1997.
[22] Keith A. Kearnes and Ágnes Szendrei. Clones of algebras with parallelogram terms. Internat. J. Algebra Comput., 22(1), 2012.
[23] Petar Markovic. The complexity of CSPs on a 4-element set. Personal communication, 2011.
[24] Petar Markovic and Ralph McKenzie. Block algorithm. Personal communication, 2009.
[25] Miklos Maroti. Malcev on top. Manuscript, available at http://www.math.u-szeged.hu/∼mmaroti/pdf/200x%20Maltsev%20on%20top.pdf, 2011.
[26] Miklos Maroti. Tree on top of Malcev. Manuscript, available at http://www.math.u-szeged.hu/∼
[27] Jaroslav Nesetril, Mark H. Siggers, and László Zádori. A combinatorial constraint satisfaction problem dichotomy classification conjecture. Eur. J. Comb., 31(1):280-296, 2010.
[28] Ian Payne. A CSP algorithm for some subvarieties of Maltsev products. A workshop on constraint satisfaction, structure theory and computation in algebra, Boulder, CO, 2016. (Oral communication.)
[29] T.J. Schaefer. The complexity of satisfiability problems. In Proceedings of the 10th ACM Symposium on Theory of Computing (STOC'78), pages 216-226, 1978.
[30] Dmitriy Zhuk. The proof of CSP dichotomy conjecture for 5-element domain. In Arbeitstagung Allgemeine Algebra AAA'91, Brno, Czech Republic, 2016.
[31] Dmitriy Zhuk. On CSP dichotomy conjecture. In Arbeitstagung Allgemeine Algebra AAA'92, Prague, Czech Republic, page 32, 2016.
A SHAPE-NEWTON METHOD FOR FREE-BOUNDARY PROBLEMS SUBJECT TO THE BERNOULLI BOUNDARY CONDITION
Yiyun Fan
School of Mathematical Sciences
University of Nottingham
John Billingham
School of Mathematical Sciences
University of Nottingham
Kristoffer Van Der Zee
School of Mathematical Sciences
University of Nottingham
We develop a shape-Newton method for solving generic free-boundary problems where one of the free-boundary conditions is governed by the Bernoulli equation. The Newton-like scheme is developed by employing shape derivatives in the weak forms, which allows us to update the position of the free surface and the potential on the free boundary by solving a boundary-value problem at each iteration. To validate the effectiveness of the approach, we apply the scheme to solve a problem involving the flow over a submerged triangular obstacle.

Introduction

Free-boundary problems have many applications in fluid mechanics, such as open-channel flow, fluid/solid interaction and hydrodynamics. The difficulty in solving such problems arises because the geometry of the domain must be determined together with the other variables of the problem. A simplified but important model problem is the Bernoulli free-boundary problem, which imposes (linear) Dirichlet and Neumann boundary conditions on the free boundary [1, 2]. This is not to be confused with the Bernoulli equation, the pressure boundary condition of irrotational fluid mechanics, which we also study in this paper. The nonlinearity of the Bernoulli equation poses an additional challenge to numerical algorithms. There are several computational approaches to this class of problems. The first is to solve the boundary-value problem with a single free-boundary condition for the field variables on a fixed approximate domain, and to update the free surface using the remaining free-boundary condition, which is not included in the boundary-value problem. These fixed-point type methods are called trial methods; they converge linearly and cannot always find a solution. Details can be found, for example, in [2, 3, 4]. The second approach is to formulate a shape optimization problem to improve the convergence rate.
This method constructs a boundary-value problem as the state problem with one free-boundary condition and forms a cost function from the remaining free-boundary condition. However, this approach requires gradient information and is tailored to a particular free-boundary problem. The formulation and application of shape optimization to free-boundary problems can be found in, e.g., [5, 6, 7, 8, 9, 10]. The third approach is to linearise the whole system and apply a Newton-type method. One linearisation method, called domain-map linearisation, transforms the free-boundary problem into an equivalent boundary-value problem on a fixed domain and then linearises the transformed problem with respect to the reference domain [11, 12]. An alternative way to linearise the free-boundary problem is to apply shape calculus on the current geometry [13, 14]. Kärkkäinen and Tiihonen used this technique to solve two different free-boundary problems: a particular Bernoulli free-boundary problem [15] and a stationary free-boundary problem [16]. The application to a more general Bernoulli free-boundary problem was investigated in [17] by considering the whole problem in one weak form and reformulating it with a curvature-dependent boundary condition, allowing analysis with a C 1 -continuous free boundary. The use of shape calculus together with a Newton-type method is called the shape-Newton method. In the current work, we extend the shape-Newton method to a more generic free-boundary problem by considering the Bernoulli boundary condition on the free surface. We also recall the method for the Bernoulli free-boundary problem, which has a Dirichlet boundary condition on the free boundary. In addition, we consider Robin boundary conditions
Figure 1: The domain Ω with the free boundary Γ F , the fixed boundaries Γ L , Γ R , Γ B , and the parametrization x = x 0 + θ(x 0 ) of Γ F with respect to the reference boundary Γ 0 .
on the fixed boundary. Following Kärkkäinen and Tiihonen, the problem is set up in terms of two weak forms: one derived from the boundary-value problem with the Neumann boundary condition over the current domain, and the other from the remaining free-boundary condition (the Dirichlet condition or the Bernoulli condition). The linearisation for the Dirichlet free-boundary problem is known. However, the linearisation for the Bernoulli equation has not been derived before: we obtain a surprisingly simple expression for the shape derivative, which involves the normal derivative of the velocity squared (|∇ϕ| 2 ); see Section 5.3. We present a numerical experiment involving open-channel flow over a submerged triangle [18]. The shape-Newton method converges superlinearly, and the results agree well with the exact solutions or the results in the reference paper.
The contents of this paper are arranged as follows. We first introduce the model problem, with either the Dirichlet boundary condition or the Bernoulli equation on the free boundary, in Section 2. In Section 3, we derive the weak form for both problems. We then introduce some basic concepts about shape derivatives in Section 4; the linearisation of the free-boundary problem by means of Hadamard shape derivatives follows in Section 5. In Section 6, we describe the Newton-like and coupled schemes. The numerical experiments are shown in Section 7, followed by conclusions in Section 8.
Free-boundary Problem with Bernoulli or Dirichlet free-boundary condition
We investigate the free-boundary problem with either the Bernoulli condition or the Dirichlet condition on the free boundary. The Bernoulli condition is commonly used when considering steady, incompressible, and inviscid flow, but it is nonlinear, making the boundary-value problem more challenging to solve. Hence the Bernoulli equation can be simplified to a Dirichlet boundary condition, so that the nonlinearity of the free-boundary condition is removed from the problem. To be more general, the boundary conditions on the fixed boundaries are Robin boundary conditions.
Free-boundary Problem With Bernoulli Condition
The free-boundary problem with a Bernoulli condition can be abstracted as seeking an unknown domain Ω ⊂ R N and a corresponding scalar function ϕ : Ω → R. The boundary ∂Ω contains a free boundary Γ F , a left boundary Γ L for input flow, a right boundary Γ R for output flow, and the bed Γ B . Figure 1 is an example of the domain and the parametrization of the free boundary Γ F , where the bed boundary can have any shape. The vertical displacement of the free boundary is denoted as η(x). The problem can be presented as
−∆ϕ = f in Ω, (1)
∂ n ϕ = 0 on Γ F , (2)
∂ n ϕ + ωϕ = g + ωh on ∂Ω \ Γ F , (3)
a |∇ϕ| 2 + bη + c = 0 on Γ F , (4)
where ∂ n (·) = n · ∇ (·) is the normal derivative with n being the unit normal vector to the boundary pointing out the domain. The condition (4) with real-valued constants a, b, and c represents the Bernoulli condition. We have Robin boundary conditions on ∂Ω \ Γ F where ω, g and h are the boundary data. Thus we can approximate either a Neumann or Dirichlet-type condition depending on the values of ω. The Neumann boundary condition, obtained when ω = 0, usually represents the kinematic condition, where the perpendicular fluid velocity is zero on the free or solid boundary.
On the other hand, choosing ω to be extremely large yields an approximate Dirichlet boundary condition ϕ = h. Furthermore, it is possible to impose mixed boundary conditions by choosing various values of ω on different parts of the boundaries (i.e. Γ L , Γ R and Γ B ). Sufficiently C 1 -smooth data f , g and h allow us to find a nontrivial solution pair (Γ F , ϕ).
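The ω → ∞ limit can be checked numerically. The following sketch is a hypothetical 1D analogue, not the problem of the paper: we solve −u′′ = 0 on [0, 1] with the Robin condition −u′(0) + ωu(0) = g + ωh at x = 0 (the outward normal points in the −x direction) and u(1) = 1, by finite differences, and observe u(0) approaching the Dirichlet value h = 0 as ω grows:

```python
import numpy as np

def solve_robin(omega, g=0.0, h=0.0, n=100):
    """FD solve of -u'' = 0 on [0,1] with Robin BC at x=0 and u(1)=1.
    Robin condition: -u'(0) + omega*u(0) = g + omega*h."""
    dx = 1.0 / n
    A = np.zeros((n + 1, n + 1)); b = np.zeros(n + 1)
    # Robin row, one-sided difference for -u'(0) = (u0 - u1)/dx
    A[0, 0] = 1.0 / dx + omega; A[0, 1] = -1.0 / dx; b[0] = g + omega * h
    for i in range(1, n):                 # interior: -u'' = 0
        A[i, i - 1] = A[i, i + 1] = -1.0; A[i, i] = 2.0
    A[n, n] = 1.0; b[n] = 1.0             # Dirichlet at x = 1
    return np.linalg.solve(A, b)

for omega in (1.0, 10.0, 1000.0):
    print(omega, solve_robin(omega)[0])   # u(0) -> h = 0 as omega grows
```

For this toy problem the exact solution is linear with u(0) = 1/(1 + ω), so the finite-difference answer is exact up to rounding.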
By introducing a vector field θ : Γ 0 → R N , the displacement of the free boundary with respect to the reference boundary Γ 0 can be defined as
Γ F := {x ∈ R N | x = x 0 + θ(x 0 ), ∀x 0 ∈ Γ 0 }, (5)
to parametrize the domain Ω and the free boundary Γ F , as shown in Figure 1. This allows us to write the problem (1)-(4) in terms of the pair (θ, ϕ). With η(x) in (4) denoting the y-component of θ, the problem (1)-(4) can alternatively be solved for the pair (η, ϕ) at fixed values of x.
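For a given velocity field, the Bernoulli condition (4) is an algebraic equation for the elevation η, which is the basis of the trial-method update mentioned in the introduction. A minimal sketch follows; the coefficients a = 1/2, b = g and the uniform-flow values are illustrative assumptions, not taken from the paper:

```python
def eta_from_bernoulli(speed_sq, a, b, c):
    # solve a*|grad phi|^2 + b*eta + c = 0 for eta (requires b != 0)
    return -(a * speed_sq + c) / b

# uniform stream of speed U: |grad phi|^2 = U^2 on the free boundary,
# and choosing c = -U^2/2 makes the free surface flat (eta = 0)
U = 2.0
print(eta_from_bernoulli(U**2, a=0.5, b=9.81, c=-0.5 * U**2))
```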
Free-boundary Problem with Dirichlet Boundary Condition
A more straightforward model problem is introduced by replacing the Bernoulli condition with the Dirichlet condition on the free boundary. The boundary-value problem is now linear and easier to solve. The abstract problem is
−∆ϕ = f in Ω, (6)
∂ n ϕ = 0 on Γ F , (7)
∂ n ϕ + ωϕ = g + ωh on ∂Ω \ Γ F , (8)
ϕ = h on Γ F , (9)
where g ≤ 0 for input flow and h is assumed to be sufficiently smooth on R N .
By choosing ω → ∞ for Dirichlet boundary conditions on ∂Ω \ Γ F , this problem is quite similar to a classical free-boundary problem for an ideal fluid, called the Bernoulli free-boundary problem [2].
The Weak Form
We will first derive the weak forms of both free-boundary problems, in order to apply shape-calculus techniques to linearise them. Let Γ D represent the part of ∂Ω \ Γ F with Dirichlet boundary conditions and x L the x-component of the left node of the free boundary Γ F . We introduce the test functions v ∈ V := {v ∈ C 1 (Ω) | v = 0 on Γ D } and w ∈ W := {w ∈ C 1 (Γ) | w = 0 at x L }. If no Dirichlet boundary conditions are given on any part of the boundary ∂Ω \ Γ F , then the test function v satisfies v ∈ V := C 1 (Ω).
Since the only difference between the two free-boundary problems in Section 2 is the Bernoulli condition versus the Dirichlet condition on the free boundary, the first weak form in the domain Ω is the same in both situations. It is obtained by multiplying the Poisson equation ((1) or (6)) by the test function v, integrating over Ω, and applying Green's formula with the Robin boundary conditions on ∂Ω \ Γ F and the Neumann boundary condition on Γ F , yielding
R 1 ((θ, ϕ); v) = 0, ∀v ∈ V, (10)
where the semilinear form R 1 ((θ, ϕ) ; v) is defined as
R 1 ((θ, ϕ); v) = − ∫ Ω ∆ϕ v dΩ − ∫ Ω f v dΩ
= ∫ Ω ∇ϕ · ∇v dΩ − ∫ ∂Ω\Γ F g v ds − ∫ Ω f v dΩ. (11)
However, when ω → ∞, which yields the Dirichlet boundary condition on the fixed boundary, we replace the weak form on the fixed boundary with the strong form ϕ = h to enforce the boundary condition.
The second weak form, which differs between the two problems, is posed on the free boundary; it is derived by multiplying the free-boundary condition by the test function w and integrating over Γ F ,
R 2 ((θ, ϕ); w) = 0, ∀w ∈ W, (12)
with the definition of the semilinear form R 2 ((θ, ϕ) ; w) as
R 2 ((θ, ϕ); w) = ∫ Γ F (B.C.) w dΓ, (13)
where (B.C.) denotes the left-hand side of either the Bernoulli condition (4) or the Dirichlet condition (9).
Shape Derivatives
The linearisation of R_1((θ, ϕ); v) and R_2((θ, ϕ); w) requires differentiating the weak forms with respect to the geometry, where the geometry itself is treated as a variable. Shape derivatives provide this differentiation on a fixed reference domain, under appropriate smoothness assumptions.
The weak forms (11) and (13) contain domain integrals ∫_Ω (·) dΩ and boundary integrals ∫_{Γ_F} (·) dΓ. The shape derivatives of such integrals are given by the Hadamard formulas [13,14] (Theorems 4.1 and 4.2). For a domain integral J(Ω) = ∫_Ω ϕ dΩ, with Γ the boundary of Ω of class C¹, the shape derivative with respect to the perturbation δθ ∈ C¹_0(R^N; R^N) is
⟨dJ(Ω), δθ⟩ = ∫_Γ ϕ δθ · n dΓ,
where n denotes the outward unit normal to Ω. For a boundary integral J(Γ) = ∫_Γ ϕ dΓ, the shape derivative with respect to the perturbation δθ ∈ C¹_0(R^N; R^N) is
⟨dJ(Γ), δθ⟩ = ∫_Γ (∂_n ϕ + κϕ) δθ · n dΓ,
where κ is the curvature of Γ.
In the above theorems, δθ is defined as
δθ = δθ (x 0 ) , with x 0 + δθ (x 0 ) = x ∈ Γ F , and x 0 ∈ Γ 0 ,
where Γ 0 is the reference domain.
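The Hadamard formula for a domain integral can be checked numerically on a simple example. The sketch below is our own illustration (not from the paper): for Ω a disk of radius r, ϕ(x, y) = x² + y², and a uniform radial perturbation δθ = n, the finite-difference derivative of J(Ω) = ∫_Ω ϕ dΩ with respect to r should match the boundary integral ∫_Γ ϕ δθ · n dΓ = r² · 2πr.

```python
import math

def J(r, n=20000):
    # J(Omega_r) = int_0^{2pi} int_0^r s^2 * s ds dtheta, evaluated by the midpoint rule
    h = r / n
    return 2 * math.pi * sum(((i + 0.5) * h) ** 3 * h for i in range(n))

r = 1.3
eps = 1e-5
fd = (J(r + eps) - J(r - eps)) / (2 * eps)   # finite-difference shape derivative
hadamard = 2 * math.pi * r * r ** 2          # int_Gamma phi (delta_theta . n) dGamma, phi = r^2 on the circle
assert abs(fd - hadamard) < 1e-3
```

Here the disk, the choice of ϕ, and the quadrature are illustrative assumptions; the analytic value of both sides is 2πr³, consistent with J(Ω_r) = πr⁴/2.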
Linearisation
The linearisation of R_1((θ, ϕ); v) and R_2((θ, ϕ); w) at an approximation pair (θ̂, φ̂) close to the exact solution can be derived from the partial derivatives of the weak forms with respect to ϕ and θ. We assume that φ̂ is an approximation satisfying the boundary conditions on ∂Ω \ Γ_F, defined on the approximate domain Ω̂ with free boundary Γ̂ induced by the approximation θ̂. The y-component of θ̂ is denoted by η̂.
The Gâteaux derivative at φ̂ in the direction δϕ and the linearisation of the Dirichlet boundary condition are relatively standard, similar to [17], while the approximate linearisation of the Bernoulli condition turns out to be surprisingly elegant and straightforward.
Linearisation of R 1
The Gâteaux derivative at φ̂ in the direction δϕ can be evaluated as
⟨∂_ϕ R_1((θ̂, φ̂); v), δϕ⟩ = lim_{t→0} [R_1((θ̂, φ̂ + tδϕ); v) − R_1((θ̂, φ̂); v)] / t = ∫_Ω̂ ∇δϕ · ∇v dΩ. (14)
Then the linearisation with respect to θ can be obtained by applying the Hadamard formulas of Theorem 4.1 to (11), which yields
⟨∂_θ R_1((θ̂, φ̂); v), δθ⟩ = ∫_Γ̂ ∇φ̂ · ∇v δθ · n dΓ − ∫_Γ̂ f v δθ · n dΓ. (15)
The tangential gradient ∇_Γ and tangential divergence div_Γ are defined as
∇_Γ(·) = ∇(·) − ∂_n(·) n,  div_Γ(·) = div(·) − ∂_n(·) · n. (16)
By substituting (16) into (15) and applying the tangential Green's identity [13,14], (15) can be approximated as
⟨∂_θ R_1((θ̂, φ̂); v), δθ⟩ = ∫_Γ̂ (∇_Γ φ̂ · ∇_Γ v + ∂_n φ̂ ∂_n v) δθ · n dΓ − ∫_Γ̂ f v δθ · n dΓ ≈ −∫_Γ̂ div_Γ(δθ · n ∇_Γ φ̂) v dΓ − ∫_Γ̂ f v δθ · n dΓ ≈ −∫_Γ̂ div_Γ(δθ · n ∇φ̂) v dΓ − ∫_Γ̂ f v δθ · n dΓ. (17)
Due to the Neumann boundary condition (2) (or (7)), ∂ nφ is very small, and the related term is neglected in the second and third steps.
Linearisation of R 2 with Dirichlet condition
Considering the Dirichlet boundary condition, we have
R_2((θ, ϕ); w) = ∫_{Γ_F} (ϕ − h) w dΓ. (18)
Similar to the linearisation of R_1 with respect to ϕ, the Gâteaux derivative at φ̂ in the direction δϕ is straightforward to evaluate:
⟨∂_ϕ R_2((θ̂, φ̂); w), δϕ⟩ = ∫_Γ̂ δϕ w dΓ. (19)
Then, applying the Hadamard formula to the boundary integral (18), we obtain the shape linearisation
⟨∂_θ R_2((θ̂, φ̂); w), δθ⟩ = ∫_Γ̂ (∂_n + κ)[(φ̂ − h) w] δθ · n dΓ = ∫_Γ̂ [∂_n(φ̂ − h) w + (φ̂ − h) ∂_n w + κ (φ̂ − h) w] δθ · n dΓ. (20)
Using the Dirichlet condition (9) and the Neumann condition (7) on the free boundary, we can neglect the (φ̂ − h) terms and the ∂_n φ̂ term in (20). We then have the approximation
⟨∂_θ R_2((θ̂, φ̂); w), δθ⟩ ≈ −∫_Γ̂ (∂_n h) w δθ · n dΓ. (21)
Linearisation of R 2 with Bernoulli condition
Substituting the Bernoulli condition (4) into the weak form (13), we have
R_2((θ, ϕ); w) = ∫_{Γ_F} (a|∇ϕ|² + bη + c) w dΓ. (22)
The linearisation in terms of ϕ at the approximation φ̂ is
⟨∂_ϕ R_2((θ̂, φ̂); w), δϕ⟩ = ∫_Γ̂ 2a ∇φ̂ · ∇δϕ w dΓ. (23)
To find the Gâteaux derivative with respect to θ at θ̂, applying the Hadamard formula yields
⟨∂_θ R_2((θ̂, φ̂); w), δθ⟩ = ∫_Γ̂ (∂_n + κ)[(a|∇φ̂|² + bη̂ + c) w] δθ · n dΓ
= ∫_Γ̂ (a ∂_n|∇φ̂|² + b n_y) w δθ · n dΓ + ∫_Γ̂ (a|∇φ̂|² + bη̂ + c) ∂_n w δθ · n dΓ + ∫_Γ̂ κ (a|∇φ̂|² + bη̂ + c) w δθ · n dΓ, (24)
where n_y is the y-coordinate of the unit normal vector n. Since η̂ is the y-component of θ̂, we can evaluate
∂_n η̂ = ∂_n y = (0, 1) · (n_x, n_y) = n_y.
According to the Bernoulli condition (4), a|∇φ̂|² + bη̂ + c → 0, so the corresponding terms can be neglected, and the approximation is
⟨∂_θ R_2((θ̂, φ̂); w), δθ⟩ ≈ ∫_Γ̂ (a ∂_n|∇φ̂|² + b n_y) w δθ · n dΓ. (25)
Given θ̂ = (x, η̂(x)), the unit normal vector is n = (−η̂_x, 1)/√(1 + η̂_x²) and the unit tangential vector is τ = (1, η̂_x)/√(1 + η̂_x²).
Then the Neumann boundary condition (2) on the free boundary can be written in the form −η̂_x φ̂_x + φ̂_y = 0. This implies that its tangential derivative is also zero, i.e.
(τ · ∇)(−η̂_x φ̂_x + φ̂_y) = 0, which is equivalent to
−η̂_xx φ̂_x − η̂_x φ̂_xx + φ̂_xy − η̂_x² φ̂_xy + η̂_x φ̂_yy = 0. (26)
Then we have
∂_n|∇φ̂|² = [1/√(1 + η̂_x²)] (−η̂_x ∂_x + ∂_y)(φ̂_x² + φ̂_y²) = [2/√(1 + η̂_x²)] φ̂_x (−η̂_x φ̂_xx − η̂_x² φ̂_xy + φ̂_xy + η̂_x φ̂_yy) = [2/√(1 + η̂_x²)] η̂_xx φ̂_x² = 2κ̂ |∇φ̂|², (27)
where κ̂ = ∂_x(η̂_x/√(1 + η̂_x²)). The second and last steps are obtained by substituting the Neumann condition, and the third step by substituting (26).
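The identity (27) can be verified numerically on a concrete potential satisfying the kinematic condition. The sketch below is our own check (not part of the paper): for the potential φ = x² − y², the streamline xy = 1/2 serves as a free boundary η(x) = 1/(2x) on which −η_x φ_x + φ_y = 0 holds exactly, so ∂_n|∇φ|² should equal 2κ|∇φ|² along the curve.

```python
import math

def phi_x(x, y): return 2 * x          # phi = x^2 - y^2
def phi_y(x, y): return -2 * y

def eta(x):   return 1.0 / (2 * x)     # free boundary: the streamline xy = 1/2
def etax(x):  return -1.0 / (2 * x * x)
def etaxx(x): return 1.0 / x ** 3

def grad2(x, y):                       # |grad phi|^2
    return phi_x(x, y) ** 2 + phi_y(x, y) ** 2

for x in (0.7, 1.0, 1.9):
    y = eta(x)
    s = math.sqrt(1 + etax(x) ** 2)
    # kinematic (Neumann) condition holds on the curve
    assert abs(-etax(x) * phi_x(x, y) + phi_y(x, y)) < 1e-12
    # d_n |grad phi|^2 with n = (-eta_x, 1)/s, derivatives by central differences
    h = 1e-6
    dgx = (grad2(x + h, y) - grad2(x - h, y)) / (2 * h)
    dgy = (grad2(x, y + h) - grad2(x, y - h)) / (2 * h)
    lhs = (-etax(x) * dgx + dgy) / s
    kappa = etaxx(x) / (1 + etax(x) ** 2) ** 1.5
    assert abs(lhs - 2 * kappa * grad2(x, y)) < 1e-5
```

The specific potential and boundary are assumptions for illustration; any pair satisfying the Neumann condition exactly on the curve should pass the same check.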
On substitution from (27) into (25), the approximate shape linearisation is
⟨∂_θ R_2((θ̂, φ̂); w), δθ⟩ ≈ ∫_Γ̂ (2aκ̂ |∇φ̂|² + b n_y) w δθ · n dΓ. (28)
Newton-Like Schemes
Now we introduce ϕ = φ̂ + δϕ and θ = θ̂ + δθ ∈ Γ_F, where δϕ and δθ are the corrections evaluated in the reference domain Ω̂. The exact Newton method for (δθ, δϕ) would be
⟨∂_(θ,ϕ) R_1((θ̂, φ̂); v), (δθ, δϕ)⟩ = −R_1((θ̂, φ̂); v) ∀v ∈ V, (29)
⟨∂_(θ,ϕ) R_2((θ̂, φ̂); w), (δθ, δϕ)⟩ = −R_2((θ̂, φ̂); w) ∀w ∈ W. (30)
The Newton-like scheme for R_1 is obtained by combining (14) and the approximation (17) of ⟨∂_θ R_1((θ̂, φ̂); v), δθ⟩, i.e.
∫_Ω̂ ∇δϕ · ∇v dΩ − ∫_Γ̂ div_Γ(δθ · n ∇φ̂) v dΓ − ∫_Γ̂ f v δθ · n dΓ = −R_1((θ̂, φ̂); v), ∀v ∈ V. (31)
1. Initialize with θ 0 , ϕ 0 ; set k = 0.
2. Given θ k , ϕ k , solve the free boundary problem (34)-(36) with (37) (or(38)) for (δθ · n, δϕ).
3. Update the free boundary displacement as
θ k+1 = θ k + (δθ · n) m k .
and ϕ k as
ϕ k+1 = ϕ k + δϕ,
with m_k · n = 1 on Γ̂. Then repeat from step 2 until convergence.
Similarly, for the Dirichlet boundary condition, the Newton-like scheme is derived from (19) and the approximation (21) as
∫_Γ̂ δϕ w dΓ − ∫_Γ̂ (∂_n h) w δθ · n dΓ = −R_2((θ̂, φ̂); w), ∀w ∈ W. (32)
For the Bernoulli condition, combining (23) and (28), the Newton-like scheme is
∫_Γ̂ 2a ∇φ̂ · ∇δϕ w dΓ + ∫_Γ̂ (2aκ̂ |∇φ̂|² + b n_y) w δθ · n dΓ = −R_2((θ̂, φ̂); w), ∀w ∈ W. (33)
When φ̂ and θ̂ are the exact solutions, the approximate schemes coincide with the exact Newton scheme.
The boundary-value problem for (δθ, δϕ) can be extracted from the Newton-like schemes (31)-(33):
∇²δϕ = −∇²φ̂ − f in Ω̂, (34)
∂_n δϕ + div_Γ(δθ · n ∇φ̂) − f δθ · n = 0 on Γ̂, (35)
∂_n δϕ + ω δϕ = g + ωh − ∂_n φ̂ − ωφ̂ on Γ_R, (36)
with the boundary condition on the free boundary given either by the Dirichlet condition
δϕ − (∂_n h) δθ · n = h − φ̂ on Γ̂, (37)
or by the Bernoulli condition
2a ∇φ̂ · ∇δϕ + (2aκ̂ |∇φ̂|² + b n_y) δθ · n = −(a|∇φ̂|² + bη̂ + c) on Γ̂. (38)
The algorithm is given in Table 1. The free boundary is updated along the direction of m_k, with m_k · n = 1, so that the free surface can remain piecewise smooth. Choosing m_k = (0, 1/n_y), the free boundary is updated in the y direction.
Alternatively, we have dΓ = ds = √(1 + η̂_x²) dx, so that
∫_Γ (·) δθ · n dΓ = ∫_Γ (·) δη dx, (39)
where s is the arc length and δη = √(1 + η̂_x²)(δθ · n) = δθ · (−η̂_x, 1). The boundary integrals can then be evaluated in a reference domain along the x direction, and the problem can be solved in terms of the pair (δη, δϕ). The resulting algorithm is displayed in Table 2, and the geometry is updated vertically with δη.
1. Initialize with η 0 , ϕ 0 ; set k = 0.
2. Given η k , ϕ k , solve the free boundary problem (34)-(36) with (38) for (δη, δϕ).
Update the free boundary displacement as
η k+1 = η k + δη.
and ϕ_k as ϕ_{k+1} = ϕ_k + δϕ. Then repeat from step 2 until convergence.
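The coupled structure of the scheme in Table 2 — driving a field residual and a free-boundary residual to zero simultaneously by Newton updates of the pair — can be illustrated on a drastically simplified toy problem. The sketch below is our own construction, not the paper's PDE solver: the "field" is u(x) = a·x with residual R₁ = u′ − 1, the "free boundary" is the point s with a Bernoulli-like residual R₂ = u(s) − s², and Newton is applied to the pair (a, s), which converges to (1, 1).

```python
# Toy analogue of the coupled Newton iteration: the unknowns play the roles
# of the field correction and the boundary correction (delta_phi, delta_eta).

def residuals(a, s):
    r1 = a - 1.0           # "field" residual: u'(s) - 1 for u = a*x
    r2 = a * s - s * s     # "free boundary" residual: u(s) - s^2
    return r1, r2

def jacobian(a, s):
    # d(r1, r2)/d(a, s)
    return [[1.0, 0.0],
            [s, a - 2.0 * s]]

a, s = 0.5, 2.0
for _ in range(30):
    r1, r2 = residuals(a, s)
    (j11, j12), (j21, j22) = jacobian(a, s)
    det = j11 * j22 - j12 * j21
    # Solve J [da, ds]^T = -[r1, r2]^T by Cramer's rule
    da = (-r1 * j22 + r2 * j12) / det
    ds = (-r2 * j11 + r1 * j21) / det
    a, s = a + da, s + ds
    if abs(r1) + abs(r2) < 1e-13:
        break

assert abs(a - 1.0) < 1e-10 and abs(s - 1.0) < 1e-10
```

As in the paper's scheme, the two residuals are coupled through the Jacobian, and the iteration exhibits the usual quadratic Newton convergence near the solution.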
Numerical experiments
We start with a straightforward test case for the Dirichlet boundary condition problem and then focus on the submerged triangle problem. The first test case is also a Bernoulli free-boundary problem, simplified from the submerged triangle problem, with a Dirichlet condition on both the fixed and the free boundary. The submerged triangle problem is the problem to which we are mainly interested in applying the shape-Newton scheme. We use the algorithm in Table 2, so the displacement of the free boundary is updated vertically.
Dirichlet boundary condition
The test case for the free-boundary problem with Dirichlet boundary condition is a Bernoulli free-boundary problem derived from a manufactured solution,
ϕ = x + y, η = x + 1,(40)
such that the data can be obtained as
f = 0, g = 0, and h = 2y − 1 on Γ_F, h = x + y on ∂Ω \ Γ_F.
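The consistency of the manufactured data can be confirmed by direct differentiation. The short check below is our own (plain arithmetic rather than the paper's finite elements): ϕ = x + y is harmonic, satisfies the homogeneous Neumann condition on the line y = x + 1, and reduces to h = 2y − 1 there.

```python
# phi = x + y, eta = x + 1: check f = 0, the Neumann condition on Gamma_F,
# and the Dirichlet data h = 2y - 1 on the free boundary.
def phi(x, y):
    return x + y

for x in (0.0, 0.3, 0.8):
    y = x + 1.0                    # point on the free boundary eta = x + 1
    # Laplacian of x + y vanishes identically, so f = 0.
    # Unit normal on y = x + 1 is n = (-1, 1)/sqrt(2); grad phi = (1, 1).
    dn_phi = (-1.0 * 1.0 + 1.0 * 1.0) / 2 ** 0.5
    assert abs(dn_phi) < 1e-14                   # homogeneous Neumann condition
    assert abs(phi(x, y) - (2 * y - 1)) < 1e-14  # h = 2y - 1 on Gamma_F
```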
With the initial domain Ω_0 = {(x, y) : x ∈ [0, 1], y ∈ [0, x² + 1]}, Figure 2 shows how the domain and the triangulation change over the first three iterations. Starting from a parabola, the free boundary is almost a straight line after the third iteration. Figure 3 shows the error between the numerical results for ϕ and η and the exact solution (40) on the free boundary Γ_F for different numbers of finite element meshes. The value N + 1 is the number of nodes along the x-axis, and the number of nodes along the y-axis is N/4. Although the error is slightly larger with more nodes, the shape-Newton scheme converges superlinearly. Moreover, there appears to be a plateau at higher iterations. One possible reason for the plateau and the rising error values is that we use a finite difference method to find the derivatives and the normal vector. Even so, the error is still around 10⁻¹² when we choose N = 640, for which there are 409600 elements in total in the domain.
The submerged triangle problem
The second test case is the submerged triangle problem investigated by Dias and Vanden-Broeck [18]. A detailed derivation of the governing equations can be found in Appendix A. In this problem, we have a Neumann boundary condition on ∂Ω \ Γ_R and a Dirichlet boundary condition on Γ_R, i.e. ω = 0 on ∂Ω \ Γ_R and ω → ∞ on Γ_R. The Bernoulli condition is obtained by setting a = F²/2, b = 1 and c = −(F²/2 + 1), where F is the Froude number. The domain is a rectangle truncated at |x| = 4, containing an isosceles triangle symmetric about x = 0 with angle α and width 2w_0 at the bottom, as shown in Figure 4. The space is discretised as shown in Figure 5: uniformly spaced along the x-axis, and in the vertical direction for fixed values of x. The algorithm in Table 2 can then be applied to solve for the pair (δη, δϕ), and the free boundary is updated vertically with δη. The remaining data defining this problem are
f = 0, g = 0 on ∂Ω \ Γ_L, g = −1 on Γ_L, h = 0 on Γ_R.
Dias and Vanden-Broeck [18] found that the solutions to the submerged problem are of two types: one is supercritical flow both upstream and downstream, and the other is supercritical (or subcritical) upstream and subcritical (or supercritical) downstream. Our numerical solutions are of the first type, and we compare them with the results in [18]. Some converged grids of the whole region are shown in Figure 6. We notice that η(x) has a maximum value y_0 at x = 0 on the free boundary, and that y_0 changes with the values of α, w_0 and F. Figure 7 shows y_0 against the Froude number F for various values of α. We observe from Figure 7 that y_0 decreases as the Froude number F becomes larger for a fixed width of the triangle. In addition, for fixed values of F and angle α, y_0 also decreases with the width of the triangle. This agrees with the results presented by Dias and Vanden-Broeck [18], who solved this problem for fixed α = π/4. However, as shown in Figures 7c and 7d, it is hard for us to solve this problem with a larger triangle. This also explains why the critical values obtained from our algorithm differ from the results in [18]. The details of Dias and Vanden-Broeck's results are given in Appendix A.
We also found that the solutions are challenging for larger angles α at fixed width. The likely reason is that, with a higher triangle, the flow can approach its limiting configuration: a thin layer over the edge of the triangle with a stagnation point.
Figure 4: The sketch of the domain used for the second test case; α denotes the angle and w_0 the half-width of the triangle.
[Figure 4 shows the rectangular domain Ω with corners (−4, 0), (4, 0), (4, 1), (−4, 1), the boundaries Γ_F, Γ_L, Γ_R, Γ_B, and the submerged triangle of half-width w_0 and angle α.]
The rate of convergence is shown in Figure 8, where we plot the error ||δϕ||_{L2} and the surface error ||δη||_{L2} against the number of iterations for α = π/8, w_0 = 0.3 and F = 3. The Dirichlet error δϕ in Ω and the surface error δη on Γ_F show superlinear convergence. This figure also compares the errors for different mesh densities. The Dirichlet error is slightly larger with higher mesh densities but converges faster. The convergence of the surface error shows little difference at the beginning but then reaches lower values for higher mesh densities. The interesting behaviour is that both the Dirichlet and the surface errors oscillate around values between order 10⁻¹⁰ and 10⁻¹³. The order of these values is higher for higher mesh densities, which can be explained by the discretisation error: convergence slows as the error becomes very close to the discretisation error. In addition, the convergence of the surface error slows first, which in turn affects the Dirichlet error, as a consequence of solving for the pair.
Conclusion
We derived a shape-Newton method to solve generic free-boundary problems. The linearised problems are obtained by applying the Hadamard formula for shape derivatives to the two sets of weak forms of the free boundary problem. After the linearisation and neglecting the small-valued terms, the linearised problem can be solved by a Newton-like scheme with an approximated Jacobian matrix.
The linearisation for the problem with the Dirichlet boundary condition is relatively standard, and we use a straightforward numerical experiment with a manufactured solution to test the numerical schemes. The results agree well with the exact solutions and converge superlinearly.
The linearisation for the problem with the Bernoulli equation is interesting. The curvature terms of the shape derivative of the boundary integral can be neglected due to the Neumann boundary condition (2), and only the normal-derivative terms remain. After some calculation, we find that the normal-derivative term satisfies ∂_n|∇ϕ|² = 2κ|∇ϕ|². The test problem considers the flow over a submerged triangle, the details of which are given in Appendix A. This chapter only assumes that the inflow and outflow have the same depth and speed. The results in [18] show that, for a fixed shape of the triangle, the Froude number F first decreases and then increases as the maximum deviation of the free boundary increases. This indicates that, for some values of F, two solutions exist. However, our method can only find the solution with the lower maximum deviation, owing to the difficulty of solving the problem with a larger triangle. Despite this, both numerical tests show that the shape-Newton method converges superlinearly.
A Submerged Triangle Problem
The submerged triangle problem is from [18], and we give here the details of how Dias and Vanden-Broeck formulate it. The model considers a steady irrotational flow of an incompressible, inviscid fluid over a triangular obstruction, as shown in Fig. 9a. A system of Cartesian coordinates is introduced, with the x-axis along the parallel bottom plate and the y-axis passing through the apex (point B) of the triangle. The acceleration of gravity g acts in the negative y-direction. The flow approaches a uniform stream as |x| → ∞, where the upstream flow has velocity U and depth L, and the downstream flow has velocity Ũ and depth L̃. The height of the triangle is denoted by W. We introduce the velocity potential ϕ(x, y) of this flow and the location of the free surface y(x). The Froude number F is defined as
F = U/(gL)^{1/2}. (41)
Now we introduce the dimensionless variables
x′ = x/L, y′ = y/L, ϕ′ = ϕ/(UL), L̃′ = L̃/L, Ũ′ = Ũ/U, (42)
and drop the primes for convenience of notation. When L̃ ≤ 1, the flow is subcritical upstream and supercritical downstream (as shown in Fig. 9b). Moreover, the flow is supercritical both upstream and downstream when L̃ = 1 and F ≥ 1.
We denote the whole flow region by Ω, the bottom plate by Γ_B and the free surface by Γ_F. The Bernoulli condition on Γ_F is
(1/2) F² |∇ϕ|² + y = constant on Γ_F. (43)
The constant on the right-hand side of the Bernoulli equation can be evaluated from the conditions upstream. The Bernoulli equation on the free surface is then
(1/2) F² |∇ϕ|² + y = (1/2) F² + 1. (44)
Now the governing equation and boundary conditions are
∇²ϕ = 0 in Ω, (45)
(1/2) F² |∇ϕ|² + y = (1/2) F² + 1 on Γ_F, (46)
∂ϕ/∂n = 0 on Γ_F, (47)
∂ϕ/∂n = 0 on Γ_B, (48)
ϕ_x = 1, x → −∞, (49)
ϕ = 0, x → −∞, (50)
y = 1, x → ∞, (51)
where n is the unit normal to the boundary pointing out of the flow.
According to [18], two different types of solutions are derived by considering the Bernoulli condition at |x| → ∞:
(1/2) F² + 1 = (1/2) F² Ũ² + L̃. (52)
The discharge Q is defined as Q = UL = ŨL̃. (53)
We can then eliminate Ũ from (52) by substituting (53), which gives
(L̃ − 1) [(1/2) F² (1/L̃ + 1) − L̃] = 0. (54)
It is obvious that this equation has two solutions:
L̃ = 1, (55)
and
F² = 2L̃²/(1 + L̃). (56)
The first solution (55) indicates that L̃ = L and Ũ = U. For the solution (56), it can be shown that F ≥ 1 when L̃ ≥ 1, and F ≤ 1 when L̃ ≤ 1. When considering the second type of solution, Dias and Vanden-Broeck [18] assume that L̃ ≤ 1, so the flow is subcritical upstream and supercritical downstream. An example of this flow is shown in Fig. 9b.
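The two branches of (54) can be confirmed numerically. A small sketch of our own: both L̃ = 1 and any pair (L̃, F) related by (56) annihilate the left-hand side of (54), and on the second branch F ≥ 1 holds exactly when L̃ ≥ 1.

```python
import math

def lhs54(Ltil, F):
    # (Ltil - 1) * (F^2/2 * (1/Ltil + 1) - Ltil), the left-hand side of (54)
    return (Ltil - 1.0) * (0.5 * F * F * (1.0 / Ltil + 1.0) - Ltil)

# First branch: Ltil = 1 solves (54) for any Froude number F.
for F in (0.5, 1.0, 2.0):
    assert abs(lhs54(1.0, F)) < 1e-14

# Second branch: F^2 = 2 Ltil^2 / (1 + Ltil), Eq. (56).
for Ltil in (0.5, 0.9, 1.0, 1.5, 3.0):
    F = math.sqrt(2.0 * Ltil ** 2 / (1.0 + Ltil))
    assert abs(lhs54(Ltil, F)) < 1e-12
    # F >= 1 iff Ltil >= 1 on this branch, since 2L^2 - L - 1 = (2L + 1)(L - 1)
    assert (F >= 1.0) == (Ltil >= 1.0)
```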
The results we compare against are for the first solution with F ≥ 1, as shown in Fig. 10 of [18]. Here τ is the maximum value of the deviation of the free surface, which is equivalent to y_0 in our notation, and t_3 defines the geometry of the triangle via a conformal mapping onto a half unit circle called the t-plane. Dias and Vanden-Broeck found that F first decreases and then increases as τ becomes larger, indicating two solutions for some values of F. By considering the solitary wave, i.e. t_3 = 0, they found that the maximum of τ satisfies τ_max = (1/2) F².
Figure 1: The sketch of the parametrization of the free boundary Γ_F by the displacement θ(x_0) with respect to the reference boundary Γ_0.
Theorem 4.1 (Shape derivative of domain integral). Suppose ϕ ∈ W^{1,1}(R^N) and Ω is an open and bounded domain; we consider the domain integral J(Ω) = ∫_Ω ϕ dΩ.
Theorem 4.2 (Shape derivative of boundary integral). Suppose ϕ ∈ W^{2,1}(R^N) and Ω is an open and bounded domain with boundary Γ of class C^{1,1}; we consider the boundary integral J(Γ) = ∫_Γ ϕ dΓ.
(a) The initial domain and the triangulation. (b) The domain and the triangulation after the third iteration.
Figure 2: The initial domain and the change of the domain in the three following Newton-like iterations. The free surface is updated vertically.
Figure 3: The Dirichlet error ||ϕ − h||_{L∞} and surface error ||η − η̂||_{L∞} on Γ_F against the number of iterations. The upper plot shows the Dirichlet error, and the lower the surface error. The values N + 1 are the numbers of nodes along the x-axis.
Figure 5: An example of the domain and the triangulation with α = π/4, F = 2 and half-width of the triangle w_0 = 0.5.
(a) The final domain for α = π/8, w_0 = 0.5, and F = 2. (b) The final domain for α = π/16, w_0 = 0.5, and F = 2. (c) The final domain for α = π/8, w_0 = 1, and F = 2.
Figure 6: The final domains for various α, w_0 and F, whose free boundaries are the numerical solutions.
(a) The maximum value y_0 on the free boundary at x = 0 against F with w_0 = 0.1 for different values of α. (b) Likewise with w_0 = 0.3. (c) Likewise with w_0 = 0.5. (d) Likewise with w_0 = 0.7.
Figure 7: The maximum value y_0 on the free boundary at x = 0 against F for different values of α and w_0.
Figure 8: The error ||δϕ||_{L2} and surface error ||δη||_{L2} on Γ_F against the number of iterations with α = π/8, w_0 = 0.3 and F = 3. The upper plot shows the Dirichlet error, and the lower the surface error. The values N + 1 are the numbers of nodes along the x-axis.
[7] J. Haslinger, K. Kunisch, and G. Peichl. Shape optimization and fictitious domain approach for solving free boundary problems of Bernoulli type. Computational Optimization and Applications, 26(3):231-251, 2003.
[8] T. Tiihonen. Shape optimization and trial methods for free boundary problems. ESAIM: Mathematical Modelling and Numerical Analysis, 31(7):805-825, 1997.
[9] J. I. Toivanen, J. Haslinger, and R. A. E. Mäkinen. Shape optimization of systems governed by Bernoulli free boundary problems. Computer Methods in Applied Mechanics and Engineering, 197(45-48):3803-3815, 2008.
[18] F. Dias and J.-M. Vanden-Broeck. Open channel flows with submerged obstructions. Journal of Fluid Mechanics, 206:155-170, 1989.
Figure 9: Two different types of solutions and the sketch of the flow. (a) A sketch of the flow for U = Ũ, L = L̃ and F ≥ 1. (b) An example of subcritical flow upstream and supercritical flow downstream.
Figure 10: τ is the maximum value of the deviation of the free surface; t_3 defines the geometry of the triangle via a conformal mapping onto a half unit circle called the t-plane. The dashed line shows the analytical values of the maximum of τ with respect to F. The curve labelled 0 is the solitary wave when t_3 = 0. This graph is from [18].
Table 1: The coupled shape-Newton scheme solving for (δθ, δϕ).
Table 2: The coupled shape-Newton scheme for (δη, δϕ).
[1] J. Crank. Free and Moving Boundary Problems. Oxford Science Publications. Clarendon Press, 1987.
[2] M. Rumpf and M. Flucher. Bernoulli's free-boundary problem, qualitative theory and numerical approximation. Journal für die reine und angewandte Mathematik, 1997(486):165-204, 1997.
[3] F. Bouchon, S. Clain, and R. Touzani. Numerical solution of the free boundary Bernoulli problem using a level set formulation. Computer Methods in Applied Mechanics and Engineering, 194(36-38):3934-3948, 2005.
[4] C. M. Kuster, P. A. Gremaud, and R. Touzani. Fast numerical methods for Bernoulli free boundary problems. SIAM Journal on Scientific Computing, 29(2):622-634, 2007.
[5] K. Eppler and H. Harbrecht. Efficient treatment of stationary free boundary problems. Applied Numerical Mathematics, 56(10-11):1326-1339, 2006.
[6] J. Haslinger, K.-H. Hoffmann, and R. A. E. Mäkinen. Optimal control, dual approach for the numerical solution of a dam problem. Inst. für Angewandte Mathematik und Statistik, Techn. Univ. München, 1992.
[10] E. H. van Brummelen and A. Segal. Numerical solution of steady free-surface flows by the adjoint optimal shape design method. International Journal for Numerical Methods in Fluids, 41(1):3-27, 2003.
[11] G. Mejak. Numerical solution of Bernoulli-type free boundary value problems by variable domain method. International Journal for Numerical Methods in Engineering, 37(24):4219-4245, 1994.
[12] K. G. van der Zee, E. H. van Brummelen, I. Akkerman, and R. de Borst. Goal-oriented error estimation and adaptivity for fluid-structure interaction using exact linearized adjoints. Computer Methods in Applied Mechanics and Engineering, 200(37):2738-2757, 2011. Special Issue on Modeling Error Estimation and Adaptive Modeling.
[13] M. Delfour and J.-P. Zolésio. Shapes and Geometries: Analysis, Differential Calculus and Optimization, volume 4. 2001.
[14] J. Sokolowski and J.-P. Zolésio. Introduction to Shape Optimization: Shape Sensitivity Analysis. Springer Series in Computational Mathematics. Springer, Berlin, Heidelberg, 1992.
[15] K. T. Kärkkäinen and T. Tiihonen. Shape calculus and free boundary problems. In Proceedings of the European Congress on Computational Methods in Applied Sciences and Engineering, ECCOMAS, 2004.
[16] K. T. Kärkkäinen and T. Tiihonen. Free surfaces: shape sensitivity analysis and numerical methods. International Journal for Numerical Methods in Engineering, 44(8):1079-1098, 1999.
[17] K. G. van der Zee, G. J. van Zwieten, C. V. Verhoosel, and E. H. van Brummelen. Shape-Newton method for isogeometric discretizations of free-boundary problems. In MARINE 2011, IV International Conference on Computational Methods in Marine Engineering, pages 85-102. Springer, 2013.
Fate of exceptional points under interactions: Reduction of topological classifications

Tsuneya Yoshida
Department of Physics, Kyoto University, Kyoto 606-8502, Japan

Yasuhiro Hatsugai
Department of Physics, University of Tsukuba, Ibaraki 305-8571, Japan

(Dated: February 16, 2023)
Despite recent extensive studies of the non-Hermitian topology, understanding interaction effects remains a crucial open question. In this paper, we address interaction effects on exceptional points, which are protected by the non-trivial point-gap topology unique to non-Hermitian systems. Our analysis in a two-dimensional parameter space elucidates the existence of exceptional points and symmetry-protected exceptional rings that are fragile against interactions; they are topologically protected only in non-interacting cases. This fragility of exceptional points and symmetry-protected exceptional rings arises from the reduction of non-Hermitian topological classifications, which is elucidated by introducing topological invariants of the second-quantized Hamiltonian for both non-interacting and interacting cases. These topological invariants can also be used to analyze the reduction phenomena of gapped systems. The above results strongly suggest similar reduction phenomena of exceptional points in generic cases and open up a new direction of research in the non-Hermitian topology.
INTRODUCTION
Extensive efforts have been devoted to understanding the effects of interactions on topological insulators/superconductors. In particular, it has turned out that the interplay between topology and interactions triggers exotic phenomena such as the emergence of fractional topological insulators 1-8 and topological Mott insulators 9 . Furthermore, it has been elucidated that interactions change the topological classification of free fermions [10][11][12], which provides a systematic understanding of topological states and serves as the cornerstone of material searching. For instance, interactions change the Z-classification to the Z 8 -classification for one-dimensional topological superconductors with time-reversal symmetry 13 . This fact indicates that the number of possible topological states is reduced by interactions; there exist an infinite number of topologically distinct states in non-interacting cases, while there exist only eight topologically distinct states in interacting cases. Further extensive works have elucidated the ubiquity of such reduction of topological classifications. Namely, the reduction phenomena occur for arbitrary dimensions and symmetry classes. In addition, they occur even in parameter spaces 38 .
In parallel with the above significant developments, the topological aspects of non-Hermitian systems have in recent years attracted attention as one of the hot topics in condensed matter physics [39][40][41][42][43][44][45][46][47][48][49][50][51][52][53][54][55][56][57][58] . For such systems, extensive works on the non-interacting non-Hermitian topology have discovered a variety of novel phenomena induced by the point-gap topology unique to non-Hermitian systems, such as non-Hermitian skin effects, which result in extreme sensitivity to the presence/absence of boundaries [59][60][61][62][63][64][65][66][67][68][69][70][71] . Furthermore, the non-Hermiticity induces a new type of topological degeneracies, dubbed exceptional points (EPs), which are protected by the point-gap topology [72][73][74][75][76][77][78][79] . This new type of topological degeneracies is further enriched by symmetry, which results in the emergence of symmetry-protected exceptional rings (SPERs) and symmetry-protected exceptional surfaces in two and three dimensions, respectively [80][81][82][83][84][85][86] . The EPs and their symmetry-protected variants in non-interacting systems attract interdisciplinary interest because they are reported for a wide variety of systems 67,[75][76][77][87][88][89][90][91][92][93] .
The above two lines of progress lead us to the following issue to be addressed: the effects of interactions on the non-Hermitian topology. Although several works have addressed this issue, the fate of EPs under interactions remains a highly crucial question. The significance of this question is further enhanced by recent experimental progress in cold atoms [116][117][118] and quantum circuits 119 .
We hereby analyze effects of interactions on an EP and an SPER in a two-dimensional parameter space which are protected by symmetry. In particular, we elucidate that interactions may destroy the EP and the SPER without breaking relevant symmetry. The above fragility of EPs against interactions arises from the reduction of the non-Hermitian topological classification, which is obtained by comparing topological invariants of the secondquantized Hamiltonian for both non-interacting and interacting cases. Specifically, our analysis elucidates that the reduction Z (N +P +1)/2 → Z (Z → Z 2 ) results in the fragility of EPs (SPERs) for systems with charge U(1) symmetry and spin-parity symmetry (chiral symmetry). Here, we have focused on the Fock space with the particle number N . For even (odd) N , P takes ±1 (0).
The above topological invariants also apply to the reduction for gapped systems 104,114. For gapped systems with charge U(1) symmetry and spin-parity symmetry, our topological invariants indicate the reduction of the one-dimensional point-gap topology: Z^{(N+P+1)/2} → Z. For gapped systems with chiral symmetry, our topological invariants indicate the reduction of the zero-dimensional point-gap topology: Z → Z₂.
The rest of this paper is organized as follows. Section II is devoted to clarifying the fragility of EPs against interactions in systems with charge U(1) symmetry and spin-parity symmetry. Section III elucidates the fragility of SPERs with chiral symmetry. In Sec. IV, a brief summary is provided. In Appendix A, we count the number of the subspaces for a given Fock space. In Appendix B, we demonstrate that there also exist EPs robust against interactions. In Appendices C and D, we address the reduction phenomena for gapped systems 104,114 by computing the above topological invariants.
II. EXCEPTIONAL POINTS WITH CHARGE U(1) SYMMETRY AND SPIN-PARITY SYMMETRY
There exist EPs protected by the point-gap topology only when the second-quantized Hamiltonian is quadratic. In order to see this, let us analyze interaction effects on EPs in a two-dimensional parameter space in the presence of charge U(1) symmetry and spin-parity symmetry. The Hamiltonian reads
Ĥ = Ĥ₀ + Ĥ_int, (1a)
Ĥ₀ = Ψ̂†_α h_{αβ}(x, y) Ψ̂_β. (1b)
Here, summation is assumed over repeated indices. The first-quantized Hamiltonian is denoted by h(x, y). The real variables x and y describe the two-dimensional parameter space. The operator Ψ̂† (Ψ̂) denotes a set of creation operators ĉ†_α (annihilation operators ĉ_α) of fermions. The subscripts α and β label internal degrees of freedom such as orbital and spin. One might consider the above setup somewhat artificial; however, EPs in such a parameter space have been reported for quantum circuits 91,92.
In this section, we consider a system of fermions with spin-1/2 whose Hamiltonian preserves charge U(1) symmetry and spin-parity symmetry:
[Ĥ, N̂]_c = 0, (2a)
[Ĥ, e^{iπŜ_z}]_c = 0, (2b)
with N̂ = N̂↑ + N̂↓ and Ŝ_z = (N̂↑ − N̂↓)/2. Here N̂_σ denotes the operator of the total number of fermions in spin state σ = ↑, ↓, and the commutator is denoted by square brackets, [Â, B̂]_c := ÂB̂ − B̂Â. The above equations indicate that the second-quantized Hamiltonian Ĥ can be block-diagonalized with respect to N̂ and P̂ = (−1)^{N̂↑} = e^{iπN̂/2} e^{iπŜ_z}. We denote the eigenvalues of N̂, Ŝ_z, N̂_σ, and P̂ by N, S_z, N_σ, and P, respectively. In the rest of this section, we introduce topological invariants and demonstrate the presence of EPs which are fragile against interactions.
A. Topological invariants
For the Fock space with [N, P], (N + P + 1)/2 Z-invariants can be introduced in the non-interacting case, while the number of Z-invariants is reduced to one in the presence of interactions. Here, P is replaced by 0 for odd N. This fact indicates the reduction of the topological classification of Ĥ: Z^{(N+P+1)/2} → Z (for application to gapped systems, see Appendix C). In other words, there exist EPs which are destroyed by interactions without breaking the relevant symmetry. The key ingredient is the additional symmetry imposed on the quadratic Hamiltonian Ĥ₀ [see Eq. (4)].
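The sector counting above can be checked by brute force; the following sketch (our own illustration, not part of the original analysis) enumerates the (N↑, N↓) subspaces compatible with a given [N, P] and compares their number with (N + P + 1)/2:

```python
def allowed_sectors(N, P):
    # Spin-resolved sectors (N_up, N_down) with N_up + N_down = N and
    # (-1)**N_up = P, as dictated by charge U(1) and spin-parity symmetry.
    return [(n_up, N - n_up) for n_up in range(N + 1) if (-1) ** n_up == P]

def predicted_count(N, P):
    # (N + P + 1)/2, with P replaced by 0 for odd N (cf. Appendix A).
    return (N + (P if N % 2 == 0 else 0) + 1) // 2

for N in range(1, 9):
    for P in (+1, -1):
        assert len(allowed_sectors(N, P)) == predicted_count(N, P)

print(allowed_sectors(4, +1))  # → [(0, 4), (2, 2), (4, 0)]
```

The loop verifies the counting for all four parity/particle-number cases treated in Appendix A.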
1. Non-interacting case
In the presence of the spin-parity symmetry (2b), (N + P + 1)/2 Z-invariants can be introduced when the second-quantized Hamiltonian is quadratic.
Firstly, we note that the spin-parity symmetry imposes the following constraint on the first-quantized Hamiltonian:
[h, s_z]_c = 0, (3)
with s_z being the z-component of the first-quantized spin operator. This commutation relation can be seen by noting the relation e^{iπŜ_z} Ψ̂†_α e^{−iπŜ_z} = (e^{iπ s_z})_{αβ} Ψ̂†_β. The above constraint indicates that, at the non-interacting level, the second-quantized Hamiltonian Ĥ₀ satisfies
[Ĥ₀, Ŝ_z]_c = 0, (4)
meaning that Ĥ₀ can be block-diagonalized with respect to Ŝ_z. Thus, the Fock space with [N, P] can be divided into subspaces with (N↑, N↓). For each subspace with (N↑, N↓), the following winding number can be introduced:
W_{(N↑,N↓)} = ∮ (dλ/2πi) · ∂_λ log det[Ĥ_{(N↑,N↓)}(λ) − E_ref 1l], (5)
with the block-diagonalized Hamiltonian Ĥ_{(N↑,N↓)}, the reference energy E_ref ∈ C, and the identity matrix 1l. The integral is taken over a closed path parameterized by λ = (x, y) in the two-dimensional parameter space. Here, we have supposed that the point gap opens at the reference energy E_ref along the path; that is, det[Ĥ_{(N↑,N↓)}(λ) − E_ref 1l] ≠ 0 holds for all λ parameterizing the path. We note that for a given set of [N, P], (N + P + 1)/2 sets (N↑, N↓) are allowed, where P is replaced by 0 for odd N; the detailed derivation is provided in Appendix A. Therefore, it is concluded that (N + P + 1)/2 Z-invariants are introduced in the non-interacting case.
2. Interacting case
For correlated systems, the second-quantized Hamiltonian can be block-diagonalized not with respect to Ŝ_z but with respect to P̂ = (−1)^{N̂↑}, due to spin-parity symmetry.
Thus, the point-gap topology is characterized by
W_{[N,P]} = ∮ (dλ/2πi) · ∂_λ log det[Ĥ_{[N,P]}(λ) − E_ref 1l]. (6)
Here, Ĥ_{[N,P]}(λ) denotes the second-quantized Hamiltonian for the Fock space with [N, P].
In the non-interacting case, the above winding numbers satisfy
W_{[N,P]} = Σ_{(N↑,N↓)} W_{(N↑,N↓)}, (7)
where the summation is taken over the sets of N↑ and N↓ satisfying N↑ + N↓ = N and (−1)^{N↑} = P for given N and P. Equation (7) indicates that for the Fock space with [N, P], the point-gap topological states form the group Z^{(N+P+1)/2} in the non-interacting case, while they form its subgroup Z in correlated cases. In particular, it indicates that interactions may destroy EPs without breaking charge U(1) symmetry and spin-parity symmetry if the EPs are characterized by vanishing W_{[N,P]} and finite W_{(N↑,N↓)}.
B. Analysis of a toy model
EPs can be fragile against interactions due to the reduction of the non-Hermitian topological classification for systems with charge U(1) symmetry and spin-parity symmetry. In order to demonstrate this fact, let us analyze a three-orbital system described by the second-quantized Hamiltonian (1) with Ψ̂ = (ĉ_{a↑}, ĉ_{b↑}, ĉ_{c↑}, ĉ_{a↓}, ĉ_{b↓}, ĉ_{c↓})^T,
h(x, y) = [[0, x + iy, 0], [1, 0, 0], [0, 0, 0]] ⊕ [[0, 1, 0], [x + is↓y, 0, 0], [0, 0, 0]], (8)
where the first (second) 3 × 3 block acts on the spin-↑ (spin-↓) sector, and
Ĥ_int = iV [(Ŝ⁺_a + Ŝ⁺_b) Ŝ⁺_c + h.c.]. (9)
Here, a fermion in orbital l = a, b, c and spin state σ = ↑, ↓ is created (annihilated) by the operator ĉ†_{lσ} (ĉ_{lσ}). The parameter s↓ takes 1 or −1; unless otherwise noted, we set s↓ = −1 in the main text. The operator Ŝ⁺_l (Ŝ⁻_l) is defined as Ŝ⁺_l = ĉ†_{l↑} ĉ_{l↓} (Ŝ⁻_l = ĉ†_{l↓} ĉ_{l↑}). The prefactor V is real. The above Hamiltonian satisfies [Ĥ, n̂_{cσ}]_c = 0 for σ = ↑, ↓, and thus we suppose 120 that a fermion occupies orbital c. In this section, we focus on the Fock space with [N, P] = [2, 1] because there are no topologically protected EPs in the other subspace 121 with N = 2.
Let us start with the non-interacting case. For the Fock space specified by [N, P] = [2, 1], the second-quantized Hamiltonian Ĥ₀[2,1] is written as
Ĥ₀[2,1] = [[Ĥ₀(2,0), 0], [0, Ĥ₀(0,2)]], (10a)
with
Ĥ₀(2,0) = [[0, x + iy], [1, 0]], (10b)
Ĥ₀(0,2) = [[0, 1], [x + is↓y, 0]]. (10c)
Here we have chosen the basis as
(ĉ†_{a↑} ĉ†_{c↑} |0⟩, ĉ†_{b↑} ĉ†_{c↑} |0⟩, ĉ†_{a↓} ĉ†_{c↓} |0⟩, ĉ†_{b↓} ĉ†_{c↓} |0⟩). (11)
The matrix Ĥ₀(2,0) [Ĥ₀(0,2)] is the Hamiltonian for the subspace with (N↑, N↓) = (2, 0) [(0, 2)]. The above equation is consistent with the fact that the non-interacting Hamiltonian Ĥ₀ can be block-diagonalized with respect to Ŝ_z in the presence of spin-parity symmetry [see Eq. (4)].
The Hamiltonian Ĥ[2,1] exhibits EPs for V = 0, which can be seen by diagonalizing Ĥ₀(2,0) and Ĥ₀(0,2). Figures 1(a) and 1(b) display the eigenvalues of Ĥ₀[2,1] against x and y; EPs emerge at zero energy E = 0 and (x, y) = (0, 0), denoted by red dots. The point-gap topology protecting these EPs is characterized by the winding numbers. For the computation of W(2,0) and W(0,2), we plot det[Ĥ_{(N↑,N↓)}] for (N↑, N↓) = (2, 0) and (0, 2) in Figs. 1(c) and 1(d), respectively. From these figures, we can extract the winding numbers (W(2,0), W(0,2)) = (1, −1) for E_ref = 0, computed along a path enclosing the origin (x, y) = (0, 0). Here, the path is taken so that it winds around the origin in the counterclockwise direction. Therefore, the EPs [see Figs. 1(a) and 1(b)] are robust against perturbations at the non-interacting level because they are protected by the non-trivial point-gap topology.
In the presence of interactions, however, the above EPs are no longer protected by the topology, implying that they can be destroyed by interactions. This is because the subspaces with (N ↑ , N ↓ ) = (2, 0) and (0, 2) are unified in the presence of interactions.
Specifically, Eq. (7) elucidates that the point-gap topology is trivial in the presence of interactions; W[2,1] = W(2,0) + W(0,2) = 0 for E_ref = 0. Correspondingly, we have
Ĥ[2,1] = Ĥ₀[2,1] + Ĥ_int[2,1], (12a)
Ĥ_int[2,1] = iV [[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]], (12b)
with the basis defined in Eq. (11). Figure 2 displays the eigenvalues of Ĥ[2,1] for V = 1. As observed in this figure, the EPs are destroyed by the interaction V, which mixes the subspaces with (N↑, N↓) = (2, 0) and (0, 2). Putting the above results [Figs. 1 and 2 and Eq. (7)] together, we end up with the conclusion that the reduction Z² → Z results in the fragility of EPs against interactions for the Fock space with [N, P] = [2, 1].
We finish this section with two remarks. Firstly, we note that if the winding number W_{[N,P]} is finite, the EPs are robust against interactions, as can be seen in the case s↓ = 1 (see Appendix B).
Secondly, we point out that for the Fock space with [N, P] = [2, 1], the winding numbers can be computed analytically along the path specified by x² + y² = 1. Along this path the Hamiltonian is written as
Ĥ[2,1] = [[0, e^{iθ}, iV, 0], [1, 0, 0, iV], [iV, 0, 0, 1], [0, iV, e^{is↓θ}, 0]], (13)
with (x, y) = (cos θ, sin θ) and 0 ≤ θ < 2π. Here, we have chosen the basis (11). Diagonalizing the above Hamiltonian, we obtain the eigenvalues 122 for s↓ = −1:
E_{p±}(θ) = cos(θ/2) ± i √(sin²(θ/2) + V²), (14a)
E_{m±}(θ) = −cos(θ/2) ± i √(sin²(θ/2) + V²). (14b)
For V = 0, the eigenvalues for the subspace with (N↑, N↓) = (2, 0) are given by (E_{p+}, E_{m+}) = (e^{iθ/2}, −e^{iθ/2}), which indicates W(2,0) = 1 for E_ref = 0.
In a similar way, we obtain W(0,2) = −1 along the loop. For V > 0, Eq. (14) indicates that the imaginary parts of all eigenvalues are nonzero for 0 ≤ θ < 2π. This fact results in W[2,1] = 0 because no eigenvalue winds around the origin. The above analysis is consistent with Eq. (7).
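These analytic results can be cross-checked numerically; the sketch below (our own) builds Ĥ[2,1](θ) of Eq. (13) for s↓ = −1 and compares its spectrum with Eq. (14):

```python
import numpy as np

def H21(theta, V):
    # Eq. (13) in the basis of Eq. (11), for s_down = -1
    return np.array([[0, np.exp(1j * theta), 1j * V, 0],
                     [1, 0, 0, 1j * V],
                     [1j * V, 0, 0, 1],
                     [0, 1j * V, np.exp(-1j * theta), 0]])

theta, V = 0.7, 0.3
root = np.sqrt(np.sin(theta / 2) ** 2 + V ** 2)
expected = np.array([c + s * 1j * root
                     for c in (np.cos(theta / 2), -np.cos(theta / 2))
                     for s in (1, -1)])
numerical = np.linalg.eigvals(H21(theta, V))
assert np.allclose(np.sort_complex(numerical), np.sort_complex(expected))

# For V > 0 every eigenvalue keeps |Im E| >= V, so det[H - 0] never crosses
# the origin and the total winding W_[2,1] vanishes, consistent with Eq. (7).
for t in np.linspace(0.0, 2.0 * np.pi, 50):
    assert np.min(np.abs(np.linalg.eigvals(H21(t, V)).imag)) >= V - 1e-9
```

The second loop makes the collapse of the winding explicit: with all eigenvalues pushed off the real axis, the determinant cannot wind around E_ref = 0.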
FIG. 2. (a) [(b)] The real [imaginary] part of the eigenvalues of Ĥ[2,1] for V = 1.
III. SYMMETRY-PROTECTED EXCEPTIONAL RING WITH CHIRAL SYMMETRY
As in the case of EPs, there exist SPERs protected by the point-gap topology only when the Hamiltonian is quadratic. In order to see this, let us analyze interaction effects on SPERs in a two-dimensional parameter space in the presence of chiral symmetry. Consider the Hamiltonian (1) preserving chiral symmetry:
[Ĥ, Ξ̂]_c = 0, (15)
with the anti-unitary operator Ξ̂, which is a product of the time-reversal and charge-conjugation operators. In the rest of this section, we introduce zero-dimensional topological invariants and demonstrate the presence of SPERs which are fragile against interactions.
A. Topological invariants
For a given Fock space, a zero-dimensional Z- (Z₂-) invariant can be introduced in the non-interacting (interacting) case. This fact indicates the reduction of the topological classification of Ĥ: Z → Z₂ (for application to a gapped system, see Appendix D). In other words, there exist SPERs destroyed by interactions without breaking chiral symmetry. As in Sec. II, the key ingredient is an additional constraint imposed only on the quadratic Hamiltonian Ĥ₀ [see Eq. (18a)]. We also note that Eq. (21) is essential for the above reduction.
1. Non-interacting case
In the presence of chiral symmetry, a zero-dimensional Z-invariant can be introduced when the second-quantized Hamiltonian is quadratic.
Firstly, we note that the chiral symmetry (15) imposes the following constraint on the first-quantized Hamiltonian:
ξ h† ξ = −h, (16)
with a unitary matrix ξ satisfying 123 ξ² = 1l. Here, we have considered that h is a traceless matrix. We note that the chiral-symmetric Ĥ₀ may include terms iγ_α (n̂_α − 1/2) with γ_α ∈ R and n̂_α = ĉ†_α ĉ_α; however, this fact does not affect the following argument. Equation (16) can be seen by noting the relation
Ξ̂ Ψ̂†_α Ξ̂⁻¹ = ξ_{αβ} Ψ̂_β. (17)
Summation is assumed over repeated indices. As proved below, Eq. (16) results in the following constraint on Ĥ₀:
Ĥ₀ = −Γ̂ Ĥ₀† Γ̂, (18a)
with
Γ̂ = (−1)^{N̂₋}, (18b)
N̂₋ = Ψ̂† [(1l − ξ)/2] Ψ̂. (18c)
Equation (18a) can be proven as follows. Firstly, we rewrite Eq. (16) as
ξ h_H ξ = −h_H, (19a)
ξ h_A ξ = h_A, (19b)
with h_H := (h + h†)/2 and h_A := (h − h†)/2 the Hermitian and anti-Hermitian parts of h 124; applying these relations to Ĥ₀ yields Eq. (18a). Given Eq. (18a), the Hermitian matrix
H̃₀Γ := i (Ĥ₀ − E_ref 1l) Γ̂ (20)
can be introduced, and the zeroth Chern number N0Ch is defined as the number of its negative eigenvalues. Here, we have supposed that Ξ̂ and Γ̂ anticommute with each other,
Ξ̂ Γ̂ = −Γ̂ Ξ̂. (21)
In addition, we have supposed that the point gap opens (det[Ĥ₀ − E_ref 1l] ≠ 0) for E_ref ∈ iR. The above zeroth Chern number was previously introduced for the first-quantized Hamiltonian h 82,84,104. The anticommutation relation between Γ̂ and Ξ̂ is essential for the above topological characterization. For systems where Γ̂ and Ξ̂ commute with each other, N0Ch does not characterize the topology due to the relation 125 Ξ̂ H̃₀Γ = −H̃₀Γ Ξ̂.
In the above, we have introduced the zero-dimensional Z-invariant N0Ch for chiral-symmetric systems where Ξ̂ satisfies Eq. (21).
2. Interacting case
In the presence of interactions, the second-quantized Hamiltonian is no longer quadratic, meaning that Eq. (18a) does not hold. However, we can still introduce the following Z₂-invariant,
ν = sgn(det[Ĥ − E_ref 1l]), (22)
for E ref ∈ R due to the symmetry constraint (15). Here, sgn(x) takes 1 (−1) for x > 0 (x < 0). In the non-interacting case, the parity of N 0Ch corresponds to ν for E ref = 0;
ν = sgn(det[iΓ̂]) (−1)^{N0Ch}. (23)
The above relation can be seen as follows
(−1)^{N0Ch} = sgn(det[H̃₀Γ]) = ν sgn(det[iΓ̂]), (24)
where we have used the relation det[iĤ₀Γ̂] = det[iΓ̂] det[Ĥ₀]. Equation (23) indicates that for the Fock space with N, the point-gap topological states form the group Z in the non-interacting case, while they form its subgroup Z₂ in interacting cases. In particular, it indicates that interactions may destroy SPERs without breaking chiral symmetry if the SPERs are characterized by trivial ν and finite N0Ch.
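A minimal zero-dimensional illustration of ν (our own toy matrix, not the model analyzed below): h = [[iγ, a], [a*, −iγ]] obeys the chiral constraint (16) with ξ = diag(1, −1), and det[h − E_ref 1l] = E_ref² + γ² − |a|² is real for real E_ref, so ν distinguishes the point-gapped regimes |a| < γ and |a| > γ:

```python
import numpy as np

def nu(H, E_ref=0.0):
    # Z2 invariant nu = sgn(det[H - E_ref 1l]); the determinant is real for
    # this chiral-symmetric toy matrix and real E_ref.
    d = np.linalg.det(H - E_ref * np.eye(H.shape[0]))
    assert abs(d.imag) < 1e-10 and abs(d.real) > 1e-12  # real, point gap open
    return 1 if d.real > 0 else -1

def h_toy(gamma, a):
    # Toy chiral-symmetric matrix: xi h^dagger xi = -h with xi = diag(1, -1)
    return np.array([[1j * gamma, a], [np.conj(a), -1j * gamma]])

xi = np.diag([1.0, -1.0])
for gamma, a in [(1.0, 0.3), (1.0, 2.0)]:
    H = h_toy(gamma, a)
    assert np.allclose(xi @ H.conj().T @ xi, -H)  # chiral constraint (16)
print(nu(h_toy(1.0, 0.3)), nu(h_toy(1.0, 2.0)))  # → 1 -1
```

Because ν only records a sign, two configurations whose determinants differ by a positive factor are Z₂-equivalent even if a finer Z-invariant would distinguish them.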
B. Analysis of a toy model
The SPERs can be fragile against interactions due to the reduction of the non-Hermitian topological classification for systems with chiral symmetry. In order to demonstrate this fact, let us analyze a system described by the Hamiltonian with
Ĥ₀ = Ψ̂† h Ψ̂ + Σ_{σ=1,0,−1} iγ_σ (n̂_{aσ} − 1/2), (25)
Ĥ_int = U Σ_{l=a,b} (n̂_{l1} − 1/2)(n̂_{l−1} − 1/2), (26)
h = [[2iβ, z*], [z, −2iβ]] ⊕ [[(3/2)iβ, 2z*], [2z, −(3/2)iβ]] ⊕ [[iβ, 3z*], [3z, −iβ]], (27)
where the three 2 × 2 blocks act on the σ = 1, 0, −1 sectors, respectively.
Here, the operators Ψ̂ and n̂_{lσ} are defined as Ψ̂ = (ĉ_{a1}, ĉ_{b1} | ĉ_{a0}, ĉ_{b0} | ĉ_{a−1}, ĉ_{b−1})^T and n̂_{lσ} = ĉ†_{lσ} ĉ_{lσ}, respectively. The subscripts l = a, b and σ = 1, 0, −1 label orbital and spin degrees of freedom 126, respectively. The parameters β, U, and γ_σ are real numbers. The parameter z takes a complex value, z = x + iy with x, y ∈ R.
The Hamiltonian is chiral symmetric; Ĥ satisfies Eq. (15) with 82,127,128
Ξ̂ = Π_{σ=1,0,−1} (ĉ†_{aσ} + ĉ_{aσ})(ĉ†_{bσ} − ĉ_{bσ}) K. (28)
Here, K is the complex-conjugation operator.
As well as chiral symmetry, the above Hamiltonian preserves charge U(1) symmetry and spin U(1) symmetry 129; the Hamiltonian Ĥ satisfies
[Ĥ, N̂]_c = 0, (29)
[Ĥ, Ŝ_z]_c = 0, (30)
with N̂ = Σ_{lσ} n̂_{lσ} and Ŝ_z = Σ_{lσ} σ n̂_{lσ}. Thus, the Hamiltonian can be block-diagonalized with respect to N̂ and Ŝ_z. By Ĥ(N,Sz), we denote the second-quantized Hamiltonian for the Fock space with (N, S_z). Here, N and S_z denote the eigenvalues of N̂ and Ŝ_z, respectively. We demonstrate that Ĥ(3,0) hosts an SPER characterized by the zeroth Chern number for E_ref = 0 in the non-interacting case, which is fragile against the interaction U. The emergence of the SPER at zero energy E = 0 can be observed in Figs. 3(a) and 3(b) (see red lines). On the SPER, four eigenvalues touch in both their real and imaginary parts [see Figs. 3(c) and 3(d)]. In addition, Figs. 3(e) and 3(f) indicate that the zeroth Chern number for E_ref = 0 jumps from N0Ch = 6 to N0Ch = 4 on the SPER with increasing x. The above results indicate that the system exhibits an SPER at zero energy E = 0 which is characterized by the zeroth Chern number, a Z-invariant.
Here, the Z₂-invariant ν does not change its value on the SPER; from Eq. (23) and Fig. 3(e), we can see that the Z₂-invariant remains ν = 1 for E_ref = 0 by noting the relation det[iΓ̂(3,0)] = 1. The fact that the Z₂-invariant does not jump on the SPER indicates the fragility of the SPER against the interaction U, which is demonstrated below. Figures 4(a) and 4(b) display the real and imaginary parts of the eigenvalues against x and y for U = 0.2. In contrast to the non-interacting case, the SPER cannot be observed; the real and imaginary parts do not touch simultaneously. The absence of the SPER can also be confirmed in Figs. 4(c) and 4(d). These numerical results demonstrate that the interaction U destroys SPERs on which the zeroth Chern number jumps by an even number.
Putting the above results [Figs. 3 and 4 and Eq. (23)] together, we end up with the conclusion that the reduction Z → Z₂ results in the fragility of the SPERs against interactions.
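At the single-particle level, the ring structure can be made plausible by a short side calculation (our own observation; it ignores the γ_σ terms of Eq. (25), so it does not reproduce the many-body SPER of Ĥ(3,0) itself): each 2 × 2 block of h in Eq. (27), of the form [[icβ, kz*], [kz, −icβ]], has eigenvalues ±√(k²|z|² − c²β²) and hence a ring of exceptional points at |z| = cβ/k. For β = 0.8 the σ = 0 block gives radius 0.6, numerically close to the critical value xc ∼ 0.6 quoted above, though this coincidence should not be over-interpreted:

```python
import numpy as np

beta = 0.8
# (c, k) for the sigma = 1, 0, -1 blocks of h in Eq. (27)
blocks = {1: (2.0, 1.0), 0: (1.5, 2.0), -1: (1.0, 3.0)}
for sigma, (c, k) in blocks.items():
    radius = c * beta / k  # single-particle exceptional ring at |z| = c*beta/k
    h_block = np.array([[1j * c * beta, k * radius],
                        [k * radius, -1j * c * beta]])
    # On the ring both eigenvalues coalesce at zero (defective matrix).
    assert np.max(np.abs(np.linalg.eigvals(h_block))) < 1e-6
    print(f"sigma = {sigma:+d}: ring radius {radius:.4f}")
```

Each block is traceless with vanishing determinant on its ring, which is exactly the coalescence condition for a 2 × 2 exceptional point.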
IV. SUMMARY
We have addressed interaction effects on EPs and SPERs in a two-dimensional parameter space. Our analysis elucidates that interactions may destroy EPs and SPERs without breaking the relevant symmetry. This fragility is due to the reduction of the non-Hermitian topological classification. Specifically, we have seen that the reduction Z^{(N+P+1)/2} → Z (Z → Z₂) results in the fragility of EPs (SPERs) for systems with charge U(1) symmetry and spin-parity symmetry (chiral symmetry). The above results strongly suggest that the reduction of topological classifications results in the fragility of EPs and their variants in generic dimensions and symmetry classes.
We finish this paper with a remark on gapped systems. The topological invariants defined in Eqs. (5) and (6) [Eqs. (20) and (22)] are available for the characterization of one- [zero-] dimensional gapped systems. In particular, Eqs. (5), (6), and (7) indicate the reduction of the one-dimensional point-gap topology Z^{(N+P+1)/2} → Z for gapped systems with charge U(1) and spin-parity symmetry (see Appendix C). In addition, Eqs. (20), (22), and (23) indicate the reduction of the zero-dimensional point-gap topology Z → Z₂ for gapped systems with chiral symmetry (see Appendix D).
Ĥ[2,1] = [[e^{iθ/2} M_θ, iV τ₀], [iV τ₀, e^{−iθ/2} M_θ]]. (31)
Here, the 2 × 2 identity matrix is denoted by τ₀, and the matrix M_θ is defined as M_θ = [[0, e^{iθ/2}], [e^{−iθ/2}, 0]]. Thus, by first diagonalizing the matrix M_θ, we obtain the eigenvalues shown in Eq. (14).
123 We note that ξ² = 1l holds due to the relations u_T u_T* = ±1l and u_C u_C* = ±1l. Here u_T (u_C) denotes a unitary matrix describing time-reversal (particle-hole) symmetry. To obtain the relation ξ² = 1l, we note that u_T u_C* squares to ±1l: (u_T u_C*)(u_T u_C*) = u_T u_T* u_C u_C*, where we have used the fact that, without loss of generality, u_T and u_C can be chosen so that u_T u_C* = u_C u_T* is satisfied. For (u_T u_C*)² = 1l, we can define ξ as ξ = u_T u_C*; for (u_T u_C*)² = −1l, we can define ξ as ξ = i u_T u_C*. Therefore, ξ² = 1l holds.
124 This fact can be directly seen by noting that h_H is written as h_H = [[0, q], [q†, 0]] in the basis where ξ is written as ξ = [[1l, 0], [0, −1l]]. Here, q is a matrix. We also note that h_A is written as h_A = [[q_{A+}, 0], [0, q_{A−}]] in the same basis, where q_{A+} and q_{A−} are anti-Hermitian matrices.
125 For systems where Γ̂ and Ξ̂ commute with each other, consider an eigenstate |Ẽ⟩ of the Hermitian operator H̃₀Γ with eigenvalue Ẽ ∈ R. Then, we have H̃₀Γ Ξ̂|Ẽ⟩ = −Ẽ Ξ̂|Ẽ⟩, meaning that the state Ξ̂|Ẽ⟩ is an eigenstate with eigenvalue −Ẽ.
Appendix A: Counting the subspaces of a given Fock space
For a given set of [N, P], we count how many sets of (N↑, N↓) are allowed, which gives the number of subspaces of the Fock space with [N, P]. We count the number of sets (N↑, N↓) in the following four cases.
(i) For even N and P = 1, the following sets of (N↑, N↓) are allowed:
{(2M, 0), (2M − 2, 2), (2M − 4, 4), . . . , (0, 2M)}, (A1)
with N = 2M and M a non-negative integer. Thus, there exist N/2 + 1 sets of (N↑, N↓) for the given [N, P] = [2M, 1].
The finite value of W_{[N,P]} in the non-interacting case indicates the robustness of EPs against interactions. Figure 6(b) displays a color map of det Ĥ[2,1] for V = 1. In this figure, we can see that W_{[N,P]} takes one, computed along a path winding around the singular point (denoted by a white arrow) in the counterclockwise direction. Correspondingly, the EP emerges even in the interacting case. Figure 7 displays the eigenvalues of Ĥ[2,1] for V = 1 and s↓ = 1. The above results indicate that EPs characterized by a finite value of W_{[N,P]} are robust against interactions.
Computing the topological invariants [Eqs. (5) and (6)], we analyze the one-dimensional topology of a gapped system, which justifies the reduction Z^{(N+P+1)/2} → Z of the one-dimensional point-gap topology with charge U(1) symmetry and spin-parity symmetry. In the following, the topology in a one-dimensional parameter space is mainly analyzed, although a similar analysis can be carried out for the topology in one spatial dimension, as briefly explained at the end of this section.
Consider the Hamiltonian (1) specified by Ψ̂ = (ĉ_{a↑}, ĉ_{b↑}, ĉ_{a↓}, ĉ_{b↓})^T,
h(θ) = diag(e^{iθ}, 0, e^{−iθ}, 0), (C1)
Ĥ_int = iV [Ŝ⁺_a Ŝ⁺_b + Ŝ⁻_a Ŝ⁻_b]. (C2)
Here, diag(. . .) denotes a diagonal matrix whose elements are specified by the numbers enclosed in the parentheses. The coefficient V is a real number. The one-dimensional parameter space is described by θ (0 ≤ θ < 2π). The above Hamiltonian preserves charge U(1) symmetry and spin-parity symmetry [see Eqs. (2a) and (2b)]. We also note that Ĥ commutes with n̂_{bσ} = ĉ†_{bσ} ĉ_{bσ} for σ = ↑, ↓; thus, we suppose that a fermion occupies orbital b.
For the Fock space with [N, P] = [2, 1], the Hamiltonian is written as
Ĥ[2,1] = [[e^{iθ}, iV], [iV, e^{−iθ}]], (C3)
with the basis
(ĉ†_{a↑} ĉ†_{b↑} |0⟩, ĉ†_{a↓} ĉ†_{b↓} |0⟩). (C4)
For V = 0, the above representation indicates that the Fock space is decomposed into subspaces with (N, S_z) = (2, 1) and (2, −1), and that the winding numbers for these subspaces take W(2,1) = 1 and W(2,−1) = −1 at E_ref = 0.
The eigenvalues of Ĥ[2,1] read
E_±(θ) = cos θ ± i √(sin² θ + V²). (C5)
This result elucidates that the loop structure is destroyed by interactions mixing the subspaces with (N, S_z) = (2, 1) and (2, −1); |Im E_±(θ)| > 0 holds for arbitrary θ. This fact is explicitly presented in Fig. 8(c).
The above results demonstrate the reduction of the one-dimensional point-gap topology for gapped systems: Z² → Z for [N, P] = [2, 1]. Namely, in the non-interacting case, the second-quantized Hamiltonian Ĥ₀ possesses non-trivial topology characterized by W(2,1) = 1 and W(2,−1) = −1 for E_ref = 0. However, the non-trivial topology is not maintained in the presence of interactions (W[2,1] = 0), which is reflected in the fragility of the loop structure against the interactions [see Fig. 8].
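Eq. (C5) and the θ-independence of det Ĥ[2,1] (which forces W[2,1] = 0 for any V) can be verified directly; a minimal sketch of ours:

```python
import numpy as np

def HC3(theta, V):
    # Eq. (C3) in the basis of Eq. (C4)
    return np.array([[np.exp(1j * theta), 1j * V],
                     [1j * V, np.exp(-1j * theta)]])

theta, V = 1.1, 0.4
root = np.sqrt(np.sin(theta) ** 2 + V ** 2)
expected = np.array([np.cos(theta) + 1j * root, np.cos(theta) - 1j * root])
assert np.allclose(np.sort_complex(np.linalg.eigvals(HC3(theta, V))),
                   np.sort_complex(expected))  # matches Eq. (C5)

# det[H - 0] = 1 + V**2 is independent of theta, so the total winding
# W_[2,1] vanishes for any V, while for V = 0 the diagonal entries
# e^{+i theta} and e^{-i theta} carry W_(2,1) = +1 and W_(2,-1) = -1.
for t in np.linspace(0.0, 2.0 * np.pi, 40):
    assert np.isclose(np.linalg.det(HC3(t, V)), 1.0 + V ** 2)
```

The constant determinant makes the cancellation of the two subspace windings in Eq. (7) explicit for this model.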
We finish this section with two remarks. The previous work 114 has also addressed the reduction phenomena. We note, however, that Ref. 114 compared the topology of the first-quantized Hamiltonian h and that of the second-quantized Hamiltonian Ĥ. In contrast, the analysis provided in this section compares the topology of the non-interacting second-quantized Hamiltonian Ĥ₀ with that of the interacting second-quantized Hamiltonian Ĥ, which clearly elucidates that the Z^{(N+P+1)/2} group formed by the point-gap topological states in non-interacting cases reduces to its subgroup Z due to the interactions [see Eq. (7)].
We also note that a similar argument applies to the point-gap topology in one spatial dimension. In Ref. 114, the point-gap topology in one spatial dimension is analyzed for an interacting non-Hermitian chain with charge U(1) symmetry and spin-parity symmetry [see Eq. (9) of Ref. 114].
We now analyze the zero-dimensional topology of a gapped system, which justifies the reduction Z → Z₂ for the zero-dimensional point-gap topology with chiral symmetry. Although the following results are essentially the same as the ones in Sec. III B, we discuss the details by focusing on a zero-dimensional system with a point gap.
Consider the Hamiltonian Ĥ specified by Eqs. (25), (26), and (27) for given real parameters x and y = 0 (i.e., z = x, 0 ≤ x ≤ 1). The Hamiltonian Ĥ preserves the chiral symmetry (15) with Ξ̂ defined in Eq. (28). Thus, in the absence of interactions, the topology of Ĥ is characterized by the zeroth Chern number N0Ch, a Z-invariant [see Sec. III A 1]. In the presence of interactions, the topology of Ĥ is characterized by the Z₂-invariant ν.
Let us focus on the Fock space with (N, S_z) = (3, 0).
FIG. 9. Phase diagram of the Hamiltonian Ĥ(3,0). In the absence of interactions, the zeroth Chern number takes N0Ch = 4 (6) at E_ref = 0 for 0 ≤ x < xc (xc < x ≤ 1) with xc ∼ 0.6. In the presence of interactions, the Z₂-invariant takes ν = 1 for 0 ≤ x ≤ 1. Dashed arrows illustrate a path parameterized by λ.
In the presence of interactions U, the above point-gap closing does not occur. Therefore, one can identify the topology of N0Ch = 6 with that of N0Ch = 4 in the presence of interactions. Correspondingly, the topology is characterized by ν, which takes ν = 1 in the entire region.
In the following, we numerically demonstrate that interactions allow a smooth deformation of the Ĥ characterized by N0Ch = 6 into the Ĥ characterized by N0Ch = 4 while keeping the point gap and chiral symmetry. Figures 10(a) and 10(b) display the spectral flow for U = 0 and U = 0.2, respectively. As shown in Fig. 10(a), the point gap closes at x = xc ∼ 0.6 in the non-interacting case [see also Fig. 11]. In contrast, the point gap remains open in the interacting case [see Fig. 10(b)], which is consistent with the phase diagram [see Fig. 9].
In a similar way, we can see that the gap remains open along the path parameterized by λ (0 ≤ λ ≤ 1), which is illustrated by dashed arrows in Fig. 9. Figure 12 indicates that interactions allow the smooth deformation of the Ĥ characterized by N0Ch = 6 into the Ĥ characterized by N0Ch = 4 while keeping the point gap and chiral symmetry.
The above results justify the reduction of the point-gap topology Z → Z₂ for zero-dimensional systems with chiral symmetry.
The previous work 104 has also addressed the reduction phenomenon. We note, however, that Ref. 104 compared the topology of the first-quantized Hamiltonian h and that of the second-quantized Hamiltonian Ĥ. In contrast, the analysis provided in this section compares the topology of the non-interacting second-quantized Hamiltonian Ĥ₀ with that of the interacting second-quantized Hamiltonian Ĥ, which clearly elucidates that the Z group formed by the point-gap topological states in non-interacting cases reduces to its subgroup Z₂ due to the interactions [see Eq. (23)].
FIG. 12. The absolute value of the eigenvalues |En| (n = 1, . . . , 8) as functions of λ, which parameterizes the path illustrated in Fig. 9. On the dashed vertical lines λ takes λ = 1/3 and 2/3, respectively. Here, λ parameterizes (x, U) as follows: [region (i)] for 0 ≤ λ < 1/3, (x, U) = (0, 0.6λ); [region (ii)] for 1/3 ≤ λ < 2/3, (x, U) = (3λ − 1, 0.2); [region (iii)] for 2/3 ≤ λ ≤ 1, (x, U) = (1, 0.6(1 − λ)).
FIG. 1. (a) [(b)] The real [imaginary] part of the eigenvalues of Ĥ[2,1] for V = 0; red dots denote EPs. (c) [(d)] The argument of det Ĥ₀(2,0) [det Ĥ₀(0,2)]. We recall that Ĥ_{[N,P]} and Ĥ_{(N↑,N↓)} denote the Hamiltonians for the Fock spaces with [N, P] and (N↑, N↓). The data are obtained for s↓ = −1.
FIG. 3. Eigenvalues of Ĥ(3,0) and its point-gap topology for U = 0. By Ĥ(3,0), we denote the second-quantized Hamiltonian Ĥ for the Fock space with (N, S_z) = (3, 0). (a) [(b)] The real [imaginary] part of the eigenvalues against x and y. The red lines denote the SPER. (c) [(d)] The real [imaginary] part of the eigenvalues for y = 0. (e) The zeroth Chern number N0Ch for y = 0. (f) Color plot of N0Ch. The vertical lines in panels (c), (d), and (e) denote the critical value xc ∼ 0.6 where the band touching occurs. Data in panel (e) correspond to the zeroth Chern number on the black line in panel (f). These data are obtained for (β, γ1, γ0, γ−1) = (0.8, −3, −2.945, 1).
FIG. 4. Eigenvalues of Ĥ(3,0) for U = 0.2. (a) [(b)] The real [imaginary] parts of the eigenvalues against x and y. (c) [(d)] The real [imaginary] parts of the eigenvalues for y = 0. Panel (c) displays the data multiplied by 5 [i.e., 5 Re En, n = 1, 2, . . . , 8]. These data are obtained for (β, γ1, γ0, γ−1) = (0.8, −3, −2.945, 1).
(ii) For even N and P = −1, the following sets of (N↑, N↓) are allowed:
{(2M − 1, 1), (2M − 3, 3), (2M − 5, 5), . . . , (1, 2M − 1)}, (A2)
with N = 2M and M a non-negative integer. Thus, there exist N/2 sets of (N↑, N↓) for the given [N, P] = [2M, −1].
(iii) For odd N and P = 1, the following sets of (N↑, N↓) are allowed:
{(2M + 1, 0), (2M − 1, 2), (2M − 3, 4), . . . , (1, 2M)}, (A3)
with N = 2M + 1 and M a non-negative integer. Thus, there exist (N + 1)/2 sets of (N↑, N↓) for the given [N, P] = [2M + 1, 1].
(iv) For odd N and P = −1, the following sets of (N↑, N↓) are allowed:
{(2M, 1), (2M − 2, 3), (2M − 4, 5), . . . , (0, 2M + 1)}, (A4)
with N = 2M + 1 and M a non-negative integer. Thus, there exist (N + 1)/2 sets of (N↑, N↓) for the given [N, P] = [2M + 1, −1].
Taking the above results into account, we end up with the fact that there exist (N + P + 1)/2 subspaces (N↑, N↓) for the given Fock space with [N, P]. Here, P is replaced by 0 for odd N.
Appendix B: Robustness of EPs for s↓ = 1
EPs characterized by a finite value of the winding number W_{[N,P]} are robust against interactions. In order to demonstrate this fact, let us analyze the Hamiltonian discussed in Sec. II B for s↓ = 1 [i.e., the Hamiltonian (1) specified by Eqs. (8) and (9)].
FIG. 5. (a) [(b)] The real [imaginary] part of the eigenvalues of Ĥ[2,1] for V = 0 and s↓ = 1. The red dots denote EPs. (c) [(d)] The argument of det Ĥ₀(2,0) [det Ĥ₀(0,2)]. The data are plotted in a similar way to Fig. 1.
As in the case of s↓ = −1 (see Sec. II B), we focus on the Fock space with [N, P] = [2, 1]. Figures 5(a) and 5(b) display the eigenvalues of Ĥ[2,1] for V = 0 and s↓ = 1. For V = 0, the Fock space can be divided into subspaces with (N↑, N↓) = (2, 0) and (0, 2). For both subspaces, EPs emerge which are characterized by the winding numbers (W(2,0), W(0,2)) = (1, 1) for E_ref = 0 [see Figs. 5(c) and 5(d)]. Thus, recalling Eq. (7), we have W_{[N,P]} = 2 for E_ref = 0. This result is consistent with Fig. 6(a); W_{[N,P]} takes two, computed along a path winding around the singular point (denoted by a white arrow) in the counterclockwise direction.
FIG. 6. (a) [(b)] The argument of det Ĥ[2,1] for V = 0 [V = 1] and s↓ = 1. The white arrows indicate the points where EPs emerge.
FIG. 7. (a) [(b)] The real [imaginary] part of the eigenvalues of Ĥ[2,1] for V = 1 and s↓ = 1. Red dots denote the EPs at zero energy E = 0. The data are plotted in a similar way to Fig. 2.
EPs characterized by a finite value of W_{[N,P]} are robust against interactions.
Appendix C: Reduction of one-dimensional topology for gapped systems
FIG. 8. Spectral flow of the Hamiltonian Ĥ for the Fock space with [N, P] = [2, 1]. (a) [(b)] Spectral flow for the subspaces with (N, Sz) = (2, 1) [(2, −1)] at V = 0. (c) Spectral flow for the Fock space with [N, P] = [2, 1] at V = 1.

The subspaces take W(2,1) = 1 and W(2,−1) = −1 at Eref = 0, respectively [see Figs. 8(a) and 8(b)]. Therefore, the loop structures observed in Figs. 8(a) and 8(b) are robust against perturbations at the non-interacting level because they are protected by the non-trivial point-gap topology. However, Eq. (7) indicates that the above loop structure is no longer protected by the non-trivial topology in the presence of interactions: because of the winding numbers W(2,1) = 1 and W(2,−1) = −1 for Eref = 0, Eq. (7) results in W[2,1] = 0. Indeed, the interaction V destroys the loop structure observed in Figs. 8(a) and 8(b). In order to see this, we compute the eigenvalues, which are written as
(9) of Ref. 114]. Because the Fock space with [N, P] = [3, −1] is divided into subspaces with (N, Sz) = (3, 3/2) and (3, −1/2) in the non-interacting case, the topology of the non-interacting Hamiltonian is characterized by the two winding numbers, taking W(3,3/2) = 1 and W(3,−1/2) = −1 at Eref = 0 for the Fock space with [N, P] = [3, −1]. Thus, the loop structure is observed for the non-interacting non-Hermitian chain [see Fig. 3(b) of Ref. 114]. This loop structure is fragile against interactions because Eq. (7) results in the vanishing winding number W[N,P] = 0. Indeed, interactions destroy the loop structure while keeping the point-gap for Eref = 0 and the relevant symmetry [see Fig. 3(d) of Ref. 114]. Correspondingly, the non-Hermitian skin effect observed at the non-interacting level is also destroyed by the interactions [see Figs. 3(c) and 3(e) of Ref. 114].

Appendix D: Reduction of zero-dimensional topology for a gapped system
Figure 9 displays topological invariants against x and U. In the non-interacting case, the point-gap at Eref = 0 closes (det Ĥ(3,0) = 0) at the point (x, U) = (xc, 0) with xc ∼ 0.6, which separates two regions of distinct point-gap topology with N0Ch = 6 and N0Ch = 4 for Eref = 0.

FIG. 9. (a) [(b)] Spectral flow of Ĥ(3,0) for U = 0 [0.2]. With increasing x from 0 to 1, the eigenvalues flow along blue curves as indicated by black arrows. In panel (a) [(b)], eigenvalues for x = 0, 0.6, and 1 [x = 0, 0.8725, and 1] are denoted by open circles, crosses, and closed triangles, respectively. In panel (a), the point-gap at Eref = 0 closes as denoted by the green arrow. The flow in panel (a) is symmetric about the real and imaginary axes due to Eqs. (15) and (18a). The flow in panel (b) is symmetric about the real axis due to Eq. (15). These data are obtained for (β, γ1, γ0, γ−1)

The absolute value of eigenvalues |En| (n = 1, . . . , 8) as functions of x for U = 0. At x = xc ∼ 0.6, the eigenvalues become zero.
FIG. 12. The absolute value of eigenvalues |En| (n = 1, . . . , 8) as functions of λ, which parameterizes the path illustrated in Fig. 9. On the dashed vertical lines λ takes λ = 1/3 and 2/3, respectively. Here, λ parameterizes (x, U) as follows: [region (i)] for 0 ≤ λ < 1/3, it parameterizes as (x, U) = (0, 0.6λ); [region (ii)] for 1/3 ≤ λ < 2/3, it parameterizes as (x, U) = (3λ − 1, 0.2); [region (iii)] for 2/3 ≤ λ ≤ 1, it parameterizes as (x, U) = (1, 0.2 − 0.6λ).
where we have decomposed h into the Hermitian part hH and the anti-Hermitian part hA. Equation (19a) indicates that applying Ĥ0H = Ψ† hH Ψ increases/decreases the number N− by one [124], where N− denotes the eigenvalues of N̂−. Thus, Ĥ0H anti-commutes with Γ̂ = (−1)^N̂−. Equation (19b) indicates that Ĥ0A = Ψ† hA Ψ commutes with Γ̂. Noting the relation Ĥ0 = Ĥ0H + Ĥ0A, we obtain Eq. (18a).

Equation (18a) allows us to define the zero-th Chern number N0Ch, which is the number of eigenstates with negative eigenvalues of
Thus, tuning parameters does not change the zero-th Chern number N0Ch when Γ̂ and Ξ̂ commute with each other. In contrast, for systems where Γ̂ and Ξ̂ anti-commute with each other [see Eq. (21)], the zero-th Chern number N0Ch can change its value; in this case, the relation ΞĤ0Γ = Ĥ0ΓΞ holds.
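Operationally, the zero-th Chern number is just a count of negative eigenvalues of a gapped Hermitian matrix, which is why it cannot change while the gap at zero stays open. A minimal sketch with a toy matrix of our own choosing (the precise matrix entering the definition is specified by Eq. (18a) of the main text, not reproduced here):

```python
import numpy as np

def zeroth_chern(h, tol=1e-12):
    # Count eigenstates of a gapped Hermitian matrix h with negative
    # eigenvalues; this integer is invariant under any deformation of h
    # that keeps the gap at zero energy open.
    evals = np.linalg.eigvalsh(h)
    if np.any(np.abs(evals) <= tol):
        raise ValueError("matrix is not gapped at zero")
    return int(np.sum(evals < 0.0))

h = np.diag([-2.0, -0.5, 1.0, 3.0])              # toy gapped Hermitian matrix
N0 = zeroth_chern(h)                             # two negative eigenvalues
N0_deformed = zeroth_chern(h + 0.1 * np.eye(4))  # gap kept open: unchanged
```

The small shift leaves every eigenvalue on its side of zero, so the count is unchanged; only a deformation closing the gap (an eigenvalue crossing zero) can alter it.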
126 To be strict, the subscript σ = 1, 0, −1 labels pseudo-spin because fermions should have odd half-integer spin (1/2, 3/2, 5/2, . . .).
127 V. Gurarie, Phys. Rev. B 83, 085426 (2011).
128 S. R. Manmana, A. M. Essin, R. M. Noack, and V. Gurarie, Phys. Rev. B 86, 205119 (2012).
129 We have imposed additional constraints on Ĥ. However, these constraints do not affect the discussion provided in Sec. III A because the block-diagonalization is not affected by the presence/absence of interactions, in contrast to the case of Sec. II A. Namely, the argument in Sec. III A is directly available by replacing Ĥ with the block-diagonalized Hamiltonian with N̂ and Ŝz, although an explicit analysis of a toy model without additional symmetry constraints is left as future work.
Appendix A: The number of the subspaces for the Fock space with [N, P]
R. Ma, B. Saxberg, C. Owens, N. Leung, Y. Lu, J. Simon, and D. I. Schuster, Nature 566, 51 (2019).
120 We have imposed the additional constraint on Ĥ. However, this does not affect the discussion provided in Sec. II A because the argument in Sec. II A is directly available by replacing Ĥ with the block-diagonalized Hamiltonian with ∑σ n̂cσ.
121 Specifically, the winding numbers take W(1,1) = W[2,−1] = 0 for V = 0.
122 This 4 × 4 matrix is diagonalized as follows. The Hamiltonian can be rewritten as
ACKNOWLEDGEMENTSThe authors thank Hosho Katsura, Norio Kawakami, and Takuma Isobe for fruitful discussions. A part of the computation has been done using the facilities of the Supercomputer Center, the Institute for Solid State Physics, the University of Tokyo. This work is supported by JSPS KAKENHI Grants No. JP17H06138, No. JP21K13850 and No. JP22H05247. This work is also supported by JST CREST, Grant No. JPMJCR19T1.
D. C. Tsui, H. L. Stormer, and A. C. Gossard, Phys. Rev. Lett. 48, 1559 (1982).
R. B. Laughlin, Phys. Rev. Lett. 50, 1395 (1983).
E. Tang, J.-W. Mei, and X.-G. Wen, Phys. Rev. Lett. 106, 236802 (2011).
K. Sun, Z. Gu, H. Katsura, and S. Das Sarma, Phys. Rev. Lett. 106, 236803 (2011).
T. Neupert, L. Santos, C. Chamon, and C. Mudry, Phys. Rev. Lett. 106, 236804 (2011).
N. Regnault and B. A. Bernevig, Phys. Rev. X 1, 021014 (2011).
D. N. Sheng, Z.-C. Gu, K. Sun, and L. Sheng, Nature Communications 2, 389 (2011).
E. J. Bergholtz and Z. Liu, International Journal of Modern Physics B 27, 1330017 (2013).
D. Pesin and L. Balents, Nature Physics 6, 376 (2010).
A. P. Schnyder, S. Ryu, A. Furusaki, and A. W. W. Ludwig, Phys. Rev. B 78, 195125 (2008).
A. Kitaev, AIP Conf. Proc. 1134, 22 (2009).
S. Ryu, A. P. Schnyder, A. Furusaki, and A. W. W. Ludwig, New J. Phys. 12, 065010 (2010).
L. Fidkowski and A. Kitaev, Phys. Rev. B 81, 134509 (2010).
F. Pollmann, A. M. Turner, E. Berg, and M. Oshikawa, Phys. Rev. B 81, 064439 (2010).
A. M. Turner, F. Pollmann, and E. Berg, Phys. Rev. B 83, 075102 (2011).
L. Fidkowski and A. Kitaev, Phys. Rev. B 83, 075103 (2011).
X. Chen, Z.-C. Gu, and X.-G. Wen, Phys. Rev. B 83, 035107 (2011).
X. Chen, Z.-C. Gu, and X.-G. Wen, Phys. Rev. B 84, 235128 (2011).
Y.-M. Lu and A. Vishwanath, Phys. Rev. B 86, 125119 (2012).
H. Yao and S. Ryu, Phys. Rev. B 88, 064507 (2013).
S. Ryu and S.-C. Zhang, Phys. Rev. B 85, 245132 (2012).
X.-L. Qi, New J. Phys. 15, 065002 (2013).
M. Levin and A. Stern, Phys. Rev. B 86, 115131 (2012).
C.-T. Hsieh, T. Morimoto, and S. Ryu, Phys. Rev. B 90, 245111 (2014).
H. Isobe and L. Fu, Phys. Rev. B 92, 081304 (2015).
X. Chen, Z.-C. Gu, Z.-X. Liu, and X.-G. Wen, Phys. Rev. B 87, 155114 (2013).
Z.-C. Gu and X.-G. Wen, Phys. Rev. B 90, 115141 (2014).
A. Kapustin, arXiv:1403.1467 (2014).
A. Kapustin, arXiv:1404.6659 (2014).
A. Kapustin, R. Thorngren, A. Turzillo, and Z. Wang, Journal of High Energy Physics 2015, 1 (2015).
L. Fidkowski, X. Chen, and A. Vishwanath, Phys. Rev. X 3, 041016 (2013).
C. Wang, A. C. Potter, and T. Senthil, Science 343, 629 (2014).
M. A. Metlitski, L. Fidkowski, X. Chen, and A. Vishwanath, arXiv:1406.3032 (2014).
C. Wang and T. Senthil, Phys. Rev. B 89, 195124 (2014).
Y.-Z. You and C. Xu, Phys. Rev. B 90, 245120 (2014).
T. Morimoto, A. Furusaki, and C. Mudry, Phys. Rev. B 92, 125104 (2015).
T. Yoshida, A. Daido, Y. Yanase, and N. Kawakami, Phys. Rev. Lett. 118, 147001 (2017).
C.-M. Jian and C. Xu, Phys. Rev. X 8, 041030 (2018).
Y. C. Hu and T. L. Hughes, Phys. Rev. B 84, 153101 (2011).
K. Esaki, M. Sato, K. Hasebe, and M. Kohmoto, Phys. Rev. B 84, 205128 (2011).
M. Sato, K. Hasebe, K. Esaki, and M. Kohmoto, Progress of Theoretical Physics 127, 937 (2012).
S. Diehl, E. Rico, M. A. Baranov, and P. Zoller, Nature Physics 7, 971 (2011).
C.-E. Bardyn, M. A. Baranov, C. V. Kraus, E. Rico, A. İmamoglu, P. Zoller, and S. Diehl, New Journal of Physics 15, 085001 (2013).
J. C. Budich, P. Zoller, and S. Diehl, Phys. Rev. A 91, 042117 (2015).
T. E. Lee, Phys. Rev. Lett. 116, 133903 (2016).
Z. Gong, S. Higashikawa, and M. Ueda, Phys. Rev. Lett. 118, 200401 (2017).
S. Lieu, Phys. Rev. B 97, 045106 (2018).
Z. Gong, Y. Ashida, K. Kawabata, K. Takasan, S. Higashikawa, and M. Ueda, Phys. Rev. X 8, 031079 (2018).
K. Kawabata, K. Shiozaki, M. Ueda, and M. Sato, Phys. Rev. X 9, 041015 (2019).
S. Lieu, M. McGinley, and N. R. Cooper, Phys. Rev. Lett. 124, 040401 (2020).
Z. Yang, C.-K. Chiu, C. Fang, and J. Hu, Phys. Rev. Lett. 124, 186402 (2020).
P.-Y. Chang, J.-S. You, X. Wen, and S. Ryu, Phys. Rev. Research 2, 033069 (2020).
A. K. Ghosh and T. Nag, Phys. Rev. B 106, L140303 (2022).
R. Arouca, J. Cayao, and A. M. Black-Schaffer, arXiv:2206.15324 (2022).
J. Cayao and A. M. Black-Schaffer, arXiv:2208.05372 (2022).
E. J. Bergholtz, J. C. Budich, and F. K. Kunst, Rev. Mod. Phys. 93, 015005 (2021).
Y. Ashida, Z. Gong, and M. Ueda, Advances in Physics 69, 249 (2020).
T. Yoshida, R. Peters, N. Kawakami, and Y. Hatsugai, Progress of Theoretical and Experimental Physics 2020, 12A109 (2020).
V. M. Martinez Alvarez, J. E. Barrios Vargas, and L. E. F. Foa Torres, Phys. Rev. B 97, 121401 (2018).
S. Yao and Z. Wang, Phys. Rev. Lett. 121, 086803 (2018).
F. K. Kunst, E. Edvardsson, J. C. Budich, and E. J. Bergholtz, Phys. Rev. Lett. 121, 026808 (2018).
C. H. Lee and R. Thomale, Phys. Rev. B 99, 201103 (2019).
J. Y. Lee, J. Ahn, H. Zhou, and A. Vishwanath, Phys. Rev. Lett. 123, 206404 (2019).
D. S. Borgnia, A. J. Kruchkov, and R.-J. Slager, Phys. Rev. Lett. 124, 056802 (2020).
K. Zhang, Z. Yang, and C. Fang, Phys. Rev. Lett. 125, 126402 (2020).
N. Okuma, K. Kawabata, K. Shiozaki, and M. Sato, Phys. Rev. Lett. 124, 086801 (2020).
T. Hofmann, T. Helbig, F. Schindler, N. Salgo, M. Brzezińska, M. Greiter, T. Kiessling, D. Wolf, A. Vollhardt, A. Kabaši, C. H. Lee, A. Bilušić, R. Thomale, and T. Neupert, Phys. Rev. Research 2, 023265 (2020).
T. Yoshida, T. Mizoguchi, and Y. Hatsugai, Phys. Rev. Research 2, 022062 (2020).
T. Bessho and M. Sato, Phys. Rev. Lett. 127, 196404 (2021).
K. Kawabata, K. Shiozaki, and S. Ryu, Phys. Rev. Lett. 126, 216405 (2021).
N. Okuma and M. Sato, Annual Review of Condensed Matter Physics 14 (2023).
H. Shen, B. Zhen, and L. Fu, Phys. Rev. Lett. 120, 146402 (2018).
Y. Xu, S.-T. Wang, and L.-M. Duan, Phys. Rev. Lett. 118, 045701 (2017).
A. U. Hassan, B. Zhen, M. Soljačić, M. Khajavikhan, and D. N. Christodoulides, Phys. Rev. Lett. 118, 093002 (2017).
H. Zhou, C. Peng, Y. Yoon, C. W. Hsu, K. A. Nelson, L. Fu, J. D. Joannopoulos, M. Soljačić, and B. Zhen, Science 359, 1009 (2018).
V. Kozii and L. Fu, arXiv:1708.05841 (2017).
T. Yoshida, R. Peters, and N. Kawakami, Phys. Rev. B 98, 035141 (2018).
C. C. Wojcik, X.-Q. Sun, T. Bzdušek, and S. Fan, Phys. Rev. B 101, 205417 (2020).
Z. Yang, A. P. Schnyder, J. Hu, and C.-K. Chiu, Phys. Rev. Lett. 126, 086401 (2021).
J. C. Budich, J. Carlström, F. K. Kunst, and E. J. Bergholtz, Phys. Rev. B 99, 041406 (2019).
R. Okugawa and T. Yokoyama, Phys. Rev. B 99, 041202 (2019).
T. Yoshida, R. Peters, N. Kawakami, and Y. Hatsugai, Phys. Rev. B 99, 121101 (2019).
H. Zhou, J. Y. Lee, S. Liu, and B. Zhen, Optica 6, 190 (2019).
K. Kawabata, T. Bessho, and M. Sato, Phys. Rev. Lett. 123, 066405 (2019).
P. Delplace, T. Yoshida, and Y. Hatsugai, Phys. Rev. Lett. 127, 186602 (2021).
I. Mandal and E. J. Bergholtz, Phys. Rev. Lett. 127, 186601 (2021).
P. San-Jose, J. Cayao, E. Prada, and R. Aguado, Scientific Reports 6, 21427 (2016).
T. Ozawa, H. M. Price, A. Amo, N. Goldman, M. Hafezi, L. Lu, M. C. Rechtsman, D. Schuster, J. Simon, O. Zilberberg, and I. Carusotto, Rev. Mod. Phys. 91, 015006 (2019).
T. Yoshida and Y. Hatsugai, Phys. Rev. B 100, 054109 (2019).
Y. Li, Y.-G. Peng, L. Han, M.-A. Miri, W. Li, M. Xiao, X.-F. Zhu, J. Zhao, A. Alù, S. Fan, and C.-W. Qiu, Science 364, 170 (2019).
M. Partanen, J. Goetz, K. Y. Tan, K. Kohvakka, V. Sevriuk, R. E. Lake, R. Kokkoniemi, J. Ikonen, D. Hazra, A. Mäkinen, E. Hyyppä, L. Grönberg, V. Vesterinen, M. Silveri, and M. Möttönen, Phys. Rev. B 100, 134505 (2019).
M. Naghiloo, M. Abbasi, Y. N. Joglekar, and K. W. Murch, Nature Physics 15, 1232 (2019).
T. Yoshida, T. Mizoguchi, and Y. Hatsugai, Scientific Reports 12, 560 (2022).
D. J. Luitz and F. Piazza, Phys. Rev. Research 1, 033051 (2019).
T. Yoshida, K. Kudo, and Y. Hatsugai, Scientific Reports 9, 16895 (2019).
T. Yoshida, K. Kudo, H. Katsura, and Y. Hatsugai, Phys. Rev. Research 2, 033428 (2020).
C.-X. Guo, X.-R. Wang, C. Wang, and S.-P. Kou, Phys. Rev. B 101, 144439 (2020).
N. Matsumoto, K. Kawabata, Y. Ashida, S. Furukawa, and M. Ueda, Phys. Rev. Lett. 125, 260601 (2020).
D.-W. Zhang, Y.-L. Chen, G.-Q. Zhang, L.-J. Lang, Z. Li, and S.-L. Zhu, Phys. Rev. B 101, 235150 (2020).
T. Liu, J. J. He, T. Yoshida, Z.-L. Xiang, and F. Nori, Phys. Rev. B 102, 235151 (2020).
Z. Xu and S. Chen, Phys. Rev. B 102, 035153 (2020).
L. Pan, X. Wang, X. Cui, and S. Chen, Phys. Rev. A 102, 023306 (2020).
S. Mu, C. H. Lee, L. Li, and J. Gong, Phys. Rev. B 102, 081115 (2020).
T. Yoshida and Y. Hatsugai, Phys. Rev. B 104, 075106 (2021).
K. Yang, S. C. Morampudi, and E. J. Bergholtz, Phys. Rev. Lett. 126, 077201 (2021).
R. Shen and C. H. Lee, Communications Physics 5, 238 (2022).
C. H. Lee, Phys. Rev. B 104, 195102 (2021).
S.-B. Zhang, M. M. Denner, T. Bzdušek, M. A. Sentef, and T. Neupert, Phys. Rev. B 106, L121102 (2022).
K. Kawabata, K. Shiozaki, and S. Ryu, Phys. Rev. B 105, 165137 (2022).
R. Schäfer, J. C. Budich, and D. J. Luitz, Phys. Rev. Res. 4, 033181 (2022).
T. Orito and K.-I. Imura, Phys. Rev. B 105, 024303 (2022).
S. Tsubota, H. Yang, Y. Akagi, and H. Katsura, Phys. Rev. B 105, L201113 (2022).
W. N. Faugno and T. Ozawa, Phys. Rev. Lett. 129, 180401 (2022).
T. Yoshida and Y. Hatsugai, Phys. Rev. B 106, 205147 (2022).
F. Qin, R. Shen, and C. H. Lee, Phys. Rev. A 107, L010202 (2023).
T. Tomita, S. Nakajima, I. Danshita, Y. Takasu, and Y. Takahashi, Science Advances 3, e1701513 (2017).
T. Tomita, S. Nakajima, Y. Takasu, and Y. Takahashi, Phys. Rev. A 99, 031601 (2019).
Y. Takasu, T. Yagami, Y. Ashida, R. Hamazaki, Y. Kuno, and Y. Takahashi, Progress of Theoretical and Experimental Physics 2020, 12A110 (2020).
ON THE OPTIMAL CONTROL OF RATE-INDEPENDENT SOFT CRAWLERS
19 Feb 2020
Giovanni Colombo
Paolo Gidoni
Existence of optimal solutions and necessary optimality conditions for a controlled version of Moreau's sweeping process are derived. The control is a measurable ingredient of the dynamics and the constraint set is a polyhedron. The novelty consists in considering time periodic trajectories, adding the requirement that the control have zero average, and considering an integral functional that lacks weak semicontinuity. A model coming from the locomotion of a soft-robotic crawler, that motivated our setting, is analysed in detail. In obtaining necessary conditions, an improvement of the method of discrete approximations is used.
Introduction
Moreau's sweeping process comprises a class of evolution inclusions that model the displacement of a point x(t) dragged in a normal direction by a moving (convex or mildly non-convex) closed set: see, e.g., the survey paper [15] and references therein. If the point is also subject to an independent dynamics, then the evolution can be seen as a constrained motion, in which the reaction of the constraint is active. More precisely, the problem is stated as

(1.1)   ẋ(t) ∈ −N_{C(t)}(x(t)) + g(t, x(t))  a.e. in [0, T],   x(0) = x₀ ∈ C(0).
Here N_C(x) denotes the normal cone (of convex analysis if C is convex) to C at x ∈ C. The case where C is independent of time is particularly meaningful, because it is well known that the problem is equivalent to the so-called projected differential equation

(1.2)   ẋ(t) = π_{T_C(x)}(g(t, x(t))),   x(0) = x₀ ∈ C,
where π_{T_C(x)}(y) denotes the projection of the vector y onto the tangent cone to C at x (see [4, Sec. 10.1]). The equivalence of (1.1) and (1.2), when C(t) ≡ C, both explains the role of the constraint in the dynamics and its intrinsic nonsmoothness (even discontinuity). Indeed, only one normal vector can be taken in (1.1), namely the smallest one that cancels the (external) normal component of g, in order to keep the trajectory inside C. This latter fact follows from the emptiness of the normal, or tangent, cone to C at points outside C. Moreover, observe that the normal cone mapping x ↦ N_C(x) is discontinuous (actually, it has only closed graph) for two reasons: first, because C may be nonsmooth, and, second, because in the interior of C, if any, N_C(x) = {0}, while at boundary points N_C(x) contains at least a half line. A similar type of discontinuity appears in the right hand side of (1.2). However, the monotone character of the normal cone mapping allows one to prove forward-in-time existence (and uniqueness if the ODE ẋ = g(t, x) allows so) of solutions to the Cauchy problem (1.1) under the usual conditions imposed on g.
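The constrained dynamics (1.1) with a fixed C is the basis of Moreau's classical catching-up scheme: an explicit Euler step for g followed by projection back onto C. The following one-dimensional sketch (our own toy setting, C = [0, 1] with a constant drive, not taken from the paper) shows the trajectory moving freely inside C and sticking at the boundary, where the normal-cone reaction cancels the outward component of the drive:

```python
def proj(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the interval C = [lo, hi].
    return min(max(x, lo), hi)

def catching_up(x0, g, T=1.0, steps=1000):
    # Catching-up scheme for x'(t) in -N_C(x(t)) + g(t, x(t)), C fixed:
    # one explicit Euler step for g, then project back onto C.
    h = T / steps
    x, traj = x0, [x0]
    for k in range(steps):
        x = proj(x + h * g(k * h, x))
        traj.append(x)
    return traj

# Constant drive pushing past the boundary x = 1: in the interior the normal
# cone is {0} and the point moves freely; at the boundary the reaction cancels
# the outward drive and the state sticks at 1.
traj = catching_up(0.0, lambda t, x: 2.0)
```

The discrete trajectory never leaves C and, once the boundary is reached (here at t = 1/2), remains there, mirroring the selection of the smallest normal vector described above.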
The simplest control problems involving Moreau's sweeping process occur when a control parameter u(⋅) appears within g: the dynamics then becomes

(1.3)   ẋ(t) ∈ −N_{C(t)}(x(t)) + g(t, x(t), u(t)),   u(t) ∈ U a.e.,

U being an assigned compact set. This paper is devoted to deriving necessary optimality conditions for a particular Bolza problem involving (1.3) together with further requirements on both x and u. This problem is motivated by maximizing the displacement in the locomotion of a bio-mimetic soft robotic crawler, whose mathematical model is presented in detail in Section 3. The robot can be described as a chain of N links, each formed by a spring coupled in series with an actuator, whose length is controlled. The movement is one dimensional and the evolution is supposed to be quasi-static, i.e., the mechanical system is modelled by a force balance law and therefore obeys a first order differential inclusion. After quite a few transformations, which are essentially known in the theory of rate-independent evolutions, one arrives at the controlled dynamics (1.3), where the space dimension of the problem is the number of links. Since one wants to find an optimal gait, namely a periodic actuation to be repeated an arbitrarily large number of times, the fixed initial condition on the trajectory is replaced by a T-periodicity condition, T being fixed a priori. Moreover, in the final model the controls turn out to be the derivatives of periodic Lipschitz functions, so that the zero mean condition

(1.4)   ∫₀ᵀ u(t) dt = 0

must be imposed on feasible controls. Finally, C turns out to be a polyhedron. The functional to be maximized is an integral functional J involving two terms, the reaction of the constraint and the cost of the control (that of course appears with a minus sign):
J(x, u) := ∫₀ᵀ [ f₁(g(t, x(t), u(t)) − ẋ(t)) − f₂(t, u(t)) ] dt.
Here f₁(⋅) is Lipschitz and positively homogeneous of degree one, and f₂(⋅, ⋅), for simplicity, is C¹ with respect to u. In our application, the first summand in the integrand of J is a function of the reaction of the constraint, measuring the displacement of the barycentre of the system of springs, while f₂ represents the cost of actuating the control. If, on one hand, it is natural to assume the convexity of f₂, on the other hand the derivation of the explicit form of f₁ for our model, presented in Section 3.5, gives a functional that is not concave with respect to ẋ and u. For instance, in the simple example presented in Section 4, the first summand of the integrand is (1/2)|ẋ − u|, so the integral functional is not weakly upper semicontinuous in W^{1,2}([0, T]; Rⁿ), cf. also Remark 3.1. Therefore, the direct method cannot be used in order to ensure the existence of optimal state-trajectory pairs.
The first contribution of the present paper is proving an existence result for the maximization of J along trajectories of a controlled sweeping process of the type (1.3) by imposing a uniform bound on the total variation of admissible controls, giving pointwise convergence of a maximizing sequence of controls. This is a strong assumption, which however seems to be justified by the observation that optimal controls are expected to be bang-bang with finitely many switchings (see Section 4), or anyway with finite total variation (see Section 4.1). Moreover, this requirement does not completely trivialize the existence argument, because in order to allow passing to the limit on J along a maximizing sequence (x ℓ , u ℓ ) one needs also the strong convergence of the sequence of derivativesẋ ℓ of the state variable. While for general differential inclusions this is not possible, the particular structure of the sweeping process allows to overcome this difficulty.
Our second contribution consists of necessary optimality conditions. The analysis of necessary conditions in this type of setting does not follow from the classical literature on state constrained optimal control problems (see, e.g., [36]), since the right hand side of the dynamics is not Lipschitz (actually, is very far from being so), with respect to the state x. There are essentially two ways to attack the problem. The first one is based on a regularization of the dynamics and goes back essentially to [9], see also [2,16]. It provides an adjoint equation in the sense of measures together with a maximum principle of Pontryagin type, as it may be expected in such problems, but is -up to now -limited by requiring the set C to be smooth. The second one, that is due to Mordukhovich and collaborators (see, e.g., [11,12] and references therein), is based on discrete approximations. This technique fits perfectly with our polyhedral setting, but provides only a weaker form of the maximum principle. In this paper we adapt to our problem the method of discrete approximations, by considering periodic trajectories and adding the control constraint (1.4). Moreover, taking inspiration from the fact that the normal vector in (1.3) cannot be chosen independently of the control, we simplify the discretization procedure by avoiding computing the normal cone to the normal cone N C (x). Furthermore, our approximation technique allows general measurable controls, not being limited to controls with bounded variation as in [12]. Finally, we deal with the nonconcavity of the integral functional without relying at all on relaxation arguments. Actually, in this case relaxation results are difficult to obtain, since the integral functional involves also the derivative of the state, not only the control variable, and furthermore periodic solutions are considered. Nevertheless, the obtained necessary conditions are very similar to those derived in the framework of [12].
The problem and the main results of the paper are stated in Section 2. The existence proof is presented in Section 5, while the proof of the theorem on necessary conditions appears in Sections 6, 7, 8, and 9. The intermediate Section 3 contains the general derivation of the model, while in Section 4 we discuss extensively the necessary conditions obtained in Theorem 2.2 in the case of a one-link crawler, and make a few technical remarks and comments.
2. Statement of the problem and main results
2.1. Notation. Let A and S be sets, with A ⊂ S. We set, for x ∈ S,
1_A(x) = 1 if x ∈ A, and 1_A(x) = 0 if x ∉ A.
The Lebesgue measure of S ⊂ R is denoted by S . Given an integrable function f on a set S with finite measure, we denote its average as
⨏_S f(s) ds = (1/|S|) ∫_S f(s) ds.
The closed unit ball of a normed space X is B_X and the interior of a set S ⊂ X is denoted by int S. The convergence with respect to the Hausdorff distance between closed subsets of X will be considered (see [33, Section 4.C]). We denote by C([0, T], X) the space of continuous functions from [0, T] to X, endowed with the ‖·‖_∞ norm; by C*([0, T], X) its dual space, and by C*_+([0, T], X) the subset of positive measures.
Classical constructs of nonsmooth analysis will be used. In particular, for a set S, the cone of (limiting/Mordukhovich) normal vectors to S at x ∈ S is denoted by N_S(x) (see [33, Definition 6.3]), while for x ∉ S we set N_S(x) = ∅. For a Lipschitz function f: X → R, the (limiting/Mordukhovich) subdifferential of f at x is denoted by ∂f(x) (see [33, Definition 8.6 (b)]); we also refer the interested reader to [31, Chapter 1], where the above concepts are used also in the context of coderivatives of set-valued maps. By a process, or a state-control pair, for the controlled dynamics (1.3) we mean the couple (x, u), where x is a solution of (1.3) corresponding to the (measurable) control u. The total variation of a function u of one real variable is denoted by TV(u).
2.2. Statement of the problem. Let C be a given polyhedron in a Euclidean space X = R^n, defined as
(2.1) C = ⋂_{j=1}^{σ} C_j,
where, for suitable unit vectors x j * ∈ X and real numbers c j ,
C_j := {x ∈ X : ⟨x_j^*, x⟩ ≤ c_j}.
Given x ∈ C, let us denote by I(x) the set of active constraints at x, namely I(x) = {j = 1, . . . , σ : ⟨x_j^*, x⟩ = c_j}. We assume that C has non-empty interior; in other words, the Positive Linear Independence Constraint Qualification (PLICQ) holds, i.e., if ∑_{j∈I(x)} λ_j x_j^* = 0 with λ_j ≥ 0, j = 1, . . . , σ, then λ_j = 0 for all j. In this case, the normal cone to C at x ∈ C is
N_C(x) = { v : v = ∑_{j∈I(x)} λ_j x_j^*, λ_j ≥ 0 }.
The following assumptions will be considered.
(H_U) The control set U ⊂ R^d is compact and convex, with d ≤ dim X. Moreover, since we will require a zero-average condition on u(t), we assume 0 ∈ int U.
We remark that in order to guarantee the existence of controls u(t) with zero average it is sufficient to assume 0 ∈ U, but if 0 lies on the boundary then all the zero-average functions u(t) have values in a lower dimensional convex set Ũ with 0 ∈ int Ũ. We consider the maps g: [0, T] × X × U → X, f_1: X → R and f_2: [0, T] × U → R with the following properties.
(H_g) the map t ↦ g(t, x, u) is measurable for all x ∈ X, u ∈ U, and there exists L ≥ 0 such that ‖g(t, x, u)‖ ≤ L for a.e. t ∈ [0, T] and all (x, u) ∈ X × U; the map (x, u) ↦ g(t, x, u) is smooth and there exists L′ ≥ 0 such that ‖D_x g(t, x, u)‖ ≤ L′ for a.e. t ∈ [0, T] and all (x, u) ∈ X × U;
(H_{f_1}) the map x ↦ f_1(x) is Lipschitz continuous;
(H_{f_2}) the map t ↦ f_2(t, u) is continuous for all u ∈ U and the map u ↦ f_2(t, u) is continuously differentiable for a.e. t ∈ [0, T] and all u ∈ U.
Problem (P) We set T > 0 and consider the problem
(2.2)
ẋ ∈ −N_C(x) + g(t, x, u) a.e. on [0, T],
u(t) ∈ U a.e. and ∫_0^T u(t) dt = 0,
x(0) = x(T).
We wish to maximize the integral functional
(2.3) J(x, u) := ∫_0^T [ f_1(g(t, x(t), u(t)) − ẋ(t)) − f_2(t, u(t)) ] dt
among all processes (x, u) of (2.2).
2.3. Statement of the main results. The setting of the existence theorem is slightly more general than in the previous section. Therefore we list the assumptions directly in the statement of the result. We will make reference to problem (2.2), but the statement can be easily reformulated for a Cauchy problem, with or without the constraint on the mean of the control.
Theorem 2.1. Let C ⊂ X be compact and convex, let U ⊂ R^d be compact, and let K > 0. Let g: [0, T] × X × U → X be measurable with respect to t, continuous with respect to (x, u), and uniformly bounded. Let f: [0, T] × X² × U → R be measurable with respect to t and upper semicontinuous with respect to (x, ẋ, u). Set
U_K := {u ∈ L^1(0, T; R^d) : u(t) ∈ U a.e. and TV(u) ≤ K}
and assume that the problem (2.2) admits solutions with u ∈ U_K. Then the integral functional
I(x, u) := ∫_0^T f(t, x(t), ẋ(t), u(t)) dt
admits a maximizer among all processes (x, u) of problem (2.2) such that u ∈ U K .
Our necessary optimality conditions are applicable to local W^{1,2}-optimal processes for problem (P). We say that (x̄, ū) is a local W^{1,2}-optimal process for (P) provided there exists ε̄ > 0 such that, for all processes (x, u) of (2.2) with ‖x − x̄‖_{W^{1,2}([0,T];X)} + ‖u − ū‖_{L^2([0,T];X)} < ε̄, one has J(x, u) ≤ J(x̄, ū).
The result on necessary optimality conditions is the following.
Theorem 2.2. Let the assumptions (H_U), (H_g), (H_{f_1}), (H_{f_2}) hold, and let (x̄, ū) be a local W^{1,2}-optimal process for the problem (P). Then there exist
• a number λ ≥ 0,
• a function of bounded variation p: [0, T] → X,
• positive and finite Radon measures dξ j on [0, T ], j = 1, . . . , σ,
• a function ψ ∈ L 1 (0, T ; X),
• a vector ω ∈ B_X, which together satisfy the following properties:
• (adjoint equation) dp = −D_x g(t, x̄(t), ū(t))^* p(t) dt + ∑_{j=1}^{σ} x_j^* dξ_j (in C*([0, T]; X)),
• (transversality) p(T) = p(0),
• (weak maximality condition) ψ(t) = −D_u g(t, x̄(t), ū(t))^* p(t) − ω − λ D_u f_2(t, ū(t)) ∈ N_U(ū(t)) a.e. on [0, T],
• (support condition) for all j = 1, . . . , σ, supp(dξ_j) ⊂ {t ∈ [0, T] : j ∈ I(x̄(t))},
• (nontriviality condition) λ + ‖p‖_∞ = 1.
The proof of Theorem 2.2 will be carried out in Sections 6-9.
Remark 2.3. One can consider a dynamics more general than (2.2), namely
ẋ(t) ∈ −N_C(x(t)) + g(t, x(t), u(t)) a.e. on [0, T],
ẏ(t) = f_1(g(t, x(t), u(t)) − ẋ(t)) − f_2(t, y(t), u(t)),
u(t) ∈ U and ∫_0^T u(t) dt = 0,
x(0) = x(T), y(0) = 0.
The object to be maximized in this case is ϕ(y(T)), for a suitable (e.g., u.s.c.) function ϕ. This amounts to adding only technical difficulties, which we wish to avoid here.
3. A motivating locomotor model
3.1. Introduction. In recent years, increasing attention has been directed to the analysis, control and optimization of the locomotion of simple devices, such as chains of linked segments or blocks. The same trend can be observed both in swimming [5,30,38] and in locomotion on a solid surface, such as inching and crawling [1,8,19,37]. The employment of very simple mechanisms has two main motivations. The first one is that a simple mechanism allows an easier miniaturization of the device. The second advantage comes from the paradigm of simplexity in soft robotics, based on the idea that a simple mechanism with a low number of control parameters may still achieve a complex behaviour and adaptability to an unknown environment by exploiting the large deformation of a soft, elastic body [24]. This also motivates the strong role played by elasticity in our model, despite it introducing several additional mathematical challenges.
In the specific case of crawling locomotion, several approaches have been applied to the search for optimal gaits. One strategy is to consider suitable approximations in the model, for instance neglecting elasticity or working in a small deformation regime, so that, with a certain degree of approximation, it is possible to have an explicit description of the dynamics in terms of the control function [1,18]. Another approach, introducing a feedback mechanism in order to apply adaptive control, is presented in [6,7]. A model-free control framework, based on the decomposition of possible gaits as paths between a finite number of basic states, has been proposed in [35].
In this paper we present a more mathematical approach, based on a maximum principle of Pontryagin type. On the one hand, compared to the more pragmatic approaches mentioned above, this makes it more difficult to obtain an explicit characterization of optimal gaits. On the other hand, we believe that the development of a more theoretical approach, in parallel to the engineering studies appearing in the literature, may contribute to a better understanding of the challenging issues raised by the optimal control of a soft bodied locomotor. In our opinion crawling locomotion is not only an interesting problem per se, but represents a less hostile framework in which we can learn to unravel difficult phenomena that appear in more general settings. Concerning the specific model considered in our paper, we will follow the approach developed in [19,20]. Our choice is motivated by the fact that this class of models includes the two main features observed in crawlers (a stick-slip interaction with the environment and an elastic body) without adding unnecessary elements. Moreover, even if here we consider only a smaller family of cases, the same formalism of sweeping processes applies to a large class of behaviours, including continuous bodies and time-dependent friction [19], opening the way for future developments of our results.
Figure 1. A chain of blocks with displacements X_1(t), . . . , X_4(t); each adjacent pair is joined by a spring of stiffness k in series with an actuator of prescribed length L_i(t), i = 1, 2, 3.
3.2. A rate-independent model of a soft crawler. Let us consider the mechanical system illustrated in Figure 1, consisting of a chain of N blocks. Each pair of adjacent blocks is joined by a link composed of a spring in series with an actuator, namely an element of prescribed length L_i(t), which is our control on the system. The body of the crawler can therefore be identified in the reference configuration by a set of N points {ξ_1, . . . , ξ_N}. We represent the state of the crawler in the deformed configuration with a vector X = (X_1, X_2, . . . , X_N) in R^N, where X_i stands for the displacement of the point ξ_i.
We consider the locomotion of our model in the regime of very slow (quasi-static) actuation, so that inertial forces can be neglected. Hence, the dynamics is described by a force balance between the friction forces acting on the body of the crawler and internal elastic forces associated to the deformations of the springs in the links, which can be written as
(3.1) −D_X E(t, X) ∈ ∂_Ẋ R(Ẋ).
Here E(t, X ) is the elastic energy of the crawler, and therefore can be expressed as the sum of the elastic energies E i (t, X ) of each link, namely
(3.2) E(t, X) = ∑_{i=1}^{N−1} E_i(t, X) = ∑_{i=1}^{N−1} (k/2) (X_{i+1} − X_i − L_i(t))².
We assume that the actuation functions L i ∶ [0, T ] → R are Lipschitz continuous. The constant k > 0 is the elastic constant of the springs. Note that the same mathematical structure holds if we replace each actuator with an active control on the rest length of the corresponding spring, which is the case of robotic crawlers actuated e.g. by nematic elastomers [17,22]. Each of the points ξ i is subject to an anisotropic dry friction, so that we can write the friction force F i acting on ξ i as
F_i = F_i(Ẋ_i) ∈ {μ_i^−} if Ẋ_i < 0, [−μ_i^+, μ_i^−] if Ẋ_i = 0, {−μ_i^+} if Ẋ_i > 0,
for some positive coefficients μ_i^±.
Hence, friction forces can be expressed variationally in (3.1) as the subdifferential of the dissipation potential
(3.3) R(Ẋ) = ∑_{i=1}^{N} R_i(Ẋ_i), with R_i(Ẋ_i) = −μ_i^− Ẋ_i if Ẋ_i ≤ 0, and R_i(Ẋ_i) = μ_i^+ Ẋ_i if Ẋ_i ≥ 0.
We recall that ∂R(Ẋ) ⊆ ∂R(0), since the function R is positively homogeneous of degree one, and set
(3.4) C_0 := {X ∈ X : −μ_i^− ≤ ⟨e_i, X⟩ ≤ μ_i^+ for i = 1, . . . , N} = ∂R(0),
where e_1, . . . , e_N denote the canonical basis of X. Since the friction forces in (3.1) are bounded, we cannot allow arbitrarily large initial elastic forces; hence we introduce the following admissibility condition on the initial state:
(3.5) −D_X E(0, X_0) ∈ C_0.
In order to guarantee existence and uniqueness of solution for the Cauchy problem with an admissible initial state, we make the following assumption: for every subset of indices J ⊆ {1, . . . , N } we have
(3.6) ∑_{i∈J} μ_i^+ − ∑_{i∈J^c} μ_i^− ≠ 0,
where J c denotes the complement of J. We refer to [19, Section 2] for a complete proof and discussion.
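Condition (3.6) is finitely checkable: it involves all 2^N subsets J of the contact indices. A minimal sketch of such a check (the friction values used below are illustrative, not taken from [19]):

```python
from itertools import combinations

def satisfies_condition_36(mu_plus, mu_minus, tol=1e-12):
    """Check condition (3.6): for every subset J of {1, ..., N},
    sum_{i in J} mu_i^+  -  sum_{i in J^c} mu_i^-  must be nonzero."""
    n = len(mu_plus)
    for r in range(n + 1):
        for J in combinations(range(n), r):
            Jc = set(range(n)) - set(J)
            gap = sum(mu_plus[i] for i in J) - sum(mu_minus[i] for i in Jc)
            if abs(gap) < tol:
                return False
    return True

print(satisfies_condition_36([1.0, 1.0, 1.0], [1.4, 1.4, 1.4]))  # True
print(satisfies_condition_36([1.0, 1.0], [1.0, 1.0]))            # False: J = {1} gives 1 - 1 = 0
```

The exhaustive scan over subsets costs O(2^N), which is harmless for the small N typical of these crawler models.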
Since we will refer also later on to results obtained there, we observe for the reader's convenience that the coordinate x and the sets C, C sh in [19] correspond respectively to X , C 0 , C in this paper.
In order to study the locomotion of our model, it is useful to introduce the projections:
(3.7) y = π_Y(X) := (1/N) ∑_{i=1}^{N} X_i ∈ R,  z = π_Z(X) := (X_2 − X_1, . . . , X_N − X_{N−1}) =: (z_1, . . . , z_{N−1}) ∈ Z ≅ R^{N−1}.
In this way we can split the state of the crawler into two components: the term y describes the position of the crawler, whereas z describes its shape, namely the lengths of the N − 1 links in the deformed configuration.
Setting, without loss of generality, y(0) = π_Y(X(0)) = 0, our problem consists of finding suitable choices of the actuations L_i that maximize y(T) = π_Y(X(T)).
3.3. Formulation as a sweeping process. We now show how we can pass from the dynamics (3.1) of the model presented above to a sweeping process of the form (2.2), and discuss the other elements of problem (P).
We observe that, since the elastic energy E is invariant for rigid translations, it depends only on the shape z, namely
(3.8) E(t, X) = ⟨(k/2) z − ℓ_sh(t), z⟩ + time-dependent terms, where we define ℓ_sh(t) := (kL_1(t), . . . , kL_{N−1}(t)).
The last term disappears in the dynamics (3.1), so it can be neglected for our purposes. We can reformulate the dynamics (3.1) as the variational inequality
(3.9) ⟨k π_Z(X(t)) − ℓ_sh(t), π_Z(u − Ẋ(t))⟩ + R(u) − R(Ẋ(t)) ≥ 0 for every u ∈ X,
cf. [28,29]. It is easily verified that a function X (t) satisfies (3.9) only if its projection z(t) = π Z (X (t)) satisfies:
(3.10) ⟨k z(t) − ℓ_sh(t), w − ż(t)⟩ + R_sh(w) − R_sh(ż(t)) ≥ 0 for every w ∈ Z,
where the dissipation potential R_sh: Z → R is defined as
(3.11) R_sh(z) = min{ R(X) : X ∈ X, π_Z(X) = z }.
The potential R_sh is convex and positively homogeneous of degree one [19, Lemma 2.1]. We notice that, once (3.10) is solved, the solution to (3.9) can be recovered straightforwardly. Indeed, we observe that (3.6) allows one to define a function v_m: Z → R as the unique function satisfying
(3.12) R_sh(π_Z(X)) = R(X) if and only if π_Y(X) = v_m(π_Z(X)),
cf. [19,Lemma 3.2]. This property allows us to recover the evolution of y(t) from that of z(t), as
(3.13) ẏ(t) = π_Y(Ẋ(t)) = v_m(π_Z(Ẋ(t))) = v_m(ż(t)).
We can now reformulate the problem for the shape coordinates (3.10) in a differential inclusion formulation analogous to (3.1), namely
(3.14) −k z + ℓ_sh(t) ∈ ∂_ż R_sh(ż).
Let us denote by R*_sh the Legendre transform of R_sh. Setting C := ∂_ż R_sh(0), by the Legendre-Fenchel equivalence we obtain
(3.15) ż ∈ ∂_ζ R*_sh(−k z + ℓ_sh(t)) = N_C(−k z + ℓ_sh(t)).
We observe that C is a polyhedron in Z of the form (2.1); indeed, by [19, Lemma 2.2] we deduce that
(3.16) C = {z ∈ Z : −μ_i^− ≤ ⟨π_Z(e_i), z⟩ ≤ μ_i^+ for i = 1, . . . , N},
where e 1 , . . . , e N denotes the canonical base of R N . Let us now consider the change of variables x(t) = −kz(t) + ℓ sh (t) and set u(t) ∶= D t ℓ sh (t). The locomotion of our system, by (3.13) and (3.15), is described by
(3.17) ẋ(t) ∈ −N_C(x(t)) + u(t),  ẏ(t) = v_m(u(t) − ẋ(t)).
3.4. Formulation of the control problem (P). Now that we have shown how the locomotion of our system can be described by the dynamics (3.17), we discuss the cost functional and the constraints required in our control problem (P). A locomotion strategy, be it for crawling, swimming, walking or running, can usually be identified with a gait, namely a periodic pattern that is repeated a large number of times. Let us denote by T the period of a gait. In our model, this corresponds to assuming that the functions L_i are T-periodic, which in terms of the control u in (3.17) reads
∫_0^T u(t) dt = 0.
It is also reasonable to assume that there are some constraints on the speed at which the shape change occurs, corresponding to a uniform Lipschitz constant for all the admissible actuations. This, for the dynamics (3.17), is exactly the constraint u(t) ∈ U , where the set U is of the form
(3.18) U = ∏_{ℓ=1}^{d} [−a_ℓ, a_ℓ] ⊂ R^d,
where a ℓ > 0 for all ℓ = 1, . . . , d.
Since we are considering a one-dimensional locomotion model, we want to maximize the advancement of the crawler produced by the chosen gait, possibly subtracting a cost for the actuation.
Due to the hysteretic behaviour of the sweeping process, a periodic input (in the sense above) does not necessarily produce a periodic evolution of the shape, and the displacement y(T) − y(0) produced by the crawler in a period depends on its initial shape z(0). Hence the optimal gait may depend on the ability to exploit a specific initial state, and on the exact number of periods we are considering. This does not suit our purposes, since we are interested in an arbitrarily long time behaviour. However, it is known that sweeping processes with a periodic input converge asymptotically to a periodic output [23,25,26,14]. This has already been observed in the models with one link [20] and two links [21], where it was also noticed that, for some "common sense" gaits, the convergence to the asymptotic periodic orbit occurs within the first period.
Since we are interested in the long-time behaviour, we can therefore optimize over the possible limit cycles (for the shape) associated to a given gait, and evaluate the cost functional on a single period. This corresponds to optimizing over the trajectories that satisfy the periodicity condition x(0) = x(T).
Regarding the cost functional, denoting with f 2 (t, u) a possible cost of the actuation, we have
J(x, u) = y(T) − y(0) − ∫_0^T f_2(t, u(t)) dt = ∫_0^T [ v_m(u(t) − ẋ(t)) − f_2(t, u(t)) ] dt,
which corresponds to (2.3) with g(t, x, u) = u and f_1 = v_m.
The reader may be wondering why we are considering a periodic fixed time problem, instead of a free time or minimum time problem. The main motivation is that a gait works as a universal strategy, but might be slightly suboptimal for a specific prescribed problem. For instance, we expect that some case-by-case tuning on the first and last iterations of the gait, breaking periodicity, may provide a minor improvement to the solution. The natural way to avoid these complications is to optimize over the limit cycles of the system, as argued above. The reader may then wonder why we consider a fixed period. From the examples discussed in [17] and [20], one may observe that the action of the actuation of the crawler can be divided into two kinds of effects: a change in the tensions of the links (ẋ ≠ 0) and movement of the contact points (u − ẋ ≠ 0). A sufficiently complex change in the tension is necessary to reach suitable configurations that allow each contact point to move, and represents a sort of "fixed cost" necessary for locomotion. Every part of the period that is not used for the necessary tension change is best spent in pure locomotion (u ∈ N_C(x)). A larger period hence increases the fraction of the time that is used for locomotion, leading to a better strategy. Hence, we do not expect a free period to affect in a significant way the qualitative structure of optimal solutions. We also notice that a fixed period, combined with the bounds (3.18) on the rate of shape change, implicitly provides a bound on the maximum contraction and elongation of each link. This represents an additional physical constraint that would have to be incorporated in our model if such an assumption were removed. Since, as we will see, our problem is already quite complex as it is, we think that our approach is a preferable first step in the study of optimality for crawling locomotion.
3.5. Computation of the function v_m. We now compute explicitly the function v_m. Let us consider X ∈ C_0 and denote by J(X) the set of active constraints at X, namely
J(X) = { j ∈ {1, . . . , N} : X_j = −μ_j^− } ∪ { j ∈ {N+1, . . . , 2N} : X_{j−N} = μ_{j−N}^+ }
(here the number of constraints is σ = 2N). Let us consider a vector v ∈ R^N. By the convexity of C_0, there exists X ∈ C_0 such that v ∈ N_{C_0}(X). Moreover we can write, for some non-negative λ_i ≥ 0, i = 1, . . . , 2N,
(3.19) v = ∑_{i=1}^{N} λ_i e_i + ∑_{i=N+1}^{2N} (−λ_i) e_{i−N}.
We notice that, since the vectors e_i are linearly independent, the coefficients λ_i are uniquely determined. Moreover, they satisfy the active constraint condition
(3.20) λ_i > 0 ⇒ i ∈ J(X).
Let us set ν_i = π_Z(e_i) for i = 1, . . . , N and ν_i = π_Z(−e_{i−N}) for i = N+1, . . . , 2N. We have
(3.21) π_Z(v) = ∑_{i=1}^{2N} λ_i ν_i.
Let us recall that π_Z is a linear diffeomorphism between π_Y^{−1}(0) and Z, and that C = π_Z(C_0 ∩ π_Y^{−1}(0)); see [19, Lemma 2.2]. We notice that, since C_0 ∩ π_Y^{−1}(0) is a section of a convex set, every vector z ∈ Z can be written as z = π_Z(v), with v ∈ N_{C_0}(ζ) for some ζ ∈ C_0 ∩ π_Y^{−1}(0). Moreover, condition (3.6) implies that such a vector v is unique; see [19, Lemma 2.3]. These facts imply that every vector z ∈ Z admits a decomposition (3.21), where the coefficients λ_i are uniquely determined.
We can finally use such a decomposition of ż to give an explicit expression for v_m:
(3.22) v_m( ∑_{i=1}^{2N} λ_i ν_i ) = π_Y( ∑_{i=1}^{N} λ_i e_i − ∑_{i=N+1}^{2N} λ_i e_{i−N} ) = ∑_{i=1}^{N} λ_i/N − ∑_{i=N+1}^{2N} λ_i/N,
where we used the fact that π Y (e i ) = 1 N and (3.20). We observe that (3.22) in particular implies that v m is Lipschitz continuous.
Remark 3.1 (Properties of v_m). We observe that, by construction, the function v_m is positively homogeneous of degree one. According to the choice of the parameters μ_i^±, it can be convex, concave, or, more often, neither.
Let us consider for example the case N = 3, with homogeneous friction for the three contact points, namely μ_i^+ = μ^+ and μ_i^− = μ^− for i = 1, 2, 3. Excluding the critical values according to (3.6), we have three situations. If μ^− > 2μ^+ then v_m is convex and positive outside the origin, meaning that the crawler can move only forward. Symmetrically, if μ^+ > 2μ^− then v_m is concave and negative outside the origin, meaning that the crawler can move only backward. In the intermediate case μ^−/2 < μ^+ < 2μ^−, the function v_m is neither convex nor concave, and assumes both positive and negative values, with the crawler able to move in both directions. In particular, the origin is a "monkey saddle" (namely, a saddle point with three ridges and three ravines). Remarkably, the mathematically desirable case of concave v_m is also the least meaningful physically: indeed, the crawler can only move backward, whereas we want to optimize its movement forward, so that, for any reasonable actuation cost f_2, the optimal strategy is trivially to stay idle and not move.
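The function v_m can also be evaluated numerically straight from its definition (3.11)-(3.12), without using the decomposition (3.22): fixing the shape z pins down X up to the scalar c = X_1, and R is convex and piecewise linear in c (the slope on each linear piece is exactly a quantity of the form appearing in (3.6), hence nonzero), so the unique minimizer sits at a breakpoint. A sketch of this, with illustrative homogeneous friction values in the intermediate regime of Remark 3.1:

```python
import numpy as np

def v_m(z, mu_plus, mu_minus):
    """v_m(z) = pi_Y(X*), where X* minimizes the dissipation R over the
    fiber {X : pi_Z(X) = z} (cf. (3.11)-(3.12)). Writing X = c + s, with
    s the vector of partial sums of z, R is convex piecewise linear in c
    with breakpoints at c = -s_i; by (3.6) the minimizer is one of them."""
    z = np.asarray(z, dtype=float)
    s = np.concatenate(([0.0], np.cumsum(z)))          # X_i = c + s_{i-1}
    def R(c):
        X = c + s
        return float(np.sum(mu_plus * np.maximum(X, 0.0)
                            - mu_minus * np.minimum(X, 0.0)))
    c_star = min((-si for si in s), key=R)             # scan the breakpoints
    return c_star + s.mean()                           # pi_Y of the minimizer

# N = 3 contacts, homogeneous friction, intermediate regime mu^-/2 < mu^+ < 2 mu^-:
# shape velocities of opposite pattern move the crawler in opposite directions
print(v_m([1.0, -1.0], 1.0, 1.4))   # positive: forward motion
print(v_m([-1.0, 1.0], 1.0, 1.4))   # negative: backward motion
```

Replacing μ^− = 1.4 by a value larger than 2μ^+ makes both outputs nonnegative, matching the forward-only regime described above.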
4. Application to a one-link crawler and remarks
We analyse now the information provided by Theorem 2.2 for the model introduced in Section 3. For simplicity, we consider only the one-link crawler. In this case, taking into account (3.22), the optimal control problem reads as follows.
Given an interval C := [a, b], T > 0, and a smooth convex function f: R → R,
maximize J(x, u) := ∫_0^T [ (1/2)|u(t) − ẋ(t)| − f(u(t)) ] dt
subject to
(4.1) ẋ ∈ −N_C(x) + u a.e. on [0, T], x(T) = x(0) ∈ C,
where u(t) ∈ [−1, 1] a.e. and ∫_0^T u(t) dt = 0. Let (x̄, ū) be an optimal trajectory-control pair. Applying Theorem 2.2, we obtain the following necessary conditions: there exist λ ≥ 0, a BV function p: [0, T] → R, two finite positive Radon measures dξ_1 and dξ_2, ω ∈ R, and ψ ∈ L^1(0, T) such that:
1) supp(dξ_1) ⊆ {t ∈ [0, T] : x̄(t) = a};
2) supp(dξ_2) ⊆ {t ∈ [0, T] : x̄(t) = b};
3) dp = −dξ_1 + dξ_2 and p(T) = p(0);
4) ψ(t) = −p(t) − ω − λ D_u f(ū(t)) ∈ N_{[−1,1]}(ū(t)) a.e.;
5) λ + ‖p‖_∞ = 1.
Observe first that there is a degenerate case, namely p(t) ≡ −ω ≠ 0 and λ = 0, which is satisfied by all trajectories of (4.1), with dξ_1 = dξ_2 = 0 and ψ ≡ 0, for any cost f. We now analyse a few nondegenerate cases, in the (desirable) event that they occur. To simplify the analysis, we take either f(u) ≡ 0 or f(u) = (1/2)u². Let us first focus on the trajectories x(·) lying in the interior of C, namely those such that a < x(t) < b for all t. If f ≡ 0, we observe that they are all local extrema, because in this case ẋ(t) = u(t) a.e. and the functional J vanishes in a neighbourhood of (x, u).
In the case f(u) = (1/2)u², instead, the necessary conditions provide more information. Indeed, assume again that a < x(t) < b for all t. Then, by the support conditions 1) and 2) and the adjoint equation 3), p is constant, so that p + ω is constant as well. Assume now that the nondegeneracy condition λ = 1 is valid. Then the extremality condition 4) reads as 0 ∈ p + ω + u(t) + N_{[−1,1]}(u(t)) for a.e. t.
The right hand side of the above expression is a strictly monotone function of u, thus there exists one and only one u* such that 0 ∈ p + ω + u* + N_{[−1,1]}(u*), i.e., u(t) ≡ u*. Since all feasible controls must have zero mean, u ≡ 0. Therefore, in this first nondegenerate case, the only extremal solutions that lie in the interior of [a, b] for all t are constant. Observe that the above analysis remains valid if f(·) is strictly convex with minimum at 0. Observe also that, still assuming λ = 1, the above argument implies that ū is constant (not necessarily zero) along any interval I in which x̄ lies in the interior of [a, b].
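The unique solution of 0 ∈ q + u + N_{[−1,1]}(u) (writing q = p + ω) is the resolvent of the normal cone operator at −q, i.e., the clamp of −q onto [−1, 1]. A quick sanity check of this fact (the values of q below are illustrative):

```python
def extremal_control(q):
    """Unique u in [-1, 1] with 0 in q + u + N_[-1,1](u):
    the projection (clamp) of -q onto [-1, 1]."""
    return max(-1.0, min(1.0, -q))

for q in (-2.5, -0.3, 0.0, 0.7, 3.0):
    u = extremal_control(q)
    if -1.0 < u < 1.0:
        assert abs(q + u) < 1e-12      # N_[-1,1](u) = {0} in the interior
    elif u == 1.0:
        assert q + u <= 0.0            # need -(q + u) in N_[-1,1](1) = [0, +inf)
    else:
        assert q + u >= 0.0            # need -(q + u) in N_[-1,1](-1) = (-inf, 0]
    print(q, "->", u)
```

Strict monotonicity of u ↦ q + u + N_{[−1,1]}(u) is what makes this solution unique, exactly as used in the argument above.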
Let us now consider the trajectories that touch the boundary of [a, b]. In particular, let us notice that, in order to achieve "true" locomotion, both boundary points must be touched by the trajectory, since one contact point can be moved only if x(t) = a and the other only if x(t) = b: a necessary and sufficient condition for this to happen is T > 2(b − a). We assume again f ≡ 0. In this case, the adjoint vector p may not be constant, being however constant in every interval where x̄ lies in the interior of [a, b]. To proceed, we assume the further nondegeneracy condition
(4.2) p(t) + ω ≠ 0 for all t ∈ [0, T ].
Under this condition, 4) implies that ū(t) = −sign(p(t) + ω) a.e. Let I := {t : a < x̄(t) < b}. Since p is constant in every connected component of I, ū ∈ {±1} is constant as well in any such component. Therefore, I is a finite union of open intervals, each of them having length b − a, except possibly the first and the last one, whose lengths however sum up to b − a as well, due to the periodicity condition on x̄. Summarizing, in this case optimal controls are bang-bang with finitely many switchings.
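These conclusions can be checked against a direct discretization of (4.1). Below is a minimal catching-up sketch (one projection onto C = [a, b] per time step) driven by a bang-bang gait with a single switching; the parameters a = 0, b = 1, T = 4 are illustrative. Since T > 2(b − a), the trajectory touches both endpoints, and for this gait the net displacement is y(T) ≈ (T − 2(b − a))/2 = 1:

```python
def simulate_one_link(a, b, T, n, control):
    """Catching-up scheme for dx/dt in -N_[a,b](x) + u with x(0) = a.
    Each step projects the explicit Euler update back onto C = [a, b].
    Returns x(T) and y(T) = int_0^T (1/2)|u - dx/dt| dt (cost f == 0)."""
    h = T / n
    x, y = a, 0.0
    for k in range(n):
        u = control(k * h)
        x_new = min(b, max(a, x + h * u))      # projection onto [a, b]
        y += 0.5 * abs(h * u - (x_new - x))    # displacement produced
        x = x_new
    return x, y

a, b, T = 0.0, 1.0, 4.0
gait = lambda t: 1.0 if t < T / 2 else -1.0    # bang-bang, one switching
x_T, y_T = simulate_one_link(a, b, T, 4000, gait)
print(x_T, y_T)   # x(T) returns to x(0) = a; y(T) is close to 1
```

Displacement accrues only while the state is stuck at a boundary point (where u − ẋ ≠ 0), in line with the "fixed cost" discussion of Section 3.4.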
4.1. Remarks on an assumption of Theorem 2.1. The above example also illustrates why it is reasonable to expect optimal controls ū(t) to have bounded variation. Indeed, we show that, given a state-control pair (x, u) with unbounded variation, we can always modify it to obtain a pair (x̃, ũ) with ‖x − x̃‖_{L^∞[0,T]} arbitrarily small, and such that J(x̃, ũ) = J(x, u) if f_2 = 0, while J(x̃, ũ) > J(x, u) if f_2 = (1/2)u².
Indeed, let us consider a control u(t) with unbounded variation, and let t * be a time such that in every neighbourhood of t * the function u(t) has unbounded variation. We distinguish two cases.
Firstly, consider the case x(t*) ∈ int C and take a sufficiently small interval [t_1, t_2] such that t_1 < t* < t_2 and x(t) ∈ int C for every t ∈ [t_1, t_2]. Then we define a new state-control pair as
(x̂, û) := ( ((t_2 − t) x(t_1) + (t − t_1) x(t_2))/(t_2 − t_1), ⨏_{t_1}^{t_2} u(s) ds ) for t ∈ [t_1, t_2], and (x̂, û) := (x(t), u(t)) elsewhere.
We observe that (x̂, û) satisfies (4.1), with J(x̂, û) = J(x, u) if f_2 = 0, and J(x̂, û) > J(x, u) if f_2 = (1/2)u². Moreover, (x̂, û) has bounded variation in [t_1, t_2]. Secondly, we consider the case when x(t*) lies on the boundary of C; for simplicity we discuss the case x(t*) = b. We take a sufficiently small interval [t_1, t_4] such that t_1 < t* < t_4 and x(t) ∈ (a, b] for every t ∈ [t_1, t_4]. Moreover we set t_2 := min{t ∈ [t_1, t_4] : x(t) = b} and t_3 := max{t ∈ [t_1, t_4] : x(t) = b}. Then we define a new state-control pair as
(x̂, û) := ( ((t_2 − t) x(t_1) + (t − t_1) b)/(t_2 − t_1), ⨏_{t_1}^{t_2} u(s) ds ) for t ∈ [t_1, t_2],
(x̂, û) := ( b, ⨏_{t_2}^{t_3} u(s) 1_{u^{−1}(b)}(s) ds ) for t ∈ [t_2, t_3],
(x̂, û) := ( ((t_4 − t) b + (t − t_3) x(t_4))/(t_4 − t_3), ⨏_{t_3}^{t_4} u(s) ds ) for t ∈ [t_3, t_4],
(x̂, û) := (x(t), u(t))
elsewhere. Also in this case it is easy to see that (x̂, û) satisfies (4.1).
We conclude this section with two remarks.
1. Observe that, differently from classical state constrained Bolza problems, the Hamiltonian contains only one summand of the integral cost: the part involving f_1(g(t, x(t), u(t)) − ẋ(t)) is missing, due to a cancellation that occurs in the proof of Theorem 8.2.
2. It is well known (see, e.g., [36, Sec. 10.6] and [3]) that necessary optimality conditions for state constrained control problems may be satisfied by all state-control pairs. The zero mean condition on the control u provides the further multiplier ω ∈ R^d, and this is why the stronger nontriviality condition (4.2) plays a role. However, up to now there are no sufficient conditions for (4.2) to hold. Similarly, conditions ensuring λ = 1 need to be studied, since classical results of this type do not apply to our setting.
5. Proof of Theorem 2.1
The proof of the existence result is based on a strong convergence argument that is essentially contained in [10] (see also Sec. 1.3 in [23]). We present here a version of this result that is fit for our setting.
Lemma 5.1. Let C ⊂ X be closed and convex and let g be as in the statement of Theorem 2.1. Let
u_ℓ, u ∈ L²(0, T; R^d), ℓ ∈ N, be such that u_ℓ → u in L². Let x⁰_ℓ ∈ C be such that x⁰_ℓ → x⁰, and let x_ℓ: [0, T] → X be a solution of the Cauchy problem
ẋ ∈ −N_C(x) + g(t, x, u_ℓ), x(0) = x⁰_ℓ.
Then there exist a solution x of the Cauchy problem
(5.1) ẋ ∈ −N_C(x) + g(t, x, u), x(0) = x⁰,
and a subsequence {x_{ℓ_k}} such that x_{ℓ_k} → x strongly in W^{1,2}([0, T]; X).
Proof. It is well known (see, e.g., [27]) that {x ℓ } is uniformly bounded in W 1,2 ([0, T ]; X), so that, up to a subsequence, x ℓ converges weakly in W 1,2 ([0, T ]; X) to some x∶ [0, T ] → X. By standard arguments (see, e.g., [27]), x is a solution of (5.1). Moreover,
(5.2) g(t, x ℓ , u ℓ ) → g(t, x, u) in L 2 .
Set ξ_ℓ := g(t, x_ℓ, u_ℓ) − ẋ_ℓ and observe that ξ_ℓ(t) ∈ N_C(x_ℓ(t)) a.e. Therefore, for a.e. t, all ℓ, and all h > 0 small enough, one has both
⟨ξ_ℓ(t), (x_ℓ(t + h) − x_ℓ(t))/h⟩ ≤ 0 and ⟨ξ_ℓ(t), (x_ℓ(t − h) − x_ℓ(t))/h⟩ ≤ 0.
By passing to the limit as h → 0⁺, one obtains that
(5.3) ⟨ξ_ℓ(t), ẋ_ℓ(t)⟩ = 0 a.e.
The sequence {ξ_ℓ} converges weakly in L² to ξ := g(t, x, u) − ẋ. Since ξ(t) ∈ N_C(x(t)) for a.e. t, the same argument as above yields
(5.4) ⟨ξ(t), ẋ(t)⟩ = 0 a.e.
By (5.2), ẋ_ℓ + ξ_ℓ → ẋ + ξ strongly in L²([0, T]; X). Since ẋ_ℓ − ξ_ℓ converges to ẋ − ξ weakly in L², the strong convergence is equivalent to
(5.5) ‖ẋ_ℓ − ξ_ℓ‖_{L²} → ‖ẋ − ξ‖_{L²}.
To show (5.5), observe that, by (5.3) and (5.4),
‖ẋ_ℓ − ξ_ℓ‖²_{L²} = ‖ẋ_ℓ‖²_{L²} + ‖ξ_ℓ‖²_{L²} = ‖ẋ_ℓ + ξ_ℓ‖²_{L²} → ‖ẋ + ξ‖²_{L²} = ‖ẋ − ξ‖²_{L²},
and the proof is concluded.
Theorem 2.1 now follows easily.
Proof of Theorem 2.1. Let {(x_ℓ, u_ℓ)} be a maximizing sequence for I among solutions of (2.2) with u_ℓ ∈ U_K for all ℓ. By Helly's theorem, there exists ū such that, up to a subsequence, u_ℓ → ū pointwise, so that ū(t) ∈ U for all t, and actually u_ℓ → ū in L². Moreover, ∫_0^T ū(t) dt = 0. Since C is compact, up to taking another subsequence we may assume that x_ℓ(0) → x⁰ for some x⁰ ∈ C. Then, by Lemma 5.1, x_ℓ → x̄ strongly in W^{1,2}, where x̄ is the solution of (2.2) corresponding to ū and with initial condition x̄(0) = x⁰ (x̄ is easily seen to be T-periodic). Then
I (x,ū) ≥ lim ℓ→∞ I (x ℓ , u ℓ ),
so that (x,ū) is obviously an optimal state-control pair.
6. Discrete approximations of trajectories
In order to apply the discretization approach, we need first to establish a result on discrete approximations of solutions of the general sweeping process
(6.1)ẋ ∈ −N C (x) + g(t, x, u) a.e., u(⋅) measurable with u(t) ∈ U,
where the polyhedron C ⊂ X, the function g and the control set U satisfy the assumptions of Section 2.2. A similar result was obtained in [12]; here the assumptions on the reference process (x̄, ū) are weakened, as ẋ and ū are no longer supposed to have bounded variation.

Let us first state a lemma on piecewise constant approximations of functions with prescribed average.
I i m = [t i m , t i+1 m ). Define

(6.2) u m (t) ∶= Σ_{i=0}^{2 m −1} ( ⨏ I i m ū(s) ds ) 1 I i m (t).

Then u m → ū a.e. on [0, T ] and ⨏ T 0 u m = ⨏ T 0 ū.

Proof. Set h m = T /2 m . Let t ∈ [0, T ) and let I m (t) ∶= [τ m (t), τ m (t) + h m ) be the unique interval I i m such that t ∈ I i m . By the Lebesgue differentiation theorem,

lim h→0 + (1/h) ∫_t^{t+h} ū(s) ds = lim h→0 + (1/h) ∫_{t−h}^{t} ū(s) ds = ū(t)

almost everywhere. Thus, for almost every t,

lim m→∞ ⨏ I m (t) ū = lim m→∞ (1/h m ) [ ∫_{τ m (t)}^{t} ū + ∫_{t}^{τ m (t)+h m } ū ]
= lim m→∞ (1/h m ) { (t − τ m (t)) [ ū(t) + ( (1/(t − τ m (t))) ∫_{τ m (t)}^{t} ū(s) ds − ū(t) ) ] + (τ m (t) + h m − t) [ ū(t) + ( (1/(τ m (t) + h m − t)) ∫_{t}^{τ m (t)+h m } ū(s) ds − ū(t) ) ] }
= ū(t) + lim m→∞ [ ((t − τ m (t))/h m ) o(1) + ((τ m (t) + h m − t)/h m ) o(1) ] = ū(t).

Finally, we observe that the average of ū is preserved by the discretization.
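The cell-averaging construction in Lemma 6.1 is straightforward to reproduce numerically. The sketch below is my own illustration, not code from the paper: the helper name and the test function are invented. It builds the dyadic piecewise-constant approximation u m and checks that the mean of ū over [0, T ] is preserved, as stated.

```python
import math

# Illustrative sketch (not from the paper): the averaging operator of
# Lemma 6.1 replaces u by its mean value on each dyadic cell
# I_m^i = [i*T/2^m, (i+1)*T/2^m); this preserves the mean of u on [0, T].

def dyadic_average(u_vals, m):
    """Cell averages of u over the 2^m dyadic cells.

    u_vals: samples of u on a fine uniform grid whose length is a
    multiple of 2^m, so every cell average is an exact sub-mean.
    """
    n = len(u_vals)
    cell = n // (2 ** m)
    return [sum(u_vals[i * cell:(i + 1) * cell]) / cell
            for i in range(2 ** m)]

T = 1.0
fine = 2 ** 10
# midpoint samples of a zero-mean control, chosen only for illustration
u = [math.sin(2 * math.pi * (k + 0.5) * T / fine) for k in range(fine)]

for m in (2, 4, 6):
    u_m = dyadic_average(u, m)
    # the piecewise-constant approximation has the same mean as u
    assert abs(sum(u_m) / len(u_m) - sum(u) / len(u)) < 1e-9
```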
Recalling that, as is well known, all solutions of (6.1) are Lipschitz with a common Lipschitz constant L, we now state the main result of the section.

Theorem 6.2. Under the assumptions of Section 2.2, let (x̄, ū) be a process for (2.2). Let m ∈ N and let I i m , i = 0, . . . , 2 m − 1, be as in Lemma 6.1. Set also h m = T /2 m . Then there exist sequences {c ji m } j=1,...,σ, i=0,...,2 m −1 ⊂ R,
x m ∶ [0, T ] → X, r m ∶ [0, T ] → [0, +∞) (m ∈ N),
with the following properties:
a) max i=0,...,2 m −1 |c ji m − c j | ≤ LT /2 m ;

b) x m (⋅) is continuous and is affine on each I i m , i = 0, . . . , 2 m − 1, x m (0) = x̄(0), x m (T ) = x̄(T ), and, for i = 0, . . . , 2 m − 1,

(x m (t i+1 m ) − x m (t i m ))/h m ∈ − ⨏ I i m [g(s, x̄(s), ū(s)) − ẋ(s)] ds + g(t i m , x m (t i m ), u m (t i m )) + ( ⨏ I i m r m (s) ds ) B X ⊂ −N C i m (x m (t i m )) + g(t i m , x m (t i m ), u m (t i m )) + ( ⨏ I i m r m (s) ds ) B X ,
where u m is defined as in (6.2) and the polyhedra C i m will be defined in (6.5) below; c) the sequence of polyhedral valued maps
C m (t) ∶= Σ_{i=0}^{2 m −1} C i m 1 I i m (t)

ω i m ∶= (x̄(t i+1 m ) − x̄(t i m ))/h m

x m (t) ∶= x̄(t i m ) + (t − t i m ) ω i m = x̄(t i m ) + (t − t i m ) ⨏ I i m ẋ(s) ds.
Observe that, for each i = 0, . . . , 2 m , x m (t i m ) = x̄(t i m ) ∈ C, and, by Lemma 6.1, x m → x̄ strongly in W 1,2 ([0, T ]; X). Define

ω m (t) ∶= Σ_{i=0}^{2 m −1} ω i m 1 I i m (t), t ∈ [0, T ].
Fix i = 0, . . . , 2 m − 1 and set, for j = 1, . . . , σ,

c ji m ∶= c j if ⟨x̄(t), x j * ⟩ < c j for all t ∈ I i m , and c ji m ∶= ⟨x̄(t i m ), x j * ⟩ otherwise.

Now fix j = 1, . . . , σ and t̄ ∈ [0, T ]. If ⟨x̄(t̄), x j * ⟩ < c j , then eventually ⟨x̄(t), x j * ⟩ < c j for all t ∈ [τ m (t̄), τ m (t̄) + h m ), so that, eventually, c j m (t̄) = c j . Let now ⟨x̄(t̄), x j * ⟩ = c j . Then c j m (t̄), that is equal to c j m (τ m (t̄)), satisfies the conditions

⟨x̄(τ m (t̄)), x j * ⟩ = c j m (τ m (t̄)) ≤ ⟨x̄(t̄), x j * ⟩ = c j ,

where both the equality and the inequality follow from our definition of c j m (t). Then

c j − c j m (τ m (t̄)) = ⟨x̄(t̄) − x̄(τ m (t̄)), x j * ⟩ ≤ Lh m .
This in turn implies a). Set now
(6.3) g m (t) ∶= g(τ m (t),x(τ m (t)), u m (t)) and ζ m (t) = g m (t) − ω m (t), t ∈ [0, T ].
We recall that, for a.e. t ∈ [0, T ]
g(t, x̄(t), ū(t)) − ẋ(t) = Σ_{j=1}^{σ} λ j (t) x j *
for suitable measurable λ j (⋅). Moreover we can take λ j (⋅) ≥ 0, and such that λ j (t) = 0 for each t for which ⟨x(t), x j * ⟩ < c j , namely j ∉ I(x(t)). Observe that
ζ m (t) = ⨏ [τ m (t), τ m (t)+h m ) [g(s, x̄(s), ū(s)) − ẋ(s)] ds − ⨏ [τ m (t), τ m (t)+h m ) [g(s, x̄(s), ū(s)) − g m (s)] ds = Σ_{j=1}^{σ} x j * ⨏ [τ m (t), τ m (t)+h m ) λ j (s) ds − ⨏ [τ m (t), τ m (t)+h m ) [g(s, x̄(s), ū(s)) − g m (s)] ds. (6.4)
Define now, for j = 1, . . . , σ, i = 0, . . . , 2 m − 1,
(6.5) C ji m ∶= {x ∈ X ∶ ⟨x, x j * ⟩ ≤ c ji m }, C i m = ⋂_{j=1}^{σ} C ji m , C j m (t) ∶= Σ_{i=0}^{2 m −1} C ji m 1 I i m (t)
and observe that, by our construction,
Σ_{j=1}^{σ} x j * ⨏ I m (t) λ j (s) ds ∈ N C j m (τ m (t)) (x m (τ m (t))) = N C j m (τ m (t)) (x m (t))
for a.e. t. Moreover, for all t ∈ [0, T ],

⨏ I m (t) [g(s, x̄(s), ū(s)) − g m (s)] ds = ⨏ I m (t) [g(s, x̄(s), ū(s)) − g(τ m (s), x̄(τ m (t)), ū(s))] ds + ⨏ I m (t) [g(τ m (s), x̄(τ m (t)), ū(s)) − g m (s)] ds,
where we have used the fact that, for s ∈ I m (t), τ m (s) = τ m (t). Now, by the uniform continuity of g w.r.t. x and u in the domain of interest, the proof of b) is concluded. The remaining claims are immediate consequences of the construction and of Lemma 6.1, taking into account that ‖ż m ‖ ∞ is bounded uniformly w.r.t. m. In particular, the Hausdorff convergence in item c) follows from the fact that active constraints are eventually constant, while statement f) follows from the pointwise a.e. convergence of the sequences ż m and u m . Moreover, u m has zero mean by construction.
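The discrete dynamics in item b) has the flavor of Moreau's catching-up scheme: on each cell, take an explicit step driven by the perturbation g, then project back onto the polyhedron; the projection accounts for the normal-cone term. The following minimal sketch is my own illustration under strong simplifications (C is taken to be an interval [0, c], and g and all parameters are invented); it is not the construction used in the proof.

```python
import math

# Catching-up sketch for  x' ∈ -N_C(x) + g(t, x)  with C = [0, c]:
# one explicit step of g, then Euclidean projection onto C.
# Purely illustrative; C, g and all parameters are made up.

def project(x, c):
    return min(max(x, 0.0), c)        # projection onto the interval [0, c]

def catching_up(x0, g, T, n, c):
    h = T / n
    xs = [project(x0, c)]
    for i in range(n):
        xs.append(project(xs[-1] + h * g(i * h, xs[-1]), c))
    return xs

traj = catching_up(0.5, lambda t, x: math.cos(3.0 * t), T=2.0, n=400, c=1.0)
assert all(0.0 <= x <= 1.0 for x in traj)   # iterates stay feasible
```

For convex C, x i+1 = P C (x i + h g) is equivalent to the discrete inclusion x i + h g − x i+1 ∈ N C (x i+1 ), which is why the projection step realizes the normal-cone term.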
7. A discrete optimization problem
Let (x̄, ū) be a local W 1,2 -optimal process for problem (P), i.e., there exists ε̄ > 0 such that for all processes (x, u) of (2.2) with ‖x − x̄‖ W 1,2 ([0,T ];X) + ‖u − ū‖ L 2 ([0,T ];X) < ε̄ one has J(x, u) ≤ J(x̄, ū).
More precisely,
(7.2) lim m→∞ κ m [ ‖x 0 m − x̄(0)‖ 2 + Σ_{i=0}^{2 m −1} ∫ I i m ( ‖ẋ m (t) − ẋ(t)‖ 2 + ‖ū m (t) − ū(t)‖ 2 ) dt ] = 0. Consequently, J m (x̄ m , ū m ) → J(x̄, ū).
Proof. Suppose by contradiction that, possibly along a subsequence,
(7.3) lim m→∞ κ m [ ‖x 0 m − x̄(0)‖ 2 + Σ_{i=0}^{2 m −1} ∫ I i m ( ‖ẋ m (t) − ẋ(t)‖ 2 + ‖ū m (t) − ū(t)‖ 2 ) dt ] = γ ∈ (0, +∞].
Thanks to (m 4 ), the sequence {(x̄ m , ū m )} is bounded in W 1,2 ([0, T ]; X) × L 2 (0, T ; R d ), so that, up to taking a subsequence, it converges to some (x̃, ũ) weakly in the same space. By our definition of κ m ,
(7.4) lim m→∞ κ m [ ‖x 0 m − x̄(0)‖ 2 + Σ_{i=0}^{2 m −1} ∫ I i m ( ‖ẋ m (t) − ẋ(t)‖ 2 + ‖u m (t) − ū(t)‖ 2 ) dt ] = 0,
where we recall that (x m , u m ) is the sequence of approximations of (x,ū) constructed in Lemma 6.1 and in Theorem 6.2, hence, in particular, x 0 m =x(0). Since (x m ,ū m ) is an optimal process for (P m ),
(7.5) J m (x m ,ū m ) ≥ J m (x m , u m ) ∀m ∈ N.
By (7.4) and by strong convergence (see Theorem 6.2),
(7.6) J m (x m , u m ) → J(x,ū) as m → ∞.
Moreover, by (m 2 ), (m 4 ) and our assumptions, the sequence
h m Σ_{i=0}^{2 m −1} [ f 1 ( g(t i m , x̄ i m , ū i m ) − (x̄ i+1 m − x̄ i m )/h m ) − f 2 (t i m , ū i m ) ]
is uniformly bounded, so that the sequence
J m (x̄ m , ū m ) + (κ m /2) [ ‖x̄ m (0) − x̄(0)‖ 2 + ∫ T 0 ( ‖ẋ m (t) − ẋ(t)‖ 2 + ‖ū m (t) − ū(t)‖ 2 ) dt ]
is uniformly bounded from above. Moreover, thanks to (7.5) and (7.6), the same sequence is also uniformly bounded from below. This implies in turn that the sequence
κ m [ ‖x 0 m − x̄(0)‖ 2 + Σ_{i=0}^{2 m −1} ∫ I i m ( ‖ẋ m (t) − ẋ(t)‖ 2 + ‖ū m (t) − ū(t)‖ 2 ) dt ]

is uniformly bounded, so that, in particular, γ < +∞. As a consequence, (x̃, ũ) = (x̄, ū), and the convergence (x̄ m , ū m ) → (x̄, ū) is indeed strong in W 1,2 ([0, T ]; X) × L 2 (0, T ; R d ). Thanks to this fact, lim m→∞ J m (x̄ m , ū m ) = J(x̄, ū).
Therefore, (7.3) implies that
J(x̄, ū) − γ/2 ≥ J(x̄, ū),
and this contradiction completes the proof.
8. Necessary conditions for the discrete approximate problem
Throughout this section, the assumptions of Theorem 2.2 are supposed to hold.
Fix m ≥ 1. In order to proceed with deriving necessary optimality conditions for problems (P m ), we will introduce some further notations. We set
X ∶= (x 0 m , . . . , x 2 m −1 m ; w 0 m , . . . , w 2 m −1 m ; ρ 0 m , . . . , ρ 2 m −1 m ; ∆ 0 m , . . . , ∆ 2 m −1 m ) ∈ X 2 m × R 2 m d × X 2 m × X 2 m .
With the understanding that ∆ i m = (x i+1 m − x i m )/h m , we will write the functional J m as depending on X . We define furthermore the following maps, for i = 0, . . . , 2 m − 1: for x ∈ X, w ∈ U , and ρ ∈ B X , we set
Γ i m (x, w, ρ) ∶= − ⨏ I i m [g(s, x̄(s), ū(s)) − ẋ(s)] ds + g(t i m , x, w) + ρ ⨏ I i m r m (s) ds (∈ X).
The computation of necessary conditions for (P m ) will be carried out in two steps. In the first step we will show how to set (P m ) in the framework of classical results on finite dimensional optimization, while in the second one the calculations for this particular case will be carried out.
Theorem 8.1. Let X̄ = (x̄ m , w̄ m , ρ̄ m , ∆̄ m ) be an optimal process for (P m ). Then there exist λ m > 0, ω m ∈ R d , ψ i m ∈ R d , β i m , η i m ∈ X, ξ i m = (ξ i1 m , . . . , ξ iσ m ) ∈ R σ + , p i m ∈ X, and X * m ∈ X 2 m × R 2 m d × X 2 m × X 2 m , i = 0, . . . , 2 m , such that

(8.1) ξ ij m (⟨x j * , x̄ i m ⟩ − c ij m ) = 0, i = 0, . . . , 2 m − 1, j = 1, . . . , σ,

and, for i = 0, . . . , 2 m − 1,

(8.2) ( κ i m (x̄(0) − x̄ i m ) − λ m D x g(t i m , x̄ i m , w̄ i m ) * η i m − Σ_{j=1}^{σ} (ξ ij m /h m ) x j * + (p i+1 m − p i m )/h m ,
−λ m D w g(t i m , x̄ i m , w̄ i m ) * η i m + κ m (w̄ i m − u i m ) + D w f 2 (t i m , w̄ i m ) − ω m /h m − ψ i m /h m ,
−β i m /h m ,
p i m + λ m η i m + κ m ( (x̄ i+1 m − x̄ i m )/h m − ⨏ I i m ẋ(t) dt ) ) ∈ N graph (Γ i m ) (X̄ ),

where

(8.3) κ i m = κ m for i = 0 and κ i m = 0 for i ≠ 0, where κ m is defined in (7.1),

(8.4) p 2 m m = p 0 m ,

and

(8.5) ψ i m ∈ N U (w̄ i m ), β i m ∈ N B X (ρ̄ i m ), η i m ∈ ∂f 1 ( g(t i m , x̄ i m , w̄ i m ) − (x̄ i+1 m − x̄ i m )/h m ).
Proof. We begin by arranging in two different categories the constraints in problem (P m ). The variable X fulfils the following requirements:
Φ(X ) ∶= ‖x 0 m − x̄(0)‖ 2 + Σ_{i=0}^{2 m −1} ∫ I i m ‖(∆ i m , w i m ) − (ẋ(t), ū(t))‖ 2 dt − ε̄ 2 ≤ 0 (8.6)

δ i m (X ) ∶= x i+1 m − x i m − h m ∆ i m = 0, i = 1, . . . , 2 m − 1 (8.7)

δ 0 m (X ) ∶= x 1 m − x 2 m m − h m ∆ 0 m = 0 (8.8)

h ij m (X ) ∶= ⟨x j * , x i m ⟩ − c ij m ≤ 0, i = 1, . . . , 2 m , j = 1, . . . , σ (8.9)

together with

X ∈ Ξ i m ∶= {X ∶ ∆ i m = Γ i m (x i m , w i m , ρ i m )}, i = 0, . . . , 2 m − 1 (8.10)

X ∈ B i m ∶= {X ∶ ‖ρ i m ‖ ≤ 1}, i = 0, . . . , 2 m − 1 (8.11)

X ∈ Ω i m ∶= {X ∶ w i m ∈ U }, i = 0, . . . , 2 m − 1 (8.12)

X ∈ Ω m ∶= {X ∶ Σ_{i=0}^{2 m −1} w i m = 0}. (8.13)
Recalling Theorem 7.1, the constraint (8.6) is eventually inactive and therefore will be neglected in the computations of necessary conditions. Applying classical results in mathematical programming we obtain a set of necessary conditions for (P m ) that read as follows.
There exist λ m > 0, ω m ∈ R d , ψ i m ∈ R d , ξ i m = (ξ i1 m , . . . , ξ iσ m ) ∈ R σ + , p i m ∈ X, and X * i ∈ X 2 m × X 2 m × X 2 m × X 2 m , i = 0, . . . , 2 m , such that

(8.14) X * i ∈ N Ξ i m (X̄ ) + N B i m (X̄ ) + N Ω i m (X̄ ), i = 0, . . . , 2 m − 1

(8.15) X * 2 m ∈ N Ω m (X̄ ),

where

N B i m (X̄ ) = (0, . . . , 0, β i m , 0, . . . , 0), with β i m ∈ N B i m (ρ̄ i m ),
N Ω i m (X̄ ) = (0, . . . , 0, ψ i m , 0, . . . , 0), with ψ i m ∈ N U (w̄ i m ),
N Ω m (X̄ ) = (0, . . . , 0, ω m , 0, . . . , 0), with ω m = (ω m , . . . , ω m ) ∈ R d ,

together with

(8.16) − Σ_{i=0}^{2 m } X * i ∈ λ m ∂J m (X̄ ) + Σ_{i=1}^{2 m } Σ_{j=1}^{σ} ξ ij m ∇h ij m (X̄ ) + Σ_{i=0}^{2 m −1} ∇δ i m (X̄ ) * p i m ,

where ξ ij m h ij m (X̄ ) = 0, i = 1, . . . , 2 m , j = 1, . . . , σ.
We now write componentwise the above expression, making first explicit the (sub)gradients. Invoking the nonsmooth chain rule (see, e.g., Theorem 10.6 and Example 10.8 in [33]) we obtain
∂J m (X̄ ) ⊂ ( h m D x g(t 0 m , x̄ 0 m , w̄ 0 m ) * ∂f 1 ( g(t 0 m , x̄ 0 m , w̄ 0 m ) − ∆̄ 0 m ) + κ m (x̄(0) − x̄ 0 m ),

{ h m D x g(t i m , x̄ i m , w̄ i m ) * ∂f 1 ( g(t i m , x̄ i m , w̄ i m ) − ∆̄ i m ) } i=1,...,2 m ;

{ h m D w g(t i m , x̄ i m , w̄ i m ) * ∂f 1 ( g(t i m , x̄ i m , w̄ i m ) − ∆̄ i m ) − D w f 2 (t i m , w̄ i m ) + κ m ∫ I i m (ū(t) − w̄ i m ) dt } i=0,...,2 m −1 ;

0 X 2 m −1 ;

{ −h m ∂f 1 ( g(t i m , x̄ i m , w̄ i m ) − ∆̄ i m ) + κ m ∫ I i m (ẋ(t) − ∆̄ i m ) dt } i=0,...,2 m −1 ).

Moreover,

[∇h ij m (X̄ )] x i = x j * , i = 1, . . . , 2 m , j = 1, . . . , σ,

[ Σ_{ℓ=0}^{2 m −1} ∇δ ℓ m (X̄ ) * p ℓ m ] x i = p i−1 m − p i m for 1 ≤ i ≤ 2 m − 1, and p 2 m −1 m − p 0 m for i = 2 m ,

[ Σ_{ℓ=0}^{2 m −1} ∇δ ℓ m (X̄ ) * p ℓ m ] ∆ i = −h m p i m , i = 0, . . . , 2 m − 1.
Thus we obtain from (8.16), for a suitable η ℓ m ∈ ∂f 1 (g(t ℓ m ,x ℓ m ,w ℓ m ) −∆ ℓ m ),
− Σ_{i=0}^{2 m } x * ℓ i = −x * ℓ ℓ = λ m h m D x g(t ℓ m , x̄ ℓ m , w̄ ℓ m ) * η ℓ m + Σ_{j=1}^{σ} ξ ℓj m x j * − p ℓ m + p ℓ−1 m , ℓ = 1, . . . , 2 m ,
where we have set p 2 m = p 0 m . For ℓ = 0, . . . , 2 m − 1 we have moreover
− Σ_{i=0}^{2 m } w * ℓ i = −w * ℓ ℓ = λ m h m [ D w g(t ℓ m , x̄ ℓ m , w̄ ℓ m ) * η ℓ m + u ℓ m − w̄ ℓ m + D w f 2 (t ℓ m , w̄ ℓ m ) ],

− Σ_{i=0}^{2 m } ρ * ℓ i = −ρ * ℓ ℓ = 0,

−∆ * ℓ ℓ = −h m p ℓ m − λ m h m ( η ℓ m + ∆̄ ℓ m − ⨏ I ℓ m ẋ(t) dt ).
Observe now that (8.14) and (8.15) can be rewritten as
(8.17) (x * i+1 i+1 , w * i i − ψ i m − ω m , ρ * i i − β i m , ∆ * i i ) ∈ N graph(Γ i ) (X̄ i ), i = 0, . . . , 2 m − 1,

for suitable vectors ψ i m ∈ N U (w̄ i ) and β i m ∈ N B X (ρ̄ i m ). Dividing the left hand side of (8.17) by h m and taking into account the above list of necessary conditions, one arrives at (8.1) and (8.2). The proof is concluded.
In the next result we obtain more explicit necessary conditions by computing the normal cone in the right hand side of (8.2).
Theorem 8.2. Let X̄ = (x̄ m , w̄ m , ρ̄ m , ∆̄ m ) be an optimal process for (P m ). Then there exist λ m ∈ R, ω m ∈ R d , ψ i m ∈ R d , β i m , η i m ∈ X, ξ i m = (ξ i1 m , . . . , ξ iσ m ) ∈ R σ + , p i m ∈ X, and X * m ∈ X 2 m × X 2 m × X 2 m × X 2 m , i = 0, . . . , 2 m , such that (8.1) and (8.5) hold, together with (8.18) λ m > 0, and, for i = 0, . . . , 2 m − 1,

(8.19) (p i+1 m − p i m )/h m = −D x g(t i m , x̄ i m , w̄ i m ) * (p i m − λ m ϑ x,i m ) + Σ_{j∈I i m} (ξ ij m /h m ) x j * + λ m κ i m (x 0 m − x̄(0)),

where I i m = { j = 1, . . . , σ ∶ ⟨x j * , x̄ i m ⟩ = c ij m }, κ i m is as in (8.3), and we have set

(8.20) ϑ x,i m ∶= ⨏ I i m ẋ(t) dt − (x̄ i+1 m − x̄ i m )/h m ,

(8.21) ψ i m /h m = λ m [ κ i m (u i m − w̄ i m ) − D w f 2 (t i m , w̄ i m ) − D w g(t i m , x̄ i m , w̄ i m ) * ϑ x,i m ] − ω m /h m + D w g(t i m , x̄ i m , w̄ i m ) * p i m ∈ N U (w̄ i m ),

(8.22) β i m /h m = ϑ r,i m [ p i m + λ m (η i m − ϑ x,i m ) ] ∈ N B X (ρ̄ i m ),
where we have set ϑ r,i m as in (8.23) below.

Proof. The computation of the normal cone to the graph of Γ i m , recalling (8.2), yields, for i = 0, . . . , 2 m − 1,

( κ i m (x̄(0) − x̄ 0 ) − λ m D x g(t i m , x̄ i m , w̄ i m ) * η i m − Σ_{j∈I i m} (ξ ij m /h m ) x j * + (p i+1 m − p i m )/h m ,
−λ m D w g(t i m , x̄ i m , w̄ i m ) * η i m + κ m (w̄ i m − u i m ) + D w f 2 (t i m , w̄ i m ) − ω m /h m − ψ i m /h m ,
−β i m /h m )
= − ( D x g(t i m , x̄ i m , w̄ i m ) * ; D w g(t i m , x̄ i m , w̄ i m ) * ; ϑ r,i m I ) [ p i m + λ m (η i m − ϑ x,i m ) ],

where I denotes the identity matrix in X. By computing the above product and recalling the terminal condition from Theorem 8.1, the assertions follow.
9. Proof of Theorem 2.2: passing to the limit
We conclude the proof of Theorem 2.2 by performing a limiting procedure along the necessary conditions for problems (P m ) that were proved in Theorem 8.2.
Proof of Theorem 2.2. Referring to the statement of Theorem 8.2, we set ‖ξ j m x j * ‖ L 1 (0,T ;X) + ‖ω * m ‖ + ‖ψ m ‖ L 1 (0,T ;X) = 1 and:

• p m (t) = p i m + (t − t i m )(p i+1 m − p i m )/h m , for t ∈ [t i m , t i+1 m ), i = 0, . . . , 2 m − 1
• p m (T ) = p m (0)
• ξ j m (t) = Σ_{i=0}^{2 m −1} (ξ ij m /h m ) 1 [t i m ,t i+1 m ) (t), t ∈ [0, T ), j = 1, . . . , σ
• ψ m (t) = Σ_{i=0}^{2 m −1} (ψ i m /h m ) 1 [t i m ,t i+1 m ) (t), t ∈ [0, T )
• η m (t) = Σ_{i=0}^{2 m −1} η i m 1 [t i m ,t i+1 m ) (t), t ∈ [0, T )
• β m (t) = Σ_{i=1}^{2 m −1} (β i m /h m ) 1 [t i m ,t i+1 m ) (t), t ∈ [0, T )
• ϑ m (t) = Σ_{i=0}^{2 m −1}
By compactness, there exists a subsequence, that we do not relabel, and there exist λ ≥ 0, ω ∈ X, dξ j ∈ C * + ([0, T ]; X), j = 1, . . . , σ, such that λ m → λ, ω * m → ω, and

Σ_{j=1}^{σ} x j * dξ j m → Σ_{j=1}^{σ} x j * dξ j in C * ([0, T ]; X).
Observe that, thanks to PLICQ and to the complementarity conditions (8.1) we have also dξ j m → dξ j in C * ([0, T ]; X).
The main point of the proof is showing that the sequence {p m ∶ m ∈ N} is uniformly bounded in W 1,1 ([0, T ]; X), so that a subsequence of {p m } will converge weakly to a BV function p. This fact, in turn, will imply that the further sequences {ψ m } and {β m } will converge (strongly) in the appropriate spaces, thanks to (8.21) and (8.22). The convergence argument will be divided into three steps.
Step 1. The sequence {p m ∶ m ∈ N} is bounded in L ∞ ([0, T ]; X). Proof of Step 1. We start by rewriting (8.19) as
(9.3) p i+1 m = [ I − h m D x g(t i m , x̄ i m , w̄ i m ) * ] p i m + Σ_{j=1}^{σ} ξ ij m x j * + λ m h m D x g(t i m , x̄ i m , w̄ i m ) * ϑ x,i m + λ m h m κ i m (x 0 m − x̄(0)), i = 0, . . . , 2 m − 1,
where we recall that p 2 m m = p 0 m . Set γ i m = ‖p i m ‖, i = 0, . . . , 2 m − 1, m ∈ N. By (8.24) and (9.2), γ 0 m ≤ 1 for all m ∈ N. By (9.3) we obtain

γ 1 m ≤ (1 + h m L ′ ) + λ m h m L ′ ‖ϑ x,1 m ‖ + Σ_{j=1}^{σ} ξ 1j m ‖x j * ‖ + λ m h m Λ =∶ d 1 m ,
and, for i = 2, . . . , 2 m − 1,
γ i m ≤ (1 + h m L ′ ) γ i−1 m + λ m h m L ′ ‖ϑ x,i m ‖ + Σ_{j=1}^{σ} ξ ij m ‖x j * ‖ =∶ (1 + h m L ′ ) γ i−1 m + d i m .
By induction, we obtain from the above conditions that, for each k = 1, . . . , 2 m − 1
γ k m ≤ Σ_{i=1}^{k} d i m (1 + h m L ′ ) k−i = Σ_{ℓ=0}^{k−1} d k−ℓ m (1 + h m L ′ ) ℓ ≤ e T L ′ Σ_{i=1}^{k} d i m ,
Therefore, for each k = 1, . . . , 2 m − 1, recalling (9.1),
γ k m ≤ e T L ′ [ λ m h m (L ′ + 1) Λ + Σ_{i=1}^{2 m } Σ_{j=1}^{σ} ξ ij m ‖x j * ‖ ].
Therefore, the sequence {γ k m ∶ k = 0, . . . , 2 m } is bounded uniformly w.r.t. m, and the proof of Step 1 is concluded.
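The induction above is a discrete Gronwall argument: a recursion γ k ≤ (1 + h L′) γ k−1 + d k propagates to γ k ≤ e T L′ Σ i d i , since (1 + h L′) k ≤ e k h L′ ≤ e T L′ for k h ≤ T. A quick numerical sanity check of this bound (all constants and the d k below are invented, for illustration only):

```python
import math

# Discrete Gronwall check:  if g_k = (1 + h*L)*g_{k-1} + d_k with g_0 = 0
# and d_k >= 0, then g_k <= exp(T*L) * sum(d_1..d_k) for k*h <= T.
# L, T, n and the d_k are arbitrary illustrative choices.

L, T, n = 2.0, 1.0, 256
h = T / n
d = [h * (1.0 + math.sin(5.0 * k * h)) for k in range(1, n + 1)]

g, s = 0.0, 0.0
for dk in d:
    g = (1.0 + h * L) * g + dk   # the recursion
    s += dk                      # running sum of the d_k
    assert g <= math.exp(T * L) * s + 1e-12
```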
Step 2. The sequence {ṗ m ∶ m ∈ N} is bounded in L 1 (0, T ; X) uniformly w.r.t. m. Proof of Step 2. By (8.19) and
Step 1,
‖ṗ m ‖ L 1 ≤ L ′ λ m Λ + L ′ c + Σ_{j=1}^{σ} ‖ξ j m x j * ‖ L 1 (0,T ;X) ,
for a suitable constant c. Recalling (9.2), the claim follows.
Up to taking another subsequence, by standard compactness results we can now assume that p m (t) → p(t) for all t ∈ [0, T ] and ṗ m dt ⇀* dp in C * ([0, T ]; X), for a suitable BV function p ∶ [0, T ] → X.
Step 3. The sequence {ψ m } converges strongly in L 1 (0, T ; X) to a function ψ that satisfies the weak maximality condition. Furthermore, β m → 0 strongly in L 1 (0, T ; X). We recall that, by (7.2), κ m (w̄ m − ū) → 0 in L 1 (0, T ; X). Therefore, recalling also Lemma 6.1 and the above discussion of the convergence of ϑ m , ω * m , p m , the right-hand side of (9.4) converges strongly in L 1 (0, T ; X) to

−ω + D w g(t, x̄(t), w̄(t)) * p(t) − λ D w f 2 (t, ū(t)) =∶ ψ(t).

By the graph closedness of the normal cone N U we obtain also that ψ(t) ∈ N U (ū(t)) a.e., hence concluding the proof of Step 3. The above arguments also allow us to pass to the limit along (8.19) and (8.24) in the suitable topologies and obtain the adjoint equation and the transversality condition.
We are therefore left with proving the nontriviality and the support conditions. To prove the first one, suppose by contradiction that both λ and p vanish. Then, by the adjoint equation, Σ_{j=1}^{σ} x j * dξ j = 0. Therefore, by the weak maximality condition, ψ(t) ≡ −ω is constant. Assume by contradiction that ω ≠ 0, so that necessarily ū(t) ∈ ∂U for a.e. t. Then, since 0 ∈ int U ,

⟨ω, ū(t)⟩ < ⟨ω, 0⟩ = 0 for a.e. t ∈ [0, T ].

By integrating the above inequality we contradict the assumption that ∫ T 0 ū(t) dt = 0. The above argument, therefore, shows that for all m large enough (9.2) must be violated, hence concluding the proof of the nontriviality condition.
To prove the support condition, fix j = 1, . . . , σ and set
E j ∶= {t ∶ ⟨x j * , x̄(t)⟩ < c j }.

Assume that E j ≠ ∅ and let K ⊂ E j be compact. For all m large enough, ⟨x j * , x̄ m (t)⟩ < c j m (t) for all t ∈ K. By (8.19), ξ j m (t) = 0 on K, so that the support condition is proved. The proof of Theorem 2.2 is concluded.
Figure 1. A model of soft crawler.
(4.1), with J(x, û) = J(x, u) if f 2 = 0, and J(x, û) > J(x, u) if f 2 = (1/2)‖u‖ 2 . Moreover û has bounded variation in [t 1 , t 4 ]. We notice that, by the compactness of [0, T ], with a finite number of such modifications, we can obtain the control pair (x̃, ũ) as desired. Furthermore, by considering sufficiently small intervals, due to the Lipschitz continuity of x(t), we can obtain an arbitrarily small ‖x − x̃‖ L ∞ [0,T ] .

4.2. Remarks on the necessary conditions of Theorem 2.2.
Lemma 6.1. Let T > 0 and ū ∈ L 1 (0, T ; R n ). For each m ∈ N, set t i m = iT /2 m , i = 0, 1, . . . , 2 m , and
converges to C with respect to the Hausdorff metric, uniformly w.r.t. t, and C i m satisfies PLICQ for all i = 0, . . . , σ, provided m is large enough; d) r m → 0 a.e. on [0, T ] and r m ∞ is bounded uniformly w.r.t. m; e) x m →x strongly in W 1,2 ([0, T ]; X). f ) J(x m , u m ) → J(x,ū); g) ∫
T 0 u m (t) dt = 0.

Proof. Fix m ∈ N and define, for i = 0, . . . , 2 m − 1 and t ∈ I i m ,
We claim that c j m (t) → c j uniformly on [0, T ] as m → ∞. Indeed, set for t ∈ [0, T ] τ m (t) ∶= max{t i m ∶ i = 0, . . . , 2 m − 1, t i m ≤ t} and I m (t) = [τ m (t), τ m (t) + h m ),
. . . , 2 m such that (8.1) and (8.5) hold, together with (8.18) λ m > 0 and, for i = 0, . . . , 2 m − 1,
(8.23) ϑ r,i m ∶= ⨏ I i m r m (t) dt. Moreover, in (8.19) we have (8.24) p 2 m m = p 0 m .
1 [t i m ,t i+1 m ) (t), t ∈ [0, T ), where ϑ i m = (ϑ x,i m , ϑ r,i m )
• ω * m = ω m /h m .

Observe first that, by (8.20), (8.23), and (7.2), ϑ m → 0 in L 1 (0, T ; X 2 ) and κ m (x 0 m − x̄(0)) → 0. In particular, there exists Λ ∈ R such that (9.1) holds for every m. Since all conditions appearing in the statement of Theorem 8.2 are positively homogeneous of degree one, thanks to (8.18) we assume without loss of generality that

(9.2) λ m + ‖p m (T )‖
Proof of Step 3. We obtain from (8.21) that, for all m and all i = 0, . . . , 2 m − 1,

(9.4) ψ i m /h m = −ω * m + D w g(t i m , x̄ i m , w̄ i m ) * p i m + λ m [ κ m (u i m − w̄ i m ) − D w g(t i m , x̄ i m , w̄ i m ) * ϑ x,i m − D w f 2 (t i m , w̄ i m ) ] ∈ N U (w̄ i m ).
The research was partially carried out while P.G. was a postdoc at the I.N.d.A.M. Unit of the University of Padova, with a fellowship of the Istituto Nazionale di Alta Matematica in the framework of the MathTech project. G.C. is partially supported by the Padua University grant SID 2018 "Controllability, stabilizability and infimum gaps for control systems", BIRD 187147, and is affiliated to Istituto Nazionale di Alta Matematica (GNAMPA). P.G. is partially supported by the GAČ-FWF project 19-29646L.

Let {(x m , u m )} be the sequence of approximations of (x̄, ū) constructed according to Lemma 6.1 and Theorem 6.2 and set, for m ∈ N,

Consider the following family of finite dimensional optimization problems (P m ):

Problem (P m ). Let ε̄ be given by the definition of local W 1,2 -optimal process; let m ∈ N be given; let I i m , i = 0, . . . , 2 m − 1, and u m be as in Lemma 6.1; let r m (⋅) and c ji m , j = 1, . . . , σ, i = 0, . . . , 2 m − 1, be as in Theorem 6.2, and set c j2 m

By standard finite dimensional programming arguments, problem (P m ) admits optimal processes.

The following result is pivotal in the method of discrete approximations. It is similar to, e.g., Theorem 4.3 in [12], with a difference: we do not assume relaxation stability. Indeed our method allows us to treat a non-concave integral functional, such as the functional that appears in Sections 3.4 and 4, without passing through the relaxed problem.

Theorem 7.1. Let the assumptions of Theorem 2.2 hold. Let (x̄, ū) be a W 1,2 -optimal process for Problem (P) and let (x̄ m , ū m ) be optimal processes for Problems (P m ). With an abuse of notation, consider x̄ m , resp. ū m , as piecewise affinely, resp. piecewise constantly, extended to the whole of
References

[1] D. Agostinelli, F. Alouges, A. DeSimone, Peristaltic waves as optimal gaits in metameric bio-inspired robots, Frontiers in Robotics and AI 5, 99 (2018).
[2] Ch. E. Arroud, G. Colombo, A Maximum Principle for the Controlled Sweeping Process, Set-Valued Var. Anal. 26 (2018), 607-629.
[3] A. Arutyunov and D. Karamzin, A Survey on Regularity Conditions for State-Constrained Optimal Control Problems and the Non-degenerate Maximum Principle, J. Optim. Theory Appl. 184 (2020), 697-723.
[4] J-P. Aubin, Viability theory, Birkhäuser (1991).
[5] F. Bagagiolo, R. Maggistro, M. Zoppello, Swimming by switching, Meccanica 52(14): 3499-3511 (2017).
[6] C. Behn, Adaptive control of straight worms without derivative measurement, Multibody Syst. Dyn. 26 (2011) 213-243.
[7] C. Behn, Adaptive control of singularly perturbed worm-like locomotion systems, Differ. Equ. Dyn. Syst. 21 (2013) 59-69.
[8] N. Bolotnik, M. Pivovarov, I. Zeidis and K. Zimmermann, The undulatory motion of a chain of particles in a resistive medium, ZAMM Z. Angew. Math. Mech. 91 (2011) 259-275.
[9] M. Brokate and P. Krejčí, Optimal control of ODE systems involving a rate independent variational inequality, Discrete Contin. Dyn. Syst. Ser. B 18 (2013), 331-348.
[10] M. Brokate, P. Krejčí and H. Schnabel, On uniqueness in evolution quasivariational inequalities, Journal of Convex Analysis 11 (2004), 111-130.
[11] G. Colombo, R. Henrion, Nguyen D. Hoang, B. S. Mordukhovich, Optimal control of the sweeping process over polyhedral controlled sets, J. Differential Equations 260 (2016), 3397-3447.
[12] G. Colombo, B. Sh. Mordukhovich, Nguyen Tr. Dao Nguyen, Optimization of a perturbed sweeping process by constrained discontinuous controls, submitted (2018), 26 pp.
[13] G. Colombo and M. Palladino, The minimum time function for the controlled Moreau's sweeping process, SIAM J. Control 54 (2016), 2036-2062.
[14] G. Colombo, P. Gidoni, and E. Vilches, Stabilization of periodic sweeping processes and asymptotic average speed for soft locomotors with dry friction, in preparation.
[15] G. Colombo, L. Thibault, Prox-regular sets and applications, in Handbook of nonconvex analysis and applications, 99-182, D. Y. Gao and D. Motreanu eds., Int. Press (2010).
[16] M. d. R. de Pinho, M. M. A. Ferreira, G. V. Smirnov, Optimal control involving sweeping processes, Set-Valued Var. Anal. 27 (2019), 523-548.
[17] A. DeSimone, P. Gidoni and G. Noselli, Liquid crystal elastomer strips as soft crawlers, Journal of the Mechanics and Physics of Solids 84 (2015), 254-272, doi: 10.1016/j.jmps.2015.07.017.
[18] A. DeSimone and A. Tatone, Crawling motility through the analysis of model locomotors: two case studies, Eur. Phys. J. E. 35 (2012).
[19] P. Gidoni, Rate-independent soft crawlers, Quart. J. Mech. Appl. Math. 71 (2018), 369-409.
[20] P. Gidoni and A. DeSimone, Stasis domains and slip surfaces in locomotion of a bio-inspired two-segment crawler, Meccanica 52 (2017) 587-601.
[21] P. Gidoni and A. DeSimone, On the genesis of directional friction through bristle-like mediating elements, ESAIM Control Optim. Calc. Var. 23 (2017) 1023-1046.
[22] K. Jung, J. C. Koo, et al., Artificial annelid robot driven by soft actuators, Bioinspiration Biomim. 2 (2007) S42-S49.
[23] P. Krejčí, Hysteresis, Convexity and Dissipation in Hyperbolic Equations, Gattotoscho, 1996.
[24] C. Laschi, B. Mazzolai, Lessons from Animals and Plants: The Symbiosis of Morphological Computation and Soft Robotics, IEEE Robot. Automat. Mag. 23(3): 107-114 (2016).
[25] I. Gudoshnikov and O. Makarenkov, Structurally stable families of periodic solutions in sweeping processes of networks of elastoplastic springs, submitted.
[26] I. Gudoshnikov, M. Kamenskii, O. Makarenkov and N. Voskovskaia, One-period stability analysis of polygonal sweeping processes with application to an elastoplastic model, Mathematical Modelling of Natural Phenomena, in press.
[27] M. Mazade and L. Thibault, Regularization of differential variational inequalities with locally prox-regular sets, Math. Program. 139 (2013), Ser. B, 243-269.
[28] A. Mielke and F. Theil, On rate-independent hysteresis models, NoDEA Nonlinear Differential Equations Appl. 11 (2004) 151-189.
[29] A. Mielke and T. Roubíček, Rate-independent Systems, Theory and Application, Springer, New York (2015).
[30] A. Montino and A. DeSimone, Dynamics and optimal actuation of a three-sphere low-Reynolds number swimmer with muscle-like arms, Acta Appl. Math. 149 (2017) 53-86.
[31] B. Sh. Mordukhovich, Variational Analysis and Generalized Differentiation, I: Basic Theory, Springer (2006).
[32] G. Noselli and A. DeSimone, A robotic crawler exploiting directional frictional interactions: Experiments, numerics and derivation of a reduced model, Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 470 (2014) 20140333.
[33] R. T. Rockafellar, R. J-B. Wets, Variational Analysis, Springer (1998).
[34] M. Schulke, L. Hartmann, and C. Behn, Worm-like locomotion systems: Development of drives and selective anisotropic friction structures, Proc. 56th Int. Scientific Colloq., Ilmenau, Germany, Sep. 2011.
[35] V. Vikas, P. Grover and B. Trimmer, Model-free control framework for multi-limb soft robots, 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (2015) 1111-1116.
[36] R. B. Vinter, Optimal Control, Birkhäuser, Boston (2000).
[37] G. L. Wagner and E. Lauga, Crawling scallop: friction-based locomotion with one degree of freedom, J. Theoret. Biol. 324 (2013) 42-51.
[38] M. Zoppello and F. Cardin, Swim-like motion of bodies immersed in an ideal fluid, ESAIM: COCV 25(16) (2019).

(Giovanni Colombo) Università di Padova, Dipartimento di Matematica "Tullio Levi-Civita", via Trieste 63, 35121 Padova, Italy. E-mail address: [email protected]

(Paolo Gidoni) Czech Academy of Sciences, Institute of Information Theory and Automation (UTIA), pod vodárenskou veží 4, 182 08, Prague 8, Czech Republic. E-mail address: [email protected]
Zero-shot causal learning
Hamed Nilforoshan
Department of Computer Science
Stanford University
Michael Moor [email protected]
Department of Computer Science
Stanford University
Yusuf Roohani
Department of Biomedical Data Science
Stanford University
Yining Chen
Department of Computer Science
Stanford University
Anja Šurina
Department of Neuroinformatics
ETH Zürich
Michihiro Yasunaga
Department of Computer Science
Stanford University
Sara Oblak
Department of Computer Science
University of Ljubljana
Jure Leskovec
Department of Computer Science
Stanford University
Predicting how different interventions will causally affect a specific individual is important in a variety of domains such as personalized medicine, public policy, and online marketing. There are a large number of methods to predict the effect of an existing intervention based on historical data from individuals who received it. However, in many settings it is important to predict the effects of novel interventions (e.g., a newly invented drug), which these methods do not address. Here, we consider zero-shot causal learning: predicting the personalized effects of a novel intervention. We propose CaML, a causal meta-learning framework which formulates the personalized prediction of each intervention's effect as a task. CaML trains a single meta-model across thousands of tasks, each constructed by sampling an intervention along with its recipients and nonrecipients. By leveraging both intervention information (e.g., a drug's attributes) and individual features (e.g., a patient's history), CaML is able to predict the personalized effects of novel interventions that do not exist at the time of training. Experimental results on real-world datasets in large-scale medical claims and cell-line perturbations demonstrate the effectiveness of our approach. Most strikingly, CaML's zero-shot predictions outperform even strong baselines trained directly on data from the test interventions.

Preprint. Under review.

Introduction

Personalized predictions about how an intervention will causally affect a specific individual are important across many high-impact applications in the physical, life, and social sciences. For instance, consider a doctor deciding whether or not to prescribe a drug to a patient. Depending on the patient, the same drug could either (a) cure the disease, (b) have no effect, or (c) elicit a life-threatening adverse reaction. Predicting which effect the drug will have for each patient could revolutionize healthcare by enabling personalized treatment for each patient.

The causal inference literature formalizes this problem as conditional average treatment effect (CATE) estimation, in which the goal is to predict the effect of an intervention, conditioned on patient characteristics (X). When natural experiment data is available, consisting of individuals who already did and did not receive an intervention, a variety of CATE estimators exist to accomplish this task [1,2,14,21,28,32,42,53,34,66]. These methods can then predict the effect of an existing intervention (W) on a new individual (X′).

However, in many real-world applications natural experiment data is entirely unavailable, and yet CATE estimation is critical. For instance, when new drugs are discovered, or new government policies are passed, it is important to know the effect of these novel interventions on individuals and subgroups in advance, i.e., before anybody is treated. There is thus a need for methods that can predict the effect of a novel intervention (W′) on a new individual (X′) in a zero-shot fashion, i.e., without relying on any historical data from individuals who received the intervention.
Generalizing to novel interventions is especially challenging because it requires generalizing across two dimensions simultaneously: to new interventions and new individuals. This entails efficiently "aligning" newly observed interventions to the ones previously observed in the training data. A zero-shot causal learning framework thus requires drawing analogies between the properties of interventions, and linking these to the features and outcomes of individual samples.
Present work. Here, we propose CaML (Causal Meta-learning), a general framework for training a single meta-model to estimate CATE across many interventions, including novel interventions that did not exist at the time of model training (Figure 1). Our key insight is to frame CATE estimation for each intervention as a separate meta-learning task. For each task observed during training, we sample a retrospective natural experiment of individuals who did and who did not receive the intervention. This natural experiment data is used to estimate the effect of the intervention for each individual (using any off-the-shelf CATE estimator), which serves as the training target for the task.
In order to achieve zero-shot generalization to new interventions, we include information (W ) about the intervention (e.g., a drug's attributes), in the task. We then train a single meta-model which fuses intervention features with individual-level features (X) to predict the intervention's effect. Our approach allows us to predict the causal effect of novel interventions, i.e. interventions without sample-level training data, such as a newly discovered drug ( Figure 1). We refer to this capability as zero-shot causal learning.
Experiments across different real-world settings show that CaML is both scalable and effective, including the application to a large-scale medical dataset featuring tens of millions of patients. Most strikingly, CaML's zero-shot performance exceeds even strong baselines that were trained directly on data from the test interventions. We further find that CaML is capable of zero-shot generalization even under challenging conditions: when trained only on single interventions, at inference time it can accurately predict the effect of combinations of novel interventions. Finally, we explain these findings, by proving a zero-shot generalization bound.
Related Work
We discuss recent work most closely related to zero-shot causal learning, and provide an extended discussion of other related work in Appendix B. Most CATE estimators do not address novel interventions, requiring that all considered interventions be observed during training. A notable exception is recent methods which estimate CATE for an intervention using structured information about its attributes [23,33]. In principle, these methods can also be used for zero-shot predictions. They estimate CATE directly from the raw triplets (W, X, Y), without considering natural experiments, by tailoring specific existing CATE estimators (the S-learner [42] and the Robinson decomposition [53], respectively) to structured treatments. The main drawback of these approaches is that they are inflexible: they are restricted to a single estimator and are unable to take advantage of recent advances in the broader CATE estimation literature (e.g., recently developed binary treatment estimators [14,19,38]). This is a limitation because any single CATE estimator can be unstable across different settings [13]. Notably, the estimators which these methods build on have already been shown to incur high bias in many domains [42,35,9,14]. Likewise, we find that these methods struggle with zero-shot predictions (Section 6). CaML's key difference from prior work is that we construct a separate task for each training intervention by synthesizing natural experiments. This allows us to (a) flexibly wrap any existing CATE estimator to obtain labels for each task, and thus take advantage of the most recent CATE estimation methods, and (b) leverage meta-learning, which requires task-structured data. Consequently, CaML is able to achieve strong zero-shot performance (Section 6).
Background: Single-intervention CATE estimation
Figure 1: Overview of the zero-shot causal learning problem. Each individual has features (X), an intervention with features (W), and an outcome (Y). Lightning bolts represent interventions (e.g., drugs). The personalized effect of an intervention (τ) is always unobserved. The goal is to predict τ for a novel intervention (W′) and individual (X′) that did not exist during training.

Each task in the CaML framework consists of estimating conditional average treatment effects (CATEs) for a single binary treatment. In this section, we first provide background on CATE estimation in this simple case of a single treatment (W) and outcome (Y), and subsequently generalize it to our zero-shot setting. Under a single intervention and outcome, we consider n independent observations P_1, . . . , P_n drawn from a distribution P. For unit i = 1, . . . , n, P_i = (W_i, X_i, Y_i) ∼ P collects: a binary or continuous outcome of interest Y_i ∈ Y ⊂ R, instance features X_i ∈ X ⊂ R^d, and a treatment-assignment indicator W_i ∈ {0, 1}. We use the Neyman-Rubin potential outcomes framework [31], in which Y_i(1) and Y_i(0) denote the outcome of interest under treatment (W_i = 1) and under control (W_i = 0), respectively. In our running medical example, Y_i(1) is health status if exposed to the drug, and Y_i(0) is health status if not exposed to the drug. Notably, the fundamental problem of causal inference is that we only observe one of the two potential outcomes, as Y_i = W_i · Y_i(1) + (1 − W_i) · Y_i(0) (e.g., either health status with or without drug exposure can be observed for a specific individual, depending on whether they are prescribed the drug). However, it is possible to make personalized decisions by estimating treatment effects that are tailored to the attributes of individuals (based on features X). Thus, we focus on estimating τ(x), known as the conditional average treatment effect (CATE):
CATE = τ(x) = E_P[Y(1) − Y(0) | X = x]    (1)
A variety of methods have been developed to estimate τ(x) from observational data [14]. These rely on the standard assumptions of unconfoundedness, consistency, and overlap [50]. Unconfoundedness: there are no unobserved confounders, i.e., Y_i(0), Y_i(1) ⊥⊥ W_i | X_i. Consistency: Y_i = Y_i(W_i), i.e., treatment assignment determines whether Y_i(1) or Y_i(0) is observed. Overlap: treatment assignment is nondeterministic, such that for all x in the support of X, 0 < P(W_i = 1 | X_i = x) < 1.
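To make these assumptions concrete, the following fully synthetic sketch simulates a randomized design, so that unconfoundedness and overlap hold by construction, and recovers τ(x) by a within-stratum difference in means. All data and names here are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: binary covariate X, coin-flip treatment W (overlap and
# unconfoundedness hold by construction), true CATE is 2*x.
n = 20000
X = rng.integers(0, 2, size=n)
W = rng.integers(0, 2, size=n)
Y = X + 2.0 * X * W + rng.normal(0.0, 0.1, size=n)  # Y(w) = x + 2*x*w + noise

def cate_hat(x):
    """Difference in mean outcomes between treated and control in stratum X = x."""
    s = X == x
    return Y[s & (W == 1)].mean() - Y[s & (W == 0)].mean()
```

Here cate_hat(1) is close to 2 and cate_hat(0) is close to 0, matching the true τ(x) = 2x.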
Zero-shot causal learning
In many real-world settings (e.g., drugs, online A/B tests), novel interventions are frequently introduced, for which no natural experiment data are available. These settings require zero-shot CATE estimates. The zero-shot CATE estimation problem extends the setting of the prior section, except that the intervention variable W_i is no longer binary, but rather contains rich information about the intervention: W_i ∈ W ⊂ R^e (e.g., a drug's chemistry), where W_i = 0 corresponds to a sample that did not receive any intervention. Thus, each intervention value w has its own CATE function that we seek to estimate:

CATE_w = τ_w(x) = E_P[Y(w) − Y(0) | X = x],    (2)
During training, we observe n independent observations P_1, . . . , P_n drawn from a distribution P, with each P_i = (W_i, X_i, Y_i) ∼ P.
Let W_seen be the set of all interventions observed during training. The zero-shot CATE estimation task consists of estimating CATE for a novel intervention that was never observed during training:

Problem 1 (Zero-shot CATE estimation). Given n training observations (W_1, X_1, Y_1), . . . , (W_n, X_n, Y_n) drawn from P, containing intervention information, individual features, and outcomes, estimate τ_w(x) for a novel intervention w ∉ W_seen.
This problem formulation extends in a straightforward manner to combinations of interventions, by allowing a single intervention W_i to consist of a set of intervention vectors. CaML supports combinations of interventions, as we elaborate on in Section 4.1.
CaML overview. We propose a novel framework for estimating CATE across multiple interventions, even including ones that were never encountered during training. Our framework consists of three key components (Figure 2). First, we formulate CATE estimation as a meta-learning problem in which each task corresponds to CATE estimation for a unique intervention. A task dataset for a given intervention is constructed by sampling a natural experiment of all individuals who received the intervention, together with a sample of individuals who did not. Tasks are augmented with intervention information (W). Synthesizing these natural experiments allows us to compute a noisy CATE label τ̃ using any off-the-shelf estimator (τ̃ is referred to as a pseudo-outcome in the causal inference literature [14]). Finally, we train a single meta-model to predict these labels using individual-level (X) and intervention-level (W) information, such that it is able to generalize to novel tasks, i.e., estimating CATE for novel interventions.

Figure 2: Visual illustration of the CaML (causal meta-learning) framework. (1) We sample a task (i.e., an intervention) and a natural experiment from the training data, consisting of individuals who either received the intervention or did not. Each individual has features (X) and an outcome (Y), and the intervention itself has features (W) (e.g., a drug's attributes). (2) For each individual, we estimate the effect of the intervention on the outcome (pseudo-outcomes τ̃). (3) We predict an individual's pseudo-outcome τ̃ using a model that fuses X and W. CaML is trained by repeating this procedure across many tasks and corresponding natural experiments.
The CaML framework incorporates three important design considerations: (1) Single meta-model. In domains such as electronic health records and online marketing, we observe that large-scale datasets contain thousands of interventions with rich feature information (W ). Instead of training a separate model for each intervention, CaML trains a single meta-model that can estimate CATE across all interventions. This approach lets us leverage shared structure across tasks and generalize to novel interventions that were not present during training.
(2) Pseudo-outcomes. Rather than directly modeling the response surfaces E[Y(w) | X = x] and E[Y(0) | X = x], we train our model using pseudo-outcomes for each intervention. This decision is based on recent research that highlights the estimation bias incurred when inferring CATE from direct predictions of observed outcomes [9, 42]. CaML outperforms strong baselines that meta-learn these outcomes directly, as demonstrated in our experiments (see Tables 2 and 3, rows S-learner and T-learner with meta-learning).

CaML identifies CATE for novel interventions under the assumptions that: (1) for each observed intervention w, τ_w(x) is identifiable under the binary treatment assumptions (unconfoundedness, consistency, and overlap) in Section 3, which allows for valid training labels for each task; (2) τ_w(x) = τ(w, x), i.e., a global function τ(w, x) unifies all intervention-specific CATE functions; (3) τ(w, x) is continuous in w, which allows the model to smoothly extrapolate the treatment effect to new interventions that are close to observed interventions in the intervention space; and lastly, (4) W follows a continuous distribution.
Meta-dataset
We formulate CATE estimation as a meta-learning problem. For this, each task refers to CATE estimation for a distinct intervention. Interventions as well as tasks in our meta-dataset are jointly indexed by j ∈ N with 1 ≤ j ≤ K, such that we can refer to the j-th intervention's features as w^(j).
We then construct a meta-dataset D in the following way:
D = {(D^(j)_treated ∪ D^(j)_control, w^(j))}_{j=1}^{K}, with    (3)

D^(j)_treated = {(X_i, Y_i) | W_i = w^(j)}  and  D^(j)_control = {(X_i, Y_i) | W_i = 0}.    (4)

D^(j) denotes the natural experiment dataset for task j, composed of a treated group (instances which received the intervention, i.e., W_i = w^(j)) and a control group (instances which did not receive any intervention, i.e., W_i = 0). Each sample i represents an individual, for which the quantities (X_i, Y_i) are collected as introduced in Section 3. In practice, we down-sample both groups (i.e., to 1 million samples for the treated and control groups) in our large-scale experiments.
We augment each task dataset D^(j) with intervention information w^(j) ∈ R^e for zero-shot generalization to new interventions [33,16,82,37]. The form of w^(j) varies with the problem domain: for text interventions, it could be a language model's text embedding [74,79,55], while biomedical treatments can be represented as nodes in a knowledge graph [7,47]. Additionally, domain-specific features, such as treatment categories from an ontology, may be included in w^(j). To handle combinations of interventions (e.g., pairs of drugs), we aggregate the w vectors of the constituent interventions using an order-invariant pooling operation (we used the sum operator), and sample a separate natural experiment for individuals who received the full combination.
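The meta-dataset construction of Eqs. (3)-(4), including sum-pooled features for combinations, can be sketched as follows. Function and variable names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def build_meta_dataset(X, Y, W_ids, drug_emb):
    """Build {(D_treated, D_control, w)} tasks from raw samples.

    W_ids[i] is a tuple of intervention ids for sample i (the empty tuple
    means untreated); drug_emb maps an id to its feature vector.
    Combination features are sum-pooled, as described in the text.
    """
    control = [i for i, ids in enumerate(W_ids) if ids == ()]
    tasks = {}
    for ids in set(W_ids) - {()}:
        treated = [i for i, t in enumerate(W_ids) if t == ids]
        w = np.sum([drug_emb[d] for d in ids], axis=0)  # order-invariant pooling
        tasks[ids] = ((X[treated], Y[treated]), (X[control], Y[control]), w)
    return tasks
```

Each task then pairs a treated group with the shared control pool and carries its pooled intervention features w^(j).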
Estimating pseudo-outcomes
We next estimate the training targets for each task (i.e., intervention) in the meta-dataset. The training target (τ̃^(j)) is an unbiased but noisy estimate of CATE. More formally, for each task j (which points to the natural experiment dataset for intervention w^(j)), we estimate τ̃^(j), where E_P[τ̃^(j) | X = x] = τ_{w^(j)}(x). Thus, τ̃^(j)_i denotes the target for the i-th sample in the j-th task (indexing will be omitted when it is clear from context). We refer to these targets as pseudo-outcomes, following prior literature [14]. For more details on pseudo-outcomes, refer to Section B in the appendix.
CaML is agnostic to the specific choice of pseudo-outcome estimator. Thus, we assume a function η(D^(j)) which takes as input a task dataset D^(j) ∈ D and returns a vector containing the pseudo-outcomes τ̃ for each sample in the task. We extend each task dataset D^(j) with the pseudo-outcomes, such that a sample holds the elements (X_i, Y_i, τ̃_i). Our key insight is that by collecting these pseudo-outcomes across multiple tasks, and predicting them using a combination of intervention and individual information (W, X), we can develop a CATE estimator which generalizes to novel interventions. In practice, we use the RA-learner [15] and treat pseudo-outcome estimation as a data pre-processing step (Appendix C.6).
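As an illustration, regression-adjustment pseudo-outcomes in the style of the RA-learner can be computed as below. The linear outcome models and the exact estimator form are simplifying assumptions of this sketch; the paper's actual estimator may differ in details:

```python
import numpy as np

def ra_pseudo_outcomes(X, W, Y):
    """Regression-adjustment pseudo-outcomes for one task's natural experiment.

    Outcome models mu0, mu1 are plain linear regressions for illustration;
    any regressor can be plugged in.
    For treated units:  tau_i = Y_i - mu0(X_i)
    For control units:  tau_i = mu1(X_i) - Y_i
    """
    Xb = np.column_stack([X, np.ones(len(X))])  # add intercept column
    beta0, *_ = np.linalg.lstsq(Xb[W == 0], Y[W == 0], rcond=None)
    beta1, *_ = np.linalg.lstsq(Xb[W == 1], Y[W == 1], rcond=None)
    mu0, mu1 = Xb @ beta0, Xb @ beta1
    return np.where(W == 1, Y - mu0, mu1 - Y)
```

On data with a constant treatment effect, all pseudo-outcomes recover that effect up to regression error.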
Algorithm 1 The CaML algorithm

Require: meta-dataset D, meta-model Ψ_θ with initialized parameters θ, hyperparameter k.
for iteration = 1, 2, . . . , L do
    j ← SampleTask()
    (D^(j)_treat, D^(j)_ctrl, w^(j)) ← QueryTaskData(j)
    τ̃^(j) ← EstimatePseudoOutcomes(D^(j)_treat, D^(j)_ctrl)
    θ′ ← ADAPT((D^(j)_treat, D^(j)_ctrl), τ̃^(j), w^(j), Ψ_θ, k)
    g ← θ − θ′    {Reptile gradient}
    θ ← θ − βg    {Gradient step for meta-model Ψ_θ}
end for
return Ψ_θ

function ADAPT(data D, pseudo-outcomes τ̃, intervention features w, model Ψ_θ, number of steps k)
    Ψ_θ′ ← copy of Ψ_θ
    for s = 1, 2, . . . , k do
        Draw a batch of size b from D.
        Compute the loss L_s by feeding instances through the model, conditioned on the task:
            L_s = (1/b) Σ_{i=1}^{b} (τ̃_i − Ψ_θ′(w_i, x_i))²
        Update the parameters of Ψ_θ′: θ′ ← θ′ − α∇L_s
    end for
    return θ′
end function

Note that, depending on the choice of CATE estimator, the ADAPT routine in Algorithm 1 iterates only over the treated samples of a task dataset D^(j) (as in our experiments), or over all samples, including untreated ones.
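To make the loop concrete, here is a toy NumPy sketch of Reptile-style meta-training on synthetic tasks. This is an illustration only: the linear model stands in for the MLP-based meta-model Ψ_θ, and the tasks, pseudo-outcomes, and hyperparameters are all assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear stand-in for the meta-model: psi(w, x) = theta . [w, x].
# Every synthetic task shares the same target psi*(w, x) = 2w + 3x.
theta = np.zeros(2)
alpha, beta, k, b = 0.1, 0.5, 5, 16  # inner lr, meta lr, inner steps, batch size

def adapt(theta, w, x, tau):
    """Inner-loop SGD on one task's pseudo-outcomes (the ADAPT routine)."""
    th = theta.copy()
    for _ in range(k):
        idx = rng.integers(0, len(x), size=b)
        feats = np.column_stack([np.full(b, w), x[idx]])
        th -= alpha * 2 * feats.T @ (feats @ th - tau[idx]) / b  # MSE gradient
    return th

for _ in range(200):                      # meta-iterations
    w = rng.uniform(0.5, 1.5)             # sample a task (intervention feature)
    x = rng.uniform(-1.0, 1.0, size=100)  # its natural experiment
    tau = 2.0 * w + 3.0 * x               # pseudo-outcomes (noise-free here)
    th = adapt(theta, w, x, tau)
    theta -= beta * (theta - th)          # Reptile meta-update: theta - beta*g
```

After training, theta recovers the shared coefficients [2, 3], illustrating how the meta-update accumulates inner-loop progress across tasks.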
CaML architecture
To parameterize Ψ θ , we propose a simple but effective model architecture (see Section 6):
Ψ_θ(w, x) = MLP_1([w̃; x̃]), with x̃ = MLP_2(x) and w̃ = MLP_3(w),    (7)

where [· ; ·] denotes concatenation. Equation (7) shows that the intervention features w and individual features x are encoded separately into dense vectors w̃ and x̃, respectively. Our MLPs consist of layers of the form g(z) = z + ReLU(Linear(z)).
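A minimal NumPy sketch of this fusion architecture follows. The widths, depths, random initialization, and final linear readout are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

def make_mlp(width, depth):
    """Stack of residual blocks g(z) = z + ReLU(W z + b) at a fixed width."""
    return [(rng.normal(scale=0.1, size=(width, width)), np.zeros(width))
            for _ in range(depth)]

def run_mlp(layers, z):
    for W, b in layers:
        z = z + relu(W @ z + b)  # residual layer from the text
    return z

d_w, d_x = 4, 8                      # intervention / individual feature sizes
mlp3 = make_mlp(d_w, 2)              # w~ = MLP_3(w)
mlp2 = make_mlp(d_x, 2)              # x~ = MLP_2(x)
mlp1 = make_mlp(d_w + d_x, 2)        # MLP_1 over the concatenation [w~ ; x~]
readout = rng.normal(scale=0.1, size=d_w + d_x)  # scalar CATE head (assumed)

def psi(w, x):
    w_enc, x_enc = run_mlp(mlp3, w), run_mlp(mlp2, x)
    return run_mlp(mlp1, np.concatenate([w_enc, x_enc])) @ readout
```

The separate encoders keep intervention and individual representations independent until the fusion stage, mirroring Eq. (7).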
Theoretical analysis
We now consider zero-shot causal learning from a theoretical perspective. Under simplified assumptions, we bound the prediction error in the zero-shot setting.
We formulate the setting as a supervised learning problem with noisy labels (pseudo-outcomes), where we learn a smooth function f = Ψ(w, x) → τ among a family F. We focus on τ ∈ R, and assume τ ∈ [0, 1] without loss of generality, since we can normalize τ to this range. The training dataset has n interventions with m samples each: first n i.i.d. draws from P_W, w^(1), . . . , w^(n), and then for each w^(j), m i.i.d. draws from P_X, x^(j)_1, . . . , x^(j)_m. The main theorem quantifies the rate at which combining information across different interventions helps with zero-shot performance. We prove a finite-sample generalization bound for the ERM variant of CaML; the ERM is a special case of ADAPT with k = 1 that is more conducive to rigorous analysis. The advantage of Reptile over ERM is orthogonal, and we refer the reader to the original discussion [52]. We assume the estimated pseudo-outcomes τ̃ during training satisfy τ̃ = τ + ξ, where ξ is independent zero-mean noise with |ξ| ≤ ε almost surely for some ε ≥ 0, and

f̂ = argmin_{f ∈ F} L̂(f) = argmin_f (1/(nm)) Σ_{j=1}^{n} Σ_{i=1}^{m} (f(w^(j), x^(j)_i) − τ̃^(j)_i)².

The test error is L(f) = E_{W,X,τ}[(f(w, x) − τ)²]. Let f* = argmin_f L(f). We bound the excess loss L(f̂) − L(f*).

Our key assumption is that interventions with similar features W have similar effects in expectation. We assume that all functions in our family are smooth with respect to W, i.e., for all f ∈ F, E_{W,X}‖∂f/∂W‖₂² ≤ β².

Theorem 1. Under our assumptions, with probability 1 − δ,

L(f̂) ≤ L(f*) + 8(1 + ε)R_nm(F) + 8√((1 + ε)R_nm(F) log(1/δ)/n) + 2 log(1/δ)/(3n) + √((1 + ε)(32Cβ² + 2(1 + ε)²/m) log(1/δ)/n),

where R_nm is a novel notion of zero-shot Rademacher complexity defined in equation (8), and C is a Poincaré constant that only depends on the distribution of W. For large n and m, the leading terms are the function complexity R_nm(F) and an O(√(1/n)) term whose numerator scales with β and (1 + ε)²/m. This validates our intuition: when the intervention information W is more informative of the true treatment effects (smaller β), and when the estimation of τ̃ in the training dataset is more accurate (smaller ε), performance on novel interventions is better. Please refer to Section A for the full proof. Compared to standard generalization bounds, which usually have a √(1/n) term, our main technical innovation involves bounding the variance by the smoothness of the function class plus Poincaré-type inequalities; when β is much smaller than 1, we achieve a tighter bound.
Experiments
We explore to what extent zero-shot generalization is practical when predicting the effects of interventions. We thus design two novel evaluation settings using real-world data in domains where zero-shot CATE estimation will be highly impactful: (1) Health Insurance Claims: predicting the effect of a drug on a patient, and (2) LINCS: predicting the effect of a perturbation on a cell. We use new datasets because existing causal inference benchmarks [29,69] focus on a single intervention. By contrast, zero-shot causal learning must be conceptualized in a multi-intervention setting.
Zero-shot Evaluation. Each task corresponds to estimating CATE across different individual samples that received the same intervention. We split all tasks into meta-training/meta-validation, and a hold-out meta-testing set for evaluating zero-shot predictions ( Table 2, unseen drugs for Claims and Table 3, unseen molecular perturbations in LINCS). For the Claims dataset, we also consider the challenging setting of combinations of unseen drugs (Table B.3). The same patient (Claims) or cell-line (LINCS) can appear in multiple tasks (if they received different interventions at different times). Thus, to ensure a fair zero-shot evaluation, we exclude all samples who have ever received a meta-testing intervention from meta-val/meta-train. Similarly, we exclude all meta-validation patients from meta-train. Details on holdout selection are provided in Appendix C.2. Table 1 gives an overview of both benchmarks. In the Claims dataset, we compare zero-shot predictions with strong single-intervention baselines which cannot generalize to unseen interventions.
To do so, we further split each task in meta-validation and meta-testing into a train/test (50/50) split of samples. These baselines are trained on a task's train split, and all methods are evaluated on the test split of the meta-testing tasks. On the LINCS dataset, as each task consists of < 100 cells, single-intervention baselines performed weakly and are excluded from analysis.
Baselines. We compare the zero-shot performance of CaML to two distinct categories of baselines: (1) baselines trained directly on the test interventions, and (2) zero-shot baselines, which are trained across all meta-training tasks and are able to incorporate intervention information (W). The latter methods are thus, in principle, capable of generalizing to unseen interventions. We use GraphITE [23] and Structured Intervention Networks (SIN) [33]. We also introduce two strong baselines which learn to directly estimate potential outcomes by meta-learning across all training interventions, without using pseudo-outcomes: the S-learner and T-learner with meta-learning. These extend the S-learner and T-learner from prior work [42] to incorporate intervention information (W) in their predictions. We elaborate on implementation details of baselines in Appendix C.7; for details on hyperparameter search and fair comparison, see Appendix C.1.
Setting 1: Personalized drug side effect prediction from large-scale medical claims
Our first setting (Claims) is to predict the increased likelihood of a life-threatening side effect caused by a drug prescription. We leverage a large-scale insurance claims dataset of over 3.5 billion claims across 30.6 million patients in the United States. Each date-stamped insurance claim contains a set of diagnoses (ICD-10 codes), drug prescriptions (DrugBank ID), procedures (ICD-10 codes), and laboratory results (LOINC codes). Laboratory results were categorized by whether the result was high, low, normal, abnormal (for non-continuous labs), or unknown.

Interventions are the administration of a single drug (n = 745) or of two drugs prescribed in combination (n = 22,883). The time of intervention corresponds to the first day of exposure. Intervention information (W) was generated from pre-trained drug embeddings from a large-scale biomedical knowledge graph [7] (Appendix C). We compute drug combination embeddings as the sum of the embeddings of the constituent drugs. We focus on the binary outcome (Y) of the occurrence of the side effect pancytopenia within 90 days of intervention exposure. Pancytopenia is a deficiency across all three blood cell lines (red blood cells, white blood cells, and platelets). Pancytopenia is life-threatening, with a 10-20% mortality rate [36,41], and is a rare side effect of many common medications [40] (e.g., arthritis and cancer drugs), which in turn requires intensive monitoring of the blood work. Following prior work [22], patient medical history features (X) were constructed by time-binned counts of each unique medical code (diagnosis, procedure, lab result, drug prescription) at seven different time scales before the drug was prescribed, resulting in a total of 443,940 features. For more details, refer to Appendix C.1.
Metrics. We rely on best practices for evaluating CATE estimators in observational data, as established by recent work [81,10], which recommends assessing treatment rules by comparing subgroups across different quantiles of estimated CATE. We follow the high vs. others RATE (rank-weighted average treatment effect) approach of Yadlowsky et al. [81], which computes the difference between the average treatment effect (ATE) of the top u percent of individuals (ranked by predicted CATE) and the ATE of all individuals (for more details, see Appendix C.1). For instance, RATE @ 0.99 is the difference between the ATE of the top 1% of samples (by estimated CATE) and the ATE across all samples, which we would expect to be large if the CATE estimator is accurate. Note that estimates of RATE can be negative if model predictions are inversely associated with CATE. We elaborate on the RATE computation in Appendix C.1.
The real-world use case of our model is preventing drug prescription for a small subset of high-risk individuals. Thus, more specifically, for each task j with intervention w^(j) in the meta-dataset, and meta-model Ψ_θ, we compute RATE @ u for each u in {0.999, 0.998, 0.995, 0.99} across individuals who received the intervention.
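A simplified sketch of the high-vs-others RATE computation follows. It assumes per-individual unbiased effect scores (e.g., doubly robust scores) are available to average within subgroups; the paper's exact estimator may differ:

```python
import numpy as np

def rate_at_u(pred_cate, scores, u):
    """High-vs-others RATE: mean effect score of the top (1 - u) fraction of
    individuals (ranked by predicted CATE) minus the mean effect score of all
    individuals.

    `scores` are per-individual unbiased treatment-effect estimates; using
    them as subgroup ATE estimates is an assumption of this sketch.
    """
    cutoff = np.quantile(pred_cate, u)
    top = scores[pred_cate >= cutoff]
    return top.mean() - scores.mean()
```

When predicted CATE ranks individuals well, the top subgroup's average effect exceeds the overall ATE and the RATE is positive; a random ranking yields a RATE near zero.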
Additionally, because our meta-testing dataset consists of individuals treated with drugs known to cause pancytopenia, the observational metrics of recall and precision are also a rough proxy for successful CATE estimation (and are highly correlated with RATE, Table 2). Thus, as secondary metrics, we also compute Recall @ u and Precision @ u for the same set of thresholds as RATE, where a positive label is defined as the occurrence of pancytopenia after intervention.
Setting 2: Cellular gene expression response due to perturbation
Our second setting (LINCS) is to predict how a cell's gene expression (Y ) will respond to intervention from perturbagen (small molecule compound such as a drug). This is a critical problem as accurately predicting intervention response will accelerate drug-discovery. We use data for 10,325 different perturbagens from the LINCS Program [70]. Each perturbagen corresponds to a different small molecule. Molecular embeddings were generated using the RDKit featurizer Table 2: Performance results for the Claims dataset (predicting the effect of drug exposure on pancytopenia onset from patient medical history). Key findings are (1) CaML outperforms all zeroshot baselines (RATE is 18-27% higher than T-Learner w/ meta-learning, the strongest zero-shot baseline) (2) CaML performs stronger (up to 8× higher RATE values) than 6 of the 7 baselines which are trained directly on the test interventions, and performs comparably to the strongest baseline trained directly on the test interventions (RA-learner). Mean is reported across all runs; standard deviations included in (Appendix Table 4). Analogous trends hold for generalization to pairs of unseen drugs (Appendix Table B.3).
Each cell line (n = 99) corresponds to unperturbed gene expression measured in a different lab environment using a different experimental assay. For more details, see Appendix C.1.
Metrics.
A key advantage of experiments on cells is that at evaluation time we can observe both Y(0) and Y(1) for the same cell line X, through multiple experiments on clones of the same cell line in controlled lab conditions. In the LINCS dataset, Y(0) is also measured for all cells which received an intervention. Thus, we can directly compute the precision in estimation of heterogeneous effects (PEHE) on all treated cells in our meta-testing dataset, an established measure for CATE estimation performance analogous to the mean squared error [28] (see Appendix C.1).
Key findings
CaML's zero-shot predictions outperform baselines with direct access to the target intervention. In the medical claims setting, single-intervention baselines (Table 2, dark grey rows) are the highest performing baselines, as we train them directly on the meta-test intervention. Still, CaML outperforms 6 out of 7 of these baselines (up to 8× higher RATE) and achieves comparable performance to the strongest of these baselines, the RA-learner. Furthermore, CaML strongly outperforms alternative zero-shot CATE estimators (RATE is 18-27% higher than T-Learner w/ meta-learning, the strongest zero-shot baseline). In the LINCS data, multi-intervention learners are the strongest baselines, as there are only a small number of instances (cell lines) per intervention. CaML outperforms both single-intervention and multi-intervention learners by drawing from both of their strengths: it allows us to use strong CATE estimation methods (i.e. the RA-learner) which previously were restricted to single interventions, while sharing information across multiple interventions.
CaML learns to generalize from single interventions to combinations of unseen interventions (drug pairs). We evaluate CaML's performance in the challenging setting of predicting the personalized effects of combinations of two drugs which are both unseen during training, while only training on interventions consisting of single drugs. CaML achieves strong performance results (see Appendix Table B.3), surpassing the best baseline trained on the test tasks and outperforming all zero-shot baselines, across all 12 metrics.
Understanding CaML's performance results. Our ablation studies show that CaML's performance gains are due to (1) our meta-learning formulation and algorithm (in contrast to the w/o meta-learning row, in which ERM is used to train the model), and (2) the flexible CATE estimation strategy, which allows us to take advantage of recently developed CATE estimators previously restricted to single interventions (in contrast to the w/o RA-learner row, in which an alternative pseudo-outcome estimator is used). Lastly, (3) comparison to existing binary-intervention CATE estimators trained separately on each meta-testing intervention (Table 2, grey rows) shows that we gain from learning from thousands of interventions. See Appendix C.3 for details on ablations.

Table 3: Performance results for the LINCS dataset (predicting the effect of an unseen perturbation on the gene expression of an unseen cell-line). CaML outperforms all baselines. Improvement is largest for the 20 most differentially expressed genes, where most signal is expected.
Conclusion
We introduce a novel approach to predict the effects of novel interventions. CaML consistently outperforms state-of-the-art baselines, by unlocking zero-shot capability for many recently developed CATE estimation methods which were previously restricted to studying single interventions in isolation. While our study is limited to retrospective data, we plan to prospectively validate our findings. Future work includes designing new model architectures and CATE estimators which learn well under the CaML framework, as well as more generally exploring novel learning strategies that enable zero-shot causal learning.
Societal impacts. In high-stakes decision-making, inaccurate predictions can lead to severe consequences. It is important not to overly rely on model predictions and to proactively involve domain experts, such as doctors, in the decision-making process. Additionally, it is crucial to ensure that underserved communities are not disadvantaged by errors in treatment effect estimates due to underrepresentation in the training data. This risk can be monitored by evaluating CATE estimators on underserved patient groups prior to clinical deployment.
References
[1] Ahmed M. Alaa and Mihaela van der Schaar. Bayesian inference of individualized treatment effects using multi-task Gaussian processes. Advances in Neural Information Processing Systems, 30, 2017.
[2] Susan Athey and Guido Imbens. Recursive partitioning for heterogeneous causal effects. Proceedings of the National Academy of Sciences, 113(27), 2016.
[60] Bernardino Romera-Paredes and Philip Torr. An embarrassingly simple approach to zero-shot learning. In International Conference on Machine Learning, pages 2152-2161. PMLR, 2015.
[61] Yusuf Roohani, Kexin Huang, and Jure Leskovec. GEARS: Predicting transcriptional outcomes of novel multi-gene perturbations. bioRxiv, 2022.
A Zero-shot Rademacher complexity and Proof of Theorem 1
A.1 Problem setup and assumptions
Let w ∈ W ⊆ R^e denote an intervention and x ∈ X ⊆ R^d denote an individual that received it. Assume the outcome to predict is a scalar y ∈ [0, 1]. The hypothesis class is F = {f : (w, x) → y}. The dataset has n interventions with m independent units which received each intervention, i.e. first n i.i.d. draws from P_W and then m i.i.d. draws from P_X for each w^(j). During training we have access to a noisy estimate ỹ = y + ξ, where ξ is independent noise with E[ξ] = 0 and |ξ| ≤ ε almost surely. We are tested directly on y.
The ERM is
$$\hat f = \arg\min_f \hat L(f) = \arg\min_f \frac{1}{nm}\sum_{j=1}^n \sum_{i=1}^m \left(f(w^{(j)}, x_i^{(j)}) - \tilde y_i^{(j)}\right)^2.$$
The test error is
$$L(f) = \mathbb{E}_{w,x,y}\,(f(w,x) - y)^2,$$
and let f^* = argmin_f L(f). We are interested in bounding the excess error L(f̂) − L(f^*).
Our key assumption is that interventions with similar attributes (w) have similar effects in expectation.
More concretely, we assume that all hypotheses in our family are smooth with respect to w:
Assumption 2. For all f ∈ F, $\mathbb{E}_{w,x}\,\|\partial f/\partial w\|_2^2 \le \beta^2$.
Furthermore, we assume that P_W satisfies a Poincaré-type inequality:

Assumption 3. For some constant C that only depends on P_W, for any smooth function F,
$$\mathrm{Var}_w[F(w)] \le C\,\mathbb{E}\,\|\nabla_w F(w)\|_2^2.$$
For example, P W can be any of the following distributions:
• Multivariate Gaussian: w ∼ N(µ, Σ) on R^e, for some vector µ ∈ R^e and positive semidefinite matrix Σ ∈ R^{e×e};
• w ∈ R^e has independent coordinates; each coordinate has the symmetric exponential distribution with density (1/2)e^{−|t|} for t ∈ R.
• P W is a mixture over base distributions satisfying Poincaré inequalities, and their pair-wise chi-squared distances are bounded.
• P W is a mixture of isotropic Gaussians in R e .
• P W is the uniform distribution over W ⊂ R e , which is open, connected, and bounded with Lipschitz boundary.
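The Poincaré-type condition in Assumption 3 can be sanity-checked by Monte Carlo for the simplest case above, an isotropic Gaussian (where C = 1); the test function F below is an arbitrary smooth choice, not one from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
w = rng.normal(size=(N, 2))    # P_W = N(0, I_2): Poincare constant C = 1

# Arbitrary smooth test function F(w) and its squared gradient norm.
F = np.sin(w[:, 0]) + 0.5 * np.cos(w[:, 1])
grad_sq = np.cos(w[:, 0]) ** 2 + (0.5 * np.sin(w[:, 1])) ** 2

var_F = F.var()                # Monte Carlo estimate of Var_w[F(w)]
rhs = grad_sq.mean()           # C * E ||grad_w F(w)||_2^2 with C = 1
assert var_F <= rhs            # Var_w[F] <= C E||grad F||^2 (Assumption 3)
```

For linear F the inequality is tight; the nonlinear F used here leaves a visible gap between the two sides.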
We note that isotropic Gaussians can approximate any smooth density in R^e [39] (since RBF kernels are universal), showing that Assumption 3 is fairly general. We define a novel notion of function complexity specialized to the zero-shot setting. Intuitively, it measures how well we can fit random labels under the two-stage sampling scheme, which first draws n interventions and then m recipients for each intervention. For examples of concrete upper bounds on the zero-shot Rademacher complexity, see Section A.4.
$$\mathfrak{R}_{nm}(\mathcal F) = \frac{1}{nm}\,\mathbb{E}_{w,x,\sigma}\,\sup_f \sum_{j=1}^n \sum_{i=1}^m \sigma_i^j\, f(w^{(j)}, x_i^{(j)}), \tag{8}$$
where the σ_i^j are drawn independently and uniformly from {−1, 1}.
A.2 Formal theorem statement
Theorem 4. Under Assumptions 2 and 3, with probability 1 − δ,
$$L(\hat f) \le L(f^*) + 8(1+\varepsilon)\mathfrak{R}_{nm}(\mathcal F) + 8\sqrt{\frac{(1+\varepsilon)\mathfrak{R}_{nm}(\mathcal F)\log(1/\delta)}{n}} + (1+\varepsilon)\sqrt{\frac{\left(32C\beta^2 + \frac{2(1+\varepsilon)^2}{m}\right)\log(1/\delta)}{n}} + \frac{2\log(1/\delta)}{3n}.$$
A.3 Proof of the main theorem
We define the population loss on the noisy label
$$\tilde L(f) = \mathbb{E}_{w,x,\tilde y}\,(f(w,x) - \tilde y)^2.$$
Due to independence of ξ,
$$\mathbb{E}_{w,x,y,\xi}\,(f(w,x) - y - \xi)^2 = \mathbb{E}_{w,x,y}\,(f(w,x) - y)^2 + \mathbb{E}[\xi^2] = L(f) + \mathbb{E}[\xi^2]$$
for any f, so L̃(f̂) − L̃(f^*) = L(f̂) − L(f^*). We shall focus on bounding the former.
We first need a lemma that bounds the supremum of an empirical process indexed by a bounded function class.

Lemma 5 (Theorem 2.3 of [6]). Assume that the X_j are identically distributed according to P, that G is a countable set of functions from X to R, and that all g ∈ G are P-measurable, square-integrable, and satisfy E[g] = 0. Suppose sup_{g∈G} ‖g‖_∞ ≤ 1, and denote Z = sup_g Σ_{j=1}^n g(X_j). If σ² ≥ sup_{g∈G} Var(g(X_j)) almost surely, then for all t ≥ 0,
$$\Pr\left[Z \ge \mathbb{E}Z + \sqrt{2t\left(n\sigma^2 + 2\mathbb{E}Z\right)} + \frac{t}{3}\right] \le e^{-t}.$$
We apply Lemma 5 with
$$X_j = \left(w^{(j)}, x_1^j, \ldots, x_m^j, \tilde y_1^j, \ldots, \tilde y_m^j\right), \qquad g(X_j) = \frac{1}{m}\sum_i \left(f(w^{(j)}, x_i^{(j)}) - \tilde y_i^{(j)}\right)^2 - \tilde L(f),$$
$$\sigma^2 = \sup_{f\in\mathcal F} \mathrm{Var}\left(\frac{1}{m}\sum_i \left(f(w^{(j)}, x_i^{(j)}) - \tilde y_i^{(j)}\right)^2\right), \qquad t = \log(1/\delta).$$
Since f − ỹ ∈ [−(1+ε), 1+ε], g is bounded. With probability 1 − δ,
$$n\sup_f\left(\tilde L(f) - \hat L(f)\right) \le n\,\mathbb{E}\sup_f\left(\tilde L(f) - \hat L(f)\right) + \sqrt{2\log\tfrac{1}{\delta}\left(n\sigma^2 + 2n\,\mathbb{E}\sup_f\left(\tilde L(f) - \hat L(f)\right)\right)} + \frac{1}{3}\log\frac{1}{\delta}.$$
Multiplying both sides by 1/n, and using √(a+b) ≤ √a + √b,
$$\sup_f\left(\tilde L(f) - \hat L(f)\right) \le \mathbb{E}\sup_f\left(\tilde L(f) - \hat L(f)\right) + 2\sqrt{\frac{\mathbb{E}\sup_f\left(\tilde L(f) - \hat L(f)\right)\log(1/\delta)}{n}} + \sqrt{\frac{2\sigma^2\log(1/\delta)}{n}} + \frac{\log(1/\delta)}{3n}. \tag{9}$$
The next lemma bounds the variance σ² in equation (9).

Lemma 6. For all f ∈ F,
$$\mathrm{Var}_{w^{(j)},\, x^j_{1\ldots m},\, \tilde y^j_{1\ldots m}}\left[\frac{1}{m}\sum_{i=1}^m \left(f(w^{(j)}, x_i^{(j)}) - \tilde y_i^{(j)}\right)^2\right] \le 4(1+\varepsilon)^2 C\beta^2 + \frac{(1+\varepsilon)^4}{4m}.$$
Proof of Lemma 6. Using the law of total variance, if we write
$$g\left(w^{(j)}, x^j_{1\ldots m}, \tilde y^j_{1\ldots m}\right) = \frac{1}{m}\sum_{i=1}^m \left(f(w^{(j)}, x_i^{(j)}) - \tilde y_i^{(j)}\right)^2,$$
then
$$\mathrm{Var}[g] = \mathrm{Var}_w\left[\mathbb{E}_{x,\tilde y}\left[g(w, x, \tilde y) \mid w\right]\right] + \mathbb{E}_w\left[\mathrm{Var}_{x,\tilde y}\left[g(w, x, \tilde y) \mid w\right]\right]. \tag{10}$$
To bound the first term of equation (10), we use Poincaré-type inequalities in Assumption 3. For each of the example distributions, we show that they indeed satisfy Assumption 3.
Lemma 7. Each of the example distributions in Assumption 3 satisfies a Poincaré-type inequality.
Proof.
• When P_W is the uniform distribution over W ⊂ R^e, which is open, connected, and bounded with Lipschitz boundary, we use the Poincaré–Wirtinger inequality [57] on the smooth function E[g | w]: for some constant C that only depends on P_W,
$$\mathrm{Var}_w\left[\mathbb{E}[g \mid w]\right] \le C\,\mathbb{E}\,\left\|\nabla_w \mathbb{E}[g \mid w]\right\|_2^2. \tag{11}$$
For the other example distributions, we can apply probabilistic Poincaré inequalities over the non-Lebesgue measure P_W:
• When w ∼ N(µ, Σ), we use the Gaussian Poincaré inequality (see e.g. Theorem 3.20 of [5], with a change of variables):
$$\mathrm{Var}[F(w)] \le \mathbb{E}\left[\left\langle \Sigma \nabla_w F(w), \nabla_w F(w)\right\rangle\right].$$
We apply this with F(w) = E[g | w]. Since
$$\mathbb{E}\left[v^\top A v\right] = \mathbb{E}\left[\mathrm{Tr}\left(v^\top A v\right)\right] = \mathbb{E}\left[\mathrm{Tr}\left(A v v^\top\right)\right] = \mathrm{Tr}\left(A\,\mathbb{E}\left[v v^\top\right]\right) \le \|A\|_2\,\mathbb{E}\,\|v\|_2^2,$$
we get
$$\mathrm{Var}_w\left[\mathbb{E}[g \mid w]\right] \le \|\Sigma\|_2\,\mathbb{E}\,\left\|\nabla_w \mathbb{E}[g \mid w]\right\|_2^2,$$
which satisfies equation (11) with C = ‖Σ‖_2.
• When w ∈ R^e has independent coordinates w_1, …, w_e and each coordinate has the symmetric exponential distribution with density (1/2)e^{−|t|} for t ∈ R, we first bound a single dimension using the one-dimensional Poincaré inequality for the two-sided exponential measure, and then tensorize over the independent coordinates.

Lastly, we consider the case where P_W is a mixture over base distributions satisfying Poincaré inequalities. We first consider the case where the pair-wise chi-squared distances are bounded. Next, we show that a mixture of isotropic Gaussians satisfies a Poincaré inequality without further conditions on pair-wise chi-squared distances.
• When {P^q_W}_{q∈Q} is a family of distributions, each satisfying a Poincaré inequality with constant C_q, and P_W is any mixture over {P^q_W}_{q∈Q} with mixing density µ, let K_P(µ) = ess sup_{q∼µ} C_q, an upper bound on the base Poincaré constants holding almost surely, and K^p_{χ²}(µ) = E_{q,q′∼µ}[(1 + χ²(P^q_W || P^{q′}_W))^p]^{1/p}, an upper bound on the pairwise χ²-divergences. Using Theorem 1 of [8] we get that P_W satisfies a Poincaré inequality with constant C such that C ≤ K_P(µ)(p* + K^p_{χ²}(µ)), where p* is the dual exponent of p satisfying 1/p + 1/p* = 1.
As an example, when the base distributions are from the same exponential family and the natural parameter space is affine, such as mixtures of Poisson or Multinomial distributions, the pair-wise chi-squared distances are bounded (under some additional conditions) and hence the mixture satisfies a Poincaré inequality. More formally, let p_θ(x) = exp(T(x)^⊤θ − A(θ) + k(x)), where θ ∈ Θ is the natural parameter and A(θ) is the log partition function. Lemma 1 in [54] shows that
$$\chi^2\left(p_{\theta_1} \,\|\, p_{\theta_2}\right) = e^{A(2\theta_2 - \theta_1) - \left(2A(\theta_2) - A(\theta_1)\right)} - 1,$$
which is bounded as long as 2θ_2 − θ_1 ∈ Θ. This is satisfied for mixtures of 1-D Poisson distributions, which can be written as p(w | λ) = (1/w!) exp(w log λ − λ) with natural parameter space R, and mixtures of e-dimensional Multinomial distributions
$$p(w \mid \pi) = \exp\left(\left\langle w,\ \log\frac{\pi}{1 - \sum_{i=1}^{e-1}\pi_i}\right\rangle + \log\left(1 - \sum_{i=1}^{e-1}\pi_i\right)\right)$$
with natural parameter space R^{e−1}. When applied to the Gaussian family, the natural parameters are
$$\theta_q = \left(\Sigma_q^{-1}\mu_q,\ \mathrm{vec}\left(-\tfrac12 \Sigma_q^{-1}\right)\right).$$
Since the covariances have to be positive definite matrices, 2θ_{q′} − θ_q may not be a valid natural parameter. We deal with this in the next case.
• When {P^q_W}_{q∈Q} is a mixture of isotropic Gaussians, each with mean µ_q ∈ R^e and covariance Σ_q = σ_q² I_e, each satisfying a Poincaré inequality with constant C_q (in the single-Gaussian case above we know that C_q ≤ σ_q²), P_W also satisfies a Poincaré inequality. We prove this via induction. The key lemma is below:

Lemma 8 (Corollary 1 of [64]). Suppose measure p_0 is absolutely continuous with respect to measure p_1, and p_0, p_1 satisfy Poincaré inequalities with constants C_0, C_1 respectively. Then for all α ∈ [0, 1] and β = 1 − α, the mixture measure p = αp_0 + βp_1 satisfies a Poincaré inequality with C ≤ max{C_0, C_1(1 + αχ_1)}, where χ_1 = ∫ (dp_0/dp_1) dp_0 − 1.

We sort the components in the order of non-decreasing σ_q², and add in each component one by one. For each new component i = 2, …, |Q|, we apply the above lemma with p_0 being the mixture of P^1_W, …, P^{i−1}_W and p_1 being the new component P^i_W. We only need to prove that χ_1 is bounded at every step. Suppose p_0 = Σ_{j=1}^{i−1} α_j P^j_W with Σ_{j=1}^{i−1} α_j = 1, and p_1 = P^i_W. Therefore
$$\chi_1 + 1 = \int \frac{dp_0}{dp_1}\,dp_0 = \int_w \frac{p_0(w)^2}{p_1(w)}\,dw = \int_w \frac{\displaystyle\sum_{j=1}^{i-1} \frac{\alpha_j^2}{\sigma_j^{2e}} \exp\left(-\frac{\|w-\mu_j\|^2}{\sigma_j^2}\right) + \sum_{j\ne j'} \frac{2\alpha_j\alpha_{j'}}{\sigma_j^{e}\sigma_{j'}^{e}} \exp\left(-\frac{\|w-\mu_j\|^2}{2\sigma_j^2} - \frac{\|w-\mu_{j'}\|^2}{2\sigma_{j'}^2}\right)}{(2\pi)^{e/2}\sigma_i^{e} \exp\left(-\frac{\|w-\mu_i\|^2}{2\sigma_i^2}\right)}\,dw.$$
The integral converges as long as 2σ_i² ≥ σ_j² for all j < i, which is satisfied since σ_i² ≥ σ_j².
Next we observe that
$$\nabla_w \mathbb{E}[g \mid w] = \nabla_w \int_{x,\tilde y} \left(f(w,x) - \tilde y\right)^2 p(x,\tilde y)\,dx\,d\tilde y = 2\int_{x,\tilde y} \left(f(w,x) - \tilde y\right)\frac{\partial f}{\partial w}\, p(x,\tilde y)\,dx\,d\tilde y = 2\,\mathbb{E}\left[\left(f(w,x) - \tilde y\right)\frac{\partial f}{\partial w}\right].$$
Since |f(w,x) − ỹ| ≤ 1 + ε almost surely and E‖∂f/∂w‖_2² ≤ β²,
$$\mathbb{E}_w\,\left\|\nabla_w \mathbb{E}[g \mid w]\right\|_2^2 = 4\,\mathbb{E}\left\|\left(f(w,x) - \tilde y\right)\frac{\partial f}{\partial w}\right\|_2^2 \le 4(1+\varepsilon)^2\beta^2.$$
Therefore
$$\mathrm{Var}_w\left[\mathbb{E}[g \mid w]\right] \le C\,\mathbb{E}\,\left\|\nabla_w \mathbb{E}[g \mid w]\right\|_2^2 \le 4(1+\varepsilon)^2 C\beta^2.$$
To bound the second term of equation (10), we use concentration of the mean of m i.i.d. random variables. Conditioned on w^(j), each loss term (f(w^(j), x_i^(j)) − ỹ_i^(j))² is an independent random variable taking values in [0, (1+ε)²], so the variance of their mean is at most ((1+ε)²)²/(4m) = (1+ε)⁴/(4m). Combining the two bounds completes the proof of Lemma 6.
$$L(\hat f) - L(f^*) \le 2\sup_{f\in\mathcal F}\left|\tilde L(f) - \hat L(f)\right| \le 2\,\mathbb{E}\sup_f\left|\tilde L(f) - \hat L(f)\right| + 4\sqrt{\frac{\mathbb{E}\sup_f\left|\tilde L(f) - \hat L(f)\right|\log(1/\delta)}{n}} + \sqrt{\frac{\left(32(1+\varepsilon)^2 C\beta^2 + \frac{2(1+\varepsilon)^4}{m}\right)\log(1/\delta)}{n}} + \frac{2\log(1/\delta)}{3n} \tag{12}$$
by equation (9) and Lemma 6.
We now show that E sup_f |L̃(f) − L̂(f)| ≤ 2𝔯_{nm}(L̂) ≤ 4(1+ε)𝔯_{nm}(F). This is similar to the argument for the classical Rademacher complexity:
$$\begin{aligned}
&\mathbb{E}_{w,x,\tilde y}\sup_f \left[\frac{1}{nm}\sum_{i,j}\left(f(w^{(j)}, x_i^{(j)}) - \tilde y_i^{(j)}\right)^2 - \mathbb{E}_{w,x,\tilde y}\left(f(w^{(j)}, x_i^{(j)}) - \tilde y_i^{(j)}\right)^2\right] \\
&\quad\le \frac{1}{nm}\,\mathbb{E}_{S,S'}\sup_f \sum_{i,j}\left[\left(f(w^{(j)}, x_i^{(j)}) - \tilde y_i^{(j)}\right)^2 - \left(f(w'^{(j)}, x'^{(j)}_i) - \tilde y'^{(j)}_i\right)^2\right] \\
&\quad= \frac{1}{nm}\,\mathbb{E}_{S,S',\sigma}\sup_f \sum_{i,j}\left[\sigma_i^j\left(f(w^{(j)}, x_i^{(j)}) - \tilde y_i^{(j)}\right)^2 - \sigma_i^j\left(f(w'^{(j)}, x'^{(j)}_i) - \tilde y'^{(j)}_i\right)^2\right] \\
&\quad\le \frac{1}{nm}\,\mathbb{E}_{S,\sigma}\sup_f \sum_{i,j}\sigma_i^j\left(f(w^{(j)}, x_i^{(j)}) - \tilde y_i^{(j)}\right)^2 + \frac{1}{nm}\,\mathbb{E}_{S',\sigma}\sup_f \sum_{i,j}\sigma_i^j\left(f(w'^{(j)}, x'^{(j)}_i) - \tilde y'^{(j)}_i\right)^2 \\
&\quad= 2\mathfrak{R}_{nm}(\hat L),
\end{aligned}$$
where the first inequality uses Jensen's inequality and convexity of sup.
Now we prove the equivalent of Talagrand's contraction lemma to show that 𝔯_{nm}(L̂) ≤ 2(1+ε)𝔯_{nm}(F). Note that the squared loss is 2(1+ε)-Lipschitz in f, since |∂(f − ỹ)²/∂f| = 2|f − ỹ| ≤ 2(1+ε). We use the following lemma:

Lemma 9 (Contraction). Let c : Θ → R, and let φ_i, ψ_i : Θ → R (i = 1, …, N) be such that |φ_i(θ) − φ_i(θ′)| ≤ |ψ_i(θ) − ψ_i(θ′)| for all θ, θ′ ∈ Θ. Then
$$\mathbb{E}_\sigma \sup_\theta \left[c(\theta) + \sum_{i=1}^N \sigma_i \varphi_i(\theta)\right] \le \mathbb{E}_\sigma \sup_\theta \left[c(\theta) + \sum_{i=1}^N \sigma_i \psi_i(\theta)\right].$$
For any fixed set of w, x, we apply Lemma 9 with Θ = F, θ = f, N = nm, φ_{ij}(f) = (f(w^(j), x_i^(j)) − ỹ_i^(j))², ψ_{ij}(f) = 2(1+ε) f(w^(j), x_i^(j)), and c(θ) = 0. Since
$$\left(f(w^{(j)}, x_i^{(j)}) - \tilde y_i^{(j)}\right)^2 - \left(f'(w^{(j)}, x_i^{(j)}) - \tilde y_i^{(j)}\right)^2 \le 2(1+\varepsilon)\left|f(w^{(j)}, x_i^{(j)}) - f'(w^{(j)}, x_i^{(j)})\right|,$$
the condition for Lemma 9 holds. We take the expectation over w, x and divide both sides by nm to get
$$\frac{1}{nm}\,\mathbb{E}_{w,x,\sigma}\sup_f \sum_{j=1}^n\sum_{i=1}^m \sigma_i^j \left(f(w^{(j)}, x_i^{(j)}) - \tilde y_i^{(j)}\right)^2 \le \frac{2(1+\varepsilon)}{nm}\,\mathbb{E}_{w,x,\sigma}\sup_f \sum_{j=1}^n\sum_{i=1}^m \sigma_i^j\, f(w^{(j)}, x_i^{(j)}),$$
which means 𝔯_{nm}(L̂) ≤ 2(1+ε)𝔯_{nm}(F). Substituting this into inequality (12) finishes the proof.
A.4 Zero-shot Rademacher complexity bound for the linear hypothesis class
Consider the linear hypothesis class F = {⟨w_1, w⟩ + ⟨w_2, x⟩ : ‖w_1‖_2 ≤ B_1, ‖w_2‖_2 ≤ B_2}. Suppose ‖w‖_2 ≤ 1 and ‖x‖_2 ≤ 1. Then
$$\begin{aligned}
\mathfrak{R}_{nm}(\mathcal F) &= \frac{1}{nm}\,\mathbb{E}_{\sigma,w,x}\sup_{w_1,w_2}\left[\left\langle w_1,\ \sum_{ij}\sigma_i^j w^{(j)}\right\rangle + \left\langle w_2,\ \sum_{ij}\sigma_i^j x_i^{(j)}\right\rangle\right] \\
&= \frac{1}{nm}\left(B_1\,\mathbb{E}_{\sigma,w}\left\|\sum_{ij}\sigma_i^j w^{(j)}\right\|_2 + B_2\,\mathbb{E}_{\sigma,x}\left\|\sum_{ij}\sigma_i^j x_i^{(j)}\right\|_2\right) \\
&\le \frac{1}{nm}\left(B_1\sqrt{m\sum_j \left\|w^{(j)}\right\|_2^2} + B_2\sqrt{\sum_{ij}\left\|x_i^{(j)}\right\|_2^2}\right) = (B_1 + B_2)/\sqrt{nm}.
\end{aligned}$$
We observe that the bound is the same as the standard Rademacher complexity for nm independent samples, which is interesting. The relationship between standard and zero-shot Rademacher complexity for other function classes is an important future direction.
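As a quick numerical check of this derivation: for the linear class the supremum is available in closed form (B_1‖Σσw‖ + B_2‖Σσx‖), so the zero-shot Rademacher complexity is easy to estimate by Monte Carlo and compare against (B_1 + B_2)/√nm. Sizes, norm bounds, and trial count below are arbitrary choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, e, d = 40, 25, 6, 8          # interventions, units, dims of w and x
B1, B2 = 1.0, 2.0                  # norm bounds on w1, w2
trials = 200

vals = []
for _ in range(trials):
    # Unit-norm draws so that ||w||_2 <= 1 and ||x||_2 <= 1 hold.
    W = rng.normal(size=(n, e)); W /= np.linalg.norm(W, axis=1, keepdims=True)
    X = rng.normal(size=(n, m, d)); X /= np.linalg.norm(X, axis=2, keepdims=True)
    sig = rng.choice([-1.0, 1.0], size=(n, m))
    # Closed-form supremum over the class:
    #   B1 ||sum_ij sig_i^j w^(j)|| + B2 ||sum_ij sig_i^j x_i^(j)||
    t1 = np.linalg.norm((sig.sum(axis=1)[:, None] * W).sum(axis=0))
    t2 = np.linalg.norm((sig[:, :, None] * X).sum(axis=(0, 1)))
    vals.append((B1 * t1 + B2 * t2) / (n * m))

est = float(np.mean(vals))             # Monte Carlo estimate of R_nm(F)
bound = (B1 + B2) / np.sqrt(n * m)     # the bound derived above
assert est <= bound
```

The estimate sits a few percent below the bound, the slack coming from Jensen's inequality in the derivation.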
B Extended Related Work
Our approach to zero-shot prediction of intervention effects is related to recent advances in heterogenous treatment effect (HTE) estimation, zero-shot learning, and meta-learning.
B.1 Heterogenous treatment effect (HTE) estimation
Conditional average treatment effect (CATE) estimation. A number of approaches have been developed to predict the effect of an existing intervention on an individual or subgroup, based on historical data from individuals who received it. This problem is often referred to in the literature as heterogeneous treatment effect (HTE) estimation [26,11], to denote that the goal is to detect heterogeneities in how individuals respond to an intervention. A more specific instance of HTE estimation, which we focus on here, is conditional average treatment effect (CATE) estimation [76,42], in which the goal is to predict the effect of a treatment conditioned on an individual's features. A variety of methods and specific models have been developed to achieve this goal. These methods estimate CATE for an existing intervention, based on historical data from individuals who received it and those that did not.
While these approaches have a number of useful applications, they do not address CATE for novel interventions which did not exist during training (zero-shot). Our primary contribution is a meta-learning framework to leverage these existing CATE estimators for zero-shot predictions. In the CaML framework (Figure 2), each task corresponds to predicting CATE for a single intervention. We synthesize a task by sampling a natural experiment for each intervention, and then use any existing CATE estimator to generate a noisy target label for the task (Step 2: estimate pseudo-outcomes). We rely on pseudo-outcome estimates as training labels because prior work has shown that training on observed outcomes directly leads to biased CATE estimates [9,42,34], a result which we find holds true in our experiments as well (see T-learner and S-learner w/ meta-learning in Tables 2 and 3).
Pseudo-outcome estimators. Prior work has developed a variety of methods to estimate CATE pseudo-outcomes, which are noisy but unbiased estimates of CATE, such as the X-learner [42], R-learner [53], DR-learner [34], and RA-learner [14]. Moreover, the outputs of any other CATE estimation method, such as methods which directly estimate CATE via an end-to-end neural network [32,66,68], are an equally valid choice of pseudo-outcome. The literature on pseudo-outcome estimation is growing continuously as new estimators are being developed [19,38]. Typically, these estimators are specific to a single binary intervention, for which a set of nuisance models are trained and used to compute the pseudo-outcomes. As such, applying meta-learning algorithms to these pseudo-outcomes requires synthesizing a natural experiment for each intervention, which corresponds to a single task in the CaML framework.
Multi-cause estimators. Our methods to address zero-shot CATE estimation for combinations of interventions are distinct from multi-cause estimators for combinations of binary or categorical interventions [78,58,62]. Recent work has shown that these methods can predict the effects of new combinations of interventions [48], when every intervention in the combination has been observed at some point during training. However, these methods do not estimate CATE for novel interventions which did not exist during training. By contrast, CaML estimates CATE for zero-shot intervention combinations in which none of the interventions in the combination was ever observed during training (Appendix Table C).
B.2 Zero-shot learning
Zero-shot learning (ZSL) has traditionally aimed to reason over new concepts and classes [80,60] which did not exist during training time. While ZSL has primarily focused on natural language processing and computer vision [77], recent interest has been sparked in generalizing over novel interventions (zero-shot) in the biomedical domain [61,27], in which data can be cheaply collected for hundreds or thousands of possible interventions [87,71,17]. However, general-purpose zero-shot causal methods have been largely unexplored. Notable exceptions include GraphITE [23] and SIN, which each extend a specific CATE estimation method [53,42] to incorporate intervention features (W). However, these approaches have significant drawbacks, which we discuss in Section 2.
B.3 Meta-learning
Meta-learning, or learning to learn, aims to train models which can quickly adapt to new settings and tasks. The key idea is to enable a model to gain experience over multiple learning episodes -- in which episodes typically correspond to distinct tasks -- to accelerate learning in subsequent learning episodes [30]. The meta-learning literature is rich and spans multiple decades [72,65,63,3], with recent interest focused on model-agnostic methods to train deep learning models to quickly adapt to new tasks [18,59,52]. A common focus in the meta-learning literature is few-shot learning, in which a model must adapt to a new task given a small support set of labeled examples. By contrast, we focus on the zero-shot setting, in which no such support set exists. However, we hypothesize that the typical meta-learning problem formulation and training algorithms may also improve zero-shot performance. Thus, CaML's problem formulation and algorithm draw inspiration from the meta-learning literature, particularly the Reptile algorithm [52] and its application to other tasks in causal inference [67].
Our experimental results show that this meta-learning formulation improves CaML's performance, compared to a standard multi-task learning strategy.
C Experimental details

C.1 Experimental setup
Here, we provide more details about the experimental setup for each investigated setting. This serves to complement the high-level overview given in Table 1. Experiments were run using Google Cloud Services. Deep learning-based methods (i.e., CaML and its ablations, S-learner w/ meta-learning, T-learner w/ meta-learning, SIN, GraphITE, FlexTENET, TARNet, and DragonNet) were run on n1-highmem-64 machines with 4x NVIDIA T4 GPU devices. The remaining baselines (RA-learner, R-learner, X-learner, and T-learner) were run on n1-highmem-64 machines featuring 64 CPUs.
Fair comparison. We perform hyper-parameter optimization with random search for all models, with the meta-testing dataset predetermined and held out. To avoid "hyperparameter hacking", hyperparameter ranges are consistent between methods wherever possible, and were chosen using defaults similar to prior work [33,23]. The final model hyper-parameters were determined using performance metrics (specific to each dataset) computed on the meta-validation dataset, taking the best hyper-parameters over 48 runs (6 servers × 4 NVIDIA T4 GPUs per server × 2 runs per GPU) (Appendix C.4). All table results are computed as the mean across 8 runs of the final model with distinct random seeds.
C.1.1 Claims dataset
Interventions (W ): We consider drug prescriptions consisting of either one drug, or two drugs prescribed in combination. We observed 745 unique single drugs, and 22,883 unique drug pairs, excluding interventions which occurred less than 500 times. Time of intervention corresponds to the first day of exposure. To obtain intervention information, we generated pre-trained drug embeddings from a large-scale biomedical knowledge graph [7] (see Appendix C.5). Drugs correspond to nodes in the knowledge graph, which are linked to other nodes (e.g. genes, based on the protein target of the drug). Drug combination embeddings are the sum of the embeddings for their constituent drugs.
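The summation rule for combination embeddings can be sketched as follows; the drug names and random stand-in vectors are hypothetical, as the paper's embeddings come from a pre-trained knowledge-graph model:

```python
import numpy as np

# Hypothetical pre-trained drug embeddings; in the paper these come from a
# biomedical knowledge graph, here random vectors stand in for them.
rng = np.random.default_rng(0)
drug_emb = {name: rng.normal(size=64) for name in ["drugA", "drugB", "drugC"]}

def intervention_embedding(drugs):
    """Embedding W of an intervention: the sum of the embeddings of its
    constituent drugs (a single drug or a combination)."""
    return np.sum([drug_emb[d] for d in drugs], axis=0)

pair = intervention_embedding(["drugA", "drugB"])
assert np.allclose(pair, drug_emb["drugA"] + drug_emb["drugB"])
```

Summation keeps the intervention representation in a fixed-dimensional space regardless of how many drugs the combination contains.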
Control group. A challenge in such causal analyses of clinical settings is defining a control group. We randomly sample 5% of patients (1.52M) to use as controls, with a 40/20/40 split between meta-train/meta-val/meta-test. When sampling a natural experiment for a given intervention, we select all patients from this control group who did not receive that intervention. An additional challenge is defining the time of intervention for the control group. It is not possible to naively sample a random date, because there are large quiet periods in the claims dataset in which no data is logged. We thus sample a date on which the control patient received a random drug, and thus our measure of CATE estimates the increase in side-effect likelihood from the drug(s) W compared to another drug intervention chosen at random.
Outcome (Y ):
We focus on the side effect pancytopenia: a deficiency across all three blood cell lines (red blood cells, white blood cells, and platelets). Pancytopenia is life-threatening, with a 10-20% mortality rate [36,41], and is a rare side effect of many common medications [40] (e.g. arthritis and cancer drugs), which in turn require intensive monitoring of the blood work. Our outcome is defined as the (binary) occurrence of pancytopenia within 90 days of intervention exposure.
Features (X): Following prior work [22], patient medical history features were constructed by time-binned counts of each unique medical code (diagnosis, procedure, lab result, drug prescription) before the drug was prescribed. In total, 443,940 features were generated from the following time bins: 0-24 hours, 24-48 hours, 2-7 days, 8-30 days, and 31-90 days, 91-365 days, and 365+ days prior. All individuals in the dataset provided by the insurance company had at least 50 unique days of claims data.
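A minimal sketch of this time-binned featurization, with the bins approximated in whole days (the first two bins are 0-24h and 24-48h in the paper) and hypothetical medical codes; the production pipeline is more involved:

```python
from collections import Counter

# Time bins in days before intervention, approximating the bins above.
BINS = [(0, 1), (1, 2), (2, 7), (7, 30), (30, 90), (90, 365), (365, 10**6)]

def bin_features(events, t_intervention):
    """events: (medical_code, day) pairs from the patient history.
    Returns counts of each code per time bin before t_intervention."""
    feats = Counter()
    for code, day in events:
        delta = t_intervention - day
        if delta < 0:
            continue                      # ignore events after intervention
        for lo, hi in BINS:
            if lo <= delta < hi:
                feats[(code, (lo, hi))] += 1
                break
    return feats

# Hypothetical codes: two diagnoses ~9 months back, two recent events.
hist = [("dx:D61", 100), ("dx:D61", 95), ("rx:MTX", 364), ("lab:CBC", 366)]
feats = bin_features(hist, t_intervention=370)
assert feats[("dx:D61", (90, 365))] == 2    # both fall in the 90-365 day bin
```

Each (code, bin) pair becomes one sparse feature dimension, which is how a few thousand codes expand to hundreds of thousands of features.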
Metrics: We rely on best practices for evaluating CATE estimators as established by recent work [81,10], which recommends assessing treatment rules by comparing subgroups across different quantiles of estimated CATE. We follow the high vs. others RATE (rank-weighted average treatment effect) approach from Yadlowsky et al. [81], which computes the difference between the average treatment effect (ATE) of the top-ranked individuals (by predicted CATE) and that of all individuals:
$$\mathrm{RATE}@u = \mathbb{E}\left[Y(1) - Y(0) \mid F_S(S(X)) \ge u\right] - \mathbb{E}\left[Y(1) - Y(0)\right], \tag{13}$$
where S(·) is a priority score which ranks samples from lowest to highest predicted CATE, and F_S(·) is the cumulative distribution function (CDF) of S(X_i). For instance, RATE@0.99 is the difference between the ATE among the top 1% of samples (by estimated CATE) and the ATE across all samples, which we would expect to be large if the CATE estimator is accurate. The real-world use case of our model would be preventing drug prescription for a small subset of high-risk individuals. Thus, more specifically, for each task j, intervention w_j in the meta-dataset, and meta-model Ψ_θ (our priority score S(·)), we compute RATE@u for each u in [0.999, 0.998, 0.995, 0.99] across individuals who received the intervention.
We now summarize how to estimate the RATE performance metrics for a single intervention (task). As RATE is calculated separately per intervention, we are concerned with a single intervention and use the simplified notation (i.e. Y_i(1) instead of Y_i(w)) from Section 3. Due to the fundamental problem of causal inference (we can only observe Y_i(0) or Y_i(1) for a given sample), the true RATE, as defined above, cannot be directly observed.
We follow the method outlined in Sections 2.2 and 2.4 of Yadlowsky et al. [81], in which we compute Γ_i, a noisy but unbiased estimate of CATE, which is in turn used to estimate RATE:
$$\mathbb{E}\left[\Gamma_i \mid X_i\right] \approx \tau(X_i) = \mathbb{E}\left[Y_i(1) - Y_i(0) \mid X_i\right]. \tag{14}$$
Our data is observational, and as such we estimate Γ_i using a direct non-parametric estimator [75]:
$$\Gamma_i = W_i\left(Y_i - \hat m(X_i, 0)\right) + (1 - W_i)\left(\hat m(X_i, 1) - Y_i\right), \tag{15}$$
$$m(x, w) = \mathbb{E}\left[Y_i(w) \mid X_i = x\right], \tag{16}$$
where m(x, w) is a model that predicts the outcome. Here m̂(x, w) denotes a nonparametric estimate of m(x, w), which we obtain by cross-fitting a model to the intervention's natural experiment over 5 folds. We use random forest models for m̂(x, w), as they perform well (achieving ≥ 0.90 ROC AUC across all meta-testing tasks for predicting outcomes) and are robust to the choice of hyperparameters.
RATE can then be estimated via a sample-averaging estimator. Specifically, we compute the difference between the average value of Γ_i for the top-ranked individuals (based on our meta-model's predictions) and the average Γ_i across all individuals. For further discussion on estimating RATE, we refer readers to [81]. Note that estimates of RATE are unbounded: RATE can be less than 0 (when predictions are inversely related to CATE).
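A minimal sketch of the Γ_i estimator (Eq. 15) and the sample-averaging RATE@u estimate, following the convention that RATE@0.99 compares the top 1% against everyone; the outcome models m̂ are passed in as a plain callable here rather than cross-fit random forests, and all names are ours:

```python
import numpy as np

def pseudo_outcomes(Wi, Y, m_hat, X):
    """Direct non-parametric CATE estimates Gamma_i (Eq. 15).
    m_hat(x, w) is any fitted outcome model passed in as a callable."""
    m0 = np.array([m_hat(x, 0) for x in X])
    m1 = np.array([m_hat(x, 1) for x in X])
    return Wi * (Y - m0) + (1 - Wi) * (m1 - Y)

def rate_at_u(scores, gamma, u):
    """High-vs-others RATE@u: mean Gamma_i over the top (1 - u) fraction
    ranked by the priority score, minus the mean Gamma_i over everyone
    (so u = 0.99 compares the top 1% against all individuals)."""
    k = max(1, int(round((1 - u) * len(scores))))
    top = np.argsort(scores)[-k:]          # indices with highest scores
    return float(gamma[top].mean() - gamma.mean())

# Toy check: with m_hat(x, w) = x + w, both units have estimated effect 1.
g = pseudo_outcomes(np.array([1, 0]), np.array([1.2, 0.5]),
                    lambda x, w: x + w, np.array([0.2, 0.5]))
assert np.allclose(g, [1.0, 1.0])
```

If the priority scores rank individuals by true effect, the top group's mean Γ_i exceeds the overall mean and the estimated RATE is positive.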
Finally, because our meta-testing dataset consists of individuals treated with drugs known in the medical literature to cause pancytopenia (identified by filtering drugs using the side effect database SIDER [40]), observational metrics of recall and precision are also a rough proxy for successful CATE estimation. Thus, as secondary metrics, we also compute Recall @ u and P recision @ u for the same set of thresholds as RATE, where a positive label is defined as occurrence of pancytopenia after intervention. We find that these metrics are highly correlated to RATE in our performance results.
Training & Evaluation: For each method, we ran a hyperparameter search with 48 random configurations (48 due to running 8 jobs in parallel on 6 servers each) that were drawn uniformly from a pre-defined hyperparameter search space (see Appendix C.4). Methods that can be trained on multiple tasks and then applied to tasks unseen during training (i.e., CaML and its ablations, S-learner w/ meta-learning, T-learner w/ meta-learning, SIN, GraphITE) were trained for 24 hours (per run) on the meta-training tasks. Model selection was performed on the meta-validation tasks by maximizing the mean [email protected] across meta-validation tasks. Then, the best hyperparameter configuration was used to fit 8 repetition runs across 8 different random seeds. Each repetition model was then tested on the meta-testing tasks, where for all metrics averages across the testing tasks are reported. To make the setting of multi-task models comparable with single-task models that were trained on meta-testing tasks (requiring a train and test split of each meta-testing task), the evaluation of all models was computed on the test split of the meta-testing tasks, respectively. Single-task baselines (FlexTENET, TARNet, DragonNet, RA-learner, R-learner, X-learner, and T-learner) were given access to the meta-testing tasks during training. Specifically, model selection was performed on the meta-validation tasks, while the best hyperparameter configuration was used to train 8 repetition models (using 8 random seeds) on the train split of each meta-testing task. For the final evaluation, each single-task model that was fit on meta-testing task i was tested on the test split of the same meta-testing task i, and the average metrics were reported across meta-testing tasks.
C.1.2 LINCS
Interventions (W ): Interventions in the LINCS dataset consist of a single perturbagen (small molecule). For intervention information, we use the molecular embeddings for each perturbagen generated with the RDKit featurizer [44]. The same cell line-perturbagen combinations are tested with different perturbagen dosages and times of exposure. To maintain consistency in experimental conditions while also ensuring that the dataset is sufficiently large for training a model, we filter for the most frequently occurring dosage and time of exposure in the dataset, which are 10µM and 24 hours, respectively. We use data from 10,322 different perturbagens.
Control group. For each perturbagen (at a given timepoint and dose), we use cell lines which did not receive that intervention as the control group.
Outcomes (Y ):
We measure gene expression across the top-50 and top-20 landmark differentially expressed genes (DEGs) in the LINCS dataset. Accurately predicting gene expression in these DEGs is most crucial to the drug discovery process.
Metrics:
A key advantage of experiments on cells is that at evaluation time we can observe both Y (0) and Y (1) for the same cell line X, through multiple experiments on clones of the same cell line in controlled lab conditions. In the LINCS dataset, Y (0) is also measured for all cells which received an intervention. Thus, we can directly compute the Precision in Estimation of Heterogeneous Effects (PEHE) on all treated cells in our meta-testing dataset. PEHE is a standard metric for CATE estimation performance [28], analogous to mean squared error (MSE).
PEHE = (1/N) Σ_{i=1}^{N} (τ_i − τ̂_i)²   (17)
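Given lists of true and estimated CATEs, Eq. (17) reduces to a mean squared error; a minimal pure-Python sketch (function and variable names are illustrative, not from the paper's code):

```python
def pehe(tau_true, tau_hat):
    """Precision in Estimation of Heterogeneous Effects (Eq. 17):
    mean squared error between true and estimated CATEs."""
    assert len(tau_true) == len(tau_hat), "one estimate per unit"
    n = len(tau_true)
    return sum((t - th) ** 2 for t, th in zip(tau_true, tau_hat)) / n

# Toy example: squared errors are [0.0, 0.25, 1.0], so PEHE is their mean.
print(pehe([1.0, 2.0, 3.0], [1.0, 2.5, 2.0]))
```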
Training & Evaluation: For each method, we ran a hyperparameter search with 48 random configurations (48 due to running 8 jobs in parallel on 6 servers each) that were drawn uniformly from a pre-defined hyperparameter search space (see Appendix C.4). Methods that can be trained on multiple tasks to then be applied to tasks unseen during training (i.e., CaML and its ablations, S-learner w/ meta-learning, T-learner w/ meta-learning, SIN) were trained for 12 hours (per run) on the meta-training tasks. Model selection was performed on the meta-validation tasks by minimizing the overall PEHE for the Top-20 most differentially expressed genes (DEGs) across meta-validation tasks. Then, the best hyperparameter configuration was used to fit 8 repetition runs across 8 different random seeds. Each repetition model was then tested on the meta-testing tasks, where for all metrics averages across the testing tasks are reported.
C.2 Selecting holdout interventions for meta-validation and meta-testing

C.2.1 Claims.
In the 30.4 million patient insurance claims dataset, each intervention task in meta-training/meta-validation/meta-testing corresponds to a natural experiment of multiple patients, with some interventions (e.g., commonly prescribed drugs) having millions of associated patients who were prescribed the drug. One challenge is that in this setting, there is overlap in subjects between the natural experiments sampled by CaML, which can lead to data leakage between training and testing. For instance, if a patient received Drug 1 (in meta-testing) and Drug 2 (in meta-training), they would appear in both natural experiments, resulting in data leakage.
We take a conservative approach and exclude all patients who have ever received a meta-testing drug in their lifespan from the natural experiments for meta-val/meta-train. Similarly, we exclude all patients who received a meta-validation drug from meta-training.
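This exclusion rule can be sketched with set operations over each patient's drug history. A hypothetical sketch: `split_patients` and the toy data below are illustrative, not the paper's pipeline.

```python
def split_patients(patient_drugs, test_drugs, val_drugs):
    """Conservative leakage exclusion: a patient who ever received a
    meta-testing drug is usable only for meta-testing; a patient who
    received a meta-validation drug is excluded from meta-training.

    patient_drugs: dict patient_id -> set of drugs ever received.
    """
    train_pool, val_pool, test_pool = [], [], []
    for pid, drugs in patient_drugs.items():
        if drugs & test_drugs:
            test_pool.append(pid)   # excluded from meta-train and meta-val
        elif drugs & val_drugs:
            val_pool.append(pid)    # excluded from meta-train
        else:
            train_pool.append(pid)
    return train_pool, val_pool, test_pool

# Toy example: patient 1 received a meta-testing drug, patient 3 a
# meta-validation drug, so only patient 2 remains for meta-training.
patients = {1: {"drugA", "drugX"}, 2: {"drugB"}, 3: {"drugV"}}
train, val, test = split_patients(patients, test_drugs={"drugX"}, val_drugs={"drugV"})
print(train, val, test)  # [2] [3] [1]
```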
This approach means we must take great care in selecting meta-testing drugs. Specifically, we must trade off between selecting drugs that are important (covering enough patients) while not diminishing the training dataset size. For instance, selecting a commonly prescribed drug (e.g., aspirin) for meta-testing would deplete our meta-training dataset by over 50% of patients. Thus we only selected meta-testing/meta-validation drugs which were prescribed to between 100K and 1,000,000 patients in our dataset, after filtering for only drugs which are known to cause pancytopenia [40] (using the SIDER database). From this subset of drugs, we randomly selected 10 meta-testing drugs and 2 meta-validation drugs, resulting in a total meta-testing/meta-validation pool of 4.1 million patients and 685K patients, respectively.
To evaluate on unseen pairs of drugs on the same hold-out test dataset, we additionally created a second pairs testing dataset from the 5 most frequently occurring combinations from the meta-testing dataset. This allowed us to train a single model on the same meta-train split and evaluate on both single-drug and drug-pair interventions without occurrence of data leakage. Designing a larger evaluation of pairs was not possible because, while pairs of drugs are commonly prescribed as interventions, each particular pair of drugs is a rare event, and accurately evaluating CATE estimation performance (for a rare outcome such as pancytopenia) requires amassing a natural experiment with at least several thousand patients who received the same intervention.
C.2.2 LINCS.
The goal in selecting holdout interventions for the meta-validation and meta-testing sets was to ensure that they consisted of both cell lines and tasks (small molecules) that had not been seen previously at the time of training (i.e. zero-shot on cell lines and tasks).
Using a random data splitting approach would result in large portions (up to 50%) of the data being unused to comply with the zero-shot requirements on cell lines and tasks. One approach to tackle this was to reserve only those tasks in the held-out sets which had been tested on the fewest cell lines. This preserved the maximum amount of data but resulted in an average of just 1 cell line per task in the meta-testing and meta-validation sets, which would not be fair to the non-zero shot baselines.
To address these issues, we designed a new data split procedure that exploits the structure of how tasks and cell lines are paired. We first clustered tasks by the cell lines they are tested on. We then identified a set of 600 drugs that had all been tested on a shared set of roughly 20 cell lines. We divided the cell lines and tasks within this set into the meta-validation and meta-testing sets, while enforcing zero-shot constraints on both. This resulted in roughly 10 cell lines per intervention in both the meta-validation and meta-testing sets, while still maintaining a reasonably large size of 11 distinct cell lines and 300 distinct tasks in both sets. All remaining tasks and cell lines were reserved for the training set (see Table 8).
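The first step of this procedure, clustering tasks by the set of cell lines they were tested on, can be sketched as follows. This is an illustrative sketch under simplified assumptions (tasks are grouped by their exact tested cell-line set; the paper's actual clustering may be looser), with hypothetical names.

```python
from collections import defaultdict

def cluster_tasks_by_cell_lines(task_to_cell_lines):
    """Group tasks (drugs) that were tested on the same set of cell lines."""
    clusters = defaultdict(list)
    for task, cell_lines in task_to_cell_lines.items():
        clusters[frozenset(cell_lines)].append(task)
    return clusters

tasks = {
    "drug1": {"A", "B"},
    "drug2": {"A", "B"},
    "drug3": {"C"},
}
clusters = cluster_tasks_by_cell_lines(tasks)
# drug1 and drug2 share the same tested cell lines, so they cluster together;
# a large cluster like this could then be split between meta-validation and
# meta-testing while enforcing the zero-shot constraints.
print(clusters[frozenset({"A", "B"})])  # ['drug1', 'drug2']
```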
C.3 Understanding CaML's performance
Our comparison to CATE estimators which are restricted to single interventions (grey; Table 2 and Appendix B.3) shows that a key reason for CaML's strong performance is its ability to jointly learn from many intervention datasets in order to generalize to unseen interventions.
Additionally, in both the Claims and LINCS settings, we conduct two key ablation studies to further understand the underlying reason for CaML's strong performance results.
In our first ablation experiment (w/o meta-learning), we trained the CaML model without employing meta-learning, instead using the standard empirical risk minimization (ERM) technique [73]. This can be seen as a specific implementation of the CaML algorithm (refer to Algorithm 1) when k = 1 [52].
The results of this experiment showed a varying degree of performance deterioration across our primary tests. In the Claims settings, we observed a decrease in the RATE performance metric by 15%-22% (refer to Table 2), while in the LINCS settings, the PEHE performance metric decreased by approximately 0.01 (see Table 3). These results indicate that the absence of meta-learning affects the model's performance, although the impact varies depending on the specific setting. An important detail to consider is that the Claims data experiments dealt with substantially larger datasets, each comprising hundreds of thousands of patients per intervention. This extensive scale of data potentially amplifies the benefits of using meta-learning in the CaML model for the Claims dataset. The larger dataset enables the model to adapt to a given task over a larger set of iterations without reusing the same data, thereby enhancing the efficacy of meta-learning.
Our second ablation (w/o RA-learner) assesses the sensitivity of CaML's performance to different pseudo-outcome estimation strategies. A key aspect of CaML is flexibility in the choice of pseudo-outcome estimator used to infer CATE, in contrast to prior work which uses specific CATE estimation strategies [23,33]. We find that CaML's performance benefits strongly from this flexibility. We assess this by using an alternative pseudo-outcome estimator. Firstly, we find that this ablation results in much noisier model training. For instance, the standard deviation in RATE across the 8 random seeds increases by 20× when using the alternative pseudo-outcome estimator in the Claims setting. Moreover, the alternative pseudo-outcome estimator typically worsens performance, decreasing RATE by up to 6% in the Claims setting and increasing PEHE by 20%-21% in the LINCS setting (Table 3). We note that this ablation performs slightly better at the 0.99 threshold, which may be a result of its high variance. The specific choice of alternative pseudo-outcome estimator for this ablation varies by setting. We use the R-learner [53] for Claims as it also achieves strong single-task performance (Table 2, grey) on Claims data.

C.4.1 Claims hyperparameter space

We list the hyperparameter search spaces for the medical claims dataset in the following tables. Table 9 represents the search space for CaML. The SIN baseline consists of two stages, Stage 1 and Stage 2.
For the Stage 1 model, we searched the identical hyperparameter search space as for CaML (Table 9). For Stage 2, we used the hyperparameters displayed in Table 10. The search space for the GraphITE baseline is displayed in Table 11. For the S-learner and T-learner w/ meta-learning baselines, we use the same hyperparameter space as for CaML (Table 9), with the only major difference that these baselines predict the outcome Y instead of τ̂. For all deep learning-based methods, we employed a batch size of 8,192, except for GraphITE, where we were restricted to using a batch size of 512 due to larger memory requirements. The search spaces for the single-task neural network baselines (FlexTENet, TARNet, and DragonNet) are shown in Tables 12, 13, and 14, respectively. For the remaining baselines, i.e., the model-agnostic CATE estimators, the (shared) hyperparameter search space is shown in Table 15. Finally, we applied L1 regularization to the encoder layer of the customizable neural network models (that were not reused as external packages), i.e., SIN, GraphITE, T-learner w/ meta-learning, S-learner w/ meta-learning, and CaML.
C.4.2 LINCS hyperparameter space
We list the hyperparameter search spaces for LINCS in the following tables. CaML is shown in Table 16. SIN Stage 1 used the same search space as CaML (Table 16). The search space of SIN Stage 2 is shown in Table 17. S-learner and T-learner w/ meta-learning used the same search space as CaML. The search space of GraphITE is shown in Table 18. All methods that were applied to LINCS used a batch size of 20.
C.5 More details on intervention information
Here we give more details about the intervention information used for the medical claims dataset.
In order to perform zero-shot generalization, we acquired information about a specific intervention through the use of pretrained embeddings. We generated these embeddings on the Precision Medicine Knowledge Graph [7] that contains drug nodes as well as 9 other node types. We extracted embeddings for 7957 drugs from the knowledge graph.
To extract rich neighborhood information from the knowledge graph we used StarGraph [46], which is a coarse-to-fine representation learning algorithm. StarGraph generates a subgraph for each node by sampling from its neighbor nodes (all nodes in the one-hop neighborhood) and anchor nodes (a preselected subset of nodes appearing in the multi-hop neighborhood). In our case the anchor nodes were the 2% of graph nodes with the highest degree. For the scoring function we used the augmented version of TripleRE [85] presented in the StarGraph article [46].
We performed a hyperparameter optimization to compare different models and determine the one we used to calculate our final embeddings (see Table 19). The hyperparameter search was random, with the objective of minimizing the loss function used in training on held-out data. The search range for each of the parameters is displayed in Table 19. Since certain parameters did not seem to influence the final score as much, we decided to use them as constants and focus on optimizing the hyperparameters in the table. Therefore the number of sampled anchors was set to 20 and u = 0.1 in the augmented TripleRE function, the values matching those seen in StarGraph [46].
Our final embeddings were 256-dimensional, the learning rate was 2e-4, the drop-ratio was 5e-3. We used the self-adversarial negative sampling loss with γ = 8 and we sampled 4 neighbor nodes for each subgraph.
To additionally evaluate the quality of the embeddings we assigned classes to drug combinations and then scored them using multiple clustering metrics. We were interested to see if embeddings of drug combinations used for similar purposes would be embedded closer together than other drug combinations. For the class label of single drugs we used the first level of the Anatomical Therapeutic Chemical (ATC) code, which represents one of the 14 anatomical or pharmacological groups. Since certain medications have more than one ATC code, we took the mode of all labels for a specific drug. For multiple drugs we combined all distinct first-level values and took the mode of them as the label. We used the Silhouette metric, the Calinski-Harabasz index and the Davies-Bouldin index, as well as the average classification accuracy over 10 runs of training a random forest classifier on a random sample of 80% of the dataset and evaluating on the remaining 20%. Out of all tested embeddings the hyperparameter-optimized StarGraph embeddings performed best (exceeding 93% in the classification accuracy metric).
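The label-assignment rule for drug combinations (mode over the drugs' first-level ATC codes) can be sketched as follows; the function name and toy codes are hypothetical.

```python
from collections import Counter

def combination_label(drug_atc_codes):
    """Class label for a drug combination: the mode over all first-level
    ATC codes contributed by its drugs.

    drug_atc_codes: list of lists, one list of first-level codes per drug.
    """
    first_levels = [code for codes in drug_atc_codes for code in codes]
    # Counter.most_common(1) returns the most frequent label (mode).
    return Counter(first_levels).most_common(1)[0][0]

# Toy combination of four drugs: "N" (nervous system) appears 3 times,
# "C" (cardiovascular) twice, so the combination is labeled "N".
print(combination_label([["N"], ["N", "C"], ["C"], ["N"]]))  # N
```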
C.6 Pseudo-outcome estimation
In our experiments, we estimate pseudo-outcomes τ̂ for a given intervention w using the RA-learner [14]:

τ̂ = W(Y − μ̂_0(X)) + (1 − W)(μ̂_1(X) − Y)   (18)

where μ̂_w is an estimate of μ_w(x) = E_P[Y | X = x, W = w].
Furthermore, in both settings we only estimate CATE for treated individuals. We focus on treated individuals in the Claims setting because we care about the risk of an adverse event when prescribing sick patients drugs that may cure their sickness, not the adverse event risk of prescribing healthy patients drugs (which is of less clinical interest). In the LINCS setting, we focus on treated cells because for these cell lines Y (0) is also measured from a cloned cell line under similar laboratory conditions, which allows us to directly estimate CATE prediction performance using the PEHE metric. As we focus on treated samples, the RA-learner can be simplified to τ̂ = Y − μ̂_0(X). We estimate μ̂_0(X) using a random forest model in the Claims setting, whereas in the LINCS setting we use the point estimate from the untreated control cell line's gene expression.
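The simplified treated-only pseudo-outcome τ̂ = Y − μ̂_0(X) can be sketched in a few lines. This is an illustrative sketch: the control-outcome model below is a trivial group-mean stand-in rather than the random forest used in the Claims setting.

```python
def fit_mu0(control_outcomes):
    """Stand-in control-outcome model mu0_hat: predict the control mean
    regardless of covariates (the paper fits a random forest instead)."""
    mean = sum(control_outcomes) / len(control_outcomes)
    return lambda x: mean

def pseudo_outcomes(treated_y, mu0_hat, treated_x):
    """Treated-only RA-learner pseudo-outcome: tau_hat = Y - mu0_hat(X)."""
    return [y - mu0_hat(x) for y, x in zip(treated_y, treated_x)]

# Control group outcomes average to 1.0, so a treated outcome of 2.0 yields
# a pseudo-outcome of 1.0, and 0.5 yields -0.5.
mu0 = fit_mu0([1.0, 1.0, 1.0])
print(pseudo_outcomes([2.0, 0.5], mu0, [None, None]))  # [1.0, -0.5]
```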
C.7 Baselines
Here we provide more details on the baselines used in our experiments.
Trained on test task: These baselines leverage CATE estimators which can only be trained on a single task (typically these are the strongest baselines when there is a large enough dataset for a single task).

Zero-shot: These baselines use CATE estimators which incorporate intervention information (W ) and are capable of multi-task learning. We train these baselines on all meta-training tasks. These baselines have no access to the meta-testing tasks during training. We found in preliminary experiments that in some cases, baseline models trained with vanilla ERM would not even converge. To allow for fair comparison to baselines, we allow for all zero-shot baselines to be trained using Reptile (by training using the same optimization strategy as Algorithm 1, while allowing for training with ERM by including k = 1 in the hyperparameter search space).
Firstly, we use GraphITE [23] and Structured Intervention Networks [33]. These are, to the best of our knowledge, the only methods from prior work which are (in principle) capable of zero-shot generalization. We use existing implementations provided by the authors [33].
Additionally, we implement two strong baselines which estimate CATE by modeling potential outcomes, rather than via pseudo-outcomes. These are variants of the S-learner and T-learner [42] with meta-learning, which use the intervention information as input rather than one-hot encoded vectors of the different interventions, such that they also have zero-shot capability. Specifically, we train MLPs using the same architecture as CaML to estimate the response function from observed outcomes:
μ(x, w) = E_P[Y | X = x, W = w]   (19)
and estimate CATE by

τ̂_w(x) = μ̂(x, w) − μ̂(x, 0)   (20)
where w denotes the corresponding intervention information for an intervention, and 0 denotes a null intervention vector. In the LINCS setting, we represent 0 as a vector of zeros, whereas in the Claims setting we represent 0 as the mean embedding of all drugs (as the estimand is the increase in adverse event likelihood compared to a randomly chosen drug). The difference between the T-learner and the S-learner is that the T-learner estimates two models, one for control units and one for treated units. By contrast, the S-learner estimates a shared model across all units.
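The zero-shot CATE estimate of Eqs. (19)-(20) can be sketched directly: the response model below is a hypothetical linear stand-in for the MLP used in the paper.

```python
def mu_hat(x, w):
    """Toy stand-in for the learned response model mu_hat(x, w): the outcome
    rises with the covariates and with each intervention coordinate."""
    return sum(x) + 0.5 * sum(w)

def cate(x, w, null_w):
    """Zero-shot CATE (Eq. 20): mu_hat(x, w) - mu_hat(x, 0)."""
    return mu_hat(x, w) - mu_hat(x, null_w)

x = [1.0, 2.0]          # unit covariates
w = [1.0, 0.0]          # intervention embedding
null_w = [0.0, 0.0]     # null intervention (a zero vector, as in LINCS)
print(cate(x, w, null_w))  # 0.5
```

In the Claims setting the null vector would instead be the mean embedding of all drugs, as described above.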
This table extends Table 2 with standard deviations. (Column groups: RATE@u (↑), Recall@u (↑).)
Table 5: Performance results for the medical claims dataset, in which the task is to predict the effect of a pair of drugs on pancytopenia occurrence. Mean and standard deviation between runs is reported. Single-task methods were trained on the meta-testing tasks (best model underlined). Methods that were capable of training across multiple tasks were trained on meta-training tasks and applied to previously unseen meta-testing tasks (best model in bold). CaML outperforms the strongest baseline that had access to testing tasks on 12 out of 12 metrics, and outperforms all zero-shot baselines. Notably, due to the small sample size for natural experiments with combinations of drugs, the RATE estimation process is very noisy, which is reflected in high variability of the measured RATE. Here, the secondary metrics (Recall and Precision), which are not affected, additionally assert the dominance of CaML over all baselines.
Table 19: The hyperparameter optimization search ranges used in the selection of the optimal model for the generation of knowledge graph node embeddings that would serve as intervention information for the medical claims dataset.
Molecular embeddings [44] are used as intervention information (W ). Outcomes (Y ) of interest are post-intervention gene expression across the top-50 and top-20 differentially expressed landmark genes (DEGs) in the LINCS dataset. We did not look at all 978 genes since most do not show significant variation upon perturbation. We use 19,221 features (X) from the Cancer Cell Line Encyclopedia (CCLE) [20] to characterize each cell line.
Lemma 4.1 of [45] says that for any function k ∈ L¹, Var(k(w_i)) ≤ 4E[k′(w_i)²], which, combined with the Efron-Stein inequality (Theorem 3.1 of [5]),

Var(F(w)) ≤ Σ_{i=1}^{n} E[ Var(F(w) | w_1, . . . , w_{i−1}, w_{i+1}, . . . , w_n) ],

yields

Var(F(w)) ≤ 4E‖∇F(w)‖₂²,

which satisfies equation (11) with C = 4.
Lemma 9 (Lemma 5 of [49]). Suppose {φ_i}, {ψ_i}, i = 1, . . . , N are two sets of functions on Θ such that for each i and θ, θ′ ∈ Θ, |φ_i(θ) − φ_i(θ′)| ≤ |ψ_i(θ) − ψ_i(θ′)|. Then for all functions c: Θ → R,
Features (X ): We use 19,221 features from the Cancer Cell Line Encyclopedia (CCLE) [20] to describe each cell line, based on historical gene expression values in a different lab environment. Our dataset consisted of 99 unique cell lines (after filtering for cell lines with CCLE features).
(3) Discrete tasks from continuous interventions. CaML takes advantage of the extensive literature on CATE estimation for single, binary interventions. By creating a natural experiment for each intervention, CaML taps into this literature and benefits from the high performance of recently developed nonparametric CATE estimators [14, 53, 42].
Table 1: High-level overview of our two experimental settings. Details in Appendix C.1.
[3] Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a synaptic learning rule. Citeseer, 1990.
[4] Ioana Bica, Ahmed M Alaa, Craig Lambert, and Mihaela Van Der Schaar. From real-world patient data to individualized treatment effects using machine learning: current and future methods to address underlying challenges. Clinical Pharmacology & Therapeutics, 109(1):87-100, 2021.
[5] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration inequalities: A nonasymptotic theory of independence. Oxford University Press, 2013.
[6] Olivier Bousquet. A Bennett concentration inequality and its application to suprema of empirical processes. Comptes Rendus Mathematique, 334(6):495-500, 2002.
[7] Payal Chandak, Kexin Huang, and Marinka Zitnik. Building a knowledge graph to enable precision medicine. bioRxiv, 2022.
[8] Hong-Bin Chen, Sinho Chewi, and Jonathan Niles-Weed. Dimension-free log-Sobolev inequalities for mixture distributions. Journal of Functional Analysis, 281(11):109236, 2021.
[9] Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, and James Robins. Double/debiased machine learning for treatment and structural parameters, 2018.
[10] Victor Chernozhukov, Mert Demirer, Esther Duflo, and Ivan Fernandez-Val. Generic machine learning inference on heterogeneous treatment effects in randomized experiments, with an application to immunization in India. Technical report, National Bureau of Economic Research, 2018.
[11] Richard K Crump, V Joseph Hotz, Guido W Imbens, and Oscar A Mitnik. Nonparametric tests for treatment effect heterogeneity. The Review of Economics and Statistics, 90(3):389-405, 2008.
[12] Alicia Curth, David Svensson, Jim Weatherall, and Mihaela van der Schaar. Really doing great at estimating CATE? A critical look at ML benchmarking practices in treatment effect estimation. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.
[13] Alicia Curth and Mihaela van der Schaar. Doing great at estimating CATE? On the neglected assumptions in benchmark comparisons of treatment effect estimators. arXiv preprint arXiv:2107.13346, 2021.
[14] Alicia Curth and Mihaela van der Schaar. Nonparametric estimation of heterogeneous treatment effects: From theory to learning algorithms. In International Conference on Artificial Intelligence and Statistics, pages 1810-1818. PMLR, 2021.
[15] Alicia Curth and Mihaela van der Schaar. On inductive biases for heterogeneous treatment effect estimation. Advances in Neural Information Processing Systems, 34:15883-15894, 2021.
[16] Gerald DeJong and Raymond Mooney. Explanation-based learning: An alternative view. Machine Learning, 1986.
[17] Qiaonan Duan, Corey Flynn, Mario Niepel, Marc Hafner, Jeremy L Muhlich, Nicolas F Fernandez, Andrew D Rouillard, Christopher M Tan, Edward Y Chen, Todd R Golub, et al. LINCS Canvas Browser: interactive web app to query, browse and interrogate LINCS L1000 gene expression signatures. Nucleic Acids Research, 42(W1):W449-W460, 2014.
[18] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126-1135. PMLR, 2017.
[36] Jitender Mohan Khunger, S Arulselvi, Uma Sharma, Sunil Ranga, and VH Talib. Pancytopenia: a clinico-haematological study of 200 cases. Indian Journal of Pathology & Microbiology, 45(3):375-379, 2002.
[37] Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, et al. WILDS: A benchmark of in-the-wild distribution shifts. In International Conference on Machine Learning (ICML), 2021.
[38] Andrei V Konstantinov, Stanislav R Kirpichenko, and Lev V Utkin. Heterogeneous treatment effect with trained kernels of the Nadaraya-Watson regression. arXiv preprint arXiv:2207.09139, 2022.
[39] N Kostantinos. Gaussian mixtures and their applications to signal processing. Advanced Signal Processing Handbook: Theory and Implementation for Radar, Sonar, and Medical Imaging Real Time Systems, pages 3-1, 2000.
[40] Michael Kuhn, Ivica Letunic, Lars Juhl Jensen, and Peer Bork. The SIDER database of drugs and side effects. Nucleic Acids Research, 44(D1):D1075-D1079, 2016.
[41] R Kumar, SP Kalra, H Kumar, AC Anand, and H Madan. Pancytopenia: a six year study. The Journal of the Association of Physicians of India, 49:1078-1081, 2001.
[42] Sören R Künzel, Jasjeet S Sekhon, Peter J Bickel, and Bin Yu. Metalearners for estimating heterogeneous treatment effects using machine learning. Proceedings of the National Academy of Sciences, 116(10):4156-4165, 2019.
[43] Nikolay Kuznetsov and Alexander Nazarov. Sharp constants in the Poincaré, Steklov and related inequalities (a survey). Mathematika, 61(2):328-344, 2015.
[44] Greg Landrum et al. RDKit: Open-source cheminformatics. 2006.
[45] Michel Ledoux. Concentration of measure and logarithmic Sobolev inequalities. In Séminaire de Probabilités XXXIII, pages 120-216. Springer, 1999.
[46] Hongzhu Li, Xiangrui Gao, and Yafeng Deng. StarGraph: A coarse-to-fine representation method for large-scale knowledge graph, 2022.
[47] Michelle M Li, Kexin Huang, and Marinka Zitnik. Graph representation learning in biomedicine and healthcare. Nature Biomedical Engineering, pages 1-17, 2022.
[48] Jing Ma, Ruocheng Guo, Aidong Zhang, and Jundong Li. Multi-cause effect estimation with disentangled confounder representation. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021.
[49] Ron Meir and Tong Zhang. Generalization error bounds for Bayesian mixture algorithms. Journal of Machine Learning Research, 4(Oct):839-860, 2003.
[50] Stephen L Morgan and Christopher Winship. Counterfactuals and Causal Inference. Cambridge University Press, 2015.
[51] Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999, 2018.
[52] Alex Nichol and John Schulman. Reptile: a scalable metalearning algorithm. arXiv preprint arXiv:1803.02999, 2(3):4, 2018.
[53] Xinkun Nie and Stefan Wager. Quasi-oracle estimation of heterogeneous treatment effects. Biometrika, 108(2):299-319, 2021.
[54] Frank Nielsen and Richard Nock. On the chi square and higher-order chi distances for approximating f-divergences. IEEE Signal Processing Letters, 21(1):10-13, 2013.
[55] Hamed Nilforoshan and Eugene Wu. Leveraging quality prediction models for automatic writing feedback. In Twelfth International AAAI Conference on Web and Social Media, 2018.
[56] Lawrence E Payne and Hans F Weinberger. An optimal Poincaré inequality for convex domains. Archive for Rational Mechanics and Analysis, 5(1):286-292, 1960.
[57] Henri Poincaré. Sur les équations aux dérivées partielles de la physique mathématique. American Journal of Mathematics, pages 211-294, 1890.
[58] Zhaozhi Qian, Alicia Curth, and Mihaela van der Schaar. Estimating multi-cause treatment effects via single-cause perturbation. Advances in Neural Information Processing Systems, 34:23754-23767, 2021.
[59] Aniruddh Raghu, Maithra Raghu, Samy Bengio, and Oriol Vinyals. Rapid learning or feature reuse? Towards understanding the effectiveness of MAML. arXiv preprint arXiv:1909.09157, 2019.
[62] Shiv Kumar Saini, Sunny Dhamnani, Akil Arif Ibrahim, and Prithviraj Chavan. Multiple treatment effect estimation using deep generative model with task embedding. In The World Wide Web Conference, pages 1601-1611, 2019.
[63] Tim Salimans and Durk P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. Advances in Neural Information Processing Systems, 29, 2016.
[64] André Schlichting. Poincaré and log-Sobolev inequalities for mixtures. Entropy, 21(1):89, 2019.
[65] Jürgen Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-... hook. PhD thesis, Technische Universität München, 1987.
[66] Uri Shalit, Fredrik D Johansson, and David Sontag. Estimating individual treatment effect: generalization bounds and algorithms. In International Conference on Machine Learning, pages 3076-3085. PMLR, 2017.
C is the Poincaré constant for the domain W in L² norm. It can be bounded by 1/λ₁, where λ₁ is the first eigenvalue of the negative Laplacian of the manifold W [83]. Many previous works study the optimal Poincaré constants for various domains [43]. For example, when w is uniform over W, which is a bounded, convex, Lipschitz domain with diameter d, C ≤ d/π [56].
Thus, we train a single model for each meta-testing task on its train split, and evaluate performance on its test split. We use a number of strong baselines for CATE estimation developed by prior work, including both model-agnostic and end-to-end deep learning approaches. Specifically, we use the model-agnostic CATE estimators T-learner [42], X-learner [42], RA-learner [14], and R-learner [53]. We additionally use the end-to-end deep learning estimators DragonNet [68], TARNet [66], and FlexTENet [15], using implementations from [15]. For model-agnostic CATE estimators, we use random forest models following prior work [12, 76].
Table 4: Performance results for the Claims dataset (predicting pancytopenia onset from drug exposure using patient medical history).
Held-out test and validation drugs for our single-drug meta-testing and meta-validation datasets for our Claims evaluation in Table 2. Drugs are unseen (excluded) during training. All drugs are known to cause pancytopenia [40].
Held-out test pairs of drugs for our meta-testing and meta-validation datasets in Appendix Table B.3. Both drugs are unseen (excluded) during training. All drugs are known to cause pancytopenia [40].
Table 8: Composition of the meta-training, meta-validation and meta-testing sets for the LINCS dataset. No cell lines or drugs (tasks) were shared across any of the splits. (Columns: Split, # Perturbagens, # Cell-Lines, Mean # Cell Lines/Task.)
Table 9: Hyperparameter search space for CaML (our proposed method) on the medical claims dataset.
  Learning rate {3e-3, 1e-3, 3e-4, 1e-4}; Meta learning rate {1}; Weight decay {5e-3}; Reptile k {1, 10, 50}; L1 regularization coefficient {0, 1e-7, 5e-7}.
Table 10: Hyperparameter search space for SIN on the medical claims dataset. The SIN model consists of two stages, Stage 1 and Stage 2. For the Stage 1 model we searched the identical hyperparameter search space as for CaML (Table 9). For Stage 2, we used the hyperparameters shown in this table.
  Num. of como layers {2, 4, 6}; Num. of covariate layers {2, 4, 6}; Num. of propensity layers {2, 4, 6}; Num. of treatment layers {2, 4, 6}; Dim. of hidden como layers {128, 256}; Dim. of hidden covariate layers {128, 256}; Dim. of hidden treatment layers {128, 256}; Dim. of hidden propensity layers {16, 32, 64, 128}; Dropout {0, 0.1}; Learning rate {3e-3, 1e-3, 3e-4, 1e-4}; Meta learning rate {1}; Sin weight decay {0, 5e-3}; Pro weight decay {0, 5e-3}; GNN weight decay {0, 5e-3}; Reptile k {1, 10, 50}; L1 regularization coefficient {0, 1e-7, 5e-7}.
Table 11: Hyperparameter search space for GraphITE on the medical claims dataset.
  Num. of covariate layers {2, 4, 6}; Num. of treatment layers {2, 4, 6}; Dim. of hidden treatment layers {128, 256}; Dim. of hidden covariate layers {128, 256}; Dropout {0, 0.1}; Independence regularization coefficient {0, 0.01, 0.1, 1.0}; Learning rate {3e-3, 1e-3, 3e-4, 1e-4}; Meta learning rate {1}; Weight decay {5e-3}; Reptile k {1, 10, 50}; L1 regularization coefficient {0, 1e-7, 5e-7}.
Table 12: Hyperparameter search space for FlexTENet on the medical claims dataset.
  Num. of out layers {1, 2, 4}; Num. of r layers {2, 4, 6}; Num. units p out {32, 64, 128, 256}; Num. units s out {32, 64, 128, 256}; Num. units s r {32, 64, 128, 256}; Num. units p r {32, 64, 128, 256}; Weight decay {5e-3}; Orthogonal penalty {0, 1e-5, 1e-3, 0.1}; Private out {True, False}; Learning rate {3e-3, 1e-3, 3e-4, 1e-4}.
Table 13: Hyperparameter search space for TARNet on the medical claims dataset.
  Num. of out layers {1, 2, 4}; Num. of r layers {2, 4, 6}; Num. units out {128, 256}; Weight decay {5e-3}; Penalty disc {0, 1e-3}; Learning rate {3e-3, 1e-3, 3e-4, 1e-4}.
Table 14: Hyperparameter search space for DragonNet on the medical claims dataset.
  Weight decay {5e-3}; Learning rate {3e-3, 1e-3, 3e-4, 1e-4}.
Table 15: Hyperparameter search space for model-agnostic CATE estimators, i.e., R-learner, X-learner, RA-learner, and T-learner on the medical claims dataset.
  Criterion regress {squared error, absolute error}; Criterion binary {gini, entropy}; Max features {sqrt, log2, auto}.
Table 16: Hyperparameter search space for CaML (our proposed method) on the LINCS dataset.
  Learning rate {3e-3, 1e-3, 3e-4, 1e-4}; L1 regularization coefficient {0, 1e-7, 5e-7}.
Table 17: Hyperparameter search space for the SIN baseline on the LINCS dataset.
  Num. of como layers {2, 4, 6}; Num. of covariates layers {2, 4, 6}; Num. of propensity layers {2, 4, 6}; Num. of treatment layers {2, 4, 6}; Dim. output {128, 256}; Dim. of hidden treatment layers {128, 256}; Dim. of hidden covariate layers {128, 256}; Dim. of hidden como layers {128, 256}; Dim. of hidden propensity layers {16, 32, 64, 128}; Model dim. {512, 1024}; Dropout {0, 0.1}; Learning rate {3e-3, 1e-3, 3e-4, 1e-4}; L1 regularization coefficient {0, 1e-7, 5e-7}.
Table 18: Hyperparameter search space for the GraphITE baseline on the LINCS dataset.
  Num. of covariate layers {2, 4, 6}; Num. of treatment layers {2, 4, 6}; Num. of layers {2, 4, 6}; Dim. of hidden covariate layers {128, 256}; Independence regularization coefficient {0, 0.01, 0.1, 1.0}; Dropout {0, 0.1}; Model dim. {512, 1024}; Learning rate {3e-3, 1e-3, 3e-4, 1e-4}; Meta learning rate {0.1, 0.5, 0.9}; Weight decay {0.1}; Reptile k {1, 2, 3}; L1 regularization coefficient {0, 1e-7, 5e-7}.
Allopurinol
Test
815,921
Pregabalin
Test
636,995
Mirtazapine
Test
623,980
Indomethacin
Test
560,380
Colchicine
Test
370,397
Hydralazine
Test
363,070
Hydroxychloroquine Test
324,750
Methotrexate
Test
323,387
Memantine
Test
306,832
Fentanyl
Test
261,000
Etodolac
Val
438,854
Azathioprine
Val
100,000
Table 6Split # of Patients
Allopurinol + Hydralazine
Test
7,859
Methotrexate + Hydroxychloroquine Test
25,716
Pregabalin + Fentanyl
Test
5,424
Indomethacin + Colchicine
Test
42,846
Mirtazapine + Memantine
Test
10,215
Table 7Meta-training
9717
77
5.79
Meta-validation
304
11
9.99
Meta-testing
301
11
10.77
Table 8Hyperparameter
Search range
Num. of layers
{2, 4, 6}
Dim. of hidden layers
{128, 256}
Dropout
{0, 0.1}
Learning rate
Hyperparameter
Hyperparameter
Hyperparameter
Search range
Hyperparameter
Hyperparameter
Search range
Num. of out layers
{1, 2, 4}
Num. of r layers
{2, 4, 6}
Num. units r
{128, 256}
Num. units out
{128, 256}
Weight decay
Hyperparameter
Search range
Num. of estimators
[50, 250]
Max depth
[10, 50]
Min sample split
[2, 8]
Hyperparameter
Search range
Num. of layers
{2, 4, 6}
Dim. of hidden layers
{512, 1024}
Dropout
{0, 0.1}
Learning rate
Meta learning rate
{0.1, 0.5, 0.9}
Weight decay
{0.1}
Reptile k
{1, 2, 3}
L1 regularization coefficient
Hyperparameter
Meta learning rate
{0.1, 0.5, 0.9}
Sin weight decay
{0.0, 0.005}
Pro weight decay
{0.0, 0.005}
GNN weight decay
{0.0, 0.005}
Weight decay
{0.1}
Reptile k
{1, 2, 3}
L1 regularization coefficient
Hyperparameter
Hyperparameter
Search range
Dropout
[1e-4,1e-1]
Learning rate
[1e-5,1e-3]
Weight decay
[1e-5,1e-2]
Adversarial temperature
[1,10]
Gamma
[0,30]
Num. of sampled neighbors
0-10
Dim. of hidden layers
{ 64, 128, 256, 512}
The insurance company is undisclosed per data use agreement.
Single-task baselines are excluded from Table 3: all performed similar to or worse than the mean baseline due to low task sample size.
Acknowledgements
Dennis Frauen and Stefan Feuerriegel. Estimating individual treatment effects under unobserved confounding using binary instruments. arXiv preprint arXiv:2208.08544, 2022.
Mahmoud Ghandi, Franklin W Huang, Judit Jané-Valbuena, Gregory V Kryukov, Christopher C Lo, et al. Next-generation characterization of the cancer cell line encyclopedia. Nature, 569(7757):503-508, May 2019.
Donald P Green and Holger L Kern. Modeling heterogeneous treatment effects in survey experiments with bayesian additive regression trees. Public Opinion Quarterly, 76(3):491-511, 2012.
Lin Lawrence Guo, Ethan Steinberg, Scott Lanyon Fleming, Jose Posada, Joshua Lemmon, Stephen R Pfohl, Nigam Shah, Jason Fries, and Lillian Sung. EHR foundation models improve robustness in the presence of temporal distribution shift. medRxiv, 2022.
Shonosuke Harada and Hisashi Kashima. GraphITE: Estimating individual effects of graph-structured treatments. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 659-668, 2021.
Negar Hassanpour and Russell Greiner. Counterfactual regression with importance sampling weights. In IJCAI, pages 5880-5887, 2019.
Negar Hassanpour and Russell Greiner. Learning disentangled representations for counterfactual regression. In International Conference on Learning Representations, 2019.
Trevor Hastie, Robert Tibshirani, Jerome H Friedman, and Jerome H Friedman. The elements of statistical learning: data mining, inference, and prediction, volume 2. Springer, 2009.
Leon Hetzel, Simon Böhm, Niki Kilbertus, Stephan Günnemann, Mohammad Lotfollahi, and Fabian Theis. Predicting single-cell perturbation responses for unseen drugs. arXiv preprint arXiv:2204.13545, 2022.
Jennifer L Hill. Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20(1):217-240, 2011.
Jennifer L Hill, Jeanne Brooks-Gunn, and Jane Waldfogel. Sustained effects of high participation in an early intervention for low-birth-weight premature infants. Developmental Psychology, 39(4):730, 2003.
Timothy Hospedales, Antreas Antoniou, Paul Micaelli, and Amos Storkey. Meta-learning in neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(9):5149-5169, 2021.
Guido W Imbens and Donald B Rubin. Causal inference in statistics, social, and biomedical sciences. Cambridge University Press, 2015.
Fredrik Johansson, Uri Shalit, and David Sontag. Learning representations for counterfactual inference. In International Conference on Machine Learning, pages 3020-3029. PMLR, 2016.
Jean Kaddour, Yuchen Zhu, Qi Liu, Matt J Kusner, and Ricardo Silva. Causal effect inference for structured treatments. Advances in Neural Information Processing Systems, 34:24841-24854, 2021.
Edward H Kennedy. Optimal doubly robust estimation of heterogeneous causal effects. arXiv preprint arXiv:2004.14497, 2020.
Edward H Kennedy. Towards optimal doubly robust estimation of heterogeneous causal effects (2020). URL https://arxiv.org/abs, 2020.
Ankit Sharma, Garima Gupta, Ranjitha Prasad, Arnab Chatterjee, Lovekesh Vig, and Gautam Shroff. MetaCI: Meta-learning for causal inference in a heterogeneous population. arXiv preprint arXiv:1912.03960, 2019.
Claudia Shi, David Blei, and Victor Veitch. Adapting neural networks for the estimation of treatment effects. Advances in Neural Information Processing Systems, 32, 2019.
Yishai Shimoni, Chen Yanover, Ehud Karavani, and Yaara Goldschmnidt. Benchmarking framework for performance-evaluation of causal inference analysis. arXiv preprint arXiv:1802.05046, 2018.
Aravind Subramanian, Rajiv Narayan, Steven M Corsello, David D Peck, Ted E Natoli, et al. A next generation connectivity map: L1000 platform and the first 1,000,000 profiles. Cell, 171(6):1437-1452.e17, November 2017.
Nicholas P Tatonetti, Patrick P Ye, Roxana Daneshjou, and Russ B Altman. Data-driven prediction of drug effects and interactions. Science Translational Medicine, 4(125):125ra31-125ra31, 2012.
Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012.
Vladimir Vapnik. Principles of risk minimization for learning theory. Advances in Neural Information Processing Systems, 4, 1991.
Victor Veitch, Dhanya Sridhar, and David Blei. Adapting text embeddings for causal inference. In Conference on Uncertainty in Artificial Intelligence, pages 919-928. PMLR, 2020.
Stefan Wager. Stats 361: Causal inference, 2020.
Stefan Wager and Susan Athey. Estimation and inference of heterogeneous treatment effects using random forests. Journal of the American Statistical Association, 113(523):1228-1242, 2018.
Wei Wang, Vincent W Zheng, Han Yu, and Chunyan Miao. A survey of zero-shot learning: Settings, methods, and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 10(2):1-37, 2019.
Yixin Wang and David M Blei. The blessings of multiple causes. Journal of the American Statistical Association, 114(528):1574-1596, 2019.
Galen Weld, Peter West, Maria Glenski, David Arbour, Ryan A Rossi, and Tim Althoff. Adjusting for confounders with text: Challenges and an empirical evaluation framework for causal inference. In Proceedings of the International AAAI Conference on Web and Social Media, volume 16, pages 1109-1120, 2022.
Yongqin Xian, Bernt Schiele, and Zeynep Akata. Zero-shot learning - the good, the bad and the ugly. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4582-4591, 2017.
Steve Yadlowsky, Scott Fleming, Nigam Shah, Emma Brunskill, and Stefan Wager. Evaluating treatment prioritization rules via rank-weighted average treatment effects. arXiv preprint arXiv:2111.07966, 2021.
Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. QA-GNN: Reasoning with language models and knowledge graphs for question answering. In North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
Shing-Tung Yau. Isoperimetric constants and the first eigenvalue of a compact riemannian manifold. In Annales Scientifiques de l'École Normale Supérieure, volume 8, pages 487-507, 1975.
Jinsung Yoon, James Jordon, and Mihaela van der Schaar. GANITE: Estimation of individualized treatment effects using generative adversarial nets. In International Conference on Learning Representations, 2018.
Long Yu, Zhicong Luo, Huanyong Liu, Deng Lin, Hongzhu Li, and Yafeng Deng. TripleRE: Knowledge graph embeddings via tripled relation vectors. arXiv preprint arXiv:2209.08271, 2022.
Yao Zhang, Alexis Bellot, and Mihaela van der Schaar. Learning overlapping representations for the estimation of individualized treatment effects. In International Conference on Artificial Intelligence and Statistics, pages 1005-1014. PMLR, 2020.
Marinka Zitnik, Monica Agrawal, and Jure Leskovec. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics, 34(13):i457-i466, 2018.
DiffusionNER: Boundary Diffusion for Named Entity Recognition

Yongliang Shen, Kaitao Song†, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang
Zhejiang University; Microsoft Research Asia
In this paper, we propose DIFFUSIONNER, which formulates the named entity recognition task as a boundary-denoising diffusion process and thus generates named entities from noisy spans. During training, DIFFUSIONNER gradually adds noise to the gold entity boundaries through a fixed forward diffusion process and learns a reverse diffusion process to recover the entity boundaries. In inference, DIFFUSIONNER first randomly samples noisy spans from a standard Gaussian distribution and then generates the named entities by denoising them with the learned reverse diffusion process. The proposed boundary-denoising diffusion process allows progressive refinement and dynamic sampling of entities, empowering DIFFUSIONNER with efficient and flexible entity generation capability. Experiments on multiple flat and nested NER datasets demonstrate that DIFFUSIONNER achieves comparable or even better performance than previous state-of-the-art models.
Introduction
Named Entity Recognition (NER) is a basic task of information extraction (Tjong Kim Sang and De Meulder, 2003), which aims to locate entity mentions and label specific entity types such as person, location, and organization. It is fundamental to many structured information extraction tasks, such as relation extraction (Li and Ji, 2014;Miwa and Bansal, 2016) and event extraction (McClosky et al., 2011;Wadden et al., 2019).
Most traditional methods (Chiu and Nichols, 2016) formulate the NER task as a sequence labeling task by assigning a single label to each token. To accommodate the nested structure between entities, some methods (Ju et al., 2018; Wang et al., 2020) further devise cascaded or stacked tagging strategies.

Figure 1: Boundary diffusion in named entity recognition. The fixed forward diffusion process adds Gaussian noise to the entity boundaries at each timestep, and the noisy boundaries recover their original state by denoising with the learnable reverse diffusion process. For inference, the reverse diffusion process generates entity boundaries and performs entity typing based on the noisy spans sampled from the Gaussian distribution.

Another class of methods treats NER as a classification task on text spans (Sohrab and Miwa, 2018; Eberts and Ulges, 2020), assigning labels to word pairs (Yu et al., 2020; Li et al., 2022a) or potential spans (Lin et al., 2019). In contrast to the above works, some pioneer works (Paolini et al., 2021; Yan et al., 2021b) propose generative NER methods that formulate NER as a sequence generation task by translating structured entities into a linearized text sequence. However, due to the autoregressive manner, generation-based methods suffer from inefficient decoding. In addition, the discrepancy between training and evaluation leads to exposure bias that impairs model performance.
We move to another powerful generative model for NER, namely the diffusion model. As a class of deep latent generative models, diffusion models have achieved impressive results on image, audio and text generation (Rombach et al., 2022;Ramesh et al., 2022;Kong et al., 2021;Li et al., 2022b;Gong et al., 2022). The core idea of diffusion models is to systematically perturb the data through a forward diffusion process, and then recover the data by learning a reverse diffusion process.
Inspired by this, we present DIFFUSIONNER, a new generative framework for named entity recognition, which formulates NER as a denoising diffusion process (Sohl-Dickstein et al., 2015;Ho et al., 2020) on entity boundaries and generates entities from noisy spans. As shown in Figure 1, during training, we add Gaussian noise to the entity boundaries step by step in the forward diffusion process, and the noisy spans are progressively denoised by a reverse diffusion process to recover the original entity boundaries. The forward process is fixed and determined by the variance schedule of the Gaussian Markov chains, while the reverse process requires learning a denoising network that progressively refines the entity boundaries. For inference, we first sample noisy spans from a prior Gaussian distribution and then generate entity boundaries using the learned reverse diffusion process.
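To make the inference direction concrete, the sketch below shows a generic reverse diffusion sampling loop over 1-D boundary coordinates. The denoiser `denoise_fn` and the per-step noise scale `sigma` are hypothetical stand-ins for the learned network and its posterior variance, not the paper's implementation:

```python
import random

def reverse_diffusion(denoise_fn, K, T, sigma):
    """Start from K boundary coordinates drawn from N(0, 1) and
    iteratively refine them with a (learned) denoising transition."""
    x = [random.gauss(0.0, 1.0) for _ in range(K)]  # x_T ~ prior
    for t in range(T, 0, -1):
        mean = denoise_fn(x, t)  # predicted posterior mean mu_theta(x_t, t)
        noise = [random.gauss(0.0, 1.0) for _ in range(K)] if t > 1 else [0.0] * K
        x = [m + sigma(t) * z for m, z in zip(mean, noise)]  # sample x_{t-1}
    return x  # x_0: denoised boundary coordinates
```

With a toy denoiser that contracts every coordinate toward a target and zero posterior variance, the loop collapses the prior samples onto that target, mirroring how the learned reverse process pulls noisy spans onto entity boundaries.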
Empowered by the diffusion model, DIFFUSIONNER presents three advantages. First, the iterative denoising process of the diffusion model gives DIFFUSIONNER the ability to progressively refine the entity boundaries, thereby improving performance. Second, independent of the predefined number of noisy spans in the training stage, DIFFUSIONNER can sample a different number of noisy spans to decode entities during evaluation. Such dynamic entity sampling makes more sense in real scenarios where the number of entities is arbitrary. Third, different from the autoregressive manner of generation-based methods, DIFFUSIONNER can generate all entities in parallel within several denoising timesteps. In addition, the shared encoder across timesteps can further speed up inference. We analyze these advantages of DIFFUSIONNER further in § 6.2. In summary, our main contributions are as follows:
• DIFFUSIONNER is the first to use the diffusion model for NER, an extractive task on discrete text sequences. Our exploration provides a new perspective on diffusion models in natural language understanding tasks.
• DIFFUSIONNER formulates named entity recognition as a boundary-denoising diffusion process over noisy spans. DIFFUSIONNER is a novel generative NER method that generates entities by progressive boundary refinement over the noisy spans.
• We conduct experiments on both nested and flat NER to show the generality of DIFFUSIONNER. Experimental results show that our model achieves better or competitive performance against previous SOTA models.
2 Related Work
Named Entity Recognition
Named entity recognition is a long-standing research topic in natural language processing. Traditional methods fall into two folds: tagging-based and span-based. Tagging-based methods (Chiu and Nichols, 2016; Ju et al., 2018; Wang et al., 2020) usually perform sequence labeling at the token level and then translate the results into span-level predictions. Meanwhile, span-based methods (Sohrab and Miwa, 2018; Eberts and Ulges, 2020; Li et al., 2022a) directly perform entity classification on potential spans. Besides, some methods formulate NER as sequence-to-set or reading comprehension tasks (Li et al., 2020) for prediction. In addition, autoregressive generative NER works (Athiwaratkun et al., 2020; De Cao et al., 2021; Yan et al., 2021b) linearize structured named entities into a sequence, relying on sequence-to-sequence language models, such as BART (Lewis et al., 2020) and T5 (Raffel et al., 2020), to decode entities. These works design various translation schemas, including from word index sequences to entities (Yan et al., 2021b) and from label-enhanced sequences to entities (Paolini et al., 2021), to unify NER as a text generation task, achieving promising performance and generalizability. Other works focus on the disorder of the entities and mitigate incorrect decoding bias from a causal inference perspective. Different from previous works, our proposed DIFFUSIONNER is the first to explore generative diffusion models for NER, which enables progressive refinement and dynamic sampling of entities. Furthermore, compared with previous generation-based methods, DIFFUSIONNER decodes entities in a non-autoregressive manner, resulting in faster inference with better performance.
Diffusion Model
The diffusion model is a deep latent generative model proposed by Sohl-Dickstein et al. (2015). With the development of recent work (Ho et al., 2020), diffusion models have achieved impressive results on image and audio generation (Rombach et al., 2022; Ramesh et al., 2022; Kong et al., 2021). A diffusion model consists of a forward diffusion process and a reverse diffusion process. The former progressively disturbs the data distribution by adding noise with a fixed variance schedule (Ho et al., 2020), and the latter learns to recover the data structure. Despite the success of diffusion models in continuous state spaces (images or waveforms), their application to natural language still poses open challenges due to the discrete nature of text (Austin et al., 2021; Hoogeboom et al., 2022; Strudel et al., 2022; He et al., 2022). Diffusion-LM (Li et al., 2022b) models discrete text in continuous space through embedding and rounding operations and proposes an extra classifier as guidance to impose constraints on controllable text generation. DiffuSeq (Gong et al., 2022) and SeqDiffuSeq (Yuan et al., 2022a) extend diffusion-based text generation to a more generalized setting. They propose classifier-free sequence-to-sequence diffusion frameworks based on encoder-only and encoder-decoder architectures, respectively.
Although diffusion models have shown their generative capability on images and audio, its potential on discriminative tasks has not been explored thoroughly. Several pioneer works (Amit et al., 2021;Baranchuk et al., 2022;Chen et al., 2022) have made some attempts on diffusion models for object detection and semantic segmentation. Our proposed DIFFUSIONNER aims to solve an extractive task on discrete text sequences.
Preliminary
In diffusion models, both the forward and reverse processes can be considered Markov chains with progressive Gaussian transitions. Formally, given a data distribution $x_0 \sim q(x_0)$ and a predefined variance schedule $\{\beta_1, \ldots, \beta_T\}$, the forward process $q$ gradually adds Gaussian noise with variance $\beta_t \in (0, 1)$ at timestep $t$ to produce latent variables $x_1, x_2, \ldots, x_T$ as follows:
$$q(x_1, \ldots, x_T \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}) \tag{1}$$
$$q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t;\, \sqrt{1-\beta_t}\, x_{t-1},\, \beta_t I\big) \tag{2}$$
An important property of the forward process is that we can sample the noisy latents at an arbitrary timestep conditioned on the data $x_0$. With the notation $\alpha_t := 1 - \beta_t$ and $\bar{\alpha}_t := \prod_{s=1}^{t} \alpha_s$, we have:
$$q(x_t \mid x_0) = \mathcal{N}\big(x_t;\, \sqrt{\bar{\alpha}_t}\, x_0,\, (1 - \bar{\alpha}_t) I\big) \tag{3}$$
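For intuition, Equation (3) means a noisy latent at any timestep can be drawn in a single step, without simulating the whole chain. A minimal self-contained sketch, where the linear β-schedule values are illustrative defaults rather than taken from the paper:

```python
import math

def make_alpha_bar(T=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative products alpha_bar_t = prod_s (1 - beta_s) for a linear schedule."""
    alpha_bar, prod = [], 1.0
    for t in range(T):
        beta_t = beta_start + (beta_end - beta_start) * t / (T - 1)
        prod *= 1.0 - beta_t
        alpha_bar.append(prod)
    return alpha_bar

def q_sample(x0, alpha_bar_t, eps):
    """One-step sample x_t ~ q(x_t | x_0) = N(sqrt(ab) * x0, (1 - ab) * I)."""
    return [math.sqrt(alpha_bar_t) * x + math.sqrt(1.0 - alpha_bar_t) * e
            for x, e in zip(x0, eps)]
```

Note that `alpha_bar` decays toward zero as t grows, so the final latent is approximately standard Gaussian, which is exactly the property exploited when sampling starts from the prior.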
As $\bar{\alpha}_T$ approaches 0, $x_T$ follows the standard Gaussian distribution $p(x_T) \approx \mathcal{N}(x_T; 0, I)$. Unlike the fixed forward process, the reverse process $p_\theta(x_{0:T})$ is defined as a Markov chain with learnable Gaussian transitions starting at the prior $p(x_T) = \mathcal{N}(x_T; 0, I)$:
p θ (x 0:T ) = p (x T ) T t=1 p θ (x t−1 | x t ) p θ (x t−1 | x t ) = N (x t−1 ; µ θ (x t , t) , Σ θ (x t , t))
where \theta denotes the parameters of the model, and \mu_\theta and \Sigma_\theta are the predicted mean and covariance of q(x_{t-1} \mid x_t). We set \Sigma_\theta(x_t, t) = \sigma_t^2 I and build a neural network f_\theta to predict the data x_0, denoted as \hat{x}_0 = f_\theta(x_t, t). Then we have \mu_\theta(x_t, t) = \tilde{\mu}_t(x_t, \hat{x}_0) = \tilde{\mu}_t(x_t, f_\theta(x_t, t)), where \tilde{\mu}_t denotes the mean of the posterior q(x_{t-1} \mid x_t, \hat{x}_0). The reverse process is trained by optimizing a variational upper bound on -\log p_\theta(x_0). Following the derivation in Ho et al. (2020), the training objective of the diffusion model simplifies to training the model f_\theta(\cdot) to predict the data x_0.
Method
In this section, we first present the formulation of diffusion model for NER (i.e., the boundary denoising diffusion process) in § 4.1. Then, we detail the architecture of the denoising network for boundary reverse process in § 4.2. Finally, we describe the inference procedure of DIFFUSIONNER in § 4.3.
Boundary Denoising Diffusion Model
Given a sentence S of length M, the named entity recognition task is to extract the entities E = \{(l_i, r_i, t_i)\}_{i=1}^{N} contained in the sentence, where N is the number of entities and l_i, r_i, t_i denote the left boundary index, right boundary index, and type of the i-th entity. We formulate NER as a boundary denoising diffusion process, as shown in Figure 2. We regard entity boundaries as data samples; boundary forward diffusion adds Gaussian noise to the entity boundaries, while the reverse diffusion process progressively recovers the original entity boundaries from noisy spans.

Boundary Forward Diffusion. Boundary forward diffusion is the process of adding noise to the entity boundaries in a stepwise manner. In order to align the number of entities across instances, we first expand the entity set to a fixed number K (> N). There are two ways to expand the entities, the repetition strategy and the random strategy, which add K − N entities by duplicating entities or by sampling random spans from a Gaussian distribution.² For convenience, we use B ∈ \mathbb{R}^{K \times 2} to denote the boundaries of the K expanded entities, all normalized by the sentence length M and scaled to the (−λ, λ) interval. Formally, given the entity boundaries as data samples x_0 = B, we can obtain the noisy spans at timestep t using the forward diffusion process. According to Equation (3), we have:
x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon    (4)

where \epsilon \sim \mathcal{N}(0, I) is noise sampled from the standard Gaussian. At each timestep, the noisy spans have the same shape as x_0, i.e., x_1, x_2, \ldots, x_T \in \mathbb{R}^{K \times 2}.
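As an illustration of the boundary forward diffusion described above, the following sketch (function and variable names are ours, and the exact normalization and random-padding distribution are our assumptions) expands N gold entities to K spans, scales them to (−λ, λ), and applies Eq. (4):

```python
import numpy as np

def noisy_spans(boundaries, M, K, alpha_bar_t, lam=1.0, rng=None):
    """Illustrative boundary forward diffusion (Eq. 4); not the paper's code.

    boundaries: list of (left, right) word indices of the N gold entities.
    Expands to K spans with a random strategy, normalizes by sentence
    length M, scales to (-lam, lam), then adds Gaussian noise.
    """
    rng = rng or np.random.default_rng()
    b = np.asarray(boundaries, dtype=float)             # (N, 2)
    extra = rng.normal(0.5, 0.2, size=(K - len(b), 2))  # random-strategy padding
    x0 = np.concatenate([b / M, np.clip(extra, 0, 1)])  # normalize to [0, 1]
    x0 = (2.0 * x0 - 1.0) * lam                         # scale to (-lam, lam)
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps
    return x0, xt                                       # both shaped (K, 2)

x0, xt = noisy_spans([(2, 5), (7, 9)], M=10, K=4, alpha_bar_t=0.9)
assert x0.shape == xt.shape == (4, 2)
```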
Boundary Reverse Diffusion. Starting from noisy spans x_T sampled from the Gaussian distribution, boundary reverse diffusion adopts the non-Markovian denoising practice of DDIM (Song et al., 2021) to recover entity boundaries. Let τ be an arithmetic subsequence of the complete timestep sequence [1, ..., T] of length γ with τ_γ = T. Then we refine the noisy spans x_{τ_i} to x_{τ_{i-1}} as follows:

\hat{x}_0 = f_\theta(x_{\tau_i}, S, \tau_i)    (5)

\hat{\epsilon}_{\tau_i} = \frac{x_{\tau_i} - \sqrt{\bar{\alpha}_{\tau_i}}\, \hat{x}_0}{\sqrt{1-\bar{\alpha}_{\tau_i}}}    (6)

x_{\tau_{i-1}} = \sqrt{\bar{\alpha}_{\tau_{i-1}}}\, \hat{x}_0 + \sqrt{1-\bar{\alpha}_{\tau_{i-1}}}\, \hat{\epsilon}_{\tau_i}    (7)

where \hat{x}_0 and \hat{\epsilon}_{\tau_i} are the predicted entity boundaries and noise at timestep τ_i. f_\theta(x_t, S, t) is a learnable denoising network, whose architecture we cover in the next section (§ 4.2). After γ iterations of DDIM, the noisy spans are progressively refined to the entity boundaries.

² We will discuss these two practices in § 6.3.
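The refinement step of Eqs. (5)-(7) can be sketched as follows; this is our illustration with toy ᾱ values, and an oracle predictor stands in for the learned network f_θ:

```python
import numpy as np

def ddim_step(x_t, x0_hat, abar_t, abar_prev):
    """One boundary reverse step, Eqs. (5)-(7): estimate the noise from the
    predicted x0_hat, then re-noise to the earlier timestep."""
    eps_hat = (x_t - np.sqrt(abar_t) * x0_hat) / np.sqrt(1.0 - abar_t)       # Eq. (6)
    return np.sqrt(abar_prev) * x0_hat + np.sqrt(1.0 - abar_prev) * eps_hat  # Eq. (7)

rng = np.random.default_rng(0)
x0 = rng.uniform(-1, 1, size=(4, 2))     # "true" entity boundaries
abar = {1000: 0.01, 500: 0.5, 1: 0.999}  # toy \bar{alpha} values on a subsequence tau

# Start from a heavily noised x_T and denoise along tau = [1000, 500, 1].
x = np.sqrt(abar[1000]) * x0 + np.sqrt(1 - abar[1000]) * rng.normal(size=x0.shape)
for t, t_prev in [(1000, 500), (500, 1)]:
    x = ddim_step(x, x0, abar[t], abar[t_prev])  # oracle predictor: x0_hat = x0

# With an oracle x0 predictor, the refined spans approach the true boundaries.
assert np.max(np.abs(x - x0)) < 0.2
```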
Network Architecture
The denoising network f_\theta(x_t, S, t) accepts the noisy spans x_t and the sentence S as inputs and predicts the corresponding entity boundaries \hat{x}_0. As shown in Figure 2, we parameterize the denoising network with a sentence encoder and an entity decoder.

Sentence Encoder consists of BERT (Devlin et al., 2019) followed by a stacked bi-directional LSTM. The encoder takes the sentence S as input and outputs the sentence encoding H_S ∈ \mathbb{R}^{M \times h}. The sentence encoding H_S is calculated only once and reused across all timesteps to save computation.

Entity Decoder uses the sentence encoding H_S to first compute representations of the K noisy spans x_t and then predict the corresponding entity boundaries. Specifically, we discretize the noisy spans into word indices by rescaling, multiplying, and rounding,³ then perform mean pooling over the
4:  x_0 = B ∈ \mathbb{R}^{K \times 2}
5:  t ∼ Uniform({1, ..., T})
6:  ε ∼ \mathcal{N}(0, I)
7:  x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, ε
8:  Compute P^l, P^r and P^c by running f_\theta(x_t, S, t)
9:  Take a gradient descent step on -\sum_{i=1}^{K} [ \log P^c_i(\hat{\pi}^c(i)) + \sum_{\delta \in \{l,r\}} \log P^\delta_i(\hat{\pi}^\delta(i)) ]
10: until converged
inner-span tokens. The extracted span representations are denoted H_X ∈ \mathbb{R}^{K \times h}. To further encode the spans, we design a span encoder that consists of a self-attention layer and a cross-attention layer. The former enhances the interaction between spans, with key, query, and value all H_X. The latter fuses the sentence encoding into the span representations, with key and value H_S and query H_X. We further add the sinusoidal embedding E_t (Vaswani et al., 2017) of timestep t to the span representations. Thus the new representations \tilde{H}_X of the noisy spans are computed as:

\tilde{H}_X = \mathrm{SpanEncoder}(H_S, H_X) + E_t
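The sinusoidal timestep embedding E_t can be sketched in the common Transformer formulation (Vaswani et al., 2017); the exact variant used by the model may differ:

```python
import numpy as np

def timestep_embedding(t, dim):
    """Sinusoidal embedding of a timestep t (Vaswani et al., 2017 style);
    a common formulation, not necessarily the paper's exact variant."""
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)  # geometric frequencies
    args = t * freqs
    return np.concatenate([np.sin(args), np.cos(args)])        # shape (dim,)

e = timestep_embedding(t=500, dim=8)
assert e.shape == (8,)
assert np.all(np.abs(e) <= 1.0)
```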
Then we use two boundary pointers to predict the entity boundaries. For boundary δ ∈ {l, r}, we compute the fused representation H^δ_{SX} ∈ \mathbb{R}^{K \times M \times h} of the noisy spans and the words, and then the probability P^δ ∈ \mathbb{R}^{K \times M} of each word being the left or right boundary:

H^\delta_{SX} = H_S W^\delta_S + \tilde{H}_X W^\delta_X

P^\delta = \mathrm{sigmoid}(\mathrm{MLP}(H^\delta_{SX}))

where W^δ_S, W^δ_X ∈ \mathbb{R}^{h \times h} are two learnable matrices and MLP is a two-layer perceptron. Based on the boundary probabilities, we can predict the boundary indices of the K noisy spans. If the current step is not the last denoising step, we compute \hat{x}_0 by normalizing the indices with the sentence length M and scaling to the (−λ, λ) interval. Then we conduct the next iteration of the reverse diffusion process according to Equations (5) to (7).
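A minimal shape-level sketch of the boundary pointer computation above (ours: the two-layer MLP is replaced by a stand-in projection, and all variable names are our choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def boundary_pointer(H_S, H_X, W_S, W_X, mlp):
    """Illustrative boundary pointer: fuse sentence (M, h) and span (K, h)
    encodings into (K, M, h) via broadcasting, then score each word."""
    fused = H_S[None, :, :] @ W_S + (H_X @ W_X)[:, None, :]  # (K, M, h)
    return sigmoid(mlp(fused))                               # (K, M) probabilities

rng = np.random.default_rng(0)
K, M, h = 3, 6, 4
H_S, H_X = rng.normal(size=(M, h)), rng.normal(size=(K, h))
W_S, W_X = rng.normal(size=(h, h)), rng.normal(size=(h, h))
mlp = lambda z: z @ rng.normal(size=(h,))  # stand-in for the two-layer MLP
P = boundary_pointer(H_S, H_X, W_S, W_X, mlp)
assert P.shape == (K, M) and np.all((P >= 0) & (P <= 1))
```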
It is worth noting that we should not only locate entities but also classify them in named entity recognition. Therefore, we use an entity classifier to classify the noisy spans. The classification probability P^c ∈ \mathbb{R}^{K \times C} is calculated as follows:
P^c = \mathrm{Classifier}(\tilde{H}_X)

where C is the number of entity types and Classifier is a two-layer perceptron with a softmax layer.

Algorithm 2: Inference
1: x_T ∼ \mathcal{N}(0, I) ∈ \mathbb{R}^{K_{eval} \times 2}
2: τ is an arithmetic sequence of length γ with τ_γ = T
3: for i = γ, ..., 1 do
4:   Compute \hat{x}_0, P^l, P^r and P^c via f_\theta(x_{\tau_i}, S, \tau_i)
5:   x_{\tau_{i-1}} = \sqrt{\bar{\alpha}_{\tau_{i-1}}}\, \hat{x}_0 + \sqrt{1-\bar{\alpha}_{\tau_{i-1}}} \cdot (x_{\tau_i} - \sqrt{\bar{\alpha}_{\tau_i}}\, \hat{x}_0) / \sqrt{1-\bar{\alpha}_{\tau_i}}
6: end for
7: Decode entities \{(l_i, r_i, c_i)\}_{i=1}^{K_{eval}}, where δ_i = argmax P^δ_i, δ ∈ {l, r, c}
8: Perform post-processing on the decoded entities
9: return final entities
Training Objective. With K entities predicted from the noisy spans and N ground-truth entities, we first use the Hungarian algorithm (Kuhn, 1955) to solve the optimal matching \hat{\pi} between the two sets,⁴ as in Carion et al. (2020); \hat{\pi}(i) denotes the ground-truth entity assigned to the i-th noisy span. Then, we train the boundary reverse process by maximizing the likelihood of the prediction:

\mathcal{L} = -\sum_{i=1}^{K} \sum_{\delta \in \{l, r, c\}} \log P^\delta_i\big(\hat{\pi}^\delta(i)\big)

where \hat{\pi}^l(i), \hat{\pi}^r(i), and \hat{\pi}^c(i) denote the left boundary index, right boundary index, and type of the \hat{\pi}(i) entity. Algorithm 1 displays the whole training procedure of our model.
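The matching-based training objective can be sketched with SciPy's Hungarian solver; this is our illustration with a simplified matching cost and without the ∅ padding (names are ours, not the paper's):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_nll(P_l, P_r, P_c, gold):
    """P_l, P_r: (K, M) boundary probabilities; P_c: (K, C) type probabilities;
    gold: list of (left, right, type) ground-truth entities."""
    K = P_l.shape[0]
    cost = np.zeros((K, len(gold)))
    for j, (l, r, c) in enumerate(gold):            # pairwise matching cost
        cost[:, j] = -(P_l[:, l] + P_r[:, r] + P_c[:, c])
    rows, cols = linear_sum_assignment(cost)        # Hungarian algorithm
    loss = 0.0
    for i, j in zip(rows, cols):                    # negative log-likelihood
        l, r, c = gold[j]
        loss -= np.log(P_l[i, l]) + np.log(P_r[i, r]) + np.log(P_c[i, c])
    return loss

rng = np.random.default_rng(0)
K, M, C = 4, 6, 3
norm = lambda a: a / a.sum(axis=1, keepdims=True)
P_l, P_r, P_c = (norm(rng.uniform(0.1, 1, (K, d))) for d in (M, M, C))
assert matched_nll(P_l, P_r, P_c, [(0, 2, 1), (3, 5, 0)]) > 0
```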
Inference
During inference, DIFFUSIONNER first samples K_eval noisy spans from a Gaussian distribution, then performs iterative denoising with the learned boundary reverse diffusion process over the denoising timestep sequence τ. With the predicted probabilities for boundaries and type, we can decode K_eval candidate entities \{(l_i, r_i, c_i)\}_{i=1}^{K_{eval}}, where δ_i = argmax P^δ_i, δ ∈ {l, r, c}. After that, we employ two simple post-processing operations on these candidates: de-duplication and filtering. For spans with identical boundaries, we keep the one with the maximum type probability. For spans whose summed prediction probability is less than the threshold ϕ, we discard them. The inference procedure is shown in Algorithm 2.

(Walker et al., 2006), and GENIA (Ohta et al., 2002). ACE04 and ACE05 belong to the news domain and GENIA is in the biological domain. For flat NER, we use three common datasets: CoNLL03 (Tjong Kim Sang and De Meulder, 2003), OntoNotes (Pradhan et al., 2013), and MSRA (Levow, 2006). More details about the datasets can be found in Appendix B.
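The de-duplication and filtering post-processing of § 4.3 can be sketched as follows (data layout and names are ours):

```python
def post_process(candidates, phi):
    """Illustrative post-processing for decoded candidates.

    candidates: list of (left, right, type, p_l, p_r, p_c).
    De-duplicates by boundary pair (keeping the maximum type probability),
    then drops candidates whose summed probability falls below phi.
    """
    best = {}
    for cand in candidates:
        l, r, _, _, _, p_c = cand
        if (l, r) not in best or p_c > best[(l, r)][5]:
            best[(l, r)] = cand                          # keep max type probability
    return [c for c in best.values() if sum(c[3:]) >= phi]  # threshold filtering

cands = [(0, 2, 'PER', 0.9, 0.8, 0.95),   # duplicate boundaries, high confidence
         (0, 2, 'ORG', 0.4, 0.5, 0.3),    # duplicate boundaries, lower p_c
         (3, 5, 'LOC', 0.2, 0.3, 0.4)]    # below the threshold
kept = post_process(cands, phi=2.5)
assert [c[2] for c in kept] == ['PER']
```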
Baselines
We choose a variety of recent advanced methods as our baselines, including: 1) tagging-based methods (Straková et al., 2019; Ju et al., 2018; Wang et al., 2020); 2) span-based methods (Yu et al., 2020; Li et al., 2020; Wan et al., 2022; Lou et al., 2022; Zhu and Li, 2022; Yuan et al., 2022b); and 3) generation-based methods (Yan et al., 2021b). More details about the baselines can be found in Appendix D. For the diffusion model, the number of noisy spans K (K_eval) is set to 60, the timestep T is 1000, and the number of sampling timesteps γ is 5 with a filtering threshold ϕ = 2.5. The scale factor λ for noisy spans is 1.0. Please see Appendix C for more details.
Implementation Details
6 Results and Analysis

6.1 Performance
The results also validate that our DIFFUSIONNER can recover entity boundaries from noisy spans via boundary denoising diffusion.
Analysis
Inference Efficiency. To further validate whether DIFFUSIONNER requires more inference computation, we compare the inference efficiency of DIFFUSIONNER and other generation-based models (Yan et al., 2021a). As shown in Table 3, DIFFUSIONNER achieves better performance while maintaining a faster inference speed with the smallest parameter count. Even with a denoising timestep of γ = 10, DIFFUSIONNER is 18× and 3× faster than them. This is because DIFFUSIONNER generates all entities in parallel within several denoising timesteps, avoiding autoregressive generation of a linearized entity sequence. In addition, DIFFUSIONNER shares the sentence encoder across timesteps, which further accelerates inference.
Denoising Timesteps
We also conduct experiments to analyze the effect of the number of denoising timesteps on model performance and inference speed under various numbers of noisy spans. As shown in Figure 3, increasing the number of denoising steps yields incremental performance improvements at the cost of inference speed. Considering the trade-off between performance and efficiency, we set γ = 5 as the default. In addition, when fewer noisy spans are sampled, the improvement brought by increasing the denoising timesteps is more pronounced. This indicates that DIFFUSIONNER can effectively counterbalance the negative impact of under-sampling noisy spans by using additional timesteps.

Sampling Number. As a generative latent model, DIFFUSIONNER can decouple training and evaluation and dynamically sample noisy spans during evaluation. To demonstrate this advantage, we train DIFFUSIONNER on ACE04 with K = 60 noisy spans and evaluate it with different sampling numbers K_eval. The results are shown in Figure 4. Overall, performance improves as the number of sampled noisy spans increases. Specifically, DIFFUSIONNER performs worse when K_eval < 30; we conjecture this is because fewer noisy spans may not cover all potential entities. Sampling more than K_eval = 60 spans can also slightly improve performance. Overall, the dynamic sampling of noisy spans has two advantages: 1) performance can be improved by sampling more noisy spans; 2) dynamic sampling allows the model to predict an arbitrary number of entities in real-world applications, avoiding the limitation of a sampling number fixed at training time.
Ablation Study
Network Architecture. As shown in Table 4, we conduct experiments to investigate the network architecture of the boundary reverse diffusion process. We find that DIFFUSIONNER performs better with a stronger pre-trained language model (PLM), as evidenced by an improvement of +0.53% on ACE04 and +0.11% on CoNLL03 when using roberta-large. Additionally, for the span encoder, directly removing the self-attention between noisy spans or the cross-attention of spans to the sentence significantly impairs performance. When both are ablated, performance decreases by 1.37% on ACE04 and 1.15% on CoNLL03. These results indicate that the interaction between the spans, and between the noisy spans and the sentence, is necessary.

Variance Scheduler. The variance scheduler controls the intensity of the added noise at each timestep of the boundary forward diffusion process. Therefore, we analyze the performance of DIFFUSIONNER with different variance schedulers and noise timesteps T. The results on ACE04 and CoNLL03 are shown in Table 5. We find that the cosine scheduler generally yields superior results on ACE04, while the linear scheduler proves more effective on CoNLL03. In addition, the performance of DIFFUSIONNER varies with the choice of noise timestep, with the best performance achieved at T = 1000 for ACE04 and T = 1500 for CoNLL03.
Expansion Strategy. The expansion strategy of the entity set makes the number of K noisy spans consistent across instances during training. We conduct experiments to analyze the performance of DIFFUSIONNER with different expansion strategies and various numbers of noisy spans. The results are shown in Table 6. Generally, the random strategy achieves similar or better performance than the repetition strategy. In addition, Table 6 shows that DIFFUSIONNER is insensitive to the number of noisy spans during training. Since more noisy spans bring more computation and memory usage, we set K = 60 as the default.
Conclusion
In this paper, we present DIFFUSIONNER, a novel generative approach for NER that converts the task into a boundary denoising diffusion process. Our evaluations on six nested and flat NER datasets show that DIFFUSIONNER achieves comparable or better performance than previous state-of-the-art models. Our analyses further reveal the advantages of DIFFUSIONNER in terms of inference speed, progressive boundary refinement, and dynamic entity sampling. Overall, this study is a pioneering effort to apply diffusion models to extractive tasks on discrete text sequences, and we hope it may serve as a catalyst for more research on the potential of diffusion models in natural language understanding.
Limitations
We discuss here the limitations of the proposed DIFFUSIONNER. First, as a latent generative model, DIFFUSIONNER relies on sampling from a Gaussian distribution to produce noisy spans, which introduces randomness into entity generation. Second, DIFFUSIONNER converges slowly due to denoising training and the matching-based loss over a large number of noise timesteps. Finally, since discontinuous named entities often contain multiple fragments, DIFFUSIONNER currently lacks the ability to generate such entities. A simple classifier could be designed on top of DIFFUSIONNER to combine entity fragments and thus address discontinuous NER.
References
Tomer Amit, Eliya Nachmani, Tal Shaharbany, and Lior Wolf. 2021. Segdiff: Image segmentation with diffusion probabilistic models. ArXiv, abs/2112.00390.
Ben Athiwaratkun, Cicero Nogueira dos Santos, Jason Krone, and Bing Xiang. 2020. Augmented natural language for generative sequence labeling.

• BARTNER (Yan et al., 2021b) is also a sequence-to-sequence framework that transforms entity labels into word index sequences and decodes entities in a word-pointer manner.
• Seq2Set treats NER as a sequence-to-set task and constructs learnable entity queries to generate entities.
• UIE (Lu et al., 2022) designs a special schema for the conversion of structured information to sequences, and adopts a generative model to generate linearized sequences to unify various information extraction tasks.
• Biaffine (Yu et al., 2020) reformulates NER as a structured prediction task and adopts a dependency parsing approach for NER.
• MRC (Li et al., 2020) reformulates NER as a reading comprehension task and extracts entities to answer the type-specific questions.
• Locate&Label is a two-stage method that first regresses boundaries to locate entities and then performs entity typing.
• SpanGraph (Wan et al., 2022) utilizes a retrieval-based span-level graph to improve the span representation, which can connect spans and entities in the training data.
• LLCP (Lou et al., 2022) treats NER as latent lexicalized constituency parsing and resorts to constituency trees to model nested entities.
• BoundarySmooth (Zhu and Li, 2022), inspired by label smoothing, proposes boundary smoothing for span-based NER methods.
• Triffine (Yuan et al., 2022b) proposes a triaffine mechanism to integrate heterogeneous factors to enhance the span representation, including inside tokens, boundaries, labels, and related spans.
• Word2Word (Li et al., 2022a) treats NER as word-word relation classification and uses multi-granularity 2D convolutions to construct the 2D word-word grid representations.
Figure 2: Overview of DIFFUSIONNER: the boundary denoising diffusion process for NER with a denoising network.
For a fair comparison, we use bert-large (Devlin et al., 2019) on ACE04, ACE05, CoNLL03 and OntoNotes, biobert-large (Chiu et al., 2016) on GENIA, and chinese-bert-wwm (Cui et al., 2020) on MSRA. We adopt Adam (Kingma and Ba, 2015) as the default optimizer with a linear warmup and linear decay learning rate schedule. The peak learning rate is set to 2e-5 and the batch size is 8.
Figure 3: Analysis of denoising timestep γ on ACE04.

Figure 4: Analysis of the number of sampled noisy spans on ACE04.
Algorithm 1: Training
1: repeat
2:   Sample a sentence S with entities E from D
3:   Expand E and get entity boundaries B
Table 1: Results on nested NER datasets.

5 Experimental Settings

5.1 Datasets

For nested NER, we choose three widely used datasets for evaluation: ACE04 (Doddington et al., 2004), ACE05
Table 1 illustrates the performance of DIFFUSIONNER as well as baselines on the nested NER datasets. Our results in Table 1 demonstrate that DIFFUSIONNER is a competitive NER method, achieving comparable or superior performance compared to state-of-the-art models on nested NER. Specifically, on the ACE04 and GENIA datasets, DIFFUSIONNER achieves F1 scores of 88.39% and 81.53% respectively, with improvements of +0.77% and +0.41%. And on ACE05, our method achieves comparable results. Meanwhile, DIFFUSIONNER also shows excellent performance on flat NER, as shown in Table 2. We find that DIFFUSIONNER outperforms the baselines on OntoNotes with a +0.16% improvement and achieves a comparable F1-score on both the English CoNLL03 and the Chinese MSRA. These improvements demonstrate that our DIFFUSIONNER can locate entities more accurately due to the benefits of progressive boundary refinement, and thus obtain better performance.
Table 2: Results on flat NER datasets. † means that we reproduce the results under the same setting.
Table 3: Comparison with generation-based methods in terms of parameters, performance, and inference speed. #P means the number of parameters. All experiments are conducted on a single GeForce RTX 3090 with the same setting. The results are reported on ACE04.
Table 4: Ablation study of network architecture.
Table 5: Ablation study of variance scheduler.

Strategy     # Noisy Spans   ACE04   CoNLL03
Repetition   K = 60          88.15   92.66
             K = 120         88.49   92.54
             K = 150         88.19   92.71
Random       K = 60          88.46   92.78
             K = 120         88.53   92.79
             K = 150         88.11   92.60

Table 6: Ablation study of expansion strategy.
Xin Huang, Ashish Khetan, Rene Bidart, and Zohar Karnin. 2022. Pyramid-BERT: Reducing complexity via successive core-set based token selection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8798-8817, Dublin, Ireland. Association for Computational Linguistics.

Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1446-1459, New Orleans, Louisiana. Association for Computational Linguistics.

Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 861-871, New Orleans, Louisiana. Association for Computational Linguistics.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015.

Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. 2021. DiffWave: A versatile diffusion model for audio synthesis. In International Conference on Learning Representations.

Harold W. Kuhn. 1955. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1-2):83-97.

Gina-Anne Levow. 2006. The third international Chinese language processing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 108-117, Sydney, Australia. Association for Computational Linguistics.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.

Jingye Li, Hao Fei, Jiang Liu, Shengqiong Wu, Meishan Zhang, Chong Teng, Donghong Ji, and Fei Li. 2022a. Unified named entity recognition as word-word relation classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10965-10973.

Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 402-412, Baltimore, Maryland. Association for Computational Linguistics.

Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, and Tatsunori Hashimoto. 2022b. Diffusion-LM improves controllable text generation. ArXiv, abs/2205.14217.

Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified MRC framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5849-5859, Online. Association for Computational Linguistics.

Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2019. Sequence-to-nuggets: Nested entity mention detection via anchor-region networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5182-5192, Florence, Italy. Association for Computational Linguistics.

Chao Lou, Songlin Yang, and Kewei Tu. 2022. Nested named entity recognition as latent lexicalized constituency parsing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6183-6198, Dublin, Ireland. Association for Computational Linguistics.

Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, and Hua Wu.
2022. Unified structure generation for universal information extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5755-5772, Dublin, Ireland. Association for Computational Linguistics.

In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 375-385, Online. Association for Computational Linguistics.
Jacob Austin, Daniel D. Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van den Berg. 2021. Structured denoising diffusion models in discrete state-spaces. In Advances in Neural Information Processing Systems, volume 34, pages 17981-17993. Curran Associates, Inc.

Dmitry Baranchuk, Andrey Voynov, Ivan Rubachev, Valentin Khrulkov, and Artem Babenko. 2022. Label-efficient semantic segmentation with diffusion models. In International Conference on Learning Representations.

Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. 2020. End-to-end object detection with transformers. In Computer Vision - ECCV 2020, pages 213-229, Cham. Springer International Publishing.

Shoufa Chen, Peize Sun, Yibing Song, and Ping Luo. 2022. DiffusionDet: Diffusion model for object detection. arXiv preprint arXiv:2211.09788.

Billy Chiu, Gamal Crichton, Anna Korhonen, and Sampo Pyysalo. 2016. How to train good word embeddings for biomedical NLP. In Proceedings of the 15th Workshop on Biomedical Natural Language Processing, pages 166-174, Berlin, Germany. Association for Computational Linguistics.

Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357-370.

Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained models for Chinese natural language processing. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 657-668, Online. Association for Computational Linguistics.

Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive entity retrieval. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The automatic content extraction (ACE) program - tasks, data, and evaluation. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04), Lisbon, Portugal. European Language Resources Association (ELRA).

Markus Eberts and Adrian Ulges. 2020. Span-based joint entity and relation extraction with transformer pre-training. In Proceedings of the 24th European Conference on Artificial Intelligence, Santiago de Compostela, Spain.

Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, and Lingpeng Kong. 2022. DiffuSeq: Sequence to sequence text generation with diffusion models. arXiv preprint arXiv:2210.08933.

Zhengfu He, Tianxiang Sun, Kuanning Wang, Xuanjing Huang, and Xipeng Qiu. 2022. DiffusionBERT: Improving generative masked language models with diffusion models. arXiv preprint arXiv:2211.15029.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, volume 33, pages 6840-6851. Curran Associates, Inc.

Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, and Tim Salimans. 2022. Autoregressive diffusion models. In International Conference on Learning Representations.
                             ACE04                 ACE05                GENIA
                             Train  Dev    Test    Train  Dev    Test   Train  Test
nesting ratio (%)            45.71  46.69  45.61   38.41  34.75  37.35  17.95  21.78
average sentence length      22.50  23.02  23.05   19.21  18.93  17.2   25.35  25.99
maximum number of entities   28     22     20      27     23     17     25     14
average number of entities   3.58   3.37   3.73    3.39   3.30   2.86   3.03   2.97
                             CoNLL03               OntoNotes               Chinese MSRA
                             Train  Dev    Test    Train   Dev    Test    Train  Dev   Test
number of sentences          14041  3250   3453    49706   13900  10348   41728  4636  4365
number of entities           23499  5942   5648    128738  20354  12586   70446  4257  6181
average sentence length      14.50  15.80  13.45   24.94   20.11  19.74   46.87  46.17 39.54
maximum number of entities   20     20     31      32      71     21      125    18    461
average number of entities   1.67   1.83   1.64    2.59    1.46   1.22    1.69   0.92  1.42
Table 7: Statistics of the nested and flat datasets used in our experiments.

Hyperparameter   ACE04  ACE05  GENIA
learning rate    2e-5   3e-5   2e-5
weight decay     0.1    0.1    0.1
lr warmup        0.1    0.1    0.1
batch size       8      8      8
epoch            100    50     50
hidden size h    1024   1024   1024
threshold ϕ      2.55   2.65   2.50
scale factor λ   1.0    1.0    2.0
Hyperparameter   CoNLL03  OntoNotes  MSRA
learning rate    2e-5     2e-5       5e-6
weight decay     0.1      0.1        0.1
lr warmup        0.1      0.1        0.1
batch size       8        8          16
epoch            100      50         100
hidden size h    1024     1024       768
threshold ϕ      2.50     2.55       2.60
scale factor λ   1.0      2.0        1.0
Table 8: Detailed hyperparameter settings.
³ First scaled by 1/λ, then multiplied by M, and finally rounded to integers.

⁴ See Appendix A for the solution of the optimal match \hat{\pi}.
Acknowledgments

A Optimal Matching \hat{\pi}

Given a fixed-size set of K noisy spans, DIFFUSIONNER infers K predictions, where K is larger than the number N of entities in a sentence. One of the main difficulties of training is to assign the ground truth to the predictions. Thus we first produce an optimal bipartite matching between predicted and ground-truth entities and then optimize the likelihood-based loss.

Let \hat{Y} = \{\hat{Y}_i\}_{i=1}^{K} denote the predictions and Y the set of ground-truth entities Y_i = (l_i, r_i, c_i), where l_i, r_i, c_i are the boundary indices and type of the i-th entity. Since K is larger than the number N of entities, we pad Y with ∅ (no entity). To find a bipartite matching between these two sets, we search for a permutation of K elements π ∈ S(K) with the lowest cost:

\hat{\pi} = \arg\min_{\pi \in S(K)} \sum_{i=1}^{K} \mathcal{L}_{match}\big(\hat{Y}_i, Y_{\pi(i)}\big)

where \mathcal{L}_{match}(\hat{Y}_i, Y_{\pi(i)}) is a pair-wise matching cost between the prediction \hat{Y}_i and the ground truth Y_{\pi(i)} with index π(i). We define it as -\mathbb{1}(Y_{\pi(i)} \neq ∅) \sum_{\sigma \in \{l, r, c\}} P^\sigma_i\big(Y^\sigma_{\pi(i)}\big), where \mathbb{1}(\cdot) denotes an indicator function. Finally, the optimal assignment \hat{\pi} can be computed with the Hungarian algorithm.

B Datasets

We conduct experiments on six widely used NER datasets, including three nested and three flat datasets. Table 7 reports detailed statistics of the datasets.

C Detailed Parameter Settings

Entity boundaries are predicted at the word level, and we use max-pooling to aggregate subwords into word representations. We use multi-headed attention with 8 heads in the span encoder and add a feed-forward network layer after the self-attention and cross-attention layers. During training, we first fix the parameters of BERT and train the model for 5 epochs to warm up the parameters of the entity decoder. We tune the learning rate over {1e-5, 2e-5, 3e-5} and the threshold ϕ over the range [2.5, 2.7] with step 0.05, and select the best hyperparameter setting according to performance on the development set.
The detailed parameter settings are shown in Table 8.

D Baselines

We use the following models as baselines:

• LinearedCRF (Straková et al., 2019) concatenates the multiple labels of nested entities into one multi-label and uses a CRF-based tagger to decode flat or nested entities.

• CascadedCRF (Ju et al., 2018) stacks flat NER layers and identifies nested entities in an inside-to-outside manner.

• Pyramid (Wang et al., 2020) constructs mention representations from the bottom up by stacking flat NER layers in a pyramid and allows bidirectional interaction between layers via an inverse pyramid.

• Seq2seq (Straková et al., 2019) converts the labels of nested entities into a sequence and then uses a seq2seq model to decode entities.

                             ACE04                 ACE05                GENIA
                             Train  Dev    Test    Train  Dev    Test   Train  Test
number of sentences          6200   745    812     7194   969    1047   16692  1854
- with nested entities       2712   294    388     2691   338    320    3522   446
number of entities           22204  2514   3035    24441  3200   2993   50509  5506
- nested entities            10149  1092   1417    9389   1112   1118   9064
(Volume 1: Long Papers), pages 5755-5772, Dublin, Ireland. Association for Computational Linguistics.

David McClosky, Mihai Surdeanu, and Christopher Manning. 2011. Event extraction as dependency parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1626-1635, Portland, Oregon, USA. Association for Computational Linguistics.

Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105-1116, Berlin, Germany. Association for Computational Linguistics.

Tomoko Ohta, Yuka Tateisi, and Jin-Dong Kim. 2002. The GENIA corpus: An annotated research abstract corpus in molecular biology domain. In Proceedings of the Second International Conference on Human Language Technology Research, pages 82-86, San Francisco, USA. Morgan Kaufmann Publishers Inc.

Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang, and Stefano Soatto. 2021. Structured prediction as translation between augmented natural languages. In International Conference on Learning Representations.

Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Björkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using OntoNotes. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 143-152, Sofia, Bulgaria. Association for Computational Linguistics.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.

Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10674-10685.

Yongliang Shen, Xinyin Ma, Zeqi Tan, Shuai Zhang, Wen Wang, and Weiming Lu. 2021. Locate and label: A two-stage identifier for nested named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2782-2794, Online. Association for Computational Linguistics.

Yongliang Shen, Xiaobin Wang, Zeqi Tan, Guangwei Xu, Pengjun Xie, Fei Huang, Weiming Lu, and Yueting Zhuang. 2022. Parallel instance query network for named entity recognition. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 947-961, Dublin, Ireland. Association for Computational Linguistics.

Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2256-2265, Lille, France. PMLR.

Mohammad Golam Sohrab and Makoto Miwa. 2018. Deep exhaustive model for nested named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2843-2849, Brussels, Belgium. Association for Computational Linguistics.

Jiaming Song, Chenlin Meng, and Stefano Ermon. 2021. Denoising diffusion implicit models. In International Conference on Learning Representations.

Jana Straková, Milan Straka, and Jan Hajic. 2019. Neural architectures for nested NER through linearization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326-5331, Florence, Italy. Association for Computational Linguistics.

Robin Strudel, Corentin Tallec, Florent Altché, Yilun Du, Yaroslav Ganin, Arthur Mensch, Will Grathwohl, Nikolay Savinov, Sander Dieleman, Laurent Sifre, et al. 2022. Self-conditioned embedding diffusion for text generation. arXiv preprint arXiv:2211.04236.

Zeqi Tan, Yongliang Shen, Shuai Zhang, Weiming Lu, and Yueting Zhuang. 2021. A sequence-to-set network for nested named entity recognition. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pages 3936-3942. International Joint Conferences on Artificial Intelligence Organization. Main Track.

Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.

David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784-5789, Hong Kong, China. Association for Computational Linguistics.

Christopher Walker, Stephanie Strassel, and Kazuaki Maeda. 2006. ACE 2005 multilingual training corpus. In Linguistic Data Consortium, Philadelphia 57.

Juncheng Wan, Dongyu Ru, Weinan Zhang, and Yong Yu. 2022. Nested named entity recognition with span-level graphs. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 892-903, Dublin, Ireland. Association for Computational Linguistics.

Jue Wang, Lidan Shou, Ke Chen, and Gang Chen. 2020. Pyramid: A layered model for nested named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5918-5928, Online. Association for Computational Linguistics.

Shuhui Wu, Yongliang Shen, Zeqi Tan, and Weiming Lu. 2022. Propose-and-refine: A two-stage set prediction network for nested named entity recognition. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pages 4418-4424. International Joint Conferences on Artificial Intelligence Organization. Main Track.

Hang Yan, Junqi Dai, Tuo Ji, Xipeng Qiu, and Zheng Zhang. 2021a. A unified generative framework for aspect-based sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2416-2429, Online. Association for Computational Linguistics.

Hang Yan, Bocao Deng, Xiaonan Li, and Xipeng Qiu. 2019. TENER: Adapting transformer encoder for named entity recognition. arXiv preprint arXiv:1911.04474.

Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021b. A unified generative framework for various NER subtasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5808-5822, Online. Association for Computational Linguistics.

Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470-6476, Online. Association for Computational Linguistics.

Hongyi Yuan, Zheng Yuan, Chuanqi Tan, Fei Huang, and Songfang Huang. 2022a. SeqDiffuSeq: Text diffusion with encoder-decoder transformers. ArXiv, abs/2212.10325.

Zheng Yuan, Chuanqi Tan, Songfang Huang, and Fei Huang. 2022b. Fusing heterogeneous factors with triaffine mechanism for nested named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3174-3186, Dublin, Ireland. Association for Computational Linguistics.

Shuai Zhang, Yongliang Shen, Zeqi Tan, Yiquan Wu, and Weiming Lu. 2022. De-bias for generative extraction in unified NER task. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 808-818, Dublin, Ireland. Association for Computational Linguistics.

Enwei Zhu and Jinpeng Li. 2022. Boundary smoothing for named entity recognition. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7096-7108, Dublin, Ireland. Association for Computational Linguistics.
Contrasting string holography to its optical namesake
6 Nov 2014
D V Khveshchenko
Department of Physics and Astronomy
University of North Carolina
Chapel Hill27599NC
We assess the prospects of using metamaterials for simulating various aspects of analogue gravity and holographic correspondence. Albeit requiring a careful engineering of the dielectric media, some hallmark features reminiscent of the hypothetical 'generalized holographic conjecture' can be detected by measuring non-local optical field correlations. The possibility of such simulated behavior might also shed light on the true origin of those apparent holography-like phenomena in the condensed matter systems with emergent effective metrics which may not, in fact, require any references to the string-theoretical holography.
Over the past decade and a half, the holographic conjecture that originated from string theory (where it is known as the 'AdS/CFT' correspondence) has crossed inter-disciplinary borders and permeated other fields. However, despite the fact that some avid proponents of its broad ('non-AdS/non-CFT') generalizations routinely refer to the latter as 'a well established tool', the condensed matter community has, by and large, continued to hold back from engaging in a substantive discussion of the status of this, arguably, the most intriguing paradigm shift since the inception of quantum theory.
Indeed, compared to the original string-theoretical holography its ad hoc applications to the condensed matter and AMO systems 1 require the most radical (albeit least verified) assumptions, since such systems are generically neither very strongly coupled, nor conformally, Lorentz or even translationally/rotationally invariant and lack any supersymmetry or even an ordinary gauge symmetry with some rank-N > 1 (let alone, N ≫ 1) non-abelian group.
Such striking discrepancies notwithstanding, the available holographic machinery makes it almost too (conceptually, if not technically) easy to pursue its applications to the ever expanding list of geometries, thereby favoring computational tractability over physical relevance.
While some agreement between the holographic predictions and certain selectively chosen experimental data (for the most part, pertaining to those situations where the extreme strongly-correlated hydrodynamic regime can indeed be attained) has been claimed, there is still no consensus on either the ultimate implications of such circumstantial evidence or the general applicability conditions of the holographic approach itself. Conspicuously, though, the best quantitative agreement between the results of the holographic and alternative (e.g., Monte Carlo) calculations has been achieved in those cases where the allegedly all-important N ≫ 1 condition does not seem to play much of a role, as in the 2d Bose-Hubbard (or quantum XY-) model with N = 2 2 .
Conceivably, such non-compliance of the hypothetical generalized holography with the symmetry requirements that are viewed as instrumental in its original string-theoretical reincarnation might suggest that, apart from the common name, the two may not even be related. In that regard, it must be noted that emergent metrics and effective gravity-like descriptions are not that uncommon, with examples ranging from the thermodynamics of phase transitions to quantum information theory and tensor network states, from topological properties of the Bloch states to adiabatic time evolution and the Quantum Hall effect, etc.
Nevertheless, even in the current absence of a definitive way to unequivocally ascertain its true status, the holographic concept can still benefit from the possibility of being simulated in various controlled 'analogue' environments. For one thing, any concrete physical realization of some apparent holography-like features might contribute towards elucidating the physical phenomena responsible for such behaviors without invoking radically new hypothetical principles of nature.
Recently, one potential implementation of analogue holography in strain-engineered graphene devices was proposed 3 . In this note, we discuss the prospects of using optical metamaterials, which have long been envisioned as viable candidates for mimicking such effects of general relativity as event horizons, redshift, black, white, and worm-holes, Hawking radiation, dark energy, inflation, multiverse, Big Bang and Rip, metric signature transitions, 'end-of-time', and other cosmological scenarios 4,5 .
The above add to the list of such previously explored applications of metamaterials as space-time transformation optics, negative refraction, sub-diffraction imaging and superlensing, cloaking, waveguiding, near-perfect heat absorption/emission, and broadband photonic engineering 6 .
Specifically, it is a formal mathematical analogy between the Maxwell equations in metamaterial media and those of wave propagation in curved space-time that allows one to draw such parallels and use metamaterials as a potential playground for duplicating certain gravitational phenomena. Thus, it puts metamaterials on the already extensive list of previously proposed experimental implementations of general relativity which includes supersonic fluids, trapped Bose-Einstein condensates of cold atoms and ions, slow light in atomic vapors and nonlinear liquids, electromagnetic waveguides, exciton-polariton systems, etc 7 .
In the metamaterial setups, the locations of effective event horizons are determined by the singularities of the permittivity ǫ_ij and permeability µ_ij tensors, and the early designs of the putative 'Tamm media' involved special engineering of both functions 4 , e.g.

\epsilon^{ij} = \mu^{ij} = \sqrt{-g}\, g^{ij} / |g_{\tau\tau}|

The proposed means of creating non-uniform patterns of ǫ_ij and µ_ij include electro-optical modulation and split-ring resonators, respectively, while amongst the prospective candidate media are such metal-dielectric pairs as silver and silica, SiC and vacuum, as well as plasmonic metals and high-index dielectrics (TiO2, SiN).
Still, the task of varying both ǫ and µ, let alone maintaining their equality ('impedance matching'), can hamper any practical realization of such metrics. Another impeding factor is the invariable value of g_ττ = −1, which limits the choice of viable metrics to their equivalent ('optical') ones,

\gamma_{ij} = g_{ij}/|g_{\tau\tau}| = \epsilon_{ij}/\det\hat\epsilon = \mu_{ij}/\det\hat\mu
Another (lower-dimensional, yet potentially more practical) approach to the ways in which hyperbolic metamaterials can be used for desktop simulations of general relativity and cosmology was put forward in Refs. 5 . Their central observation was that for µ_ij = 1 and under the condition that the photon wave function varies faster than the (non-uniform) permittivity ǫ, the dispersion relation for extraordinary (TM-polarized) photons, \omega^2 = k_z^2/\epsilon_{xy} + k_{xy}^2/\epsilon_{zz}, can instead be viewed as that in an empty 3d curved space-time with the metric

ds^2 = -\epsilon_{xy}\, dz^2 - \epsilon_{zz}\, (dx^2 + dy^2)    (1)
In the hyperbolic regime ǫ_xy > 0 and ǫ_zz < 0, the momentum component k_z behaves as an effective frequency, while ω plays the role of mass. For the most part, the discussion in Refs. 5 pertained to the rotationally-invariant uniaxial metamaterial configuration, in which case, by introducing cylindrical coordinates and choosing z and r as the effective time τ and the natural 'radial' holographic variable, respectively, one arrives at the metric g_ττ = −ǫ_xy, g_rr = g_φφ/r² = −ǫ_zz.
The attainable metrics would then be described by the diagonal dielectric tensor of a two-component system with permittivities ǫ_m < 0 and ǫ_d > 0 ('metal' and 'dielectric', respectively). The dielectric properties of such a combination can be evaluated by using the Maxwell-Garnett formula

\epsilon^{(u)}_{zz}(r) = \epsilon_1 n + \epsilon_2 (1 - n), \qquad \epsilon^{(u)}_{xy}(r) = \epsilon_2\, \frac{\epsilon_1 (1 + n) + \epsilon_2 (1 - n)}{\epsilon_2 (1 + n) + \epsilon_1 (1 - n)}    (2)
where the structure factor n(r) represents the local (radially-dependent) fraction of the metallic component. The analysis of Eq. (2) shows that the viable metrics can possess either a pole or a zero in ǫ^(u)_xy(r) but only a zero in ǫ^(u)_zz(r), although those would generally occur at different values of the function n(r) and, therefore, at different radial distances.
Contrary to the assertion of Refs. 5 , though, the setup in question would be unsuitable for mimicking any 2+1-dimensional black hole type of geometry, as the latter requires the simultaneous presence of a zero in g_ττ and a pole in g_rr at some putative horizon r_h (i.e., g_ττ ∼ 1/g_rr ∼ r − r_h), whereas g_φφ should develop neither. Likewise, modelling an optical counterpart of the Schwarzschild-like metric would require γ_rr ∼ γ²_φφ ∼ (r − r_h)² and, therefore, is out of reach as well.
Instead, one finds that g_ττ, which contains the would-be emblackening factor, features either a zero at n_1 = (ǫ_m + ǫ_d)/(ǫ_d − ǫ_m) for |ǫ_m| < ǫ_d or a pole at −n_1 for |ǫ_m| > ǫ_d. In turn, the components g_rr = g_φφ/r² can develop a zero at n_2 = ǫ_d/(ǫ_d − ǫ_m). Moreover, for ǫ_m = −2ǫ_d the zero in g_rr ∼ g_φφ and the pole in g_ττ merge together at n_2 = −n_1 = 1/3.
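This merging of the zero and the pole can be checked numerically. The snippet below uses illustrative, frequency-independent permittivities (ǫ_d = 1, ǫ_m = −2ǫ_d); it verifies that the zero of ǫ^(u)_zz and the pole of ǫ^(u)_xy in Eq. (2) then coincide at n = 1/3.

```python
def eps_zz_u(n, e1, e2):
    # uniaxial Maxwell-Garnett rule for eps_zz, Eq. (2)
    return e1 * n + e2 * (1 - n)

def eps_xy_u_denominator(n, e1, e2):
    # eps_xy^(u) = e2 * (e1*(1+n) + e2*(1-n)) / (e2*(1+n) + e1*(1-n));
    # its pole sits where this denominator vanishes
    return e2 * (1 + n) + e1 * (1 - n)

e_d = 1.0            # dielectric component (illustrative value)
e_m = -2.0 * e_d     # metal tuned to eps_m = -2 eps_d

n1 = (e_m + e_d) / (e_d - e_m)   # zero (or, for -n1, pole) of eps_xy
n2 = e_d / (e_d - e_m)           # zero of eps_zz

print(n2, -n1)                              # both equal 1/3
print(eps_zz_u(n2, e_m, e_d))               # vanishes up to rounding
print(eps_xy_u_denominator(n2, e_m, e_d))   # vanishes up to rounding
```

For other ratios ǫ_m/ǫ_d the two special densities n_2 and −n_1 separate again, in line with the generic case discussed in the text.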
In this case, choosing the density profile in the near-boundary (i.e., large-radii) regime as n(r) = 1/3 + c(R/r)^{2α}, where c ≪ 1, x = Rφ, and u = R/r is the customary holographic 'inverse radial' variable, produces a metric of the general type

ds^2 = \frac{d\tau^2}{u^{2\alpha}} + R^2 \frac{du^2}{u^{2\beta}} + \frac{dx^2}{u^{2\gamma}}    (3)
with the exponents related as follows: β = 2 − α, γ = 1 − α.
For arbitrary values of α, β, and γ ≠ 0, the metric (3) can be conformally transformed to that of the hyperscaling violation (HV) variety 8

ds^2 = u^{2\theta/d} \left( \frac{d\tau^2}{u^{2\zeta}} + L^2 du^2 + \frac{dx^2}{u^2} \right)    (4)
where x is the d-dimensional spatial vector (henceforth, d = 1), L ∼ R is the overall length scale (the size of the AdS space for α = 1), while the dynamical exponent ζ and the HV parameter θ are given by the expressions

\zeta = \frac{1 - \beta + \alpha}{1 - \beta + \gamma}, \qquad \theta = \frac{1 - \beta}{1 - \beta + \gamma}    (5)
Given the above relations between the exponents in Eq. (4), however, the only attainable geometry of the HV type appears to be that of the extremal limit where θ, ζ → ∞, while their ratio θ/ζ = (α − 1)/(2α − 1) remains finite. In contrast, for the special values γ = 0, α = β = 1 the metric (4) conforms to the zero-temperature Euclidean AdS_2 × R background (a lower-dimensional counterpart of the space AdS_2 × R² which provides the arena for the popular 'semi-local' holographic scenario 9 ).
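This extremal limit can be verified with a few lines of exact rational arithmetic: substituting the uniaxial relations β = 2 − α, γ = 1 − α into Eq. (5) makes the shared denominator 1 − β + γ vanish (so ζ and θ diverge), while the ratio θ/ζ stays finite. The sample values of α below are arbitrary.

```python
from fractions import Fraction

def theta_over_zeta(alpha):
    """Ratio theta/zeta from Eq. (5); the common denominator 1 - beta + gamma
    cancels between the two exponents, so the ratio is finite even when
    zeta and theta themselves diverge."""
    beta = 2 - alpha                     # uniaxial relation below Eq. (3)
    return (1 - beta) / (1 - beta + alpha)

for a in (2, 3, 5):
    alpha = Fraction(a)
    beta, gamma = 2 - alpha, 1 - alpha
    assert 1 - beta + gamma == 0                        # zeta, theta -> infinity
    assert theta_over_zeta(alpha) == Fraction(a - 1, 2 * a - 1)
print("theta/zeta = (alpha - 1)/(2*alpha - 1) confirmed")
```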
As an alternative, one can exploit the zero of g_ττ which develops for |ǫ_m| < ǫ_d and which is not accompanied by a zero in g_rr. In this case, the density profile n(r) = n_1 + c(R/r)² gives rise to a geometry resembling the zero-temperature limit of the Rindler metric, ds² = (r/R)² dτ² + dr² + (r/R)² dx², although the actual horizon at r = 0 would be unattainable within its regime of validity.
Also, by tuning the density to the value n_2 = ±n_1 and making use of the zero in g_rr ∼ g_φφ, one could achieve the 'end-of-time' situation 5 , albeit only in the case of a boundary which is orthogonal to the effective time direction.
Lastly, for a generic choice of ǫ_m,d the pole and zeros in Eq. (1) are all separated, and the corresponding geometry bears no resemblance to any physically relevant one (this includes the original proposal of Refs. 5 , which makes use of the sole pole in g_ττ).
In the complementary, layered, metamaterial configuration, the composition rules for the dielectric tensor of a two-component hybrid system read

\epsilon^{(l)}_{zz}(z) = \frac{\epsilon_1 \epsilon_2}{\epsilon_1 (1 - n) + \epsilon_2 n}, \qquad \epsilon^{(l)}_{xy}(z) = \epsilon_1 n + \epsilon_2 (1 - n)    (6)
thus revealing a potential zero in ǫ^(l)_xy(z) and a pole in ǫ^(l)_zz(z) at, generally, two different values of n(z). The two coincide at ǫ_m = −ǫ_d, though, in which case, by choosing a power-law density profile near the z = 0 boundary, n(z) = 1/2 + c(z/L)^{2α}, and designating u = z as the holographic variable, one arrives at the HV metric (4) with α = −β = γ. The Euclidean time τ and the 1d spatial coordinate x can be chosen arbitrarily within the rotationally invariant xy plane, while the length scale L is now set by the span of the region supporting the above algebraic density profile in the z-direction.
Using Eq. (5), one then finds ζ = 1, θ = (1 + α)/(1 + 2α). Although for α = −1 the metric (4) may seem to be that of the Euclidean AdS_3, such behavior would be limited to small, rather than asymptotically large, values of u.
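These layered-configuration exponents follow directly from Eq. (5); the small check below (arbitrary sample values of α, exact rational arithmetic) confirms ζ = 1 and θ = (1 + α)/(1 + 2α) for α = −β = γ.

```python
from fractions import Fraction

def hv_exponents(alpha, beta, gamma):
    """Dynamical exponent zeta and HV parameter theta of Eq. (5)."""
    denom = 1 - beta + gamma
    return (1 - beta + alpha) / denom, (1 - beta) / denom

# layered configuration: alpha = -beta = gamma
for a in (1, 2, 3):
    alpha = Fraction(a)
    zeta, theta = hv_exponents(alpha, -alpha, alpha)
    assert zeta == 1
    assert theta == Fraction(1 + a, 1 + 2 * a)
print("layered case: zeta = 1, theta = (1 + alpha)/(1 + 2*alpha)")
```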
Besides, the effective metrics corresponding to neither the uniaxial nor the layered configuration could incorporate a thermal emblackening factor replacing the constant 1/c and vanishing at some u_h ∼ 1/T, as it would result in an unwanted divergence of g_φφ or vanishing of g_xx, respectively. It is also worth mentioning that in all the above situations both ǫ_zz and ǫ_xy have the same sign (for either sign of c), and so the metamaterial as a whole remains in the (anisotropic) metallic or insulating regime.
In the semiclassical limit ωL ≫ 1 (here ω plays the role of a mass, which justifies the use of this approximation), the propagator of a massive 3d field in a radially symmetric bulk metric is governed by the classical Euclidean action
S(τ, x) = Lω ∫ du √( g_uu + g_ττ (dτ/du)^2 + g_xx (dx/du)^2 )    (7)
whose extremal paths correspond to the geodesic trajectories. In particular, by choosing their endpoints to lie on the boundary one accesses the geodesics which dive into the bulk, thus exploring its geometry, and saturate the semiclassical boundary propagator G ω (τ, x) ∼ exp[−S(τ, x)].
After having been evaluated upon such an extremal trajectory, Eq.(7) takes the form
S(τ, x) = Lω^2 ∫_{u_0}^{u_t} du √g_uu / r(u)    (8)
where r(u) = √( ω^2 − k_x^2/g_xx(u) − k_τ^2/g_ττ(u) ) is a function of the conjugate momenta k_x = δS/δ(dx/du) and k_τ = δS/δ(dτ/du). The latter, in turn, are determined by the equations
τ = L k_τ ∫_{u_0}^{u_t} du √g_uu / (g_ττ r(u)),    x = L k_x ∫_{u_0}^{u_t} du √g_uu / (g_xx r(u))    (9)
where u_0 ∼ 1/L and the turning point satisfies r(u_t) = 0. For the sake of generality, we compute Eq. (8) for power-law permittivities ǫ_zz(u) ∼ (L/u)^{2α}, ǫ_xy(u) ∼ (u/L)^{2δ}, thus obtaining S(τ, x) ∼ |x|^{(1+δ)/(1+α+δ)} as long as α > 0 and δ > −1. This condition guarantees that the integrals in Eqs. (8,9) are dominated by the turning point u_t = (ω/√(k_τ^2 + k_x^2))^{1/α}, which moves deeper into the bulk with increasing Euclidean interval on the boundary, |x| = √(τ^2 + x^2). The semiclassical condition S(x) ≫ 1, alongside the aforementioned applicability of Eq. (1), dictates that c^{1−ζ/2θ}(Lω)^{−ζ/θ} ≪ x/L ≪ 1; these conditions are compatible for c ≪ 1 and Lω ≫ 1.
In either of the above metamaterial configurations, though, the function n(u) yields just one independent exponent, hence α = δ. In the layered case, one then obtains a stretched exponential asymptotic of the propagator
G_ω(x) ∼ exp[ −√c Lω |x/cL|^{θ/ζ} ]    (10)
where the ratio of the HV parameter to the dynamical exponent falls into the range 1/2 < θ/ζ = (1 + α)/(1 + 2α) < 1. In contrast, for the uniaxial radial-dependent density distribution with α < 1, the integrals in Eqs. (8,9) are dominated by the lower bound u_0 ∼ 1/R and, therefore, become insensitive to the position of the turning point u_t, thus rendering S(τ, x) ∼ R^{1−α}. This independence of the extremal action of its arguments indicates that the logarithm of the boundary propagator varies slower than any fractional power of either τ or x. In fact, the decay of G_ω(τ, x) appears to be algebraic and governed by the non-exponential prefactor, which is beyond the leading order of the semiclassical approximation.
At α = 1 one finds S(τ, 0) ∼ ln τ and S(0, x) ∼ x, dependences that are consistent with the 'semi-local' regime characterized by the long-ranged (power-law) temporal, yet only short-ranged (exponential) spatial, decay of the propagator G_ω(τ, x) 9 .
Lastly, for α > 1 one again finds a stretched-exponential time dependence, S(τ, 0) ∼ τ^{θ/ζ}, where 0 < θ/ζ = (α − 1)/(2α − 1) < 1/2. However, the spatial correlations decay extremely fast, thus reaching the truly localized limit, which is fully consistent with ζ → ∞.
Incidentally, the behavior (10) is akin to that found in, e.g., the studies of relaxation in disordered media whose field-theoretical description implies a strongly coupled nature of the putative 2d boundary theory. To that end, it can be reproduced by choosing the standard exponential construction of the boundary operators ψ(x) ∼ exp[iφ(x)] in terms of a 2d scalar bosonic field whose Lorentz-invariant action is governed by the quadratic part
S_boundary = (1/2ν) ∫ d^2k k^{2+θ/ζ} |φ_k|^2    (11)
where ν ∼ L^{1−θ/ζ} ω. Notably, the (anomalous) dimension [φ] = −θ/2ζ is negative, thereby resulting in a correlation function ⟨|φ_k|^2⟩ which diverges faster than 1/k^2 at small k. Incidentally, an effective Gaussian action similar to Eq. (11) emerges in the theory of thermally fluctuating membranes governed by the elastic free energy 10
F = ∫ d^d x [ (κ/2)(∇^2 h)^2 + μ v_{αβ}^2 + (λ/2) v_{αα}^2 ]    (12)
where v_{αβ} = ∂_α ξ_β + ∂_β ξ_α + ∂_α h ∂_β h is the strain tensor expressed in terms of the in-plane ξ_α(x) and out-of-plane h(x) displacements. Upon integrating out the former, an effective self-energy of the latter is generated, ΔF ∼ ∫ d^d k k^{4−η} |h_k|^2, thus overpowering the first term in Eq. (12) at small k. The anomalous exponent η characterizes the behavior of the fluctuation-renormalized bending rigidity κ ∼ 1/k^η, taking the value η_c ∼ 1 at the crumpling transition of a physical (2d) membrane embedded in 3d space.
As compared to the standard (marginally non-Fermi-liquid) Luttinger regime, where the vertex operators are given by exponentials of the ordinary 1d phonon field with a linear dispersion, the ultrasoft out-of-plane ('flexor') phonon modes with the free energy (11) result in expressly non-Fermi-liquid behavior. Also, in contrast to the holographic semi-local regime, where the operator dimensions appear to be featureless continuous functions of the momentum 9 , the pertinent exponent θ/ζ is a robust function of the given metric (4).
Thus, by properly adjusting the density profile n(z) of the metallic component one could, in principle, use the proposed setup for mimicking the correlation functions of a certain family of the non-trivial 2d boundary theories. Notably, though, establishing such a limited correspondence does not require any novel principles of nature, including the sophisticated string-theoretical constructions. However, if observed in, e.g., numerical studies of the theory (12) such a matching behavior could be (misleadingly) argued to support the generalized holographic conjecture.
In the metamaterial implementations, the propagator (10) describes static boundary correlations of the electromagnetic field and, as such, should be contrasted against its counterpart in a medium with an isotropic (negative) dielectric constant, ⟨E_ω(x)E*_{−ω}(0)⟩ ∼ exp(−ω|x|). Experimentally, such correlations can be studied by analyzing the statistics of the spatial distribution of a monochromatic optical field at the metamaterial's boundary. Conceivably, the actual measurement should employ such techniques as holographic and speckle interferometry 11 , by which a comparison can be made between the light that travels through the bulk and the outside reference beam. In particular, by measuring the local correlations, S_2(ω) = ⟨|E_ω(0)|^2⟩ ∼ G_ω(0), one can access the noise power spectrum (although the semiclassical result (10) would be inapplicable for x → 0). Also, by calculating higher-order correlations, such as the 'noise of noise' S_4(ω), one can ascertain the (non-)Gaussianity of the fluctuations and the applicability of the quadratic effective action (11). Moreover, the field distribution might be accompanied by a correlated pattern of currents associated with the local plasmon-polariton modes at the conducting metamaterial's boundary, thus allowing for alternate detection techniques based on measurements of Johnson-Nyquist current noise.
As to the practical realizations, in Refs. 5 the uniaxial devices were constructed with the use of ferromagnetic cobalt (or silver/gold) nanoparticles floating in a dielectric liquid, such as kerosene, which align themselves in filaments in the presence of a magnetic field in the z-direction. Also, polymer (PMMA) stripes deposited on gold films have been utilized 5 . Regarding the layered design, it can be manufactured from alternating metallic and dielectric layers of variable width. One such proposal involves n+-doped InGaAs for the metallic and AlGaAs/GaAs for the dielectric components, respectively 12 .
To arrive at one of the pole-zero matching points one can utilize the frequency dependence of ǫ m,d . For example, in the devices used in Refs. 5 the zero of ǫ xy becomes attainable at the long wavelength infrared frequencies.
In conclusion, we discussed the prospects of using metamaterial devices for mimicking certain holographic correspondence-like phenomena. To that end, we identified the class of metrics that, in principle, can be reproduced in the two popular (uniaxial and layered) architectures and evaluated the correlation function of monochromatic electromagnetic fluctuations at the metamaterial's boundary.
The latter function was also shown to be obtainable as the correlator of vertex operators in the theory of a strongly self-interacting 2d bosonic field, akin to that describing the thermodynamics of a fluctuating elastic membrane. This observation opens up the possibility of both simulating certain non-trivial boundary theories with custom-tailored metamaterial media and gaining a better insight into the potential origin of the apparent holography-like properties of condensed matter systems. Such properties may not, in fact, require any strained references to far-fetched generalizations of the 'bona fide' holographic principle of string theory, yet they could be misinterpreted as evidence for them.
The author acknowledges the hospitality and workshop participation support of the Aspen Center for Physics, funded by NSF Grant No. 1066293.
1. S. A. Hartnoll, Class. Quant. Grav. 26, 224002 (2009); C. Herzog, J. Phys. A42, 343001 (2009); J. McGreevy, Adv. High Energy Phys. 2010, 723105 (2010); S. Sachdev, Annual Review of Cond. Matt. Phys. 3, 9 (2012).
2. W. Witczak-Krempa, E. Sorensen, and S. Sachdev, Nature Physics 10, 361 (2014); K. Chen et al, Phys. Rev. Lett. 112, 030402 (2013).
3. D. V. Khveshchenko, Euro Phys. Lett. 104, 47002 (2013).
4. Wanli Lu et al, J. Appl. Phys. 108, 064517 (2010); T. G. Mackay and A. Lakhtakia, Phys. Lett. A374, 2305 (2010); Phys. Rev. B83, 195424 (2011); M. Li, R.-X. Miao, and Yi Pang, Phys. Lett. B689, 55 (2010); R.-X. Miao, R. Zheng, and M. Li, Phys. Lett. B696, 55 (2011); T.-M. Zhao and R.-X. Miao, Optics Letters 36, 4467 (2011).
5. I. I. Smolyaninov, J. Optics 16, 075101 (2014); Phys. Rev. A 88, 033843 (2013); J. Optics 13, 024004 (2011); I. I. Smolyaninov, Yu-Ju Hung, and E. Hwang, Phys. Lett. A 376, 2575 (2012); I. I. Smolyaninov et al, Optics Express 21, 14918 (2013); I. I. Smolyaninov, E. Hwang, and E. Narimanov, Phys. Rev. B 85, 235122 (2012); I. I. Smolyaninov and E. Narimanov, Phys. Rev. Lett. 105, 067402 (2010).
6. U. Leonhardt and T. G. Philbin, New J. Phys. 8, 247 (2006); Prog. Opt. 53, 69 (2009).
7. M. Novello, M. Visser, and G. E. Volovik (eds.), Artificial Black Holes, World Scientific, Singapore (2002); R. Schutzhold and W. G. Unruh (eds.), Quantum Analogues: From Phase Transitions to Black Holes and Cosmology, Springer Lecture Notes in Physics 718 (2007).
8. S. Kachru, X. Liu, and M. Mulligan, Phys. Rev. D 78, 106005 (2008); C. Charmousis et al, JHEP 1011, 151 (2010); ibid 1201, 089 (2012); ibid 1209, 011 (2012); B. Gouteraux and E. Kiritsis, ibid 1112, 036 (2011); ibid 1304, 053 (2013); Xi Dong et al, ibid 1107, 041 (2012); B. Gouteraux et al, ibid 1201, 089 (2012); J. Gath et al, ibid 1304, 159 (2013); N. Iizuka et al, arXiv:1105.1162; E. Perlmutter, arXiv:1205.0242; D. V. Khveshchenko, Phys. Rev. B86, 115115 (2012).
9. N. Iqbal and H. Liu, Fortsch. Phys. 57, 367 (2009); M. Cubrovic, J. Zaanen, and K. Schalm, Science 325, 439 (2009); arXiv:1012.5681; S. S. Lee, Phys. Rev. D79, 086006 (2009); H. Liu, J. McGreevy, and D. Vegh, ibid D83, 065029 (2011); T. Faulkner et al, ibid D83, 065029 (2011); T. Faulkner and J. Polchinski, JHEP 1106, 012 (2011).
10. P. Le Doussal and L. Radzihovsky, Phys. Rev. Lett. 69, 1209 (1992).
11. R. Jones and C. Wykes, Holographic and Speckle Interferometry, Cambridge Univ. Press (1989); T. Kreis, Handbook on Holographic Interferometry, Wiley-VCH (2005).
12. P. Shekhar and Z. Jacob, arXiv:1402.4475.
Tackling Loopholes in Experimental Tests of Bell's Inequality
© Oxford University Press 2021
David I Kaiser
Program in Science, Technology, and Society
Department of Physics
Massachusetts Institute of Technology
Cambridge, Massachusetts 02139, USA
In The Oxford Handbook of the History of Interpretations of Quantum Physics
Oxford University Press, 2021 (Dated: November 18, 2020)
Bell's inequality sets a strict threshold for how strongly correlated the outcomes of measurements on two or more particles can be, if the outcomes of each measurement are independent of actions undertaken at arbitrarily distant locations. Quantum mechanics, on the other hand, predicts that measurements on particles in entangled states can be more strongly correlated than Bell's inequality would allow. Whereas experimental tests conducted over the past half-century have consistently measured violations of Bell's inequality, consistent with the predictions of quantum mechanics, the experiments have been subject to one or more "loopholes," by means of which certain alternatives to quantum theory could remain consistent with the experimental results. This chapter reviews three of the most significant loopholes, often dubbed the "locality," "fair-sampling," and "freedom-of-choice" loopholes, and describes how recent experiments have addressed them.
arbitrarily distant location. 1 Quantum mechanics is not compatible with local realism and, as Bell demonstrated, quantum mechanics predicts that measurements on pairs of particles in so-called "entangled" states can be more strongly correlated than the local-realist bound would allow. 2 Virtually every published experimental test of Bell's inequality, stretching over half a century, has found results compatible with the predictions of quantum mechanics, and (hence) in violation of Bell's inequality. 3 Yet since the earliest efforts to subject Bell's inequality to experimental test, physicists have recognized that several "loopholes" must be addressed before one may conclude that local-realist alternatives to quantum mechanics really have been ruled out. The loopholes consist of logical possibilities-however seemingly remote or implausible-by which a local-realist theory could give rise to correlated measurements that mimic the expectations from quantum theory, exceeding Bell's bound. (For reviews, see Refs. [35][36][37][38][39][40].)
In this chapter, I discuss the three major loopholes that have been identified for experimental tests of Bell's inequality. In Section II, I briefly review the form of Bell's inequality on which most experimental efforts have been focused. This form, which was introduced by John Clauser, Michael Horne, Abner Shimony, and Richard Holt [41] soon after Bell published his original paper on the topic, is usually referred to as the "Bell-CHSH inequality." Several of those physicists, often in close dialogue with Bell himself, were also among the first to identify various loopholes. The first of these, known as the "locality loophole," is the subject of Section III. In Section IV, I discuss the "fair-sampling loophole," while in Section V I turn to the "freedom-of-choice loophole." Brief concluding remarks follow in Section VI. As described below, tests of Bell's inequality have been performed on many different physical systems, subjecting different types of particles to measurements with different types of detectors. In this chapter I focus primarily on conceptual analysis of the various loopholes, more than on the details of particular experimental implementations.

[Footnote 1] Early work on Bell's inequality, including Bell's first derivation [1], was deeply influenced by the EPR paper [3], in which the authors argued that particles should be considered to have definite properties on their own, prior to and independent of physicists' efforts to measure them ("realism"), and that distant events should not influence local ones arbitrarily quickly ("locality"). More recent work has clarified the minimal requirements for Bell's inequality to hold: the measurement outcome at one detector should not depend on either the detector setting or the measurement outcome at a distant detector, and the selection of detector settings on each experimental run should be independent of the properties of the particles to be measured.
For recent, succinct discussions of "local realism" in the context of Bell's inequality, see the Appendix of Ref. [4] and Section 3.1 of Ref. [5]. Note that Bell's inequality does not apply to formulations such as Bohmian mechanics [6,7], which, as Bell [1] noted, has a "grossly non-local structure."

[Footnote 2] For historical treatments, see Refs. [8-14]. For a range of philosophical responses, see Refs. [5, 15-24]. Recent popular accounts include Refs. [25-32].

[Footnote 3] The only published experimental test of Bell's inequality that appeared to contradict the predictions from quantum theory was Ref. [33], though that experiment was criticized in Refs. [34,35].

For any pair of detector settings (a, b), we may construct the correlation function
E(a, b) ≡ ⟨A(a) B(b)⟩,    (1)
where the angular brackets indicate averages over the many experimental runs in which pairs of particles were subjected to measurements with detector settings (a, b). For measurements of a property such as spin, the outcomes A(a) and B(b) can only ever be ±1, so on any given experimental run, the product A(a) B(b) can only ever be ±1. Upon averaging over many runs in which the detector settings were (a, b), the correlation function E(a, b) therefore satisfies −1 ≤ E(a, b) ≤ 1.

[Footnote 4] David Bohm first suggested that EPR-type experiments could be conducted using measurements of observables such as spin, which have discrete sets of possible measurement outcomes, in his influential textbook on quantum mechanics: Ref. [42], pp. 614-622. Bell was inspired by Bohm's variation while working on Ref. [1]. (See Ref. [12], pp. 31-37.) On the wider impact of Bohm's textbook, see Ref. [13], chap. 8. A beautiful variation on Bell's original argument, which (in principle) can force an empirical contradiction between predictions from local realism and quantum mechanics with a single set of measurements rather than statistical averages over many experimental runs, concerns measurements of a discrete observable such as spin on N-particle entangled states, with N ≥ 3. See Refs. [43,44].
One might try to account for the behavior of such correlation functions E(a, b) by constructing a local-realist theory and using it to calculate p(A, B|a, b), the conditional probability that physicists would find measurement outcomes A and B upon selecting detector settings a and b. Bell [1,46] argued that within any local-realist formulation, such conditional probabilities would take the form
p(A, B|a, b) = ∫ dλ p(λ) p(A, B|a, b, λ) = ∫ dλ p(λ) p(A|a, λ) p(B|b, λ).    (2)
Here λ represents all the properties of the particles prepared at the source σ that could affect the measurement outcomes A and B. Bell imagined that whatever specific form the variables λ took, their values on a given experimental run would be governed by some probability distribution p(λ).
Given detector setting a at the left detector, there would be some probability p(A|a, λ) to find measurement outcome A at that detector, and likewise some probability p(B|b, λ) to find outcome B at the right detector given detector setting b. 5 Note that these expressions encode "locality": nothing about the probability to find outcome B at the right detector depends on either the setting (a) or the outcome (A) at the distant detector, and vice versa [1,2,46]. (For a helpful and succinct discussion, see Ref. [5].) In his original derivation, Bell [1] quoted from Einstein's "Autobiographical Notes." As Einstein had written, "But on one supposition we should, in my opinion, absolutely hold fast: the real factual situation of the system S_2 [the particle being measured at the right detector] is independent of what is done with the system S_1 [the particle at the left detector], which is spatially separated from the former." (Ref. [47], p. 85. See also Refs. [48-50].) See Fig. 2.
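As an illustration (ours, not the chapter's), the factorized form of Eq. (2) can be simulated directly with a toy deterministic local model: each pair carries a hidden angle λ, and each detector's ±1 outcome depends only on its own setting and λ. Any model of this form keeps the Bell-CHSH combination within the local-realist bound of 2.

```python
import math
import random

random.seed(0)

def outcome(setting, lam):
    """Deterministic local response: depends only on this detector's own
    setting angle and the shared hidden variable lam."""
    return 1 if math.cos(setting - lam) >= 0 else -1

def correlation(a, b, n=200_000):
    """Monte Carlo estimate of E(a,b) = <A(a) B(b)>, lam ~ Uniform[0, 2*pi)."""
    total = 0
    for _ in range(n):
        lam = random.uniform(0.0, 2.0 * math.pi)
        total += outcome(a, lam) * outcome(b, lam)
    return total / n

# Detector settings (angles chosen for illustration). For this model the
# analytic correlation is E(a,b) = 1 - 2|a - b|/pi (for |a - b| <= pi),
# so these settings give |S| close to 2, saturating the classical bound.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = correlation(a, b) + correlation(a2, b) - correlation(a, b2) + correlation(a2, b2)
print(f"|S| = {abs(S):.3f} (local-realist bound: 2)")
```

No choice of angles pushes |S| above 2 in such a model, whereas the quantum prediction discussed below reaches 2√2.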
Bell's inequality can be cast in a particularly simple form if we consider experiments in which each particle is subjected to measurement in one of two detector settings: either a or a at the left detector, and b or b at the right detector. Then one may consider a particular combination of correlation functions, as one varies the pairs of detector settings:
S ≡ E(a, b) + E(a′, b) − E(a, b′) + E(a′, b′).    (3)
The quantity S, first derived by Clauser, Horne, Shimony, and Holt [41], is known as the Bell-CHSH parameter. Closely following Bell's original reasoning in Ref. [1], the CHSH authors demonstrated that

S ≤ 2.    (4)

[Footnote 5] In his original derivation, Bell [1] assumed that the measurement outcomes were governed by deterministic functions A(a, λ) and B(b, λ). He generalized his derivation to stochastic models, with conditional probabilities p(A|a, λ) and p(B|b, λ), in Ref. [46].
Eq. (4) is known as the Bell-CHSH inequality. 6 A straightforward calculation (see, e.g., Ref. [52], pp. 223-232) suffices to show that for pairs of particles prepared in a maximally entangled state, such as
|Ψ^(±)⟩ = (1/√2) ( |+1⟩_A ⊗ |−1⟩_B ± |−1⟩_A ⊗ |+1⟩_B ),    (5)
quantum mechanics predicts that S could violate the bound in Eq. (4), achieving a maximum value
S^max_QM = 2√2.    (6)
According to quantum mechanics, the value S^max_QM should arise for particular choices of settings (a, a′) and (b, b′). The value S^max_QM = 2√2 is known as the "Tsirelson bound" [53].
Consider, for example, an experiment involving pairs of linearly polarized photons prepared in the state |Ψ^(±)⟩ of Eq. (5), with |+1⟩ → |H⟩ (horizontal polarization) and |−1⟩ → |V⟩ (vertical polarization) with respect to some orientation in space. If the photons travel along the z axis toward each detector, then the detector settings (a, a′, b, b′) are simply unit vectors pointing at various angles within the x-y plane, along which polarizing filters could be oriented. A photon in state |H⟩_A with respect to orientation a would yield measurement outcome A(a) = +1, whereas a photon in state |V⟩_A along a would yield A(a) = −1. 7 For this set-up, the quantum-mechanical prediction for the correlation function is simply E(a, b) = −cos(2θ_ab), where cos θ_ab ≡ a · b. 8 In that case, the quantum prediction can reach the Tsirelson bound S^max_QM = 2√2 for appropriate choices of the relative angles among the settings. On the other hand, both local-realist theories and quantum mechanics predict that S ≤ 2 for choices such that a · b = a′ · b′ = 0 or 1, regardless of the angle between a and a′.

[Footnote 6] As in Ref. [1], the original CHSH derivation [41] applied to local-realist models in which the measurement outcomes were given by deterministic functions A(a, λ) and B(b, λ). The CHSH inequality in Eq. (4) also applies to stochastic models in which A(a, λ) → p(A|a, λ) and B(b, λ) → p(B|b, λ). See, e.g., Appendix A of Ref. [51] and Ref. [5].
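Using the quantum prediction E(a, b) = −cos(2θ_ab) quoted above, a few lines of arithmetic confirm both statements: suitable polarizer angles reach |S| = 2√2, while aligned or perpendicular settings keep |S| at 2. (This check is ours; the specific angles below are the standard optimal choices for polarization measurements.)

```python
import math

def E(theta_ab):
    """Quantum prediction for polarization correlations: E = -cos(2*theta_ab)."""
    return -math.cos(2 * theta_ab)

def S(a, ap, b, bp):
    """Bell-CHSH parameter S = E(a,b) + E(a',b) - E(a,b') + E(a',b')."""
    return E(a - b) + E(ap - b) - E(a - bp) + E(ap - bp)

deg = math.pi / 180

# Standard angle choices that maximize the quantum violation:
s_opt = S(0, 45 * deg, 22.5 * deg, 67.5 * deg)
print(f"|S| at optimal angles = {abs(s_opt):.4f} "
      f"(Tsirelson bound = {2 * math.sqrt(2):.4f})")

# Aligned/perpendicular settings give no violation:
s_aligned = S(0, 90 * deg, 0, 90 * deg)
print(f"|S| with aligned settings = {abs(s_aligned):.4f}")
```

The optimal choice corresponds to θ_ab = θ_a′b = θ_a′b′ = 22.5° and θ_ab′ = 67.5°, each correlation contributing ±1/√2 with the signs conspiring to give |S| = 2√2.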
The size of the difference between predictions from local-realist theories and from quantum mechanics depends on the relative orientations of the detector settings; care was therefore needed in choosing the directions along which the protons' spins were measured, to avoid only measuring in bases such that a · b = a′ · b′ = 0 or 1.
As Stapp wrote, "The precise experiments considered here have not all actually been performed.
But they are only slight variations of experiments that have been performed" [55]. 9

[Footnote 7] The earliest experimental tests involving polarized photons used single-channel measuring devices at each detector. Hence (most) photons whose polarization aligned with the orientation of the polarizing filter would pass through the filter and be registered by a device such as a photomultiplier tube, yielding A(a) = +1, whereas photons whose polarization was perpendicular to the orientation of the polarizing filter would yield no registered detection, coded as A(a) = −1. In practice, such an approach combined data on perpendicular polarization with all other reasons that the detector might have failed to register a photon on a given experimental run. Later experiments adopted two-channel measuring devices at each detector, taking advantage of the fact that a photon in state |H⟩_A with respect to orientation a will pass through the polarizing filter along a distinct trajectory from that of a photon in state |V⟩_A. See, e.g., Refs. [5,54].

[Footnote 8] The form of E(a, b) in this case is easy to understand. When measured along the same orientation in space, a = b and hence θ_ab = 0°, pairs of polarized photons prepared in the state |Ψ^(+)⟩ of Eq. (5) should be perfectly anti-correlated, with E(a, b) = −1. If the photons in that state are measured along perpendicular orientations in space, with θ_ab = 90°, then the polarization measurements should be perfectly correlated, with E(a, b) = +1. The variation of E(a, b) with θ_ab follows from considering measurements in rotated bases. For example, if the polarizer at the left detector is rotated by angle ϕ, the eigenstates of the new detector setting ã will be given by |H̃(ϕ)⟩_A = cos ϕ |H⟩_A + sin ϕ |V⟩_A and |Ṽ(ϕ)⟩_A = −sin ϕ |H⟩_A + cos ϕ |V⟩_A.

[Footnote 9] Stapp originally composed and circulated Ref. [55] during the summer of 1968, in advance of a conference on "Quantum Theory and Beyond," which was held in Cambridge, England.
The paper was not published in the conference proceedings, and Stapp later released the paper as a technical report from the Lawrence Berkeley Laboratory "to fill continuing requests" [55]. On Stapp's training and his early interest in Bell's inequality, see Ref. [12], pp. 55-56. In 1976, around the time that Stapp circulated his technical report, M. Lamehi-Rachti and W. Mittig [56] reported results of their analysis of violations of Bell's inequality using low-energy proton-proton scattering data. They reported good agreement with the quantum-mechanical predictions, though given experimental limitations they needed to make several additional assumptions in order to put the proton scattering data into a form with which to test Bell's inequality, beyond those typically required of Bell tests [35].

Holt and Pipkin measured results suggesting a strong compatibility with local-realist theories, equivalent to S = 1.728 ± 0.104, easily consistent with the Bell-CHSH bound S ≤ 2 and in strong disagreement with the prediction from quantum mechanics. They circulated a preprint of their results but never

[Footnote 11] Freedman and Clauser [58] reported their results in terms of a quantity Δ(ϕ), closely related to the Bell-CHSH parameter S of Eq. (3). For choices of detector settings such that θ_ab = θ_a′b = θ_a′b′ = ϕ and θ_ab′ = 3ϕ, the quantities are related by S = |4Δ(ϕ) + 2|. They measured Δ(22.5°) = 0.104 ± 0.026 and Δ(67.5°) = −1.097 ± 0.018, which represent violations of S ≤ 2 by 4.0 and 5.4 standard deviations, respectively. None of the early experiments found results close to saturating the (theoretical) Tsirelson bound of quantum mechanics, S^max_QM = 2√2 ≈ 2.83, largely because of limited detector efficiencies, even though they were able to measure S > 2 to high statistical significance. On subtleties of normalization for various Bell-like inequalities, which can complicate direct comparisons between parameters like the Bell-CHSH parameter S and related quantities, see Ref. [4].
had not attempted to repeat their measurements [8,35,61]. 12 Since Freedman and Clauser's original experiment [58], virtually every published test has measured violations of Bell's inequality, consistent with the quantum-mechanical predictions [35,38].
One might therefore ask: following the 1976 repetitions of the Holt-Pipkin experiment [59,61], alongside similar experiments completed during the mid-1970s [8,35], why have physicists continued to subject Bell's inequality to experimental test? The answer is that each of the early experiments was subject to multiple loopholes: explanations consistent with local realism that could (in principle) account for the experimental results.
III. THE LOCALITY LOOPHOLE
The first loophole that physicists identified for tests of Bell's inequality is often referred to as the "locality loophole." It concerns the flow of information during a given experimental run. In particular, could any communication among elements of the experiment-with information traveling at or below the speed of light-account for the strong correlation among measurement outcomes, even if the particles being subjected to test really did obey local realism?
Each experimental run in a test of the Bell-CHSH inequality involves (at least) five relevant

[Footnote 12] Holt and Pipkin actually measured a quantity closely related to the Bell-CHSH parameter S, which had been introduced in Ref. [58]: |R(ϕ) − R(3ϕ)|/R_0, where R(ϕ) is the coincidence count rate when the relative orientation of the polarizers at the two detectors is equal to θ_ab = ϕ, and R_0 is the coincidence count rate when both polarizers are removed from their respective detectors. According to quantum mechanics, for ideal measurements the parameter R(ϕ) =
events, whose space-time arrangement we may depict as in Fig. 4: experimenters must select the detector settings a and b, emit the entangled particles from the source σ, and perform a measurement on each particle, yielding outcomes A and B. John Bell articulated a version of the locality loophole in his original article on Bell's inequality [1]. He closed his now-famous paper by writing that the quantum-mechanical predictions might apply "only to experiments in which the settings of the instruments are made sufficiently in advance to allow them to reach some mutual rapport by exchange of signals with velocity less than or equal to that of light." It would therefore be "crucial," he concluded, to conduct experiments "in which the settings are changed during the flight of the particles." 14 is not a faithful representative of the whole ensemble emitted" [69]. After all, in their experiment, they had successfully completed measurements on only about 5% of all the photon pairs that had been emitted from the source. Tests of the Bell-CHSH inequality require performing statistical averages over measurements on many pairs of particles. What if the subset of particles that was successfully detected had been drawn from some biased sample, skewing the statistical results? This second loophole has been dubbed the "detector-efficiency loophole" or "fair-sampling loophole."
Physicists typically define the "efficiency," η, of a given detector as the probability that for any particle impinging upon the device, the detector will register a definite measurement outcome.
In any real experiment, the efficiency will be less than 100%, with 0 < η < 1. If one assumes that the detector efficiencies are identical for the two detector stations in a test of the Bell-CHSH inequality, and that the detectors operate independently of each other, then only in a fraction η 2 of experimental runs will each detector successfully perform a measurement on its member of an entangled pair. In a fraction η(1 − η) of experimental runs, only the left detector will register a measurement of its particle while the right detector registers nothing, and in a separate fraction η(1 − η) of runs, only the right detector will complete a measurement. Finally, in a fraction (1 − η) 2 of runs, neither detector will register a measurement. If one only considers experimental runs in which at least one detector completes a measurement, then the Bell-CHSH inequality of Eq. (4) is modified to read
S ≤ 4/η − 2 , (7)
with the Bell-CHSH parameter S defined in Eq. (3). 16 (See Refs. [37, 73–75].) In the limiting case of detectors with perfect efficiency, η → 1, Eq. (7) reverts to the original form of the Bell-CHSH inequality, S ≤ 2. On the other hand, if the efficiency of each detector is below a critical threshold,
η ≤ η* with η* ≡ 2(√2 − 1) ≈ 0.828 , (8)
then the inequality in Eq. (7) becomes S ≤ 2√2, indistinguishable from the Tsirelson bound for quantum mechanics, S_QM^max of Eq. (6). In other words, if the detector efficiencies are less than 82.8%, then a local-realist explanation could account for any experimental result that found 2 ≤ S ≤ 2√2 simply by invoking the argument that the pairs of particles that happened to be detected during the experiment represented a biased (rather than "fair") sample of all the pairs that had been emitted.

By relating E (a, b) and E(a, b), one then arrives at the updated inequality for the original Bell-CHSH parameter S, defined in terms of E(a, b), as in Eq. (7).
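The arithmetic above is easy to verify numerically. The following sketch (function names are ours) reproduces the run fractions for independent detectors of equal efficiency and the modified local-realist bound of Eqs. (7)–(8):

```python
import math

def run_fractions(eta):
    """Fractions of runs in which both, exactly one, or neither detector
    registers an outcome, for independent detectors of equal efficiency."""
    both = eta ** 2
    one_side = 2 * eta * (1 - eta)   # left-only plus right-only
    neither = (1 - eta) ** 2
    return both, one_side, neither

def chsh_bound(eta):
    """Local-realist bound on S when runs with at least one detection are
    kept: S <= 4/eta - 2, as in Eq. (7)."""
    return 4.0 / eta - 2.0

# Critical efficiency at which the bound reaches the Tsirelson value 2*sqrt(2)
eta_star = 2 * (math.sqrt(2) - 1)

print(run_fractions(0.9))
print(chsh_bound(1.0))                  # 2.0 -- recovers the original S <= 2
print(eta_star, chsh_bound(eta_star))   # bound equals 2*sqrt(2) at eta*
```

At η = 1 the bound collapses back to S ≤ 2, and at η* ≈ 0.828 it reaches 2√2, exactly as stated in the text.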
A few years later, in 2013, two groups exploited advances in highly efficient single-photon detectors to conduct Bell tests with polarization-entangled photons that closed the fair-sampling loophole.
By using cryogenically cooled transition-edge sensors (TES) operating at the superconducting transition, the teams achieved detector efficiencies η ≥ 0.90. By conducting short-distance tests in which the entangled photons traveled to their respective detectors via carefully shielded optical fibers, the total losses between emission at the source and measurements at the detectors remained sufficiently low to enable tests of a Bell-type inequality while closing the fair-sampling loophole. 17 One group measured violations of the inequality by nearly 8 standard deviations [82], and the other group by more than 65 standard deviations [83]. As an indication of the technical challenges posed by these experiments, note that each of them (those using trapped ions in the early and mid-2000s [79, 80] and those using entangled photons in 2013 [82, 83]) focused only on closing the fair-sampling loophole, and did not even attempt to address the locality loophole.

Heralding photons sent from stations A and B to station C were subjected to a Bell-state measurement, which (as in entanglement-swapping protocols [85]) projected the associated electron spins at stations A and B into a maximally entangled state. After the heralding photons left stations A and B but before they arrived at station C, quantum random number generators (QRNGs) at stations A and B selected bases in which the electron spins would be measured. As in Weihs's experiment, the selections of detector settings at stations A and B
were spacelike-separated from the preparation of the entangled state, and the measurements of each spin at stations A and B were spacelike-separated from each other, thereby closing the locality loophole. Meanwhile, by using an "event-ready" protocol, in which the joint detection of photons at station C indicated the successful preparation of an entangled state, the group could carefully monitor the total number of entangled states produced, and thereby verify that their series of spin measurements closed the fair-sampling loophole. For their first experiment, the group conducted 245 experimental runs and measured S = 2.42 ± 0.20 [86]. A few months later, the group collected data on an additional 300 runs. With the combined datasets, they measured S = 2.38 ± 0.14, a violation of the Bell-CHSH inequality by 2.7 standard deviations [87].

17 The photon experiments in Refs. [82, 83] each measured violations of a Bell-type inequality first derived by Philippe Eberhard [75], who demonstrated that the fair-sampling loophole could be closed with overall efficiencies below η* in Eq. (8) if one performed measurements on pairs of particles in the non-maximally entangled state |Ψ⟩ = [1 + r²]^(−1/2) (|H⟩_A ⊗ |V⟩_B + r |V⟩_A ⊗ |H⟩_B), with r a real parameter within the range 0 < r < 1. In addition, these experiments exploited recent advances to efficiently produce polarization-entangled photons via spontaneous parametric down-conversion [84]: photons of a particular frequency are directed from a pump laser onto a special nonlinear crystal, which absorbs the incoming photons and emits pairs of photons that conserve overall energy, linear momentum, and angular momentum.
Soon after Hanson's group completed its first experiment, two other groups conducted experiments that likewise closed both the locality and fair-sampling loopholes [88, 89]. Building directly upon the 2013 experiments with high-fidelity single-photon detectors [82, 83], each of these groups performed loophole-free Bell tests on polarization-entangled photons.

V. THE FREEDOM-OF-CHOICE LOOPHOLE

The third loophole is often denoted the "measurement-dependence loophole," the "settings-dependence loophole," or the "freedom-of-choice loophole," and is conceptually distinct from the locality loophole. The locality loophole concerns the flow of information during a given experimental run, and relies upon direct communication between parts of the apparatus to account for the strong correlations; locality, in other words, concerns whether p(A, B|a, b, λ) factorizes as p(A|a, λ) p(B|b, λ). The freedom-of-choice loophole, on the other hand, concerns whether any common cause could have established statistical correlations between the parameters λ that affect measurement outcomes and the selection of detector settings (a, b), such that p(λ|a, b) ≠ p(λ). Such statistical correlations could arise even in the absence of direct communication between parts of the experimental apparatus.

19 Shimony, Horne, and Clauser originally circulated their analysis [91] in the informal newsletter Epistemological Letters, which served as a forum for discussions of Bell's inequality and other issues in the foundations of quantum mechanics throughout the 1970s, at a time when many physics journals, such as the Physical Review, downplayed the topic. Several years later, their exchange with Bell was republished in the philosophical journal Dialectica [92]. Bell later included his own contributions to the exchange [93, 94] as chapters in his well-known book, Speakable and Unspeakable in Quantum Mechanics [2], where they appear as chapters 7 and 12. On the role of Epistemological Letters, see Ref. [8], pp. 602-603 and Ref. [12], p. 122.
Shimony, Horne, and Clauser [91] pointed out that in Bell's original derivation of his inequality, he had relied upon expressions of the form in Eq. (2) for the conditional probabilities p(A, B|a, b).
Yet the law of total probability requires that one write such expressions as
p(A, B|a, b) = ∫ dλ p(A, B|a, b, λ) p(λ|a, b) . (9)
(On the law of total probability, see, e.g., Ref. [96], Sec. 2.3.) In general, when calculating p(A, B|a, b) one must take into account possible correlations between λ and the detector settings (a, b), represented by the term p(λ|a, b), regardless of whether p(A, B|a, b, λ) factorizes as p(A|a, λ) p(B|b, λ).
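The role of the p(λ|a, b) factor in Eq. (9) can be seen in a deliberately artificial discrete model. Everything below (the two-valued hidden variable, the deterministic local responses, and both distributions) is invented purely for illustration:

```python
# Toy discretization of Eq. (9): lambda takes the values 0 or 1, and the
# outcome probabilities factorize locally, p(A,B|a,b,lam) = p(A|a,lam)*p(B|b,lam).

def p_A(A, a, lam):
    """Deterministic local response of the left wing (settings are integers)."""
    return 1.0 if A == (1 if (a + lam) % 2 == 0 else -1) else 0.0

def p_B(B, b, lam):
    """Deterministic local response of the right wing."""
    return 1.0 if B == (1 if (b + lam) % 2 == 0 else -1) else 0.0

def joint(A, B, a, b, p_lam):
    """Eq. (9) with a discrete hidden variable:
    p(A,B|a,b) = sum_lam p(A|a,lam) * p(B|b,lam) * p(lam|a,b)."""
    return sum(p_A(A, a, lam) * p_B(B, b, lam) * p_lam[lam] for lam in (0, 1))

uniform = {0: 0.5, 1: 0.5}   # p(lam): no dependence on the settings
biased  = {0: 0.9, 1: 0.1}   # p(lam|a,b): correlated with the settings

total = sum(joint(A, B, 0, 1, uniform) for A in (1, -1) for B in (1, -1))
print(total)                                                    # sums to 1
print(joint(1, -1, 0, 1, uniform), joint(1, -1, 0, 1, biased))  # 0.5 vs 0.9
```

Swapping `uniform` for `biased` changes the observable joint distribution even though the local responses are untouched; that dependence of p(λ|a, b) on the settings is precisely the channel the freedom-of-choice loophole exploits.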
In his original derivation, however, Bell had tacitly neglected any possible correlation between λ and (a, b), writing simply p(λ) in place of p(λ|a, b). Via Bayes's theorem, that was equivalent to replacing p(a, b|λ) → p(a, b), that is, to assuming (by fiat) that the selection of detector settings (a, b) was entirely independent of the parameters λ that could affect the behavior of the entangled particles. As Bell [93] wrote in his exchange with Shimony, Horne, and Clauser, "It has been assumed that the settings of instruments are in some sense free variables-say at the whim of the experimenters-or in any case not determined in the overlap of the backward light cones."
A dozen years after Shimony, Horne, and Clauser identified the freedom-of-choice loophole, Carl Brans [97] developed an explicit local-realist model that exploited nontrivial correlations p(λ|a, b) ≠ p(λ) in order to mimic the predictions from quantum mechanics for Bell tests. More recent theoretical work has clarified that the freedom-of-choice loophole offers by far the most efficient means by which local-realist models can produce correlations that exceed Bell's inequality.
Only a minuscule amount of statistical correlation between the selection of detector settings (a, b) and the parameters λ is required for local-realist models to mimic the correlations of a maximally entangled quantum state like |Ψ(±)⟩ in Eq. (5), for example: over twenty times less coordination (or "mutual information") than required for local-realist models that exploit the locality loophole in order to reproduce the quantum-mechanical predictions. 20 (See Refs. [51, 98–102].) In other words, despite claims that have occasionally been made in the literature, local-realist models that exploit the freedom-of-choice loophole certainly do not require the strong assumption of "superdeterminism,"
in which experimenters' every single action would be determined by initial conditions (set, for example, at the time of the Big Bang). Rather, the freedom-of-choice loophole merely requires that in a small fraction of experimental runs, the source of entangled particles could predict (better than chance) at least one of the detector settings, a or b, that would be used on a given run [98].
The freedom-of-choice loophole thus appears to be quite robust, theoretically.

Physicists have pursued two distinct approaches to address the freedom-of-choice loophole in recent experiments. One approach has been to crowd-source seemingly random bits in real time.
During the course of a single day, November 30, 2016, about 100,000 volunteers around the world, dubbed "Bellsters," played a specially designed video game. Their task was to try to produce an unpredictable sequence of 0's and 1's; while they played, a sophisticated machine-learning algorithm analyzed each Bellster's first few entries and tried to predict what the next one would be. With real-time feedback from the algorithm, players could improve their scores by making their next selections less predictable. The outputs from all those volunteers, which totaled nearly 10⁸ (quasi-)random bits, were directed via high-speed networks to twelve laboratories distributed across five continents: from Australia to Shanghai, Vienna to Barcelona, Buenos Aires to Boulder, Colorado. In each of those laboratories that day, the real-time Bellster bits determined which detector settings (a, b) would be used, run by run, in independent Bell tests. Every participating laboratory measured statistically significant violations of Bell's inequality [104].

20 One may quantify the amount of correlation required for a local-realist model to mimic the quantum-mechanical predictions by exploiting the freedom-of-choice loophole in terms of the mutual information, I = Σ_{λ,a,b} p(λ|a, b) p(a, b) log₂[p(λ|a, b)/p(λ)]. The most efficient local-realist models that can reproduce predictions for correlations in a maximally entangled two-particle state by exploiting the freedom-of-choice loophole require merely I = 0.046 ≈ 1/22 of a bit of mutual information [98]. Local-realist models that exploit the locality loophole in order to mimic the quantum-mechanical predictions for Bell tests, on the other hand, require at least 1 full bit of mutual information [51].
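The mutual information defined in footnote 20 is straightforward to evaluate for toy distributions. In the sketch below, the settings pairs, the two-valued hidden variable, and both conditional distributions are our own illustrative inputs, not values from the cited analyses:

```python
import math

def mutual_information(p_ab, p_lam_given_ab):
    """I = sum_{lam,a,b} p(lam|a,b) p(a,b) log2[p(lam|a,b)/p(lam)],
    with p(lam) obtained by marginalizing over the settings."""
    p_lam = {}
    for ab, w in p_ab.items():
        for lam, p in p_lam_given_ab[ab].items():
            p_lam[lam] = p_lam.get(lam, 0.0) + w * p
    total = 0.0
    for ab, w in p_ab.items():
        for lam, p in p_lam_given_ab[ab].items():
            if p > 0.0:
                total += w * p * math.log2(p / p_lam[lam])
    return total

settings = {("a", "b"): 0.5, ("a'", "b'"): 0.5}
independent = {("a", "b"): {0: 0.5, 1: 0.5}, ("a'", "b'"): {0: 0.5, 1: 0.5}}
correlated = {("a", "b"): {0: 0.6, 1: 0.4}, ("a'", "b'"): {0: 0.4, 1: 0.6}}

print(mutual_information(settings, independent))  # 0.0: freedom of choice intact
print(mutual_information(settings, correlated))   # small but nonzero leakage
```

Even the mildly skewed `correlated` distribution yields only a few hundredths of a bit, underscoring how little settings leakage a local-realist model would need.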
A complementary approach to addressing the freedom-of-choice loophole has been to isolate the events that determine detector settings (a, b) as much as possible, in space and time, from the rest of the experiment. In place of QRNGs or large numbers of Earthbound volunteers, such "Cosmic
Bell" experiments make use of astronomical random number generators (ARNGs) for selecting detector settings: devices that perform real-time astronomical measurements of light from distant objects, and rapidly convert some aspect of those measurements into a (quasi-)random bitstream [45]. For example, in the experiments reported in Refs. [105–107], the ARNGs used dichroic filters to rapidly distinguish light from astronomical sources that was more red or more blue than some reference wavelength. In keeping with the space-time arrangement shown on the right in Fig. 4, each ARNG implemented a fresh detector setting at its station every few microseconds, while the entangled photons were in flight, and the measurement outcomes (A, B) were determined at spacelike-separated events. In addition, the causal alignment of the three experimental stations and the two astronomical sources was carefully analyzed, to ensure that the causal wave front from the stellar emission event that was intended for the ARNG at the Austrian National Bank arrived at its intended location before any information about that astronomical photon could have arrived at either the source of entangled photons or the distant receiving station.

Recent efforts to test Bell's inequality have featured advances in statistical analysis as well as experimental design. It was common in early experiments, for example, to adopt Gaussian statistics when analyzing the statistical significance of a measured violation of (say) the Bell-CHSH
inequality. Yet Gaussian statistics rely on several simplifying assumptions. First, and most obvious, the Gaussian distribution is an idealized form that holds in the limit of an infinite number of measurements, N → ∞; when applied to any finite series of measurements, one must adopt an additional assumption about the convergence of the actual statistical distribution to the idealized form. Several recent Bell tests have involved large numbers of measurements on pairs of particles, with N ∼ 10⁴–10⁷ [82, 83, 88, 89, 105, 107, 111], which might plausibly justify the approximation N → ∞, though other recent tests (especially those involving event-ready protocols) have included as few as N ∼ 10²–10³ measurements [80, 86, 87, 90]. In other words, successful tests of Bell-type inequalities do not always approximate the domain N → ∞ for which Gaussian statistics might be appropriate.
More important, use of the Gaussian distribution is predicated on the assumption that relevant variables for each experimental run are independent and identically distributed (often abbreviated as "i.i.d."). This assumption is usually violated in real experiments. For example, whatever processes are used to select detector settings (a, b) on a given experimental run usually do not yield precisely equal numbers of trials with the various joint-settings pairs (a, b), (a′, b), (a, b′), and (a′, b′); hence a local-realist mechanism could exploit the "excess predictability" that certain combinations of detector settings can be expected to arise more frequently than others [112]. Even more subtle, the i.i.d. assumption neglects what have come to be called "memory" effects. Like a seasoned poker player who carefully tracks the cards that have been played and updates her strategy over the course of a game, a local-realist mechanism could make use of information about the previous detector settings and measurement outcomes and adjust its strategy over the course of an experiment. Exploiting such locally available information would remain compatible with local-realist conditional probabilities of the form in Eq. (2), let alone Eq. (9). Taking into account both excess predictabilities and "memory" effects therefore requires more sophisticated calculations of the statistical significance with which measured correlations in a Bell test exceed what could be accounted for by a local-realist scenario [105, 107, 112–116].

21 The brighter the astronomical object, the greater the flux of astronomical photons that an ARNG can collect per unit time, and hence the quicker an ARNG can output a bitstream of (quasi-)random numbers.
In recent years, several groups of physicists have found an additional motivation for performing tests of Bell's inequality, beyond the enduring question about local realism. Quantum entanglement is now at the core of emerging technologies, including quantum computation and quantum encryption. Such real-world technologies will only function as expected if entanglement, as described by quantum mechanics, is a robust fact of nature rather than an illusion that arises from some local-realist underpinning. In particular, many quantum encryption protocols rely upon embedded Bell tests to verify the security of a communication channel. If some local-realist mechanism could exploit loopholes like locality, fair sampling, and/or freedom of choice to produce the expected results in a Bell test, then (in principle) such mechanisms would also be available to eavesdroppers or hackers, intent on gaining access to unauthorized information. (See esp. Refs. [117–123].) Some of the most ambitious and audacious recent tests of Bell's inequality, including the breathtaking experiment by Jian-Wei Pan and his group, involving pairs of polarization-entangled photons emitted from the specially built Micius satellite, in low-Earth orbit, and measured at detector stations 1200 km apart from each other on Earth [124], have been key components in building and testing real-world quantum encryption infrastructure [125, 126]. In our new era of quantum information science, the stakes for tests of local realism have only grown, even beyond the deep questions that drove Bell, Clauser, Horne, Shimony, and their early colleagues to pursue tests of Bell's inequality.
FIG. 1: Schematic illustration of a typical Bell test. A source σ emits a pair of particles, which travel in opposite directions. At each detector, a physicist selects a particular measurement to be performed by adjusting the detector settings (a, b); each detector then yields a measurement outcome (A, B). (Adapted from Ref. [45].)

II. THE BELL-CHSH INEQUALITY AND THE FIRST EXPERIMENTAL TESTS

Most experimental tests of Bell's inequality have concerned correlations among measurements on pairs of particles. Such tests can be pictured as in Fig. 1: a source (σ) in the center of the experiment emits a pair of particles which travel away from the source in opposite directions. At each detector, a physicist selects a particular measurement to be performed by adjusting the detector settings (a, b); each detector then yields a measurement outcome (A, B). For example, if the particles emitted from the source consist of pairs of electrons, a physicist at the left detector might choose to measure the spin of the left-moving electron along the x-axis, or along the y-axis, or along some intermediate angle; her choice of basis in which to measure the electron's spin is labeled a. The physicist at the right detector chooses to measure the spin of the right-moving electron along a particular orientation in space by adjusting the detector setting b. In this example, for any pair of detector settings (a, b) that have been selected, the measurement outcomes (A, B) at each detector can only be spin-up or spin-down. If we label the measurement outcome spin-up as +1 and spin-down as −1, then we have A(a), B(b) ∈ {+1, −1}. 4
FIG. 2: John S. Bell in his office at CERN, 1982. (Courtesy CERN.)

that for any model in which conditional probabilities p(A, B|a, b) took the form of Eq. (2), the parameter S obeys the inequality [41]
The Tsirelson bound corresponds to the choice of settings (a, a′) = (0°, 45°) and (b, b′) = (22.5°, 67.5°).
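These settings can be checked numerically. The sketch below assumes the standard quantum correlation for ideal polarization measurements on a maximally entangled photon pair, E(a, b) = cos[2(a − b)], a conventional textbook form rather than an expression quoted from the text:

```python
import math

def E(a_deg, b_deg):
    """Quantum correlation for ideal polarization measurements on a
    maximally entangled photon pair (angles in degrees)."""
    return math.cos(2.0 * math.radians(a_deg - b_deg))

def chsh(a, a_prime, b, b_prime):
    """Bell-CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)

S = chsh(0.0, 45.0, 22.5, 67.5)
print(S, 2.0 * math.sqrt(2.0))   # both equal 2*sqrt(2), about 2.828
```

Each of the four correlators contributes √2/2 with the appropriate sign, so S saturates the Tsirelson bound 2√2 at exactly these angles.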
Around the same time, other physicists hit upon a similar idea. Abner Shimony, at the time a young professor in both the Physics and Philosophy Departments at Boston University, wondered whether data from previous correlation experiments, which had been conducted for other reasons, could be used to test Bell's inequality. Together with then-graduate student Michael Horne, he delved into the published literature, conducting what Horne playfully dubbed "quantum archaeology." Much like Stapp, however, Horne and Shimony realized that previous correlation experiments had failed to consider the range of angles among the bases (a, a′, b, b′) for which quantum mechanics predicts S > 2. 10 Independent of Stapp, Shimony, and Horne, John Clauser also became intrigued by the possibility of testing Bell's inequality in a laboratory experiment, while still in graduate school. He wrote directly to Bell in February 1969, asking if anyone had conducted such an experiment during the years since Bell's paper [1] had appeared. Upon hearing back from Bell that no one had as yet performed such an experiment, and with Bell's additional encouragement that if Clauser were to measure something different from what quantum mechanics predicts, that would "shake the world!", Clauser began thinking about how to perform such a test. In the midst of that work, he learned of Shimony's and Horne's interest in Bell's inequality, and soon they began to collaborate together [57]. (See also Ref. [8], pp. 590-91, and Ref. [12], pp. 43-45.) Soon after Clauser began a postdoctoral fellowship at the Lawrence Berkeley Laboratory, he asked his supervisor, quantum-electronics pioneer Charles Townes, if he could design and conduct an experimental test of Bell's inequality as a side project, separate from the main research project for which Townes had hired him. Townes agreed and arranged for Clauser to work with Stuart Freedman, at the time a graduate student at the laboratory.
Freedman and Clauser used pairs of linearly polarized photons in a maximally entangled state, emitted by atomic cascades within excited calcium atoms. They mounted the polarizers in such a way that their orientations at the left and right detectors could be adjusted. The distance between the two detectors was approximately four meters. To collect their data, Freedman and Clauser first fixed the polarizer orientations at each detector, beginning with (a, b) = (0°, 22.5°). Then they recorded the measurement outcomes A(a) and B(b) within brief coincidence windows (∆t = 8.1 ns), to ensure that the pairs of measurements (A, B) on a given experimental run pertained to a single pair of entangled photons that had been emitted from the source. Upon collecting many thousands of measurements on pairs of photons with the polarizers set to these orientations and averaging those results, they constructed the correlation function E(a, b). Then they paused the experiment, rotated the polarizer on the left side from orientation a = 0° to a′ = 45° while keeping the polarizer on the right side fixed at b = 22.5°, and conducted new measurements with which to construct E(a′, b), and so on, until they had collected sufficient data with the various joint settings (a, b), (a′, b), (a, b′), and (a′, b′) to construct the combination of correlation functions needed for the Bell-CHSH parameter S in Eq. (3). Their findings were equivalent to S = 2.388 ± 0.072, violating the Bell-CHSH inequality of Eq. (4) by more than five standard deviations [58]. 11 (See also Refs. [8, 12, 35].) See Fig. 3.

FIG. 3: John Clauser working on the instrumentation with which he and Stuart Freedman conducted the first experimental test of Bell's inequality, in 1972. (Courtesy Lawrence Berkeley National Laboratory.)

Around the same time, Richard Holt and his graduate-school supervisor Francis Pipkin performed their own experimental test of Bell's inequality at Harvard.
Like Freedman and Clauser, they conducted measurements on pairs of linearly polarized photons in a maximally entangled state, in this case using photons emitted from a particular cascade in excited mercury atoms. They used the combination of detector settings (a, a′, b, b′) predicted by quantum mechanics to yield the maximum violation of Eq. (4). Yet unlike Freedman and Clauser's experiment, Holt and Pipkin's measurements yielded results consistent with local realism. They never pursued formal publication, given their own lingering doubts about possible systematic errors. (See Ref. [8], p. 595.) Three years later, Clauser repeated their experiment, using the same cascade within excited mercury atoms to produce the entangled photons, and measured the equivalent of S = 2.308 ± 0.0744, a violation of the Bell-CHSH inequality by more than 4 standard deviations. (Around the same time, Edward Fry and Randall Thompson independently performed a Bell test at Texas A & M University using entangled photons from excited mercury atoms, and, like Clauser, measured a strong violation of Bell's inequality; see Refs. [35, 59, 60].) In the course of repeating the Holt-Pipkin experiment, Clauser found that the measured correlations depended sensitively upon stresses both in the walls of the glass bulb containing the mercury vapor as well as in the lenses used to focus the emitted photons toward their respective detectors. Holt and Pipkin had themselves reported observing similar stresses in the mercury bulb during their experiment.
12 (continued) R(ϕ) = (1/4)[1 + cos(2ϕ)]. For ϕ = 22.5°, the quantum-mechanical prediction is therefore |R(ϕ) − R(3ϕ)|/R₀ = √2/4 ≈ 0.354, whereas local-realist models predict |R(ϕ) − R(3ϕ)|/R₀ ≤ 1/4. Holt and Pipkin reported |R(22.5°) − R(67.5°)|/R₀ = 0.216 ± 0.013, considerably below the local-realist threshold of 0.25, much less the quantum-mechanical prediction of 0.35 [35]. When Clauser repeated their experiment [61], he measured |R(22.5°) − R(67.5°)|/R₀ = 0.2885 ± 0.0093, violating the local-realist bound by 4.1 standard deviations.
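The numbers in this footnote follow directly from the quoted form of R(ϕ). A short check (variable names ours):

```python
import math

def R_over_R0(phi_deg):
    """Ideal quantum prediction quoted in the footnote:
    R(phi)/R0 = (1/4) * [1 + cos(2*phi)], with phi in degrees."""
    return 0.25 * (1.0 + math.cos(2.0 * math.radians(phi_deg)))

quantity = abs(R_over_R0(22.5) - R_over_R0(67.5))
print(quantity)          # sqrt(2)/4, about 0.354: the quantum prediction
print(quantity > 0.25)   # True: exceeds the local-realist ceiling of 1/4
```

The quantum prediction of about 0.354 comfortably exceeds the local-realist ceiling of 0.25, while Holt and Pipkin's reported 0.216 sits below both.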
FIG. 4: During a test of the Bell-CHSH inequality, experimenters must select the detector settings a and b;
(As usual, we adopt coordinates such that light travels one unit of space in one unit of time, so that light-like trajectories follow 45° diagonals.) If the experimenters first select the detector settings and later emit particles from the source (as shown on the left side of Fig. 4), then a local-realist description could readily account for the observed correlations between A and B. In such a case, the measurement outcome A could depend on information about detector setting b and/or outcome B could depend on setting a, such that the expression in the integrand of the top line of Eq. (2) would no longer factorize: p(A, B|a, b, λ) ≠ p(A|a, λ) p(B|b, λ). Likewise, if the measurement on the left-moving particle were completed well before the measurement on the right-moving particle, then the detector on the right side could exploit information about the distant measurement to arrange a correlated outcome. Such explanations would be compatible with local realism. On the other hand, if special care were taken with the space-time arrangement of these five crucial events, as shown on the right side of Fig. 4, then the locality assumptions under which the bottom line of Eq. (2) had been derived would hold, and none of the scenarios depicted on the left side of Fig. 4 would be available for a local-realist account of the measured correlations. 13
John Clauser agreed. In his first letter to Bell (written in February 1969), Clauser noted that "it might also be possible to 'rotate' the polarizers by means of magneto-optic effects while the photons are in flight" (quoted in Ref. [8], p. 591). Achieving such fast switching among polarizer orientations, however, proved to be quite a technical challenge. As noted in Section II, when Freedman and Clauser conducted their experiment in 1972, they first manually set a and b for a given run by adjusting their polarizers to particular orientations before emitting the entangled photons. (Holt and Pipkin followed the same approach in their 1973 test, as did the other experiments conducted during the 1970s; see Refs. [Clauser:1978ng, FreireOpticsLab].) Bell returned to the point when summarizing discussions at a 1976 summer school devoted to the foundations of quantum mechanics, declaring that the experiments that had been conducted to date "have nothing to do with Einstein locality," and that a test "of the very highest interest" would be one in which "the polarization analyzers are in effect re-set while the photons are in flight" (quoted in Ref. [8], p. 606). A participant in that 1976 summer school, Alain Aspect, was already hard at work on just such an experiment [66]. Together with colleagues Jean Dalibard and Gérard Roger at the Institut d'Optique Théorique et Appliquée in Orsay, near Paris, Aspect designed and built an experiment with fast-changing acoustico-optical switches inserted in the photons' paths from the source σ to the left and right detectors. Depending on which orientation the optical switch on the left side happened to be in when a photon arrived, that photon would be directed toward one of two polarizing filters, oriented at different angles; and likewise for the optical switch on the right side.
The polarizer orientations (a, a′) on the left and (b, b′) on the right were fixed in advance, while the optical switches on the left and right sides changed every 10 ns. The detectors in Aspect's experiment were each about 6 meters from the source of entangled particles, which meant that photons emitted from the source would require at least 20 ns to travel to their respective detectors. Hence the optical switches on each side changed one or more times while each pair of photons was in flight. The particular detector settings (a, a′) and (b, b′) that each photon encountered therefore had not been fixed at the time of the photons' emission, and the measurements at each detector were completed such that no signal traveling at light speed could inform the distant detector about the settings or outcome at the local detector before each side had completed its measurement. Aspect and his colleagues thus performed the first Bell test, in 1982, in which the critical events were arranged as in the right side of Fig. 4. Even with this improved space-time arrangement, they measured a violation of the Bell-CHSH inequality by five standard deviations [67]. (See also Refs. [8, 12, 54].) Aspect and his colleagues noted in their original article that their acoustico-optical switches did not determine the detector settings (a, a′) or (b, b′) in a truly random manner. Instead, the switches operated quasi-periodically, which suggested, at least in principle, that information would have been available at the photons' source σ, in advance of each emission event, that could have sufficed to predict the detector settings that each photon would ultimately encounter [67, 68]. In addition, the detectors on the left and right sides in Aspect's experiment were linked in real time via electronic coincidence circuits; results from each side were not recorded independently [67].
Each of these points suggested that although Aspect's 1982 experiment clearly represented an enormous milestone in tests of Bell's inequality, the "locality" loophole had not yet been closed conclusively. More than fifteen years later, Anton Zeilinger and his group, at the time based at the University of Innsbruck in Austria, completed a new Bell test that addressed the locality loophole head on. Led by Gregor Weihs, the group set up two detector stations 400 meters apart, making it easier to determine the measurement outcomes A and B at spacelike-separated events. (For this experiment, the light travel time between detectors was 1.3 µs rather than the 40 ns in Aspect's experiment in Orsay.) Each of the detector stations was equipped with its own quantum random number generator (QRNG), a device that could output a fresh bit (either a 0 or a 1) at a rate of 500 MHz. 15 The output from each random-number generator was linked to an electro-optical modulator, a device that could quickly rotate the basis in which a photon's polarization would be measured by an angle proportional to the applied voltage, changing bases at a frequency up to 30 MHz. Each detector station also had its own atomic clock with an accuracy of 0.5 ns, with which the time of each detection event at each detector could be recorded. Using this scheme, information about the detector settings (a, a′) and (b, b′) for a given run should not have been available at either the emission event σ or at the distant measurement events that yielded A or B, and no direct link connected the two detector stations, more conclusively achieving the space-time arrangement depicted on the right side of Fig. 4. The group measured S = 2.73 ± 0.02, a violation of the Bell-CHSH inequality by more than 35 standard deviations [69]. (See also Refs. [70–72].)

IV. THE FAIR-SAMPLING LOOPHOLE

Near the conclusion of their article, Weihs, Zeilinger, and their colleagues noted that "while our results confirm the quantum theoretical predictions, we admit that, however unlikely, local realistic or semiclassical interpretations [of their experimental results] are still possible," if one invoked a different loophole than locality: "we would then have to assume that the sample of pairs registered is not a faithful representative of the whole ensemble emitted" [69].
Several physicists developed explicit local-realist models, with conditional probabilities p(A, B|a, b) satisfying Bell's form in Eq. (2), that could exploit non-detection events at either detector in order to mimic the predictions from quantum mechanics. (See Refs. [73-78].) Although the fair-sampling loophole was identified as early as 1970 [76], and Clauser, Horne, and Shimony focused on it in various papers during the 1970s [35, 73], addressing this loophole in a real experiment proved to be quite challenging, given technological limitations on available instrumentation. In fact, more than thirty years elapsed between the identification of the loophole and the earliest experiments to address it. The first groups to attempt Bell tests that closed the fair-sampling loophole used pairs of slow-moving, entangled ions in high-fidelity magnetic traps, rather than entangled photons. Since the traps kept the ions accessible for long periods of time, the teams achieved very high efficiencies, η ≳ 0.98, easily above the critical threshold η*, and managed to measure violations of the Bell-CHSH inequality. A group at the U.S. National Institute of Standards and Technology (NIST) in Boulder, Colorado performed such a test on pairs of beryllium ions in 2001, finding S = 2.25 ± 0.03 [79], and a separate group, based at the University of Maryland, measured S = 2.22 ± 0.07 with pairs of ytterbium ions in 2008 [80]. (See also Ref. [81].)

16 If one assumes perfect detector efficiencies, η → 1, then the correlation function E(a, b) in Eq. (1) may be evaluated as E(a, b) = Σ_{A,B=±1} (AB N_ab^{AB}) / N_ab^{tot}, where N_ab^{tot} is the total number of entangled pairs that are emitted when the detector settings are (a, b), and N_ab^{AB} is the number of double-coincidence measurements in which the left and right detectors yield {A, B} ∈ {+1, −1}. However, if one takes into account imperfect detector efficiencies, one may define E'(a, b) = Σ_{A,B=±1,0} (AB N_ab^{AB}) / Σ_{A,B=±1,0} N_ab^{AB}, in which A = 0 (B = 0) indicates the lack of a successful measurement at the left (right) detector. If one neglects the (unobservable) runs in which neither detector completes a measurement, then one finds E'(a, b) = [N_ab^{double} / (N_ab^{double} + N_ab^{single})] E(a, b) = [η/(2 − η)] E(a, b), where the number of double-coincidence measurements is N_ab^{double} = η² N_ab^{tot} and the number of single-sided measurements is N_ab^{single} = 2η(1 − η) N_ab^{tot}. One may then construct S' ≡ |E'(a, b) + E'(a', b) − E'(a, b') + E'(a', b')|, and, using the same arguments as in Ref. [46], derive that S' ≤ 2 for any local-realist model in which p(A, B|a, b) takes the form of Eq. (2). Taking into account the different normalizations of
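Combining the footnote's damping factor with the quantum maximum 2√2 makes the critical efficiency explicit: the observed S' = [η/(2 − η)]·2√2 exceeds the local-realist bound 2 only when η > 2/(1 + √2) = 2(√2 − 1) ≈ 0.828, which is why the trapped-ion efficiencies η ≳ 0.98 comfortably clear the threshold η*. A sketch of that arithmetic (function names are mine):

```python
import math

S_QM = 2 * math.sqrt(2)  # Tsirelson bound: quantum maximum of the CHSH quantity

def S_observed(eta: float) -> float:
    """Efficiency-damped CHSH value, S' = [eta/(2-eta)] * S_QM, per the footnote."""
    return eta / (2 - eta) * S_QM

# Critical efficiency: solve eta/(2-eta) * 2*sqrt(2) = 2  =>  eta* = 2/(1+sqrt(2))
eta_star = 2 / (1 + math.sqrt(2))
print(f"eta* = {eta_star:.4f}")  # 0.8284, i.e. 2(sqrt(2)-1)
print(S_observed(0.98) > 2)      # trapped-ion efficiencies clear the bar: True
print(S_observed(0.70) > 2)      # a much lossier detector does not: False
```
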
An enormous milestone was achieved late in 2015, when three groups performed Bell tests that closed both the locality and fair-sampling loopholes in the same experiments. The first group to accomplish this feat was directed by Ronald Hanson at the Delft University of Technology in the Netherlands. Hanson's group set up three stations across the university campus. Stations A and B, which were separated by 1.3 km, each included a single electron spin degree of freedom, associated with a single nitrogen-vacancy (NV) defect center in a diamond chip. Each spin was entangled with a single photon; the photons from stations A and B were then transmitted to a central station C via optical fibers. Upon arrival at station C, the photons from stations A and B
Bell tests with polarization-entangled photons. To close the locality loophole, the groups needed to increase the distances between the source of entangled photons and the detector stations, so that the relevant space-time events could be arranged as in the right side of Fig. 4. For the experiment led by Krister Shalm at the National Institute of Standards and Technology (NIST) in Boulder, Colorado, the team set up a triangular arrangement. The entangled source sat at the vertex of a (nearly) right triangle; photons were transmitted to detector stations A and B via optical fibers. Stations A and B were each about 130 m from the entangled source, and 185 m from each other. The light travel time from the source to a detector station was thus about 0.43 µs. Detector settings at stations A and B were determined by QRNGs co-located at each detector, which produced fresh, random bits about every 5 ns, thereby changing multiple times while the entangled photons were in flight. The group measured a clear violation of a Bell-type inequality; the probability that their experimental results could have arisen from a local-realist model, in which p(A, B|a, b) took the form of Eq. (2), was p = 2.3 × 10^-7 [88]. If one makes some simplifying assumptions and adopts Gaussian statistics, this p-value corresponds to a violation of the relevant inequality by more than 5 standard deviations.18

At the same time, Anton Zeilinger's group in Vienna completed its own Bell test using polarization-entangled photons while closing the locality and fair-sampling loopholes. Led by Marissa Giustina, the group adopted a colinear spatial arrangement for the various experimental stations, rather than the triangular set-up of the NIST group. Optical fibers transmitted entangled photons from the central source toward detector stations on opposite ends of a long, narrow hallway in a sub-basement of the fabled Hofburg Palace in central Vienna. See Fig. 5.
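The conversion between p-values and "standard deviations" invoked here (under the stated Gaussian simplifications) can be reproduced by inverting the one-sided normal tail. A minimal stdlib sketch; the one-sided convention and the bisection helper are my assumptions:

```python
import math

def p_to_sigma(p: float) -> float:
    """One-sided Gaussian tail p-value -> number of standard deviations.

    Inverts Q(z) = 0.5 * erfc(z / sqrt(2)) by bisection; math.erfc remains
    accurate for the tiny tail probabilities quoted in these experiments.
    """
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2)) > p:
            lo = mid  # tail still too fat: need a larger z
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"NIST   p = 2.3e-7   -> {p_to_sigma(2.3e-7):.2f} sigma")    # ~5.0
print(f"Vienna p = 3.74e-31 -> {p_to_sigma(3.74e-31):.2f} sigma")  # ~11.5
```

The results match the article's "more than 5" and "nearly 12" standard deviations for the NIST and Vienna experiments, respectively.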
(The project constituted a major part of Giustina's doctoral dissertation. Many graduate students feel as if they are trapped in a castle dungeon while working on their theses...) Each detector station was 29 m from the source, yielding a light travel time of just 96.7 ns from source to detector station. Within that brief time period, QRNGs at each detector station implemented fresh detector settings within 26 ns windows, ensuring spacelike separation of the relevant events, as in the right side of Fig. 4. Like the NIST group, the Vienna experiment measured a substantial violation of a Bell-like inequality. The probability that their measured correlations would arise in a local-realist theory of the form in Eq. (2) was just p = 3.74 × 10^-31; if one again (naively) adopts Gaussian statistics, this result corresponds to a violation of the relevant Bell inequality by nearly 12 standard deviations [89]. (See also Ref. [90].)

FIG. 5: Marissa Giustina (center) describes her 2015 experiment to colleagues in the sub-basement of the Hofburg Palace in Vienna. The source of entangled photons is within the kiosk, visible in the middle of the hallway. Optical fibers (beneath the shielding in the center of the floor) transmitted the photons to detector stations on opposite ends of the long hallway, each 29 meters from the source. (Photo by the author.)

V. THE FREEDOM-OF-CHOICE LOOPHOLE

In 1976 Shimony, Horne, and Clauser [91] identified a third significant loophole in tests of Bell's inequality, which had eluded Bell himself.19 (A few years later, Richard Feynman independently articulated a version of this loophole: Ref. [95], p. 485.)
calculating conditional probabilities require that the term p(λ|a, b) be included in expressions like Eq. (9), and the possibility that p(λ|a, b) ≠ p(λ) offers the most efficient mechanism by which a local-realist model could yield strong correlations among measurement outcomes. So how might one address the freedom-of-choice loophole experimentally? The remarkable experiments described in Sec. IV, which managed to close both the locality and fair-sampling loopholes, relied upon quantum random number generators (QRNGs) to select detector settings (a, b) for each experimental run. According to quantum mechanics, the outputs of such devices should be intrinsically random, and hence unpredictable. But the purported intrinsic randomness of quantum mechanics is part of what is at stake in tests of Bell's inequality, so the use of QRNGs in Bell tests raises the specter of circularity. Put another way: if the world were in fact governed by a local-realist theory rather than by quantum mechanics, then the behavior of QRNGs should, in principle, be susceptible to a description of the form in Eq. (9), including the critical term p(λ|a, b). If that conditional probability incorporated even modest statistical correlations between λ and the selection of various detector settings (a, b), then the outputs of the QRNGs might very well appear to be random; that is, they might pass the usual suite of randomness tests, such that knowledge of the previous N bits would not suffice to predict bit N + 1 at greater-than-chance levels. Yet the outputs could nonetheless have been sufficiently correlated with the (unseen) parameters λ to produce measurement outcomes that yield S → S_QM^max = 2√2 in tests of the Bell-CHSH inequality. (See also Ref. [103].)
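The power of this loophole can be made concrete with a deliberately extreme toy model (entirely illustrative, not drawn from any cited reference): if λ is perfectly correlated with the settings choice, so that measurement independence fails completely, then local, deterministic outcomes can push the CHSH quantity all the way to its algebraic maximum of 4, beyond even the quantum bound 2√2:

```python
import math

# Extreme toy model: lambda "knows" the settings pair (a, b) in advance,
# i.e., measurement independence fails completely. Outcomes stay local and
# deterministic, yet CHSH reaches its algebraic maximum.
SETTINGS = [(0, 0), (0, 1), (1, 0), (1, 1)]  # the four CHSH setting pairs

def outcomes(a: int, b: int):
    """Local, deterministic outcomes chosen with advance knowledge of (a, b):
    make A*B = +1 for the three '+' terms of CHSH and -1 for the '-' term."""
    B = -1 if (a, b) == (1, 1) else +1
    return +1, B  # A is always +1; B carries the required sign

S = 0.0
for a, b in SETTINGS:
    A, B = outcomes(a, b)
    sign = -1 if (a, b) == (1, 1) else +1  # CHSH: E00 + E01 + E10 - E11
    S += sign * A * B  # each deterministic E(a, b) equals A*B here
print(S)  # 4.0, versus the local-realist bound 2 and quantum bound ~2.83
```

Real freedom-of-choice scenarios would involve far subtler correlations than this total conspiracy, but the sketch shows why even modest values of p(λ|a, b) ≠ p(λ) leave room for local-realist models to mimic quantum statistics.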
The first Cosmic Bell test was performed in Vienna in April 2016, by a collaboration including Anton Zeilinger and his group together with astrophysicists at MIT, Harvey Mudd College, and NASA's Jet Propulsion Laboratory. Polarization-entangled photons were emitted from the roof of the Institute for Quantum Optics and Quantum Information (IQOQI). One detector station, on the top floor of the Austrian National Bank (about 0.5 km from IQOQI), included an ARNG that performed real-time measurements of light from a bright Milky Way star about 600 light-years from Earth. The other detector station, in a university building (more than 1 km from IQOQI, in the opposite direction), included its own ARNG trained on a distinct Milky Way star about 1930 light-years from Earth [105].
FIG. 6: (Top left) Andrew Friedman, Jason Gallicchio, Anton Zeilinger, and David Kaiser discuss early plans for Cosmic Bell tests at MIT, October 2014. (From the author's collection.) (Top right) Johannes Handsteiner sets up the astronomical random number generator (ARNG) in the Austrian National Bank in Vienna for the first Cosmic Bell test, April 2016. (Courtesy Sören Wengerovsky.) (Bottom left) Anton Zeilinger (back to the camera) discusses observing options in the control room of the William Herschel Telescope on La Palma, January 2018. Others shown (from left to right) are Christopher Benn (leaning), Thomas Scheidl, Armin Hochrainer, and Dominik Rauch. (Photo by the author.) (Bottom right) Two of the large telescopes at the Roque de los Muchachos Observatory in La Palma; on the left is the Telescopio Nazionale Galileo, which the Cosmic Bell group used during its January 2018 experiment. (Courtesy Calvin Leung.)
mechanics-S ≤ 2 versus S ≤ 2√2 for clever choices of detector settings-is sufficiently large that some physicists quickly began to imagine conducting experimental tests of Bell's inequality. One of the first to highlight this possibility was Henry Stapp, a research scientist at the Lawrence Berkeley Laboratory in California. Stapp had been trained in particle physics at Berkeley in the early 1950s; for his Ph.D. dissertation, he had studied spin correlations in proton-proton scattering experiments. During the summer of 1968-even before the CHSH version of Bell's inequality had appeared-Stapp wrote a preprint noting that Bell's inequality could be tested in experiments much like the proton-scattering ones on which he had previously focused. The critical update that would be required, compared to previous experiments, would be to vary the angles along which the
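The two bounds in question can be checked numerically. Assuming the standard quantum correlation for polarization-entangled photons, E(a, b) = cos 2(a − b) (the sign depends on the choice of Bell state; this is a sketch, not the apparatus of any experiment described here), the textbook-optimal angles yield S = 2√2:

```python
import math

def E(a_deg: float, b_deg: float) -> float:
    """Quantum correlation for polarization-entangled photons,
    E(a, b) = cos 2(a - b); state/sign convention is an assumption here."""
    return math.cos(2 * math.radians(a_deg - b_deg))

# Textbook-optimal CHSH angles, in degrees
a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"S = {S:.4f}  (local-realist bound 2, quantum bound {2*math.sqrt(2):.4f})")
```

Each of the four correlations contributes ±cos 45° with the favorable sign, so the four terms of 1/√2 each add up to exactly 2√2 ≈ 2.828.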
photons or the distant detector station (and vice versa). In this way, the experiment closed the locality loophole. In addition, by selecting the detector settings on each run based on events that had occurred hundreds of years ago, quadrillions of miles from Earth, the experiment pushed back to 600 years ago the most recent time by which any local-realist mechanism could have exploited the freedom-of-choice loophole to engineer the necessary correlations between detector settings and properties of the entangled photons. The experiment measured S = 2.502 ± 0.042, violating the Bell-CHSH inequality by nearly 12 standard deviations [105].

In January 2018, the group performed a second Cosmic Bell test, this time using a pair of 4 m telescopes at the Roque de los Muchachos Observatory atop La Palma, in the Canary Islands. See Fig. 6. With the larger telescopes, the ARNGs could measure light from cosmologically distant sources: high-redshift quasars rather than Milky Way stars. The group performed measurements on pairs of polarization-entangled photons while detector settings for each experimental run were determined by emission events that had occurred 7.78 billion years ago for one detector station and 12.21 billion years ago for the other. (For reference, the Big Bang occurred 13.80 billion years ago [108].) As with the first Cosmic Bell experiment, causal alignment was carefully analyzed to ensure that no information about a given cosmic emission event could have arrived at either the source of entangled particles or at the distant detector before that cosmic photon was measured by its intended ARNG. Fresh detector settings were implemented within brief windows (of order 1 µs) while the entangled photons were in flight, again closing the locality loophole. The experiment measured S = 2.646 ± 0.070, violating the Bell-CHSH inequality by more than 9 standard deviations.
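The "standard deviations of violation" figures quoted for these experiments follow from dividing the excess over the local-realist bound, S − 2, by the reported uncertainty. A quick check against the values in the text:

```python
def violation_sigmas(S: float, dS: float) -> float:
    """Number of standard errors by which S exceeds the local-realist bound of 2."""
    return (S - 2.0) / dS

# (S, uncertainty) pairs reported in the text
print(f"Weihs et al. 1998: {violation_sigmas(2.73, 0.02):.1f} sigma")    # >35
print(f"Cosmic Bell 2016:  {violation_sigmas(2.502, 0.042):.1f} sigma")  # ~12
print(f"Cosmic Bell 2018:  {violation_sigmas(2.646, 0.070):.1f} sigma")  # >9
```
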
By deploying ARNGs focused on cosmologically distant quasars, the experiment pushed back to nearly 8 billion years ago the most recent time by which any local-realist influences could have exploited the freedom-of-choice loophole to engineer the observed violation of the Bell-CHSH inequality. Given the space-time arrangement of the particular quasars used for the experiment, the past light cones of each emission event, and the expansion history of the universe since the Big Bang, this second Cosmic Bell experiment excluded such local-realist, freedom-of-choice scenarios from 96.0% of the space-time volume of the past light cone of the experiment, extending from the Big Bang to the present time [107]. (See also Ref. [13], chap. 4.)

The two Cosmic Bell experiments thus managed to close the locality loophole while constraining the freedom-of-choice loophole by dozens of orders of magnitude, compared to earlier, pioneering efforts to address this third loophole [109, 110]. Because the Cosmic Bell experiments relied upon free-space transmission of the entangled photons, rather than transmitting the photons via low-loss optical fibers, however, they were not able to close the fair-sampling loophole. In a stunning accomplishment, a separate group in Shanghai led by Jian-Wei Pan conducted a Bell test in which they distributed entangled photons via optical fibers across relatively short distances: about 90 m between the source and each detector station. The (vacuum) light travel time between source and each detector station was therefore about 300 ns, requiring a correspondingly faster rate of generating and implementing fresh detector settings for each experimental run.
Pan's group used ARNGs focused on very bright, nearby stars, with which they could implement fresh detector settings within windows as brief as 250 ns.21 The group measured violations of a Bell-like inequality with p = 7.87 × 10^-4 (roughly equivalent to 3.4 standard deviations), while closing the locality and fair-sampling loopholes and constraining any local-realist, freedom-of-choice scenario to have been set in motion no more recently than 11.5 years prior to the experiment [111].

VI. CONCLUSIONS

John Bell first formulated his now-famous inequality in 1964, and physicists have subjected Bell's inequality to experimental tests since 1972. Virtually every published test has measured violations of Bell's inequality, consistent with predictions from quantum mechanics. Over that period, physicists have identified several significant loopholes and devised clever, updated experimental designs, all with the goal of producing the most compelling evidence possible with which to evaluate the core question that had animated Bell's work: is the universe governed by a theory compatible with local realism, or not? Bolstered by a recent slew of experiments that have addressed various combinations of the major loopholes [86-90, 104, 105, 107, 111], the evidence against local realism is stronger than ever.
Abner Shimony, interview with Joan Bromberg, September 9, 2002, transcript available in the Niels Bohr Library of the Center for History of Physics, American Institute of Physics, College Park, Maryland. See also Ref. [12], pp. 45-46.
Several authors have considered retrocausal models, in which p(A, B|a, b, λ) = p(A|a, λ) p(B|b, λ) because of the flow of certain information backwards in time from the future light cone of a given experimental run. See Refs.[62][63][64].
Bell [1] cited an article by David Bohm and Yakir Aharonov[65], in which they had argued that a proper test of the original Einstein-Podolsky-Rosen scenario should include the provision to change the detector settings while the particles were "still in flight." Bohm had made a similar observation in his discussion of EPR in his 1951 textbook: Ref.[42], p. 622.
The quantum random number generator (QRNG) used in Ref. [69] produced a rapid bitstream of 0's and 1's by shining the output from a light-emitting diode onto a beam splitter. In principle, each photon encountering the beam splitter had a 50-50 chance to be transmitted or reflected. Each path (transmission and reflection) was monitored by a photomultiplier capable of detecting single photons. Depending on which detector recorded a photon within a very brief time interval (∆t ≲ 2 ns), the device would output a 0 or a 1.
As I discuss in Sec. VI, there has been substantial progress and innovation in the statistical analysis of recent Bell tests, as well as in experimental designs. In particular, the use of Gaussian statistics implies several simplifying assumptions, and hence many experimental groups now report their results in terms of p-values rather than (or in addition to) standard deviations.
[1] John S. Bell, "On the Einstein-Podolsky-Rosen paradox," Physics Physique Fizika 1, 195-200 (1964).
[2] John S. Bell, Speakable and Unspeakable in Quantum Mechanics (New York: Cambridge University Press, 1990).
[3] Albert Einstein, Boris Podolsky, and Nathan Rosen, "Can quantum mechanical description of physical reality be considered complete?" Phys. Rev. 47, 777-780 (1935).
[4] John F. Clauser, "Bell's theorem, Bell inequalities, and the 'probability normalization loophole'," in Quantum [Un]Speakables II: Half a Century of Bell's Theorem, edited by Reinhold Bertlmann and Anton Zeilinger (Berlin: Springer, 2017), pp. 451-484.
[5] Wayne Myrvold, Marco Genovese, and Abner Shimony, "Bell's theorem," in The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta (Metaphysics Research Lab, Stanford University, 2019).
[6] David Bohm, "A suggested interpretation of the quantum theory in terms of hidden variables. 1." Phys. Rev. 85, 166-179 (1952).
[7] David Bohm, "A suggested interpretation of the quantum theory in terms of hidden variables. 2." Phys. Rev. 85, 180-193 (1952).
[8] Olival Freire, Jr., "Philosophy enters the optics laboratory: Bell's theorem and its first experimental tests (1965-1982)," Stud. Hist. Phil. Mod. Phys. 37, 577-616 (2006).
[9] Olival Freire, Jr., The Quantum Dissidents: Rebuilding the Foundations of Quantum Mechanics, 1950-1990 (Berlin: Springer, 2015).
[10] Joan Lisa Bromberg, "Device physics vis-à-vis fundamental physics in Cold War America: The case of quantum optics," Isis 97, 237-259 (2006).
[11] Louisa Gilder, The Age of Entanglement: When Quantum Physics was Reborn (New York: Knopf, 2008).
[12] David Kaiser, How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival (New York: W. W. Norton, 2011).
[13] David Kaiser, Quantum Legacies: Dispatches from an Uncertain World (Chicago: University of Chicago Press, 2020).
[14] Andrew Whitaker, John Stewart Bell and Twentieth-Century Physics: Vision and Integrity (Oxford: Oxford University Press, 2016).
[15] Bernard d'Espagnat, Conceptual Foundations of Quantum Mechanics, 2nd ed. (Reading, MA: W. A. Benjamin, 1976).
[16] Michael Redhead, Incompleteness, Nonlocality, and Realism: A Prolegomenon to the Philosophy of Quantum Mechanics (New York: Oxford University Press, 1987).
[17] James T. Cushing and Ernan McMullin, eds., Philosophical Consequences of Quantum Theory: Reflections on Bell's Theorem (Notre Dame, IN: University of Notre Dame Press, 1989).
[18] David Z. Albert, Quantum Mechanics and Experience (Cambridge, MA: Harvard University Press, 1992).
[19] Jeffrey Bub, Interpreting the Quantum World (New York: Cambridge University Press, 1997).
[20] Jeffrey Bub, Bananaworld: Quantum Mechanics for Primates (New York: Oxford University Press, 2016).
[21] Alisa Bokulich and Gregg Jaeger, eds., Quantum Information and Entanglement (New York: Cambridge University Press, 2010).
[22] Tim Maudlin, Quantum Non-Locality and Relativity, 3rd ed. (Oxford: Blackwell, 2011).
[23] John G. Cramer, The Quantum Handshake: Entanglement, Nonlocality, and Transactions (Berlin: Springer, 2016).
[24] Reinhold Bertlmann and Anton Zeilinger, eds., Quantum [Un]Speakables II: Half a Century of Bell's Theorem (Berlin: Springer, 2017).
[25] Chad Orzel, How to Teach Quantum Physics to Your Dog (New York: Scribner, 2009).
[26] Anton Zeilinger, Dance of the Photons: From Einstein to Quantum Teleportation (New York: Farrar, Straus & Giroux, 2010).
[27] George Musser, Spooky Action at a Distance (New York: Macmillan, 2015).
[28] Adam Becker, What is Real: The Unfinished Quest for the Meaning of Quantum Physics (New York: Basic Books, 2018).
[29] Philip Ball, Beyond Weird: Why Everything You Thought You Knew about Quantum Physics is Different (Chicago: University of Chicago Press, 2018).
[30] Tanya Bub and Jeffrey Bub, Totally Random: Why Nobody Understands Quantum Mechanics (Princeton, NJ: Princeton University Press, 2018).
[31] George Greenstein, Quantum Strangeness: Wrestling with Bell's Theorem and the Ultimate Nature of Reality (Cambridge, MA: MIT Press, 2019).
[32] Jed Brody, Quantum Entanglement (Cambridge, MA: MIT Press, 2020).
[33] G. Faraci, D. Gutkowski, S. Notarrigo, and A. R. Pennisi, "An experimental test of the EPR paradox," Lett. Nuovo Cimento 9, 607-611 (1974).
[34] L. R. Kasday, J. D. Ullman, and C. S. Wu, "Angular correlation of Compton-scattered annihilation photons and hidden variables," Nuovo Cimento B 25, 633-661 (1975).
[35] John F. Clauser and Abner Shimony, "Bell's theorem: Experimental tests and implications," Rept. Prog. Phys. 41, 1881-1927 (1978).
[36] Anton Zeilinger, "Experiment and the foundations of quantum physics," Rev. Mod. Phys. 71, S288-S297 (1999).
[37] Gregor Weihs, "Loopholes in experiments," in Compendium of Quantum Physics: Concepts, Experiments, History, and Philosophy, edited by Friedel Weinert, Klaus Hentschel, and Daniel Greenberger (Berlin: Springer, 2009), pp. 348-355.
[38] Nicolas Brunner, Daniel Cavalcanti, Stefano Pironio, Valerio Scarani, and Stephanie Wehner, "Bell nonlocality," Rev. Mod. Phys. 86, 419-478 (2014), arXiv:1303.2849 [quant-ph].
[39] Jan-Åke Larsson, "Loopholes in Bell inequality tests of local realism," J. Phys. A 47, 424003 (2014), arXiv:1407.0363 [quant-ph].
[40] Marissa Giustina, "On loopholes and experiments," in Quantum [Un]Speakables II: Half a Century of Bell's Theorem, edited by Reinhold Bertlmann and Anton Zeilinger (Berlin: Springer, 2017), pp. 485-501.
[41] John F. Clauser, Michael A. Horne, Abner Shimony, and Richard A. Holt, "Proposed experiment to test local hidden variable theories," Phys. Rev. Lett. 23, 880-884 (1969).
[42] David Bohm, Quantum Mechanics (Englewood Cliffs, NJ: Prentice-Hall, 1951).
[43] Daniel M. Greenberger, Michael A. Horne, and Anton Zeilinger, "Going beyond Bell's theorem," in Bell's Theorem, Quantum Theory, and Conceptions of the Universe, edited by M. Kafatos (Dordrecht: Kluwer, 1989), pp. 69-72, arXiv:0712.0921 [quant-ph].
[44] Daniel M. Greenberger, Michael A. Horne, Abner Shimony, and Anton Zeilinger, "Bell's theorem without inequalities," Am. J. Phys. 58, 1131-1143 (1990).
[45] Jason Gallicchio, Andrew S. Friedman, and David I. Kaiser, "Testing Bell's inequality with cosmic photons: Closing the setting-independence loophole," Phys. Rev. Lett. 112, 110405 (2014), arXiv:1310.3288 [quant-ph].
[46] John S. Bell, "Introduction to the hidden-variable question," in Proceedings of the International School of Physics "Enrico Fermi," Foundations of Quantum Mechanics, edited by B. d'Espagnat (New York: Academic Press, 1971), pp. 171-181.
[47] Albert Einstein, "Autobiographical notes," in Albert Einstein: Philosopher-Scientist, edited by P. A. Schilpp (LaSalle, IL: Open Court, 1949), pp. 3-94.
[48] Don Howard, "Einstein on locality and separability," Stud. Hist. Phil. Sci. 16, 171-201 (1985).
[49] Arthur Fine, The Shaky Game: Einstein, Realism, and the Quantum Theory (Chicago: University of Chicago Press, 1986).
[50] Hanoch Gutfreund and Jürgen Renn, Einstein on Einstein: Autobiographical and Scientific Reflections (Princeton, NJ: Princeton University Press, 2020).
[51] Michael J. W. Hall, "Relaxed Bell inequalities and Kochen-Specker theorems," Phys. Rev. A 84, 022102 (2011), arXiv:1102.4467 [quant-ph].
[52] J. J. Sakurai, Modern Quantum Mechanics (Reading, MA: Addison-Wesley, 1994).
[53] B. S. Cirel'son, "Quantum generalizations of Bell's inequality," Lett. Math. Phys. 4, 93-100 (1980).
[54] Alain Aspect, "Bell's theorem: The naive view of an experimentalist," in Quantum [Un]Speakables: From Bell to Quantum Information, edited by Reinhold Bertlmann and Anton Zeilinger (Berlin: Springer, 2002), pp. 119-153.
[55] Henry P. Stapp, "Correlation experiments and the nonvalidity of ordinary ideas about the physical world," Lawrence Berkeley Laboratory report LBL-5333 (1976).
[56] M. Lamehi-Rachti and W. Mittig, "Quantum mechanics and hidden variables: A test of Bell's inequality by the measurement of the spin correlation in low-energy proton-proton scattering," Phys. Rev. D 14, 2543-2555 (1976).
[57] John F. Clauser, "Early history of Bell's theorem," in Quantum [Un]Speakables: From Bell to Quantum Information, edited by Reinhold Bertlmann and Anton Zeilinger (Berlin: Springer, 2002), pp. 61-98.
[58] Stuart J. Freedman and John F. Clauser, "Experimental test of local hidden-variable theories," Phys. Rev. Lett. 28, 938-941 (1972).
[59] Edward S. Fry and Randall C. Thompson, "Experimental test of local hidden-variable theories," Phys. Rev. Lett. 37, 465-468 (1976).
Atom based tests of the Bell inequalities: The legacy of John Bell continues. Edward S Fry, Thomas Walther, Quantum [Un]Speakables: From Bell to Quantum Information. Reinhold Bertlmann and Anton ZeilingerBerlinSpringerEdward S. Fry and Thomas Walther, "Atom based tests of the Bell inequalities: The legacy of John Bell continues," in Quantum [Un]Speakables: From Bell to Quantum Information, edited by Reinhold Bertlmann and Anton Zeilinger (Berlin: Springer, 2002) pp. 103-118.
Experimental investigation of a polarization correlation anomaly. John F Clauser, 10.1103/PhysRevLett.36.1223Phys. Rev. Lett. 36John F. Clauser, "Experimental investigation of a polarization correlation anomaly," Phys. Rev. Lett. 36, 1223-1226 (1976).
S-matrix, Feynman zigzag and Einstein correlation. O Costa De, Beauregard , 10.1016/0375-9601(78)90480-2Phys. Lett. A. 67O. Costa de Beauregard, "S-matrix, Feynman zigzag and Einstein correlation," Phys. Lett. A 67, 171-174 (1978).
Bell's theorem and the causal arrow of time. Nathan Argaman, 10.1119/1.3456564arXiv:0807.2041Am. J. Phys. 78quant-phNathan Argaman, "Bell's theorem and the causal arrow of time," Am. J. Phys. 78, 1007-1013 (2010), arXiv:0807.2041 [quant-ph].
Disentangling the quantum world. Huw Price, Ken Wharton, 10.3390/e17117752arXiv:1508.01140Entropy. 17quant-phHuw Price and Ken Wharton, "Disentangling the quantum world," Entropy 17, 7752-7767 (2015), arXiv:1508.01140 [quant-ph].
Discussion of experimental proof for the paradox of Einstein, Rosen, and Podolsky. David Bohm, Yakir Aharonov, 10.1103/PhysRev.108.1070Phys. Rev. 108David Bohm and Yakir Aharonov, "Discussion of experimental proof for the paradox of Einstein, Rosen, and Podolsky," Phys. Rev. 108, 1070-1076 (1957).
Proposed experiment to test the nonseparability of quantum mechanics. Alain Aspect, 10.1103/PhysRevD.14.1944Phys. Rev. D. 14Alain Aspect, "Proposed experiment to test the nonseparability of quantum mechanics," Phys. Rev. D 14, 1944-1951 (1976).
Experimental test of Bell's inequalities using time varying analyzers. Alain Aspect, Jean Dalibard, Gérard Roger, 10.1103/PhysRevLett.49.1804Phys. Rev. Lett. 49Alain Aspect, Jean Dalibard, and Gérard Roger, "Experimental test of Bell's inequalities using time varying analyzers," Phys. Rev. Lett. 49, 1804-1807 (1982).
Testing Bell's inequalities with periodic switching. Anton Zeilinger, 10.1016/0375-9601(86)90520-7Phys. Lett. A. 118Anton Zeilinger, "Testing Bell's inequalities with periodic switching," Phys. Lett. A 118, 1-2 (1986).
Violation of Bell's inequality under strict Einstein locality conditions. Gregor Weihs, Thomas Jennewein, Christoph Simon, Harald Weinfurter, Anton Zeilinger, 10.1103/PhysRevLett.81.5039arXiv:quant-ph/9810080Phys. Rev. Lett. 81quant-phGregor Weihs, Thomas Jennewein, Christoph Simon, Harald Weinfurter, and Anton Zeilinger, "Violation of Bell's inequality under strict Einstein locality conditions," Phys. Rev. Lett. 81, 5039-5043 (1998), arXiv:quant-ph/9810080 [quant-ph].
Bell's theorem for space-like separation. Gregor Weihs, Quantum [Un]Speakables: From Bell to Quantum Information. Reinhold Bertlmann and Anton ZeilingerBerlinSpringerGregor Weihs, "Bell's theorem for space-like separation," in Quantum [Un]Speakables: From Bell to Quantum Information, edited by Reinhold Bertlmann and Anton Zeilinger (Berlin: Springer, 2002) pp. 155-162.
Violation of Bell inequalities by photons more than 10 km apart. W Tittel, J Brendel, H Zbinden, N Gisin, 10.1103/PhysRevLett.81.3563arXiv:quant-ph/9806043Phys. Rev. Lett. 81quant-phW. Tittel, J. Brendel, H. Zbinden, and N. Gisin, "Violation of Bell inequalities by photons more than 10 km apart," Phys. Rev. Lett. 81, 3563-3566 (1998), arXiv:quant-ph/9806043 [quant-ph].
Bell's inequality test: More ideal than ever. Alain Aspect, 10.1038/18296Nature. 398Alain Aspect, "Bell's inequality test: More ideal than ever," Nature 398, 189-190 (1999).
Experimental consequences of objective local theories. F John, Michael A Clauser, Horne, 10.1103/PhysRevD.10.526Phys. Rev. D. 10John F. Clauser and Michael A. Horne, "Experimental consequences of objective local theories," Phys. Rev. D 10, 526-535 (1974).
Detector inefficiencies in the Einstein-Podolsky-Rosen experiment. Anupam Garg, N D Mermin, 10.1103/PhysRevD.35.3831Phys. Rev. D. 35Anupam Garg and N. D. Mermin, "Detector inefficiencies in the Einstein-Podolsky-Rosen experiment," Phys. Rev. D 35, 3831-3835 (1987).
Background level and counter efficiencies required for a loophole-free Einstein-Podolsky-Rosen experiment. Philippe H Eberhard, 10.1103/PhysRevA.47.R747Phys. Rev. A. 47Philippe H. Eberhard, "Background level and counter efficiencies required for a loophole-free Einstein- Podolsky-Rosen experiment," Phys. Rev. A 47, R747-R750 (1993).
Hidden-variable example based upon data rejection. Philip M Pearle, 10.1103/PhysRevD.2.1418Phys. Rev. D. 2Philip M. Pearle, "Hidden-variable example based upon data rejection," Phys. Rev. D 2, 1418-1425 (1970).
Some local models for correlation experiments. Arthur Fine, 10.1007/BF00416904Synthese. 50Arthur Fine, "Some local models for correlation experiments," Synthese 50, 279-294 (1982).
Local realism has not been refuted by atomic cascade experiments. T W Marshall, E Santos, F Selleri, 10.1016/0375-9601(83)90531-5Phys. Lett. A. 98T. W. Marshall, E. Santos, and F. Selleri, "Local realism has not been refuted by atomic cascade experiments," Phys. Lett. A 98, 5-9 (1983).
Experimental violation of a Bell's inequality with efficient detection. M A Rowe, D Kielpinski, V Meyer, C A Sackett, W M Itano, C Monroe, D J Wineland, 10.1038/35057215Nature. 409M. A. Rowe, D. Kielpinski, V. Meyer, C. A. Sackett, W. M. Itano, C. Monroe, and D. J. Wineland, "Experimental violation of a Bell's inequality with efficient detection," Nature 409, 791-794 (2001).
Bell inequality violation with two remote atomic qubits. D N Matsukevich, P Maunz, D L Moehring, S Olmschenk, C Monroe, 10.1103/PhysRevLett.100.150404Phys. Rev. Lett. 100150404D. N. Matsukevich, P. Maunz, D. L. Moehring, S. Olmschenk, and C. Monroe, "Bell inequality violation with two remote atomic qubits," Phys. Rev. Lett. 100, 150404 (2008).
. H Markus Ansmann, Radoslaw C Wang, Max Bialczak, Erik Hofheinz, M Lucero, A D Neeley, Markus Ansmann, H. Wang, Radoslaw C. Bialczak, Max Hofheinz, Erik Lucero, M. Neeley, A. D.
Violation of Bell's inequality in Josephson phase qubits. D O'connell, M Sank, J Weides, A N Wenner, John M Cleland, Martinis, 10.1038/nature08363Nature. 461O'Connell, D. Sank, M. Weides, J. Wenner, A. N. Cleland, and John M. Martinis, "Violation of Bell's inequality in Josephson phase qubits," Nature 461, 504-506 (2009).
. B G Christensen, K T Mccusker, J B Altepeter, B Calkins, T Gerrits, A E Lita, A Miller, L K , B. G. Christensen, K. T. McCusker, J. B. Altepeter, B. Calkins, T. Gerrits, A. E. Lita, A. Miller, L. K.
Detectionloophole-free test of quantum nonlocality, and applications. Y Shalm, S W Zhang, N Nam, C C W Brunner, N Lim, P G Gisin, Kwiat, 10.1103/PhysRevLett.111.130406arXiv:1306.5772Phys. Rev. Lett. 111130406quant-phShalm, Y. Zhang, S. W. Nam, N. Brunner, C. C. W. Lim, N. Gisin, and P. G. Kwiat, "Detection- loophole-free test of quantum nonlocality, and applications," Phys. Rev. Lett. 111, 130406 (2013), arXiv:1306.5772 [quant-ph].
Bell violation using entangled photons without the fair-sampling assumption. Marissa Giustina, Alexandra Mech, Sven Ramelow, Bernhard Wittmann, Johannes Kofler, Jörn Beyer, Adriana Lita, Brice Calkins, Thomas Gerrits, Sae Woo Nam, Rupert Ursin, Anton Zeilinger, 10.1038/nature12012arXiv:1212.0533Nature. 497quant-phMarissa Giustina, Alexandra Mech, Sven Ramelow, Bernhard Wittmann, Johannes Kofler, Jörn Beyer, Adriana Lita, Brice Calkins, Thomas Gerrits, Sae Woo Nam, Rupert Ursin, and Anton Zeilinger, "Bell violation using entangled photons without the fair-sampling assumption," Nature 497, 227-230 (2013), arXiv:1212.0533 [quant-ph].
New high-intensity source of polarization-entangled photon pairs. Paul G Kwiat, Klaus Mattle, Harald Weinfurter, Anton Zeilinger, Alexander V Sergienko, Yanhua Shih, 10.1103/PhysRevLett.75.4337Phys. Rev. Lett. 75Paul G. Kwiat, Klaus Mattle, Harald Weinfurter, Anton Zeilinger, Alexander V. Sergienko, and Yanhua Shih, "New high-intensity source of polarization-entangled photon pairs," Phys. Rev. Lett. 75, 4337-4341 (1995).
Event-ready-detectors' Bell experiment via entanglement swapping. M Żukowski, A Zeilinger, M A Horne, A K Ekert, 10.1103/PhysRevLett.71.4287Phys. Rev. Lett. 71M.Żukowski, A. Zeilinger, M. A. Horne, and A. K. Ekert, "'Event-ready-detectors' Bell experiment via entanglement swapping," Phys. Rev. Lett. 71, 4287-4290 (1993).
Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres. B Hensen, H Bernien, A E Dréau, A Reiserer, N Kalb, M S Blok, J Ruitenberg, R F L Vermeulen, R N Schouten, C Abellán, W Amaya, V Pruneri, M W Mitchell, M Markham, D J Twitchen, D Elkouss, S Wehner, T H Taminiau, R Hanson, 10.1038/nature15759arXiv:1508.05949Nature. 526quant-phB. Hensen, H. Bernien, A. E. Dréau, A. Reiserer, N. Kalb, M. S. Blok, J. Ruitenberg, R. F. L. Vermeulen, R. N. Schouten, C. Abellán, W. Amaya, V. Pruneri, M. W. Mitchell, M. Markham, D. J. Twitchen, D. Elkouss, S. Wehner, T. H. Taminiau, and R. Hanson, "Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres," Nature 526, 682-686 (2015), arXiv:1508.05949 [quant-ph].
Loophole-free Bell test using electron spins in diamond: Second experiment and additional analysis. B Hensen, N Kalb, M S Blok, A E Dréau, A Reiserer, R F L Vermeulen, R N Schouten, M Markham, D J Twitchen, K Goodenough, D Elkouss, S Wehner, T H Taminiau, R Hanson, 10.1038/srep30289arXiv:1603.05705Scientific Reports. 6quant-phB. Hensen, N. Kalb, M. S. Blok, A. E. Dréau, A. Reiserer, R. F. L. Vermeulen, R. N. Schouten, M. Markham, D. J. Twitchen, K. Goodenough, D. Elkouss, S. Wehner, T. H. Taminiau, and R. Hanson, "Loophole-free Bell test using electron spins in diamond: Second experiment and additional analysis," Scientific Reports 6, 30289 (2016), arXiv:1603.05705 [quant-ph].
. K Lynden, Evan Shalm, Bradley G Meyer-Scott, Peter Christensen, Michael A Bierhorst, Martin J Wayne, Thomas Stevens, Scott Gerrits, Deny R Glancy, Michael S Hamel, Kevin J Allman, Lynden K. Shalm, Evan Meyer-Scott, Bradley G. Christensen, Peter Bierhorst, Michael A. Wayne, Martin J. Stevens, Thomas Gerrits, Scott Glancy, Deny R. Hamel, Michael S. Allman, Kevin J.
Strong loophole-free test of local realism. Shellee D Coakley, Carson Dyer, Adriana E Hodge, Lita, B Varun, Camilla Verma, Edward Lambrocco, Alan L Tortorici, Yanbao Migdall, Daniel R Zhang, William H Kumor, Francesco Farr, Matthew D Marsili, Jeffrey A Shaw, Carlos Stern, Waldimar Abellán, Valerio Amaya, Thomas Pruneri, Morgan W Jennewein, Paul G Mitchell, Joshua C Kwiat, Richard P Bienfang, Emanuel Mirin, Sae Woo Knill, Nam, 10.1103/PhysRevLett.115.250402arXiv:1511.03189Phys. Rev. Lett. 115250402quant-phCoakley, Shellee D. Dyer, Carson Hodge, Adriana E. Lita, Varun B. Verma, Camilla Lambrocco, Edward Tortorici, Alan L. Migdall, Yanbao Zhang, Daniel R. Kumor, William H. Farr, Francesco Marsili, Matthew D. Shaw, Jeffrey A. Stern, Carlos Abellán, Waldimar Amaya, Valerio Pruneri, Thomas Jennewein, Morgan W. Mitchell, Paul G. Kwiat, Joshua C. Bienfang, Richard P. Mirin, Emanuel Knill, and Sae Woo Nam, "Strong loophole-free test of local realism," Phys. Rev. Lett. 115, 250402 (2015), arXiv:1511.03189 [quant-ph].
Significant-loophole-free test of Bell's theorem with entangled photons. Marissa Giustina, A M Marijn, Sören Versteegh, Johannes Wengerowsky, Armin Handsteiner, Kevin Hochrainer, Fabian Phelan, Johannes Steinlechner, Jan-Åke Kofler, Carlos Larsson, Waldimar Abellán, Valerio Amaya, Morgan W Pruneri, Jörn Mitchell, Thomas Beyer, Adriana E Gerrits, Lynden K Lita, Sae Woo Shalm, Thomas Nam, Rupert Scheidl, Bernhard Ursin, Anton Wittmann, Zeilinger, 10.1103/PhysRevLett.115.250401arXiv:1511.03190Phys. Rev. Lett. 115250401quant-phMarissa Giustina, Marijn A. M. Versteegh, Sören Wengerowsky, Johannes Handsteiner, Armin Hochrainer, Kevin Phelan, Fabian Steinlechner, Johannes Kofler, Jan-Åke Larsson, Carlos Abellán, Waldimar Amaya, Valerio Pruneri, Morgan W. Mitchell, Jörn Beyer, Thomas Gerrits, Adriana E. Lita, Lynden K. Shalm, Sae Woo Nam, Thomas Scheidl, Rupert Ursin, Bernhard Wittmann, and Anton Zeilinger, "Significant-loophole-free test of Bell's theorem with entangled photons," Phys. Rev. Lett. 115, 250401 (2015), arXiv:1511.03190 [quant-ph].
Event-ready Bell test using entangled atoms simultaneously closing detection and locality loopholes. Wenjamin Rosenfeld, Daniel Burchardt, Robert Garthoff, Kai Redeker, Norbert Ortegel, Markus Rau, Harald Weinfurter, 10.1103/PhysRevLett.119.010402arXiv:1611.04604Phys. Rev. Lett. 11910402quant-phWenjamin Rosenfeld, Daniel Burchardt, Robert Garthoff, Kai Redeker, Norbert Ortegel, Markus Rau, and Harald Weinfurter, "Event-ready Bell test using entangled atoms simultaneously closing detection and locality loopholes," Phys. Rev. Lett. 119, 010402 (2017), arXiv:1611.04604 [quant-ph].
Comment on 'The theory of local beables. A Shimony, M A Horne, J F Clauser, Epistemological Letters. 13A. Shimony, M. A. Horne, and J. F. Clauser, "Comment on 'The theory of local beables'," Epistemo- logical Letters 13, 1-9 (1976).
An exchange on local beables. John S Bell, Abner Shimony, Michael A Horne, John F Clauser, 10.1111/j.1746-8361.1985.tb01249.xDialectica. 39John S. Bell, Abner Shimony, Michael A. Horne, and John F. Clauser, "An exchange on local beables," Dialectica 39, 85-110 (1985).
The theory of local beables. John S Bell, Epistemological Letters. 9John S. Bell, "The theory of local beables," Epistemological Letters 9, 11-24 (1976).
Free variables and local causality. John S Bell, Epistemological Letters. 15John S. Bell, "Free variables and local causality," Epistemological Letters 15, 79-84 (1977).
Simulating physics with computers. Richard P Feynman, 10.1007/BF02650179Int. J. Theo. Phys. 21Richard P. Feynman, "Simulating physics with computers," Int. J. Theo. Phys. 21, 467-488 (1982).
. K Joseph, Jessica Blitzstein, Hwang, CRC PressNew YorkIntroduction to ProbabilityJoseph K. Blitzstein and Jessica Hwang, Introduction to Probability (New York: CRC Press, 2014).
Bell's theorem does not eliminate fully causal hidden variables. Carl H Brans, 10.1007/BF00670750Int. J. Theo. Phys. 27Carl H. Brans, "Bell's theorem does not eliminate fully causal hidden variables," Int. J. Theo. Phys. 27, 219-226 (1988).
Relaxed Bell inequalities with arbitrary measurement dependence for each observer. Andrew S Friedman, Alan H Guth, J W Michael, David I Hall, Jason Kaiser, Gallicchio, 10.1103/PhysRevA.99.012121arXiv:1809.01307Phys. Rev. A. 9912121quant-phAndrew S. Friedman, Alan H. Guth, Michael J. W. Hall, David I. Kaiser, and Jason Gallicchio, "Relaxed Bell inequalities with arbitrary measurement dependence for each observer," Phys. Rev. A 99, 012121 (2019), arXiv:1809.01307 [quant-ph].
Local deterministic model of singlet state correlations based on relaxing measurement independence. J W Michael, Hall, 10.1103/PhysRevLett.105.250404arXiv:1007.5518Phys. Rev. Lett. 105250404quant-phMichael J. W. Hall, "Local deterministic model of singlet state correlations based on relaxing measure- ment independence," Phys. Rev. Lett. 105, 250404 (2010), arXiv:1007.5518 [quant-ph].
The significance of measurement independence for Bell inequalities and locality. J W Michael, Hall, 10.1007/978-3-319-31299-6_11arXiv:1511.00729At the Frontier of Spacetime. T. Asselmeyer-MalugaBerlinSpringerquant-phMichael J. W. Hall, "The significance of measurement independence for Bell inequalities and locality," in At the Frontier of Spacetime, edited by T. Asselmeyer-Maluga (Berlin: Springer, 2016) pp. 189-204, arXiv:1511.00729 [quant-ph].
How much measurement independence is needed to demonstrate nonlocality?. Jonathan Barrett, Nicolas Gisin, 10.1103/PhysRevLett.106.100406arXiv:1008.3612Phys. Rev. Lett. 106100406quant-phJonathan Barrett and Nicolas Gisin, "How much measurement independence is needed to demonstrate nonlocality?" Phys. Rev. Lett. 106, 100406 (2011), arXiv:1008.3612 [quant-ph].
Optimal free will on one side in reproducing the singlet correlation. Manik Banik, Subhadipa Md Rajjak Gazi, Ashutosh Das, Samir Rai, Kunkri, 10.1088/1751-8113/45/20/205301arXiv:1204.3835J. Phys. A. 45205301quant-phManik Banik, MD Rajjak Gazi, Subhadipa Das, Ashutosh Rai, and Samir Kunkri, "Optimal free will on one side in reproducing the singlet correlation," J. Phys. A 45, 205301 (2012), arXiv:1204.3835 [quant-ph].
Random 'choices' and the locality loophole. Stefano Pironio, arXiv:1510.00248quant-phStefano Pironio, "Random 'choices' and the locality loophole," (2015), arXiv:1510.00248 [quant-ph].
Challenging local realism with human choices. C Abellán, BIG Bell Test Collaboration10.1038/s41586-018-0085-3arXiv:1805.04431Nature. 557quant-phC. Abellán et al. (BIG Bell Test Collaboration), "Challenging local realism with human choices," Nature 557, 212-216 (2018), arXiv:1805.04431 [quant-ph].
Cosmic Bell test: Measurement settings from Milky Way stars. Johannes Handsteiner, Andrew S Friedman, Dominik Rauch, Jason Gallicchio, Bo Liu, Hannes Hosp, Johannes Kofler, David Bricher, Matthias Fink, Calvin Leung, Anthony Mark, Hien T Nguyen, Isabella Sand Ers, Fabian Steinlechner, Rupert Ursin, Sören Wengerowsky, Alan H Guth, David I Kaiser, Thomas Scheidl, Anton Zeilinger, 10.1103/PhysRevLett.118.060401arXiv:1611.06985Phys. Rev. Lett. 11860401quant-phJohannes Handsteiner, Andrew S. Friedman, Dominik Rauch, Jason Gallicchio, Bo Liu, Hannes Hosp, Johannes Kofler, David Bricher, Matthias Fink, Calvin Leung, Anthony Mark, Hien T. Nguyen, Isabella Sand ers, Fabian Steinlechner, Rupert Ursin, Sören Wengerowsky, Alan H. Guth, David I. Kaiser, Thomas Scheidl, and Anton Zeilinger, "Cosmic Bell test: Measurement settings from Milky Way stars," Phys. Rev. Lett. 118, 060401 (2017), arXiv:1611.06985 [quant-ph].
Astronomical random numbers for quantum foundations experiments. Calvin Leung, Amy Brown, Hien Nguyen, Andrew S Friedman, David I Kaiser, Jason Gallicchio, 10.1103/PhysRevA.97.042120arXiv:1706.02276Phys. Rev. A. 9742120quant-phCalvin Leung, Amy Brown, Hien Nguyen, Andrew S. Friedman, David I. Kaiser, and Jason Gallicchio, "Astronomical random numbers for quantum foundations experiments," Phys. Rev. A 97, 042120 (2018), arXiv:1706.02276 [quant-ph].
. Dominik Rauch, Johannes Handsteiner, Armin Hochrainer, Jason Gallicchio, Andrew S Friedman, Calvin Leung, Bo Liu, Lukas Bulla, Sebastian Ecker, Fabian Steinlechner, Rupert Ursin, Beili Hu, David Leon, Chris Benn, Adriano Ghedina, Massimo Cecconi, Alan H Guth, David I Kaiser, ThomasDominik Rauch, Johannes Handsteiner, Armin Hochrainer, Jason Gallicchio, Andrew S. Friedman, Calvin Leung, Bo Liu, Lukas Bulla, Sebastian Ecker, Fabian Steinlechner, Rupert Ursin, Beili Hu, David Leon, Chris Benn, Adriano Ghedina, Massimo Cecconi, Alan H. Guth, David I. Kaiser, Thomas
Cosmic Bell test using random measurement settings from high-redshift quasars. Anton Scheidl, Zeilinger, 10.1103/PhysRevLett.121.080403arXiv:1808.05966Phys. Rev. Lett. 12180403quant-phScheidl, and Anton Zeilinger, "Cosmic Bell test using random measurement settings from high-redshift quasars," Phys. Rev. Lett. 121, 080403 (2018), arXiv:1808.05966 [quant-ph].
Planck 2018 results, VI: Cosmological parameters. N Aghanim, Planck collaborationarXiv:1807.06209astro-ph.CON. Aghanim et al. (Planck collaboration), "Planck 2018 results, VI: Cosmological parameters," (2018), arXiv:1807.06209 [astro-ph.CO].
Violation of local realism with freedom of choice. Thomas Scheidl, Rupert Ursin, Johannes Kofler, Sven Ramelow, Xiao-Song Ma, Thomas Herbst, Lothar Ratschbacher, Alessandro Fedrizzi, Nathan K Langford, Thomas Jennewein, Anton Zeilinger, 10.1073/pnas.1002780107arXiv:0811.3129Proc. Nat. Acad. Sci. (USA). 107quant-phThomas Scheidl, Rupert Ursin, Johannes Kofler, Sven Ramelow, Xiao-Song Ma, Thomas Herbst, Lothar Ratschbacher, Alessandro Fedrizzi, Nathan K. Langford, Thomas Jennewein, and Anton Zeilinger, "Violation of local realism with freedom of choice," Proc. Nat. Acad. Sci. (USA) 107, 19708-19713 (2010), arXiv:0811.3129 [quant-ph].
Demonstration of quantum nonlocality in the presence of measurement dependence. Djeylan Aktas, Sébastien Tanzilli, Anthony Martin, Gilles Pütz, Rob Thew, Nicolas Gisin, 10.1103/PhysRevLett.114.220404arXiv:1504.08332Phys. Rev. Lett. 114220404quant-phDjeylan Aktas, Sébastien Tanzilli, Anthony Martin, Gilles Pütz, Rob Thew, and Nicolas Gisin, "Demonstration of quantum nonlocality in the presence of measurement dependence," Phys. Rev. Lett. 114, 220404 (2015), arXiv:1504.08332 [quant-ph].
Test of local realism into the past without detection and locality loopholes. Ming-Han Li, Cheng Wu, Yanbao Zhang, Wen-Zhao Liu, Bing Bai, Yang Liu, Weijun Zhang, Qi Zhao, Hao Li, Zhen Wang, Lixing You, W J Munro, Juan Yin, Jun Zhang, Cheng-Zhi Peng, Xiongfeng Ma, Qiang Zhang, Jingyun Fan, Jian-Wei Pan, 10.1103/PhysRevLett.121.080404arXiv:1808.07653Phys. Rev. Lett. 12180404quant-phMing-Han Li, Cheng Wu, Yanbao Zhang, Wen-Zhao Liu, Bing Bai, Yang Liu, Weijun Zhang, Qi Zhao, Hao Li, Zhen Wang, Lixing You, W. J. Munro, Juan Yin, Jun Zhang, Cheng-Zhi Peng, Xiongfeng Ma, Qiang Zhang, Jingyun Fan, and Jian-Wei Pan, "Test of local realism into the past without detection and locality loopholes," Phys. Rev. Lett. 121, 080404 (2018), arXiv:1808.07653 [quant-ph].
Requirements for a loophole-free photonic Bell test using imperfect setting generators. Johannes Kofler, Marissa Giustina, Jan-Åke Larsson, Morgan W Mitchell, 10.1103/PhysRevA.93.032115arXiv:1411.4787Phys. Rev. A. 9332115quant-phJohannes Kofler, Marissa Giustina, Jan-Åke Larsson, and Morgan W. Mitchell, "Requirements for a loophole-free photonic Bell test using imperfect setting generators," Phys. Rev. A 93, 032115 (2016), arXiv:1411.4787 [quant-ph].
Time, finite statistics, and Bell's fifth position. Richard D Gill, arXiv:quant-ph/0301059Proceedings of Foundations of Probability and Physics. Foundations of Probability and PhysicsVäxjö, SwedenVäxjö University Pressquant-phRichard D. Gill, "Time, finite statistics, and Bell's fifth position," in Proceedings of Foundations of Probability and Physics (Växjö, Sweden: Växjö University Press, 2003) pp. 179-206, arXiv:quant- ph/0301059 [quant-ph].
Statistics, causality and Bell's theorem. Richard D Gill, 10.1214/14-STS490arXiv:1207.5103Statist. Sci. 29stat.APRichard D. Gill, "Statistics, causality and Bell's theorem," Statist. Sci. 29, 512-528 (2014), arXiv:1207.5103 [stat.AP].
A robust mathematical model for a loophole-free Clauser-Horne experiment. Peter Bierhorst, 10.1088/1751-8113/48/19/195302arXiv:1312.2999J. Phys. A. 48195302quant-phPeter Bierhorst, "A robust mathematical model for a loophole-free Clauser-Horne experiment," J. Phys. A 48, 195302 (2015), arXiv:1312.2999 [quant-ph].
Nearly) optimal p values for all Bell inequalities. David Elkouss, Stephanie Wehner, 10.1038/npjqi.2016.26arXiv:1510.07233npj Quantum Information. 216026quant-phDavid Elkouss and Stephanie Wehner, "(Nearly) optimal p values for all Bell inequalities," npj Quantum Information 2, 16026 (2016), arXiv:1510.07233 [quant-ph].
Quantum cryptography based on Bell's theorem. A K Ekert, 10.1103/PhysRevLett.67.661Phys. Rev. Lett. 67A.K. Ekert, "Quantum cryptography based on Bell's theorem," Phys. Rev. Lett. 67, 661 -663 (1991).
Quantum cryptography. N Gisin, G Ribordy, W Tittel, H Zbinden, 10.1103/RevModPhys.74.145arXiv:quant-ph/0101098Rev. Mod. Phys. 74N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, "Quantum cryptography," Rev. Mod. Phys. 74, 146 -195 (2002), arXiv:quant-ph/0101098.
Miloslav Dušek, Norbert Lütkenhaus, and Momtchil Peev. Helle Valerio Scarani, Nicolas J Bechmann-Pasquinucci, Cerf, 10.1103/RevModPhys.81.1301arXiv:0802.4155Rev. Mod. Phys. 81The security of practical quantum key distribution. quant-phValerio Scarani, Helle Bechmann-Pasquinucci, Nicolas J. Cerf, Miloslav Dušek, Norbert Lütkenhaus, and Momtchil Peev, "The security of practical quantum key distribution," Rev. Mod. Phys. 81, 1301-1350 (2009), arXiv:0802.4155 [quant-ph].
Using quantum key distribution for cryptographic purposes: A survey. Romain Alléaume, Cyril Branciard, Jan Bouda, Thierry Debuisschert, Mehrdad Dianati, Nicolas Gisin, Mark Godfrey, Philippe Grangier, Thomas Langer, Norbert Lutkenhaus, Christian Monyk, Philippe Painchault, Momtchil Peev, Andreas Poppe, Thomas Pornin, John Rarity, Renato Renner, Gregoire Ribordy, Michel Riguidel, Louis Salvail, Andrew Shields, Harald Weinfurter, Anton Zeilinger, 10.1016/j.tcs.2014.09.018arXiv:quant-ph/0701168Theo. Computer Science. 560Romain Alléaume, Cyril Branciard, Jan Bouda, Thierry Debuisschert, Mehrdad Dianati, Nicolas Gisin, Mark Godfrey, Philippe Grangier, Thomas Langer, Norbert Lutkenhaus, Christian Monyk, Philippe Painchault, Momtchil Peev, Andreas Poppe, Thomas Pornin, John Rarity, Renato Renner, Gregoire Ribordy, Michel Riguidel, Louis Salvail, Andrew Shields, Harald Weinfurter, and Anton Zeilinger, "Using quantum key distribution for cryptographic purposes: A survey," Theo. Computer Science 560, 62-81 (2014), arXiv:quant-ph/0701168.
Large scale quantum key distribution: Challenges and solutions. Qiang Zhang, Feihu Xu, Yu-Ao Chen, Cheng-Zhi Peng, Jian-Wei Pan, 10.1364/OE.26.024260arXiv:1809.02291Optics Express. 2624260quant-phQiang Zhang, Feihu Xu, Yu-Ao Chen, Cheng-Zhi Peng, and Jian-Wei Pan, "Large scale quantum key distribution: Challenges and solutions," Optics Express 26, 24260 (2018), arXiv:1809.02291 [quant-ph].
Quantum cryptography with realistic devices. Feihu Xu, Xiongfeng Qiang Ma, Hoi-Kwong Zhang, Jian-Wei Lo, Pan, arXiv:1903.09051quant-phFeihu Xu, Xiongfeng Ma. Qiang Zhang, Hoi-Kwong Lo, and Jian-Wei Pan, "Quantum cryptography with realistic devices," (2019), arXiv:1903.09051 [quant-ph].
S Pirandola, U L Andersen, L Banchi, M Berta, D Bunandar, R Colbeck, D Englund, T Gehring, C Lupo, C Ottaviani, J Pereira, M Razavi, J S Shaari, M Tomamichel, V C Usenko, G Vallone, P Villoresi, P Wallden, arXiv:1906.01645Advances in quantum cryptography. quant-phS. Pirandola, U. L. Andersen, L. Banchi, M. Berta, D. Bunandar, R. Colbeck, D. Englund, T. Gehring, C. Lupo, C. Ottaviani, J. Pereira, M. Razavi, J. S. Shaari, M. Tomamichel, V. C. Usenko, G. Vallone, P. Villoresi, and P. Wallden, "Advances in quantum cryptography," (2019), arXiv:1906.01645 [quant-ph].
Satellite-based entanglement distribution over 1200 kilometers. Juan Yin, Yuan Cao, Yu-Huai Li, Sheng-Kai Liao, Liang Zhang, Ji-Gang Ren, Wen-Qi Cai, Wei-Yue Liu, Bo Li, Hui Dai, Guang-Bing Li, Qi-Ming Lu, Yun-Hong Gong, Yu Xu, Shuang-Lin Li, Feng-Zhi Li, Ya-Yun Yin, Zi-Qing Jiang, Ming Li, Jian-Jun Jia, Ge Ren, 10.1126/science.aan3211arXiv:1707.01339Science. Dong He, Yi-Lin Zhou, Xiao-Xiang Zhang, Na Wang, Xiang Chang, Zhen-Cai Zhu, Nai-Le Liu, Yu-Ao Chen, Chao-Yang Lu, Rong Shu, Cheng-Zhi Peng, Jian-Yu Wang, and Jian-Wei Pan356quant-phJuan Yin, Yuan Cao, Yu-Huai Li, Sheng-Kai Liao, Liang Zhang, Ji-Gang Ren, Wen-Qi Cai, Wei-Yue Liu, Bo Li, Hui Dai, Guang-Bing Li, Qi-Ming Lu, Yun-Hong Gong, Yu Xu, Shuang-Lin Li, Feng-Zhi Li, Ya-Yun Yin, Zi-Qing Jiang, Ming Li, Jian-Jun Jia, Ge Ren, Dong He, Yi-Lin Zhou, Xiao-Xiang Zhang, Na Wang, Xiang Chang, Zhen-Cai Zhu, Nai-Le Liu, Yu-Ao Chen, Chao-Yang Lu, Rong Shu, Cheng-Zhi Peng, Jian-Yu Wang, and Jian-Wei Pan, "Satellite-based entanglement distribution over 1200 kilometers," Science 356, 1140-1144 (2017), arXiv:1707.01339 [quant-ph].
Turbulence in Zeeman Measurements from Molecular Clouds
Zhuo Cao
Department of Physics
The Chinese University of Hong Kong
Shatin, Hong Kong
Hua-Bai Li
Department of Physics
The Chinese University of Hong Kong
Shatin, Hong Kong
Turbulence in Zeeman Measurements from Molecular Clouds
Magnetic fields (B-fields) play an important role in molecular cloud fragmentation and star formation, but are very difficult to detect. The temporal correlation between the field strength (B) and gas density (n) of an isolated cloud has been suggested as an indication of the dynamical importance of B-fields relative to self-gravity. This temporal B-n relation is, however, unobservable. What can be observed using Zeeman measurements are the "spatial B-n relations" from the current plane of the sky. Nevertheless, the temporal B-n relation argument has still been widely used to interpret observations. Here we present the first numerical test of the legitimacy of this interpretation. From a simulation that can reproduce the observed Zeeman spatial B ∝ n^(2/3) relation, we found that the temporal B-n relations of individual cores bear no resemblance to the spatial B-n relations. This result inspired us to discover that the true mechanism behind the 2/3 index is random turbulence compression instead of symmetrical gravitational contraction.
Introduction
Star formation textbooks have often adduced the example of the collapse of an isolated cloud with uniform B-fields to argue that the B-n relation should follow a power law (Fig. 1). In this model, B is expected to be independent of n in cases where B-field energy is absolutely dominant, as the Lorentz force will limit gas contraction along the field, so that gas cannot compress the field lines to increase B. In the opposite scenario, when self-gravity presides, the contraction will be isotropic and result in B ∝ n^(2/3), as only the contraction in the two dimensions perpendicular to the field lines can enhance B (Fig. 1, left panel). In other words, the index of n varies between 0 and 2/3 depending on the dynamical importance of B-fields (e.g., Li 2021; Crutcher et al. 2010).
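The 2/3 index for the gravity-dominated case follows directly from flux freezing: isotropic contraction of a sphere of radius r gives n ∝ r⁻³ (mass conservation) and B ∝ r⁻² (flux conservation through a cross section), hence B ∝ n^(2/3). A minimal numerical check of this scaling:

```python
import numpy as np

# Isotropic contraction of a flux-frozen sphere of radius r:
# mass conservation gives n ~ r^-3, and conservation of magnetic flux
# through a cross-section gives B ~ r^-2, so B ~ n^(2/3).
r = np.logspace(0, -2, 50)   # contracting radius (arbitrary units)
n = r**-3                    # density from mass conservation
B = r**-2                    # field strength from flux freezing

# Recover the power-law index from the synthetic B-n relation.
index = np.polyfit(np.log10(n), np.log10(B), 1)[0]
print(f"B-n index for isotropic collapse: {index:.3f}")  # -> 0.667
```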
However, it is impossible to monitor the temporal B-n relation in reality. The published cloud B-n relations depend on either sampling clouds with various mean densities (Crutcher et al. 2010;Jiang, Li & Fan 2020) or sampling various densities within a single cloud (e.g., Li et al. 2015;Kandori et al. 2018;Wang et al. 2020). In each case, B and n are surveyed from different spatial positions, and do not involve temporal developments from individual clouds. Moreover, a real cloud core is never really "isolated" and can exchange mass with its envelope due to, for example, turbulence.
We note with concern that in most extant literature, whether observational studies or simulations, the spatial B-n relation indices have been interpreted using the temporal indexes as posited in the "textbook model". Here, we examine with ideal magnetohydrodynamic (MHD) simulations whether it is reasonable to interpret observations with temporal relations. In section 2, we review briefly the simulations, and in the Appendix, they are described in more detail. A summary of the results is presented in section 3. In section 4, we will examine the true reason behind the B-n relationship of a magnetized turbulent cloud.
Simulations
The setup of the cloud simulation is largely adopted from Zhang et al. (2019) and is detailed in the Appendix. Briefly, we simulated an isothermal cloud volume under periodic boundary conditions, starting with uniform density, uniform B-field, super-sonic/sub-Alfvénic turbulence driving, and slightly magnetically supercritical mass (Table I). The gravity was "turned on" after the turbulent energy was saturated. This resulted in dense cores of a few solar masses with n peaking at ~10⁵/cc (Fig. 2). Intriguingly, these cores turned trans- to super-Alfvénic (Table I) due to density-enhanced turbulent energy. This simulation accurately reproduced every major observation related to molecular cloud B-fields, including the "ordered cloud B-fields" (Li et al. 2015; Li et al. 2009; Planck Collaboration 2016), the significantly "deviated core B-fields" (Zhang et al. 2019; Zhang et al. 2014; Hull et al. 2014) (Table I), and, most importantly, the 2/3 index of the spatial B-n relation (Crutcher et al. 2010; Jiang, Li & Fan 2020) (Fig. 2).
We aimed to elucidate the temporal B-n relation of cloud cores. For this purpose, we adjusted the simulation setup in Zhang et al. (2019) by halting the turbulence driving after the energy was saturated and the gravity was turned on (see Appendix A1) (had we not done so, even bounded structures would have been dispersed shortly by the artificial driving, leaving no room for a "temporal" study). An interesting observation is that, although the overall kinetic energy became more sub-Alfvenic after turbulence driving was stopped, the dense cores remained trans-to super-Alfvenic (Table I) due to density enhancement.
Following the process summarized in Appendix A2, we found three "long-lived" cores (Fig. 1). Their temporal B-n relations are plotted with the spatial relation in Fig. 2. Unlike in the textbook model, here the cores undergo dynamic changes in both position and shape. We need to monitor the development closely to follow the positions of the core peaks, and we use a fixed mass (two solar masses) to define the core volumes for measuring the mean B and n. The textbook model (Fig. 1, left panel) also has a fixed mass; the difference here is that the fixed mass may not be composed of the same group of gas, as gas can flow in and out of the boundary containing the fixed mass.
Results
The temporal relation not only varied from core to core, but also significantly differed from the spatial relation. Therefore, the 2/3 index of the spatial relation certainly needs another interpretation than that provided in the textbook temporal model. To see this from another perspective, we note that the spatial B-n relation before the gravity was turned on (red in Fig. 2) already possessed an index of 2/3 for n > 10⁴/cc. This 2/3 index is due to "random" compression of B-field lines by super-Alfvénic turbulence inside cloud cores instead of isotropic compression by self-gravity as in the textbook model. We emphasize that the turbulence energy driven at large scale is sub-Alfvénic, and the 2/3 index only occurs at core densities (n > 10⁴/cc; Fig. 2), where the turbulence energy is amplified by the density enhancement to slightly super-Alfvénic (Table I) (Li 2021; Zhang et al. 2019).
Fig. 1. A comparison between the cloud cores from the idealized "textbook" model and from MHD simulations.
Left: The textbook model of cloud core B-n relations (adopted from Li 2021). The arrows indicate the direction of effective compression. The case with dominant B-field or self-gravity will possess an index of 0 or 2/3, respectively. An index larger than 2/3 can result from non-gravitational forces, e.g., turbulence, stellar wind, or supernova compression. Right: Cores from MHD simulations appear markedly different from the textbook model. Each row presents the data from one core; from top to bottom are Core 1 to 3. Each column is derived from one snapshot of each core; from left to right are starting, middle, and ending snapshots, which are defined in Appendix A2.
Note: M = ⟨v/c_s⟩; M_A = ⟨v/v_Alf⟩.
The last row indicates the angle between the mean magnetic field and the initial magnetic field (along the z-axis). The last two columns are from the entire simulation domain, which are sub-Alfvénic, while the cores are trans-to super-Alfvénic.
Fig. 2. Density distributions and B-n relations. Left:
The black lines represent the density PDFs of the whole simulated domain immediately before the gravity was turned on (dashed), and 2 Myrs later at the end of the simulation (dot-dashed). The trace of the clumps (red) and cores (blue) is detailed in Appendix A2, and the PDFs are derived from their last snapshots. N is the number of pixels. Right: The red dots represent the spatial B-n relation immediately before the gravity was turned on; the blue dots are derived from the last snapshot, which is 2 Myrs after the red dots. The black lines represent the temporal B-n relations of the cores, starting from "•" and ending at "×". There is a time lapse of 0.02 Myrs between each two adjacent symbols. For the temporal B-n, the typical error bar (standard deviation) of number density is 1e2, and that of the magnetic field is 1e-2. N p is the number of pixels within a density bin.
Furthermore, another type of spatial B-n relation, based on the B and n profiles within individual clouds/cores, has also been observed, and a wide range of indices between 2/5 and 2/3 has been reported (e.g., Li et al. 2015; Kandori et al. 2018; Wang et al. 2020). In Fig. 3, this type of B-n relation is shown for the three cores in our simulation. It appears that the indices are closer to the lower end of the observed range. It is worth noting, however, that Fig. 3 is produced by binning the simulation pixels by n and obtaining the mean B from each bin. Observers, on the other hand, must "somehow" estimate the mean n from each line of sight, which usually covers a broad range of n. Moreover, observers usually use the DCF method when estimating B (e.g., Chen et al. 2022; Liu, Qiu & Zhang 2022), which is also not taken into account when plotting Fig. 3. It is beyond the scope of this article to make a detailed comparison between Fig. 3 and the observations. The point that we wish to make is that the spatial B-n relations obtained from individual cores (Fig. 3) do not reflect the temporal B-n relations (Fig. 2, right panel) even without the observational "lens", nor should they be interpreted using the textbook temporal model. Fig. 3. The spatial B-n relations from individual cores. The 'start' and 'end' represent the starting and ending snapshots of each core, respectively (Appendix A2). The dashed straight lines give a reference slope of 0.3. The typical error bar (standard deviation) of density is 1e2, and that of the magnetic field is 1e-2.
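The binning procedure described in the text (group simulation pixels by n, take the mean B per bin, then fit a power law) can be sketched as follows. The data below are synthetic, with a built-in 2/3 slope plus log-normal scatter, just to illustrate the measurement; they are not the simulation data.

```python
import numpy as np

# Synthetic pixel sample: densities spanning 1e2-1e5 cm^-3 with a
# built-in B ~ n^(2/3) relation and 0.1 dex of log-normal scatter.
rng = np.random.default_rng(2)
n = 10**rng.uniform(2, 5, 10000)                 # pixel densities
B = n**(2/3) * 10**rng.normal(0, 0.1, n.size)    # scatter around 2/3

# Bin pixels by n, take mean log-B per bin, then fit the slope.
bins = np.logspace(2, 5, 16)
idx = np.digitize(n, bins)
logn = np.array([np.log10(n[idx == i]).mean() for i in range(1, 16)])
logB = np.array([np.log10(B[idx == i]).mean() for i in range(1, 16)])

slope = np.polyfit(logn, logB, 1)[0]
print(f"fitted spatial B-n index: {slope:.2f}")   # -> ~0.67
```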
Discussion
In the following, we take a closer look at the three cores to better understand their temporal B-n relations. They are all among the highest densities in the simulation, as illustrated by the density distributions in Fig. 2. For Core 1, the density maps in Fig. 1 show a trend of decreasing density (the red-color region decreases as time passes), which is also indicated by the temporal plots (Figs. 2 and 3). Conversely, the density of Core 3 increased with time, and the B-n relation index was negative at the beginning but became 2/3 at the end. Core 2 started with a shallow B-n relation and the B and n "bounced" back in the later stage, which appears as a turning point in the B-n plot. The presence of turbulence makes the expectation of a constant B-n index (Fig. 1) unrealistic.
Force interactions
The "textbook" argument for the B-n relation indices solely depends on gravity to concentrate gas, and the B-field can only play a role against gravity (Fig. 1). In reality, however, thermal pressure should help support the cloud, and turbulence can either disperse or enhance the densities of gas and/or field-lines. As a consequence, even the temporal B-n relation should not be interpreted by the textbook model before a careful examination of the model assumptions. Next, we check whether the gas flows, which shape the B-n relation, are converged only by self-gravity and hindered by B-field forces.
Only forces perpendicular to the B-field, f⊥, can regulate the field strength. Positive/negative divergence of f⊥ can accelerate the gas to disperse/compress field lines, thereby decreasing/increasing the field strength. To determine what force is mainly compressing or dispersing core fields, we can integrate the divergence of f⊥ within the core; the results are shown in Fig. 4, where, for simplicity, f⊥ is defined by the mean-field direction of a core.

Fig. 4. How the forces regulate B-field strength. This figure illustrates the volume integration of the divergences of the force f⊥^i in the volume above the mean density of a core (details of the calculation can be found in Appendix A3), where "i" can be G, Th, B, or Adv, corresponding to the gravitational, thermal, magnetic, and advection force, respectively. "⊥" indicates that only the component perpendicular to the mean field is considered here, as only the perpendicular component is capable of affecting the strength of the field. Note that the negative of the f⊥^Th divergence is plotted to conserve space, as most of the divergences of the other forces are negative. A positive/negative divergence of f⊥^i means that, in an average sense, the force was dispersing/compressing the field lines.
As expected, the gravitational force always compressed the field (with a negative f⊥^G divergence in Fig. 4), whereas the thermal pressure constantly opposed gravity with a positive divergence (note that the negative of the f⊥^Th divergence is plotted in Fig. 4 to save space). What should be noticed is that, most of the time, the magnitude of the f⊥^Th divergence is significantly greater than that of f⊥^G. In other words, it is gravity that needs assistance against thermal pressure in order to hold a core together. This differs greatly from the textbook model (Fig. 1), in which the thermal force is assumed to be negligible. Indeed, the divergences of the advection and B-field forces (f⊥^Adv and f⊥^B in Fig. 4, respectively) are negative for significant periods of time, when they confined the cores rather than supported the cores against gravity. It is worth noting that the simulation is isothermal, and the turbulence remains supersonic in the cores (Table I). It is the density gradients set up by turbulent shocks that are responsible for f⊥^Th. This is in agreement with the scenario that the 2/3 index is also due to the random turbulence compression. Only the densest Core 3 survived until violating the Truelove criterion (Truelove et al. 1997), while the other two cores are dynamic transient structures. B-fields play more of a stabilizing role in all cases. In Fig. 4, the divergences of −f⊥^Th and f⊥^B (the two green lines) share the same trend, i.e., when f⊥^Th became more dispersive, f⊥^B turned more compressive, and vice versa. In addition, B-fields also reacted against the advection force, which can be observed from the fact that the higher-frequency variations of the divergences of f⊥^Adv and f⊥^B are usually complementary. This is especially apparent for Core 3.
Virial parameters
In assessing the gravitational boundedness of molecular clouds, observers estimate the Virial parameter (Bertoldi & McKee 1992), α = 2E_k/|E_G| (two times the turbulence-to-gravitational energy ratio; see Appendix A2 for detailed definitions of E_k and E_G), which usually has a negative power-law relationship with the core mass (e.g., Keown et al. 2017; Kirk et al. 2017; Kerr et al. 2019). Cores 1-3 have α's close to one, which are consistent with observations (Figure 5). Note that α ignores thermal and magnetic energy, so α < 1 does not guarantee a contraction. Our simulation also replicated other observations, such as magnetic criticality = 1-2 (Table I; Li et al. 2013; Li et al. 2015; Myers & Basu 2021) and core density profiles n ∝ r^(-1.46±0.12) (Zhang et al. 2019; Pirogov 2009; Kurono et al. 2013), leading to strong pressure gradients (Fig. 4). With marginal Virial parameters, magnetic criticality, and strong density gradients, the cores are on the verge of contraction and expansion, as demonstrated by Cores 1 (expanding), 2 (bouncing), and 3 (contracting). This is another perspective from which to avoid using the ever-collapsing textbook model (Fig. 1, left) to interpret observations.
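For a uniform sphere the observational Virial parameter of Bertoldi & McKee (1992) reduces to α = 5σ²R/(GM), equivalent to twice the kinetic-to-gravitational energy ratio. A small worked example, using the same G as the simulation; σ and R below are illustrative assumed values, not numbers from the paper:

```python
# Observational virial parameter alpha = 5 sigma^2 R / (G M)
# (Bertoldi & McKee 1992). sigma and R are illustrative assumptions;
# M is the fixed two-solar-mass core mass used in the text.
G = 4.3e-3      # pc (km/s)^2 / M_sun, as in the simulation setup
sigma = 0.3     # 1D velocity dispersion [km/s] (assumed)
R = 0.1         # core radius [pc] (assumed)
M = 2.0         # core mass [M_sun]

alpha = 5 * sigma**2 * R / (G * M)
print(f"alpha = {alpha:.2f}")   # -> 5.23 for these illustrative numbers
```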
Magnetic field diffusion
Finally, we note that turbulence may induce both ambipolar (Li & Houde 2008; Tang, Li & Lee 2018) and/or reconnection (Lazarian et al. 2012) diffusion, which are not simulated here but can potentially modify B-n relations. This study, however, is not intended to predict the exact index of B-n relations, but rather aims to explore the rationale behind accessing temporal B-n relations via Zeeman measurements (spatial relations). Field diffusion should only flatten both the temporal and spatial B-n relations to some extent (see, e.g., Tsukamoto et al. 2015) instead of resolving their significant discrepancy (Figure 2).
Conclusion
Since magnetized turbulent clouds exhibit significant discrepancies between their spatial and temporal B-n relations, the 2/3 index inferred from Zeeman measurements (Crutcher et al. 2010; Jiang, Li & Fan 2020) should not be interpreted as a sign of highly magnetically supercritical cores. The puzzle (Pattle et al. 2023) of the coexistence of a 2/3 index and ordered B-fields, which is a sign of magnetic sub- to trans-criticality, has thus now been solved. In fact, recent observations of magnetic field-column density relations suggest that cloud cores are only slightly supercritical (Li et al. 2013; Myers & Basu 2021), which is also in accordance with the simulation presented here (Table I). As indicated in Table I, within cloud cores (n > 10⁴/cc), turbulence can be super-Alfvénic and randomly compress B-fields, thus resulting in the 2/3 index. We learned from "Dynamic Cores in Hydrostatic Disguise" (Ballesteros-Paredes, Klessen & Vázquez-Semadeni 2003) two decades ago that a Bonnor-Ebert-like density profile does not necessarily indicate structural stability. Our study suggests that, in the past decade, turbulence likely misled us again by leaving a "2/3" footprint in the B-n relation.
A1. Simulation Setup
We simulated the interior of a molecular cloud using the ideal magnetohydrodynamic (MHD) code ZEUS-MP (Hayes et al. 2006; Otto, Ji & Li 2017). Assuming an isothermal equation of state, the code solves the following set of equations:

∂ρ/∂t + ∇⋅(ρv) = 0,
ρ[∂v/∂t + (v⋅∇)v] = (1/4π)(∇×B)×B − ∇p − ρ∇Φ,
∂B/∂t = ∇×(v×B),
p = c_s²ρ,
∇⋅B = 0,
∇²Φ = 4πGρ,

where ρ and p are mass density and thermal pressure, respectively; v and B are the velocity and magnetic field vectors, respectively; the constant c_s ∼ 0.2 km/s is the sound speed assuming a temperature of 10 K; and G = 4.3 × 10⁻³ pc⋅M_⊙⁻¹⋅(km/s)² is the gravitational constant.
The initial B-field is 14.4 μG, uniform along the z-direction. The ratio between thermal pressure and magnetic pressure is β = 0.05. The uniform initial density is ρ₀ = 1.2 × 10⁻²¹ g cm⁻³, or n = 300 H₂/cc, assuming a mean molecular weight of 2.36. The size of the simulation domain is (4.8 pc)³, resolved by 960³ cells, and a periodic boundary condition is applied. Correspondingly, the mass of the whole domain is twice the magnetic critical mass (M_cr = Φ/(2π√G), where Φ is the magnetic flux).
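The quoted setup values can be cross-checked against each other with a short back-of-the-envelope script (the atomic mass unit and the unit conversions below are standard constants, not from the paper; small mismatches reflect rounding in the quoted values):

```python
# Consistency checks of the quoted initial conditions.
m_H = 1.6605e-24        # atomic mass unit [g] (standard constant)
mu = 2.36               # mean molecular weight (from the text)
n = 300.0               # initial number density [cm^-3]
rho = n * mu * m_H
print(f"rho = {rho:.2e} g/cm^3")    # -> 1.18e-21, i.e. ~1.2e-21

# Plasma beta from B = 14.4 uG and c_s = 0.2 km/s; close to the quoted
# 0.05, with the difference attributable to rounding.
B = 14.4e-6             # [G]
c_s = 0.2e5             # [cm/s]
beta = (rho * c_s**2) / (B**2 / (8 * 3.141592653589793))
print(f"beta = {beta:.2f}")         # -> ~0.06
```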
The simulation was separated into two phases. During the first phase, turbulence was driven without self-gravity. When the turbulent energy was saturated (i.e., when the turbulent energy spectrum is stable), the Mach number of the turbulence (M) was 5.6 (Table I) and the rms Alfvén Mach number (M_A) was 0.6. Self-gravity was turned on in the second phase, while the turbulence driving was halted. The simulation was terminated after gravity had been turned on for 2 Myrs, before any core could violate the Truelove criterion (Truelove et al. 1997).
A detailed description of the turbulence driving is given in Otto, Ji & Li (2017). Briefly, we set up a vector field A(k) in Fourier space with a zero mean and a variance ∝ k⁶ exp(−8k/k_pk). Therefore, the power spectrum peaks sharply at k_pk, which we set as 2, corresponding to a driving scale of 2.4 pc. We then apply an inverse Fourier transform to A(k) with a random phase to obtain the driving velocity field v(x). Details of the saturated velocity spectrum are given in Zhang et al. (2019) (see their Fig. 3). In this simulation, the turbulence driving is purely solenoidal.
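The spectral driving recipe can be sketched on a small grid. This is a toy illustration of the general idea (random-phase modes with an amplitude spectrum peaking near k_pk), not the production code; the solenoidal projection used in the actual driving is omitted here, and taking the real part of the inverse transform is a shortcut for enforcing a real field.

```python
import numpy as np

# Minimal sketch of Fourier-space turbulence driving on a small grid.
N, k_pk = 32, 2.0
rng = np.random.default_rng(0)

k = np.fft.fftfreq(N) * N
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
kmag = np.sqrt(kx**2 + ky**2 + kz**2)

# Amplitude peaking sharply near k_pk, with a random phase per mode.
amp = np.where(kmag > 0, kmag**3 * np.exp(-4.0 * kmag / k_pk), 0.0)
phase = np.exp(2j * np.pi * rng.random((N, N, N)))

# One component of the driving velocity field (real part as shortcut).
v = np.real(np.fft.ifftn(amp * phase))
print(v.shape)   # -> (32, 32, 32)
```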
A2. Core Finder
As stated in the main text, our goal is to study the temporal B-n relation of individual molecular cloud cores. This requires the data of a core at different evolution stages. Defining a core in a single simulation snapshot is simple. The difficulty lies in tracing this core in other snapshots due to transposition and deformation. Here, we describe how to trace a core across snapshots.
Prior to core extraction, we first define "clumps", as dense clumps are the birth beds of cores. We define clumps using potential-field contour surfaces, such that, within a clump C, |E_G| > (E_k + E_B + E_th)/2, which is a relaxed gravitational-boundedness condition. In the inequality, E_k = ∫_C ρv²/2 dV indicates the turbulent kinetic energy. The energies E_B = ∫_C B²/(8π) dV and E_th = (3/2)∫_C p dV are the magnetic-field and thermal energy, respectively. Finally, E_G = ∫_C ρΦ dV is the potential energy, where the potential field Φ is due to all of the mass under the periodic boundary condition. We assume the global maximum of the potential as the zero-potential point.
In accordance with the above criterion, we scanned through potential contour surfaces in each snapshot to look for clumps. From high to low, we scanned through 40 potential values evenly distributed between zero and the minimum potential. For each potential value, the connected component labelling (CCL) method (Silversmith 2021) was applied to index individual volumes confined by the contour surfaces. The clump criterion was applied on each of these volumes, and clumps were excluded from further examination with lower potential values. Two clumps in adjacent snapshots are considered to be temporally connected if they spatially overlap. The algorithm is summarized in Fig. 6 below. Fig. 6 The clump-finding algorithm.
(1) From zero to the minimum potential, 1/40 of this potential range is reduced in each step. (2) Defined by the CCL method.
Several sets of temporally connected clumps are found. Only clumps persisting for more than 1 Myrs are retained for further analysis, because only long-lived clumps can host long-lived cores for temporal study. Three clumps meet the requirements.
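The contour scan described above can be sketched as follows. This is a toy illustration: the paper uses the cc3d package, while `scipy.ndimage.label` serves here as an equivalent connected-component labeller, and the energy-based clump criterion from the text is only indicated, not implemented.

```python
import numpy as np
from scipy import ndimage

# Sketch of the clump scan: step through 40 potential contour levels
# from high to low and label the connected regions below each level.
def scan_clumps(phi, n_levels=40):
    """phi: 3D gravitational potential, global maximum shifted to zero."""
    levels = np.linspace(phi.max(), phi.min(), n_levels + 1)[1:]
    clumps = []
    for level in levels:
        labels, n_found = ndimage.label(phi < level)
        for idx in range(1, n_found + 1):
            mask = labels == idx
            # The paper's relaxed gravitational-boundedness criterion
            # would be evaluated on `mask` here; omitted in this sketch.
            clumps.append(mask)
    return clumps

# Toy potential well: a single Gaussian dip on a 16^3 grid.
phi = -np.exp(-np.sum((np.indices((16,) * 3) - 8) ** 2, axis=0) / 8.0)
print(len(scan_clumps(phi)) > 0)   # -> True
```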
We define a core with a fixed mass as follows. The mass of a core is typically 0.1 to 10 solar masses (Mac Low & Klessen 2004); we use two solar masses in this study. For each clump, the snapshot with the highest peak density is identified, and the core is defined by the density contour surface containing this peak and two solar masses. The algorithm to identify the same core in earlier and later snapshots is detailed in Fig. 7. Although a core can change its position and deform its shape, the difference between adjacent snapshots is expected to be small as long as the time difference is relatively short. For this purpose, we took frequent snapshots every 0.02 Myrs. The algorithm in Fig. 7 requires the center of mass (com) of a core to always fall within the effective core scale of the projected com position defined by the position and velocity of the com in the previous snapshot; the effective core scale is defined as two times r_eff = (3V/4π)^(1/3), where V is the core volume in the previous snapshot. Fig. 7 The core-finding algorithm.
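The fixed-mass core definition (lower a density threshold from the peak until the enclosed material reaches the target mass) can be sketched as below. The grid, density profile, and cell mass are toy values for illustration, and the global threshold here ignores connectivity to the peak for brevity.

```python
import numpy as np

# Sketch of the fixed-mass core definition: lower a density threshold
# from the peak value until the thresholded region reaches the target
# mass (two solar masses in the paper).
def core_mask(n, cell_mass, target_mass, n_steps=200):
    """n: 3D density; cell_mass: mass per cell per unit density."""
    peak = np.unravel_index(np.argmax(n), n.shape)
    for thresh in np.linspace(n.max(), n.min(), n_steps):
        mask = n >= thresh
        if mask[peak] and (n[mask] * cell_mass).sum() >= target_mass:
            return mask
    return n >= n.min()

grid = np.indices((32,) * 3) - 16
n = 1e4 * np.exp(-np.sum(grid**2, axis=0) / 40.0)   # toy core profile
mask = core_mask(n, cell_mass=1e-4, target_mass=2.0)
print(mask.sum() > 0)   # -> True
```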
A3. Integrated Divergence
A major focus of our analysis is the evolution of the B-field strength, which is driven by the force components perpendicular to the local mean field B, f⊥ = f − (f⋅B̂)B̂. To determine whether, in general, the field lines were compressed or dispersed by f⊥^i, we integrated the divergence within V_c, the core volume above the mean density:

∫_{V_c} ∇⋅f⊥^i dV,
where i can be B, Adv, Th, or G, standing for the Lorentz, advection, thermal, and gravitational force, respectively. More precisely, they are formulated as follows:

f_B = (∇×B)×B/(4π), f_Adv = −ρ(v⋅∇)v, f_Th = −∇p, f_G = −ρ∇Φ.

Fig. 5. A comparison of the Virial parameters (α) from the simulation and observation. Ranging from 0.2 to 2, the α's of Cores 1 to 3 are plotted against time in the left three panels. With two solar masses, α = 0.2 to 2 are plotted on top of the observed α-mass relation from Kerr et al. (2019; right panel).
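The integrated-divergence diagnostic described in this appendix (project a force onto the plane perpendicular to the mean field, then integrate its divergence over the core volume) can be sketched numerically. All arrays below are toy data, not the simulation output, and a simple uniform-grid finite difference stands in for the actual discretization.

```python
import numpy as np

# Toy implementation of the perpendicular-force divergence integral.
def integrated_perp_divergence(f, b_mean, core_mask, dx=1.0):
    """f: force field, shape (3, Nx, Ny, Nz); b_mean: mean-field vector."""
    b_hat = b_mean / np.linalg.norm(b_mean)
    f_par = np.tensordot(b_hat, f, axes=1)            # parallel component
    f_perp = f - b_hat[:, None, None, None] * f_par   # remove parallel part
    div = sum(np.gradient(f_perp[i], dx, axis=i) for i in range(3))
    return div[core_mask].sum() * dx**3               # volume integral

f = np.random.default_rng(1).normal(size=(3, 16, 16, 16))
mask = np.ones((16, 16, 16), dtype=bool)
print(np.isfinite(integrated_perp_divergence(f, np.array([0., 0., 1.]), mask)))
# -> True
```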
Table I

                                     Core 1   Core 2   Core 3   Cloud (end of        Cloud (after
                                                                turbulence driving)  core formation)
M, Mach number                       4.4      1.1      2.4      5.6                  3.8
M_A, Alfvén Mach number              3.1      0.9      1.8      0.60                 0.44
Magnetic criticality                 1.5      2.2      2.0      2.0                  2.0
B deviation from initial condition   10°      12°      48°      0°                   0°
Ballesteros-Paredes, J., Klessen, R., & Vázquez-Semadeni, E. 2003, "Dynamic Cores in Hydrostatic Disguise," ApJ, 592, 188
Bertoldi, F., & McKee, C. F. 1992, ApJ, 395, 140
Chen, C.-Y., Li, Z.-Y., Mazzei, R. R., Park, J., Fissel, L. M., Chen, M. C.-Y., Klein, R. I., & Li, P. S. 2022, "The Davis-Chandrasekhar-Fermi method revisited," MNRAS, 514, 1575
Crutcher, R. M., et al. 2010, "Magnetic Fields in Interstellar Clouds from Zeeman Observations: Inference of Total Field Strengths by Bayesian Analysis," ApJ, 725, 466
Hayes, J. C., Norman, M. L., Fiedler, R. A., et al. 2006, "Simulating Radiating and Magnetized Flows in Multiple Dimensions with ZEUS-MP," ApJS, 165, 188
Hull, C. L. H., et al. 2014, "TADPOL: A 1.3 mm Survey of Dust Polarization in Star-forming Cores and Regions," ApJS, 213, 13
Jiang, H., Li, H.-b., & Fan, X. 2020, "Bayesian Revisit of the Relationship between the Total Field Strength and the Volume Density of Interstellar Clouds," ApJ, 890, 153
Kandori, R., et al. 2018, "Distortion of Magnetic Fields in a Starless Core. IV. Magnetic Field Scaling on Density and Mass-to-flux Ratio Distribution in FeSt 1-457," ApJ, 865, 121
Keown, J., Di Francesco, J., Kirk, H., et al. 2017, ApJ, 850, 3
Kerr, R., Kirk, H., Di Francesco, J., et al. 2019, ApJ, 874, 147
Kirk, H., Friesen, R. K., Pineda, J. E., et al. 2017, ApJ, 846, 144
Kurono, Y., Saito, M., Kamazaki, T., Morita, K.-I., & Kawabe, R. 2013, ApJ, 765, 85
Li, H.-B. 2021, "Magnetic Fields in Molecular Clouds-Observation and Interpretation," Galaxies, 9, 41
Li, H.-b., et al. 2009, "Anchoring Magnetic Field in Turbulent Molecular Clouds," ApJ, 704, 891
Li, H.-b., et al. 2013, "The link between magnetic fields and filamentary clouds: bimodal cloud orientations in the Gould Belt," MNRAS, 436, 3707
Li, H.-B., et al. 2015, "Self-similar fragmentation regulated by magnetic fields in a region forming massive stars," Nature, 520, 518
Liu, J., Qiu, K., & Zhang, Q. 2022, "Magnetic Fields in Star Formation: A Complete Compilation of All the DCF Estimations," ApJ, 925, 30
Mac Low, M.-M., & Klessen, R. S. 2004, "Control of star formation by supersonic turbulence," Rev. Mod. Phys., 76, 125
Myers, P. C., & Basu, S. 2021, "Magnetic Properties of Star-forming Dense Cores," ApJ, 917, 35
Otto, F., Ji, W., & Li, H.-b. 2017, "Velocity anisotropy in self-gravitating molecular clouds. I. Simulation," ApJ, 836, 95
Pattle, K., Fissel, L., Tahani, M., Liu, T., & Ntormousi, E. 2023, "Magnetic fields in star formation: from clouds to cores," in Protostars and Planets VII, eds. S. Inutsuka, Y. Aikawa, T. Muto, K. Tomida, & M. Tamura
Pirogov, L. E. 2009, "Density profiles in molecular cloud cores associated with high-mass star-forming regions," Astron. Rep., 53, 1127
Planck Collaboration 2016, "Planck intermediate results. XXXV. Probing the role of the magnetic field in the formation of structure in molecular clouds," A&A, 586, A138
Silversmith, W. 2021, "cc3d: Connected components on multilabel 3D & 2D images (v3.2.1)," Zenodo, https://doi.org/10.5281/zenodo.5719536
Truelove, J. K., Klein, R. I., McKee, C. F., et al. 1997, "The Jeans Condition: A New Constraint on Spatial Resolution in Simulations of Isothermal Self-Gravitational Hydrodynamics," ApJL, 489, L179
Tsukamoto, Y., et al. 2015, "Effects of Ohmic and ambipolar diffusion on formation and evolution of first cores, protostars, and circumstellar discs," MNRAS, 452, 278
Wang, J.-W., et al. 2020, "Multiwavelength Polarimetry of the Filamentary Cloud IC 5146. II. Magnetic Field Structures," ApJ, 888, 13
Zhang, Q., et al. 2014, "Magnetic Fields and Massive Star Formation," ApJ, 792, 116
Zhang, Y., et al. 2019, "Anchoring Magnetic Fields in Turbulent Molecular Clouds. II. From 0.1 to 0.01 pc," ApJ, 871, 98
Gravitational wave constraints on Einstein-aether theory with LIGO/Virgo data
Kristen Schumacher
Department of Physics and Illinois Center for Advanced Studies of the Universe
University of Illinois at Urbana-Champaign
61801UrbanaILUSA
Scott Ellis Perkins
Department of Physics and Illinois Center for Advanced Studies of the Universe
University of Illinois at Urbana-Champaign
61801UrbanaILUSA
Ashley Shaw
Department of Physics and Illinois Center for Advanced Studies of the Universe
University of Illinois at Urbana-Champaign
61801UrbanaILUSA
Kent Yagi
Department of Physics
University of Virginia
22904CharlottesvilleVirginiaUSA
Nicolás Yunes
Department of Physics and Illinois Center for Advanced Studies of the Universe
University of Illinois at Urbana-Champaign
61801UrbanaILUSA
(Dated: April 13, 2023)
Lorentz symmetry is a fundamental property of Einstein's theory of general relativity that one may wish to test with gravitational wave observations. Einstein-aether theory is a model that introduces Lorentz-symmetry breaking in the gravitational sector through an aether vector field, while still leading to second-order field equations. This well-posed theory passes particle physics constraints because it modifies directly only the gravitational sector, yet it predicts deviations in the inspiral and coalescence of compact objects. We here, for the first time, put this theory to the test by comparing its gravitational wave predictions directly against LIGO/Virgo gravitational wave data. We first construct a waveform model for Einstein-aether theory, EA_IMRPhenomD_NRT, through modifications of the general relativity IMRPhenomD_NRTidalv2 model (used by the LIGO/Virgo collaboration). This model constructs a response function that not only contains the transverse-traceless polarization, but also additional Einstein-aether (scalar and vectorial) polarizations simultaneously. We then use the many current constraints on the theory to construct non-trivial priors for the Einstein-aether coupling constants. After much testing of the waveform model, we finally conduct parameter estimation studies on two gravitational wave events: GW170817 and GW190425. We find that these data are not sufficiently informative to place constraints on the theory that are stronger than current bounds from binary pulsar, solar system and cosmological observations. This is because, although Einstein-aether modifications include additional polarizations and have been computed beyond leading post-Newtonian order, these modifications are dominated by (already-constrained) dipole effects or "Newtonian" corrections to the orbital energy (highly correlated with the chirp mass).
These difficulties make it unclear whether future gravitational wave observations will be able to improve on current constraints on Einstein-aether theory.
I. INTRODUCTION
Gravitational waves (GWs) are beginning to allow for unprecedented ways to probe the gravitational interaction in regimes in which gravity is strong and highly dynamical. Since the first detection in 2015, there have so far been 90 GW events detected by the LIGO/Virgo Collaboration [1]. These waves originate from compact binary mergers and allow for the study of the astrophysical objects that comprise them and for tests of fundamental physics, such as tests of Einstein's theory of general relativity (GR) [2]. Though this theory has passed every test encountered to date, there are still reasons to believe that it might need to be extended [3,4]. Thus, it is imperative that GR be tested in previously unexplored regimes.
One property of gravity that is especially interesting to compare against experiment is Lorentz invariance. This property is a general principle that states that experiments are independent of the reference frame they are performed in. Though Lorentz violation has already been strongly constrained for matter interactions, violations that couple only to the gravitational sector have not yet been stringently constrained [3,5]. Furthermore, there are theoretical reasons to believe that Lorentz invariance may not hold at all energies, and that Lorentz violation may be induced by quantum gravity models [5]. All of this provides a good motivation to search for and/or constrain Lorentz violation in the gravitational sector, since any evidence of a violation would be clear evidence of new physics.
The simplest theory that violates Lorentz symmetry by introducing a single vector field while still leading to second-order equations of motion is Einstein-aether theory [6]. In this theory, spacetime is filled with a congruence of timelike curves, the four-velocity of the aether field [6]. This congruence establishes a preferred direction, implying that there is a locally determined state of rest and breaking local Lorentz invariance [7]. Modifications to the gravitational action in this theory are regulated by four dimensionless coupling constants, which determine the strength of the coupling of the aether field four-velocity to the action. Hence, constraining these coupling constants constrains the theory.
Einstein-aether theory has already been constrained with a plethora of astrophysical observations. The most stringent of these constraints comes from the simultaneous observation of GWs and a gamma-ray burst from the 2017 binary neutron star (BNS) merger. This event placed tight observational bounds on the speed of the tensor polarization of GWs, immediately restricting one of the coupling constants of Einstein-aether theory to be on the order of O(10^{-15}) [8]. The lack of observational evidence for gravitational "Cherenkov-type" radiation further places tight constraints on the speed of the GWs in Einstein-aether theory, which can be related back to the coupling constants [9]. Meanwhile, cosmological observations of the abundance of primordial helium restrict the amount by which the aether field can rescale the effective value of Newton's constant that appears in the Friedmann equation [10]. Solar system constraints on the preferred-frame parameterized post-Newtonian (PN) parameters, due to lunar laser ranging experiments and observations of the solar spin axis, can be translated into constraints on the Einstein-aether coupling constants [11,12]. Finally, in recent work, observations of the damping of the period of binary pulsar and triple systems have further constrained Einstein-aether theory [13]. However, even after combining all of these constraints, there are still large regions of parameter space that are not yet stringently constrained.
The inspiral and merger of compact objects, as observed with GWs, provide a new laboratory in which we may place new constraints on Einstein-aether theory, considering the many modifications to GWs in this theory. For instance, modifications to the amplitude and phase of quadrupole radiation in this theory can be searched for in GW data [14,15]. Note that the quadrupole correction is partially degenerate with the chirp mass of the binary system, since this also enters at leading order in a post-Newtonian (PN) expansion of the phase. Similarly, the emission of dipole radiation due to the propagation of vector and scalar modes is another signature of Einstein-aether theory (though this particular signature is already well constrained by binary pulsar observations) [14]. Finally, the mass of strongly gravitating objects is affected by the aether field, in a way described by the "sensitivity" of objects in this theory [13,17]. This sensitivity enters the Einstein-aether prediction of the gravitational waveform and it depends on the coupling constants of the theory and the binding energy of the compact objects generating the GWs. If these signatures of Einstein-aether theory are not observed in GW data, the coupling constants of the theory can be constrained to smaller and smaller values.
In this paper, we compare the predictions of Einstein-aether theory for the GWs emitted in the inspiral of neutron stars (NSs) to all LIGO/Virgo data taken during the O1, O2 and O3 observing campaigns to try to place constraints on the coupling constants of the theory. To execute this analysis, the predictions of Einstein-aether theory must first be encoded into a new waveform template that can be directly compared with data. Building off of the IMRPhenomD and IMRPhenomD_NRTidalv2 waveform templates, we construct a new waveform template we call EA_IMRPhenomD_NRT. We first update the code we are using, GW Analysis Tools [18], to be consistent with LALSuite's IMRPhenomD_NRTidalv2 waveform template in GR. From there, we add the binary Love relations to the IMRPhenomD_NRTidalv2 model so that we can search for the symmetric combination of tidal deformabilities instead of searching for each tidal deformability individually [19][20][21]. Next, we include the C-Love relations into the model to obtain the compactness of each NS, given the tidal deformability, and thus be able to compute the binding energies and the sensitivities in Einstein-aether theory [13,[21][22][23][24]. Finally, we add the Einstein-aether corrections to the waveform model to 1PN order, as computed in [15], which now explicitly depend only on the coupling constants, the chirp mass, the symmetric mass ratio, the inclination angle, and the tidal deformabilities, leading to the EA_IMRPhenomD_NRT model.
Once constructed, we use the new EA_IMRPhenomD_NRT waveform model to conduct parameter estimation studies with Bayesian inference on the public LIGO/Virgo data. In parameter estimation studies, previous knowledge about the sampling parameters is encoded in their prior and used to determine the correct sampling region of parameter space. Therefore, we begin by constructing a prior for the Einstein-aether coupling constants, describing in detail how each of the current constraints on the theory affects the complicated shape of this prior. We further use this prior to motivate our choice of a particular parameterization of the coupling constants. We then test the capabilities of our waveform model by using it to recover synthetic (injected) data for GWs as predicted both in GR and in Einstein-aether theory. Finally, we conduct parameter estimation studies on the two BNS mergers so far observed with LIGO: GW170817 and GW190425.
We find that current LIGO/Virgo data is not sufficiently informative to place constraints on Einstein-aether theory that are stronger than other stringent observational bounds from solar system [11,12] and binary pulsar [13] observations. That is, marginalized posteriors on the Einstein-aether coupling parameters from gravitational wave observations are statistically indistinguishable from their priors, even when the latter are enlarged beyond what is allowed by current observational bounds. This is because Einstein-aether modifications are dominated by dipole radiation (which enter at -1PN relative order in the waveform) and corrections to the binary's orbital energy (which enter at 0PN relative order in the waveform). Dipole effects are already very well constrained by binary pulsar observations, because these binaries are sufficiently widely separated that dipole modifications can become large unless suppressed by the coupling constants. Leading PN order corrections to the orbital energy are highly correlated with the chirp mass, therefore diluting any constraints.
Even though constraints placed with GWs cannot yet surpass those from other experiments, it is possible that future observations with more advanced detectors will be able to better constrain Einstein-aether theory. For instance, previous work predicted that third-generation and space-based GW detectors may place comparable constraints, or improve them by a factor of 2 [14]. This work, however, was carried out in a now ruled-out region of parameter space, before the coincident GW and electromagnetic observation of GW170817, which bounded the speed of GWs to be essentially identical to that predicted in GR. Additionally, if the sensitivities of black holes (BHs) in Einstein-aether theory were calculated, studies with BH binaries or mixed NS/BH binaries could also be considered. Even without these two specific advancements, constraints from GWs will only improve over time as more BNS mergers are observed and constraints are stacked. Thus, our current work serves as an important foundation for how such parameter estimation studies with GWs in Einstein-aether theory can be performed in the fourth and fifth observing runs of the LIGO/Virgo collaboration, and in the future with third-generation detectors. Only by carrying out such studies will we be able to determine whether future observations can place competitive bounds on Einstein-aether theory relative to binary pulsar and solar system constraints.
The remainder of this paper is organized as follows. In Sec. II we give a brief introduction to Einstein-aether theory, describing the coupling constants of the theory and the sensitivities of strongly gravitating objects. Here we justify why these studies can currently only be performed with BNS inspirals. Section III mathematically describes GWs in Einstein-aether theory, presenting the Fourier transform of the response function for an L-shaped GW detector, so that we can understand what modifications and extensions had to be made to current waveform template models in Sec. IV to create and test the new Einstein-aether waveform template, EA_IMRPhenomD_NRT.
To determine what priors to use for parameter estimation, all current constraints on Einstein-aether theory are collected in Sec. V. Once we have a prior, the waveform template is tested on injected data in Sec. VI and finally used on GW data from BNS inspirals in Sec. VII. Section VIII discusses our results and potential future work. There are four appendices included to facilitate reproducibility. In Appendix A, we describe in detail the modifications we made to our code to make it consistent with LALSuite's IMRPhenomD_NRTidal waveform model. Appendix B provides more detail about the sensitivities in Einstein-aether theory for the region of parameter space we are considering and justifies why this region cannot be extended. Appendix C gives the exact mathematical expressions used for one of the conditions in the prior, and Appendix D contains plots that demonstrate the recovery of injected parameters with our waveform template.
Conventions:
Greek letters specify spacetime indices, while Latin letters specify spatial indices only. The Einstein summation convention and c = 1 are assumed. The gravitational constant G_N is explicitly listed because there are other gravitational constants in Einstein-aether theory, and this allows us to keep track of which one is which. Finally, following the conventions of much of the earlier Einstein-aether literature, we use the metric signature (+, −, −, −).
II. EINSTEIN-AETHER THEORY
In this section, we present a brief overview of Einsteinaether theory, following mostly [13]. We begin by introducing the action and the field equations, and then continue by discussing the sensitivities of compact objects, which play a key role in our GW model.
A. Einstein-aether Coupling Constants
The general action of Einstein-aether theory is [25,26]
S = S_{ae} + S_{mat}, (2.1)
where S mat denotes the matter action and S ae is the gravitational action of Einstein-aether theory:
S_{ae} = -\frac{1}{16\pi G_{ae}} \int \sqrt{-g}\, d^4x \left[ R + \lambda (U^\mu U_\mu - 1) + \frac{1}{3} c_\theta \theta^2 + c_\sigma \sigma^{\mu\nu} \sigma_{\mu\nu} + c_\omega \omega^{\mu\nu} \omega_{\mu\nu} + c_a A^\mu A_\mu \right]. (2.2)

In this expression, the quantity G_{ae} is the "bare" gravitational constant, related to Newton's constant G_N via

G_N = \frac{G_{ae}}{1 - (c_a/2)}, (2.3)
g is the determinant of the metric, R is the four dimensional Ricci scalar, λ is a Lagrange multiplier that enforces the unit norm of the aether's four-velocity U µ , and {c θ , c σ , c ω , c a } are dimensionless coupling constants. In much of the earlier Einstein-aether theory literature, the action was written in terms of different coupling constants, namely {c 1 , c 2 , c 3 , c 4 }. However the constants used here (which were defined in [26]) appear in many of the physical quantities relevant to GWs in Einsteinaether theory, so they are particularly convenient to us. The two sets of constants can be related to each other through
c_\theta = c_1 + c_3 + 3c_2, (2.4a)
c_\sigma = c_1 + c_3, (2.4b)
c_\omega = c_1 - c_3, (2.4c)
c_a = c_1 + c_4. (2.4d)
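For quick numerical checks, the basis change of Eqs. (2.4), the Newton's-constant relation of Eq. (2.3), and the mode speeds quoted later in Eqs. (2.11) can be transcribed directly. The following is a minimal sketch; the function names are ours and are not taken from GW Analysis Tools or LALSuite:

```python
def to_physical_couplings(c1, c2, c3, c4):
    """Basis change of Eqs. (2.4a)-(2.4d): {c1, c2, c3, c4} -> {c_theta, c_sigma, c_omega, c_a}."""
    return {"c_theta": c1 + c3 + 3.0 * c2,  # (2.4a)
            "c_sigma": c1 + c3,             # (2.4b)
            "c_omega": c1 - c3,             # (2.4c)
            "c_a":     c1 + c4}             # (2.4d)

def newton_constant(G_ae, c_a):
    """Eq. (2.3): Newton's constant in terms of the bare constant G_ae."""
    return G_ae / (1.0 - c_a / 2.0)

def mode_speeds_squared(c_theta, c_sigma, c_omega, c_a):
    """Squared tensor, vector, and scalar mode speeds, Eqs. (2.11a)-(2.11c)."""
    cT2 = 1.0 / (1.0 - c_sigma)
    cV2 = (c_sigma + c_omega - c_sigma * c_omega) / (2.0 * c_a * (1.0 - c_sigma))
    cS2 = ((c_theta + 2.0 * c_sigma) * (1.0 - c_a / 2.0)
           / (3.0 * c_a * (1.0 - c_sigma) * (1.0 + c_theta / 2.0)))
    return cT2, cV2, cS2

# Example point with c_sigma = 0, for which the tensor mode is exactly luminal,
# consistent with the GW170817 bound discussed in the Introduction.
c = to_physical_couplings(c1=0.005, c2=0.005, c3=-0.005, c4=0.005)
cT2, cV2, cS2 = mode_speeds_squared(**c)
```

With c_σ = 0 the tensor speed is exactly 1, while the vector and scalar speeds remain free parameters of the remaining couplings.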
The rest of the terms in the action are the expansion θ, the shear σ µν , the vorticity (also called the twist) ω µν , and the acceleration A µ of the aether's four-velocity. These quantities are defined via
A^\mu = U^\nu \nabla_\nu U^\mu, (2.5a)
\theta = \nabla_\mu U^\mu, (2.5b)
\sigma_{\mu\nu} = \nabla_{(\nu} U_{\mu)} + A_{(\mu} U_{\nu)} - \frac{1}{3}\theta h_{\mu\nu}, (2.5c)
\omega_{\mu\nu} = \nabla_{[\nu} U_{\mu]} + A_{[\mu} U_{\nu]}, (2.5d)
with h_{\mu\nu} = g_{\mu\nu} - U_\mu U_\nu a projector onto directions orthogonal to the aether's four-velocity.
Varying the action with respect to the metric, the aether field, and the Lagrange multiplier (and eliminating the latter from the equations) gives the modified Einstein field equations [13]

E_{\alpha\beta} \equiv G_{\alpha\beta} - T^{ae}_{\alpha\beta} - 8\pi G\, T^{mat}_{\alpha\beta} = 0 (2.6)

and the aether field equations

\AE_\mu \equiv \left[ \nabla_\alpha J^{\alpha\nu} - \left( c_a - \frac{c_\sigma + c_\omega}{2} \right) A_\alpha \nabla^\nu U^\alpha \right] h_{\mu\nu} = 0. (2.7)
In these expressions, G αβ is the usual Einstein tensor, the matter stress-energy tensor is T αβ mat , and the aether stress-energy tensor is
T^{ae}_{\alpha\beta} = \nabla_\mu \left[ J_{(\alpha}{}^{\mu} U_{\beta)} - J^{\mu}{}_{(\alpha} U_{\beta)} - J_{(\alpha\beta)} U^{\mu} \right] + \frac{c_\omega + c_\sigma}{2} \left[ (\nabla_\mu U_\alpha)(\nabla^\mu U_\beta) - (\nabla_\alpha U_\mu)(\nabla_\beta U^\mu) \right] + U_\nu (\nabla_\mu J^{\mu\nu}) U_\alpha U_\beta - \left( c_a - \frac{c_\sigma + c_\omega}{2} \right) \left[ A^2 U_\alpha U_\beta - A_\alpha A_\beta \right] + \frac{1}{2} M^{\sigma\rho}{}_{\mu\nu} \nabla_\sigma U^\mu \nabla_\rho U^\nu g_{\alpha\beta}, (2.8)

with

J^{\alpha}{}_{\mu} \equiv M^{\alpha\beta}{}_{\mu\nu} \nabla_\beta U^\nu, (2.9)

M^{\alpha\beta}{}_{\mu\nu} \equiv \frac{c_\sigma + c_\omega}{2} h^{\alpha\beta} g_{\mu\nu} + \frac{c_\theta - c_\sigma}{3} \delta^\alpha_\mu \delta^\beta_\nu + \frac{c_\sigma - c_\omega}{2} \delta^\alpha_\nu \delta^\beta_\mu + c_a U^\alpha U^\beta g_{\mu\nu}. (2.10)
Linearizing these field equations and perturbing about Minkowski space results in propagation equations for the gravitational wave polarization tensor, which can be classified into a transverse-traceless (spin-2) part, a vector (spin-1) part, and a scalar (spin-0) part. Henceforth, we shall refer to these different spins as the tensor, vector and scalar parts respectively of the gravitational wave polarization. The speeds with which these polarizations propagate are given by [27]
c_T^2 = \frac{1}{1 - c_\sigma}, (2.11a)
c_V^2 = \frac{c_\sigma + c_\omega - c_\sigma c_\omega}{2 c_a (1 - c_\sigma)}, (2.11b)
c_S^2 = \frac{(c_\theta + 2c_\sigma)(1 - c_a/2)}{3 c_a (1 - c_\sigma)(1 + c_\theta/2)}. (2.11c)

B. Sensitivities
The aether field in Einstein-aether theory couples to matter indirectly via the metric perturbation. In regions where these perturbations are large, as around strongly gravitating bodies, their effect is more important. Hence, the mass of strongly gravitating objects is affected by the aether field. This coupling depends on the relative velocity between the aether field and the object, γ ≡ u^α U_α, with u^α the four-velocity of the object. In most situations, including the inspiral of two widely separated objects, this relative velocity will be small compared to the speed of light, so that γ ≈ 1. Thus we can Taylor expand the mass of a gravitating body about γ = 1 [13]:

\mu(\gamma) = \bar{m} \left[ 1 + \sigma (1 - \gamma) + \frac{1}{2} \sigma' (1 - \gamma)^2 + \ldots \right], (2.12)

where \bar{m}, σ, and σ' are constants. The quantity σ is often referred to as the "sensitivity" and σ' its derivative [13,28]:

\sigma \equiv - \left. \frac{d \ln \mu(\gamma)}{d \ln \gamma} \right|_{\gamma=1}, (2.13a)

\sigma' \equiv \sigma + \sigma^2 + \left. \frac{d^2 \ln \mu(\gamma)}{d(\ln \gamma)^2} \right|_{\gamma=1}. (2.13b)
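Definitions (2.13) are consistent with the expansion (2.12): differentiating the truncated series and evaluating at γ = 1 returns σ and σ' exactly. A quick symbolic check (sympy is our choice of tool here, not part of the paper's pipeline):

```python
import sympy as sp

gamma, mbar, sigma, sigmap = sp.symbols("gamma mbar sigma sigmap", positive=True)

# Truncated mass expansion about gamma = 1, Eq. (2.12).
mu = mbar * (1 + sigma * (1 - gamma) + sp.Rational(1, 2) * sigmap * (1 - gamma) ** 2)

# d ln mu / d ln gamma = gamma * d(ln mu)/d(gamma).
dln = gamma * sp.diff(sp.log(mu), gamma)

# Eq. (2.13a): the sensitivity from the first logarithmic derivative at gamma = 1.
sens = sp.simplify(-dln.subs(gamma, 1))

# Eq. (2.13b): sigma' recovered from the second logarithmic derivative.
d2ln = gamma * sp.diff(dln, gamma)
sens_prime = sp.simplify(sigma + sigma**2 + d2ln.subs(gamma, 1))
```

Here `sens` reduces to σ and `sens_prime` to σ', confirming that the two definitions invert the truncated expansion.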
Computing the equations of motion for a binary system leads to the definition of an "active" mass for each object, m_A, related to the constant \bar{m}_A via m_A = (1 + \sigma_A)\,\bar{m}_A. This is done such that the Newtonian limit of Einstein-aether theory agrees with Newtonian gravity, with a rescaled gravitational constant G_{AB} = G_N/[(1 + \sigma_A)(1 + \sigma_B)] [28].
The sensitivities play a key role in the GWs emitted by binary systems in Einstein-aether theory. This is because not only do they appear in the Hamiltonian (and, therefore, in the equations of motion) of binaries, but they also enter the fluxes of radiation that back-react on the binary, forcing it to inspiral faster than it would otherwise. Unfortunately, the sensitivities of black holes (BHs) have not yet been calculated, but they are known for neutron stars (NSs) [13]. For these objects, the sensitivities range between 10^{-8} and 1 depending on the region of parameter space considered for the coupling constants (see Sec. IV C and Appendix B for more detail). The sensitivities also vary depending on the mass and radius of the NS (and thus on the equation of state (EoS)). Given that we can only model the sensitivity of NSs, henceforth we focus exclusively on GW events produced by binary NS inspirals, namely GW170817 and GW190425.
The calculation of the sensitivity of NSs is highly nontrivial. To solve for this quantity in terms of the Einstein-aether parameters, Gupta et al. [13] solved the field equations through linear order in the NS's velocity and extracted the sensitivity from the asymptotic fall off of the metric and aether field. This calculation was done both for (tabulated) realistic EoSs, as well as for the Tolman VII phenomenological EoS. The latter has the advantage of allowing for an analytic solution to the field equations at zeroth order in velocity, which then renders the calculation of the sensitivities semi-analytical. When compared to the numerical solutions using the other EoSs, the Tolman VII results are highly accurate, and in fact, the sensitivities present an approximately universal behavior (with less than 3% variation among the EoSs studied) when written in terms of the stellar binding energy Ω_A.
With this at hand, Gupta et al. were able to find an analytic representation of the sensitivities [13]. First, rescaling to a more convenient parameter in the description of GWs, one defines the sensitivities s A for body A in a binary system via [28]
s_A \equiv \frac{\sigma_A}{1 + \sigma_A}. (2.14)
Then, carrying out a small binding energy expansion using the Tolman VII EoS, one finds
s_A = \frac{(3\alpha_1 + 2\alpha_2)}{3}\,\frac{\Omega_A}{m_A} + \frac{573\alpha_1^3 + \alpha_1^2(67669 - 764\alpha_2) + 96416\alpha_2^2 + 68\alpha_1\alpha_2(9\alpha_2 - 2632)}{25740\,\alpha_1}\,\frac{\Omega_A^2}{m_A^2} + \frac{1}{656370000\,c_\omega\,\alpha_1^2}\Big\{ -4\alpha_1^2(\alpha_1 + 8)\left[36773030\alpha_1^2 - 39543679\alpha_1\alpha_2 + 11403314\alpha_2^2\right] + c_\omega\left[1970100\alpha_1^5 - 13995878400\alpha_2^3 - 640\alpha_1\alpha_2^2(-49528371 + 345040\alpha_2) - 5\alpha_1^4(19548109 + 788040\alpha_2) - 16\alpha_1^2\alpha_2(1294533212 - 29152855\alpha_2 + 212350\alpha_2^2) + \alpha_1^3(2699192440 - 309701434\alpha_2 + 5974000\alpha_2^2)\right]\Big\}\,\frac{\Omega_A^3}{m_A^3} + \mathcal{O}\!\left(\frac{\Omega_A^4}{m_A^4}\right), (2.15)
where α 1 and α 2 are the preferred frame parameterized post-Newtonian parameters for Einstein-aether theory, namely [29],
\alpha_1 = \frac{4\left[c_\omega(c_a - 2c_\sigma) + c_a c_\sigma\right]}{c_\omega(c_\sigma - 1) - c_\sigma}, (2.16a)

\alpha_2 = \frac{\alpha_1}{2} + \frac{3(c_a - 2c_\sigma)(c_\theta + c_a)}{(2 - c_a)(c_\theta + 2c_\sigma)}, (2.16b)
and Ω A /m A is the ratio of the stellar binding energy to the NS mass m A . For the Tolman VII EoS, the compactness of the star, C := m A /R A , where R A is the radius of the star, can be expressed in terms of this ratio [13],
C = -\frac{7\,\Omega_A}{5\,m_A} + \frac{35819\,\alpha_1\,\Omega_A^3}{85800\,m_A^3} + \mathcal{O}\!\left(\frac{\Omega_A^4}{m_A^4}\right) (2.17)
for small compactnesses and binding energies. The leading-order term of the expansion of the sensitivity in Eq. (2.15) agrees with that derived by Foster [28]. Inverting this relationship, one finds the binding energy over the mass as a function of compactness:

\frac{\Omega_A}{m_A} = -\frac{5}{7}\,C - \frac{18275\,\alpha_1\,C^3}{168168} + \mathcal{O}(C^4). (2.18)
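Equations (2.16)–(2.18) are straightforward to transcribe numerically. A sketch (helper names are ours): note that the truncated series (2.17) and (2.18) are inverses of each other through O(C^3), which a round trip confirms, and that Eq. (2.16a) collapses to α_1 = −4c_a when c_σ = 0:

```python
def alpha_1(c_theta, c_sigma, c_omega, c_a):
    """Preferred-frame PPN parameter, Eq. (2.16a)."""
    return (4.0 * (c_omega * (c_a - 2.0 * c_sigma) + c_a * c_sigma)
            / (c_omega * (c_sigma - 1.0) - c_sigma))

def alpha_2(c_theta, c_sigma, c_omega, c_a):
    """Preferred-frame PPN parameter, Eq. (2.16b)."""
    a1 = alpha_1(c_theta, c_sigma, c_omega, c_a)
    return (a1 / 2.0 + 3.0 * (c_a - 2.0 * c_sigma) * (c_theta + c_a)
            / ((2.0 - c_a) * (c_theta + 2.0 * c_sigma)))

def compactness_from_binding(x, a1):
    """Eq. (2.17): compactness C as a series in x = Omega_A/m_A (Tolman VII)."""
    return -7.0 * x / 5.0 + 35819.0 * a1 * x**3 / 85800.0

def binding_from_compactness(C, a1):
    """Eq. (2.18): the series inversion of Eq. (2.17)."""
    return -5.0 * C / 7.0 - 18275.0 * a1 * C**3 / 168168.0

a1 = alpha_1(c_theta=0.01, c_sigma=0.0, c_omega=0.1, c_a=1e-3)  # collapses to -4e-3
C = 0.15  # a typical neutron-star compactness
C_back = compactness_from_binding(binding_from_compactness(C, a1), a1)
```

The residual C_back − C is of the order of the neglected O(C^4) terms, i.e. far below the accuracy of the universal relations themselves.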
Our Einstein-aether waveform model relies on knowing the sensitivities s A , but as shown in Eqs. (2.15) and (2.18), these depend ultimately on the compactness. We can relate the compactness of each star to their tidal deformabilities as follows. First, following previous work on nuclear astrophysics with GWs [8,[30][31][32], we will sample the GW likelihood by varying the symmetric tidal deformability Λ s = (Λ 1 + Λ 2 )/2 (among many other parameters). From Λ s , we can obtain Λ a = (Λ 2 − Λ 1 )/2 using the binary Love relations [21,23], and from Λ s and Λ a we can easily obtain Λ 1 and Λ 2 . Now, from the latter two quantities, we will obtain the compactness through the approximately universal C-Love relations [21,23]
C_A(\Lambda_A) = 0.2496\,\Lambda_A^{-1/5}\,\frac{1 + \sum_{i=1}^{3} a_i \Lambda_A^{-i/5}}{1 + \sum_{i=1}^{3} b_i \Lambda_A^{-i/5}}, (2.19)
where the fitting coefficients a_i and b_i are given in [21,23]. From the compactness, we can then evaluate the stellar binding energy, and from that, the sensitivities. The logic is outlined in Fig. 1.
The binary Love and C-Love relations feature heavily in the construction of our waveform model, but they are known to be only approximately EoS insensitive. In fact, their variability is about 10% [33]. One can include this variability in Bayesian parameter estimation, and then marginalize over it, as done for example in [33]. We will not include it here, however, because the statistical error in the extraction of the symmetric tidal deformability dominates over any systematic error introduced by this variability, as shown in [21], at least in the current GW detector era.

FIG. 1. A flow chart of computing sensitivities from the parameter sampled on (the symmetric tidal deformability, Λ_s). We use the binary Love relations, the C-Love relations, the Tolman VII EoS, and the equation for the sensitivities as a function of the binding energy to mass ratio. These sensitivities will then be used in the waveform as described in Sec. III B.
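The Fig. 1 pipeline can be sketched end to end. The binary-Love fit and the C-Love coefficients a_i, b_i come from Refs. [21,23] and are not reproduced here, so in the sketch below `Lambda_a` and the coefficient tuples are placeholders that the caller must supply from those published fits, and `alpha1`, `alpha2` denote the PPN parameters of Eqs. (2.16):

```python
def c_love(Lam, a, b):
    """Eq. (2.19): compactness from tidal deformability; a = (a1, a2, a3) and
    b = (b1, b2, b3) are the published fit coefficients of Refs. [21, 23]."""
    num = 1.0 + sum(ai * Lam ** (-(i + 1) / 5.0) for i, ai in enumerate(a))
    den = 1.0 + sum(bi * Lam ** (-(i + 1) / 5.0) for i, bi in enumerate(b))
    return 0.2496 * Lam ** (-1.0 / 5.0) * num / den

def sensitivity(C, alpha1, alpha2):
    """Leading-order term of Eq. (2.15), with Omega_A/m_A from Eq. (2.18)."""
    x = -5.0 * C / 7.0 - 18275.0 * alpha1 * C**3 / 168168.0
    return (3.0 * alpha1 + 2.0 * alpha2) / 3.0 * x

def sensitivities_from_lambdas(Lambda_s, Lambda_a, a, b, alpha1, alpha2):
    """Fig. 1, left to right: (Lambda_s, Lambda_a) -> (Lambda_1, Lambda_2)
    -> compactnesses -> binding energies -> (s_1, s_2)."""
    Lam1, Lam2 = Lambda_s - Lambda_a, Lambda_s + Lambda_a
    return tuple(sensitivity(c_love(L, a, b), alpha1, alpha2) for L in (Lam1, Lam2))

# With all fit coefficients zeroed, Eq. (2.19) reduces to C = 0.2496 Lambda^(-1/5),
# which serves as a smoke test of the plumbing (not a physical choice).
zero = (0.0, 0.0, 0.0)
s1, s2 = sensitivities_from_lambdas(300.0, 100.0, zero, zero, -4e-3, -2e-7)
```

Unequal tidal deformabilities then yield unequal sensitivities, which is what sources the dipole term Δs = s_1 − s_2 in the waveform of Sec. III.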
III. GWS IN EINSTEIN-AETHER THEORY
In this section, we review the work of [15] and [34] to construct expressions for the GW polarizations of Einstein-aether theory for a quasi-circular inspiraling binary composed of non-spinning NSs. We then present the Fourier transform of the response function in explicit form, ready for use in parameter estimation and data analysis.
A. GW Polarizations in Einstein-aether theory
Following the example of many other studies [15,25,35,36], we begin by considering linear perturbations to a background Minkowski metric, η µν = diag(−1, 1, 1, 1), and linear perturbations to a stationary aether field:
h_{\mu\nu} = g_{\mu\nu} - \eta_{\mu\nu}, \quad w^0 = U^0 - 1, \quad w^i = U^i. (3.1)
The one-form h 0i and the vector w i can be uniquely decomposed into irreducible transverse and longitudinal pieces, while the spatial components of the metric perturbation h ij can be uniquely decomposed into a transverse traceless tensor, a transverse vector, and transverse and longitudinal traces [35]:
w^i = \nu^i + \nu^{,i}, (3.2a)
h_{0i} = \gamma_i + \gamma_{,i}, (3.2b)
h_{ij} = \phi_{ij} + 2\phi_{(i,j)} + \frac{1}{2} P_{ij}[f] + \phi_{,ij}, (3.2c)
where the quantity P ij := δ ij ∆ − ∂ i ∂ j is a transverse differential operator, the quantity ∆ := δ ij ∂ i ∂ j is the flat-space spatial Laplacian, and F := ∆f is a scalar. The transverse vector and tensor fields here satisfy the divergence-free condition,
\partial_i \gamma^i = \partial_i \nu^i = \partial_i \phi^i = 0, \quad \partial^j \phi_{ij} = 0, (3.3)
and the field φ ij is also traceless, φ i i = 0. Note that we also make the conventional gauge choice, φ i = 0 and ν = γ = 0 [35].
With these convenient decompositions in hand, we would like to use the formula for GW polarizations in generic modified theories of gravity provided by [37]. However, that work made the implicit assumption that all polarizations of the GW travel at the same speed, specifically the speed of light, and this assumption does not hold for Einstein-aether theory. We extended the work of [37] in [34] to accommodate theories that allow for modes with different and arbitrary speeds. In that work, we also explicitly computed the expressions for GW polarizations in Einstein-aether theory by inserting Eqs. (3.2) and (2.11) into our general formula. We found that
h_+ = \frac{1}{2}\phi_{ij} e^{ij}_+, \quad h_\times = \frac{1}{2}\phi_{ij} e^{ij}_\times, (3.4a)
h_b = \frac{1}{2} F, \quad h_L = (1 + 2\beta_2)\, h_b, (3.4b)
h_X = \frac{1}{2}\beta_1 \nu_i e^i_X, \quad h_Y = \frac{1}{2}\beta_1 \nu_i e^i_Y, (3.4c)

where

\beta_1 = -\frac{2 c_\sigma}{c_V}, (3.5)
\beta_2 = \frac{c_a - 2c_\sigma}{2 c_a (1 - c_\sigma) c_S^2}, (3.6)
and e^{ij}_+ = e^i_X e^j_X - e^i_Y e^j_Y and e^{ij}_\times = e^i_X e^j_Y + e^i_Y e^j_X are combinations of basis vectors, defined in the orthogonal basis for GWs propagating in the e_Z direction:

e_X = (\cos\vartheta \cos\varphi,\ \cos\vartheta \sin\varphi,\ -\sin\vartheta), (3.7a)
e_Y = (-\sin\varphi,\ \cos\varphi,\ 0), (3.7b)
e_Z = (\sin\vartheta \cos\varphi,\ \sin\vartheta \sin\varphi,\ \cos\vartheta). (3.7c)

Equation (3.6) corrects a small minus sign error in [15] that has since been addressed. Aside from that, our results in Eq. (3.4) agree with those of Eq. (3.28) in [15], which were computed in a different way (i.e., starting from the time-like geodesic deviation equation and working with the linearized Riemann tensor in terms of the metric perturbation; see [15] for more details).

An intuitive understanding of these different GW polarizations in Einstein-aether theory can be gleaned from considering their impact on a ring of test particles. In modified theories of gravity, the most general GW has up to six polarization modes. This includes two each of tensor, vector, and scalar type. The two tensor polarizations, h_+ and h_×, are the plus and cross modes familiar from GR. The two vector polarizations, h_X and h_Y, are labeled for the plane in which they would make a ring of test particles oscillate for a wave propagating in the z-direction (see Fig. 2). Finally, the two scalar polarizations, h_b and h_L, are called the breathing and longitudinal modes for the way in which they would make a ring of test particles oscillate in and out or longitudinally along the direction of propagation (again see Fig. 2).

FIG. 2. The oscillation of a ring of test particles when each of the six possible polarizations of a GW in Einstein-aether theory passes through, propagating in the z-direction. The solid black line represents the ring at times ωt = 0, π; the dashed blue line represents the ring at time ωt = π/2; and the dotted orange line shows ωt = 3π/2.
We continue to follow [15] to compute the GW polarizations that appear in Eq. (3.4) specifically for a binary system. We will not repeat that calculation here, but the details can be found in [15]. That paper assumes that the detectors are far away from the source and solves the linearized Einstein-aether field equations to derive expressions for φ_ij, ν_i, γ_i and F in terms of the Einstein-aether coupling constants, the mass quadrupole moment, the trace-free mass quadrupole moment, the renormalized versions of these quantities, the renormalized mass dipole moment and the renormalized current quadrupole moment. Reference [15] then focuses on two non-spinning compact objects in a quasi-circular orbit to find expressions for these multipolar moments in terms of typical binary system parameters (for example, the binary chirp mass and orbital frequency of the system). Unlike previous work, Ref. [15] allows the center of mass of the binary system to not be comoving with the aether, essentially letting their relative velocity be nonzero, V^i ≠ 0. We will again choose to set V^i = 0 since we know it must be V^i ≈ O(10^{-3}), given the peculiar velocity of our own galaxy relative to the cosmic microwave background, and we consider this to be negligible compared to the other Einstein-aether modifications [28,36].
B. The Response Function
Parameter estimation on actual data from advanced LIGO, advanced Virgo, or KAGRA requires the Fourier transform \tilde{h}(f) of the response function h(t) for an L-shaped GW detector. From [38], we can write the latter as

h(t) = \sum_N F_N(\theta, \phi, \psi)\, h_N(t), (3.8)

where N ∈ {+, ×, b, L, X, Y} and F_N(θ, φ, ψ) are the angle pattern functions, which depend on the polar, azimuthal and polarization angles (θ, φ, and ψ, respectively):

F_+ \equiv \frac{1}{2}\left(1 + \cos^2\theta\right)\cos 2\phi \cos 2\psi - \cos\theta \sin 2\phi \sin 2\psi, (3.9a)
F_\times \equiv \frac{1}{2}\left(1 + \cos^2\theta\right)\cos 2\phi \sin 2\psi + \cos\theta \sin 2\phi \cos 2\psi, (3.9b)
F_b \equiv -\frac{1}{2}\sin^2\theta \cos 2\phi, (3.9c)
F_L \equiv \frac{1}{2}\sin^2\theta \cos 2\phi, (3.9d)
F_X \equiv -\sin\theta \left(\cos\theta \cos 2\phi \cos\psi - \sin 2\phi \sin\psi\right), (3.9e)
F_Y \equiv -\sin\theta \left(\cos\theta \cos 2\phi \sin\psi + \sin 2\phi \cos\psi\right). (3.9f)
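The pattern functions of Eqs. (3.9) are easy to transcribe and sanity-check. In particular, F_b = −F_L, so a single L-shaped interferometer cannot separate the two scalar modes; they enter the response only in a fixed combination. A sketch (the dictionary layout is ours):

```python
import math

def pattern_functions(theta, phi, psi):
    """Angle pattern functions for an L-shaped detector, Eqs. (3.9a)-(3.9f)."""
    ct, st = math.cos(theta), math.sin(theta)
    c2p, s2p = math.cos(2.0 * phi), math.sin(2.0 * phi)
    c2s, s2s = math.cos(2.0 * psi), math.sin(2.0 * psi)
    cs, ss = math.cos(psi), math.sin(psi)
    return {
        "+": 0.5 * (1.0 + ct**2) * c2p * c2s - ct * s2p * s2s,  # (3.9a)
        "x": 0.5 * (1.0 + ct**2) * c2p * s2s + ct * s2p * c2s,  # (3.9b)
        "b": -0.5 * st**2 * c2p,                                # (3.9c)
        "L": 0.5 * st**2 * c2p,                                 # (3.9d)
        "X": -st * (ct * c2p * cs - s2p * ss),                  # (3.9e)
        "Y": -st * (ct * c2p * ss + s2p * cs),                  # (3.9f)
    }

# Overhead, optimally oriented source: only the plus mode is picked up maximally.
F0 = pattern_functions(0.0, 0.0, 0.0)
# A generic sky location for checking the scalar-mode degeneracy.
F = pattern_functions(0.7, 1.1, 0.3)
```

The F_b = −F_L degeneracy holds for any (θ, φ, ψ), which is why Eqs. (3.15) and (3.19) below fold the longitudinal mode into the breathing one via the constant a_bL.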
Through the stationary phase approximation (SPA), one can compute the Fourier transform of the response function, namely

\tilde{h}(f) = \int h(t)\, e^{2i\pi f t}\, dt. (3.10)
Doing so, we have reproduced Eq. (5.7) of [15], and then collected terms by the F_N functions of Eq. (3.9). We also choose to separate the contributions to these expressions from the ℓ = 2 and ℓ = 1 orbital harmonics. We do so because the ℓ = 1 harmonics are multiplied by an overall amplitude factor that depends on the coupling constants and that is of O(10^{-5}) relative to the overall amplitude of the ℓ = 2 harmonic, when one saturates current constraints. Ultimately, we arrive at
\tilde{h}(f) = \sum_N \sum_{\ell=1,2} \tilde{h}_{N,\ell}(f)\, F_N, (3.11)

with the expressions for \tilde{h}_{N,\ell} given by³

\tilde{h}_{(+,2)}(f) = A^{(2)}(f)\,\left(1 + \cos^2\iota\right) e^{i\Psi^{(2)}}\, e^{-i2\pi f D_L\left(1 - c_T^{-1}\right)}, (3.12)
\tilde{h}_{(\times,2)}(f) = A^{(2)}(f)\,\left[2i\cos\iota\right] e^{i\Psi^{(2)}}\, e^{-i2\pi f D_L\left(1 - c_T^{-1}\right)}, (3.13)
\tilde{h}_{(b,2)}(f) = A^{(2)}(f)\, \frac{1}{2 - c_a}\left[3 c_a (Z - 1) - \frac{2S}{c_S^2}\right] \sin^2\iota\, e^{i\Psi^{(2)}}\, e^{-i2\pi f D_L\left(1 - c_S^{-1}\right)}, (3.14)
\tilde{h}_{(L,2)}(f) = a_{bL}\, \tilde{h}_{(b,2)}(f), (3.15)
\tilde{h}_{(X,2)}(f) = A^{(2)}(f)\, \frac{\beta_1}{c_\sigma + c_\omega - c_\sigma c_\omega}\, \frac{1}{2 c_V}\left[S - \frac{c_\sigma}{1 - c_\sigma}\right] \sin(2\iota)\, e^{i\Psi^{(2)}}\, e^{-i2\pi f D_L\left(1 - c_V^{-1}\right)}, (3.16)
\tilde{h}_{(Y,2)}(f) = A^{(2)}(f)\, \frac{i\beta_1}{c_\sigma + c_\omega - c_\sigma c_\omega}\, \frac{1}{c_V}\left[S - \frac{c_\sigma}{1 - c_\sigma}\right] \sin\iota\, e^{i\Psi^{(2)}}\, e^{-i2\pi f D_L\left(1 - c_V^{-1}\right)}, (3.17)
\tilde{h}_{(b,1)}(f) = A^{(1)}(f)\, \frac{2i}{(2 - c_a)\, c_S}\, \sin\iota\, e^{i\Psi^{(1)}}\, e^{-i2\pi f D_L\left(1 - c_S^{-1}\right)}, (3.18)
\tilde{h}_{(L,1)}(f) = a_{bL}\, \tilde{h}_{(b,1)}(f), (3.19)
\tilde{h}_{(X,1)}(f) = A^{(1)}(f)\, \frac{i\beta_1}{c_\sigma + c_\omega - c_\sigma c_\omega}\, \cos\iota\, e^{i\Psi^{(1)}}\, e^{-i2\pi f D_L\left(1 - c_V^{-1}\right)}, (3.20)
\tilde{h}_{(Y,1)}(f) = A^{(1)}(f)\left[-\frac{\beta_1}{c_\sigma + c_\omega - c_\sigma c_\omega}\right] e^{i\Psi^{(1)}}\, e^{-i2\pi f D_L\left(1 - c_V^{-1}\right)}, (3.21)
with common amplitude and phase functions given by
A^{(2)}(f) = -\frac{1}{2}\sqrt{\frac{5\pi}{48}}\, \sqrt{\frac{2 - c_a}{(1 - s_1)(1 - s_2)}}\, \frac{1}{D_L}\, G_N^2 \bar{\mathcal{M}}^2\, \kappa_3^{-1/2} \left(G_N \pi \bar{\mathcal{M}} f\right)^{-7/6} \left[1 - \frac{1}{2}\left(G_N \pi \bar{\mathcal{M}} f\right)^{-2/3} \eta^{2/5}\, x\right], (3.22)

\Psi^{(2)}(f) = \frac{3}{64}\, \frac{(1 - s_1)(1 - s_2)}{2 - c_a}\, \kappa_3^{-1} \left(G_N \pi \bar{\mathcal{M}} f\right)^{-5/3} \left[1 - \frac{4}{7}\left(G_N \pi \bar{\mathcal{M}} f\right)^{-2/3} \eta^{2/5}\, x\right] + 2\pi f \bar{t}_c - 2\Phi(t_c) - \frac{\pi}{4}, (3.23)

A^{(1)}(f) = -\frac{1}{4}\sqrt{\frac{5\pi}{48}}\, \sqrt{\frac{2 - c_a}{(1 - s_1)(1 - s_2)}}\, \frac{1}{D_L}\, \Delta s\, G_N^2 \bar{\mathcal{M}}^2\, \kappa_3^{-1/2}\, \eta^{1/5} \left(G_N \pi \bar{\mathcal{M}} f\right)^{-3/2} \left[1 - \frac{1}{2}\left(2 G_N \pi \bar{\mathcal{M}} f\right)^{-2/3} \eta^{2/5}\, x\right], (3.24)

\Psi^{(1)}(f) = \frac{3}{128}\, \frac{(1 - s_1)(1 - s_2)}{2 - c_a}\, \kappa_3^{-1} \left(2 G_N \pi \bar{\mathcal{M}} f\right)^{-5/3} \left[1 - \frac{4}{7}\left(2 G_N \pi \bar{\mathcal{M}} f\right)^{-2/3} \eta^{2/5}\, x\right] + 2\pi f \bar{t}_c - \Phi(t_c) - \frac{\pi}{4}. (3.25)
Note that the ℓ = 1 harmonic only affects the additional non-GR polarizations. The quantities in these expressions that we have not yet explicitly defined are given in [15], but we repeat their definitions here for completeness:
a_{bL} = 1 + 2\beta_2, (3.26a)
\bar{t}_c = t_c + D_L, (3.26b)
\mathcal{M} = (m_1 + m_2)\, \eta^{3/5}, (3.26c)
Z = \frac{(\alpha_1 - 2\alpha_2)(1 - c_\sigma)}{3(2 c_\sigma - c_a)}, (3.26d)
x = \frac{5\, \Delta s^2}{32\, \kappa_3}\, C, (3.26e)
\Delta s = s_1 - s_2, (3.26f)
\kappa_3 = A_1 + A_2 S + A_3 S^2, (3.26g)

where

S = s_1 \mu_2 + s_2 \mu_1, (3.27a)
\mu_A = \frac{m_A}{m_1 + m_2}, (3.27b)
\eta = \frac{m_1 m_2}{(m_1 + m_2)^2}, (3.27c)
and
A_1 = \frac{1}{c_T} + \frac{2 c_a c_\sigma^2}{\left(c_\sigma + c_\omega - c_\sigma c_\omega\right)^2 c_V} + \frac{3 c_a (Z - 1)^2}{2(2 - c_a)\, c_S}, (3.28a)
A_2 = -\frac{2 c_\sigma}{\left(c_\sigma + c_\omega - c_\sigma c_\omega\right) c_V^3} - \frac{2(Z - 1)}{(2 - c_a)\, c_S^3}, (3.28b)
A_3 = \frac{1}{2 c_a c_V^5} + \frac{2}{3 c_a (2 - c_a)\, c_S^5}, (3.28c)
C = \frac{4}{3 c_a c_V^3} + \frac{4}{3 c_a (2 - c_a)\, c_S^3}. (3.28d)
For convenience, we have also defined a new quantity

\bar{\mathcal{M}} = (1 - s_1)(1 - s_2)\, \mathcal{M}. (3.29)
Now that we have the mathematical expressions for the Fourier transform of the response function separated out into these convenient pieces, corresponding to the ℓ = 2 and ℓ = 1 contributions to each of the different polarizations of the GW, we can implement them in a waveform model, as we shall describe in the next section.
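The mass and sensitivity combinations of Eqs. (3.26)-(3.27) and (3.29) are simple algebra; a minimal Python transcription (function name ours, and taking Eq. (3.29) literally as written) is:

```python
def binary_quantities(m1, m2, s1, s2):
    """Mass/sensitivity combinations from Eqs. (3.26)-(3.27) and (3.29)."""
    m_tot = m1 + m2
    eta = m1 * m2 / m_tot**2                  # symmetric mass ratio, Eq. (3.27c)
    chirp = m_tot * eta**0.6                  # chirp mass M, Eq. (3.26c)
    mu1, mu2 = m1 / m_tot, m2 / m_tot         # Eq. (3.27b)
    S = s1 * mu2 + s2 * mu1                   # Eq. (3.27a)
    delta_s = s1 - s2                         # Eq. (3.26f)
    chirp_bar = (1 - s1) * (1 - s2) * chirp   # scaled chirp mass, Eq. (3.29)
    return eta, chirp, S, delta_s, chirp_bar
```

In the GR limit (s_1 = s_2 = 0) the scaled chirp mass reduces to the ordinary chirp mass, which is the consistency check used later in Sec. VI.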
IV. AN EINSTEIN-AETHER WAVEFORM TEMPLATE
To compare gravitational wave predictions from Einstein-aether theory directly with data, we need an Einstein-aether waveform model. This section starts with a basic description of GW Analysis Tools (GWAT), the code used for this analysis. Next we follow [41] and update GWAT to incorporate the IMRPhenomD_NRTidalv2 model for binary NS mergers. Finally, we describe the additions that were made to the GWAT implementation of the IMRPhenomD_NRTidalv2 model to create the EA_IMRPhenomD_NRT model, which is capable of modeling coalescing NSs in Einstein-aether theory. Throughout this section, we compare output from our code to previous work to demonstrate its functionality and validity.
A. GWAT Implementation of BBH Waveform Models in GR
The code used for the parameter estimation analysis that will be presented in this paper was built off of GWAT, a set of tools for statistical studies in GW science developed by Scott Perkins and collaborators at the University of Illinois Urbana-Champaign [18]. This software allows the user to select different waveform templates and perform parameter estimation on binary BH systems using Bayesian inference (for a review of how parameter estimation is done in GW science, see e.g. [30]). To gather independent samples for the posterior, GWAT uses a Markov Chain Monte Carlo (MCMC) sampler, aided by parallel tempering and a mix of jump proposals. For example, for the un-tempered chains, 30% of jumps are proposed with differential evolution and 70% of jumps are proposed along the eigenvectors of the Fisher matrix.
GWAT contains several waveform templates available for use, but for the purposes of this paper, we started development from the IMRPhenomD model [40,42]. This waveform is defined in GR with an 11-dimensional parameter space, spanned by the parameter vector θ = {α, sin δ, ψ, cos ι, φ_ref, t_c, D_L, \mathcal{M}, η, χ_1, χ_2}, where α and δ are the right ascension and declination angles of the binary in the sky, ψ is the polarization angle with respect to Earth-centered coordinates, ι is the inclination angle of the binary, φ_ref is the phase at a reference frequency (f_ref, chosen to be consistent with LALSuite), t_c is the time of coalescence, D_L is the luminosity distance, \mathcal{M} is the chirp mass of the binary, as defined in Eq. (3.26c), η is the symmetric mass ratio, as defined in Eq. (3.27c), and χ_1 (χ_2) is the dimensionless spin of the heavier (lighter) object. The dimensionless astrophysical parameters are sampled from uniform priors in the following regions: α ∈ [0, 2π], sin δ ∈ [−1, 1], ψ ∈ [0, π], cos ι ∈ [−1, 1], φ_ref ∈ [0, 2π], χ_1 ∈ [−0.01, 0.01], χ_2 ∈ [−0.01, 0.01]. The dimensionful astrophysical parameters have the following priors: t_c has a flat prior that is restricted to be within 0.1 seconds of the trigger time of the event, D_L is sampled uniformly in the volume defined by the range [5, 300] Mpc, and instead of using a prior uniform in \mathcal{M} and η, we use a prior uniform in m_1 and m_2 in the range [1, 2.5] M_⊙ for NSs.
B. Extending the GWAT Implementation of BBH Waveform Models to BNS inspirals in GR
As mentioned in Sec. II, constraints on Einstein-aether theory can currently be studied only with signals from BNS inspirals. Thus, as a first step, the GWAT implementation of the IMRPhenomD model has to be extended to include finite-size BNS effects. This extension requires modifications to the GW amplitude and phase, which we implemented following the IMRPhenomD_NRTidalv2 model [43]. The exact form of these modifications can be found in Appendix A, but in essence, they are characterized by the mass-weighted tidal deformability \tilde{\Lambda}, which is defined by [44]

\tilde{\Lambda} = \frac{8}{13}\left[\left(1 + 7\eta - 31\eta^2\right)\left(\Lambda_1 + \Lambda_2\right) + \sqrt{1 - 4\eta}\left(1 + 9\eta - 11\eta^2\right)\left(\Lambda_1 - \Lambda_2\right)\right]. (4.1)
Therefore, in addition to the BH astrophysical parameters of Sec. IV A, θ must now also include the tidal deformabilities of each NS, Λ_1 and Λ_2, increasing the dimensionality of the parameter space to 13. Another important modification is the smooth filtering of the signal at the end of the inspiral, which is accomplished with a Planck taper function. This is implemented to avoid including the merger phase of the BNS coalescence, whose phenomenological analytic description does not yet exist and which would otherwise be present because the IMRPhenomD model includes merger and ringdown.
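The mass-weighted tidal deformability of Eq. (4.1) is straightforward to transcribe; a short Python sketch (function name ours) is:

```python
import math

def lambda_tilde(lam1, lam2, eta):
    """Mass-weighted tidal deformability of Eq. (4.1).

    lam1, lam2: individual NS tidal deformabilities; eta: symmetric mass ratio.
    """
    symmetric = (1 + 7 * eta - 31 * eta**2) * (lam1 + lam2)
    antisymmetric = math.sqrt(1 - 4 * eta) * (1 + 9 * eta - 11 * eta**2) * (lam1 - lam2)
    return (8.0 / 13.0) * (symmetric + antisymmetric)
```

A useful sanity check is the equal-mass case (η = 1/4, Λ_1 = Λ_2 = Λ), for which Eq. (4.1) reduces to \tilde{\Lambda} = Λ.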
To compare our code with LALSuite, we first generated 100 different random combinations of source parameters, and then we computed their respective waveforms in GWAT and in LALSuite. We then calculated the relative fractional difference between the amplitudes computed with both codes,

\frac{A_{LAL} - A_{GW}}{A_{avg}} = \frac{2\left(A_{LAL} - A_{GW}\right)}{A_{LAL} + A_{GW}}, (4.2)
where A LAL is the amplitude calculated by LALSuite and A GW is the amplitude calculated by GWAT. The difference in the phase computed by the two programs was calculated via Ψ LAL − Ψ GW . The relative amplitude and phase differences are below 0.001% and constant across frequency, which will thus not affect our parameter estimation studies.
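The comparison statistic of Eq. (4.2) is, explicitly, a difference normalized by the average of the two amplitudes; a one-line sketch (function name ours):

```python
def rel_frac_diff(a_lal, a_gw):
    """Relative fractional difference of Eq. (4.2), normalized by the average."""
    return 2.0 * (a_lal - a_gw) / (a_lal + a_gw)
```

Normalizing by the average rather than by one of the two codes keeps the statistic antisymmetric under exchanging which code is taken as the reference.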
As we explained in Sec. II B, however, the Einstein-aether modifications to the waveform model will require knowledge of the sensitivities, which are functions of the compactness, and, through the Love-C relations, functions of the individual tidal deformabilities. To extract the individual tidal deformabilities, we will use the binary Love relations [21,23]. The symmetric and antisymmetric combinations of the NS tidal deformabilities [21,23],

\Lambda_s = \frac{\Lambda_2 + \Lambda_1}{2}, (4.3)
\Lambda_a = \frac{\Lambda_2 - \Lambda_1}{2}, (4.4)
can be related to each other through nearly EoS-insensitive relations Λ_a = Λ_a(Λ_s). The most recent incarnation of this relation is⁴

\Lambda_a = F_n(q)\, \frac{1 + \sum_{i=1}^{3}\sum_{j=1}^{2} b_{ij}\, q^j\, \Lambda_s^{-i/5}}{1 + \sum_{i=1}^{3}\sum_{j=1}^{2} c_{ij}\, q^j\, \Lambda_s^{-i/5}}\, \Lambda_s^{\alpha}, (4.5)
where F_n(q) is the Newtonian limiting-control factor, q is the mass ratio with m_2 ≤ m_1, and {n, α} are constants, given by

F_n(q) = \frac{1 - q^{10/(3-n)}}{1 + q^{10/(3-n)}}, \qquad q = \frac{m_2}{m_1}, (4.6a)
n = 0.743, \qquad \alpha = 1, (4.6b)

which were obtained by fitting 100 EoSs that obey physical constraints [21].
Using the binary Love relations, we can then sample the waveform on all astrophysical parameters plus just Λ_s, reducing the dimensionality of the parameter space to 12. Moreover, from the sampled value of Λ_s, we can compute Λ_a from the binary Love relations, and from these two quantities, we can recover Λ_1 and Λ_2. All of this, however, requires that we choose a prior for Λ_s. We here choose a uniform prior in (10, 10^4). However, for the set of EoSs used to generate the binary Love relations, Λ_s and q are also related by the approximate inequality
q \geq 1.2321 - 0.124616\, \ln(\Lambda_s), (4.9)
which can be obtained by fitting data from Ref. [21]. Therefore, any point that does not satisfy the above constraint does not pass the prior and is rejected.
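The pieces of this machinery that are fully specified above, the Newtonian factor of Eq. (4.6a) and the prior cut of Eq. (4.9), can be sketched as follows (function names ours; the fitted b_ij, c_ij coefficients of Eq. (4.5) are not reproduced here):

```python
import math

N_EXP = 0.743  # fit constant n from Eq. (4.6b)

def newtonian_factor(q, n=N_EXP):
    """Newtonian limiting-control factor F_n(q) of Eq. (4.6a), with q = m2/m1 <= 1."""
    p = q ** (10.0 / (3.0 - n))
    return (1.0 - p) / (1.0 + p)

def passes_love_prior(q, lam_s):
    """Approximate prior cut of Eq. (4.9); points violating it are rejected."""
    return q >= 1.2321 - 0.124616 * math.log(lam_s)
```

Note that F_n(1) = 0, so Λ_a vanishes for equal masses, as it must by the antisymmetry of Eq. (4.4).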
To validate our GWAT implementation of the binary Love relations, we computed Λ a (Λ s , q) for three different values of q = {0.5, 0.75, 0.9} and 250 randomly generated values of Λ s each. Figure 3 compares our results to the data published in [21]. Observe that the relative fractional difference is below 5% in all cases, which confirms that our implementation is correct.
Given the agreement between our code and previous work, we conclude that our GWAT implementation of IMRPhenomD_NRT can successfully perform parameter estimation for BNS inspirals, sampling on the symmetric tidal deformability.
C. Extending the GWAT Implementation of BNS Waveform Models in GR to Einstein-aether Theory
With the GR groundwork in place, we now implement Einstein-aether modifications to the IMRPhenomD_NRT model, thus generating the EA_IMRPhenomD_NRT model. We will describe here what these modifications are and how we will implement them in GWAT.
As we discussed in Sec. III B the Einstein-aether modifications to the inspiral part of coalescence include corrections to the amplitude and phase of the Fourier transform of the plus and cross GW polarizations, as well as the introduction of the Fourier transform of the four additional GW polarizations present in this theory (Eqs. (3.12)-(3.25)). We extend the IMRPhenomD_NRT model by introducing these modifications to the inspiral portion of coalescence. Before merger, a Planck taper function takes the amplitude of the response function to zero, ending the waveform model, because both the IMRPhenomD_NRT and the EA_IMRPhenomD_NRT models do not include the merger or post-merger portions of coalescence.
FIG. 3. Comparison between the binary Love relation implemented in GWAT (points in blue) and that computed in [21] (points in black) for three different values of q. Beneath, the relative fractional differences (for q = 0.50, q = 0.75, and q = 0.90, respectively) demonstrate that the GWAT implementation is correct.

Since the EA_IMRPhenomD_NRT model is new, there does not yet exist any other code infrastructure that has implemented Einstein-aether modifications to a coalescence model. We therefore implemented it all within the GWAT code as follows. Given a point in the 16-dimensional parameter space of
θ = {α, sin δ, ψ, cos ι, φ_ref, t_c, D_L, \mathcal{M}, η, χ_1, χ_2, Λ_s, c_a, c_θ, c_ω, c_σ},
the code first computes sensitivities, since they play a prominent role in all of the Einstein-aether modifications discussed above. The logic for this calculation is outlined in Fig. 1 and proceeds as follows. From the symmetric combination of the tidal deformabilities Λ s , the code uses the binary Love relations to find the antisymmetric combination of the tidal deformabilities Λ a , and from these two quantities, the individual tidal deformabilities Λ 1 and Λ 2 (see discussion in Sec. IV B). From the latter two, the code uses the C-Love relations to compute the individual compactnesses C 1 and C 2 (see Sec. II B). Finally, from the compactnesses and the Einstein-aether coupling constants, the code computes the sensitivities s 1 and s 2 (see Eqs. (2.15) and (2.18)). For validation purposes, the inverse of the Love-C relation, C(Λ) as computed by the GWAT implementation, is shown in the left panel of Fig. 4 for 100 random tidal deformabilities (ranging between 1 and 10 4 ). Comparing this to the data from [21], we can compute the relative fractional difference, shown in the left-bottom panel of Fig. 4. Observe that the relative fractional difference is at most 0.5%, due mostly to interpolation error.
The s-C relation, s(C), as computed by the GWAT implementation, is plotted in the right panel of Fig. 4. First, for direct comparison to [13], we fix the Einstein-aether coupling constants to {c_a, c_θ, c_ω, c_σ} = (10^{−4}, 4 × 10^{−7}, 10^{−4}, 0) and compute the sensitivity for 250 random values of compactness. As before, the relative fractional difference between the GWAT sensitivities and those of the original paper is shown in the right-bottom panel of Fig. 4. Observe again that the relative fractional difference is at most 5%, once more validating our implementation⁵. We then compute the sensitivity for 500 random values of compactness when the Einstein-aether parameters, {c_a, c_θ, c_ω, c_σ}, are also varied. These coupling constants are randomly drawn from the complicated region of parameter space allowed by current constraints on the theory (described in detail in Sec. V). Note the wide range of sensitivities possible for a single compactness when these coupling constants are varied. Furthermore, Appendix B discusses the magnitude of sensitivities in a wider region of parameter space that will become useful later.
Once the sensitivities have been evaluated, we can then proceed to evaluate all of the other Einstein-aether quantities that appear in the Fourier transform of the response function. Explicitly, this includes the quantities
{c_S, c_V, β_1, Z, S, A_1, A_2, A_3, C, κ_3, x, \bar{\mathcal{M}}}, as defined in Eqs. (2.11c), (2.11b), (3.5), (3.26d)-(3.27a), and (3.28a)-(3.29).
With these Einstein-aether quantities computed, the response function can be put together by first evaluating the amplitude and phase of each of the GW polarizations on a frequency array, and then linearly combining the product of the latter with the antenna patterns.
We want to take advantage of the full machinery of IMRPhenomD_NRT that has already been successfully implemented in GWAT. Thus, we promote the chirp mass, \mathcal{M}, to the Einstein-aether scaled version, \bar{\mathcal{M}} (Eq. (3.29)), everywhere in IMRPhenomD_NRT. We then use this waveform template to compute the amplitude, A_NRT(f), and phase, Ψ_NRT(f), of the plus and cross GW polarizations such that
\tilde{h}_{+,NRT}(f) = A_{NRT}(f)\left(1 + \cos^2\iota\right) e^{i\Psi_{NRT}(f)}, (4.10)
\tilde{h}_{\times,NRT}(f) = A_{NRT}(f)\left(2i\cos\iota\right) e^{i\Psi_{NRT}(f)}. (4.11)
This introduces uncontrolled remainders at higher orders. However, since the Einstein-aether waveform has not yet been computed to those orders, it is reasonable to use the "promoted" IMRPhenomD_NRT version for higher order terms. Note that the EA_IMRPhenomD_NRT waveform template is only accurate to 0PN.

FIG. 4. Left: Compactness as a function of Λ computed by GWAT for 100 random combinations of source parameters and compared to data from [21]. The relative fractional difference between these two data sets is plotted below and serves as a test of the C-Love relations in our code. Right: Comparing the sensitivity as a function of compactness computed by GWAT with that published in [13]. For direct comparison, we follow the example of [13] and fix the Einstein-aether coupling constants to {c_a, c_θ, c_ω, c_σ} = (10^{−4}, 4 × 10^{−7}, 10^{−4}, 0), plotted in blue. The relative fractional difference between these points and those from [13] is shown below. Though these points are computed using different EoSs (APR4 in [13] and Tolman VII in GWAT), they differ by less than 5% for realistic values of compactness for NSs. We also compute sensitivity as a function of compactness varying the Einstein-aether parameters in the full range of parameter space allowed by current constraints (for a description of this allowed region, see Sec. V). These points are plotted in orange and represent the typical values of sensitivity we expect to appear in the waveform.

Now we are ready to construct the amplitude and phase of all of the different GW polarizations in Einstein-aether theory in GWAT for EA_IMRPhenomD_NRT. We will do this by adding the appropriate corrections to the already computed A_NRT and Ψ_NRT. First, for the plus and cross modes,
\tilde{h}_{+,EA}(f) = A_{EA}(f)\left(1 + \cos^2\iota\right) e^{i\Psi_{EA}(f)}, (4.12)
\tilde{h}_{\times,EA}(f) = A_{EA}(f)\left(2i\cos\iota\right) e^{i\Psi_{EA}(f)}, (4.13)

where

A_{EA}(f) = A_{NRT}(f) + A^{(2)}(f) - A_{0PN}(f), (4.14)
\Psi_{EA}(f) = \Psi_{NRT}(f) + \Psi^{(2)}(f) - \Psi_{0PN}(f) + \Psi_{c_N}(f). (4.15)
A_NRT and Ψ_NRT are the amplitude and phase computed by IMRPhenomD_NRT as described above. A^{(2)} and Ψ^{(2)} are given in Eqs. (3.22) and (3.23). A_{0PN} and Ψ_{0PN} are the 0PN contributions present in both IMRPhenomD_NRT and A^{(2)}, Ψ^{(2)}, respectively, which are subtracted off so as not to be double counted. Explicitly,
A_{0PN}(f) = -\sqrt{\frac{5\pi}{96}}\, \frac{1}{D_L}\, G_N^2 \bar{\mathcal{M}}^2 \left(G_N \pi \bar{\mathcal{M}} f\right)^{-7/6}, (4.16)
\Psi_{0PN}(f) = \frac{3}{128}\left(G_N \pi \bar{\mathcal{M}} f\right)^{-5/3} + 2\pi f \bar{t}_c - 2\Phi(t_c) - \frac{\pi}{4}. (4.17)
Finally, \Psi_{c_N} is a term that depends on the speed of the GW polarization,

\Psi_{c_N} \equiv -2\pi f D_L \left(1 - c_N^{-1}\right), (4.18)
for N ∈ {T, S, V}. Since the plus and cross modes are tensor polarizations, Eqs. (3.12) and (3.13) show that the \Psi_{c_N} term should be −2πf D_L(1 − c_T^{−1}). EA_IMRPhenomD_NRT similarly computes the other terms in the response function that come from the second harmonic of the orbital period (\tilde{h}_{N,2} with N ∈ {b, L, X, Y} from Eqs. (3.14)-(3.17)). For example, following Eq. (3.14),
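The bookkeeping of Eqs. (4.14), (4.15) and (4.18) can be sketched as below; the function names and the convention of passing precomputed amplitudes and phases are ours for illustration, not GWAT's actual API:

```python
import math

def psi_cn(f, d_l, c_n):
    """Propagation-speed phase term of Eq. (4.18)."""
    return -2.0 * math.pi * f * d_l * (1.0 - 1.0 / c_n)

def ea_amplitude_phase(f, a_nrt, psi_nrt, a_2, psi_2, a_0pn, psi_0pn, d_l, c_n):
    """Eqs. (4.14)-(4.15): add the Einstein-aether l=2 correction and subtract
    the 0PN piece that would otherwise be double counted.  All amplitude/phase
    inputs are assumed precomputed at frequency f."""
    a_ea = a_nrt + a_2 - a_0pn
    psi_ea = psi_nrt + psi_2 - psi_0pn + psi_cn(f, d_l, c_n)
    return a_ea, psi_ea
```

In the GR limit, where the correction equals the 0PN piece and the mode speed is c_N = 1, the NRT amplitude and phase are recovered unchanged.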
\tilde{h}_{(b,2)} = A_{EA}\, \frac{1}{2 - c_a}\left[3 c_a (Z - 1) - \frac{2S}{c_S^2}\right] \sin^2\iota\, e^{i\Psi_{EA}}, (4.19)

where A_{EA} and \Psi_{EA} are defined as in Eqs. (4.14) and (4.15), with \Psi_{c_N} = -2\pi f D_L\left(1 - c_S^{-1}\right). For each of the two scalar modes, \tilde{h}_{(b,2)} and \tilde{h}_{(L,2)}, \Psi_{c_N} depends on the scalar speed, c_S, and likewise, for each of the two vector modes, \tilde{h}_{(X,2)} and \tilde{h}_{(Y,2)}, \Psi_{c_N} depends on the vector polarization speed, c_V.
For the terms that come from the first harmonic of the orbital period (\tilde{h}_{N,1} with N ∈ {b, L, X, Y} from Eqs. (3.18)-(3.21)), EA_IMRPhenomD_NRT computes a new amplitude and phase, A_{EA,1} and \Psi_{EA,1}. Since there is no ℓ = 1 component of the amplitude in IMRPhenomD_NRT, A_{EA,1} is simply equivalent to A^{(1)} as defined in Eq. (3.24). Meanwhile,

\Psi_{EA,1}(f) = \Psi_{NRT}(f/2) + \Psi^{(1)}(f) - \Psi_{0PN,1}(f) + \Psi_{c_N}(f), (4.20)

where

\Psi_{0PN,1}(f) = \frac{3}{256}\left(2 G_N \pi \bar{\mathcal{M}} f\right)^{-5/3} + 2\pi f \bar{t}_c - \Phi(t_c) - \frac{\pi}{4}, (4.21)

and \Psi_{c_N} is defined the same as in the ℓ = 2 case (Eq. (4.18)).
Finally, EA_IMRPhenomD_NRT linearly combines each \tilde{h}_{N,\ell} with the appropriate antenna pattern function, F_N, to construct the full waveform. In the limit that the Einstein-aether coupling constants go to zero⁶, EA_IMRPhenomD_NRT reduces to IMRPhenomD_NRT. We demonstrate this by comparing the two waveform templates for 100 randomly generated combinations of source parameters, varying each of the parameters in the 16-dimensional parameter space except for the Einstein-aether coupling constants, which are fixed to small values. We draw these parameters from the same priors described in Secs. IV A and IV B. The relative fractional difference in the amplitude and the difference in the phase are below 0.001% and constant across frequency. Hence, we conclude that our Einstein-aether waveform template is consistent with GR in the limit that the coupling constants go to zero.
V. CURRENT CONSTRAINTS ON THE THEORY
Several theoretical and experimental results have placed constraints on Einstein-aether theory and its coupling constants. In this section, we discuss the most stringent constraints so that we can use them to construct non-trivial priors for each of the Einstein-aether parameters in two separate parameterizations of the theory. We also explain why the second parameterization is more convenient for analysis of GW data and will be used throughout the rest of this work. ⁶ Setting the coupling constants identically to zero can lead to NaNs in the code, because many of the mathematical expressions then contain 0/0 indeterminacies. In order to take the GR limit without introducing NaNs, we set the coupling constants to very small values: c_a = 1.0 × 10^{−30}, c_θ = 2 × 10^{−30}, c_ω = 2 × 10^{−30}, c_σ = 0.
A. Summary of Existing Constraints
Let us begin with theoretical constraints. In order to avoid gradient instabilities and ghosts, the squared speed of the GW polarizations must be positive [27,45],
c_T^2 > 0, \qquad c_V^2 > 0, \qquad c_S^2 > 0. (5.1)
Furthermore, if we consider a plane wave solution of the linearized field equations with wave vector (k_0, 0, 0, k_3), the energy densities of the different modes [35,46],

E_T = \frac{1}{8\pi G}\, k_3^2\, |A|^2, (5.2a)
E_V = \frac{1}{8\pi G}\, k_3^2\, |A|^2\, \frac{c_\sigma + c_\omega (1 - c_\sigma)}{1 - c_\sigma}, (5.2b)
E_S = \frac{1}{8\pi G}\, k_3^2\, |A|^2\, c_a (2 - c_a), (5.2c)

must be positive, which requires

c_\omega \geq -\frac{c_\sigma}{1 - c_\sigma}, (5.3a)
0 \leq c_a \leq 2, (5.3b)
respectively. We refer to Eqs. (5.1) and (5.3) together as the stability conditions, since they are both required to have stable Einstein-aether GWs. Now we turn to constraints on the Einstein-aether parameters due to experimental results. The most stringent of these constraints comes from the simultaneous observation of GWs from a NS binary merger and the corresponding short gamma-ray burst, GW170817 and GRB170817A. This event placed observational bounds on the speed of the tensor polarizations of GWs: −3 × 10^{−15} < c_T − 1 < 7 × 10^{−16} [8]. Given the simple dependence of c_T^2 on c_σ, these observations restrict c_σ ≲ O(10^{−15}). Thus, we will henceforth set c_σ = 0, dramatically simplifying many of the expressions and reducing the total parameter space from 16 to 15 dimensions.
Another observational bound on Einstein-aether theory derives from the observation of high-energy cosmic rays. In Einstein-aether theory, cosmic rays can transfer energy to GWs and aether-field excitations through a gravitational "Cherenkov-type" process whenever these modes propagate subluminally [9]. By considering the amount of energy observed in high-energy cosmic rays, one can place an upper limit on how efficient this Cherenkov process can be, further constraining the coupling constants of Einstein-aether theory. This was done separately for tensor-like, vector-like, and scalar-like excitations, assuming that all speeds c_N (with N = T, V, S) are subluminal. The constraints obtained in [9] with these assumptions are very strict and we will refer to them hereafter as the Cherenkov constraints. They are often summarized in the literature ([13,47] and others) as

c_N^2 \gtrsim 1 - O(10^{-15}), (5.4)
because the constraints give very strict conditions on {c a , c θ , c ω , c σ } that must be satisfied if c 2 N < 1. It is very challenging, though not impossible, to pick a point in parameter space that satisfies the latter. For a more careful summary of what the constraints are and how we applied them in our code, see Appendix C.
Another constraint on Einstein-aether theory derives from Big Bang Nucleosynthesis (BBN). The Lorentz-violating aether field of Einstein-aether theory rescales the effective value of Newton's constant that appears in the Friedmann equation [6,10,48],

G_{cosmo} = G_N\, \frac{1 - c_a/2}{1 + c_\theta/2}. (5.5)

However, observations of primordial ⁴He from BBN restrict [10]

\left|\frac{G_{cosmo}}{G_N} - 1\right| \lesssim \frac{1}{8}. (5.6)
Inserting Eq. (5.5) into this requirement and simplifying leads to the two inequalities

c_\theta + \frac{8 c_a}{7} \lesssim \frac{2}{7}, (5.7a)
c_\theta + \frac{8 c_a}{9} \gtrsim -\frac{2}{9}. (5.7b)
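The BBN cut of Eqs. (5.5)-(5.6) is easy to implement as a rejection test; in the sketch below (function names ours) the soft ≲ bound is treated as a hard inequality:

```python
def g_cosmo_ratio(c_a, c_theta):
    """G_cosmo / G_N from Eq. (5.5)."""
    return (1.0 - c_a / 2.0) / (1.0 + c_theta / 2.0)

def passes_bbn(c_a, c_theta):
    """BBN bound of Eq. (5.6), |G_cosmo/G_N - 1| <= 1/8, as a hard cut."""
    return abs(g_cosmo_ratio(c_a, c_theta) - 1.0) <= 0.125
```

One can check that with c_a = 0 this cut reproduces Eq. (5.7a): the upper bound G_cosmo/G_N ≥ 7/8 is saturated exactly at c_θ = 2/7.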
This constraint becomes simpler in certain regions of parameter space, as will be described in the next section. There are three more experimental constraints that should be discussed here, all of which lead to bounds on the preferred-frame PN parameters α_1 and α_2, which were defined in Eq. (2.16). With the constraint that c_σ = 0, these parameters simplify to

\alpha_1 = -4 c_a, (5.8a)
\alpha_2 = -2 c_a + \frac{3 c_a (c_\theta + c_a)}{(2 - c_a)\, c_\theta}. (5.8b)
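A direct transcription of Eq. (5.8) (function name ours; valid only for c_σ = 0):

```python
def ppn_alphas(c_a, c_theta):
    """Preferred-frame PN parameters alpha_1, alpha_2 of Eq. (5.8), with c_sigma = 0."""
    alpha1 = -4.0 * c_a
    alpha2 = -2.0 * c_a + 3.0 * c_a * (c_theta + c_a) / ((2.0 - c_theta / c_theta - c_a + 1.0) * c_theta) if False else \
             -2.0 * c_a + 3.0 * c_a * (c_theta + c_a) / ((2.0 - c_a) * c_theta)
    return alpha1, alpha2
```

Note that on the region-1 locus c_θ = 3c_a discussed below, Eq. (5.8b) collapses to α_2 = 2c_a²/(2 − c_a), which is tiny for c_a ≈ O(10^{−5}); this is exactly why that locus satisfies the α_2 bound.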
Two of the constraints arise from solar system observations. The first one comes from the close alignment of the solar spin axis with the total angular momentum vector of the solar system, which restricts [12]

|\alpha_2| \lesssim 4 \times 10^{-7}. (5.9)
The second one comes from lunar laser ranging observations, which bound −1.6 × 10^{−4} < α_1 < 2 × 10^{−5} at 1-σ confidence [11]; for simplicity, this bound can be conservatively stated as

|\alpha_1| \lesssim 10^{-4} (5.10)

as done in several previous papers [3,13,47]. This choice will not affect our results (as discussed later). The bounds in Eqs. (5.9) and (5.10) will be referred to as solar system constraints. Finally, combining these constraints with observations of the damping of the orbital period of certain binary pulsars and the triple binary pulsar places the even tighter bound [13]

-1.6 \times 10^{-5} \lesssim \alpha_1 \lesssim 4.6 \times 10^{-6} (5.11)

at 1-σ uncertainty.
B. Priors on (c_a, c_θ, c_ω) from existing constraints

Now that we have introduced all of the main constraints on the theory in the previous subsection, let us study how they lead to a prior on the coupling-constant parameter space of Einstein-aether theory. One way to do so is via rejection sampling of the constraints, i.e. to evaluate a given constraint millions of times by sampling uniformly on {c_a, c_θ, c_ω} and rejecting those choices of these parameters that violate the given constraint. We will start by sampling each of these parameters in the arbitrarily chosen region [−3, 3] and show how the parameter space shrinks with the addition of constraints.
Let us first focus on the stability constraints. Equation (2.11c) gives the scalar speed as

c_S^2 = \frac{c_\theta (1 - c_a/2)}{3 c_a (1 + c_\theta/2)}. (5.12)

Given Eq. (5.3b), (1 − c_a/2)/(3c_a) ≥ 0 always. Thus, c_S^2 ≥ 0 requires that

\frac{c_\theta}{1 + c_\theta/2} \geq 0, (5.13)

which holds when c_θ < −2 or c_θ ≥ 0. The Cherenkov constraint on the scalar mode then requires

c_S^2 = \frac{c_\theta (1 - c_a/2)}{3 c_a (1 + c_\theta/2)} \geq 1 \;\Rightarrow\; \frac{1 - c_a/2}{3 c_a} \geq \frac{1 + c_\theta/2}{c_\theta}. (5.14)

Using the stability restriction of Eq. (5.13), the above expression becomes

\frac{1 - 2 c_a}{3 c_a} \geq \frac{1}{c_\theta}. (5.17)
The stability conditions, however, already required the condition c_θ < −2 or c_θ ≥ 0 from Eq. (5.1). Since c_θ < −2 and c_θ ≥ −2 cannot simultaneously be true, we must have that c_θ ≥ 0. Thus, the BBN constraint becomes

c_\theta \geq 0, \qquad c_\theta + \frac{8 c_a}{7} \lesssim \frac{2}{7}. (5.18)

Let us now finally discuss solar system constraints, dividing them into two separate cases, as described in previous work [13,47]. In the first case, α_1 ≲ 10^{−4} (but not α_1 ≪ 10^{−4}), which saturates the solar system constraint of Eq. (5.10). In this region of parameter space, which we will denote region 1, c_a ≈ O(10^{−5}) and c_θ ≈ 3c_a(1 + O(10^{−3})) in order to satisfy α_2 ≲ 4 × 10^{−7} from Eq. (5.9). In this limit, the Einstein-aether coupling constants become

{c_a, c_θ, c_ω, c_σ} = {c_a, 3c_a(1 + δc_θ), c_ω, 0}, (5.19)

where δc_θ ≈ O(10^{−3}). One might wish to assume that δc_θ ≪ 1 and thus ignore this term and set c_θ = 3c_a exactly; however, inserting this expression into Eq. (5.14) shows that when c_a ≠ 0, the Cherenkov constraint, c_S^2 ≥ 1, is no longer satisfied. In this regime, when c_a ≈ O(10^{−5}), the BBN constraint [Eq. (5.18)] is automatically satisfied (because when c_θ = 3c_a the BBN constraint becomes c_a ≲ 2/29), so previous papers did not mention it in association with this region.
Let us now discuss a second way to satisfy the solar system constraints, by setting α_1 ≪ 10^{−4}. In this region of parameter space, which we will denote region 2, Eq. (5.8a) tells us that c_a ≪ 10^{−4}, and c_θ is essentially unconstrained if one forces c_a ≲ 10^{−7}, other than by the BBN constraint. In this case, the BBN constraint simplifies to 0 ≤ c_θ ≲ 2/7, which is consistent with what was reported in [13,47]. One can show analytically that in this region of parameter space, Z → 1. Hence, the Einstein-aether modifications to GWs in region 2 of parameter space are negligible compared to those in region 1. Therefore, for the remainder of this work, we will consider only region 1.
Restricting our attention to region 1⁸, we examine the combined constraints. With the addition of the solar system constraints, we arrive at the left panel of Fig. 6. We can see that c_a ≈ O(10^{−5}) and is uniformly distributed, as expected, and the correlation between c_a and c_θ appears as a clear diagonal line in the parameter space. Furthermore, adding the bound on α_1 from binary pulsar and triple systems results in a Gaussian distribution of c_a (and hence c_θ), as in the right panel of Fig. 6.
C. Priors on (α_1, α_2, \bar{c}_ω) from existing constraints

In this subsection, we discuss the priors on a simpler reparameterization of the theory in terms of {α_1, α_2} instead of {c_a, c_θ}, and in terms of a new parameter \bar{c}_ω instead of c_ω. We will work extensively with this parameterization in the next section because, as we will see here, the priors are simpler and the GW observables depend more cleanly on them.
Let us first discuss this new parameter \bar{c}_ω. In the previous sections, we saw that c_ω is unconstrained on (0, ∞) and that both limits c_ω → 0 and c_ω → ∞ reduce to GR. Since we cannot realistically sample across an infinite range, we will define a new variable,

\bar{c}_\omega = \frac{1}{1 + c_\omega}, (5.20)

such that as c_ω → 0 then \bar{c}_ω → 1, and as c_ω → ∞ then \bar{c}_ω → 0. With this new parameter, the range of the prior becomes \bar{c}_ω ∈ [0, 1] and one is able to cover the entire c_ω range.
Let us now discuss the shape of the priors when we impose all existing constraints. To do so, we sample uniformly on {α_1, α_2, \bar{c}_ω}, and reject those points that violate the constraints on Einstein-aether theory described in Sec. V A. We start by sampling each of these parameters in the regions

\bar{c}_\omega \in [-1, 1], (5.21c)

and show how this parameter space shrinks with the addition of constraints.
Let us begin by discussing the stability conditions of Eq. (5.3). Using the definition of α_1 when c_σ = 0, one finds that −8 ≤ α_1 ≤ 0, while \bar{c}_ω > 0, as expected and as shown in the top panel of Fig. 7 through rejection sampling. As we will see later, this is the only constraint that will have any impact on \bar{c}_ω. Further, requiring that the propagation speeds of the GW polarizations be real [Eq. (5.1)], we can derive a constraint on α_2. Let us then rewrite c_S in terms of α_1 and α_2 to find

c_S^2 = \frac{\alpha_1}{\alpha_1 - 8\alpha_2}. (5.22)

Since we know that −8 < α_1 < 0, the numerator of the above equation is negative. Thus, to obtain c_S^2 ≥ 0, we need the denominator of the above equation to also be negative, which implies that

\alpha_2 \geq \frac{\alpha_1}{8}. (5.23a)
This explains the relationship between α 1 and α 2 in the top right panel of Fig. 7.
Let us now consider the Cherenkov constraints of Eq. (5.4). Requiring that the scalar speed be larger than unity now translates to
c_S^2 = \frac{\alpha_1}{\alpha_1 - 8\alpha_2} \geq 1 \;\Rightarrow\; \alpha_1 \leq \alpha_1 - 8\alpha_2, (5.24a)
since the denominator of the first expression is negative. This immediately leads to α 2 ≤ 0. This restriction to negative α 2 is the only difference between the top right panel of Fig. 7 and the bottom left panel of Fig. 7.
Let us now study the BBN constraint. Rewriting Eq. (5.18) in terms of α 1 and α 2 , gives two inequalities
\alpha_2 \geq \frac{\alpha_1}{2}\, \frac{\alpha_1 + 2}{\alpha_1 + 8}, (5.25a)
\alpha_2 \lesssim \frac{\alpha_1}{8}\, \frac{4\alpha_1 + 1}{\alpha_1 + 1}. (5.25b)
The second constraint is much tighter and results in the curved line visible in the bottom right panel of Fig. 7.
Let us then close by discussing solar system constraints. Since these are bounds on α_1 and α_2 directly, it is easy to see how they shrink the allowed range of those parameters in the left panel of Fig. 8. Note that, because we are sampling linearly in α_1, this is automatically region 1 of the parameter space discussed in the previous section (where α_1 ≈ O(10^{−4})). We do not have to enforce any extra conditions on c_θ to be in region 1 when we sample in this parameterization. Finally, we add the binary pulsar and triple-system constraint on α_1. This takes α_1 from a uniform distribution in the allowed region to a Gaussian distribution, as seen in the right panel of Fig. 8. It has no impact on α_2 or \bar{c}_ω.
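The rejection-sampling construction of this prior can be sketched as below. We collect only the hard cuts stated above (stability, the sign conditions on α_2, and the solar system bounds, treating soft ≲ bounds as hard), omitting the BBN curve of Eq. (5.25) and the final binary-pulsar reweighting for brevity; all names are ours:

```python
import random

def passes_constraints(alpha1, alpha2, cw_bar):
    """Rejection cuts in the (alpha1, alpha2, cw_bar) parameterization."""
    if not (-8.0 <= alpha1 <= 0.0 and cw_bar > 0.0):   # stability, Eq. (5.3)
        return False
    if not (alpha1 / 8.0 <= alpha2 <= 0.0):            # Eqs. (5.23a) and (5.24a)
        return False
    if abs(alpha1) > 1e-4 or abs(alpha2) > 4e-7:       # solar system, Eqs. (5.9)-(5.10)
        return False
    return True

def sample_prior(n, rng=random):
    """Draw n prior samples by rejection, starting from broad uniform ranges."""
    kept = []
    while len(kept) < n:
        a1 = rng.uniform(-1e-4, 0.0)
        a2 = rng.uniform(-4e-7, 0.0)
        cw = rng.uniform(0.0, 1.0)
        if passes_constraints(a1, a2, cw):
            kept.append((a1, a2, cw))
    return kept
```

Because the proposal ranges already sit inside the solar system bounds, the acceptance rate is high and the sampler terminates quickly.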
Due to the simpler priors in this reparameterization, and the fact that sampling linearly in α 1 is equivalent to sampling in region 1 of parameter space, we will use this parameterization of the theory for the remainder of the paper.
VI. VALIDATION OF EINSTEIN-AETHER MODEL THROUGH PARAMETER ESTIMATION STUDIES WITH INJECTIONS
As confirmation that our Einstein-aether waveform template can successfully recover source parameters from GW data, we performed parameter estimation studies on injected data. This section describes those studies, first for data constructed in GR, and then for data constructed in Einstein-aether theory.
A. GR Injection
We begin by constructing a set of injections in GR. We use the IMRPhenomD_NRT waveform template and source parameters similar to the GW170817 event. We "observe" this data in a three-detector network comprised of Hanford, Livingston, and Virgo, with O2-O3 type sensitivities for Hanford and Livingston and an optimistic O4 sensitivity model for Virgo [49]. The distance to the source was rescaled such that the signal-to-noise ratio (SNR) of the synthetic data as measured by this detector network is 32.4, matching the GW170817 event [8]. Explicitly, the parameters used are listed in Table I. The Einstein-aether parameters, {α1, α2, c̄ω}, were not specified, because they are not part of the IMRPhenomD_NRT injection. (For the Einstein-aether injection discussed below, they were set to the nonzero values listed in Eq. (6.1); note that in the GR case, M and M̄ are equivalent.) However, it is useful to note that in the GR limit, α1 → 0, α2 → 0, and c̄ω → 0 or c̄ω → 1. (The limit c̄ω → 0, or equivalently cω → ∞, leads to khronometric gravity [26], which reduces to GR if the remaining three coupling constants are set to 0 simultaneously.) We then ran an MCMC exploration of the likelihood to perform parameter estimation on this data set, using the EA_IMRPhenomD_NRT waveform template as our recovery model. The code randomly draws points in the 15-dimensional parameter space of

θ = {α, sin δ, ψ, cos ι, φ_ref, t_c, D_L, M, η, χ1, χ2, Λs, ca, cθ, cω},

(recall that cσ is set to zero), using the priors described in Secs. IV A, IV B, and V C. Unfortunately, for the Einstein-aether coupling constants, the posteriors were identical to the priors. This means that the prior was more restrictive than the likelihood, and we did not learn any new information from the analysis. However, if the most restrictive of the constraints were removed, the posterior was distinct from the prior. In this way, one can attempt to place constraints on the Einstein-aether parameters from GW data that, even if not competitive with the most restrictive constraints to date, are at least independent of other experimental measurements. Hence, throughout the remainder of this paper, the prior used for the Einstein-aether parameters includes the stability conditions, the Cherenkov constraint, and the BBN constraint [Eqs. (5.1), (5.3), (5.4), and (5.18)], but it excludes the solar system constraints and the constraint on α1 from the triple system [Eqs. (5.9), (5.10), and (5.11)].
As a test of the code, we performed parameter estimation on the same injected data three different times. In each test, the MCMC began sampling from a different seed point, but all three converged to the same posteriors. The Gelman-Rubin statistic was also used to test convergence [50]. This method takes the square root of the ratio of two estimates of the variance in the MCMC chains to compute a quantity commonly denoted by R̂. The numerator of this ratio overestimates the variance and the denominator underestimates it, but both converge to the true value as the number of samples increases. Therefore, R̂ → 1 from above as the number of samples goes to infinity. Reference [51] recommends that R̂ ≤ 1.1 be the condition for convergence. Comparing chains from our three injections, the maximum R̂ = 1.001 < 1.1. Therefore, we are reasonably confident that the MCMC is exploring the parameter space appropriately and converging properly.
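The R̂ diagnostic just described can be sketched in a few lines. This is a minimal single-parameter version for illustration; the `gelman_rubin` helper below is hypothetical and not the implementation used in our analysis.

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin R-hat for one scalar parameter.

    chains: array of shape (m, n) -- m independent MCMC chains of n samples.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    # Within-chain variance W underestimates the true variance.
    W = chains.var(axis=1, ddof=1).mean()
    # Between-chain variance B (scaled by n) overestimates it via the means.
    B = n * chains.mean(axis=1).var(ddof=1)
    # Pooled variance estimate; R-hat -> 1 from above as n -> infinity.
    V = (n - 1) / n * W + B / n
    return np.sqrt(V / W)

rng = np.random.default_rng(0)
# Three well-mixed chains drawn from the same distribution:
mixed = rng.normal(size=(3, 10_000))
print(gelman_rubin(mixed))  # close to 1, passing the R-hat <= 1.1 criterion
```

Chains stuck in different regions give R̂ well above 1, which is what the criterion R̂ ≤ 1.1 is designed to flag.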
Next we compare the posteriors recovered to the injected parameters. For everything but the Einstein-aether specific parameters, plots of the posterior distributions recovered from these injections are compared to the injected values in Appendix D (labeled as "GR Injec 1-3"). All were consistent with the injected values, with the chirp mass exhibiting a bias due to correlations with the α1 Einstein-aether parameter. This correlation is better exhibited in Fig. 9, which shows a corner plot in the α1-M̄ plane. Clearly, the injected value is a point in the top-right corner of the covariance panel, which is poorly recovered by the analysis. The reason for this is that the α1 = 0 line in the α1-M̄ plane is strongly disfavored by the prior (as discussed already in Sec. V C). This pushes the posterior away from the injected value of α1, which can be compensated for through a different choice of chirp mass.
Posteriors on the Einstein-aether parameters are presented in Figs. 10 and 11. The posterior distribution for α1 is distinct from the prior and shifted towards the injected value. However, given the shape of the prior, note again that α1 = 0 is possible, but there are fewer combinations of α2 that allow α1 to have this value. This is what pushes the peak of α1 slightly away from the injected value of zero. The posterior distribution for c̄ω includes both possible GR limits, but seems to favor the limit c̄ω → 0. It is easier to understand why if we translate these points into the {ca, cθ, cω} parameter space. Looking at a corner plot of the ca-cω plane for all three injections as compared to the prior (Fig. 12), we can see that small values of cω are only allowed when ca is also small. Examining the Einstein-aether quantities that are important to the likelihood, we find analytically that x(cω) has an interesting shape (Fig. 13). This function is very large for small cω, and then quickly drops to very small values as cω increases. Plotting this curve for three different values of ca, we see that the larger the ca, the larger the region of cω space in which x is very large. Given that the size of x determines the dipole contribution to the phase and amplitude of the waveform [Eqs. (3.23) and (3.22)], it makes sense that large x would be disfavored for a GR injection. This seems to explain the disallowed region in the ca-cω covariance plot. Translating back to c̄ω, very small cω ≈ 0 corresponds to c̄ω ≈ 1. Hence, the lack of support for c̄ω = 1 in Fig. 11 is explained. Note that this dip at c̄ω = 1 did not happen in the case when all constraints were applied, probably because it was already ruled out by the binary pulsar and the triple system constraints.
Finally, note that we ran this parameter estimation on injected data with the entire waveform, and separately with just the ℓ = 2 contribution to the waveform. The posteriors in both cases were identical. This is not surprising, as the ℓ = 1 contribution should be suppressed compared to the ℓ = 2 contribution given how small Δs is when s ≈ O(10^−3) (see Eq. (3.24) for how this impacts the waveform and Appendix B for a description of why we expect s to be of this order). However, if we include the ℓ = 1 contribution, the code takes at least twice as long to run, because of all the extra terms in the model that are required to evaluate the likelihood. In the interest of efficiency, and since it makes no difference, for the remainder of the paper we do not include the ℓ = 1 contribution to the waveform.
B. Non-GR Injection
The next test of the waveform template involved recovering injected data when the values of the Einstein-aether parameters are distinct from those in GR. To test this, we constructed a set of injection data with the EA_IMRPhenomD_NRT waveform template and the Einstein-aether parameter injected values set to

α1 = −0.245, (6.1)

All the other source parameters were the same as in the GR injection and are listed in Table I. Again, we ran an MCMC to perform parameter estimation on this data set, using the EA_IMRPhenomD_NRT waveform template as our recovery model. Plots of the posterior distribution recovered from this injection compared to the injected value are in Appendix D (this is the "EA Injec" data set). All of the posteriors are consistent with the injected parameters. The only posterior that dramatically changes from the recovery of a GR injection is that of the chirp mass. We can see from Fig. 14 that when the value of α1 is at the other edge of the prior, the posterior on the chirp mass is biased in the other direction.

FIG. 10. The posterior of α1 and α2 recovered with EA_IMRPhenomD_NRT from an injection of a GW in GR compared to the prior. Note that both are peaked towards the expected zero value, though α1 is peaked slightly away from 0 because there are fewer combinations of α2 that lead to α1 = 0.
As for the Einstein-aether parameters, shown in Figs. 15 and 16, the posterior for α1 is distinct from the GR case and the other two posteriors are the same. It may be difficult at this stage for EA_IMRPhenomD_NRT to distinguish between GR and non-GR values of the Einstein-aether coupling constants, but this test demonstrates that it is possible for injected data with parameters distinct enough from GR. From this, we conclude that EA_IMRPhenomD_NRT can successfully recover the source parameters for an event in both the case of GR values and non-GR values of the Einstein-aether coupling constants.
VII. CONSTRAINTS ON EINSTEIN-AETHER THEORY WITH GRAVITATIONAL WAVE EVENTS FROM O1-O3
Once the waveform template has undergone testing, we are able to use it to recover the source parameters from GW events. To date, there have been two BNS mergers well above the detection threshold: GW170817 and GW190425 [8,32]. In this section, we describe the parameter estimation studies we have conducted with these two events. We remind the reader again that we have not considered GW events produced by binaries with one or more BHs, because the Einstein-aether sensitivities have not yet been calculated for these objects, and these sensitivities enter the dominant Einstein-aether modifications to the GR waveform. We performed parameter estimation on both events, using data from the Gravitational Wave Open Science Center [52]. The priors used for the IMRPhenomD parameters were those described in Sec. IV A. Note that because of the good sky localization for GW170817, we were further able to narrow the priors on the right ascension and declination for this event to α ∈ [3.4, 3.5] and sin δ ∈ [−0.4, −0.3]. For both events, the prior on the symmetric tidal deformability, Λs, was the same as that given in Sec. IV B. Finally, the prior on the Einstein-aether parameters was the less restrictive prior described in Sec. VI, which included the stability conditions [Eqs. (5.3) and (5.1)], the Cherenkov constraints [Eq. (5.4)], and the BBN constraint [Eq. (5.18)]. The complicated shape of this prior is shown in the bottom right panel of Fig. 7.
We will start by examining the results we obtain when we analyze the GW170817 event. We perform three different parameter estimation studies on this data, starting the MCMC from three different seed points. The posteriors from each run are identical, giving us good reason to believe that the MCMC explored the space adequately and converged. Visual inspection of the MCMC chains suggests the analysis has converged to a stable distribution. Furthermore, the Gelman-Rubin statistic for these runs gave an R̂ = 1.0009 < 1.1, which also indicates convergence.
We plot the posteriors we obtain when we analyze the GW170817 event directly on top of LIGO's for convenient comparison (Fig. 17) [31]. Note that the prior we use for the χ 1 and χ 2 parameters is narrower than that used by LIGO. If we use the same prior as LIGO's for χ 1 and χ 2 , our posteriors for these parameters match LIGO's and the results for all the other parameters are statistically consistent with our previous posteriors. Comparing the plots in Fig. 17, we find that all the posteriors for the GR parameters are consistent with LIGO's except for the chirp mass. Given what we saw with this parameter in the injection studies, this is not surprising. Correlations between the Einstein-aether parameter α 1 and the chirp mass tend to dramatically increase the width of the posterior on the latter parameter and expand it asymmetrically. This is explicitly demonstrated in Fig. 18.
The posteriors for the Einstein-aether parameters are shown in Figs. 19 and 20. There is no improvement over the prior aside from a slight disfavoring of c̄ω = 1 (equivalent to cω = 0). The reason for this was explained in Sec. VI. We did not expand the prior on the Einstein-aether parameters further to explore wider regions of parameter space because numerical instabilities and floating-point errors in the waveform calculation prevented us from performing the inference analysis. Furthermore, the sensitivity model breaks down for certain combinations of the Einstein-aether coupling constants outside the priors we have chosen (see Appendix B for more detail).
For the GW190425 event, the Einstein-aether posteriors were no more informative (they were identical to those obtained from the analysis of the GW170817 event). This is not surprising given the lower SNR of this signal. The combined SNR of GW170817 was estimated to be 32.4 (accounting for the SNR in each of the three detectors, LIGO Hanford, LIGO Livingston, and Virgo), while the SNR of GW190425 was just 12.9 (in the LIGO Livingston detector) [8,32]. Since statistical error is inversely proportional to SNR, and the SNR of the GW170817 detection was about 2.5 times larger than that of GW190425, it makes sense that the posteriors from GW190425 do not contain more information than those from GW170817.
VIII. CONCLUSIONS
The posteriors shown in the previous section represent the first direct search for Einstein-aether modifications in GW data. Our study is also one of the first tests to compare LVC data to a waveform with the GR transverse-traceless polarization and with additional non-GR polarizations simultaneously, as predicted from a specific modified theory. While this study was unable to place tight constraints on the Einstein-aether parameters, there is still a lot to learn from it. Our analysis reveals the complications that may arise in modified theories with multiple coupling constants to constrain, especially if any of those constants is degenerate with astrophysical parameters. It further demonstrates that constraints from the absence of a dipole term in GW radiation may continue to dominate other constraints from GW observations. Finally, this work summarizes all of the current constraints on Einstein-aether theory, giving a careful description of each region of parameter space and how sensitivities in this theory are affected in those regions. From this study, it is clear that region 1 of parameter space (as described in Sec. V B) will be accessible to GW studies before region 2 is.
The results of this study prompt the question: what might improve the constraints that GW data can place on Einstein-aether theory? There are several possible avenues to approach this question. Firstly we can consider the types of events that are being studied. It is possible that there are certain combinations of astrophysical source parameters that are better for constraining this theory than others. We only considered source parameters similar to those detected with BNS mergers to date. Perhaps there is some type of "golden event", that if we were fortunate enough to observe it, would greatly constrain the theory further. A good candidate for such a golden event is a mixed compact binary consisting of a low-mass BH and a neutron star. The analysis of such a system would require first the calculation of Einstein-aether sensitivities for BHs. We can also consider what might be achieved with future events and future GW detectors. As detectors continue to improve and higher SNR events are detected, how will constraints on Einstein-aether theory change? It seems reasonable to expect some improvement that scales as 1/SNR, but it is unclear exactly how much the posteriors will change, because of the strong correlations between the Einstein- aether parameters and other system parameters (like the chirp mass). Furthermore, as more BNS events are detected, constraints from each event can be combined, since the value of the Einstein-aether coupling constants must be consistent across all events. On the order of 10 BNS events are predicted for the LVC fourth observing run, O4, starting later this year [53].
Another possible consideration is improvement of the waveform template itself. This waveform template was built off of IMRPhenomD_NRT, which was fit to numerical relativity simulations in GR. There have so far been no numerical relativity simulations of binary NS mergers in Einstein-aether theory. It is possible that fitting a waveform template to NR simulations in this theory would make it more accurate and better able to constrain the theory. However, developing such a simulation comes with its own set of challenges, and we doubt that the modifications would be so large to improve constraints beyond what has already been achieved with binary pulsar and solar system observations.
Another large avenue of possible future work would be to extend this analysis to BHNS mergers or BBH mergers, if Einstein-aether sensitivities were known for BHs, as mentioned before. If that were accomplished, the number of events that could be used for this study would increase dramatically, even before the next observing runs begin. At the very least, one could make assumptions about what the sensitivity for BHs in this theory is likely to be, and then examine the BHNS merger events. This would not place true constraints on Einstein-aether theory parameters, because simplifying assumptions would have been made, but it may give some idea of what we might hope to learn from these events in the future. Ultimately, there is still much that could be investigated about GWs in Einstein-aether theory. It would be especially useful to determine if there is any point at which GW constraints on Einstein-aether theory will surpass those from current experiments.

Appendix A

To minimize confusion for anyone attempting to reproduce our code, we will describe here in detail the modifications that were made to the IMRPhenomD waveform template to make it consistent with IMRPhenomD_NRTidalv2.
Eq. (17) of Dietrich et al. gives the tidal phase correction in the frequency domain [41]:

ψ_T(x) = −κ^T_eff (39/(16η)) x^{5/2} P_NRTidalv2(x), (1.1)

with

κ^T_eff = (3/16) Λ̃, (1.2)

where Λ̃ is the commonly used mass-weighted tidal deformability (Eq. (4.1)), η is the symmetric mass ratio (Eq. (3.27c)), and

x = (mω/2)^{2/3} = (πm f_GW)^{2/3}, (1.3)

P_NRTidalv2(x) = [1 + Σ_{i=0}^{4} ñ_{1+i/2} x^{1+i/2}] / [1 + Σ_{j=0}^{2} d̃_{1+j/2} x^{1+j/2}]. (1.4)
The coefficients are given in Eqs. (19)-(21) of the NRTidal paper [41]. However, in order for our waveform to match LALSuite as well as it does, we needed to use the same number of significant digits. Hence, we took the values of these coefficients directly from LALSuite's code; we copy them here in Table II for convenience. The tidal amplitude correction in the frequency domain, Ã^NRTidalv2_T, is given by Eq. (24) of Dietrich et al. [41], and the waveform is terminated smoothly with a Planck taper, Ã_Planck [43,54]; the exact form of this taper can be found in Eq. (7) of [54]. Putting it all together, the final amplitude in the frequency domain is [41]

Ã = (Ã_BBH + Ã^NRTidalv2_T) × Ã_Planck. (1.7)
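For concreteness, the tidal dephasing of Eqs. (1.1)-(1.4) can be coded up directly. In the sketch below the Padé coefficients ñ and d̃ are placeholders (the real values come from Table II / LALSuite), and the function works in geometric units with G = c = 1:

```python
import math

# Placeholder Pade coefficients: illustrative only, NOT the Table II values.
N_TILDE = {1.0: -17.4, 1.5: 31.9, 2.0: -26.4, 2.5: 62.4, 3.0: -27.9}
D_TILDE = {1.0: -10.2, 1.5: 15.1, 2.0: -4.2}

def pade_nrtidalv2(x):
    """Rational correction factor P_NRTidalv2(x) of Eq. (1.4)."""
    num = 1.0 + sum(c * x**p for p, c in N_TILDE.items())
    den = 1.0 + sum(c * x**p for p, c in D_TILDE.items())
    return num / den

def tidal_phase(f_gw, m, eta, lam_tilde):
    """Tidal dephasing psi_T(x) of Eq. (1.1).

    f_gw: GW frequency, m: total mass (geometric units),
    eta: symmetric mass ratio, lam_tilde: mass-weighted tidal deformability.
    """
    x = (math.pi * m * f_gw) ** (2.0 / 3.0)       # Eq. (1.3)
    kappa_eff = (3.0 / 16.0) * lam_tilde          # Eq. (1.2)
    return -kappa_eff * (39.0 / (16.0 * eta)) * x**2.5 * pade_nrtidalv2(x)
```

A quick sanity check on any implementation: as x → 0 the Padé factor tends to 1, so ψ_T reduces to the leading-order tidal term −(3Λ̃/16)(39/(16η)) x^{5/2}.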
The IMRPhenomD_NRTidalv2 waveform template also accounts for spin-spin effects in the phase. The terms added to the BBH baseline phase are [41],
Ψ_SS = (3/(128η)) x^{−5/2} [ψ^(1)_SS,2PN x^2 + ψ^(1)_SS,3PN x^3 + ψ^(1)_SS,3.5PN x^{7/2}] + [(1) ↔ (2)], (1.8)
where (1) and (2) represent the two bodies in the binary system (with m1 ≥ m2 as before). The 2PN and 3PN terms were already implemented in LALSuite [55-57]:

ψ^(1)_SS,2PN = −50 (C^(1)_Q − 1) μ1^2 χ1^2, (1.9)

ψ^(1)_SS,3PN = (5/84)(9407 + 8218 μ1 − 2016 μ1^2)(C^(1)_Q − 1) μ1^2 χ1^2, (1.10)

and the 3.5PN term was added by [41]:

ψ^(1)_SS,3.5PN = 10 [(μ1^2 + (308/3) μ1) χ1 + (μ2^2 − (89/3) μ2) χ2 − 40π] (C^(1)_Q − 1) μ1^2 χ1^2 − 440 (C^(1)_Oc − 1) μ1^3 χ1^3, (1.11)
where μ_{1,2} = m_{1,2}/m as before, χ_{1,2} are the spins of each body, and C^{(1,2)}_Q and C^{(1,2)}_Oc are the spin-induced deformabilities for the individual stars, which can be related to the tidal deformability with the universal relations [58],

C^{(1,2)}_Q = exp[ Σ_{i=0}^{4} q_i ln(Λ_{1,2})^i ], (1.12)

C^{(1,2)}_Oc = exp[ Σ_{i=0}^{4} o_i ln(C^{(1,2)}_Q)^i ], (1.13)
with coefficients in Table III. We computed C_Q and C_Oc for the specific case Λ1 = Λ2 = 350 to compare against the values used for Fig. 7 of [41], and caught a small typo in the caption of that figure. The correct values, which were used to create the plot, are C_Q = 5.29 and C_Oc = 10.5. Note that because the 2PN and 3PN spin-spin terms were added to the code earlier, they are implemented in a different way from the 3.5PN spin-spin term and the tidal effects. To make our code consistent with LALSuite, we had to follow their convention. Thus, the 2PN and 3PN spin-spin terms were added to the PN terms in the inspiral only. This carries through to higher frequencies via boundary conditions when the different parts of the waveform are stitched together. Meanwhile, the 3.5PN spin-spin term and the tidal modifications to the phase and amplitude are added to the entire waveform so that the underlying BBH model did not need to be recalibrated.

Appendix B

The implementation of the sensitivity model in GWAT was tested in Sec. IV C and compared against previous work [13]. However, this was done for the most restrictive prior on the Einstein-aether parameters (described in detail in Sec. V C) and, as discussed in Sec. VI, in this region of parameter space the prior is more informative than the likelihood. Thus, we also considered a slightly less restrictive prior, as outlined in Sec. VI. In this appendix we demonstrate how this new prior affects the calculation of sensitivities in Einstein-aether theory.
We begin by plotting sensitivity as a function of compactness for 50,000 random values of compactness when the Einstein-aether parameters are varied in the region of parameter space relevant to this work (Fig. 21). Recall from Sec. VI, the prior includes the stability conditions, the Cherenkov constraint, and the BBN constraint, while it excludes the solar system constraints and the constraint on α 1 from binary pulsars and the triple system. In this region, the sensitivities calculated are approximately three orders of magnitude larger than in the region considered in previous work. This increase is consistent with the increase in magnitude of α 1 from one region to the other since the dominant contribution to sensitivity from the Einstein-aether coupling constants is linear in α 1 (recall Eq. (2.15) and the fact that α 2 is much smaller than α 1 ).
One important consequence of working in a less restrictive region of parameter space is that it is possible to select a combination of coupling constants with s ≥ 1. Given the definition of s in terms of σ (Eq. (2.14)), s ≥ 1 is unphysical. Furthermore, when s > 1, there are quantities in the waveform (namely A^(2)(f), Eq. (3.22)) that depend on (1 − s) and that the code will fail to calculate. Therefore, points with s ≥ 1 should also be rejected.
Note that in the region of parameter space we use in this study, only 10 out of 50,000 points had s ≥ 1. The problematic points thus occur with a frequency of 0.02% and can be safely removed from our data without affecting our result. However, this issue only gets worse as one moves to larger regions of parameter space and the magnitude of α1 increases. We recommend that anyone wishing to examine a less restrictive region of the parameter space thoroughly test the sensitivity model in that region to ensure it does not break down.
To explicitly illustrate how much our result depends on the sensitivity model, we performed parameter estimation on the same injected data (generated with the IMRPhenomD_NRT waveform template, using the source parameters listed in Table I) while computing the sensitivity to different orders in the binding energy to mass ratio. In Fig. 22, we compare three different runs with s computed to O(Ω/m), O(Ω^2/m^2), and O(Ω^3/m^3), respectively. The difference in shape of the correlation between M̄ and α1 can be explained with Eqs. (2.15) and (3.29). To explain this shape analytically, we will treat α2 as negligible compared to α1 (a good approximation in the region we sample in) and keep Ω/m constant. Then, as α1 is varied over [−0.25, 0], the first term in Eq. (2.15) is the largest and is positive, the second term is smaller and negative, and the third term provides a very small positive contribution. All three terms tend to zero as α1 → 0. Adding these terms together order by order, we get three different expressions for s, and we can see how they depend on α1. This same dependence appears in the correlation plot between M̄ and α1 because of the dependence of M̄ on s (Eq. (3.29)). Given how much our posteriors depend on how many terms are included in the sensitivity calculation, we recommend that the sensitivity model be further investigated (for instance, computed to higher orders) before constraints are placed with GW data.
Appendix C: Cherenkov Constraints
To summarize the constraints of [9]: when c_T < 1, Eq. (3.1) must be satisfied, and the analogous bounds of Eqs. (3.2) and (3.3) apply when c_V < 1 and c_S < 1, respectively. Note that all of the emission processes which would place the constraints of Eqs. (3.1)-(3.3) vanish as the c_i's tend to zero. However, the emission of two scalar aether field excitations via an off-shell graviton propagator does not vanish in this limit and provides a bound on the ratios of the c_i for c_S < 1, namely

2[c_a − (2c_σ + c_θ)/3] / (c_ω + c_σ) < 3 × 10^{−19}. (3.5)
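When cσ = 0 the left-hand side of Eq. (3.5) simplifies to 2(ca − cθ/3)/cω, so the check performed for this bound can be sketched as follows. This is an illustration only; `passes_scalar_cherenkov` is a hypothetical helper, not the actual GWAT function.

```python
def passes_scalar_cherenkov(c_a, c_theta, c_omega, c_sigma=0.0):
    """Eq. (3.5): bound from two-scalar aether emission, imposed when c_S < 1."""
    lhs = 2.0 * (c_a - (2.0 * c_sigma + c_theta) / 3.0) / (c_omega + c_sigma)
    return lhs < 3.0e-19

# In region 1, c_theta ~ 3 c_a, so the numerator nearly cancels and the
# bound is easily satisfied even for c_a as large as ~1e-5.
print(passes_scalar_cherenkov(1e-5, 3e-5, 0.5))
```

This cancellation for cθ ≈ 3ca is one reason region 1 survives such a tight bound at all.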
Together, Eqs. (3.1)-(3.5) are the conditions explicitly checked by GWAT as part of the prior. Any points that meet the conditions for the constraint to be imposed (e.g., c_V < 1), but do not satisfy these equations (in this example, Eq. (3.2)), are rejected. It is important to note that because we are setting cσ = 0 identically, the constraint of Eq. (3.2) will be satisfied for every combination of the Einstein-aether parameters. Thus, for the prior, note that c_V < 1 is actually allowed. However, given that c_V^2 = cω/(2ca) when cσ = 0, there are conditions in the likelihood that disfavor cω < 2ca (or equivalently c_V < 1) in the posterior, as discussed in Sec. VI.

FIG. 24. Posteriors recovered from injections (GWs in GR and then GWs in Einstein-aether, labeled EA on the plots) with the EA_IMRPhenomD_NRT waveform template. All injected values were successfully recovered within the 90% credible region except for the chirp mass posterior, which is extremely asymmetric in the case of the GR injection. This is because of the correlation between M̄ and α1 and is discussed in Sec. VI.
a i = {−919.6, 330.3, −857.2}, (2.20) b i = {−383.5, 192.5, −811.1}. (2.21)
FIG. 3. Comparison between the binary Love relation implemented in GWAT (points in blue) and that computed in [21] (points in black) for three different values of q. Beneath, the relative fractional differences (for q = 0.50, q = 0.75, and q = 0.90 respectively) demonstrate that the GWAT implementation is correct.
FIG. 4. Left:
(5.2c) must be positive. Since c_T^2 > 0 ⇒ (1 − cσ) > 0, Eqs. (5.2b) and (5.2c) immediately imply
(5.3) requires that ca be restricted to the range [0, 2] and cω be positive, as shown in the top left panel of Fig. 5, which we generated via rejection sampling. Similarly, Eq. (5.1) disallows cθ ∈ (−2, 0) because, from Eq. (2.11c) with cσ = 0,
(5.13), which implies cθ ≥ 0 or cθ < −2, leading to the shape of the top right panel of Fig. 5. Let us now focus on the Cherenkov constraint, which through rejection sampling leads to the constraints on parameter space shown in the bottom left panel of Fig. 5. To better understand these constraints, consider first the Cherenkov bound c_S ≥ 1, which leads to the inequality of Eq. (5.15).
At this point we must keep careful track of negative signs, since both cθ and (1 − 2ca) can be either positive or negative in the region of parameter space considered. There are four possible combinations, each with its respective version of the inequality. For example, consider cθ < 0 and ca > 1/2. Then Eq. (5.15) becomes

(1 − 2ca) cθ / (3ca) ≤ 1 ⇒ cθ ≥ 3ca/(1 − 2ca). (5.16)

Therefore, in the bottom right corner of the (cθ, ca) correlation of the bottom left panel of Fig. 5, all the points accepted in our rejection sampling must fall above the line 3ca/(1 − 2ca). The other accepted points in this panel can be explained similarly. Let us now focus on the BBN constraints of Eqs. (5.7a) and (5.7b). The lower bound on cθ [Eq. (5.7b)] is minimized when ca is maximized, and since ca ∈ [0, 2], the BBN constraint immediately restricts any sampling to the top left corner of the cθ-ca parameter space in the bottom left panel of Fig. 5, and adds an upper bound along the line (2 − 8ca)/7, resulting in the bottom right panel of Fig. 5.
Z = 1 + O(ca), κ3 = 1 + O(ca^{7/2}), s = O(ca), and x = O(ca^{5/2}), assuming a finite, nonzero cθ and cω, which were taken to be independent from ca for the purposes of this expansion. Furthermore, for ca ≈ 10^{−7}, ca^{5/2} ≈ 10^{−18} and ca^{7/2} ≈ 10^{−25}. Therefore, these quantities barely differ from their values in the GR limit: Z = 1, κ3 = 1, s = 0, and x = 0. On the other hand, in region 1 where we take cθ ≈ 3ca, Z = 4/3 + O(ca), κ3 = 1 + O(ca), s = O(ca), and x = O(ca). Recall that in region 1, ca ≈ O(10^{−5}).
FIG. 5. Plots demonstrating the effect of successively adding current constraints on Einstein-aether theory to the prior in the c parameterization. Each parameter was sampled uniformly in the region [−3, 3] (the bottom right panel is shown in a smaller range simply so that it is visible). Points that did not obey these constraints were rejected. The constraints were applied in the following order (beginning in the top left corner and ending in the bottom right corner): positive energy conditions, Eq. (5.3); positive speeds of different GW polarizations, Eq. (5.1); Cherenkov constraint, Eq. (5.4); BBN constraint, Eq. (5.18).
FIG. 6. Plots showing how the addition of the solar system and binary pulsar constraints affect the prior in region 1 of parameter space. In this region, we sample uniformly on ca, δcθ, and cω as described in Eq. (5.19). Both plots include all the constraints of Fig. 5 as well as the solar system constraints, Eqs. (5.9) and (5.10). The plot on the right further adds the constraint from binary pulsar and triple systems, Eq. (5.11).
FIG. 7. Similar to Fig. 5 but for the (α1, α2, c̄ω) parameterization, uniformly sampled in the region described by Eq. (5.21). Again, the constraints were applied in the following order (beginning in the top left corner and ending in the bottom right corner): positive energy conditions, Eq. (5.3); positive speeds of different GW polarizations, Eq. (5.1); Cherenkov constraint, Eq. (5.4); BBN constraint, Eq. (5.18).
FIG. 8. Similar to Fig. 6 but for the (α1, α2, c̄ω) parameterization, uniformly sampled in the region described by Eq. (5.21). Both plots include all the constraints of Fig. 7 as well as the solar system constraints, Eqs. (5.9) and (5.10). The plot on the right further adds the constraint from binary pulsar and triple systems, Eq. (5.11).

TABLE I. Source parameters used for injections.
α      sin δ   cos ι   tc    DL   M      η     χ1     χ2      Λs
3.42   −0.37   −0.82   3.0   63   1.188  0.25  0.003  −0.002  242
FIG. 9. A covariance plot of the posterior of M and α1 recovered with EA_IMRPhenomD_NRT from three injections of GWs in GR. Note that the α1 prior biases the M posterior to smaller values because it is peaked away from zero.
were chosen because they satisfy the complicated Einstein-aether prior and are as distinct as possible from the GR injection (for α1). All the other source
FIG. 11. The posterior of cω recovered with EA_IMRPhenomD_NRT from an injection of a GW in GR compared to the prior. Note that both cω = 0 and cω = 1 are possible GR values, but cω = 1 is slightly disfavored by the posterior.

FIG. 12. A covariance plot of the posterior of ca and cω recovered with EA_IMRPhenomD_NRT from three injections of GWs in GR (in color) as compared to the prior (in black).
FIG. 13. A plot of x (Eq. (3.26e)) as a function of cω for three different values of ca. From the shape of this curve, we can see that for small values of cω, x is very large. This will make the dipole contribution to the GW very large. If x above some cutoff is disfavored by GW data, then these small values of cω will also be disfavored.
FIG. 14. A covariance plot of the posterior of M and α1 recovered from an injection of a GW in Einstein-aether theory. Note that when the injected value of α1 is close to the maximum possible magnitude, the M parameter is biased in the other direction compared to Fig. 9.
FIG. 15. The posterior of α1 and α2 recovered from an injection of a GW in Einstein-aether theory compared to the posterior recovered from an injection in GR. Note that all posteriors are consistent with the injected value, though as in Fig. 10, α1 is peaked slightly away from the injected value because there are fewer combinations of α2 that lead to α1 = −0.245.

FIG. 16. The posterior of cω recovered from an injection of a GW in Einstein-aether theory compared to an injection in GR. These look identical.
FIG. 17. Comparison of our posteriors with those published by the LIGO/Virgo (LVC) collaboration for six of the source parameters of GW170817. All are consistent except for the chirp mass, which, as discussed in the text, is shifted due to Einstein-aether correlations. Our spin posteriors are also different from LVC's because of our use of a small spin prior.
FIG. 18. Correlation between the M and α1 parameters for GW170817. Just as in the injections, this correlation tends to widen the chirp mass posterior.
FIG. 19. The posteriors for α1 and α2 from GW170817 plotted over the prior. Three separate runs are shown here and they all converge to the same answer, which is indistinguishable from the prior.

FIG. 20. The posteriors for cω from GW170817 plotted over the prior. Again three separate runs are shown that are all consistent with each other and indistinguishable from the prior, aside from a slight disfavoring of cω = 1 (reason explained in Sec. VI).
FIG. 21. Sensitivity as a function of compactness, varying the Einstein-aether parameters in the region of parameter space used for this study. A comparison with Fig. 4 reveals that in this region, the sensitivities are approximately three orders of magnitude larger than in the most restrictive region of parameter space.
FIG. 22. The posteriors for M, α1 and α2 for injected data when sensitivity is computed to different orders. This demonstrates how much our results might depend on the sensitivity model.
[2c_a − (c_σ + c_ω)]^2 / (c_ω + c_σ) < 7 × 10^−32,   (3.2)

and when c_S < 1,

(c_σ − c_a)^2 / c_a < 1 × 10^−30.   (3.3)

This last constraint only holds when

2 [(c_θ + 2c_σ)/3 − c_a] / (c_ω + c_σ) > 10^−22.   (3.4)
while the coefficients {b_ij, c_ij} are given by

b_ij = ( −14.40    14.45
          31.36   −32.25
         −22.44    20.35 ),   (4.7)

c_ij = ( −15.25    15.37
          37.33   −43.20
         −29.93    35.18 ),   (4.8)
tion Graduate Research Fellowship Program under Grant No. DGE-1746047. S. P. acknowledges partial support by the Center for AstroPhysical Surveys (CAPS) at the National Center for Supercomputing Applications (NCSA), University of Illinois Urbana-Champaign. K. Y. acknowledges support from NSF Grants PHY-1806776 and PHY-2207349, a Sloan Foundation Research Fellowship and the Owens Family Foundation. N. Y. acknowledges support from NSF Grant No. AST 2009268. This work made use of the Illinois Campus Cluster, a computing resource that is operated by the Illinois Campus Cluster Program (ICCP) in conjunction with the National Center for Supercomputing Applications (NCSA) and which is supported by funds from the University of Illinois at Urbana-Champaign. Additionally, this research has made use of data or software obtained from the Gravitational Wave Open Science Center (gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain.
The construction and operation of KAGRA are funded by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) and the Japan Society for the Promotion of Science (JSPS) in Japan, the National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea, and Academia Sinica (AS) and the Ministry of Science and Technology (MoST) in Taiwan.

Appendix A: IMRPhenomD_NRTidal Modifications
TABLE II. The coefficients of the Padé approximant used in the tidal correction to the phase. To make our code consistent with LALSuite it was necessary to use these exact numbers.

i    ñ_{1+i/2}                d_{1+i/2}
0    −12.615214237993088      −15.111207827736678
1     19.0537346970349         22.195327350624694
2    −21.166863146081035       8.064109635305156
3     90.55082156324926        0
4    −60.25357801943598        0
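As a sketch of how the Table II coefficients might be assembled, the snippet below assumes each coefficient multiplies the half-integer power x^{1+i/2} in a Padé-type ratio, as in Eq. (18) of Dietrich et al.; the exact assembly inside Eq. (1.1) should be taken from the paper or from LALSuite itself.

```python
# Hedged sketch of the tidal-phase Pade ratio built from the Table II
# coefficients, assuming each coefficient multiplies x**(1 + i/2),
# as in Eq. (18) of Dietrich et al. The exact assembly inside Eq. (1.1)
# should be taken from the paper or from LALSuite.
N_TILDE = [-12.615214237993088, 19.0537346970349,
           -21.166863146081035, 90.55082156324926, -60.25357801943598]
D_TILDE = [-15.111207827736678, 22.195327350624694,
           8.064109635305156, 0.0, 0.0]

def pade(x):
    num = 1.0 + sum(c * x ** (1 + i / 2) for i, c in enumerate(N_TILDE))
    den = 1.0 + sum(c * x ** (1 + i / 2) for i, c in enumerate(D_TILDE))
    return num / den

print(pade(0.0))  # -> 1.0: the correction is normalized to unity at x = 0
```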
since ω = 2πmf_GW, with m = m_1 + m_2, is the dimensionless GW frequency. The last expression in Eq. (1.1) is the Padé approximant (Eq. 18 of Dietrich et al.), which is a function of x with eight numerical coefficients, four of which were determined by fitting to data [43]:
−√(5πη/24) (9m^2/R) κ^T_eff x^{13/4} [1 + (449/108) x + (22672/9) x^{2.89}] / (1 + 13477.8 x^4).   (1.5)

A Planck taper is used to end the inspiral waveform, beginning at the merger frequency [43],

f_merger = [0.3586/(2πm)] (m_2/m_1) [1 + n_1 κ^T_eff + n_2 (κ^T_eff)^2] / [1 + d_1 κ^T_eff + d_2 (κ^T_eff)^2],   (1.6)

with n_1 = 3.354 × 10^−2, n_2 = 4.315 × 10^−5, d_1 = 7.542 × 10^−2, and d_2 = 2.236 × 10^−4, and reducing the amplitude to zero by the time f = 1.2 f_merger.
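The merger-frequency fit of Eq. (1.6) can be sketched numerically as below, in geometric units (G = c = 1) and taking the mass-ratio factor at face value from the fragments above; consult Ref. [43] for the authoritative form.

```python
from math import pi

# Sketch of Eq. (1.6). Geometric units (G = c = 1) are assumed, so m1, m2
# carry units of time and f_merger of inverse time. The mass-ratio factor
# (m2/m1) is read off the fragments above; see Ref. [43] for the exact form.
n1, n2 = 3.354e-2, 4.315e-5
d1, d2 = 7.542e-2, 2.236e-4

def f_merger(m1, m2, kappa_eff):
    m = m1 + m2
    pade_kappa = (1 + n1 * kappa_eff + n2 * kappa_eff**2) \
               / (1 + d1 * kappa_eff + d2 * kappa_eff**2)
    return 0.3586 / (2 * pi * m) * (m2 / m1) * pade_kappa

# With no tidal coupling (kappa_eff = 0) the rational factor collapses to 1,
# so for an equal-mass binary f_merger reduces to 0.3586 / (4 * pi * m1).
```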
TABLE III. The coefficients for the quadrupolar and octupolar spin-induced deformabilities as a function of tidal deformability.

Appendix B: Order of Magnitude of the Sensitivity Parameter
A PN expansion is one in which all quantities are series-expanded in small velocities and weak fields [16].
Figure 2 of [39] and Figure 11.5 of [38] illustrate how these angles relate the orientation of the detector and the source.
3 To use these expressions in the IMRPhenomD waveform model, we need to convert to the convention of that paper, which defined the Fourier transform as h̃(f) = ∫ h(t) e^{−2iπft} dt [40], instead of as in Eq. (3.10). To transform these expressions to those used in the code, one can simply take i → −i.
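The i → −i rule in the footnote above amounts to complex conjugation for real signals, which a toy DFT makes explicit (the four-sample series here is arbitrary):

```python
import cmath

# Toy DFT illustrating the footnote: for a real time series h(t), the two
# Fourier conventions exp(-2*pi*i*f*t) and exp(+2*pi*i*f*t) yield
# complex-conjugate transforms, so switching conventions is just i -> -i.
h = [0.3, -1.2, 0.7, 0.05]  # arbitrary real samples
N = len(h)

def dft(sign):
    return [sum(h[t] * cmath.exp(sign * 2j * cmath.pi * f * t / N)
                for t in range(N))
            for f in range(N)]

minus, plus = dft(-1), dft(+1)
assert all(abs(a - b.conjugate()) < 1e-12 for a, b in zip(minus, plus))
```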
Note that the exponent on Λs in Eq. (4.5) is negative, which corrects a small typo in Ref. [21] that those authors also corrected recently.
Note that though there is good agreement in the range of compactnesses relevant for this study, this agreement does not hold in the small C limit. As described in Sec. II B, the sensitivity calculation depends on the Tolman VII EoS. While this analytic EoS is physically reasonable for realistic NS compactnesses, the justification for this model breaks down for very small C.
Note that if c_a is exactly zero, the quantities {Z, κ_3, s, x} are identical to their GR limit, even for nonzero c_θ, c_ω. This implies that if c_a were restricted to exactly zero, GW data would not be able to constrain Einstein-aether theory.
8 Recall that in region 1, α_1 ≲ 10^−4, but it is not true that α_1 ≪ 10^−4.
Recall that by Eq. (5.20), c̄ω = 1 is equivalent to cω = 0.
θ = {α, sin δ, ψ, cos ι, φ_ref, t_c, D_L, M, η, χ_1, χ_2}
ACKNOWLEDGMENTS

The authors would like to thank Toral Gupta, Enrico Barausse, Anzhong Wang, Chao Zhang, Tim Dietrich, and Nathan Johnson-McDaniel for helpful discussions. K. S. would like to acknowledge that this material is based upon work supported by the National Science Founda-

Appendix D: Recovery of Injected Parameters

In this section we present comparisons between posteriors recovered with the EA_IMRPhenomD_NRT waveform template and injected values (Figs. 23 and 24). As described in Section VI, this was done for two different cases: a GR case and a non-GR case. In the GR case, the input data was constructed with the IMRPhenomD_NRT waveform template, which does not specify the Einstein-aether parameters. In the non-GR case (the EA injection), the input data was constructed with the EA_IMRPhenomD_NRT waveform template and the Einstein-aether parameters were given values distinct from those in the GR case (no longer zero or one).
R. Abbott et al. (LIGO Scientific, VIRGO, KAGRA), GWTC-3: Compact Binary Coalescences Observed by LIGO and Virgo During the Second Part of the Third Observing Run (2021), arXiv:2111.03606 [gr-qc].
R. Abbott et al. (LIGO Scientific, Virgo), Tests of general relativity with binary black holes from the second LIGO-Virgo gravitational-wave transient catalog, Phys. Rev. D 103, 122002 (2021), arXiv:2010.14529 [gr-qc].
C. M. Will, The Confrontation between General Relativity and Experiment, Living Rev. Rel. 17, 4 (2014), arXiv:1403.7377 [gr-qc].
T. Clifton, P. G. Ferreira, A. Padilla, and C. Skordis, Modified Gravity and Cosmology, Phys. Rept. 513, 1 (2012), arXiv:1106.2476 [astro-ph.CO].
D. Mattingly, Modern tests of Lorentz invariance, Living Rev. Rel. 8, 5 (2005), arXiv:gr-qc/0502097.
T. Jacobson, Einstein-aether gravity: A Status report, PoS QG-PH, 020 (2007), arXiv:0801.1547 [gr-qc].
C. Eling, T. Jacobson, and D. Mattingly, Einstein-Aether theory, in Deserfest: A Celebration of the Life and Works of Stanley Deser (2004), pp. 163-179, arXiv:gr-qc/0410001.
B. P. Abbott et al. (LIGO Scientific, Virgo, Fermi-GBM, INTEGRAL), Gravitational Waves and Gamma-rays from a Binary Neutron Star Merger: GW170817 and GRB 170817A, Astrophys. J. Lett. 848, L13 (2017), arXiv:1710.05834 [astro-ph.HE].
J. W. Elliott, G. D. Moore, and H. Stoica, Constraining the new Aether: Gravitational Cerenkov radiation, JHEP 08, 066, arXiv:hep-ph/0505211.
S. M. Carroll and E. A. Lim, Lorentz-violating vector fields slow the universe down, Phys. Rev. D 70, 123525 (2004), arXiv:hep-th/0407149.
J. Muller, J. G. Williams, and S. G. Turyshev, Lunar laser ranging contributions to relativity and geodesy, Astrophys. Space Sci. Libr. 349, 457 (2008), arXiv:gr-qc/0509114.
K. Nordtvedt, Probing gravity to the second post-newtonian order and to one part in 10^7 using the spin axis of the sun, Astrophys. Journal 320, 871 (1987).
T. Gupta, M. Herrero-Valea, D. Blas, E. Barausse, N. Cornish, K. Yagi, and N. Yunes, New binary pulsar constraints on Einstein-aether theory after GW170817, Class. Quant. Grav. 38, 195003 (2021), arXiv:2104.04596 [gr-qc].
D. Hansen, N. Yunes, and K. Yagi, Projected Constraints on Lorentz-Violating Gravity with Gravitational Waves, Phys. Rev. D 91, 082003 (2015), arXiv:1412.4132 [gr-qc].
C. Zhang, X. Zhao, A. Wang, B. Wang, K. Yagi, N. Yunes, W. Zhao, and T. Zhu, Gravitational waves from the quasicircular inspiral of compact binaries in Einstein-aether theory, Phys. Rev. D 101, 044002 (2020), [Erratum: Phys. Rev. D 104, 069905 (2021)], arXiv:1911.10278 [gr-qc].
L. Blanchet, Gravitational Radiation from Post-Newtonian Sources and Inspiralling Compact Binaries, Living Rev. Rel. 17, 2 (2014), arXiv:1310.1528 [gr-qc].
K. Yagi, D. Blas, N. Yunes, and E. Barausse, Strong Binary Pulsar Constraints on Lorentz Violation in Gravity, Phys. Rev. Lett. 112, 161101 (2014), arXiv:1307.6219 [gr-qc].
S. E. Perkins, R. Nair, H. O. Silva, and N. Yunes, Improved gravitational-wave constraints on higher-order curvature theories of gravity, Phys. Rev. D 104, 024060 (2021), arXiv:2104.11189 [gr-qc].
K. Yagi and N. Yunes, Binary Love Relations, Class. Quant. Grav. 33, 13LT01 (2016), arXiv:1512.02639 [gr-qc].
K. Yagi and N. Yunes, Approximate Universal Relations among Tidal Parameters for Neutron Star Binaries, Class. Quant. Grav. 34, 015006 (2017), arXiv:1608.06187 [gr-qc].
Z. Carson, K. Chatziioannou, C.-J. Haster, K. Yagi, and N. Yunes, Equation-of-state insensitive relations after GW170817, Phys. Rev. D 99, 083016 (2019), arXiv:1903.03909 [gr-qc].
K. Yagi and N. Yunes, I-Love-Q, Science 341, 365 (2013), arXiv:1302.4499 [gr-qc].
K. Yagi and N. Yunes, I-Love-Q Relations in Neutron Stars and their Applications to Astrophysics, Gravitational Waves and Fundamental Physics, Phys. Rev. D 88, 023009 (2013), arXiv:1303.1528 [gr-qc].
A. Maselli, V. Cardoso, V. Ferrari, L. Gualtieri, and P. Pani, Equation-of-state-independent relations in neutron stars, Phys. Rev. D 88, 023007 (2013), arXiv:1304.2052 [gr-qc].
FIG. 23. Posteriors recovered from injections (GWs in GR and then GWs in Einstein-aether, labeled EA on the plots) with the EA_IMRPhenomD_NRT waveform template. All injected values lie within the 90% credible region.
T. Jacobson and D. Mattingly, Gravity with a dynamical preferred frame, Phys. Rev. D 64, 024028 (2001), arXiv:gr-qc/0007031.
T. Jacobson, Undoing the twist: The Hořava limit of Einstein-aether theory, Phys. Rev. D 89, 081501 (2014), arXiv:1310.5115 [gr-qc].
T. Jacobson and D. Mattingly, Einstein-Aether waves, Phys. Rev. D 70, 024003 (2004), arXiv:gr-qc/0402005.
B. Z. Foster, Strong field effects on binary systems in Einstein-aether theory, Phys. Rev. D 76, 084033 (2007), arXiv:0706.0704 [gr-qc].
B. Z. Foster and T. Jacobson, Post-Newtonian parameters and constraints on Einstein-aether theory, Phys. Rev. D 73, 064015 (2006), arXiv:gr-qc/0509083.
J. Veitch et al., Parameter estimation for compact binaries with ground-based gravitational-wave observations using the LALInference software library, Phys. Rev. D 91, 042003 (2015), arXiv:1409.7215 [gr-qc].
B. P. Abbott et al. (LIGO Scientific, Virgo), Properties of the binary neutron star merger GW170817, Phys. Rev. X 9, 011001 (2019), arXiv:1805.11579 [gr-qc].
B. P. Abbott et al. (LIGO Scientific, Virgo), GW190425: Observation of a Compact Binary Coalescence with Total Mass ∼ 3.4 M⊙, Astrophys. J. Lett. 892, L3 (2020), arXiv:2001.01761 [astro-ph.HE].
K. Chatziioannou, C.-J. Haster, and A. Zimmerman, Measuring the neutron star tidal deformability with equation-of-state-independent relations and gravitational waves, Phys. Rev. D 97, 104036 (2018), arXiv:1804.03221 [gr-qc].
K. Schumacher, N. Yunes, and K. Yagi, in prep.
B. Z. Foster, Radiation damping in Einstein-aether theory, Phys. Rev. D 73, 104012 (2006), [Erratum: Phys. Rev. D 75, 129904 (2007)], arXiv:gr-qc/0602004.
K. Yagi, D. Blas, E. Barausse, and N. Yunes, Constraints on Einstein-AEther theory and Hořava gravity from binary pulsar observations, Phys. Rev. D 89, 084067 (2014), [Errata: Phys. Rev. D 90, 069902 (2014); Phys. Rev. D 90, 069901 (2014)], arXiv:1311.7144 [gr-qc].
K. Chatziioannou, N. Yunes, and N. Cornish, Model-Independent Test of General Relativity: An Extended post-Einsteinian Framework with Complete Polarization Content, Phys. Rev. D 86, 022004 (2012), [Erratum: Phys. Rev. D 95, 129901 (2017)], arXiv:1204.2585 [gr-qc].
E. Poisson and C. Will, Gravity (Cambridge University Press, 2014).
N. Yunes and X. Siemens, Gravitational-Wave Tests of General Relativity with Ground-Based Detectors and Pulsar Timing-Arrays, Living Rev. Rel. 16, 9 (2013), arXiv:1304.3473 [gr-qc].
S. Husa, S. Khan, M. Hannam, M. Pürrer, F. Ohme, X. Jiménez Forteza, and A. Bohé, Frequency-domain gravitational waves from nonprecessing black-hole binaries. I. New numerical waveforms and anatomy of the signal, Phys. Rev. D 93, 044006 (2016), arXiv:1508.07250 [gr-qc].
T. Dietrich, A. Samajdar, S. Khan, N. K. Johnson-McDaniel, R. Dudi, and W. Tichy, Improving the NRTidal model for binary neutron star systems, Phys. Rev. D 100, 044003 (2019), arXiv:1905.06011 [gr-qc].
S. Khan, S. Husa, M. Hannam, F. Ohme, M. Pürrer, X. Jiménez Forteza, and A. Bohé, Frequency-domain gravitational waves from nonprecessing black-hole binaries. II. A phenomenological model for the advanced detector era, Phys. Rev. D 93, 044007 (2016), arXiv:1508.07253 [gr-qc].
T. Dietrich et al., Matter imprints in waveform models for neutron star binaries: Tidal and self-spin effects, Phys. Rev. D 99, 024029 (2019), arXiv:1804.02235 [gr-qc].
L. Wade, J. D. E. Creighton, E. Ochsner, B. D. Lackey, B. F. Farr, T. B. Littenberg, and V. Raymond, Systematic and statistical errors in a bayesian approach to the estimation of the neutron-star equation of state using advanced gravitational wave detectors, Phys. Rev. D 89, 103012 (2014), arXiv:1402.5156 [gr-qc].
D. Garfinkle and T. Jacobson, A positive energy theorem for Einstein-aether and Hořava gravity, Phys. Rev. Lett. 107, 191102 (2011), arXiv:1108.1835 [gr-qc].
C. Eling, Energy in the Einstein-aether theory, Phys. Rev. D 73, 084026 (2006), [Erratum: Phys. Rev. D 80, 129905 (2009)], arXiv:gr-qc/0507059.
O. Sarbach, E. Barausse, and J. A. Preciado-López, Well-posed Cauchy formulation for Einstein-aether theory, Class. Quant. Grav. 36, 165007 (2019), arXiv:1902.05130 [gr-qc].
D. Mattingly and T. Jacobson, Relativistic gravity with a dynamical preferred frame, in 2nd Meeting on CPT and Lorentz Symmetry (2002), pp. 331-335, arXiv:gr-qc/0112012.
B. O'Reilly, M. Branchesi, S. Haino, and G. Gemme, LIGO Document T2000012-v1, Tech. Rep. (2020).
A. Gelman and D. B. Rubin, Inference from Iterative Simulation Using Multiple Sequences, Statist. Sci. 7, 457 (1992).
A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin, Bayesian Data Analysis (Chapman and Hall/CRC, 2013).
R. Abbott et al. (LIGO Scientific, Virgo), Open data from the first and second observing runs of Advanced LIGO and Advanced Virgo, SoftwareX 13, 100658 (2021), arXiv:1912.11716 [gr-qc].
B. P. Abbott et al., Prospects for observing and localizing gravitational-wave transients with advanced LIGO, advanced virgo and KAGRA, Living Reviews in Relativity 23 (2020), doi:10.1007/s41114-020-00026-9.
D. J. A. McKechan, C. Robinson, and B. S. Sathyaprakash, A tapering window for time-domain templates and simulated signals in the detection of gravitational waves from coalescing compact binaries, Class. Quant. Grav. 27, 084020 (2010), arXiv:1003.2939 [gr-qc].
A. Bohé, G. Faye, S. Marsat, and E. K. Porter, Quadratic-in-spin effects in the orbital dynamics and gravitational-wave energy flux of compact binaries at the 3PN order, Class. Quant. Grav. 32, 195010 (2015), arXiv:1501.01529 [gr-qc].
C. K. Mishra, A. Kela, K. G. Arun, and G. Faye, Ready-to-use post-Newtonian gravitational waveforms for binary black holes with nonprecessing spins: An update, Phys. Rev. D 93, 084054 (2016), arXiv:1601.05588 [gr-qc].
N. V. Krishnendu, K. G. Arun, and C. K. Mishra, Testing the binary black hole nature of a compact binary coalescence, Phys. Rev. Lett. 119, 091101 (2017), arXiv:1701.06318 [gr-qc].
K. Yagi and N. Yunes, Approximate Universal Relations for Neutron Stars and Quark Stars, Phys. Rept. 681, 1 (2017), arXiv:1608.02582 [gr-qc].
| zyda_arxiv-2160000 |